Here Be Dragons: AI and Other Wicked EdTech Creatures

There comes a point when I can’t stay silent and need to shout: Beware, here be dragons! And by that, I don’t mean the uncharted lands of medieval maps. I’m referring to the uncritical adoption of AI and other digital tools in education. I’m also talking about the platformization, depersonalization, datafication, appification, and chatbotification of teaching and learning. The obsession with efficiency and speed. The fixation on rapid results over meaningful learning processes. The overall “pigeonification” of education (click, click, click) that Audrey Watters has cautioned us about time and again.

The pressure I am feeling as a teacher is suffocating. I am open to learning about new tools, including AI, but with a critical eye. I just heard in a webinar that teachers must use AI or they won't be hired in the future. Such a sweeping statement (or even a threat!) isn't just misleading; it's a forceful way of pushing artificial intelligence on educators, as if it's the only skill that matters. Are AI skills truly the top priority for future teachers? This sort of vision scares me. I feel a bit like Agatha Christie in the image below, standing there with her scary-looking dolls, wondering what the hell is going on.

Agatha Christie with her dolls. Image source

I am not against digitalization. In fact, I have always been interested in digital tools and platforms. Long before COVID, I arranged virtual exchanges with universities worldwide, incorporating a range of online tools into our teaching designs and encouraging students to embrace digital technology. If I could go back, though, I'd focus more on the human connections in those virtual projects and less on the latest tools and flashy platforms.

In 2021, I gave a keynote titled "Re-imagining online learning communities with equity, creativity, and care" at the Teaching, Learning and Research Symposium hosted by Vancouver Community College. Due to the pandemic, the entire conference was online. I delivered my talk from my sofa, with my black cat sitting in the background. I was speaking mostly on positive themes and with optimism. My slides featured illustrations of bunnies, cats, and ponies. And even some ducklings!


Lately, I’ve become increasingly concerned about the use of education technology. Last week I gave a lecture in the course "Digital environments in language learning and teaching" and my topic was similar: building learning communities and fostering social presence in online education. But this time, there were no bunnies, cats or ponies. I used a Halloween theme from slidescarnival.com, fitting both the message and the timing of the lecture on October 29th.


I feel frustrated, disappointed, and even concerned when the future of education is envisioned only through the lens of AI and digitalization. It seems as though we’re losing sight of what makes a class truly engaging and meaningful. Conferences, webinars, and other educational gatherings appear focused almost exclusively on AI. The hype is captivating, making it all too easy to forget the lessons we learned from the pandemic—and from the broader history of educational technology. That teaching cannot and should not be automated. That teachers cannot and should not be replaced by machines. That we should be careful.

I am not alone in having such critical thoughts. There is the brilliant Audrey Watters, a folklorist and the "Ed-tech's Cassandra", who has written books on the monsters of education technology and the history of teaching machines. Then there is the inspiring Maha Bali, Professor of Practice at the Center for Learning & Teaching in Cairo, who is deeply committed to building online communities. I could also mention the Civics of Technology, a group of academics, whose aim is to empower students and educators to critically examine the negative impacts of technology. And these are just a few of the many voices raising important cautionary and critical perspectives.


In my lecture, I shared stories drawn from my own online learning experiences, though this time, the stories were a bit unsettling. I spoke about the anxiety I felt in Zoom breakout rooms, the frustration of having to post on forums just to earn course credits, and the shock I experienced when a webinar was interrupted by an uninvited participant who shared their screen with a pornographic video. Oh, the horror!

As I did in my 2021 keynote, I structured the lecture around three key themes: care, equity, and play. But this time, I approached them from a different angle, highlighting how these concepts can be misused in online communities. 


In class, we discussed concerns about how chatbots (including AI “teachers”) can dangerously create the illusion of real social presence. These virtual entities are deliberately designed to look and sound human, to be endlessly friendly and engaging, never tire, and be available 24/7 at the learner’s convenience. But what happens if students ask questions outside of the assigned topic? How would these AI personas respond to inquiries about hobbies, politics, religion, or war? What information do they reveal about themselves? Do they respond to personal advances, like date invitations? And who has access to these conversations? Where is this data stored, and for how long?

As Audrey Watters has said many times: if we do not care about the creatures we have invented, they can turn evil, just like Victor Frankenstein's monster in Mary Shelley's novel, who blamed his own master for abandoning him: "Remember that I am thy creature; I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and good; misery made me a fiend."


We must be extremely cautious about the tools and platforms we use in our teaching. Nothing is simply a tool; read, for example, Maha Bali's post on AI cultural hallucination bias. Nothing is free; at the very least, we pay with our data. As the Civics of Technology points out, "Technologies are not neutral tools, but embedded with biases. Technologies create new environments that have unintended, ecological, and disproportionate effects."

At the end of the class, I asked my students to consider the following questions (inspired by Maha Bali) before they integrate any digital tools or platforms into their teaching: 

- What data is obtained from me and the students? What happens to this data?

- What risks does the use of the tool entail?

- How does the tool affect socially just care & accessibility?

- How do we see our students (and how do they see us) through this tool? How do we see ourselves? How do they see themselves & each other? 

- How does the use of the tool affect trust, inclusion & community?


I ended the presentation with a final scary slide. 


There seems to be a recent shift from discussing how to regulate AI in higher education to focusing primarily on how to embrace it. Critical issues like data privacy, bias, environmental impact, factual inaccuracies, cultural hallucinations, and machine-generated (fake) social presence are not getting enough attention. Moreover, I find it unacceptable to suggest that teachers risk future employment if they lack "AI skills". Who can say with certainty what skills are needed for what kinds of teachers in what sort of potential futures?

Predicting the "future" (whether that’s next year or 200 years down the line) feels precarious. I hope that teachers will still rely on pedagogical and subject expertise, empathy, creativity, a love for students, an openness to learning, and a critical mindset. If they determine that AI and digital tools are beneficial and safe, they’ll naturally integrate them. But we should all pause and question before jumping on any bandwagon.
