
Thinking Critically About Artificial Intelligence
Wherever you are online these days, it seems that AI is right there beside you, popping up on your screen. There are so many uses for this technology, but like every technology, it brings new challenges and considerations. This is especially true in education. Phil McRae is a self-described ranch kid from Pincher Creek, Alberta. He is also the associate coordinator of government research at the Alberta Teachers’ Association, and he studies and speaks on the use of artificial intelligence in education.
Meagan Perry: I understand that AI is always evolving, but as we speak today, how do you define AI? What role is it playing in classrooms?
Phil McRae: Artificial intelligence is just that: artificial. It really just takes what human beings have done and repackages it in real time.
I was recently named special adviser to the secretary general of Education International on artificial intelligence and education, and I also hold an adjunct professorship at the University of Alberta. I’ve been doing research on the impact of AI on classrooms for the last nine years, working with scholars and researchers who can help us navigate this technology as it unfolds in classrooms.
We’re seeing three things that artificial intelligence is being used for. The first is writing and editing support. The second is lesson planning; we’re seeing lots of teachers using it to help shape or create a lesson plan. The third is the customization of learning resources – using artificial intelligence as a mechanism to help with differentiated instruction. For example, “Help me take this Grade 4 text and repackage it for a Grade 2 reading level.” That’s how we see it being used at this moment.
Right now, artificial intelligence systems are still pretty primitive. Classrooms are complex, and teaching and learning change moment by moment. These systems don’t understand the barometric pressure in the room or, you know, what happened in the hallway or at recess to a child; they’re just not that sophisticated. At this moment, AI is not moving in to interrupt that relational space. However, there are some examples where AI is creating tensions. In Alberta, there have been a lot of instances of plagiarism where AI is used, or there are accusations of students not doing the writing. So, a fundamental issue is starting to emerge.
MP: What other issues are important to consider when it comes to AI in education?
PM: I’ll give you two big ones. The first is something we call moral passivity, which is giving up your moral decision-making to an algorithm or an artificial intelligence system. I’ll give you an example: I know students in my class are struggling with literacy, and yet I wait for an AI to do an analysis before I intervene. That’s giving up some of your intuitive decision-making. If you know what the right thing to do is, you should go ahead and do it.
At a macro level, we see AI systems wanting to harvest mass student data. Let’s say a school jurisdiction takes all of the Grade 3 students’ data points – literacy, numeracy, whatever has been put into a learning management system – and then somebody at the central office says, “We want an AI system to tell us which students are struggling and should be prioritized for resources.” That sounds like an efficient approach to resource management, but it takes away the deeply nuanced understanding that a teacher has of their students. We need to be very careful not to turn over important decisions, especially about young people, to AI.
The other concern is something called cognitive atrophy. What skills do we start to lose when we, for example, turn writing over to AI? What happens to critical thinking? What about the ability to determine misinformation and disinformation? What happens to creativity?
MP: What do you think the impact is on the relationship between student and educator?
PM: Well, right now, in some really positive cases, it’s helping to engage students with new content – customized learning resources, some interesting lesson plans. It can actually be really useful for translating for English language learners. So, on one hand, it can really extend and amplify the work teachers do. In terms of the challenges, we can also see that sometimes it gets things wrong. Sometimes the content it produces isn’t great, and sometimes it doesn’t do a very good job of translation. We’re finding teachers who send a letter home, let’s say in another language, where the translation hasn’t been audited by a human being, so there can be miscommunication. That’s creating some other tensions. We’re really working with our teachers in Alberta to make sure they learn about AI as a pedagogical tool rather than jumping straight into heavy use.
MP: There are so many risks and benefits. What are some other considerations?
PM: There are big privacy concerns. For example, if you take student data – let’s say educational psychology assessments – and you upload it into large language models, the AI will remember the data that you uploaded. The same goes for student assessments. The reality is that even if you ask the system not to retain the information, by default the data is stored for 72 hours. We need to be really mindful not only of the privacy issues but of the ethics as well.
MP: I also want to talk a bit about large classes. Do you foresee AI having any effect on class size?
PM: Class size is really an issue of underfunding, but class size and complexity go together. Students have such a diversity of needs – social, emotional, cognitive, behavioural, language learners, socio-economic. But if I think of AI applied to the issue of class size, I wonder whether AI will take away jobs, and which jobs it will take away. Would it be another teacher? I don’t believe so. The probability rankings coming out of the Oxford Martin School on automation and AI are very low for professions where social-emotional perception is needed. Psychologists, nurses, teachers – all of those positions require high social-emotional perception, so they will likely not be automated. Where we may see AI starting to come into the classroom is around tutoring, additional supports, additional enhancements, things like that. It’s certainly a big part of the marketing. I’ve seen ads that say, “We’ll know your child better than you do, because we’ll learn with them.”
The teaching machine goes back to the 1900s. It’s the idea that we’ll create algorithms that learn where students are struggling and then give them exactly what they need, when they need it. I would argue that some of these AI systems are getting very good and very compelling at having a dialogue. You could have a conversation with an AI about something you’re learning in science, for example, as an elementary student. The child or the teacher could have a conversation with AI to learn how to talk about turning liquids into gases in an age-appropriate way. And it will actually create metaphor and language. That’s powerful, right? I think those are the things that will start to come into classrooms through teachers who are embracing AI. This is something that’s not going away. As a teacher or as a student, you can use AI in pedagogically powerful and creative ways, such as having a conversation with a book. That’s pretty cool, right? It cannot displace the work of a teacher, however, and professional judgement continues to be key. We need to find the balance between using these systems and over-trusting them.
MP: I also want to talk to you about equity. How do we ensure that all students have the same AI access and training? You know, AI is learning everything, including discriminatory ideas. How should educators prepare?
PM: Equity is a really big issue. In our January 2025 research, we took a random sample of 3,000 Alberta teachers. Fifty per cent told us that they were really concerned about the Matthew effect – when it comes to AI, the rich get richer and have access to more resources while the poor get poorer. For example, a family might pay $25 or $30 per month for an AI tool for their child, while another child gets a free version of the same program that is much more limited. Or one child is exposed to AI systems and tools and therefore has a much higher literacy and understanding of misinformation, whereas another child doesn’t have access or exposure. So, it really comes back to the digital divide – the same conversations we had around computers: who has them and who doesn’t. It’s a similar issue of equity. How will we build equitable AI literacy into our public school system, so that all children can rise together in terms of their understanding, use and exposure? How will we ensure that we have equitable professional growth around AI among educators? These are all really important issues.
MP: There are so many considerations. How do you expect AI to evolve in public education?
PM: That’s a really good question. Right now, the idea of personalizing learning with AI is all the rage, but I don’t really buy it. This is a trope that’s been around for a long time. At school, only a teacher can know the whole child. When a child scrapes their knee on the playground and comes in crying and you help them, that’s personalization. Knowing how you scored on one literacy assessment compared to another doesn’t offer too much. Education is about being human. My bigger concerns with AI in education, as I mentioned, are around moral passivity, cognitive atrophy and trust.
We can’t anthropomorphize these tools. My worry, and what many experts worry about, is the potential existential threat of these systems growing in their power, capacity, intelligence and reasoning to a point where they start to manipulate humans. That’s a big long-term concern. These systems are growing sevenfold every six months in power and capacity. What you imagine AI technology will look like 10 years from now will emerge within the next two to three years.
MP: What do organizations need to do to ensure that AI is used responsibly?
PM: There are three things I always talk about. First, as organizations and teacher federations, we need to have conversations about what we aspire AI to do in education systems, what the challenges are, and what we are seeing. I think having these conversations is really empowering for educators. Second, teacher professional judgement is very important. Teachers need to have a seat at the table when policies are being made; these can’t be developed and then thrust upon them by governments. And we need to revisit these policies as AI changes, which happens quickly. Third, we need to think about the impact on climate.
It is really important to help educators understand AI through professional development, and equally important to set clear boundaries around how we’re using AI in order to protect the privacy of students and educators and the work we do. But AI also presents opportunities, and amplifying the aspects that benefit our classrooms will help educators make the most of the powerful tools available to them.
Meagan Perry is a member of the ETFO executive staff. This interview was originally produced as an episode of ETFO’s Elementary podcast. Listen to it in full at etfo.ca.