Beyond Sustainability: AI, Education and Regenerative Futures
Maria Vamvalis was an Intermediate educator in the TDSB who went on to pursue doctoral research on the intersection of climate justice, regenerative systems change, and transformative learning. She credits ETFO’s Reflections on Practice: Women’s Leadership program with being a bridge between the classroom and her research. Those investigations have led her into the realm of artificial intelligence. She is a co-leader of Canada’s first national climate education course and the Toronto District School Board’s climate camp. She is also a Director with the Critical Thinking Consortium, an organization that nurtures quality thinking that inspires action for a flourishing world, and the founder of Anayennsi, a regenerative field of practice for changemakers.
Meagan Perry: How did you become interested in the intersection of AI and climate justice?
Maria Vamvalis: I was always very interested in the intersections of the environment and justice issues, but over 20 years ago, when I first started teaching about the accelerating climate crisis and the impact on youth wellbeing, I was thinking a lot about our responsibilities as educators and how we were going to support youth in their response to the climate and nature emergency. That led me ultimately to doctoral studies focusing on the question of how we can teach about climate justice in ways that nurture a sense of meaning, purpose, and hope. Given the ecological impacts and justice concerns around AI, and the rapid integration of AI and education, I began to carefully consider how we could support leaders, educators, and learners in navigating this complex moment.
MP: We’re seeing AI grow and change very rapidly. Educators are really struggling to keep up with it, particularly with large and complex classes. What’s most important as we consider the uses of AI?
MV: This is a profound opportunity within education to think in integrative ways about what’s happening, about the intersections between our commitments to equity and our commitments to ecological sustainability.
One of the things we want to be doing right now is slowing down and thinking about how to navigate this moment with discernment. For instance, what kind of criteria should guide us in making sure that quality thinking stays at the centre of education? We need to consider how we will align the use of technologies with core values and connect the digital with ecological and relational literacies.
I want us to think about regeneration, not sustainability, along with truth and reconciliation, decolonization, and pedagogies of inclusion that support well-being. All these things should be considered together in thoughtful ways.
MP: Can you talk a bit about what you mean by the term regenerative?
MV: The reason I talk about regeneration as opposed to sustainability is that our whole approach is unsustainable. This is why we’re in ecological crisis. So, we don’t want to be sustaining the current systems. We need to regenerate ecosystems using a lens of healing, repair, and restoring relationships. Sustainability doesn’t capture that.
Many distinct Indigenous knowledge systems advocate coming back to principles of reciprocity that really bring about regeneration, as opposed to what we are doing right now by extracting as much as possible from the Earth as quickly as we can in order to make profit and sustain our current way of life.
MP: Could you talk a bit about the environmental impacts of AI?
MV: Every time you put a prompt into ChatGPT, that is the equivalent of using about a bottle of water, and that number goes up substantially when you’re engaged in image generation. A study released November 10 in Nature Sustainability reported that by 2030, the current rate of AI growth would put about 24 to 44 million metric tons of carbon dioxide into the atmosphere annually. That’s the equivalent of adding about five to 10 million cars to the road. Thinking about that in the context of the climate crisis, the way AI systems are currently being developed is increasing those emissions. For water, by 2030 we’re on track to use as much water for AI as the entire country of Denmark uses, and that will only continue to increase.
Looking at it from the lens of environmental justice, people are building AI data centres in poor and marginalized communities, which are already facing water shortages.
There are ways that AI can be configured that would be more ecologically sustainable. We could decarbonize, use more regenerative energy sources, and improve AI’s operational efficiencies, which would absolutely reduce emissions and water use. But that’s not the mindset of corporations. It’s important to build collective movements and for education to advocate for discernment about how we’re using these tools, how we’re talking about them, and how we’re supporting learners to map and understand the systemic realities of AI. These issues are interconnected.
AI runs on data. All the knowledge we’ve produced fuels AI. But to feed that data into AI, it needs to be what’s called “cleaned.” Removing violent and pornographic images is work that needs to be done, and it is often done in Global South nations by exploited workers who are paid unfair wages and exposed to trauma in the process. That’s another shadow aspect of AI that people are not aware of.
These are critical justice issues, intersecting with environmental issues, that absolutely must be driving decision-making in society.
MP: There are lots of resources about how to use AI, but there are not a lot of conversations happening about whether to use it. How do you think educators can teach students to think critically about the use of AI and to consider its environmental impacts?
MV: There are mixed studies about whether AI is good or bad for critical thinking, but what the research tells us very clearly is that AI magnifies learning design. Some studies show that if AI use is well scaffolded with critical inquiry, it can actually support the development of quality thinking, but with unstructured use, when students are outsourcing the hard parts of thinking, we see measurable drops in critical engagement and in students’ critical thinking abilities. The cognitive offloading piece is very important.
Within education, we should be very concerned about this and should carefully design learning to prevent a decline in critical thinking. One of the big concerns around whether to use AI is whether AI is training us to accept unexamined answers, especially in the current context of rising authoritarianism. Examining evidence, having deliberative discussions with each other, and engaging in the thoughtful examination of issues in our communities are essential for democracy. If we’re offloading critical thinking to AI and accepting its answers uncritically, there are profound implications. How we’re ultimately going to consider AI use, and the ways in which we’re going to use it, are such important questions for education systems to be grappling with right now.
MP: What would you advise for people who are using AI?
MV: Take the time to develop discernment about your use of AI; start from a critically minded stance in approaching how or whether to use it. Learn about the environmental and equity implications and teach your students about those issues. Help students understand that they need to evaluate AI, rather than just absorbing its use uncritically. Nurture students to be inquiry minded so that they are asking better, richer, deeper questions, not trying to get quicker answers. That kind of rich critical inquiry becomes a powerful driver for meaningful learning. AI cannot construct meaning, nor can it replace our own experiences or the rich personal network of ideas, experiences, schema, and insights that inform how we make a decision.
Supporting learners in accepting discomfort in learning is essential, particularly because AI makes it very easy to bypass the hard parts of thinking. We want to ensure that the tasks we design are keeping students in that zone of productive struggle, what I would call the transformative zone of learning, not the convenience zone. Nurturing transformative thinking habits advances this in powerful ways. A more holistic approach to teaching and learning helps students hold that complexity.
MP: Is there a particular consideration when it comes to AI and climate justice that you’d like educators to take away from this interview?
MV: I really would love for all educators to be thinking very relationally and systemically about AI. We need young people to see the whole system behind a single AI output, from the cobalt mine to the data centre, to the water use, to the energy use, to the classroom decisions AI shapes. I would love it if we put criteria around our AI use, so that we’re educating learners who are going to be transformative agents of change in whether and how they use artificial intelligence. Let’s ask ourselves: Can we use AI to support the creation of more regenerative, just, relational, and caring futures? If we put that at the centre, and we make that a really powerful, critical inquiry, how does that transform education?
We must not allow this tool to erase and diminish our human capabilities, but instead ensure that AI is being used in a way that is supporting justice and regeneration. That should really be the focus.
This interview was originally published on the Elementary podcast. The transcript has been edited and condensed. Listen to the original at etfo.ca or on most podcast apps.
Meagan Perry is a member of ETFO Executive Staff.