

The Stanford AI+Education Summit was co-hosted by the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Isabelle Hau, Executive Director of the Stanford Accelerator for Learning, opened by acknowledging that the research community is divided about AI and education, a division exacerbated by the proliferation of AI solutions, which vary widely in pedagogical soundness and have seen limited evaluation. In response, she called for learning science to be integral to AI tools and for human relationships to remain central.
The Global Learning Conference 2026 in Africa was co-organised by the Global Learning Council, the Villars Institute and the United Nations Institute for Training and Research (UNITAR), and co-hosted by the Ministry of Education, Government of Rwanda. Claudette Irere, the Minister of State for Education in Rwanda, opened the event with a wider focus: Africa has a very young population, with around 60% under 25, and lifelong learning is needed to ensure adaptability to rapidly changing job markets, all set against environmental pressures and challenging economic conditions. She made the case for systems-level thinking.
Both events recognised that AI is advancing far faster than our learning systems, but the challenges were very different. At Stanford, the problem was too many tools and too little evidence. In Kigali, the problem was ensuring that the systems and infrastructure are in place so learners and educators can fully participate.
In this article, I explore four themes that were covered across the two gatherings: access to technology; future workforce uncertainty and what that means for the curriculum now; empowering teachers through augmentation rather than automation; and evaluating AI solutions for education, including safety and bias.
The sharpest contrast between Stanford and Kigali appeared in discussions about access to technology.
At Stanford, Randi Michel, Senior Advisor for Technology to California Governor Gavin Newsom, spoke about the challenge of excessive technology in schools and new moves to curb mobile phone use. Shantanu Sinha from Google for Education added that students are already using powerful AI technology extensively outside school, which is why it is imperative to provide pedagogically sound alternatives.

In Kigali, the focus was on enabling access in the first place. Yves N. Iradukunda, Rwanda’s Minister of State for ICT and Innovation, described the country’s efforts to improve infrastructure and position Rwanda as a hub for technology in Africa. However, Nathalie Munyampenda from Kepler – an innovative higher education provider with the single focus of jobs for young Africans – shared concerns about Rwanda’s readiness for an AI revolution. One major issue is language: students are expected to learn in English, a language many of them do not speak.

I observed the infrastructure gap and language challenge first-hand on a visit to a rural Rwandan school with Azeem Mushimiyimana from Rising Academies. The road was good until the last mile, where tarmac gave way to a dirt track. The school had one ICT classroom with a projector, access to the internet and mobile phones, but all other classrooms used chalkboards.

In a lesson with 50 students, the teacher asked: “Which is bigger, 24 or 42?” and invited them to discuss it with a partner. Little happened until she repeated the instruction in Kinyarwanda. She was not only teaching mathematics, but a second language as well. Students followed along with the FasterMath textbooks provided by Rising Academies, but the classroom was so crowded that they struggled to fit both the textbooks and their exercise books on their shared desks.
Back in the ICT classroom, I observed students using RORI, an AI tutor developed by Rising Academies, on mobile phones. Each student was working at their own pace and receiving personalized instruction. In combining textbooks and technology, Rising Academies are a fantastic example of what can be done when you design with the environment in mind.
At Eedi Labs we have also been working to develop technology which is mindful of the constraints of real classrooms. Our DQR app, for instance, only requires the teacher to have a device, which they can use to gather real-time, formative assessment data from a whole class. I presented this solution to a group of Rwandan teachers and the response was mixed. The idea resonated well but a major concern was data cost. Because WiFi coverage is often limited to a single ICT room, and the teachers would be using their own mobile phones, they feared they would need to personally cover data charges. This is a constraint we had not encountered in the UK, where data is cheap and virtually all classrooms have WiFi. We are now developing an offline-first option which can operate with just an intermittent WiFi connection.
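The offline-first design is still a work in progress, but the core idea can be sketched as a local append-only queue that flushes whenever a connection becomes available. This is a minimal illustration of the pattern, not the actual DQR implementation; the file name and upload hook are hypothetical:

```python
import json
import os

QUEUE_PATH = "pending_responses.jsonl"  # hypothetical local queue file

def record_response(response: dict) -> None:
    """Append a student's answer to a local queue; no connection needed in class."""
    with open(QUEUE_PATH, "a") as f:
        f.write(json.dumps(response) + "\n")

def sync(upload) -> int:
    """Flush queued responses when a connection appears; returns the number synced."""
    if not os.path.exists(QUEUE_PATH):
        return 0
    with open(QUEUE_PATH) as f:
        pending = [json.loads(line) for line in f if line.strip()]
    synced = 0
    for item in pending:
        try:
            upload(item)  # e.g. an HTTP POST to the class dashboard
            synced += 1
        except OSError:
            break  # connection dropped; keep the remaining items queued
    # Rewrite the queue with only the items that were not uploaded
    with open(QUEUE_PATH, "w") as f:
        for item in pending[synced:]:
            f.write(json.dumps(item) + "\n")
    return synced
```

The point of the design is that classroom capture never blocks on the network: answers accumulate locally, and whenever the device wanders into the ICT room's WiFi, `sync` drains the queue in one inexpensive burst.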
The conferences were more aligned when it came to workforce uncertainty and the changes we need to make to our education systems now to prepare students.
At Stanford, Neerav Kingsland from Anthropic forecast a significant shift in the job market within two to ten years, and argued that high schools and colleges should be preparing for this immediately. Loodwige Lince from STEM from Dance pointed out that young people are already worrying about their future employment. Loodwige drew a parallel with self-checkout machines: many predicted they would have an impact on jobs, but very little was done to prepare people for the transition. We are now in a very similar position, but on a much larger scale.
In Kigali, Illah Nourbakhsh from CMU placed some of the blame for the decline in entry-level jobs on the creators of AI tools, who compare technology and humans on the same scale and thereby devalue human agency. He asked how we can change this pattern by incentivising creation for stakeholder value rather than shareholder value.
Neither conference addressed the geographical differences in how AI will impact the workforce. AI tools are not “cheap” everywhere. Taking programming as an example: in countries where salaries are high, companies are happy to cover the cost of AI coding tools to make their team more productive. This does increase productivity and, as a result, reduces the need for outsourcing. Programmers who cannot afford access to these tools are now at a disadvantage.
Rwanda’s Minister of Public Service and Labour, Ambassador Christine Nkulikiyinka, predicted that the future workforce will be defined less by specific technical expertise and more by the capacity to unlearn, relearn, adapt and transition. This sentiment was echoed by Charles Avelino, from UNICEF, who urged education systems to go beyond domain knowledge and invest in “soft skills” like critical thinking, problem solving, communication and collaboration. In Kigali, there was a strong focus on lifelong learning, going beyond formal schooling, to encompass systems for workforce retraining and informal learning.
Back at Stanford, James Landay of the Stanford School of Engineering stressed the need to “go to where the puck is going to be” and prepare students for what they will need to know, rather than for the industrial revolution. Mehran Sahami of the Stanford School of Engineering made the case for teaching AI literacy, treating AI as a topic and not just a tool, highlighting that AI is more often used to short-circuit learning than enhance it.
Whilst curriculum adjustments remained quite practical in Kigali, some bolder visions were discussed at Stanford. Wendy Kopp from Teach for All advocated for a rethink of the very purpose of education, moving to holistic student development. Meira Levinson from the Stanford Graduate School of Education argued that in the absence of clear workforce requirements, we need to build students to be people of character, who live lives of virtue. She noted that this focus on character has a two-millennia history, with the emphasis on job-seeking being a relatively recent development post-World War II.
Susanna Loeb from the Stanford Accelerator for Learning highlighted that many of the outcomes we now care about – creativity, initiative, collaboration – are not well captured by traditional summative exams. How, she asked, can we measure “coming up with an idea”? She wants us to find ways to measure different things and measure them consistently. Mehran Sahami argued that prior to AI, project work and assessment results were strong indicators of student learning, but AI has completely severed this connection.
In Kigali, Illah Nourbakhsh argued that we need to create technology which is in the service of those who use it, augmenting educators rather than attempting to automate their roles. This sentiment was echoed at Stanford by Susan Athey, who asked how we can incentivise providers of educational software to achieve the goals that educators actually care about.
Randi Michel reminded the Stanford audience that technological change in classrooms is not new, and proceeded to list the changes educators have adapted to, starting with the chalkboard. Rebecca Winthrop, from Brookings, stressed the need to support teachers through this shift in their pedagogy, for example in deciding when to use AI and when not to.
Wendy Kopp raised the challenge that technology amplifies what is already there indiscriminately: it can amplify the good stuff, but also the not-so-good stuff. As a consequence, Teach for All is doing everything it can to encourage teachers to experiment with AI, so they become the voices needed to build classroom-aligned solutions. Mike Taubman, a teacher from Uncommon Schools, described how he was using AI to fuel classroom discussions, but because he treats the classroom as a sacred space he uses AI for no more than 20% of classroom time. In Kigali, the argument was made that to teach AI literacy we need to stress AI solutions until they fail, so educators understand what these systems can and cannot do.
In Kigali, there was an additional thread concerning the status of the teaching profession in Rwanda. Historically, teaching had a poor reputation and there was even a saying: “They could not even marry a teacher”. The government has almost doubled teacher pay in recent years as part of an initiative to change the perception of the profession. Despite this, the starting salary can be as low as $72 per month, less than a hotel receptionist or a motorbike taxi driver earns. Gururaj Deshpande from Deshpande Foundation hopes that AI might move education away from a factory-like model and towards more personalised learning. He believes this would increase the appeal of the teaching profession by giving teachers the agency to inspire and mentor learners.
A recurring theme at Stanford was the evaluation of AI solutions. Susan Athey suggested that the democratisation of product creation has led to a bottleneck: there are too many pilots and not enough implementations which are actually effective.
Susanna Loeb argued that we need to build a robust measurement infrastructure for AI solutions for education which allows us to learn and iterate quickly. She said it is untenable to wait for multiyear randomised controlled trials because technology is moving on at a pace which would make their conclusions redundant.
The question, then, is what to measure. Multiple speakers, at both events, argued that we need to go beyond improvements in students’ exam performance to encompass the impact on classroom dynamics, critical thinking, creativity, collaboration, and even character. A concerning example was provided by Guilherme Lichand from Stanford Graduate School of Education, who shared results of an investigation into the effect of AI on creativity: the group with AI support performed better when they had the tool, but worse on a subsequent task when the AI support was removed.
Mike Taubman challenged us to think very carefully about how we measure engagement. Traditional measurements of engagement with technology, like time on platform, can be counter to the engagement you want in classrooms, or as he put it: “We want them to look up more than they look down”.
Evaluation of AI solutions in education should not only address pedagogy but also safety and bias. Interestingly, safety was more prominent at Stanford, whereas the theme of bias was more prominent in Kigali.
In Kigali, Suzanne Elise Walsh from City University asked whether we might be at risk of losing cultural diversity if a handful of global AI models end up mediating most of the world’s knowledge. Dr. Krithi K. Karanth from Centre for Wildlife Studies agreed with this concern, explaining that large language models are trained on publicly available written material on the internet, which is far from neutral. For example, Rwanda had a predominantly oral tradition until the 19th century, with stories passed down orally rather than in writing. As a result, the Rwandan voice is likely to be deeply underrepresented in current training data.
At Stanford, the focus was more on immediate, visible harms. Amanda Bickerstaff from AI for Education presented concerning statistics: nearly half of all AI users are under 25, and they are using chatbots more for mental health support than education. Meira Levinson explained that because fake images and videos are now so good they cannot be trusted, we face a “liar’s dividend” where even authentic evidence can be dismissed as fake. This fuels an epistemic collapse, where society loses a shared sense of truth. Not all voices were pessimistic. Alexander "Sasha" Sidorkin from California State University Sacramento urged people to take context into account. Is an AI tutor better than a great human teacher? No. Is it better than no teacher at all? Often, yes.
Erin Mote from InnovateEDU made the case that safety and innovation need not be in tension.
In Rwanda I spoke on the AI Fluency for All panel and argued for global standards for AI in education, agreeing with points made by both Randi Michel and Erin Mote at Stanford that smart, deliberate regulation can actually promote innovation, not inhibit it. I described how philanthropic funders are starting to support benchmarks and datasets tailored to AI in education; The Learning Agency, for instance, has released an Educational AI Leaderboard to compare tools on standardised educational tasks. There was concern in Rwanda about each country having its own policy. Conrad Tucker from CMU pointed to aviation as an example of an industry where global safety standards have been set successfully, but highlighted that the harms of AI in education may be slow and distributed, making them less salient and more easily ignored.
A topic largely absent at Stanford but front and centre in Kigali was the environmental impact of large-scale AI deployments. Rwanda has a strong reputation for championing the environment, for example through its protection of the gorillas and its ban on plastic bags since 2008. As AI solutions hyperscale, it is imperative that we include environmental impact in their evaluation, and do not turn a blind eye to the environmental consequences because of the incredible convenience.
The gatherings at Stanford and Kigali provided two distinct vantage points on the shared challenge of AI in education. Uncertainty regarding future workforces led both events to propose that “soft skills”, such as critical thinking and adaptability, should be taught alongside traditional subjects. However, while Kigali emphasized lifelong learning, Stanford went further, arguing for a return to the original purpose of education: building people of character and virtue.
There was strong agreement that AI must augment rather than automate teachers. In Kigali, there was hope that AI could actually improve the social status and agency of the teaching profession. Stanford led on the need for pedagogical evaluation, calling for new measures and processes that support rapid iteration. While Stanford prioritised safety, Kigali highlighted the systematic bias and environmental impact of AI.
All of these positions make sense when considering the different pressures each region faces. One group is grappling with education systems saturated with AI pilots, the other is racing to build the infrastructure to ensure their young population is not left behind.
I believe we have a huge opportunity to learn from these different viewpoints, and the unifying principle could be a global approach to evaluation. This could stem the tide of inadequate tools and ensure that AI in education meets standards for pedagogy, safety, bias and environmental impact.
I’ll end with a quote from Michelle Gyles-McDonnough from UNITAR:
"We cannot wait for the systems to catch up. The scale is continental, the timeline is now, the responsibility is ours."