Podcast 🎧 & blog: AI in education starts with a question, not an answer
Written by Federico Plantera, Researcher on tech policy and AI
When Tiger Leap started in 1996, neither the schools nor the students had the technology – everybody had to learn together. Now, the asymmetry runs the other way: the students have leaped, while the education system is catching up.
In this episode of the Digital Government Podcast, Ivo Visak, CEO of AI Leap (TI-Hüpe), the national programme integrating artificial intelligence into Estonian high schools, makes the case for why treating AI in schools as an intervention rather than an experiment is the only responsible path forward.
The education system does the leap
“It’s not so much the students that are leaping this time. It’s the education system,” Visak says. He cites Tallinn University research, based on data from as early as 2024, showing that around 90% of Estonian high school students were already using large language models monthly, and roughly 85% weekly. That framing already shapes the entire programme.
More than introducing a new technology to schools, AI Leap seeks to reclaim some degree of educational control over a technology that has already moved in – often in unguided and pedagogically questionable ways. Students are already thinking along with machines. The question is whether the education system can offer them something better than the default.
Visak was around six years old when Tiger Leap got under way in 1996 – young enough to grow up inside its effects, not old enough to remember the disruption. But he has heard the stories from colleagues who were already teaching back then. “There was a lot of fear. That computers are coming, everybody will be stupid, they will cancel out the teachers.” Some fears today are similar, but there is one key difference: in the mid-90s, the technology had to be brought into schools. This time, it was already there – in every student’s pocket – before the programme began.
A full answer is not the point
At the heart of this effort sits a learning application developed in partnership with OpenAI and researchers from the University of Tartu, Tallinn University, and Stanford. The tool is built on the latest ChatGPT model, but with a critical difference in behaviour. Where the standard versions of ChatGPT, Gemini, or Mistral are designed to deliver the most complete answer possible, the AI Leap application is designed to help the student get there on their own. It follows what the team calls a Socratic model: rather than providing solutions, it asks questions, prompts reflection, and guides the student toward understanding. “The vanilla ChatGPT, the vanilla Gemini – they all want to give you the full answer,” Visak explains. “This version doesn’t really want to give the full answer.”
This is where the core dilemma surfaces. Why would a student choose a tool that makes learning harder, when another tool gives the answer instantly? The piloting phase – conducted from mid-October to mid-December 2025 with ten high schools – offered some encouraging signals, Visak says. Students reported enjoying the tool more and trusting it more. Perhaps most tellingly, many expressed concern about the long-term effects of their own existing AI use. “While they are delegating their thought processes to the machine, they are at the same time quite worried about the long-term outcomes.”
What PISA doesn’t measure
The application is deliberately tailored to the Estonian educational context. Estonia has performed exceptionally well in PISA – consistently the top-ranked country in Europe. But PISA measures knowledge, and knowledge building is not the only thing schools need to develop. “The Estonian education system is very knowledge-heavy.” The AI Leap tool therefore places greater emphasis on learning skills: metacognition (how students think about their own thinking), learning beliefs (the gap between knowing what a growth mindset is and actually practicing one), and active learning strategies.
Safety is another dimension where localisation proves essential. While the major AI companies have invested in making their models safer for under-18 users, certain needs remain specifically Estonian – ranging from integrating local crisis helplines to grounding the tool in Estonia’s cultural and educational landscape. “Sometimes this very general help doesn’t really aid the Estonian student who needs effective assistance immediately.” And there is a layer beyond individual safety: these tools carry the culture of their makers. “This technology is the face of its makers,” Visak notes. How a model talks about politics, about culture, about values – it all comes down to the people who built it. An Estonian educational tool needs to account for that.
Teacher’s role transforms
The programme also cooperates with Google, which, alongside OpenAI, supports the teacher track. Before any student received access to the application, nearly 5,000 Estonian high school teachers went through training – a deliberate sequencing. Teacher preparation began in late August 2025, covering both the psychological effects of AI on students and the cultivation of teachers’ own AI literacy. Professional learning communities continue to meet across the country. The reasoning: if teachers are not equipped to guide this transition, the tools alone will not deliver educational value.
And the teacher’s role, in Visak’s view, is transforming too. As AI tools become more capable, the need for teachers grows, not shrinks. “The information landscape has long surpassed the teacher’s ability to be an encyclopaedia in front of the classroom. We don’t need that anymore.” What schools still need are adults who can build social-emotional skills in young people, help students navigate an overwhelming information environment, and develop media literacy.
The OECD will, for the first time, measure media literacy alongside AI literacy in its 2029 PISA assessment. And this convergence is telling, because while AI tools are powerful for good, “it’s also a very powerful tool of disseminating disinformation at a much quicker pace, in all of the languages, with much better understanding of the nuance of how to sound effective.”
Dead languages and data scientists
And for the practical advice column: what fields to study, then, that are relevant in the age of AI? The familiar narrative of recent decades positioned STEM as the only rational career path, with the humanities cast as a luxury. Visak pushes back. Linguists now work at major AI companies; humanities graduates contribute to research at Anthropic and elsewhere; the capacity to think critically about language, culture, and meaning turns out to be what the technology needs from its human counterparts. “A lot of these fields that have been dismissed – ‘it’s humanities, it’s some old dead language thing’ – well, sometimes these language people are some of the most useful in these companies’ view.”
At the same time, STEM is not going anywhere. At a recent event at a Tallinn high school, Visak addressed students worried that studying programming has become pointless. He says the field is changing, not disappearing. Prototyping may be faster now – “you can have a website up and running in ten minutes” – but the world needs more data scientists, more back-end engineers, more people who understand security.
And the programme itself is generating new research: AI Leap, together with the University of Tartu, Stanford, and OpenAI, is working on a framework to measure whether large language models can have a net positive effect on learning skills – a contribution Visak hopes will be “a gift to the world in a year or two.”
“What it means to be human”
Visak is clear-eyed about the risks and impatient with criticism that offers no alternative. “You can be super critical. Okay. Are you going to ban them [large language models]? How are you going to ban them? Some of these companies might go under in a few years. But then there are going to be 10 new Chinese models that are free, that you can put on your phone. They don’t even need internet.” The landscape is shifting fast, and inaction is its own kind of risk – particularly for countries as digitally connected as Estonia.
But the deepest challenge, in his view, is human more than technological. In a conversation with Professor Dan Schwartz at Stanford, Visak encountered what he considers one of the central questions of AI literacy: what does it mean to be human, now that we have technology that is increasingly good at mimicking human behaviour? AI companions that sound caring, that never tire, that always have time.
“We have been built to see human faces in the clouds. When something sounds human, we immediately want to associate it with human.” The allure is real, and the rabbit hole can go in the wrong direction. A tool can explain a concept in more ways and with more patience than any teacher. It can be available at any hour. But as Visak puts it: “The teacher that’s a bit tired and gets a bit angry probably still loves and cares about you more than this.” Teaching students to see that distinction, as a lived capacity, is what AI Leap is ultimately about. What the technology reveals, we still need to teach.
Interested in more?