Podcast 🎧 & blog: What leaders must know about the game of AI governance
By Federico Plantera
Artificial intelligence is moving into the mainstream of government and industry, and with it comes new responsibilities. Mapping today’s AI landscape, then, means looking into the behavioural shifts it triggers, the governance frameworks it demands, and the global power dynamics it reshuffles.
In this Digital Government Podcast episode we’re joined by Matthew Blakemore, CEO of AI Caramba! and a leading architect behind the ISO/IEC 8183 international AI standard. Known for bridging cutting-edge innovation with public value, Blakemore has helped shape global conversations on AI data governance, ethical deployment, and public sector readiness.
In preparation for his keynote at the e-Governance Conference 2025, we draw on practical frameworks and his experience advising governments and media networks to explore how to govern AI with clarity, caution, and intention – well before algorithms outpace the institutions meant to oversee them.
From Experimentation to Intention
Despite widespread adoption, most individuals and organisations are still in a phase of experimentation when it comes to AI, with the result that current use is often reactive or superficial, lacking clear strategic goals. “We’re still playing with the technology,” Blakemore explains. “So, most deployments aren’t yet aligned with long-term value or sustainability.”
What’s missing, he argues, is an operational strategy – one that aligns AI deployment with clearly defined objectives, internal competencies, and societal relevance. “Too many organisations are approaching AI the way they might experiment with a new gadget. They run pilots with no path to scale, or treat AI tools as a way to chase buzzwords rather than outcomes.”
To address this, he’s launching the Snakes and Ladders AI Framework, a guide for navigating the adoption curve. Borrowing from the famous board game, the ‘snakes’ represent pitfalls: ethical oversights, poor integration, or an overreliance on black-box models. The ‘ladders’ are the processes, checks, and cultural shifts that help organisations move towards more purposeful use. “Without retraining and realignment,” he warns, “even the most well-built AI systems can decay or mislead.”
The framework, then, is designed to inject clarity and structure into a landscape that can easily become chaotic. “The aim is to simplify the decision-making process for non-technical leaders,” Blakemore says, “and help them identify where to invest, how to evaluate, and what to avoid.”
Democracy and the Digital Governance of AI
That is all the more relevant when it comes to AI adoption in the public sector, above and beyond efficiency-driven service development. How can governments use AI without undermining the very principles they’re built to defend? Blakemore’s answer is clear: democratic values must be embedded in how AI is built and deployed. “Governments can’t afford to let AI become a tool of opacity or manipulation,” he says. “That means policymakers need to understand what they’re legislating, and invest in upskilling themselves too, not just the public.”
His own journey into standards development offers plenty to reflect on, sparked in part by the aftermath of the Cambridge Analytica scandal. That event illustrated how unchecked data practices and opaque algorithmic tools can distort public discourse – long before the technology became mainstream. “That scandal was a wake-up call for me. It made me realise just how vulnerable democratic systems are when technologies evolve faster than governance structures.”
Blakemore calls, then, for greater transparency not only in how algorithms are built, but in how their outputs are interpreted. “If we expect citizens to trust decisions made with the aid of AI, we need to ensure those systems are auditable, explainable, and accountable.”
As Blakemore underlines, digital governance cannot be delegated. “It goes beyond just having the right rules. It’s about having the right culture within public institutions. Civil servants, ministers, and regulators all need a working literacy of AI – otherwise, the technology will outpace democracy.”
Rules of Engagement: Balancing Innovation and Accountability
On the other hand, as Europe implements (and revises) the AI Act, concerns about overregulation loom large, especially for startups. But Blakemore sees the legislation as an important foundation. “It gives direction. It’s a risk-based framework that the world needs.”
The flexibility recently introduced for SMEs is welcome, and in fact exemplifies the need for “intelligent regulation” – agile enough to adapt, strong enough to uphold trust. “Nobody here wants to slow down innovation. But let’s not forget to keep the consequences in sight.”
All the more so, he notes, in a context where many tech leaders are quietly supportive of a stronger regulatory environment. “They’re tired of being blamed for the missteps of an unregulated space. Because good regulation builds trust, and trust drives adoption.”
Still, regulation must be dynamic. “If the AI Act becomes too rigid, we risk locking in today’s assumptions about risk, bias, or application scope. What we need, instead, is a regulatory system that learns and adapts alongside the technology.”
Digital Sovereignty in a Competitive Landscape
It quickly becomes clear that we must maintain a fair degree of agency while adopting, implementing, and scaling AI. And when it comes to governments, that matters perhaps even more to smaller or mid-sized economies – with technology consolidating around a handful of global players, preserving such agency becomes more complex.
That should push like-minded countries to collaborate more closely on infrastructure, standards, and policy approaches. “Agency doesn’t have to mean going it alone,” he argues. “It can mean building systems together that reflect shared values and mutual needs. They can build shared platforms, standards, and capabilities that give them leverage.”
He also points to the need for a fairer redistribution of value, especially where public data is concerned. AI companies benefit greatly from datasets created by public institutions or by citizens themselves, so new governance models will be needed to ensure reciprocity.
“We’re entering a period where data isn’t just a resource. It’s a geopolitical asset,” he says. “If we don’t think carefully about how value flows across borders and platforms, we risk undermining not just national competitiveness, but democratic agency.”
Where to look for apt solutions? Emerging models from the media and broadcasting industry could serve as inspiration: there, negotiations with AI firms over usage rights and compensation are already underway – a development he sees as a hopeful sign. “It’s about setting precedents that say: our data has value, and we expect value in return.”
Ultimately, the power of AI will not be determined solely by the systems we build, but by the social and political will that underpins them. The ability to shape AI in service of public value – and not merely private gain – is sure to be one of the defining tests of digital leadership in the years to come.
Join Matthew Blakemore on May 29 at the e-Governance Conference for the keynote “AI: driving economic growth, testing democratic governance”
Check the programme and get the tickets >>> egovconference.ee
Listen to all Digital Government Podcast episodes >>> ega.ee/digital-government-podcast