
Lessons learned from deploying AI in public services

Public administrations across Europe are shifting from debating AI ethics and risks to deploying AI in practice, especially where it can improve citizen and business services.

Written by Piret Hirv, Head of Data Management and AI

This shift aligns with the European Commission’s Apply AI Strategy, which emphasises real-world adoption in strategic sectors such as GovTech and public administration. The ambition is no longer abstract readiness but operational capability: embedding AI into workflows, decision-support systems, and service delivery models.

At the same time, digital sovereignty has matured from a political slogan into a technical question. In the context of AI-enabled public services, digital sovereignty means that European administrations can adopt, govern, and evolve AI systems without losing control over their data, infrastructure, or strategic direction. The objective is to ensure that AI strengthens public institutions rather than creating new forms of dependency.

Understanding the AI Technology Stack

AI does not begin with a chatbot or a user interface. It begins far deeper in the technology stack. At the foundation are semiconductor chips and computing hardware. Above that sits cloud infrastructure. Then come the platforms, data environments, models, and, finally, the applications that citizens and civil servants use. Each layer depends on the one below it.

Today, Europe relies heavily on non-European providers across much of this chain. This reality raises a central question: as public services increasingly depend on AI, who controls the infrastructure, the data, and the models that enable those services? When essential public functions run on external platforms, the risks are practical rather than theoretical. How resilient are these systems during geopolitical tension or supply disruptions? Who ultimately safeguards sensitive public data? How easily can a government change provider if costs, policies, or risks shift?

Digital sovereignty in this context does not mean isolation or technological protectionism. It means retaining meaningful control over critical digital capabilities while remaining open to innovation and global cooperation.

Strengthening technological sovereignty, therefore, requires conscious decisions: adopting open standards, ensuring public oversight and governance of key datasets, and selecting platforms that align with European legal frameworks and public-interest values. Only with these foundations in place can AI-enabled public services remain both innovative and accountable.

Deterministic vs Probabilistic AI

We can distinguish between two broad approaches to AI: deterministic and probabilistic. Deterministic systems follow predefined rules. The inputs, logic, and outputs are structured and predictable. Many traditional government IT systems operate in this way. Probabilistic AI, including large language models, works differently. It learns patterns from vast amounts of unstructured data. It can interpret, summarise, and generate text in ways that feel human. But its outputs are based on probabilities, not fixed rules.

For public services, this distinction matters. Deterministic systems offer reliability and traceability. Probabilistic systems offer flexibility and insight. The challenge is not choosing one over the other, but combining them in safe and structured architectures.
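This hybrid pattern can be sketched in a few lines of code. In the sketch below, a probabilistic extractor (simulated here with a hard-coded return value rather than a real language model call) parses an application, and a deterministic rule check validates its output before any decision is made. The function names, the eligibility rules, and the thresholds are all invented for illustration, not taken from any real system.

```python
def probabilistic_extract(free_text: str) -> dict:
    """Stand-in for a language model parsing an application letter.
    A real deployment would call an LLM here; the output is hard-coded."""
    return {"applicant_age": 17, "requested_amount": 1200}


def deterministic_check(claim: dict) -> list:
    """Fixed, auditable rules applied to the probabilistic output."""
    errors = []
    if claim["applicant_age"] < 18:
        errors.append("applicant must be an adult")
    if claim["requested_amount"] > 1000:
        errors.append("requested amount exceeds the statutory cap")
    return errors


claim = probabilistic_extract("I am 17 and would like to request 1200 EUR ...")
violations = deterministic_check(claim)  # both rules fire for this claim
```

The key design point is that the flexible, probabilistic component never decides anything on its own: its output always passes through rules that are fixed, legible, and traceable.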

From Static Portals to AI-Powered Assistants

Many public portals today are static. Information is published, but citizens and businesses must navigate complex structures on their own. Traditional chatbots tried to simplify this. Most failed because they relied on rigid scripts.

Modern AI systems enable a different approach. When combined with structured data and retrieval mechanisms, they can provide contextual answers and guide users more naturally.
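The retrieval step can be illustrated with a deliberately minimal sketch: a tiny document store, a naive keyword-overlap retriever, and a prompt that grounds the answer in the retrieved text. The documents, scoring method, and prompt wording are all simplified assumptions; production systems would use semantic search and a real model.

```python
# Toy document store; real portals would index many structured sources.
DOCUMENTS = {
    "permits": "Business permits are issued within 10 working days.",
    "taxes": "Quarterly tax returns are due on the 20th of the month.",
}


def retrieve(question: str) -> str:
    """Naive keyword-overlap retrieval; real systems use semantic search."""
    q_words = set(question.lower().split())
    return max(DOCUMENTS.values(),
               key=lambda doc: len(q_words & set(doc.lower().split())))


def build_prompt(question: str) -> str:
    """Ground the answer in retrieved context instead of free generation."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"
```

Because the model is instructed to answer only from retrieved content, the administration keeps control over what the assistant can say, which is what separates this approach from the rigid scripts of earlier chatbots.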

Architecture Matters: Multi-Agent Systems and Guardrails

AI in the public sector should not be a single, monolithic system. It should resemble an ecosystem. Different AI tools perform different tasks: retrieving documents, extracting data, summarising text, and checking compliance. These tools can be orchestrated through a supervisor or router model that directs requests to the appropriate component.

This multi-agent approach reduces risk. It allows administrations to apply guardrails, domain rules, and ontologies that define what the system is allowed to do.
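A minimal version of such a supervisor can be sketched as a router that dispatches requests to specialised agents and refuses anything outside its registry. The agent names and their behaviour here are purely illustrative assumptions.

```python
def summarise_agent(text: str) -> str:
    """Toy summariser: truncates instead of calling a real model."""
    return "summary: " + text[:40]


def compliance_agent(text: str) -> str:
    """Toy compliance check with an invented, trivial rule."""
    return "compliance check passed" if "GDPR" in text else "needs legal review"


AGENTS = {"summarise": summarise_agent, "compliance": compliance_agent}


def supervisor(task: str, payload: str) -> str:
    """Route the request; unknown tasks are rejected rather than guessed at."""
    agent = AGENTS.get(task)
    if agent is None:
        raise ValueError(f"no agent registered for task '{task}'")
    return agent(payload)
```

Rejecting unregistered tasks, rather than improvising, is exactly the guardrail behaviour that makes a multi-agent design easier to audit than a single monolithic model.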

Ontologies play a critical role. They define concepts, relationships, and constraints within a domain. In practical terms, they act as the rulebook that guides AI behaviour. For public administrations, this is essential to maintain consistency, legality, and trust.
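In code, even a toy ontology makes the "rulebook" idea concrete: it enumerates which entity types may stand in which relationships, and any AI-proposed statement outside that set is rejected. The types and relations below are invented for the example.

```python
# Toy ontology: which subject types may hold which relations to which objects.
ONTOLOGY = {
    "Citizen": {"applies_for": {"Permit", "Benefit"}},
    "Business": {"applies_for": {"Permit"}},
}


def is_allowed(subject_type: str, relation: str, object_type: str) -> bool:
    """Check a proposed (subject, relation, object) triple against the rulebook."""
    return object_type in ONTOLOGY.get(subject_type, {}).get(relation, set())
```

A statement such as "a business applies for a benefit" would be blocked here by design, regardless of how plausible a language model might make it sound.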

The Real Bottleneck is Data

Across regions, one pattern stands out. The main obstacle is not AI capability; it is data. Public sector data is often fragmented, stored in silos, inconsistently digitised, or outdated. Information may exist, but not in forms that AI systems can easily access or interpret. Without high-quality digital data, even the most advanced models will underperform.

Vectorisation, embeddings, and semantic indexing can help map user questions to relevant datasets. But the underlying data must first be clean, structured, and accessible. In many cases, improving data governance will deliver more value than experimenting with the latest model.
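The idea of matching questions to datasets by similarity can be shown with a deliberately crude sketch: bag-of-words counts stand in for learned embeddings, and cosine similarity selects the closest dataset description. Real systems use dense vectors from trained models; everything here is a simplifying assumption.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned dense vectors."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def most_similar(query: str, dataset_descriptions: list) -> str:
    """Return the dataset description closest to the user's question."""
    q = embed(query)
    return max(dataset_descriptions, key=lambda d: cosine(q, embed(d)))
```

The sketch also makes the article's point visible: if the dataset descriptions are missing, stale, or inconsistent, no similarity function can route the question to the right place.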

AI Literacy and Risk Management are the Cornerstones

One of the most important lessons has been about capacity. Sustainable AI cannot be fully outsourced. Governments that invest in digital and AI literacy across the civil service are better equipped to govern technology responsibly. This includes not only technical skills, but the ability to define problems, evaluate outcomes and reflect on unintended effects. Community practices, shared standards and internal learning structures matter more than individual success stories.

Our understanding of risk has evolved as well. Responsible use of AI is not about avoiding uncertainty or embracing technology blindly, but about managing risk deliberately throughout the lifecycle of a system: setting expectations early, monitoring impacts continuously and being willing to adapt or stop when public value is not delivered.

Six Priority Actions for Implementing AI in Public Services

For EU and regional leaders responsible for implementing AI in public services, several priorities emerge:

  • Invest in data foundations before scaling AI solutions.
  • Prefer modular, multi-agent architectures over single large systems.
  • Combine probabilistic AI with deterministic safeguards.
  • Ensure human oversight, especially in sensitive or regulated domains.
  • Align AI deployment with broader digital sovereignty strategies.
  • Build capacity before scaling solutions. Lasting impact comes from skilled public servants and institutional learning, not from isolated pilots or outsourced expertise.


AI in public services is not about replacing civil servants. It is about augmenting their capacity and delivering more personalised and proactive services to citizens and businesses.

Europe has the regulatory maturity, technical expertise, and institutional experience to lead this transformation. The next step is disciplined implementation.


This blog post reflects the lessons learned by the e-Governance Academy’s experts during the EU-funded Technical Support Instrument (TSI) project, Supporting Regional Entrepreneurship through the Adoption of Innovative Technologies, Including AI, in Public Service Delivery (ref. 24ES06/24DE33). The project supported regional authorities in Germany and Spain in creating modern, user-centric public services for startups and small businesses, and strengthened officials’ capacity to apply new technologies effectively in their daily work.

Read more about the project.