
Perhaps the AI revolution is not as loud as we thought

31.01.2022 | Federico Plantera

It seems almost natural that the latest tech buzzwords, with time, undergo some sort of downsizing. Technology advances, consulting firms and market actors take the leap in envisioning how the latest development could change everything – and then we notice that such upheavals unfold more slowly than expected, or even underachieve compared to initial expectations. It’s in the scheme of things, as we saw with Heiko Vainsalu in a related podcast.

In this sense, the case of artificial intelligence is peculiar. Is AI happening? For sure. Algorithms have increasingly been integrated into how both the public and private sectors provide products and services. But the AI revolution appears less disruptive than we thought it would be, at least in these initial phases. And that’s good news because it also means we’re taking time to evaluate its proper use cases.

On the other hand, though, a quieter revolution could bring an undesired side effect. While it leaves room for more thought-through considerations on how to deploy new tech, some key reflections on its ethical use risk being sidelined precisely because the change is not as disruptive as it initially appeared.

 

Velsberg: AI empowers data-driven decision making in the public sector

How are governments approaching the use of artificial intelligence in public service provision? Consider Estonia as a case study – and let Ott Velsberg, Chief Data Officer of the Estonian Government, be our guide. The country’s AI plans are rooted in words and deeds dating back to 2019. “Our AI strategy aims to make government more proactive, seamless, and automated. The end goal is zero bureaucracy, and AI plays a key part in moving meaningfully in that direction,” Velsberg begins.

In broad terms, this translates into making the lives of citizens and companies as good as possible. One way to achieve that is using data to increase the quality of living, employment, and so on. Artificial intelligence is already being used in the Estonian public sector for both simpler and more complex tasks. “From simultaneous transcription of government meetings or the national podcast, to analysing and classifying the content of texts to understand citizens’ sentiment towards different public services. Or to assign specific risk factors to a wide array of emergency situations,” Velsberg explains. The list goes on, as the Government CDO tells the Digital Government Podcast.

Photo: Federico Plantera and Ott Velsberg

 

Outside the digital world, remote sensing is being used in agriculture to check whether farmers mow their land and are eligible for grants and benefits; to determine where ice-breaking vessels should go; and to keep track of tree heights in different areas. “It’s all in the service of avoiding unnecessary extra work, automating straightforward tasks, getting recommendations and a better overview of a given situation.”

The potential of artificial intelligence and interoperability already tried and tested by the Estonian government – 80 completed case studies, 30 ongoing projects – will soon be channelled through Bürokratt. “In short, it is a government virtual assistant. It will enable citizens to access public services through voice-based interactions. And these services are highly personalized – we’re talking passport and driving license renewals, applications for benefits. So, everything the government does, but more citizen-friendly,” Velsberg says.

 

Can proactive public services ever get too zealous?

Everything points toward a more proactive way of managing public administration, leveraging the power of data and reimagining organizational processes that are no longer fit for purpose. For example, if the government knows of a person’s eligibility for childcare benefits, it should proactively reach out to the potential recipients – particularly in the case of government support that might otherwise remain untapped, be it for healthcare, unemployment, old age, and so on.

However, not all that glitters is gold.

One year ago, a European government crisis and resignation were triggered by a welfare benefits scandal centred on algorithmic misbehaviour. In the Netherlands, as it emerged, an overzealous mechanism of automated checks on benefits recipients had wrongly labelled some 26,000 parents receiving childcare support as fraudsters. The applicants’ only wrongdoings were minor errors in their paperwork, such as missing signatures on certain pages of the application forms. The resulting administrative rulings revoked benefits and imposed thousands of euros in unjust fines on households.

Mark Rutte, then (and to this day) Prime Minister of the country, called it “a colossal stain” on the government, an unprecedented injustice. Even more so given that, just one year earlier, a Dutch court had ordered the immediate suspension of an automated surveillance system for detecting welfare fraud because it violated human rights. While this instance does not halt the exploration of AI use cases in the public sector, it sets off alarm bells.
Yes to AI, but in an ethical way.

 

Nyman-Metcalf: Evaluating information triggers ethical challenges

The Dutch case, in fact, shows how the decision resulting from an automated process depends on the information fed into the system. “Through machine learning you also bring in bias from those who create the system. But even without that – and here is where it gets interesting – certain outcomes might not make sense even if the supporting facts are not wrong,” says Katrin Nyman-Metcalf, Senior Expert on Legal Framework at e-Governance Academy. In a recent episode of the Digital Government Podcast, we addressed at length the ethical challenges of automating public services.

Photo: Federico Plantera and Katrin Nyman-Metcalf

 

While using fairly basic IT tools to automate specific tasks does not present ethical challenges, two problems arise in other situations. First, if the public sector decides to provide a given service with full automation, citizens may have no alternative way to access it – a choice that always exists with a private service provider, where one can simply opt for a different one. Secondly, the big question of when and what to automate arises once the machine is the one taking the decision – that is, when the outcome of a process involves a degree of discretion.

“Tech would allow much more automation than is currently in use in public services. The limitations do not pertain to questions of feasibility so much as of opportunity. Is it ethical for a machine to fully take charge of a given process? Is this a good thing?” Nyman-Metcalf asks. “It’s not a legal issue, it’s not a technical question. Machines make fewer mistakes than humans, but what counts as a mistake? For many public services, it is just a matter of looking at facts A, B, and C and making a decision. But some cases require an evaluation of those facts – and there, machines do not yet act quite like humans,” she explains.

 

In the build-up of futuristic public services, let’s keep an eye on principles

Artificial intelligence for years seemed the next big thing set to take the world by storm – luckily, not in a literal sense. “But is the ‘singularity’ of AI replacing humans around the corner? We are very, very far from it. We may talk about classifying email texts or providing recommendations, but with these cases we’re nowhere near replacing humans,” Velsberg says.

As his examples have shown, automation is already being deployed for data input and back-office tasks in many countries. “But this implementation is incremental, not a change that happens suddenly from something to everything. Evaluating the convenience of using AI should happen step by step along the path to its implementation. Yes, we may have a machine that can do that – but should it?” Nyman-Metcalf concludes. A friendly reminder, rather than a warning, to go beyond the hype and reflect on the (only seemingly) collateral issues around all tech and innovation.