AI in society: Treasure chest or Pandora’s box?
Written by Dr Katrin Nyman Metcalf, eGA Partner
That AI is part of society is a fact. It would be presumptuous to predict whether it will prove more beneficial than dangerous.
Although technology scepticism has existed for as long as technology itself, it has often been based on a lack of understanding of the innovation, a lack that could be mitigated by explaining the new tools.
However, for AI, warnings about potentially unforeseeable and perhaps unstoppable negative effects are also coming from genuine experts. Geoffrey Hinton, who won the Nobel Prize in Physics in 2024, is one such person.
He states: “There is also a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short-term profits, our safety will not be the top priority.
We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction.”1
Should we de-invent AI?
With such an uncertain outlook, some feel it would be better if AI had not been invented at all and that, ideally, it should be de-invented. This is not the first time in history that there have been calls to de-invent a particular technology; similar statements were made about cars and nuclear weapons.
Those debates showed the futility of the idea. If something has been invented, it exists, and pretending otherwise is unlikely to be effective. Even if some states agreed not to use a technology, other states or non-state actors could continue, and development in the dark would be even more dangerous.
Furthermore, the geopolitical landscape of 2026 is hardly conducive to even imagining a global consensus on countering the potential negative effects of new technologies. Not only are states highly unlikely to come together at this moment, but private firms (which tend to be more interested in short-term profits) also hold unprecedented power in the AI field.
Consequently, we are destined to share our future with AI – at least for as long as AI agrees to share its future with us!
Doomsday scenarios with machines taking over make for exciting discussions, but extreme debates may deflect attention from more mundane and immediate concerns.
The rapid spread of the technology to users in the private and public sectors poses problems and questions that may not threaten humanity with extinction but that nevertheless challenge the rule of law and the protection of fundamental rights – or simply make the daily lives of ordinary people less comfortable.
AI has the potential to make our lives easier, but as with most technologies, it will not do so by itself. Technologies are generally neither good nor bad – everything depends on how they are used.
The same is true for AI, albeit with the important difference that the technology’s autonomy may mean that it is not a human who decides how it will be used. This creates an extremely challenging situation for regulation.
Regulations for innovation
In Estonia, people are generally favourable towards the use of technologies for governance. We have used digital tools for more than a quarter of a century, and people are familiar with digital data and e-services.
Digital solutions are integrated into legislation on many different issues to ensure that digital governance is not separate from “regular” governance but that digitalisation is integrated throughout society.
The right to issue automated decisions, whether using AI or earlier technologies, is based on authorisation norms in acts regulating various fields. In a highly digital society, it is easier to move to the most modern technologies.
In 2018, the Estonian government established a cross-sectoral expert group to analyse and prepare for the introduction of AI. Its work included the development of a test environment and a study, commissioned from Tallinn University of Technology, to determine the legal changes required.
The study (issued in 2019) advised against a single, comprehensive AI law, as the issues were too disparate and the technology was not developed enough to be meaningfully regulated.*
Estonia participated in the AI regulatory work of the EU, leading to the AI Act in 2024. The fact that this act is already subject to change illustrates just how difficult it is to regulate a technology that is still disparate and rapidly developing, even if it is so widespread that regulation is nevertheless meaningful.
Regulation does not mean stopping innovation – it means doing what is possible to prevent negative consequences or at least to create an environment in which potential risks must be evaluated. The AI Act (as well as AI laws in other countries) has an important role in creating systems and institutions that can make risk assessments.
Automation without discretion
Authorities (just like private firms) may be overly eager to automate everything simply because it is possible. In the private sector, consumer demands and competition can keep developments reasonable; for the public sector, policy decisions must be made. If effective digital administration is used wisely, it will free up resources to deal with non-routine queries and human contact.
Take Estonia as an example. In line with its general technology-friendliness, the government has embraced AI, but the initial uses have been for what may loosely be likened to back-office activities.
Even before AI, most queries in the X-Road data interoperability platform were automated, with databases being updated by the machines themselves. If technology permits more comprehensive data management, it is unlikely to lead to problems or protests.
To promote AI use and at the same time make it more transparent, access to open-source AI components is provided for interested parties, whether in the public or private sector, to reuse free of charge.**
However, delegating discretion to AI is another matter. The technology might make fewer mistakes and be less biased than humans, but whether this is the case is largely unknowable, which is the main problem with AI from a legal and human rights viewpoint. Perhaps in certain cases we still need simply to decide not to use a technology, even when it is available.
Ensuring human-centred administration
In comparison with end-of-the-world scenarios, the concern that AI removes the reasoning behind administrative decisions may appear insignificant. However, this is one of the seemingly small matters that help to ensure a human-centred administration under the rule of law.
If a person knows why a certain decision was made, there is more opportunity to challenge it, for which there should be an independent court process. It is easier to identify discrimination, corruption and nepotism if the decision-maker must explain themselves and ordinary observers can see on what basis something was decided.
Access to information is an essential tool in a democratic state: we have to share information about ourselves, and we have to fulfil various obligations in order to live in a society, but we do so in the knowledge that we have a right to know, for instance, how our data is used and what our taxes are paying for. An algorithmic decision presented as a fait accompli deprives us of this key part of the social contract.
So far, there is limited case law on the use of AI, but cases are starting to appear in courts around the world. Complaints in Estonia and elsewhere claim that AI has made wrongful decisions in administrative cases.
Still, in many such cases, the outcome of the court deliberations shows that the final decision – the one involving discretion, the need to choose between possible outcomes – was made by humans.
Even where an autonomous AI decision was made, courts have focused on how humans used the AI and what prompts and data were given. Thus, the key outcome is that the administration is responsible for its decisions regardless of what technology it uses to reach them.
Trying to stay in control
This is a reasonable principle, but whether it is future-proof is another question. Even if courts retain such reasoning, it might not be adequate at a time when administrations know less and less about how a certain decision was reached and might not even know the basis on which it was made.
We may come to a point where the administration either feels it cannot be responsible or finds itself responsible for something it cannot affect in any way. Today we may say this illustrates why discretionary decisions cannot be made by AI – but can AI be blocked from exercising such powers indefinitely?
In the debate, it has been suggested that AI should be given a legal personality, but it is difficult to see how this would provide a solution unless the AI itself can answer in court and rectify any damage.
I am fully aware that this article contains several question marks. As Dr Hinton said, we have no idea whether we can stay in control – but for now, we should at least try.
Even just accepting that human rights and the rule of law shall continue to apply under new technological circumstances means something. It is possible to create environments and institutions that ensure that in the development and application of AI, there is a legal (and moral) obligation to consider the technology’s potential for good – or bad.