Who Wants to Regulate Artificial Intelligence?


In the winter of 2016, Google Nest, Google's home automation company, pushed a software update to its thermostats that damaged their batteries. A large number of users were left without heating, although many were able to replace the batteries, buy a new thermostat, or wait for Google to fix the problem. The company indicated that the failure lay in the artificial intelligence (AI) system that managed these updates.

What would have happened if the majority of the population had used one of these thermostats and the failure had left half the country exposed to the cold for days? A technical problem would have become a social emergency requiring state intervention. All because of a glitch in an AI system.

No jurisdiction in the world has developed a comprehensive and specific regulatory response to the problems arising from artificial intelligence. This does not mean a complete legislative vacuum: there are other ways of responding to many of the harms that AI can cause.

For example:

  • In accidents caused by self-driving cars, insurers will continue to be the first port of call for claims.
  • Companies that use AI systems for their job selection processes can be sued if they engage in discriminatory practices.
  • Insurers that engage in anti-consumer practices based on the analyses generated by the AI models they use to set rates and decide whom to insure will continue to be liable as businesses.

In general, other existing bodies of law – such as contract law, transport law, civil liability, consumer law and even human rights protections – would adequately cover many of AI's regulatory needs.

However, this does not seem to be enough. There is a certain consensus that the use of these systems will generate problems that our legal systems cannot easily solve. From the distribution of responsibility between developers and professional users to the scalability of harms, AI systems challenge our legal reasoning.

For example, if an AI system finds illegal information on the dark web and makes investment decisions based on it, who should be responsible for these illegal investment practices: the bank that manages the pension fund, or the company that created the automated investment system?

If an autonomous community decides to introduce a prescription copayment managed by an AI system, and that system makes small mistakes (say, a few cents on each receipt) that affect almost the entire population, who is at fault for the initial lack of oversight? The administration? The contractor that installed the system?

Towards a European (and global) regulatory order

Since the introduction in April 2021 of the proposed EU regulation on artificial intelligence, the so-called AI Act, a slow legislative process has been under way that should lead to a regulatory regime for the entire European Economic Area and, who knows, Switzerland, by 2025. The first steps are already visible in the state bodies that will exercise part of the control over these systems.

But what about outside the European Union? Who else wants to regulate artificial intelligence?

On these issues we tend to look to the United States, China and Japan, and we often assume that legislation is a matter of degree: more or less environmental protection, more or less consumer protection. In the context of artificial intelligence, however, it is striking how much policymakers' views can differ.

United States

In the United States, the main piece of AI legislation is a law of limited substantive scope, more concerned with cybersecurity, which defers to indirect regulatory techniques such as standard setting. The basic idea is that the standards developed to control the risks of AI systems will be voluntarily adopted by companies and become de facto industry norms.

To retain some control over these standards, rather than leaving them to the discretion of the organizations that usually develop technical standards and that are controlled by the companies themselves, the risk management standards for AI systems are in this case being developed by a federal agency (NIST).

Thus, the US is immersed in a standard-setting process open to industry, consumers and users. This is now accompanied by the White House's draft AI Bill of Rights, also voluntary in nature. At the same time, many states are trying to develop specific legislation for particular contexts, such as the use of artificial intelligence in hiring processes.

China

China has developed a complex plan not only to lead the development of artificial intelligence, but also to regulate it.

To do this, it combines:

  • Regulatory experimentation (some provinces may develop their own rules, for example to facilitate the development of autonomous driving).
  • Standards development (with a complex plan covering more than thirty sub-sectors).
  • Strict regulation (for example, of online recommendation engines, to prevent recommendations that might disturb the social order).

In all of this, China is betting on regulatory oversight of AI that does not hinder its development.

Japan

In Japan, on the other hand, they don’t seem particularly concerned about the need to regulate AI.

Instead, they are confident that their tradition of cooperation among the state, businesses, workers and users will prevent the worst problems AI could cause. For now, their policies focus on the development of Society 5.0.

Canada

Canada is perhaps the most advanced country in terms of regulation. There, for the past two years, every AI system used in the public sector has had to undergo an impact assessment that anticipates its risks.

For the private sector, the Canadian legislature is now debating a rule similar to (albeit simpler than) the European one. A similar process began last year in Brazil; although it seems to have lost steam, it may be revived after the elections.

From Australia to India

Other countries, from Mexico to Australia, Singapore and India, are holding off.

These countries seem confident that their existing rules can be adapted to prevent the worst harms AI can cause, and are waiting to see how other initiatives play out.

Two Games in Play

Within this legislative diversity, two games are being played.

The first pits those who argue that it is too early to regulate a disruptive – and poorly understood – technology like AI against those who prefer a clear regulatory framework that addresses the key issues while creating legal certainty for developers and users.

The second, and perhaps more interesting, game is the competition to be the de facto global regulator of AI.

The EU's bet is clear: to be the first to establish rules, which will bind anyone who wants to sell their products in its territory. The success of the General Data Protection Regulation, today a global reference for technology companies, encourages European institutions to follow this model.

Faced with this, China and the United States have chosen to avoid detailed regulation, hoping that their companies can develop without excessive restrictions and that their standards, even if voluntary, will become a reference for other countries and companies.

For the moment, time is playing against Europe. The United States will publish the first version of its standards in the coming months, while the EU will not have applicable legislation for another two years. Perhaps the excess of European ambition has a cost, inside and outside the continent: creating rules that other regulations will already have superseded by the time they come into force.
