Artificial Intelligence Act in Europe Fails to Consider Powerful AI Systems

Members of the European Parliament (MEPs) recently met with scientists to discuss several aspects of artificial intelligence and the benefits it could bring to the union's growing economy.


The Problem of AI Regulation in the EU

European Artificial Intelligence Act
An illustration picture taken in London on December 18, 2020 shows the logos of Google, Apple, Facebook, Amazon and Microsoft displayed on a mobile phone with an EU flag in the background. JUSTIN TALLIS/AFP via Getty Images

During the talks, the specialists explained that the EU's proposed bill regulating the technology is not 'future-proof' because it fails to cover the most beneficial and most powerful AI systems, which can handle a far wider range of tasks than others.

Experts refer to the AI systems discussed in the talks as 'foundation models.' Powerful technologies like these are developed by giant enterprises such as Google and are widely perceived as the future of AI innovation.

Foundation models are trained on massive datasets, including images, text, and sound. However, the proposed EU legislation would not cover these huge AI systems themselves, only the specific applications in which they are used.

Massachusetts Institute of Technology (MIT) physics and artificial intelligence specialist Max Tegmark told MEPs on March 21 that, in his view, the current draft of the EU act is not future-proof.

One example of this problem, according to Tegmark, is that excluding general-purpose AI systems from the act shifts responsibility and liability for the technology onto the European enterprises that use these systems in their applications.

The burden is effectively pushed onto end-users, while developers, such as those based in the United States and China, remain free of accountability for future repercussions.

Science Business reported that Tegmark compared the issue to a scenario in which an airplane manufacturer buys an engine from another company but is not allowed to see what is inside the engine, obtain full details about the product, or learn how it works, even though the manufacturer has to install that engine in one of its aircraft.


If the plane crashes, Tegmark explained, the only liable party is the airplane manufacturer, not the engine maker. This arrangement, he said, puts end-users and businesses in a terrible position.

University of California, Berkeley computer science specialist Stuart Russell agreed with Tegmark's insights. He said it makes perfect sense for quality and ethics to be screened at the source: the large-scale vendors of massive general-purpose AI systems, which hold the datasets and design information needed to conduct conformity assessments, rather than the much larger population of small businesses and integrators in Europe, who do not.

European Artificial Intelligence Act

The European Artificial Intelligence Act was proposed to the parliament to establish laws governing AI systems across the union. It is the first attempt at major regulation of AI technologies and at preventive measures against the field's possible downsides. The act includes strict prohibitions on subliminal manipulation and China-style social scoring.

General-purpose AI algorithms such as OpenAI's GPT-3 already display an immense grasp of intellectual tasks and can stretch their skills across various functions and other high-level concepts, Tegmark told MEPs.

However, GPT-3's capabilities have also reached the level at which they can produce catastrophic misunderstandings. In a previous experiment with a GPT-3-based chatbot, the AI system mistakenly advised a mock suicidal patient to kill themselves.

Check out more news and information on Artificial Intelligence in Science Times.
