US lawmakers are holding yet another congressional hearing on the dangers of algorithmic bias in social media. Meanwhile, the European Commission has unveiled a regulatory framework that, if adopted, would have global implications for the future of AI development.
This isn’t the EC’s first attempt at guiding the growth and evolution of this technology. After several meetings with advocacy groups, the EC released the first European Strategy on AI and Coordinated Plan on AI in 2018. Those were followed in 2019 by the Guidelines for Trustworthy AI, and in 2020 by the Commission’s White Paper on AI and its Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.
Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights. At the same time, such rules for artificial intelligence should be balanced, proportionate and not unnecessarily constrain or hinder technological development. This is of particular importance because, although artificial intelligence is already present in many aspects of people’s daily lives, it is not possible to anticipate all possible uses or applications thereof that may happen in the future.
The European Commission
Artificial intelligence systems are already ubiquitous, from the recommendation algorithms that help us decide what to watch on Netflix to those that suggest whom to follow on Twitter.
The European Commission has once again stepped out in a bold fashion to address emerging technology, just as it did with data privacy through the GDPR. The proposed regulation is quite interesting in that it attacks the problem from a risk-based approach.
Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley
Limited-risk applications include chatbots and deepfake content. In these cases, all the AI maker needs to do is inform users up front that they’ll be interacting with a machine rather than another person. For minimal-risk products, such as the AI found in video games, the regulations impose no special restrictions or added requirements before going to market.
Companies and developers who ignore these regulations face hefty fines measured as a share of revenue: penalties for noncompliance can range up to 30 million euros or 4 percent of the entity’s global annual revenue, whichever is higher.
It’s important for us at a European level to pass a very strong message and set the standards in terms of how far these technologies should be allowed to go. Putting a regulatory framework around them is a must and it’s good that the European Commission takes this direction.
Dragos Tudorache, Head of the European Parliament’s committee on artificial intelligence