On Friday, European Union policymakers reached a landmark agreement on a comprehensive new law, known as the AI Act, to regulate artificial intelligence (AI). The AI Act is one of the world's first attempts to govern this rapidly advancing technology, which carries significant societal and economic consequences.
The launch of OpenAI’s ChatGPT chatbot in November 2022 marked AI’s entry into the mainstream. The popularity of generative AI technology surged almost instantly, triggering a competitive race in AI development.
With AI predicted to reshape the global economy, trillions of dollars in estimated value are at stake. As Jean-Noël Barrot, France’s digital minister, stated this week, “Technological dominance precedes economic dominance and political dominance.”
Europe has been at the forefront of AI regulation, having begun work on what would become the AI Act in 2018. In recent years, EU leaders have sought to impose a new level of oversight on the tech industry, similar to the regulation of the healthcare or banking sectors. The bloc has already enacted far-reaching laws on data privacy, competition, and content moderation.
The first draft of the AI Act was released in 2021. However, as technological breakthroughs emerged, policymakers found themselves revising the law. The initial version made no mention of general-purpose AI models like those that power ChatGPT.
The need to regulate AI became more pressing following the release of ChatGPT last year, which showcased the advancing capabilities of AI and became a global phenomenon. In the United States, the Biden administration recently issued an executive order partly focusing on the national security implications of AI. Other countries like Britain and Japan have adopted a more laissez-faire approach, while China has imposed some restrictions on data usage and recommendation algorithms.
The AI Act aims to set a global standard for countries seeking to harness the potential benefits of AI while guarding against its risks, such as the automation of jobs, the spread of misinformation online, and threats to national security. Although the law still needs final approval, the political consensus means its main provisions are effectively settled.
Certain practices, such as the untargeted scraping of images from the internet to build facial recognition databases, would be banned outright under the AI Act.
European policymakers focused on the riskiest uses of AI by corporations and governments, including those in law enforcement and the operation of essential services like water and energy. Makers of the largest general-purpose AI systems, such as those behind the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that generate manipulated images, known as "deepfakes", would have to disclose that their output is AI-generated, according to EU officials and early drafts of the law.
The use of facial recognition software by police and governments would be limited, with exceptions only for certain safety and national security situations. Companies that breach these regulations could face penalties amounting to up to 7% of their global sales.
Thierry Breton, the European Commissioner who played a key role in negotiating the deal, said Europe had established itself as a trailblazer, one that understands the importance of its role as a global standard setter. Breton wrote on social media that the AI Act is more than just a set of rules: it is a springboard for EU startups and researchers to lead the global AI race. "The best is yet to come," he added optimistically.
The agreement, reached in Brussels, came after three days of negotiations, including an initial 22-hour session that began on Wednesday afternoon and ran into Thursday. The final text was not immediately released, as talks were expected to continue behind closed doors to finalize technical details, which could delay final passage. Votes must still be held in the European Parliament and the Council of the European Union, which includes representatives from the bloc's 27 member states.
The debate within the European Union was contentious, reflecting how much AI has confounded lawmakers. EU officials were divided over how far to go in regulating newer AI systems, fearing that excessive rules would handicap European startups trying to compete with American companies like Google and OpenAI. Lower-risk systems, such as chatbots like OpenAI's ChatGPT and technology that generates images, audio, or video content, would be subject to new transparency requirements under the law.
To regulate AI, policymakers adopted what they call a "risk-based approach", reserving the most stringent oversight and restrictions for a specific set of applications. Companies that develop AI tools with the potential to cause significant harm to individuals and society, particularly in areas such as recruitment and education, would be required to provide regulators with evidence of risk assessments, details of the data used to train their systems, and assurances that the software does not perpetuate harmful biases such as racial discrimination. Human oversight would also be required in creating and deploying these systems.
Yet even as the AI Act is celebrated as a regulatory milestone, questions remain about its effectiveness. Many elements of the policy are not expected to take effect for 12 to 24 months, a considerable stretch given the pace of AI development.
Until the final moments of the negotiations, policymakers and countries were still debating the law's precise language and how to balance fostering innovation with preventing potential harm.