It is the first large-scale regulation worldwide on the use of artificial intelligence. It aims to establish a global standard for using AI, recognising its great potential and seeking to mitigate its risks and impacts.
The regulation is based on the first proposal submitted by the European Commission in April 2021. The rules establish obligations for AI based on its potential risks and level of impact.
AI systems presenting only limited risk are subject to light transparency obligations (e.g., a declaration that their content has been generated by AI), while high-risk AI systems must meet a set of requirements and obligations in order to access the EU market.
Banned applications of AI include, among others, biometric categorisation systems that use sensitive characteristics (e.g. political, religious, or philosophical beliefs, sexual orientation, race) and AI systems that manipulate human behaviour to circumvent people's free will or that exploit people's vulnerabilities.
The provisional agreement also clarifies that the regulation should not affect the national security competences of member states and will not apply to systems used exclusively for military or defence purposes, or solely for research and innovation.
To oversee the most advanced AI models and enforce the common rules in all Member States, the Commission has set up an AI Office.
EU Member States, industry representatives, and SMEs play an essential role through consultation and the provision of technical expertise in implementing the regulation.
Finally, the regulation introduces severe fines for violations of the AI Act, calculated as a percentage of the offending company's annual global turnover.