As AI becomes increasingly indispensable in our daily lives, concerns about its impact on fundamental rights and freedoms inevitably arise. Until now, the responsible application of AI has largely depended on the integrity of data scientists and adherence to data privacy laws. To address these concerns, the European Union (EU) has adopted the EU AI Act, which has now officially taken effect!

So, what is the EU AI Act and what does it mean for your business? Let’s dive deep into it!

Key points of the EU AI Act

The AI Act provides a framework for regulating the supply, deployment and use of AI within the EU. It standardizes the rules for placing AI systems on the market and operating them, ensuring consistency across EU member states. The primary goal of the Act is to safeguard against unethical practices by companies, both within the EU and from external entities, thereby promoting trust and integrity in the AI ecosystem.

Risk categories according to the EU AI Act:

High-risk AI applications

  • High-risk AI applications must comply with detailed requirements to manage potential threats. Systems used in healthcare, finance, banking, insurance, education, employment and public services are particularly affected; organizations deploying them must maintain strict control over these AI applications.

Prohibited AI applications

  • Unethical or harmful AI uses are banned outright. Prohibited applications include social credit scoring, emotion recognition, exploiting people’s vulnerabilities (e.g. age or disability), behavioral manipulation, untargeted scraping of facial images, biometric categorization, predictive policing and unauthorized use of real-time biometric identification in public.

Limited- and minimal-risk applications

  • Systems using generative AI face relatively light obligations, mainly around transparency: they may simply have to disclose that content was AI-generated and/or how their models were trained.

  • While general-purpose models like OpenAI’s ChatGPT and Google’s Gemini “present unique innovation opportunities”, they also present “challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed”. These models are therefore subject to strict requirements with respect to EU copyright law, routine testing and cybersecurity.
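The risk tiers above can be sketched as a simple triage helper. This is purely illustrative: the category assignments follow this article’s examples, the function and set names are our own, and none of it is an official taxonomy or legal advice.

```python
# Illustrative triage helper for the AI Act risk tiers described above.
# Category assignments follow this article's examples only.

PROHIBITED_USES = {
    "social credit scoring", "emotion recognition",
    "behavioral manipulation", "predictive policing",
}
HIGH_RISK_DOMAINS = {
    "healthcare", "finance", "banking", "insurance",
    "education", "employment", "public services",
}

def risk_tier(use_case: str, domain: str = "") -> str:
    """Map a described AI use case to the risk tier it likely falls under."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if use_case == "generative ai":
        return "limited-risk (transparency obligations)"
    return "minimal-risk"

print(risk_tier("social credit scoring"))          # → prohibited
print(risk_tier("chatbot", domain="healthcare"))   # → high-risk
```

In practice a real assessment would, of course, be a legal exercise rather than a lookup table, but the ordering matters: prohibited uses are checked first, because they are banned regardless of domain.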

What happens if you don’t comply?

Businesses that don’t comply with the EU AI Act face a range of fines and penalties, depending on the offense. Member states will first notify the organization when non-compliance is suspected. If the organization does not take corrective action, the market surveillance authorities can take ‘all appropriate provisional measures’, including prohibiting, restricting, withdrawing, or recalling the AI system from the national market.

Member states will also have the power to issue monetary penalties. These range from €7.5 million or 1% of global annual turnover up to €35 million or 7% of global annual turnover, whichever is higher, depending on the severity of the infringement.
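A minimal sketch of how those caps combine: each infringement tier is capped at the higher of a fixed amount and a share of global annual turnover. The tier names and the middle bracket below are our reading of the Act’s penalty provisions, and this is not legal advice.

```python
# Sketch of the AI Act's fine structure: each infringement tier is capped
# at the HIGHER of a fixed amount and a share of global annual turnover.
# Tier names and values are assumptions based on the Act's penalty brackets.

PENALTY_TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine for a tier, per the higher-of-the-two rule."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A company with €1 billion turnover committing a prohibited practice:
print(max_fine("prohibited_practice", 1_000_000_000))  # → 70000000.0
```

For a large multinational, the turnover-based share will usually dominate the fixed cap, which is exactly the point of the higher-of-the-two rule.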

How can businesses prepare for the EU AI Act?

If you plan to build, purchase, or use AI systems, it is essential to assess your organization’s readiness for AI Act compliance as soon as possible. Organizations often do not even realize they are already using AI. For instance, popular tools developed by large, international software companies frequently contain AI components, which now power basic organizational functions such as training, recruiting and background checks. As a result, now that the Act has come into force, many organizations are likely already using high-risk AI systems, even if they have no specific plans to integrate AI. Early assessment and compliance preparation are therefore crucial.

Conclusion

The EU AI Act marks a significant milestone in AI regulation and innovation. However, relying solely on the law is insufficient. As the implementation of AI and machine learning models becomes more accessible, the potential for bias, underfitting, overfitting, and other misapplications increases. While this accessibility fosters innovation, it also heightens the likelihood of AI having significant impacts on people’s lives if not properly managed.

Organizations must therefore develop their own policies and controls tailored to their specific risk levels. It is crucial for those who understand the mathematical behavior of AI models to collaborate closely with responsible decision-makers to mitigate potential risks effectively.

Need help developing and implementing a responsible AI strategy? Together with our sister company Conclusion AI360, we help clients deploy AI strategically and responsibly, promoting ethical, responsible and sustainable innovation for a positive impact on society. Find out more about it here.