The European Union Artificial Intelligence (AI) Act: Managing security and compliance risk at the technological frontier

A growing wave of AI-related legislation and regulation is building, with the most significant example being the European Union’s (EU) Artificial Intelligence (AI) Act. In March 2024, the European Parliament approved this sweeping legislation.

It will clearly have huge impacts on the way business is done, both in the EU and globally. In this post, we’ll look at the implications for organizations deploying AI to drive business value.

We’ll also explore how Scrut’s Responsible AI framework can help organizations address the coming regulatory requirements.

Background

With initial drafts dating back to 2018, the AI Act was formally proposed by the European Commission in April 2021, and it continued to evolve over the subsequent three years. The explosion in the use of AI tools following the launch of ChatGPT in late 2022 lent special urgency to EU rulemakers’ efforts.

Following a graduated, risk-based approach, the AI Act sorts AI systems into four categories:

  • Unacceptable risk
  • High risk
  • Minimal risk
  • Specific transparency risk

While each of the EU member states will need to develop its own regulatory infrastructure, the AI Act also creates a European AI Office within the European Commission. This office will help coordinate between the various national governments as well as supervise and regulate “general purpose” AI models, such as Large Language Models (LLMs) trained on a diverse array of information.

Although it’s not clear from official communications, a leaked draft of the Act suggested it would not apply to open-source models. The open-source community has aggressively criticized many EU regulatory efforts, including both the AI Act and the proposed Cyber Resilience Act (CRA).

Enforcement of the Act will begin within six months of passage, when unacceptably risky AI systems will be banned outright. After 12 months, rules for general-purpose AI will come into force, and at 24 months the entire AI Act will apply.
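For illustration, here is a minimal Python sketch of that staggered timeline. The entry-into-force date below is an assumed placeholder, since the real clock starts only once the Act is formally published:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d` (day preserved)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Assumed entry-into-force date, purely for illustration.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Bans on unacceptable-risk systems (6 months)": add_months(entry_into_force, 6),
    "General-purpose AI rules (12 months)": add_months(entry_into_force, 12),
    "Full application of the Act (24 months)": add_months(entry_into_force, 24),
}

for milestone, deadline in milestones.items():
    print(f"{milestone}: {deadline:%d %B %Y}")
```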

The prescribed fines for non-compliance are steep:

  • Up to €35 million or 7% of global annual revenue (whichever is greater) for use of prohibited AI applications.
  • Up to €15 million or 3% of global annual revenue (whichever is greater) for violations of other obligations.
  • Up to €7.5 million or 1.5% of global annual revenue (whichever is greater) for supplying incorrect information.
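To make that exposure concrete, here is a minimal sketch of how the “whichever is greater” ceiling works. The tier names are our own shorthand for the amounts above, and this is an illustration, not legal advice:

```python
# Maximum-fine ceilings per violation tier: (fixed amount in EUR, share of
# global annual revenue). Tier names are our own shorthand.
TIERS = {
    "prohibited_use": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_annual_revenue_eur: float) -> float:
    """Ceiling is the higher of the fixed amount and the revenue share.
    (The Act applies the lower figure to SMEs; ignored in this sketch.)"""
    fixed, share = TIERS[tier]
    return max(fixed, share * global_annual_revenue_eur)

# Example: a firm with €2B in global revenue deploying a prohibited
# application faces up to max(€35M, 7% of €2B) = €140M.
print(f"€{max_fine('prohibited_use', 2_000_000_000):,.0f}")
```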

The first category of potential fines is meant to strongly deter organizations from deploying certain types of AI applications, which we’ll dive into next.

Banned applications of AI

The EU drew a clear line in the sand by completely outlawing certain types of AI applications and development. Banned systems include those:

  • That manipulate human behavior to circumvent free will. The EU press release gives the example of toys that use voice assistance to encourage dangerous behavior in minors, but it isn’t clear how this rule will apply to more ambiguous situations. Virtually every type of advertising attempts to redirect human behavior, and it’s hard to see how advertising will avoid using AI in the future, so this is definitely an area that will need clarification.
  • That allow ‘social scoring’ by governments or companies. This provision is a clear allusion to fears that China is planning to build a system that integrates financial, social media, and criminal-record monitoring to evaluate its entire population. The implementation details will be important here, because many companies use measures like Net Promoter or customer sentiment scores to track reputational and other business risks.
  • That use emotion recognition systems in the workplace. This is another area that will need substantial elaboration. While it is understandable that the EU might want to prohibit certain types of oppressive employee monitoring, where it draws the line will matter: a range of AI-powered communication tools already use emotional signals to predict things like churn risk, for example.
  • That include certain applications of predictive policing. While this is not the blanket ban some had hoped for, it seems certain crime-prediction methods will be outlawed.
  • That allow real-time remote biometric identification for law enforcement purposes in public (with some exceptions for national security). This provision appears to bar police from deploying facial recognition or other sensor systems in a general way to identify criminals. The narrowness of the exceptions will be key to determining how big an issue this provision is.

High-risk systems and their required controls

Aside from banned systems, there is another category of permitted but high-risk use cases. These include:

  • Critical infrastructure applications, e.g., water, gas, and electricity
  • Educational institution admission
  • Biometric identification
  • Justice administration
  • Sentiment analysis
  • Medical devices
  • Border control

The AI Act will require such systems to comply with a range of risk-mitigation requirements, including those related to:

  • High-quality data sets
  • Logging and auditing (see the sketch after this list)
  • Human oversight
  • High accuracy
  • Cybersecurity
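As one concrete illustration of the logging and auditing requirement, here is a minimal, hypothetical sketch of an audit-logged prediction wrapper. The class and field names are our own invention, not terminology from the Act or any particular library:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

class AuditedModel:
    """Wraps a prediction function and writes an audit-trail entry for
    every decision so a human overseer can review it later."""

    def __init__(self, model_fn, model_version: str):
        self.model_fn = model_fn
        self.model_version = model_version

    def predict(self, inputs: dict) -> dict:
        result = self.model_fn(inputs)
        # Record enough context to reconstruct the decision afterwards.
        logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "inputs": inputs,
            "output": result,
            "human_reviewed": False,  # flipped once an overseer signs off
        }))
        return result

# Usage with a stand-in scoring function:
model = AuditedModel(lambda x: {"score": 0.42}, model_version="v1.0.0")
model.predict({"applicant_id": "A-123"})
```

In a real deployment, the audit trail would go to tamper-evident storage and feed a human-review queue, but the core idea is the same: every automated decision leaves a record an overseer can reconstruct.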

How Responsible AI sets organizations up for success

As a wave of AI-related innovation swept the globe, we understood that organizations would need a firm set of guidelines in place to navigate it safely, effectively, and ethically. That is why we launched our Responsible AI framework.

No matter how the AI Act eventually turns out, it is clear that companies will need to deal with an array of first- and second-order challenges (the latter in the form of regulatory action). These include:

  • Dependence on unreliable or biased data sources
  • Exposure of sensitive intellectual property
  • Legal complexity and uncertainty
  • Potential privacy infringement

Seeing clearly how important these challenges would become, we developed an actionable framework to help address them. That is why Responsible AI provides a roadmap for:

  • Cost savings through early risk identification
  • Responsible data and systems usage
  • Avoidance of fines and penalties
  • Risk identification and mitigation
  • Ethical and legal compliance
  • Building customer trust
  • Out-of-the-box controls

Conclusion

As we have seen from previous EU regulatory efforts, especially the General Data Protection Regulation (GDPR), the impacts of the AI Act are likely to be felt far and wide. While it may take some time for regulators to catch up with the pace of technology, they inevitably do so. 

Even five years after it came into effect, the GDPR is still building momentum in terms of enforcement action, resulting in some shocking fines for major companies.

This type of “regulation through enforcement” is unfortunate but likely unavoidable as companies test the limits of new rules and governments react aggressively. The best approach, then, is to follow a balanced course of action that allows for taking advantage of AI’s many benefits while avoiding or mitigating its greatest risks.

Interested in seeing how Scrut’s Responsible AI framework can help you navigate rules like the EU AI Act? Book a demo!
