The EU AI Act and SMB compliance

On July 12, 2024, the European Union (EU) Official Journal published the full text of the AI Act. This set into motion the final chapter of the most impactful security and privacy law since the General Data Protection Regulation (GDPR) came into force in 2018.

It will have enormous implications for how companies do business in the EU and globally.

We reviewed some of the law’s requirements in a previous post, but in this one, we will examine the practical implications for small and medium businesses (SMBs).

The law applies broadly

Definitions are important when it comes to new legislation. And the AI Act is broad in this respect. For example, it defines an “AI system” as a machine-based system that is “designed to operate with varying levels of autonomy” and “may exhibit adaptiveness after deployment.”

This definition can cover much of the software that SMBs use, develop, and resell. While the law does make specific allowances for SMBs and startups, they are not exempt from its requirements.

The act also lays out a variety of roles related to AI systems, such as:

  • Provider – anyone who develops an AI system (or contracts someone else to do so) and places it on the EU market under their own name or trademark
  • Deployer – anyone using an AI system in a professional capacity (with an exception for purely personal use)
  • Importer – anyone located in the EU who places on the market an AI system bearing the name or trademark of a party established outside the EU
  • Distributor – anyone in the supply chain, other than the provider or the importer, who makes an AI system available on the EU market

If there is any chance your company does any of these things with AI systems, you should keep reading.
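
To make that self-check concrete, here is a minimal Python sketch of the triage. The class, field names, and logic are our own illustrative simplification of the Act’s definitions, not anything the law itself defines.

```python
from dataclasses import dataclass

@dataclass
class AIActivity:
    """Simplified description of what a company does with an AI system.

    Every field is a hypothetical simplification of the Act's legal
    definitions, for illustration only.
    """
    develops_system: bool            # builds the system (or contracts it out)
    own_name_or_trademark: bool      # markets it under its own brand
    uses_system: bool                # uses the system professionally
    personal_use_only: bool          # purely personal, non-professional use
    located_in_eu: bool
    system_from_third_country: bool  # bears a non-EU party's name or trademark
    in_supply_chain: bool            # makes the system available on the EU market

def roles_under_ai_act(a: AIActivity) -> list[str]:
    """Return the role(s) a company may plausibly hold.

    Note that one company can hold several roles at once.
    """
    roles = []
    if a.develops_system and a.own_name_or_trademark:
        roles.append("provider")
    if a.uses_system and not a.personal_use_only:
        roles.append("deployer")
    if a.located_in_eu and a.system_from_third_country and a.in_supply_chain:
        roles.append("importer")
    # A distributor is anyone else in the supply chain who makes the
    # system available on the EU market.
    if a.in_supply_chain and "provider" not in roles and "importer" not in roles:
        roles.append("distributor")
    return roles

# Example: an EU SMB that uses and resells a US vendor's AI tool
# under the vendor's brand.
activity = AIActivity(
    develops_system=False, own_name_or_trademark=False,
    uses_system=True, personal_use_only=False,
    located_in_eu=True, system_from_third_country=True,
    in_supply_chain=True,
)
print(roles_under_ai_act(activity))  # ['deployer', 'importer']
```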

Documentation requirements are piling up

A key task for any SMB dealing with EU AI Act requirements is determining whether an AI system it is responsible for qualifies as “high-risk.”

If so, the act requires establishing:

  1. Risk and quality management systems: The focus here is to identify, analyze, estimate, and evaluate risks to health, safety, or fundamental rights. Companies also need to implement appropriate risk management measures.
  2. A data governance program: Tracking the provenance and quality of training data helps to measure and manage biases and to ensure representativeness (see the sketch after this list).
  3. Detailed technical documentation: Beyond facilitating the safe operation of AI systems, documentation is critical for demonstrating compliance. It should describe the design, development process, and performance of the AI system.
  4. Transparency: Those responsible for AI systems need to provide clear and accessible information on their capabilities and limitations. The goal is to ensure users understand the operation and output of the AI system, including foreseeable risks.
  5. Accuracy, robustness, and cybersecurity: In addition to meeting safety considerations, high-risk systems must perform consistently throughout their lifecycle and remain resilient against errors, faults, and adversarial attacks.
  6. Post-market monitoring: Gathering data on the AI system’s performance and compliance keeps risk management and quality systems up to date.
  7. Human oversight: A final key requirement for high-risk AI systems is ensuring human operators can understand and appropriately respond to the AI system’s operations and outputs.
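
To ground item 2, the sketch below shows one hypothetical shape a training-data provenance record could take in Python. The Act describes the data governance obligation, not a schema, so every field and check here is an assumption.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Minimal provenance record for one training dataset.

    The AI Act describes the data governance obligation, not a schema,
    so every field here is a hypothetical example.
    """
    name: str
    source: str                        # where the data came from
    collected_on: date
    license: str                       # legal basis for using the data
    contains_personal_data: bool
    known_biases: list[str] = field(default_factory=list)
    representativeness_notes: str = ""

def governance_gaps(records: list[DatasetRecord]) -> list[str]:
    """Flag records that would block a bias or representativeness review."""
    gaps = []
    for r in records:
        if not r.representativeness_notes:
            gaps.append(f"{r.name}: no representativeness assessment")
        if r.contains_personal_data and r.license == "unknown":
            gaps.append(f"{r.name}: personal data without a documented legal basis")
    return gaps

# Example: one internal dataset that has not been assessed yet.
records = [DatasetRecord("support tickets 2023", "internal CRM export",
                         date(2024, 1, 15), "unknown", True)]
print(governance_gaps(records))
```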

Even for systems that are not high-risk, the AI Act has additional requirements for systems that create lifelike content (which the Act refers to as “deep fakes”) and more powerful general-purpose AI models.

Liability risk expands

Another challenge for SMBs under the EU AI Act will be the increased risk of government and private legal action. The EU AI Act lays out a series of fines to penalize non-compliance:

  • €35,000,000 or up to 7% of global annual revenue for using prohibited AI systems
  • €15,000,000 or up to 3% of global annual revenue for violating several other requirements
  • €7,500,000 or up to 1% of global annual revenue for supplying incorrect information

For each tier, SMBs pay the lower of the two amounts (the fixed sum or the percentage of revenue), which can still be an enormous burden for a growing company.
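
To make the “lower of the two amounts” rule concrete, here is a minimal worked example in Python. The revenue figure is invented, and the function is our own illustration rather than anything prescribed by the Act.

```python
def sme_fine_cap(fixed_cap_eur: int, pct_of_revenue: float,
                 global_annual_revenue_eur: int) -> float:
    """SMB/SME fines are capped at the LOWER of the fixed amount
    and the percentage of global annual revenue."""
    return min(fixed_cap_eur, pct_of_revenue * global_annual_revenue_eur)

# Hypothetical SMB with 10 million euros in global annual revenue:
revenue = 10_000_000
print(sme_fine_cap(35_000_000, 0.07, revenue))  # 700000.0 - prohibited AI systems
print(sme_fine_cap(15_000_000, 0.03, revenue))  # 300000.0 - most other violations
print(sme_fine_cap(7_500_000, 0.01, revenue))   # 100000.0 - incorrect information
```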

Furthermore, additional EU regulation may make it easier for private parties to sue AI providers for product defects.

  • Proposed changes to the Product Liability Directive (PLD) would create a presumption of defectiveness for AI products that do not comply with mandatory safety requirements (including those of the EU AI Act), making it easier for private parties to win in court.
  • The proposed AI Liability Directive (AILD) would ease the burden of proof for people harmed by AI systems, including non-customers, making legal action more viable.

ISO 42001 as a way to manage risk

Published at the end of 2023, ISO 42001 is a new compliance standard laying out best practices for building an AI Management System (AIMS). After being evaluated by an external auditor, companies can receive certification under the standard.

In addition to generally building customer trust and ensuring proper AI governance, ISO 42001 is also likely to be adopted as a “harmonized standard” under the EU AI Act. The biggest advantage here is that high-risk AI systems and general-purpose AI models will be presumed to be in conformity with much of the AI Act if they are also compliant with a harmonized standard (like ISO 42001).

While this is no guarantee, it goes a long way toward reducing risk. Other jurisdictions, like the State of Colorado in the United States, have taken similar steps by making ISO 42001 compliance a defense against certain alleged violations of the law.

Furthermore, implementing ISO 42001 is itself an effective way to manage risk. At a minimum, it requires:

  • Laying out organizational roles and responsibilities when it comes to AI
  • Monitoring for incidents and other non-conformities
  • Conducting AI risk and impact assessments (illustrated in the sketch below)
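
As a concrete illustration of that last requirement, the sketch below models a bare-bones AI risk register in Python. ISO 42001 calls for risk and impact assessments but does not mandate any particular structure, so the fields and 1–5 scoring here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row of a simple AI risk register.

    ISO 42001 requires risk and impact assessments but does not mandate
    this structure; the fields and 1-5 scoring are illustrative.
    """
    system: str
    risk: str        # e.g., harm to health, safety, or fundamental rights
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    owner: str
    treatment: str   # mitigate / transfer / accept / avoid

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRiskEntry("resume screener", "biased rejection of candidates",
                3, 4, "head of HR", "mitigate"),
    AIRiskEntry("support chatbot", "hallucinated refund promises",
                4, 2, "support lead", "mitigate"),
]

# Review the highest-rated risks first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.system}: {entry.risk} -> {entry.treatment}")
```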

The standard also includes an expansive set of optional controls in Annex A that facilitate:

  • Responsible AI development goals, objectives, and procedures
  • Using external input to improve AI system security and safety
  • Effective data governance, classification, and labeling

Conclusion

The AI Act is the most consequential piece of AI legislation ever passed, and its impacts will be felt for decades. Whether or not you agree with the EU’s regulatory approach, its requirements will phase in over the next few years.

SMBs with any exposure to the EU market should carefully examine their business to determine if they meet any of the definitions of covered organizations. Even if they don’t, the odds of similar legislation coming into effect in other jurisdictions are high, as Colorado has made clear.

Finally, certifying an AI Management System under ISO 42001 can provide a legal defense in certain scenarios, reducing liability risk. At the same time, the preparation and auditing process itself will make the organization more resilient and responsible in its use of AI systems.


Are you interested in ISO 42001 certification for your company? Book a demo to learn more about how Scrut Automation can help.
