AI and Compliance for the mid-market

It has been hard to avoid hearing about artificial intelligence (AI) over the past year and a half. And the hype is somewhat justified: beyond its pure business implications, AI can accelerate both cybersecurity and compliance efforts in organizations of all types.

With that said, small and medium businesses (SMBs) cannot blindly adopt this new technology without sufficient scrutiny and caution. Deploying it securely and responsibly will require a structured and disciplined approach that addresses not only cybersecurity best practices but also existing privacy regulations and looming AI-specific requirements.

In this post, we’ll go through the top considerations for SMBs when it comes to rolling out AI-powered tools and technologies.

Cybersecurity standards

AI can be used to improve cybersecurity while also introducing potentially novel risks. Foremost among these is unintended training. Because tools like ChatGPT may train on user inputs by default, it is possible to accidentally expose sensitive information to other users.

Similarly, prompt injection attacks against Large Language Models (LLMs) have the potential to cause serious damage if not mitigated properly. 
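One practical control for the unintended-training risk above is screening prompts for sensitive data before they leave the organization. Here is a minimal sketch in Python; the regex patterns and placeholder format are illustrative assumptions, not a vetted PII detector:

```python
import re

# Illustrative patterns only -- a production screen would use a vetted
# PII-detection library with patterns tuned to the organization's data.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholders before the prompt leaves the org."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# prints: Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```

A screen like this is only a first layer; it reduces accidental disclosure but cannot catch sensitive information that does not match a known pattern.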

That is why SMBs looking to leverage AI should consider some emerging standards and resources in the space such as:

  • OWASP Top 10 for LLMs: Put together by the Open Worldwide Application Security Project (OWASP), this list identifies the ten most critical vulnerabilities in LLM applications and provides recommendations for remediating them. The accompanying security & governance checklist is another resource that security and business teams can use when developing their approach.
  • MITRE ATLAS: The MITRE Corporation is a non-profit funded by the United States government to develop techniques and technologies that solve national-level challenges. As part of its cybersecurity efforts, it developed the Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS), a structured catalog of potential AI-related risks along with a wealth of case studies.
  • U.S. CISA and UK NCSC guidance: In late 2023, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom National Cyber Security Centre (NCSC) released joint guidelines for secure AI system development. The guidelines provide best practices for designing, developing, deploying, operating, and maintaining AI applications and systems.
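The OWASP Top 10 for LLM Applications lists prompt injection as its first entry (LLM01). A naive illustration of two commonly recommended directions, keeping untrusted input in its own message role rather than concatenating it into the system prompt, and flagging obviously suspicious phrases, might look like the sketch below. The phrase list and threshold logic are illustrative assumptions; real deployments layer multiple controls (privilege separation, output filtering, human review) rather than rely on keyword matching:

```python
# Naive input guard in the spirit of OWASP LLM01 (prompt injection) guidance.
# The phrase list is an illustrative assumption, not an exhaustive signature set.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs containing known injection phrasing (case-insensitive)."""
    lowered = user_input.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

def build_messages(user_input: str) -> list[dict]:
    """Keep untrusted input in its own role instead of splicing it into
    the system prompt, so instructions and data stay separated."""
    if looks_like_injection(user_input):
        raise ValueError("Input flagged for review: possible prompt injection")
    return [
        {"role": "system", "content": "You are a customer-support assistant."},
        {"role": "user", "content": user_input},
    ]
```

Keyword matching alone is easily bypassed, which is why the frameworks below emphasize defense in depth rather than any single filter.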

Frameworks like these can be quite helpful in enumerating risks and prioritizing development efforts. With that said, SMBs generally benefit from customizing these approaches to their specific needs, use cases, and business operations.

Regulatory requirements

AI systems can create novel privacy challenges due to the volume of data they consume and process, as well as the ability of some systems to generate personal data through inference. This type of sensitive data generation can make compliance with existing frameworks and regulations challenging, demanding new approaches. We’ll look at some of the key challenges below.

  • European Union (EU) General Data Protection Regulation (GDPR): Although it has been in force for almost six years, understanding of the GDPR and its applicability continues to evolve. Complying with its requirements to track and regulate sub-processors will become increasingly challenging as AI tools proliferate through software supply chains. Similarly, tackling sensitive data generation may require techniques such as machine unlearning, an evolving approach that can potentially prevent AI models from reproducing personally identifiable information.
  • EU AI Act: Passed in early 2024, this law will greatly impact the types of tools and techniques SMBs can use when they operate in the EU or interact with its citizens. We did a deep dive into it and its implications – find it here.
  • California Consumer Privacy Act (CCPA): Much like the GDPR which inspired it, the CCPA and subsequent California Privacy Rights Act (CPRA) are also shifting the compliance landscape. As enforcement actions continue, SMBs would be well-advised to monitor developments as they relate to AI. California has also proposed rules on automated decision-making technologies which are likely to impact many businesses if and when they come into force.
  • New York City Local Law 144: Passed at the municipal level, this statute prohibits employers from using automated employment decision tools unless they conduct a bias audit and provide required notices. While implementation is still ongoing, the fact that New York remains a global financial capital means this rule is likely to have major reach.

External certifications

As organizations become increasingly aware of the potential impacts of AI, understanding and auditing its use throughout supply chains is becoming vitally important. Whether from a cybersecurity perspective or a broader operational one, both existing and new standards are addressing these concerns.

  • ISO/IEC 42001: Released at the end of 2023, this certifiable standard lays out how to develop an Artificial Intelligence Management System (AIMS). In addition to cybersecurity issues, it also touches on effective governance, explainability, and data integrity. While few companies have achieved certification yet, the standard is bound to grow in popularity as organizations seek external attestation of their AI-related practices and procedures.
  • ISO/IEC 5338: Also released in 2023, this standard is focused more on the development and lifecycle management of AI systems. For organizations developing artificial intelligence products themselves, this might be an interesting standard to look at.
  • ISO/IEC 27001: Updated in 2022, the global standard for information security is highly applicable to those using AI. Companies pursuing or maintaining certification will need to carefully consider the implications of such systems for:
    • Incident response
    • Decommissioning procedures
    • Third-party risk management 
  • ISO/IEC CD 27090 and WD 27091: Building upon the ISO 27001 standard, these documents (still under review as of early 2024) will provide specific guidance for organizations seeking to enhance their information security and privacy programs, respectively, while leveraging AI.
  • SOC 2: The “gold standard” for business-to-business security for companies operating in North America, SOC 2 does not have any AI-specific provisions as of the standard’s 2022 update. With that said, there are certainly many intersections between AI and requirements to:
    • Protect confidentiality against threats like prompt injection
    • Prevent data poisoning and corrupted model seeding
    • Manage risks across the software supply chain

Getting ahead of the curve with ResponsibleAI

With the rapid pace of technological advancement, SMBs may find it difficult to keep up with best practices and emerging regulatory requirements. To assist in these efforts and help build a comprehensive AI governance framework, Scrut Automation developed the ResponsibleAI framework.

Weaving together cybersecurity, privacy, and compliance requirements and concerns, ResponsibleAI is a flexible toolkit that lets businesses of all sizes wield AI effectively and securely. 

So if you are a growing SMB that is using or planning to use AI, consider how you will manage the requirements and considerations described above. If you are interested in having a trusted partner show you the way, Scrut Automation is here to help!

Interested in learning more about ResponsibleAI and what our governance, risk, and compliance platform can do for your business? Please contact us today.
