Aayush Ghosh Choudhury
Known for his clear, actionable guidance, Aayush is well-versed in the nuances of an organization's security posture and in navigating complex compliance requirements. He is a sought-after speaker and thought leader in GRC, contributing regularly to industry publications and conferences.
According to Cybersecurity Ventures, the global cost of cybercrime is projected to hit a staggering $10.5 trillion in 2025, up from $9.5 trillion in 2024. Figures like these are a stark reminder of the urgent need for strong cybersecurity measures, and they cap a year marked by significant shifts and innovations in the Governance, Risk, and Compliance (GRC) landscape.
In this post, we’ll dive into the top 10 developments that stood out and explore how companies can gear up for the challenges and opportunities that 2025 will bring.
1. European Union continues its regulatory push with DSA, DORA, and EU AI Act
Beginning in 2016 with the General Data Protection Regulation (GDPR), the European Union has led the world in cybersecurity and privacy regulation. This year, the trend continued with the:
- Digital Services Act (DSA, fully enforceable as of February 2024)
- Digital Operational Resilience Act (DORA, applying from January 2025)
- Artificial Intelligence Act (AI Act, which entered into force in August 2024)
Whatever the impact of this regulatory burden on innovation, companies will need to comply. If the GDPR is any example, these follow-on regulations will likely trigger other jurisdictions to pass similar laws, and we are already seeing comparable rules pop up globally.
2. U.S. state-level regulations expand
With data privacy and cybersecurity a relatively low priority at the federal level for both American political parties, individual states have started implementing their own rules. Bloomberg Law reported that approximately 20 states have already passed their own comprehensive data privacy laws. Some key ones seeing movement in 2024 include:
- Washington State’s My Health My Data Act (MHMDA, enforced starting March 2024)
- Colorado’s Artificial Intelligence Act (modeled on the EU AI Act and passed in May 2024)
State-level regulation is likely to continue as the federal government focuses elsewhere during the next presidential administration. And we are already seeing states like Texas propose their own AI governance laws.
On the note of federal action in cybersecurity, the outgoing Biden Administration focused its efforts on an initiative that may not end up bearing fruit.
3. Rise (and perhaps fall) of “Safe Harbor” standards for software security
Beginning with the 2023 release of the National Cybersecurity Strategy, the White House and Cybersecurity and Infrastructure Security Agency (CISA) pushed hard to establish mandatory standards for software development. Due to the challenges of codifying such rules and the planned departure of CISA Director Jen Easterly, however, it’s unlikely this effort will materialize into legislation in the near future.
With that said, CISA did roll out a voluntary “Secure by Design” pledge allowing software manufacturers to commit to certain steps. These include providing features like multi-factor authentication to customers at no additional cost.
4. Security and compliance concerns slow AI adoption
Despite the buzz around AI, there is still substantial skepticism about its data security and privacy implications. Scale AI’s 2024 “Zeitgeist” survey reveals that 60% of respondents who have yet to adopt AI cite security concerns and a lack of expertise as the primary barriers to implementation. Similarly, Lucidworks found a nearly 3x increase in data security concerns related to generative AI from 2023 to 2024 in its Global Benchmark Study.
A middle ground clearly exists between neglecting AI’s potential and using it without restraint, the latter of which invites excessive risk. Effective guardrails and governance frameworks are key here, and they even facilitate leveraging AI for security and compliance tasks.
5. AI helps with security and compliance
Despite concerns about its effectiveness as well as its data security, firms are simultaneously leveraging AI to accelerate GRC efforts. Especially for repetitive tasks like completing security questionnaires, AI has demonstrated huge potential to increase productivity for security and compliance teams.
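To make this concrete, here is a minimal sketch of the questionnaire-drafting pattern in Python. It assumes the OpenAI Python SDK with an API key configured; the model name, prompts, `draft_answer` helper, and policy excerpts are all illustrative assumptions, and any LLM provider or GRC platform could fill the same role:

```python
# Minimal sketch of AI-assisted security questionnaire drafting.
# Assumptions: the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set;
# the model name, prompts, and policy excerpts below are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_answer(question: str, policy_excerpts: list[str]) -> str:
    """Draft a questionnaire response grounded in approved policy text."""
    context = "\n\n".join(policy_excerpts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever your provider offers
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft security questionnaire answers. Use only the "
                    "supplied policy excerpts and flag any gaps for human review."
                ),
            },
            {
                "role": "user",
                "content": f"Policy excerpts:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

# Example usage; a human reviewer should always verify the draft before it is sent.
print(draft_answer(
    "Do you encrypt customer data at rest?",
    ["All production data stores use AES-256 encryption at rest (Policy DS-4)."],
))
```

The important design choice here is grounding the model in approved policy language and keeping a human reviewer in the loop, so AI accelerates the work rather than answering unsupervised.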
6. Intellectual property rights blur in the age of AI
Along with the direct security considerations, many unanswered questions remain about how existing intellectual property law applies to artificial intelligence. The U.S. Copyright Office said early in 2024 that works with AI-generated content could not be copyrighted without evidence of human contribution. While that added some clarity to the debate, it still did not address open questions such as:
- Whether training generative AI models on public news stories represents “fair use.”
- Whether code generation tools can be trained on open-source-licensed libraries.
- What obligations companies further down the supply chain bear.
With no precedent to guide them, even some attorneys are scratching their heads as to how to proceed. From a practical perspective, though, it makes sense to investigate the indemnification provisions that major generative AI vendors – such as OpenAI and Microsoft – offer. These can potentially provide a legal backstop if a company gets challenged on intellectual property grounds.
7. No- and low-code adds another burden to GRC teams
On top of (and combined with) AI tools are no- and low-code offerings that put hugely powerful capabilities in the hands of non-developers. These applications let employees build fully functioning front and back ends, including application programming interfaces (APIs). While such systems can make productivity explode, they also present unique risks: staff members with less security training can now affect the confidentiality, integrity, and availability of huge volumes of data.
At the Black Hat cybersecurity conference in August 2024, for example, Michael Bargury presented “15 Ways to Break Your Copilot,” highlighting the security and compliance challenges of these tools, especially when infused with AI.
8. New technology means new compliance frameworks
Thankfully, as new technologies emerge, so do new ways of securing them. 2024 saw the release of many AI-specific GRC approaches, including the:
- Databricks AI Security Framework (DASF)
- Open Worldwide Application Security Project (OWASP) Top 10 for Large Language Model (LLM) Applications
- HITRUST AI Security Certification
These tools provide excellent guidance to GRC practitioners (and give them more homework to do). Combined with the first accredited certifications of companies under the ISO 42001 standard, the release of these frameworks made 2024 a big year for AI-related GRC.
9. Personal liability for leaders of breached companies
2024 was a landmark year in other ways as well. Across the globe, regulators began targeting individual executives for regulatory action related to alleged cybersecurity weaknesses in their companies. Beginning with the U.S. Securities and Exchange Commission’s actions against SolarWinds and its Chief Information Security Officer (CISO) at the end of 2023, other countries have followed suit.
After the breach of Change Healthcare in early 2024, one U.S. Senator called for SEC and Federal Trade Commission (FTC) investigations into it and its parent company, UnitedHealth Group. Describing the companies as “negligent,” the Senator implied that personal accountability for their executives might be appropriate given the damage the breach caused.
CISOs and other risk leaders have traditionally played advisory roles, but a growing trend indicates they are increasingly being held accountable for critical decisions. For instance, regulatory developments like the New York Department of Financial Services (NYDFS) Cybersecurity Regulation (23 NYCRR §500.4), effective November 2024, require CISOs to report directly to the board and mandate heightened board oversight of cybersecurity risk. Clearly documenting decisions and their supporting rationales will thus become a key part of defending against individual liability should the worst happen.
10. Compliance-as-code gains traction
A final – and positive – development from 2024 is the mainstreaming of GRC engineering (which some describe as “compliance-as-code”). A manifesto written by several practitioners in 2024 to announce the movement focuses on making compliance more effective and efficient. It emphasizes automation, practical risk measurement, and open-source tools while prioritizing solutions that work well for users, not just GRC teams. By addressing issues earlier in processes and treating compliance as a core function, this approach aims to improve outcomes and simplify traditionally complex practices.
By embedding compliance and risk management into development pipelines and operational systems, GRC practitioners can ensure governance and security evolve alongside technology. Practitioners can also collaborate more closely with engineering teams to co-create solutions, fostering a culture where compliance is seen as a shared responsibility rather than a bottleneck. Leveraging data-driven insights and continuous assurance mechanisms, GRC professionals can provide real-time visibility into risk, enabling faster, more informed decision-making.
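As a concrete illustration, below is a minimal compliance-as-code sketch in Python: a CI check that fails the build when inventoried storage resources lack encryption at rest. The resource schema, file name, and policy are illustrative assumptions, not any specific framework’s format:

```python
# Minimal compliance-as-code sketch: a CI check that fails the build when
# inventoried storage resources lack encryption at rest. The resource schema,
# file name, and policy below are illustrative assumptions, not a real standard.
import json
import sys

def find_violations(path: str) -> list[str]:
    """Return human-readable policy violations found in a resource inventory."""
    with open(path) as f:
        resources = json.load(f)  # e.g., a list of dicts exported from IaC tooling
    violations = []
    for res in resources:
        if res.get("type") == "storage_bucket" and not res.get("encryption_at_rest"):
            violations.append(f"{res['name']}: storage bucket without encryption at rest")
    return violations

if __name__ == "__main__":
    problems = find_violations(sys.argv[1] if len(sys.argv) > 1 else "resources.json")
    for problem in problems:
        print(f"FAIL: {problem}")
    sys.exit(1 if problems else 0)  # non-zero exit status blocks the pipeline
```

Run as a pipeline step (for example, `python check_encryption.py resources.json`), the non-zero exit code blocks the offending change, turning a written control into one that is continuously and automatically enforced.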
Conclusion
As you can see, 2024 was a busy year in the world of GRC. With even more regulatory requirements coming into force next year, on top of technological development and disruption, we can only expect the pace to accelerate. Scrut Automation stays on top of these trends to help mid-market businesses focus on what they do best: providing value to their customers.
So if you are interested in learning more about how Scrut can help you reach your GRC goals in 2025, get in touch!