Risk Grustlers / Episode #3
AI With a Pinch of Responsibility
Featuring Walter Haydock
Taking a slight departure from our regular themes of exploring the journeys of Risk Grustlers, we’re here with an on-demand podcast with the one and only, Walter Haydock, Founder and CEO of StackAware, to demystify and dig into the role of responsibility in today’s AI threat landscape.
Walter is a true trailblazer when it comes to solving for AI security. With a profound understanding of AI’s inner workings, he’s the ultimate demystifier of Language Models’ core applications. Join us to tap into his unmatched insights.
“Ensuring that you can manage your own infrastructure is really important to hammer down before you decide that you’re going to run an LLM model on your own.”
“Using AI inherently involves a degree of risk. To tread wisely, especially in terms of privacy, the smart approach would be to limit the data you collect and process
Description
In this episode, Walter gives us a crash course on all things LLM – from listing the differences between using a self-hosted LLM and a third-party LLM to explaining the top five risks to watch out for while using them.
Application developers are often overwhelmed with the bundle of resources out there, especially when working with LLM-based applications. The OWASP Top 10 and the NIST AI RMF framework, to name just a few – so what should be the key concerns?
That’s exactly what we’re solving here. Tune in to listen to the top 5 concerns that, according to Walter, should be on the top of your list when creating a tool on top of a LLM!
Last but not least, as promised, we are linking the FREE resources down below, so don’t forget to take a look and sharpen your AI security knowledge.
Highlights from the episode
- Discussing the pros and cons of using an open-source LLM Vs. third-party LLM
- Decoding the key concerns to look out for when leveraging a third-party LLM to create a tool
- Understanding key differences between direct prompt injection and indirect prompt injection
- Navigating the uncertainty of privacy regulations for LLMs in different regions