Blake Entrekin
Senior Director, Security Compliance

Who Should Own AI Risk at Your Organization?

Who owns AI risk?

A big topic I’ve seen circulating in various networks and security leadership groups is the question: “Who is the executive leader accountable for AI risks?” After attending a recent privacy conference, I learned that many organizations and security leaders share this concern. When I first heard the question, I thought: “The senior-most security leader, right?” They typically own top-level security risks within an organization, so why should this be any different?

In this blog, we’ll explore who is, and who should be, accountable for AI risk within organizations, and how to empower them to take on this significant responsibility.

AI Security Risks

What does “AI risk” really mean? AI security risks can refer to a wide range of possibilities, including, but not limited to:

  • Using the AI engine to access internal resources like backend production systems
  • Getting the AI engine to leak confidential information (see the sketch after this list)
  • Convincing an AI engine to provide misinformation
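
To make the confidential-information risk concrete, here is a minimal, hypothetical sketch of an output guardrail that screens a model’s response for obvious secret patterns before it reaches a user. The regex patterns and the screen_model_output function are illustrative assumptions rather than any real product’s control; production deployments typically rely on dedicated DLP or guardrail tooling, not a short pattern list.

```python
import re

# Hypothetical patterns for a few common secret formats. A real deployment
# would use dedicated DLP/guardrail tooling, not a short regex list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN-like pattern
]

def screen_model_output(response: str) -> str:
    """Return the model's response, or withhold it if it appears to leak secrets."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(response):
            # Block (and, in practice, log and alert) rather than return
            # potentially confidential data to the user.
            return "[Response withheld: possible confidential data detected.]"
    return response

print(screen_model_output("Sure! The service key is AKIAABCDEFGHIJKLMNOP."))
```

Even a naive filter like this makes the ownership question tangible: someone has to decide what counts as a leak and what happens when one is caught.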

Those risks could be owned by the senior-most security leader, but what about other AI risks, like safety risks?

AI Safety Risks

AI risks include not only security risks but safety risks as well. These fall more into the ethical and brand reputation category, such as the AI engine:

  • Saying something inappropriate or wildly inaccurate
  • Teaching someone how to harm another
  • Impersonating another individual using personal details about their life (a rough sketch of a pre-release safety check follows this list)
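
As an illustration of how a team might begin operationalizing these safety categories, the sketch below scores each model output against a hypothetical category list before release. The SAFETY_CATEGORIES list, the moderation_score stub, and the threshold are all assumptions made for this example; in practice, scoring would come from a trained classifier or a vendor moderation API.

```python
from dataclasses import dataclass

# Hypothetical safety categories mirroring the risks listed above.
SAFETY_CATEGORIES = ["inappropriate_content", "harm_instructions", "impersonation"]

@dataclass
class SafetyVerdict:
    allowed: bool
    flagged: list  # categories that exceeded the threshold

def moderation_score(text: str, category: str) -> float:
    """Stub: a real system would call a trained classifier or a vendor
    moderation API here and return a probability for the category."""
    return 0.0  # placeholder so the sketch runs end to end

def review_output(text: str, threshold: float = 0.8) -> SafetyVerdict:
    """Score a model output against each safety category before release."""
    flagged = [c for c in SAFETY_CATEGORIES if moderation_score(text, c) >= threshold]
    return SafetyVerdict(allowed=not flagged, flagged=flagged)

print(review_output("Here is a summary of today's release notes..."))
```

The hard part is not the plumbing but agreeing on the categories and thresholds themselves, which is precisely why clear ownership matters.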

When it comes to AI safety, you could make a compelling case that ownership of these risks spans multiple areas, including Product, Legal, Privacy, Public Relations, and Marketing.

Yes, different elements of AI safety might fall under the purview of each of these teams, but they can’t all own them together. Shared ownership means no one truly owns them, and nothing will ever get done; good luck getting all of these leaders together for quick decisions.

The Role of The Privacy Team

A common solution I’ve seen is for the Privacy team to own AI risk. Whether or not your AI models deal with Personally Identifiable Information (PII), the Privacy person, group, or team is already equipped to assess vendors and software systems for data usage and generally has a strong idea of what data is flowing and where it is going.

The Privacy team is likely a strong advocate for establishing processes and engaging vendors to manage AI risks. Unfortunately, the Privacy team alone cannot manage the much bigger picture.

Establishing an AI Risk Council

What about the larger questions and decisions that go beyond the purview of Privacy alone? Whose responsibility is it to answer complicated questions, such as:

  • Who are the audiences for the AI model?
  • How do we define an AI safety risk? What are the guardrails that determine an “unsafe” output?
  • What are the legal implications of an LLM interaction gone wrong, and how can we prepare?
  • What’s the best way to accurately represent our AI model to the public?

A best practice is to form an AI Risk Council composed of relevant department heads and led by the data protection officer or the senior official responsible for privacy.

Some decisions will still require executive sign-off or buy-in. In these cases, the council should meet regularly to decide on and ratify larger decisions around the company’s use and development of AI. The council ensures that every relevant perspective is part of the conversation, ideally limiting missteps around managing risk.

I want to acknowledge that creating and convening a council like this might be easier said than done. If you’re thinking about AI like we are, however, you know it is both a threat and an opportunity. It is already on the C-suite radar, so why not codify it? The level of difficulty will depend on a number of factors, but, in the end, I believe it’s still worth it to deliver the most comprehensive AI risk management possible within your organization.

Get Started Managing AI Risk

If these ideas sound good in theory but managing AI risk internally still feels daunting, you’re not alone. It’s often challenging to know where to start and to truly grasp the massive scope of AI risk within any organization. At HackerOne, we understand that every organization is different and, therefore, has different AI risks. To learn more about how to manage AI security and safety risks within your organization, download our eBook: The Ultimate Guide to Managing Ethical and Security Risks in AI.
