From chasing cybercriminals to leading AI security strategy, Jason Rebholz has seen the evolution of digital threats from the front lines. A former CISO, incident responder, and now co-founder of Evoke Security, Jason brings a rare combination of deep technical insight and pragmatic leadership to one of the most pressing challenges of our time: securing the age of AI. In this Gamechangers in Resilience conversation, we explore what it means to build trust in autonomous systems, the hidden risks of agentic workflows, and the “boring” security controls that might just save us all.
Q: How did you get your start in security?
In high school, I started speed-solving Rubik’s Cubes, which sparked a love for puzzles and pattern matching. At the same time, I was learning computer programming, and a single conversation with my teacher set me on my path. When I told him I didn’t see programming as a career for me, he encouraged me to look at computer networking. As I researched what that was, I stumbled into network security and was instantly drawn to the concept of hacking into networks. It was a massive puzzle that I had to solve.
What followed was a series of fortunate events, good timing, and hard work. I attended college for one of the first degree programs focused on computer security and accidentally fell into incident response, the ultimate puzzle, as Mandiant’s first hire straight out of college. That became rocket fuel for my career as I built a passion for investigating cybersecurity incidents.
Q: You’ve said AI security is still treated like a PR risk. What would it take to change that mindset?
The current risks with AI can appear low-level at first glance: a chatbot hallucinating something embarrassing, or some small-scale data exposure. It’s nothing a company couldn’t quickly recover from. But that’s how ransomware started too, with small-scale interruptions to a single system. I lived through that escalation and watched ransom demands grow from thousands of dollars to millions.
As with ransomware, it wasn’t until the really bad incidents happened, like Colonial Pipeline, that people started to notice. We’re on a similar path now because the promise of AI is so alluring.
Q: What’s a security fundamental that becomes more important in an AI-native environment?
Starting my career in incident response gave me a very practical view of cybersecurity. There are academic risks that sound scary but, in practice, pose little real danger. That experience gave me the ability to look at any environment and plot out the most likely ways attackers will gain access, move from system to system, and ultimately steal data or encrypt your systems.
With AI, new digital highways are being built in networks to let AI agents operate. This brings a new layer of attack paths that will lead to business impacts on the same level as ransomware. Organizations need to step back, threat model how they’re deploying AI, and really understand where a rogue AI action could lead to a material business impact. I’ve watched engineers do this firsthand because they have the tightest grasp on the technology and can see where things could quickly go off the rails.
Q: What do you tell founders who want to “move fast” on AI – but haven’t defined what “breaking things” really means in a trust context?
Build with intent. I see too many companies with a solution chasing a problem. AI is not the answer to every problem, which is why so many companies are struggling to extract value from AI. They overengineer a solution to something that a very basic approach would have solved.
Q: What’s your nightmare AI incident that hasn’t happened yet, but will seem totally obvious in hindsight?
I see a future where AI agents outnumber humans and those agents have greater autonomy and access than their human counterparts. These agents, which I see as the next operating system, will be the next target for attackers because of the access and outcomes they will offer. Just as agents will make things easier for employees, they will make things easier for attackers too.
Agents are one rogue action away from critical business systems going haywire.
Q: If agentic AI becomes the new SaaS sprawl, what’s the shadow risk you think companies will miss until it’s too late?
Employees can give AI agents the same access and permissions they have, essentially creating digital clones of themselves. This expands the attack surface in a way that can be difficult to comprehend. It’s no longer just about securing your employees’ accounts. It now has to include securing your employees’ agents, which becomes exponentially harder if you don’t know what AI agents or tools are operating in your environment.
Q: What’s a “boring” control or behavior that will save a company in the next era of autonomous systems?
One of the most talked-about controls today is also one of the hardest to implement: least privilege. It’s the control that, when missing, lets a small issue become a massive one. Agentic workflows and AI agents will create a rat’s nest of permissions. It’s not just what your agent has access to; it’s understanding the agent’s social network and what those agents have access to. Instead of asking its neighbor for a cup of sugar, an agent will ask for access to a file containing sensitive information, circumventing the access controls you put in place.
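To make that least-privilege point concrete, here is a minimal sketch of a default-deny, allow-list check on agent-to-agent resource requests. The `AgentPolicy` and `PolicyEngine` names, the policy shape, and the example resources are illustrative assumptions, not any specific framework’s API:

```python
# Minimal sketch: enforcing least privilege on agent-to-agent resource requests.
# Names and policy structure are illustrative, not a real framework's API.

from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Explicit allow-list of resources a single agent may read."""
    agent_id: str
    allowed_resources: set[str] = field(default_factory=set)


class PolicyEngine:
    def __init__(self) -> None:
        self._policies: dict[str, AgentPolicy] = {}

    def register(self, policy: AgentPolicy) -> None:
        self._policies[policy.agent_id] = policy

    def can_access(self, agent_id: str, resource: str) -> bool:
        # Default-deny: an unregistered agent, or a resource not on the
        # allow-list, is refused.
        policy = self._policies.get(agent_id)
        return policy is not None and resource in policy.allowed_resources

    def handle_delegated_request(self, requester_id: str, resource: str) -> str:
        # Key point: authorization is evaluated against the *requesting*
        # agent's own policy, not against the policy of the agent that
        # happens to hold the data. This is what stops one agent from
        # "borrowing" a neighbor's access.
        if not self.can_access(requester_id, resource):
            return f"DENY: {requester_id} has no grant for {resource}"
        return f"ALLOW: {requester_id} may read {resource}"


# Example: the HR agent can read payroll data; the marketing agent cannot,
# even if it asks the HR agent to fetch the file on its behalf.
engine = PolicyEngine()
engine.register(AgentPolicy("hr-agent", {"payroll.csv"}))
engine.register(AgentPolicy("marketing-agent", {"campaign-metrics.csv"}))

print(engine.handle_delegated_request("hr-agent", "payroll.csv"))         # ALLOW
print(engine.handle_delegated_request("marketing-agent", "payroll.csv"))  # DENY
```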