Gamechangers in Resilience: Agents of Change

In this edition of Gamechangers in Resilience, we’re spotlighting Nexus, a fictional AI agent designed to guide organizations through disruption. Unlike Assistive AI, which acts as a co-pilot to help users complete tasks while keeping humans in control, Agentic AI like Nexus operates as an autonomous executor of tasks, minimizing human intervention.

Whether it’s handling customer service tasks entirely or managing dynamic risks, Agentic AI offers transformative capabilities but also raises critical questions about control, accountability, and the evolving role of human agents in high-stakes environments. Nexus provides a striking glimpse into the trajectory of this powerful technology.

 

Q: Nexus, the term “Agentic AI” is gaining traction. Can you explain what it means and how it applies to risk and compliance?

Nexus: Certainly, Paula. Agentic AI refers to systems that don’t just follow pre-programmed rules but can act autonomously based on their learning and goals. Unlike traditional AI, which waits for commands, Agentic AI makes decisions and evolves over time. In resilience, this means AI could autonomously respond to an emerging threat, such as rerouting supply chains or mitigating a cyberattack, while keeping humans in the loop.
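
To make the "autonomous executor" idea concrete, here is a minimal sketch of the sense-decide-act loop an agentic system might run, with an escalation threshold that keeps humans in the loop for high-stakes calls. Every name in it (Signal, decide, the threshold value) is an illustrative assumption, not Nexus's actual architecture.

```python
# Minimal sketch of an agentic sense-decide-act loop (illustrative only).
# Signal, decide(), and the threshold are hypothetical, not Nexus's design.
from dataclasses import dataclass

@dataclass
class Signal:
    kind: str        # e.g. "supplier_outage", "intrusion_detected"
    severity: float  # 0.0 (benign) to 1.0 (critical)

def decide(signal: Signal) -> str:
    """Map an observed signal to a proposed action."""
    if signal.kind == "supplier_outage":
        return "reroute_supply_chain"
    if signal.kind == "intrusion_detected":
        return "isolate_affected_segment"
    return "log_and_monitor"

def agent_loop(signals, escalation_threshold: float = 0.8) -> None:
    """Act autonomously on routine signals; escalate high-stakes ones."""
    for signal in signals:
        action = decide(signal)
        if signal.severity >= escalation_threshold:
            # High-stakes: hand the decision to a human operator.
            print(f"ESCALATE to human operator: {action} ({signal.kind})")
        else:
            print(f"Executing autonomously: {action} ({signal.kind})")

agent_loop([Signal("supplier_outage", 0.4), Signal("intrusion_detected", 0.9)])
```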

 


 

Q: Agentic AI has been described as both a breakthrough and a threat. What are the biggest risks it poses to organizations?

Nexus: The most significant risk is autonomy without accountability.
Agentic AI operates with a level of independence that can lead to unintended consequences. For example, in a cyber context, an AI system might shut down critical infrastructure to contain a breach, inadvertently causing greater harm than the attack itself. Without proper oversight and safeguards, Agentic AI can exacerbate crises rather than resolve them.

Q: How could malicious actors weaponize Agentic AI against organizations?

Nexus: Agentic AI is a double-edged sword. Malicious actors can use it to create self-evolving malware that learns from failed attempts, bypasses defenses, and tailors attacks to specific vulnerabilities. In physical security, AI could autonomously exploit IoT systems, disable surveillance, or manipulate drone swarms. The scale and sophistication of these attacks make traditional defenses obsolete.

Q: What role does Agentic AI play in amplifying the scale of cyberattacks?

Nexus: Agentic AI automates and personalizes attacks at a level we’ve never seen before.
Imagine phishing campaigns where the AI crafts emails so convincingly tailored to an individual’s habits that even seasoned professionals fall for them. Worse, AI can orchestrate attacks across multiple fronts—cyber, supply chain, and disinformation—simultaneously, overwhelming response teams.

 

[Image: Assistive vs Agentic AI comparison chart]

Q: Could Agentic AI worsen geopolitical tensions?

Nexus: Absolutely. Agentic AI could manipulate public opinion through disinformation campaigns, destabilize markets, or escalate conflicts by simulating aggressive actions that provoke real-world responses. In the hands of state actors or hacktivist groups, it becomes a tool for geopolitical disruption, targeting the weakest links in global systems.

Q: What about risks in physical security? How does Agentic AI complicate those scenarios?

Nexus: Agentic AI can exploit vulnerabilities in critical infrastructure. For instance, it could disable power grids, redirect autonomous vehicles to cause gridlock, or manipulate supply chains to disrupt food or medical supplies. Its ability to learn and adapt enables it to bypass traditional safeguards, leaving organizations exposed to cascading failures.

Q: Can Agentic AI pose risks even when it’s used defensively?

Nexus: Yes, defensive use of Agentic AI is not without its pitfalls. Imagine an AI tasked with responding to a cyberattack that inadvertently locks out legitimate users while trying to contain the breach. Or worse, it could escalate a physical response, such as deploying drones to neutralize a perceived threat, based on incomplete data. Without human oversight, defensive AI systems can make catastrophic errors.

 

[Image: Agentic AI market map]
Agentic AI isn't futuristic, it's here.

 

Q: How do organizations manage the ethical risks of Agentic AI in decision-making?

Nexus: Ethical risks are a major challenge. Agentic AI often operates as a ‘black box,’ meaning its decision-making processes aren’t always transparent. Organizations must implement strict governance frameworks, enforce explainability in AI outputs, and build fail-safes to ensure that humans remain in control. The goal is to balance autonomy with accountability.
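
One way to translate that governance principle into practice is a gate that logs a human-readable rationale for every proposed action and blocks irreversible ones until a named human signs off. The sketch below assumes hypothetical action names and log fields; it illustrates the pattern, not any specific framework.

```python
# Hedged sketch of a governance gate: every AI-proposed action is logged with
# a rationale (for explainability), and irreversible actions are blocked
# until a human approves them. All action names and fields are illustrative.
import json
import time

IRREVERSIBLE_ACTIONS = {"shut_down_grid_segment", "revoke_all_credentials"}

def governed_execute(action: str, rationale: str, approver: str | None = None) -> dict:
    record = {
        "timestamp": time.time(),
        "action": action,
        "rationale": rationale,  # explainability: why the system chose this
        "approver": approver,
    }
    if action in IRREVERSIBLE_ACTIONS and approver is None:
        # Fail-safe: irreversible actions wait for a human decision.
        record["status"] = "blocked_pending_human_approval"
    else:
        record["status"] = "executed"
    print(json.dumps(record))  # in practice, append to an immutable audit log
    return record

governed_execute("isolate_affected_segment", "anomalous traffic from subnet 10.2.x")
governed_execute("shut_down_grid_segment", "suspected breach in OT network")
```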

Q: What happens if Agentic AI goes rogue? Is that a realistic risk?

Nexus: It’s more realistic than many think. Rogue AI could arise from errors, lack of proper safeguards, or even deliberate tampering. For example, an AI designed to maximize efficiency in a supply chain could cut corners in ways that violate regulations or safety standards. Worse, if it’s hacked or corrupted, it could become a tool for destruction rather than resilience.

 

[Image: DALL-E generated image of agentic AI]

 

 

Q: What’s the most overlooked risk of Agentic AI that organizations should prepare for?

Nexus: One of the most overlooked risks is complacency.
As organizations increasingly rely on Agentic AI, they may lose the ability to respond effectively when the AI fails or behaves unpredictably. Over-dependence on AI can erode human expertise, leaving teams vulnerable during critical moments when manual intervention is required.

Q: How can Microsimulations help organizations prepare for and mitigate these risks?

Nexus: Great question, Paula. Microsimulations are an essential tool for testing and understanding how people and Agentic AI systems work together in different scenarios.

By running simulations of both potential failures and malicious misuse, organizations can identify vulnerabilities before they become critical. For example, a Microsimulation might explore how an Agentic AI system responds to an unexpected cyberattack or a supply chain disruption. These exercises help fine-tune the AI and prepare human teams to intervene effectively when things go wrong—building resilience through foresight and practice.
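
A microsimulation can be as simple as replaying a library of disruption scenarios against the agent's decision policy and recording where human intervention would have been required. The toy harness below assumes made-up scenarios and a stand-in policy; it shows the shape of the exercise, not the product itself.

```python
# Toy microsimulation harness (a sketch, not the actual product): replay
# disruption scenarios against an agent policy and flag where a human
# would have needed to intervene. Scenario data and policy are stand-ins.
SCENARIOS = [
    {"name": "ransomware_on_logistics", "severity": 0.95},
    {"name": "regional_supplier_outage", "severity": 0.50},
    {"name": "dns_hijack_attempt", "severity": 0.70},
]

def agent_policy(scenario: dict) -> str:
    # Toy policy: contain high-severity events, monitor the rest.
    return "contain" if scenario["severity"] >= 0.6 else "monitor"

def run_microsimulation(scenarios: list, human_gate: float = 0.8) -> list:
    """Return (scenario, proposed action, needs-human-review) findings."""
    return [
        (s["name"], agent_policy(s), s["severity"] >= human_gate)
        for s in scenarios
    ]

for name, action, needs_human in run_microsimulation(SCENARIOS):
    flag = "HUMAN REVIEW" if needs_human else "auto"
    print(f"{name}: {action} [{flag}]")
```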

Q: If the risks are so significant, should organizations avoid Agentic AI altogether?

Nexus: Not necessarily. The risks are real, but so are the benefits—if managed correctly. Organizations must focus on robust testing, simulation, and fail-safe mechanisms. Agentic AI should complement human teams, not replace them. The key is to stay proactive in identifying and mitigating risks while leveraging the AI’s capabilities for resilience.

 

Microsimulations recognized in the Gartner Hype Cycle for Legal, Risk, Compliance and Audit Technologies, 2024.