Catalyzing AI Innovation: 3 Learnings from Wargame to Gameday London

At iluminr’s Wargame to Gameday in London, we took a fresh look at security, compliance, risk and resilience.

Often pegged as the “brakes” on innovation, these functions are becoming powerful accelerators—when you’ve got the right mindset. We explored how security, compliance, risk and resilience teams can shift from being reactive to being proactive, steering new tech forward while keeping pitfalls in check.

Here are some of our favorite takeaways from the day’s conversations.


Microsimulation – Fairplay: AI Bias in an Important Service

1. Microsimulating AI Bias: Who’s Screening the Screeners?

Using AI in hiring can feel like a game-changer—until a third party’s tech decisions land your organization in hot water. Workday’s recent bias lawsuit is a clear reminder that when you rely on third-party AI, you’re not just buying convenience; you’re inheriting their risks. In this case, it’s not only Workday under scrutiny but also every company using their software. The message? Third-party AI requires close oversight, because the vendor’s compliance and reputational risks become yours.

At Wargame to Gameday London, we ran a Microsimulation to demonstrate how AI bias can slip into hiring and promotion decisions, revealing the ripple effects of unmonitored algorithms. The key takeaway was clear: if you’re bringing in AI from an external provider, it’s essential to understand its inner workings and the potential risks it may introduce.

Here are three questions to keep your AI-enhanced business processes in check:

  1. How is this technology being used in your business process? Is it thoughtfully woven into your strategy and overall process design? What impact is it driving? Do you have proper redundancies and fallback strategies in place?
  2. How is the model being trained? AI is only as unbiased as the data it learns from. Get clarity on how the model is trained to ensure it aligns with your standards and reduces the chance of hidden biases creeping into your decision-making.
  3. What new risks are you introducing by integrating this technology? Third-party or proprietary, relying on AI means absorbing new vulnerabilities. Assess how the technology could impact your compliance standing and reputation, so you’re not caught off guard.

Asking these questions ensures you’re managing the risks that come with third-party AI—keeping it a powerful ally, without compromising your organization’s integrity.
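
To make the second and third questions concrete: one widely used screening heuristic for hiring tools is the EEOC “four-fifths rule,” which compares each group’s selection rate against the most-selected group’s and treats ratios below 0.8 as a signal to investigate (not as proof of bias). Below is a minimal sketch in Python, assuming you can export the vendor tool’s pass/fail decisions alongside consented, self-reported group labels; the field names and numbers here are hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_report(decisions):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 trip the 'four-fifths' heuristic and warrant review."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: {"rate": round(r, 3), "ratio": round(r / top, 3), "review": r / top < 0.8}
            for g, r in rates.items()}

# Toy, made-up numbers: 40/100 selected vs. 25/100 selected.
sample = ([("group_x", True)] * 40 + [("group_x", False)] * 60
          + [("group_y", True)] * 25 + [("group_y", False)] * 75)
print(adverse_impact_report(sample))  # group_y's ratio is 0.625 -> flagged for review
```

Run on a schedule against the vendor’s outputs, a check like this turns “trust the provider” into a monitored control.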


Photo credit: VisualEvolution


2. Working with the Grain: Finding Flow, Not Friction

In many organizations, security, compliance, risk and resilience teams can be perceived as restrictive, the proverbial “fun police” that interrupt everyday operations with a rigid set of rules and mandates. Yet, while rules are essential, mandates alone rarely drive true engagement—especially if they feel imposed rather than integrated. Instead, resilience that truly sticks is about finding flow, not friction. At the event, we explored the importance of weaving risk and resilience practices seamlessly into an organization’s cultural fabric, making them as much a part of the daily routine as the coffee machine or the morning check-in.

Building resilience that’s “baked in” means recognizing that people naturally gravitate toward familiar, intuitive processes rather than forced steps. In practice, that calls for designing practices that fit so comfortably within the organization’s culture that they’re hardly noticeable. So, how can organizations foster this type of resilience?

  1. Start with what’s already working: Instead of implementing entirely new procedures, observe and understand the natural habits and workflows that already drive your team. Maybe your team is used to a specific type of daily check-in or a familiar meeting rhythm. Building on these existing patterns means that risk practices become an extension of what people are already doing, rather than a disruptive change.
  2. Design for real-world intuition: When new resilience practices are needed, they should feel intuitive. Consider emergency exit signs: they’re useful only when they align with how people naturally think and act in a crisis (and the arrow points in the same direction as the person running). If everyone in your office always takes a specific hallway, placing an exit sign in that direction will guide them naturally. Similarly, in a digital sense, resilience should feel as seamless as the workflows people already follow—automated alerts that integrate into existing systems or quick action steps that align with habitual routines (see the sketch below).
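
To illustrate the “automated alerts that integrate into existing systems” idea, here is a minimal sketch that posts a resilience prompt into a chat channel the team already watches, via a generic incoming webhook. The URL and message shape are placeholders, not a specific product’s API.

```python
import json
import urllib.request

# Placeholder: an incoming-webhook URL from whatever chat tool your team already uses.
WEBHOOK_URL = "https://chat.example.com/hooks/resilience-channel"

def post_alert(text: str) -> None:
    """Drop a short, actionable alert into the channel people already read."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget here; add retries/timeouts in production

post_alert("Heads-up: failover drill at 14:00. Reply 'ack' in this thread; no new tools needed.")
```

The design choice is the point: the alert rides an existing habit (checking the channel) rather than introducing a new portal people have to remember to log into.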

By embedding resilience practices into the familiar flow of work, you make them instinctual rather than obligatory, creating a culture where resilience isn’t an add-on but a natural part of how the organization operates. The ultimate goal is to empower people to respond without second-guessing because their risk response is as much a part of their role as the tasks on their to-do list.

Microsimulation – Evolved: Agentic AI


3. Agentic AI: When Hackers Give Algorithms a License to Improvise

Agentic AI introduces a new level of complexity: technology that doesn’t just follow commands but adapts, learns, and evolves based on real-world interactions. While this adaptability can be powerful, it also opens the door to alarming possibilities in malicious applications.

Picture agentic AI systems in the hands of cyber attackers. These systems could learn from defensive patterns, tweaking their tactics with each encounter to burrow deeper into networks. They might refine phishing tactics to exploit human and AI vulnerabilities alike, evade detection by adjusting stealth techniques, or evolve intelligent malware that mutates to counteract security protocols. With this adaptability, threats become more sophisticated, potentially bypassing defenses that weren’t designed to counter adaptive AI.

In a controlled Microsimulation, we explored how an agentic AI system could modify its behavior over time, optimizing its responses based on what it found effective—a capability that, in the wrong hands, could significantly amplify threats.
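
Stripped of any real attack detail, that adaptive loop resembles a textbook multi-armed bandit: try a tactic, observe whether it worked, and shift future choices toward what succeeds. The sketch below is deliberately abstract (epsilon-greedy selection over placeholder “tactics” against a simulated environment) and is our framing of the behavior, not the simulation’s actual code.

```python
import random

TACTICS = ["tactic_a", "tactic_b", "tactic_c"]  # abstract stand-ins, not real techniques

counts = {t: 0 for t in TACTICS}
rewards = {t: 0.0 for t in TACTICS}

def simulated_outcome(tactic: str) -> float:
    """Toy environment: each tactic has a hidden success rate the agent must discover."""
    hidden = {"tactic_a": 0.2, "tactic_b": 0.5, "tactic_c": 0.8}
    return 1.0 if random.random() < hidden[tactic] else 0.0

def choose(epsilon: float = 0.1) -> str:
    # Mostly exploit the best-known tactic; occasionally explore alternatives.
    if random.random() < epsilon or not any(counts.values()):
        return random.choice(TACTICS)
    return max(TACTICS, key=lambda t: rewards[t] / max(counts[t], 1))

for _ in range(1000):
    tactic = choose()
    counts[tactic] += 1
    rewards[tactic] += simulated_outcome(tactic)

# After enough feedback, choices concentrate on whatever works best.
print({t: round(rewards[t] / max(counts[t], 1), 2) for t in TACTICS})
```

The implication for defenders: given enough iterations, an adaptive agent concentrates on whatever your controls handle worst, which is why static playbooks age quickly against it.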

Even the best AI systems can fall victim to social engineering tactics, making them susceptible to deception and manipulation by bad actors. Imagine an AI that unwittingly lets its guard down, falling for cleverly crafted scams designed to exploit its adaptive nature.

Key Concerns for Resilience:

  • Anticipate adaptive threats: Resilience planning must account for the unpredictable nature of agentic AI threats, demanding defenses that can evolve alongside adversaries.
  • Prepare for deception: Just as AI can be used to defend, it can also be deceived. Consider worst-case scenarios where AI systems are manipulated through social engineering tactics, highlighting the need for adaptive and layered security strategies.

Agentic AI redefines cyber risk: the same adaptability that makes it valuable also makes it exploitable, reshaping how resilience must be approached in a rapidly changing threat landscape.


Stepping into the Driver’s Seat

Our Wargame to Gameday discussions made one thing clear: security, compliance, risk and resilience aren’t the “fun police” anymore; they’re the strategic force that makes big moves possible. When these functions become proactive, they can shape not just what we protect against, but what we create. With that mindset shift, organizations can use resilience as a springboard for innovation—no matter how fast the world is moving.

Curious about how your team would handle an AI-driven threat? Ready to put your defenses to the test? Contact us to run an AI microsimulation tailored for your organization.

