In a candid and insightful panel discussion at iluminr’s 2024 Wargame to Gameday event in Washington, DC, Steve Holden, Ph.D., SVP and Head of Single-Family Analytics, and Bob Fucito, VP and Head of Enterprise Resiliency and Crisis Management, shared their grounded and pragmatic approaches to guiding their organization through the continuous evolution of AI, risk, and resilience.
Both leaders emphasized practical strategies for harnessing AI’s potential while safeguarding the resilience needed to withstand disruption. They underscored the necessity of pairing AI’s power with a robust resilience framework, one that sustains growth through continuous learning and adaptation.
While Steve underscored the transformative power of Generative AI, Bob Fucito emphasized that innovation must account for resilience from the start. Their synergy was clear: AI may introduce new capabilities, but resilience ensures those capabilities are sustained and secured over time. Together, they made a compelling case for innovation grounded in preparedness for whatever the future holds.
With AI and Generative AI revolutionizing industries and reshaping risk management, their insights were particularly relevant to executives and teams navigating the dual challenge of pushing innovation forward while ensuring sound governance.
Key takeaways from their conversation:
1. Traversing Risk: Activating AI Capabilities Responsibly
“AI’s speed of change can be intimidating; however, resilience must be part of the conversation where it applies. It ensures that no matter how fast technology evolves, organizations can remain safe and secure.”
– Bob Fucito
As organizations unlock the potential of AI, both leaders emphasized the critical need to approach this transformation thoughtfully. AI impacts every aspect of operations, and while the opportunities are significant, so are the risks. Steve described their approach to responsible innovation as structured by three pillars: balance, transparency, and humility—each playing a critical role in navigating the complexities of AI adoption.
“How do you turn on those capabilities in a way that doesn’t get you in trouble?”
– Steve Holden
Balance
Both Bob and Steve emphasized the need to strike the right balance between speed and security in AI implementation. In a rapidly transforming operating ecosystem, organizations must leverage AI’s capabilities effectively while ensuring they do not expose themselves to unnecessary risks. This means adopting a measured approach—moving quickly enough to innovate but cautiously enough to safeguard systems and prevent vulnerabilities.
Transparency
Steve’s team holds biweekly knowledge shares led by engineering teams and has created advisory councils with key stakeholders. He also maintains a blog where he shares the organization’s latest progress. By making decisions public and incorporating diverse perspectives, organizations can foster trust and alignment across teams. Transparency not only helps mitigate risk but also empowers teams to make informed decisions as they move forward with AI initiatives.
Humility
Steve noted that what we believed to be true yesterday may no longer hold true today. This humility must inform an organization’s decision-making, particularly in the early stages of AI development. He cited Amazon’s “one-way vs. two-way doors” concept to underscore the importance of favoring flexible, reversible decisions, so organizations can backtrack if needed rather than lock themselves into irreversible paths. A ‘two-way door’ decision is reversible: if the outcome isn’t as expected, you can easily pivot. A ‘one-way door’ decision is not, which demands far greater caution in AI implementation. The focus is on staying adaptable, continuously learning, and accepting that mistakes are part of the innovation journey.
“You will inevitably be wrong—and that’s part of the process.”
– Steve Holden
Executive Insight: Embrace AI strategically—don’t just adopt it for innovation’s sake. Establish governance structures that align AI capabilities with enterprise risk management to ensure safety without stifling progress.
2. An Iterative Approach: Building and Proving Resilience Capability
Bob Fucito emphasized that organizational resilience is a dynamic, ongoing process that requires constant refinement. He pointed out that in today’s fast-paced environment, resilience must be continuously tested and adapted to stay ahead of emerging threats. This approach, he explained, involves regularly testing people, systems, processes, sites, data, and third parties to “prove” their resilience, both independently and collectively. By simulating disruptions, organizations can identify vulnerabilities before they become critical issues and ensure their systems can withstand real-world events.
Bob highlighted that it’s not enough for AI models and systems to simply function; they need to be resilient against both known risks and those that are still emerging. As AI continues to evolve and intersect with other technologies, the risks are no longer just theoretical—they are actively shifting and expanding.
“Resilience isn’t a box you check—it’s a habit. If you’re not testing and adapting constantly, you’re already behind.”
– Bob Fucito
Steve and Bob’s message was clear: resilience is an iterative cycle of adaptation, learning, and improvement. Organizations that continuously test their systems, adjust to new realities, and prepare for future risks will be better equipped to thrive in an AI-driven world.
Leadership Strategy: Evolve your resilience program alongside your AI initiatives. Regular testing, learning from failures, and refining processes are crucial for maintaining business continuity and safeguarding organizational integrity.
3. Resilience as a Service: A Federated Model for Risk and Innovation
The concept of federated governance models emerged as a central theme during the discussion, highlighting a more distributed and collaborative approach to managing AI and risk. Instead of concentrating responsibility within a single department—such as IT or risk management—federated governance spreads ownership across the entire organization. This model ensures that every team, from operations to marketing to cybersecurity, actively participates in identifying, managing, and mitigating risks.
Bob Fucito, known widely for his innovation in the field of crisis management, discussed how organizations need to embed resilience into innovation itself. “Resilience is not just a safeguard,” he explained, “it’s part of the innovation cycle. When we test and adapt continuously, we allow AI and other technologies to evolve safely, ensuring they deliver value in the long run. Resilience is not perfection, it’s progress. Just like AI, it has to evolve constantly, or it becomes obsolete.”
He emphasized how this approach embeds resilience into the organization’s DNA. By distributing responsibility, resilience is no longer just a checklist item for crisis management teams; it becomes an integral part of day-to-day operations across the business. Each team becomes responsible for recognizing risks within their specific domain and responding swiftly, ensuring that resilience is built into the fabric of the organization rather than treated as an isolated function. Bob likened this process to ‘Resilience as a Service’:
“Response capability is a service we build into every layer of the organization. Equipping teams with the tools to adapt, respond, and learn no matter what comes our way.”
– Bob Fucito
Steve echoed these sentiments, underscoring that a federated model doesn’t just bolster resilience—it also fosters agility in innovation. In an environment where AI is rapidly changing, distributing decision-making power across teams allows for more responsive, real-time innovation aligned to business outcomes. Rather than relying on a centralized, often slower, chain of command, individual teams can act quickly and decisively. This model empowers teams to be proactive in building new capabilities and encourages cross-functional collaboration, which leads to more innovative solutions to emerging challenges.
Steve outlined his approach to AI innovation and governance, structured around four key criteria that ensure responsible and effective implementation:
- Low Risk: Focus on projects that introduce low risk to the organization, ensuring that AI initiatives do not create unnecessary vulnerabilities or expose the business to excessive uncertainty.
- Unique Idea: Prioritize AI projects that bring genuinely new ideas to the table, ensuring each initiative offers a fresh perspective or solves a problem in a new, impactful way that moves the organization forward.
- Technology Stack Alignment: For successful implementation, it is critical that AI solutions are aligned with the organization’s existing tech stack. This ensures seamless integration and avoids the inefficiencies or compatibility issues that can arise from adopting technologies that don’t fit well within the current infrastructure.
- Clear Use Cases: Each initiative must directly address specific business objectives or challenges, ensuring the project’s relevance and potential for driving tangible value.
“The business is the domain expert. Now that they know this tech really well, what do they think this could help accomplish in the context of their business objectives?”
– Steve Holden
A federated governance model can enhance transparency and accountability, as it requires continuous communication and alignment across departments. This collaboration ensures that AI risk management is not siloed but instead becomes a shared responsibility, with teams regularly sharing insights, test results, and evolving risks. The federated approach creates a more resilient, responsive, and collaborative organization, better equipped to handle the complex and fast-moving challenges presented by AI and emerging risks.
Organizational Takeaway: Consider a federated governance model for risk and AI management. This approach not only decentralizes responsibility but also encourages cross-functional collaboration, ensuring that resilience is ingrained in the organization’s DNA.
4. Continuous Learning: AI is the New Frontier
Steve Holden drew a compelling parallel between the evolution of cybersecurity and the rapidly emerging world of AI. Cybersecurity wasn’t even a recognized field until the 1980s; today, AI is creating entirely new domains and industries focused on its unique challenges and opportunities. In the early days of cyber, businesses and governments alike were caught off guard as new threats—such as hacking, viruses, and data breaches—emerged in tandem with the growth of the internet. What began as a niche concern for IT departments has since ballooned into a multi-billion-dollar industry encompassing everything from network security to data protection to threat intelligence.
Similarly, AI is now sparking the rise of cottage industries designed to address its specific risks. These emerging industries will tackle a range of concerns, including algorithmic bias, ethical AI use, privacy issues, and the potential for malicious AI applications. Just as cyber experts now specialize in penetration testing, identity management, or digital forensics, AI is driving the need for specialized skills and services. We’re already seeing the creation of companies that offer AI auditing, regulatory compliance tools, and frameworks for evaluating the ethical implications of machine learning models.
This explosive growth in AI-related industries mirrors the early trajectory of cybersecurity. In its infancy, cyber was seen as a technical issue, confined to IT departments. But as cyber threats became more pervasive and impactful, businesses were forced to rethink their strategies and investments. Cybersecurity evolved into a core component of corporate risk management, requiring executive oversight and board-level attention.
Executive Imperative: Stay ahead of the AI curve by fostering a culture of continuous learning within your leadership teams. Encourage your organization to keep pace with rapidly evolving AI through training and proactive risk management.
5. Confidence in Uncertainty: The Core of Resilience
“Trust is the belief that someone will act as you hope or expect when the outcome or situation is unknown.”
– Rachel Botsman, Trust Expert
As businesses increasingly adopt AI technologies, uncertainty becomes a given, and trust becomes a crucial asset for navigating this complexity.
Bob Fucito highlighted that resilience is an ongoing, iterative process, stressing the importance of continually testing systems and processes to prove they can withstand both known and emerging risks. This resilience, he argued, is key to building trust both within an organization and externally, particularly as AI presents unpredictable challenges. Bob’s point was clear: resilient organizations are those that can confidently manage uncertainty, and in doing so, they earn trust from stakeholders, regulators, and teams.
Trust in today’s AI-driven world doesn’t come from eliminating uncertainty; it comes from confidently managing it. Trust is earned by proving, time and again, that even when facing unpredictable AI risks, the organization is prepared to pivot and learn over time. This blend of acknowledging vulnerability, providing clarity, and granting autonomy to teams across the organization creates the confidence needed to navigate uncertainty, positioning organizations as trusted leaders in the AI era.
Steve and Bob illuminated a critical point: AI without resilience can lead to vulnerabilities, and resilience without innovation can stagnate. Their organization has built its strategy on the intersection of these two forces. AI helps predict future disruptions, while resilience frameworks ensure the organization can weather those disruptions and come out stronger on the other side.
Leadership Imperative: C-suite leaders must prioritize trust-building as a core part of their AI and resilience strategies. Without a foundation of trust, even the most advanced AI systems or resilience programs will struggle to gain alignment across teams and stakeholders.
AI Resilience: Mastering One-Way vs. Two-Way Doors
“In a world where AI can change the game overnight, resilience is a necessity. Leaders who build systems to adapt quickly will not only survive but lead the charge.”
– Bob Fucito
AI is transforming the way organizations operate, but its potential must be harnessed thoughtfully and with a focus on resilience. By taking an iterative approach to risk management, adopting federated governance models, and fostering continuous learning, organizations can successfully balance AI-driven innovation with the need for sound risk mitigation. Above all, building trust in the face of uncertainty will be key to thriving in this AI-driven future.
In a world where technology moves faster than ever, are you confident your organization’s resilience can keep pace with AI innovation?
As AI continues to reshape industries, your organization’s ability to navigate uncertainty will define your competitive edge. Schedule an Executive Microsimulation today to assess your leadership team’s readiness and ensure you’re prepared to lead through the complexities of fast-paced change.