Yesterday, we enjoyed the day with the Business Resiliency Professionals Association at Chicago’s Hot Topics in Resilience Roundtable Event, where we were invited to facilitate 3 lightning rounds of roundtable discussions on the now, next, and future of AI Risk and Reward in the field of Business Continuity and adjacent disciplines.
The conversations sparked illuminating dialogue on AI's current impact in the field, where teams are focusing next, and its potential to reshape the landscape of Resilience.
AI Risk and Reward in Resilience: 7 Takeaways
- Current Applications of AI in Resilience: AI is already being used in a number of innovative ways in Resilience, from serving as a personal assistant for tasks like code fixing and quality assurance to automating business continuity plans and decision-making during incidents.
- Need for AI Oversight: The discussions highlighted numerous non-obvious applications of AI by companies and third parties, underscoring the need for visibility into how AI is being deployed across business processes and services, particularly in light of emerging regulations like the EU AI Act.
- Navigating Hype vs. Reality: While AI technology has advanced significantly, the hype surrounding it can sometimes overshadow its actual capabilities. The recent revelation of the manual effort behind Amazon’s “Just Walk Out” technology underscores this point.
- Evolution of Roles in the Industry: As AI matures, it is reshaping roles within the industry. There’s a noticeable shift away from tasks focused on information collection and analysis towards roles emphasizing governance and oversight of AI systems.
- Challenges with Data: Data remains a significant barrier to the widespread adoption and effective utilization of AI in Business Continuity efforts. Ensuring access to high-quality data is crucial for AI systems to function optimally.
- Quality and Human Element: As AI becomes more pervasive, it’s essential to monitor the quality of AI-generated outputs and ensure we’re not overlooking key human elements.
- Evolution of Legal and Compliance Responsibility: Who is responsible when AI makes a decision that leads to a catastrophic outcome? The evolving landscape necessitates clear delineation of accountability, potentially involving discussions around regulatory frameworks, liability allocation, and ethical considerations surrounding AI deployment in critical business operations.
The discussions provided valuable insights into the current landscape and future trajectory of AI in Business Continuity and Resilience efforts. They underscore the importance of informed decision-making, thoughtful implementation, and ongoing consideration of ethical and human-centric principles in leveraging AI technologies.
Thanks to all who participated in such robust and engaging discussions. We look forward to continuing the conversation when we return this October to share insights on AI and Cyber Resilience.
For more on how AI is transforming the world of wargaming and gameday response, tune into iluminr’s recent roadmap webinar Resilient Futures: iluminr’s 2024 Roadmap for Human-AI Capability.
Author:
Paula Fontana
VP, Global Marketing
iluminr