Across the Board: How AI Risk Affects Entire Organizations

By Katharina Koerner | Trace3 AI Governance, Risk & Compliance

As artificial intelligence reshapes industries and redefines competitive landscapes, it is essential for today's organizations to address AI-related risk proactively and holistically. A significant risk lies in failing to acknowledge that AI-related risks, as documented in resources like the MIT Risk Repository or the AI Incident Database, affect the entire organization. Overlooking this comprehensive impact can jeopardize organizational integrity and undermine strategic objectives, far beyond mere departmental concerns. 

This article is the first in a series promoting corporate responsibility by providing resources and insights into AI risk, AI governance, and their integration into AI strategy. The series will help to assess whether and how to integrate AI risk management into enterprise risk management. It will offer techniques and tools to mitigate privacy, security and development-related AI risks.  

Our series begins by demonstrating how AI risk extends beyond technical systems or projects to impact entire business functions and the organization as a whole. AI incidents can affect operational workflows, organizational reputation, financial stability, and can pose significant threats to individuals and groups, with far-reaching societal and environmental consequences. 

 Despite these far-reaching implications, some organizations still view AI governance and risk management as a secondary concern rather than an essential safeguard. This takes not only a myopic view of risk management but also misinterprets the role of AI governance. AI governance is not only the cornerstone of AI risk management but provides the necessary overview of development and use of AI within an organization.  

 Examples Demonstrating Organization-Wide Implications of AI Risk 

Recent real-world incidents highlight the critical importance of incorporating AI risk into organizational-wide risk management strategies: 

 Reputational risk, e.g., due to biased outputs from AI tools, can erode trust and damage brand equity. For example, in February 2023, Alphabet Inc. (Google) experienced a significant stock drop after its AI chatbot, Bard, provided incorrect information during a demonstration. This error led to a loss of approximately $100 billion in market value, highlighting the financial impact of AI missteps.  

Regulatory risk is growing with frameworks like the EU AI Act and General Data Protection Regulation (GDPR), the emerging patchwork of AI regulation across U.S. states, and enforcement campaigns by federal regulators such as the FTC targeting AI practices that conflict with consumer protection standards. For instance, Clearview AI's breaches of privacy regulations, due in part to its use of AI for analyzing internet-scraped photos to create an unauthorized biometric database for facial recognition, have led to multiple sanctions and multimillion-dollar fines in Europe, Australia, the US, highlighting the severe consequences of non-compliance. A lack of structured oversight and awareness around these regulations can leave organizations vulnerable to compliance issues, particularly when AI systems are deployed without a full understanding of their regulatory obligations.  

AI system failures can further realize operational risk in supply chain and manufacturing, emerging from operational dependencies on AI systems. Supply chain algorithms misinterpreting demand patterns, can disrupt operations, lead to excess or shortages of inventory, and result in lost revenue and customer dissatisfaction. The failure of Zillow Group's AI-driven iBuying program, Zillow Offers, in 2021, illustrates the profound operational risks AI failures can pose, even when personal data is not directly involved. The program’s reliance on a machine learning model to estimate property values led to a critical flaw: consistent overestimations led Zillow to overpay for homes, resulting in a staggering $569 million in write-downs. This exemplifies the inherent risks when AI is employed without sufficient consideration of the underlying market dynamics and human factors, a strategic error that ultimately forced Zillow to close its iBuying unit and lay off a quarter of its staff. 

AI security remains a major concern for companies, as most incidents stem from traditional security failures . However, novel AI-specific threats are emerging, introducing unique security challenges. Examples include: 

  • Prompt injection attacks, akin to SQL injection in their simplicity and impact, have become a notable AI-specific risk. For instance, in 2023, a student executed a prompt injection attack on Microsoft's Bing Chat, bypassing its hidden directives by instructing it to ignore prior instructions and reveal its start-up commands.  

  • Training data poisoning poses significant AI security risks with company-wide implications. In November 2023, the "Nightshade" tool enabled artists to corrupt generative AI training datasets, leading to potential model malfunctions. 

  • Additionally, adversarial attacks highlight the dangers of AI misclassification. In 2019, researchers tricked Tesla's Autopilot into accelerating from 35 to 85 mph by subtly altering a speed limit sign, demonstrating the risk of real-world modifications. 

Such attacks on AI systems can lead to organization-wide impacts such as operational disruption, reputational damage, regulatory penalties, financial loss, and the erosion of competitive advantage, as sensitive details and vulnerabilities exposed in AI systems can cascade into broader exploitation and misuse.  

AI-generated deepfakes and disinformation pose significant financial and reputational risks to companies, escalating to what Forbes describes as a billion-dollar business threat. From fake CEO announcements to AI-driven identity theft and fabricated reviews, deep fakes are sophisticated threats, especially during sensitive times like mergers or public offerings. In 2024, an employee of the British firm Arup was tricked into sending $25 million to fraudsters using a deepfake of the CFO during a video call, demonstrating the severe financial risks posed by sophisticated deepfake technology. 

An illustrative case, highlighted by both the New York Times and Bloomberg in May 2023, demonstrates how "a solitary fabricated image of smoke emanating from a building precipitated a swift, panic-driven sell-off in the stock market." 

Proactive AI Risk Management: Building Dedicated Teams for Tomorrow's Challenges 

Examples as the ones listed above, should not be regarded as mere cautionary tales but as critical indicators pointing towards the urgent need for incorporation of AI governance within the broader framework of enterprise risk management. 

Considering the complexities and pervasive nature of AI-related risk, it is imperative that organizations adopt a more holistic approach to risk management. Traditional frameworks often fall short in addressing the unique challenges presented by AI, such as algorithmic bias, misinformation and confabulations, or ethical concerns. Recent advancements in AI management frameworks, such as the NIST AI Risk Management Framework and ISO/IEC 42001:2023 for an AI Management System, underscore the limitations of conventional approaches, pointing clearly to the need for tailored AI risk management practices and nouvelle governance structures. 

While AI strategy outlines how an organization intends to leverage AI to achieve its business, AI governance acts as an indispensable pillar of AI strategy that ensures the effective oversight, ethical implementation, and regulatory compliance of AI initiatives.  

AI governance provides oversight of AI initiatives by tracking activities such as solution development, contributing risk related considerations to feasibility studies and AI adoption, and assessing and continuously monitoring AI projects across the organization. AI governance ensures that principles of responsible use are embedded throughout every stage of the AI lifecycle. It helps to prevent malinvestment and avoids duplication of efforts by ensuring that AI use cases are well-aligned across the organization. 

AI governance oversees AI initiatives by tracking AI use cases and solution development, incorporating risk assessments during AI use case discovery and prioritization, and continuously monitoring AI projects across the organization. It embeds principles of responsible use at every stage of the AI lifecycle. Additionally, AI governance can prevent malinvestment and reduce redundant efforts by aligning AI use cases organization wide. 

Moreover, there is a critical need for specialized training and a cultural shift within organizations to foster a culture that prioritizes ethical considerations and risk awareness in the development and deployment of AI solutions. This cultural evolution should aim to elevate AI governance from a peripheral to a central function, aligning it with organizational objectives and strategic goals. 

The time to proactively build dedicated teams to manage AI risks effectively is now. By taking this step, organizations not only protect their operational integrity and strategic goals but also equip themselves to ethically and efficiently utilize AI technologies in a competitive, AI-driven marketplace. 


Koerner, KatharinaKatharina Koerner is a multifaceted professional, bringing together a rich blend of skills encompassing senior management, legal acumen, and technical proficiency. Based in Silicon Valley since 2020, she has focused her career on tech policy, privacy, security, AI regulation, and the operationalization of trustworthy AI. Katharina holds a PhD in EU Law, a JD in Law, and various certifications in information security, privacy, privacy engineering, and ML. Her career includes serving as the CEO of an international education group for 5 years and 10 years in the Austrian public service. At the International Association for Privacy Professionals (IAPP), she served as a Principal Researcher - Technology, focusing on privacy engineering, technology regulation, and AI research. From there, Katharina joined Daiki, a forward-thinking seed-stage startup focused on AI enablement, where she served as the Corporate Development Manager, spearheading strategic initiatives, building partnerships, and implementing AI governance frameworks for customers and providing services in responsible AI implementation. Right before joining Trace3 in our new AI Governance & Risk team, Katharina has been the AI Governance Lead at Western Governors University (WGU), where she developed and implemented governance frameworks for AI/ML systems.

Back to Blog