Understanding Artificial Intelligence (AI) Compliance: Examples, Legislation, and Best Practices


Given the relative novelty of Artificial Intelligence (AI), regulatory compliance legislation is still evolving, with authorities aiming to balance innovation with ethical standards and public safety. Current AI legislation serves two primary purposes: protecting user privacy and ensuring data transparency.

For business owners, executives, and those who simply use AI in their roles, it’s necessary to understand AI compliance measures to prevent legal penalties, ensure ethical use, and maintain trust with those your company serves.

In this article, we cover location-specific examples of AI compliance frameworks, consequences for non-compliance, and best practices for regulatory adherence.


EU and US Legislation


Across the European Union and the United States, lawmakers have enacted, or are advancing, multiple bills and acts that regulate the way businesses use AI.


EU Legislation and Frameworks


The primary act regulating European AI usage and deployment is the EU AI Act. Other relevant legislation includes the EU General Data Protection Regulation (EU) 2016/679 (GDPR), the revised Product Liability Directive (if adopted, replacing Directive 85/374/EEC), the General Product Safety Regulation (EU) 2023/988 (replacing Directive 2001/95/EC), and intellectual property protections under the national laws of individual EU Member States.

The EU AI Act informs the governance, compliance requirements, and risk management protocols for AI systems within the EU, ensuring that AI deployment aligns with ethical standards, user transparency, and safety regulations. More specifically, it establishes a framework for classifying AI systems by risk level, delineates responsibilities for developers and deployers of high-risk AI systems, prohibits applications deemed to pose unacceptable risk, and mandates transparency and accountability measures to protect users and society.


US Legislation and Frameworks

No single comprehensive piece of federal legislation in the U.S. regulates the development of AI or specifically restricts its use. However, some existing federal laws do address aspects of AI, although their application is limited.

Consider the following AI governance frameworks.

  • Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: This framework outlines the federal government's approach to governing AI technology, emphasizing responsible development and usage to address challenges and enhance security. It sets guiding principles and directives for federal agencies to ensure AI is developed and used safely, securely, and ethically. The order promotes innovation, equity, and civil rights, while also protecting consumers, workers, and privacy. It includes specific provisions for labeling synthetic content, managing AI in critical infrastructure, and mitigating AI-related risks.

  • The White House Blueprint for an AI Bill of Rights: This blueprint provides a set of principles aimed at protecting individuals in the context of AI use. It focuses on ensuring AI systems are safe and effective, preventing algorithmic discrimination, protecting data privacy, offering clear notice and explanation about their use, and providing human alternatives and fallback options. The blueprint serves as a guide for incorporating these protections into policy and practice, aiming to uphold civil rights, equal opportunity, and democratic values in the deployment and use of AI technologies.

While these federal frameworks serve as guidelines, there exists legislation — largely sector-specific — that addresses certain aspects of AI, including the following:

  • National Defense Authorization Act for Fiscal Year 2019: This act directed the Department of Defense to undertake various AI-related activities, including appointing a coordinator to oversee AI initiatives.

  • FAA Reauthorization Act of 2024: This act includes provisions requiring the review of AI applications in aviation.

  • National AI Initiative Act of 2020: This act focuses on expanding AI research and development and establishes the National Artificial Intelligence Initiative Office, responsible for overseeing and implementing the U.S. national AI strategy. This initiative aims to maintain U.S. leadership in AI by fostering innovation and addressing the ethical, legal, and societal implications of AI technologies.

Furthermore, the Federal Trade Commission (FTC) has issued guidance making clear that its existing authority over deceptive and unfair practices extends to AI systems, emphasizing transparency and accountability; the proposed Algorithmic Accountability Act, if enacted, would add mandatory impact assessments for automated decision systems. The Department of Commerce, through the National Institute of Standards and Technology (NIST), has released the AI Risk Management Framework to help organizations manage AI-related risks effectively. Additionally, the Department of Health and Human Services (HHS) has been developing policies under the 21st Century Cures Act to integrate AI in healthcare while safeguarding patient privacy and ensuring the safety and efficacy of AI-driven medical devices.

These efforts, coupled with state-level initiatives and sector-specific proposals such as the AV START Act for autonomous vehicles, form a growing, if fragmented, ecosystem of AI governance in the U.S.


Examples of Regulatory Breaches

The following three cases are examples of companies fined for violating regulatory requirements through the misuse of AI and improper data processing practices.

CFPB Penalizes Hello Digit for its Deceptive Financial Algorithm

In 2022, the Consumer Financial Protection Bureau (CFPB) took early action against AI misconduct by fining Hello Digit, a fintech company promoting automated savings, $2.7 million.

The penalty was issued for an algorithm that caused users to incur overdrafts and penalties. Hello Digit was found in violation of the Consumer Financial Protection Act for engaging in deceptive acts and practices. The company falsely guaranteed no overdrafts, promised reimbursements in the event of overdrafts, and misled customers by pocketing earned interest despite claiming otherwise. In addition to the fine, Hello Digit was ordered to fulfill all previously denied overdraft reimbursement requests.

EEOC Fines iTutorGroup for AI-Driven Age Discrimination

In August 2023, the Equal Employment Opportunity Commission (EEOC) settled a landmark case with iTutorGroup, which agreed to pay $365,000 for using an AI-powered recruitment tool that discriminated based on age.

This marked the first settlement against AI-driven recruitment tools in the U.S. The company violated the Age Discrimination in Employment Act of 1967 by automatically rejecting over 200 qualified applicants solely due to their age, specifically women over 55 and men over 60. As part of the settlement, iTutorGroup is prohibited from using algorithms that reject candidates over 40 or discriminate based on sex. The company must also comply with all non-discrimination laws and work with the EEOC to implement policies preventing future discrimination. 

ICO Fines TikTok £12.7 Million for Misusing Children's Data

In 2023, TikTok faced one of the largest penalties ever issued by the Information Commissioner’s Office (ICO): £12.7 million for unlawfully processing the personal data of children under 13. The social media platform violated the UK GDPR by using AI-driven profiling based on user interactions and demographics without clear and transparent disclosure.


Consequences for Non-Compliance

The specific consequences for non-compliance depend on the jurisdiction, industry, and context in which the breach takes place. For example, breaches of the EU AI Act can lead to significant penalties, including substantial fines that scale with the severity and nature of the non-compliance:

  • Engaging in prohibited AI practices, as per Article 5, can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.

  • Failure to meet the requirements for high-risk AI systems, including data and data governance obligations (Article 10) and risk management and technical documentation requirements (Articles 9 and 11), can yield fines of up to €15 million or 3% of global annual turnover.

  • Providing incorrect, incomplete, or misleading information to notified bodies or competent authorities can result in fines of up to €7.5 million or 1% of global annual turnover.

While many of the most prominent AI enforcement actions to date have taken place in Europe, businesses in the United States also face significant consequences for non-compliance with AI regulations. U.S. regulatory bodies, such as the FTC, can impose severe penalties for deceptive practices involving AI, including fines, mandatory audits, and injunctive relief.


Avoiding Non-Compliance: Best Practices

To stay on top of evolving AI regulations and mitigate potential risks, consider the following five best practices.

1. Stay Updated on Regulations and Data Governance

Regularly monitor and analyze updates to AI regulations and guidelines pertinent to your operational regions. Partner with legal experts specializing in AI, data protection, and privacy law to ensure your compliance framework remains robust and up to date.

Additionally, implement comprehensive data governance strategies to ensure data integrity, quality, and compliance with legal standards. This includes thorough data classification processes, clear data usage policies, and strictly enforced data access controls, as sketched below.
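
As an illustration, a classification-aware access check can be as simple as the following Python sketch. The tier names and the `can_access` helper are hypothetical; they would need to map onto your organization's actual data governance policy.

```python
from enum import IntEnum

class DataClassification(IntEnum):
    # Hypothetical tiers; map these onto your organization's actual policy
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3  # e.g., personal data subject to GDPR

def can_access(user_clearance: DataClassification,
               dataset_label: DataClassification) -> bool:
    """Allow access only when the user's clearance meets or exceeds
    the dataset's classification level."""
    return user_clearance >= dataset_label

# An analyst with INTERNAL clearance must not see RESTRICTED data
assert not can_access(DataClassification.INTERNAL, DataClassification.RESTRICTED)
assert can_access(DataClassification.CONFIDENTIAL, DataClassification.INTERNAL)
```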

2. Evaluate Ethical Impacts and Ensure Transparency

Embed ethical impact assessments into your AI development lifecycle to systematically evaluate potential societal impacts, biases, and ethical concerns before deployment. Utilize tools like fairness metrics, bias detection algorithms, and ethical risk assessment frameworks to scrutinize your AI models. 
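
For instance, demographic parity difference, one of the simplest fairness metrics, measures the gap in positive-prediction rates between groups. The sketch below uses toy data and a hand-rolled helper rather than a production fairness library, purely to illustrate the idea.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.
    Values near 0 suggest parity; larger values flag potential bias."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy binary predictions for two demographic groups (0 and 1)
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5, worth investigating
```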

Moreover, prioritize algorithmic transparency by developing interpretable models and leveraging techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). Document and communicate the decision-making processes of AI systems to stakeholders, ensuring that the rationale behind AI-driven decisions is clear and comprehensible. 
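
As a minimal illustration of the SHAP workflow, the sketch below explains predictions from a scikit-learn model trained on a public dataset; the model and dataset are stand-ins for whatever system you actually deploy, and a real review process would persist and visualize these attributions.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a model on a public dataset as a stand-in for a production system
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# SHAP attributes each prediction to individual feature contributions
explainer = shap.Explainer(model)
explanation = explainer(X.iloc[:10])

# Each row's contributions plus the base value sum to that row's prediction,
# giving a documented, per-decision rationale to share with stakeholders
print(explanation.values[0])       # feature contributions for the first sample
print(explanation.base_values[0])  # the model's expected output
```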

3. Implement Privacy by Design and Strengthen Security

Embrace a "privacy by design" methodology during AI system development, incorporating privacy safeguards at every stage from data collection to model deployment. Utilize privacy-preserving techniques such as differential privacy, federated learning, and homomorphic encryption to protect user data. 

Concurrently, fortify your AI systems with advanced cybersecurity measures, including secure multi-party computation, intrusion detection systems, and robust encryption protocols to defend against unauthorized access and cyber threats. Regularly conduct security audits and vulnerability assessments to ensure the resilience of your AI infrastructure.

4. Incorporate Human Oversight and Ensure Accountability

Integrate human oversight and compliance management into AI decision-making processes, defining clear roles and responsibilities for the individuals who monitor AI system outputs. Develop accountability frameworks that include audit trails, error tracking, and bias detection mechanisms to address anomalies or unintended consequences. Moreover, maintain comprehensive documentation encompassing data sources, model architectures, training processes, and decision logic to facilitate transparency and traceability.
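
A minimal audit-trail sketch might look like the following. The `log_decision` helper, field names, and file-based storage are hypothetical; a real deployment would more likely write to an append-only, access-controlled store.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, reviewer=None):
    """Append one audit record per AI decision to a JSON-lines log.
    Hashing the inputs supports traceability without duplicating raw data."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None flags decisions awaiting sign-off
    }
    with open("ai_audit_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Hypothetical model name and inputs, purely for illustration
log_decision("credit-model-v2", {"income": 52000, "age": 41}, "approved",
             reviewer="j.doe")
```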

5. Continuously Monitor and Improve AI Systems

Establish continuous monitoring frameworks to systematically track and evaluate AI system performance over time. Deploy monitoring tools and techniques such as real-time analytics, anomaly detection algorithms, and performance dashboards to identify and address deviations promptly. Train employees in AI ethics, compliance protocols, and technical aspects of AI system maintenance to foster an environment of ongoing vigilance and responsibility. 
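
One common monitoring check is the population stability index (PSI), which compares the live distribution of model scores against a training-time baseline. The sketch below is illustrative, using synthetic data and the conventional rule-of-thumb thresholds.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI compares a live score distribution against a training baseline.
    Rules of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.10, 10_000)  # scores at validation time
live = rng.normal(0.6, 0.12, 10_000)      # scores observed in production
print(population_stability_index(baseline, live))  # > 0.25 would trigger review
```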

Conduct periodic reviews and updates of your AI compliance program to incorporate lessons learned, address emerging regulatory changes, and integrate advancements in AI ethics and governance.


Trace3: Ensuring AI Regulatory Compliance 

Trace3 is at the forefront of AI compliance, providing businesses with the tools and expertise to navigate the evolving regulatory landscape. If you have questions or would like to learn more, contact our AI Center of Excellence at centerofai@trace3.com.

If you’d like to learn more about AI use cases specific to your company, check out our AI white paper “AI Across Industries: Key Use Cases In Every Vertical.”

