
The European Union finds itself at a decisive juncture, navigating the complex dual mandate of leveraging the transformative potential of Artificial Intelligence (AI) while upholding robust consumer protections. This delicate balance is exemplified by two landmark legislative initiatives: the 2024 Product Liability Directive (PLD) and the EU AI Act. Together, these regulatory frameworks are set to redefine the contours of the rapidly advancing Fintech sector, where AI serves as the engine of innovation. Their far-reaching implications will not only reshape the sector’s technological landscape but also establish new benchmarks for accountability, trust, and consumer safeguards across the industry.

 

1. Product Liability Directive 2024 (PLD)

The PLD signifies a landmark overhaul of the EU’s consumer protection framework, modernizing it to address the complexities of digital and AI-driven products and services.

Key Features:

  • Expanded Scope: The directive now explicitly includes software, digital content, intangible assets, and, in certain cases, services as "products." This reflects the realities of a rapidly digitizing economy where traditional boundaries between goods and services are increasingly blurred.
  • Stricter Liability: It reinforces strict liability for producers, ensuring consumers can claim damages caused by defective products without proving negligence.
  • Broader Damage Coverage: Recognizes a wider range of compensable harms, including personal injuries, psychological harm, property damage, and data loss or corruption.
  • Consumer-Centric Protections: Simplified claims processes and improved access to product information enhance consumer protections, particularly in cases involving complex AI systems.

Implications for Fintech:

The PLD’s broadened scope directly affects the Fintech sector, where AI powers critical functions such as algorithmic trading, credit scoring, and fraud detection.

  • Algorithmic Trading Risks: A malfunctioning AI trading algorithm could cause substantial financial losses, placing liability on the companies that develop and deploy it.
  • Bias in Credit Scoring Models: AI systems that systematically disadvantage certain groups may expose companies to liability for discriminatory outcomes; a simple pre-deployment bias check is sketched after this list.
  • Cybersecurity Vulnerabilities: AI-powered systems with weak security measures could lead to data breaches or financial losses, holding companies accountable for damages caused by these failures.
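
The kind of pre-deployment testing that can surface credit-scoring bias is well established. Purely as a minimal, hypothetical sketch (the group labels, data, and 0.8 threshold are illustrative assumptions, and the "four-fifths" rule is a common rule of thumb rather than an EU legal standard), the snippet below compares approval rates produced by a credit-scoring model across groups:

```python
from collections import defaultdict

# Hypothetical records: (protected_group, approved) pairs produced by a
# credit-scoring model on a validation set. Group labels are illustrative.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Approval rate per group: approvals / total decisions for that group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

rates = approval_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)
# The 0.8 threshold is an illustrative screening heuristic, not a legal test.
if ratio < 0.8:
    print(f"Potential disparate impact: ratio {ratio:.2f} is below 0.8 - investigate the model")
```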

From the Old to the New: The PLD moves beyond the narrow focus of its predecessor, which primarily addressed tangible goods, to encompass the unique risks posed by digital and AI technologies. It extends liability across the supply chain and introduces provisions addressing cybersecurity, software updates, and the complexities of AI systems.

 

2. The EU AI Act

The EU AI Act introduces a risk-based regulatory framework designed to address the challenges and opportunities posed by AI systems. Its primary goal is to create an environment where innovation can thrive while ensuring these technologies are safe, ethical, and transparent.

Core Principles:

  • Risk Management Framework: AI systems are categorized by risk levels, with high-risk systems subject to stricter regulatory requirements. Many Fintech applications, such as those used for automated decision-making in loans, fraud detection, and trading, may fall into this high-risk category depending on how they are classified.
  • Transparency and Explainability: Companies must ensure AI systems are explainable, particularly when their decisions significantly affect individuals or organizations, such as in loan approvals or investment advice.
  • Data Quality and Ethics: The Act emphasizes the need for high-quality, unbiased, and lawfully sourced data to train AI models. Ethical considerations underpin the entire framework, ensuring fairness and minimizing risks to individuals and society.

Impact on Fintech: The Act sets a clear framework for Fintech companies using AI, encouraging compliance with standards that ensure reliability, safety, and ethical practices. Fintech applications classified as high-risk, such as certain fraud detection or AML (anti-money laundering) systems, must meet rigorous requirements, including human oversight and continuous monitoring.
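
The Act does not prescribe how oversight and monitoring are implemented in code. As a loose, hypothetical illustration only (every name, threshold, and field below is an assumption made for the example), the sketch routes borderline automated loan decisions to a human reviewer and writes each decision, with plain-language reason codes, to an audit log:

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decision_audit")

@dataclass
class LoanDecision:
    applicant_id: str
    score: float            # model output in [0, 1]; higher means lower credit risk
    approved: bool
    reasons: list           # plain-language reason codes shown to the applicant
    reviewed_by_human: bool
    timestamp: str

def human_review(applicant_id: str, score: float, reasons: list) -> bool:
    # Placeholder: in a real system this would open a case for a credit officer
    # and wait for a documented human decision.
    return False

def decide(applicant_id: str, score: float, reasons: list) -> LoanDecision:
    """Approve or deny automatically outside the review band; escalate inside it."""
    low, high = 0.4, 0.6                      # illustrative uncertainty band
    needs_review = low <= score <= high
    approved = human_review(applicant_id, score, reasons) if needs_review else score > high
    decision = LoanDecision(
        applicant_id=applicant_id,
        score=score,
        approved=approved,
        reasons=reasons,
        reviewed_by_human=needs_review,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.info(json.dumps(asdict(decision)))  # kept as an auditable record
    return decision

decide("A-1042", score=0.55, reasons=["high debt-to-income ratio", "short credit history"])
```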

 

3. Complementary and Contradictory Dynamics Between the PLD and the EU AI Act

While the PLD and AI Act share the common objective of ensuring safe and ethical AI use, their intersection presents both synergies and tensions.

Complementary Aspects:

  • Shared Objectives: Both frameworks aim to build consumer trust by addressing risks associated with AI technologies. The PLD focuses on liability for harm caused, while the AI Act emphasizes proactive risk management to prevent harm.
  • Comprehensive Approach: Together, they create a regulatory framework that addresses both reactive (liability after harm occurs) and proactive (risk prevention) aspects of AI safety.

Potential Conflicts:

  • Overlapping Scope: Certain areas, such as AI safety and reliability, fall under both frameworks, potentially creating confusion about which compliance requirements apply.
  • Innovation vs. Liability: The PLD’s emphasis on strict liability may discourage innovation in high-stakes AI applications, conflicting with the AI Act’s intent to promote responsible innovation.
  • Interpretational Challenges: Determining whether an AI system is defective under the PLD or compliant with the AI Act’s standards could result in legal ambiguities, particularly in complex cases involving advanced AI models.

 

Examples of Complementary and Conflicting Scenarios

Algorithmic Trading Systems:

  • PLD Perspective: Holds companies liable for financial losses caused by malfunctioning AI-powered trading algorithms.
  • AI Act Perspective: Requires rigorous risk assessments, explainability, and human oversight to prevent such malfunctions.

Conflict: The PLD addresses liability for past harm, while the AI Act focuses on preventing future risks. This divergence may create uncertainty for companies balancing innovation with compliance.
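
Read together, the two perspectives point toward preventive guardrails built around the algorithm rather than reliance on compensation after a failure. The sketch below is a hypothetical pre-trade risk gate, not a reference implementation; the limits, names, and halt logic are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class RiskGate:
    """Hypothetical pre-trade control wrapped around an AI trading strategy."""
    max_order_notional: float = 100_000.0   # per-order size cap
    max_daily_loss: float = 50_000.0        # cumulative loss cap before blocking
    max_breaches: int = 3                   # limit breaches allowed before trading halts
    daily_pnl: float = 0.0
    breaches: int = 0
    halted: bool = False

    def record_fill(self, pnl: float) -> None:
        """Update running profit and loss as executions come back."""
        self.daily_pnl += pnl

    def check_order(self, notional: float) -> bool:
        """Return True only if the order may be sent to the market."""
        if self.halted:
            return False
        if notional > self.max_order_notional or -self.daily_pnl > self.max_daily_loss:
            self.breaches += 1
            if self.breaches >= self.max_breaches:
                self.halted = True           # kill switch: a human must re-enable trading
            return False
        return True

gate = RiskGate()
for notional in (50_000, 250_000, 250_000, 250_000, 10_000):
    print(notional, "sent" if gate.check_order(notional) else "blocked")
```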

Anti-Money Laundering (AML) Systems:

  • PLD Perspective: Imposes liability for damages caused by failures in AI-based AML systems, such as undetected fraudulent transactions.
  • AI Act Perspective: Emphasizes the need for robust, unbiased, and explainable systems that prevent harm without discriminating against legitimate users.

Conflict: While the PLD focuses on addressing harm post-occurrence, the AI Act prioritizes proactive measures to prevent harm, potentially leading to differing priorities for companies in developing these systems.
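
The concern that screening should not burden legitimate users unevenly can be monitored with simple error-rate comparisons. As a minimal sketch using assumed, illustrative data (the group labels and outcomes are not real), the snippet below compares false-positive rates of a transaction-flagging model across customer groups:

```python
from collections import defaultdict

# Hypothetical screening outcomes: (customer_group, flagged_by_model, actually_illicit).
outcomes = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ("group_b", True, True),
]

def false_positive_rates(records):
    """Share of legitimate transactions wrongly flagged, per customer group."""
    legitimate, wrongly_flagged = defaultdict(int), defaultdict(int)
    for group, flagged, illicit in records:
        if not illicit:
            legitimate[group] += 1
            wrongly_flagged[group] += int(flagged)
    return {g: wrongly_flagged[g] / legitimate[g] for g in legitimate}

print(false_positive_rates(outcomes))
# e.g. {'group_a': 0.33..., 'group_b': 0.66...}: legitimate group_b customers are
# flagged twice as often, the kind of disparity the AI Act perspective targets.
```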

 

Conclusion

The European Union’s dual legislative efforts, the Product Liability Directive (PLD) 2024 and the EU AI Act, represent a groundbreaking yet intricate regulatory framework aimed at fostering safe and ethical AI systems. While these instruments collectively create a robust mechanism to address the multifaceted challenges posed by AI, their interplay also unveils areas of tension and uncertainty.

By combining the PLD’s emphasis on liability and consumer protection with the EU AI Act’s proactive focus on risk management and ethical compliance, the EU seeks to balance innovation with accountability. However, navigating the potential overlaps, contradictions, and interpretational complexities will require further legal clarity and careful implementation. Striking this balance is pivotal not only to safeguard consumer interests but also to ensure that the Fintech sector and broader AI industry continue to thrive in a competitive and secure environment.

Eris Law Advokatbyrå AB is here to guide and support companies in navigating the complexities of the PLD 2024 and EU AI Act. For more in-depth information and research on this topic, please contact us.

Sources:

  1. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf
  2. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32024L2853
  3. https://eur-lex.europa.eu/eli/dir/2024/2853/oj/eng
  4. https://www.taylorwessing.com/en/insights-and-events/insights/2025/01/di-new-product-liability-directive
  5. https://blog.aaysanalytics.com/post/eu-ai-act
