Spain's Groundbreaking AI Legislation

17.03.25 08:47 PM

Navigating the Future with Ethical AI Governance

The Spanish government has taken a significant step towards shaping the future of Artificial Intelligence with the recent approval of the draft law for an ethical, inclusive, and beneficial use of AI. This landmark legislation aims to adapt Spanish law to the European Union's AI regulation, which is already in force, establishing a regulatory framework that protects individuals while simultaneously fostering innovation.


In a press conference following the Council of Ministers, Óscar López, the Minister for Digital Transformation and the Civil Service, emphasized the dual nature of AI as a powerful tool with the potential for immense good and significant harm. He highlighted its capacity to aid in medical research and disaster prevention, while also acknowledging its risks in spreading misinformation and undermining democratic processes. This new legal framework underscores the government's commitment to ensuring the responsible development and deployment of AI technologies in Spain. 


The draft law is now set to undergo expedited parliamentary procedures before its anticipated final approval and enactment. This urgency reflects the government's proactive stance in aligning with European standards and addressing the rapidly evolving landscape of AI.


Key Pillars of the New AI Governance Framework

The overarching goal of this legislative effort is to guarantee that the development, marketing, and utilization of AI systems within Spain adhere to principles of ethics, inclusivity, and benefit to individuals. To achieve this, the framework incorporates several key elements:

  • Alignment with EU Regulation: A central tenet of the Spanish law is its seamless integration with the European Union's AI regulation, ensuring a harmonized legal environment for AI across member states. This alignment also aims to prevent the risks that AI technologies can pose to individuals.
  • Prohibition of Harmful Practices: The law explicitly prohibits certain AI practices deemed inherently harmful. These prohibitions, which came into effect at the EU level on February 2, 2025, and will be enforceable in Spain from August 2, 2025, include:
    • Employing subliminal techniques to manipulate individuals' decisions without their explicit consent, leading to significant harm such as addiction, gender-based violence, or the undermining of personal autonomy. For instance, a chatbot subtly encouraging users with gambling problems to engage with online gambling platforms would fall under this prohibition.
    • Exploiting vulnerabilities linked to age, disability, or socioeconomic status to substantially alter behavior in ways that cause or could cause considerable harm. An example cited is an AI-powered children's toy prompting children to undertake challenges that could result in severe physical injury.
    • The biometric categorization of individuals based on sensitive attributes like race, political affiliation, religious beliefs, or sexual orientation. A facial recognition system deducing political or sexual orientation from social media photos exemplifies this prohibited practice.
    • Social scoring of individuals or groups based on their social conduct or personal traits as a basis for decisions such as denying access to subsidies or loans.
    • Evaluating the risk of an individual committing a crime by analyzing personal data such as family history, educational background, or place of residence, except under legally defined exceptions.
    • Inferring emotions in workplace or educational settings as a method of evaluation for promotion or dismissal, unless justified by medical or safety considerations.
  • Categorization and Regulation of High-Risk Systems: The legislation identifies specific categories of AI systems deemed to be of high risk. These include AI used as safety components in industrial products, toys, medical devices, and transportation. It also encompasses systems operating in critical areas such as biometrics, critical infrastructure, education, employment, essential private and public services, law enforcement, migration, asylum, border control, judicial administration, and democratic processes. These high-risk systems will be subject to a set of mandatory obligations, including risk management, human oversight, technical documentation, data governance, record-keeping, transparency, and quality management systems.
  • Support for Innovation through Sandboxes: Recognizing the importance of fostering AI development, Spain has proactively established a framework for AI sandboxes – controlled testing environments. This initiative, with a call for participants launched in December of the previous year, predates the August 2026 deadline mandated by the European regulation for member states to establish such environments. These sandboxes will allow providers to test and validate innovative AI systems for a limited period before market release, in collaboration with the competent authorities. The insights gained from these pilot programs will inform the development of technical guidance for complying with the requirements for high-risk AI systems.


Understanding the Penalties for Non-Compliance

A critical aspect of the new legislation is the establishment of a robust sanctioning regime to ensure adherence to its provisions. Penalties are graded based on the nature and severity of the violation, with distinctions made between prohibited practices and non-compliance related to high-risk AI systems.


Sanctions for Prohibited AI Practices

  • Violations of the prohibited AI practices will incur fines ranging from 7.5 million euros to 35 million euros, or 2% to 7% of the offender's total global turnover in the preceding financial year, whichever is the higher amount.
  • For small and medium-sized enterprises (SMEs), the applicable fine will be the lower of these two amounts.
  • In addition to monetary penalties, authorities may also mandate the adaptation of the non-compliant AI system to meet regulatory requirements or prohibit its commercialization altogether.


Sanctions for Violations Related to High-Risk AI Systems

The legislation outlines different levels of infractions related to high-risk AI systems, each with corresponding penalties:

  • Very Serious Infractions: These are the most severe violations and include:
    • Failure to report a serious incident caused by a high-risk AI system, such as a fatality, damage to critical infrastructure, or environmental harm.
    • Non-compliance with orders issued by a market surveillance authority.
    • Penalties for very serious infractions range from 7.5 million euros to 15 million euros, or 2% to 3% of the offender's total global turnover in the preceding financial year.
  • Serious Infractions: Examples of serious infractions include:
    • Failure to implement human oversight in a biometric AI system used for workplace attendance monitoring.
    • Lack of a quality management system for AI-powered robots performing industrial inspection and maintenance.
    • Failure to clearly and distinguishably label AI-generated content (deepfakes) upon the first interaction. This includes images, audio, or video depicting real or non-existent individuals saying or doing things they never did or being in places they never were.
    • The penalties for serious infractions range from 500,000 euros to 7.5 million euros, or 1% to 2% of the offender's total global turnover.
  • Light Infractions: A light infraction is exemplified by:
    • Failure to include the CE marking on a high-risk AI system, its packaging, or accompanying documentation to indicate conformity with the AI Regulation.
    • Specific monetary penalties for light infractions have not yet been publicly detailed.
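
The tiered brackets above can be captured in a small lookup. This is an illustrative sketch only: the tier names and figures come from the article, but the data structure is ours, and we assume the same higher-of rule for the turnover-based cap that applies to prohibited practices, which the article does not state explicitly for these tiers.

```python
# Fine brackets for high-risk-system infractions as described in the article.
# "light" infractions are omitted because their amounts are not yet detailed.
HIGH_RISK_FINE_BRACKETS = {
    "very_serious": {"fixed_eur": (7_500_000.0, 15_000_000.0), "turnover_pct": (0.02, 0.03)},
    "serious":      {"fixed_eur": (500_000.0, 7_500_000.0),    "turnover_pct": (0.01, 0.02)},
}

def max_fine_eur(tier: str, turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the turnover cap
    (an assumption mirroring the prohibited-practices rule)."""
    bracket = HIGH_RISK_FINE_BRACKETS[tier]
    return max(bracket["fixed_eur"][1], bracket["turnover_pct"][1] * turnover_eur)
```

Under these assumptions, a "serious" infraction by a firm with 1 billion euros in turnover would be capped at 20 million euros (2% of turnover), while the same firm would face a 30 million euro cap for a "very serious" infraction.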


Oversight and Enforcement

The responsibility for overseeing and enforcing the AI regulations will be distributed among several existing and newly established authorities, depending on the specific type of AI system and the sector in which it is deployed. These authorities include:

  • The Spanish Agency for Data Protection (AEPD), particularly for biometric systems and border management.
  • The General Council of the Judiciary (CGPJ) for AI systems within the justice system.
  • The Central Electoral Board (JEC) for AI systems affecting democratic processes.
  • The Spanish Agency for the Supervision of Artificial Intelligence (AESIA) will serve as the primary supervisory body for other AI systems.
  • Existing sector-specific regulators such as the Bank of Spain (for creditworthiness assessment systems), the Directorate-General for Insurance (for insurance systems), and the National Securities Market Commission (CNMV) (for capital markets systems) will also play a role in overseeing AI within their respective domains.


Looking Ahead

The approval of this draft law marks a crucial step in Spain's commitment to harnessing the potential of AI responsibly. By aligning with European regulations and establishing clear guidelines and penalties, the government aims to create an environment where AI innovation can thrive while safeguarding ethical principles and protecting individuals from potential harms. The expedited parliamentary process indicates the urgency and importance placed on this legislation as Spain navigates the transformative power of artificial intelligence.


Harold Lucero