
Main Insight

The second part of our Global Red Lines for AI series examines emerging examples from multilateral bodies, regional and national governments, and corporate and civil society actors who are beginning to draw red lines around AI.

Part 2: Are There Red Lines for AI in Practice Already?

July 3, 2025

This article is the second part of a three-part explainer series on Global Red Lines for AI. You can find the series overview here, part 1 here, and part 3 here.

Red lines are not a novel idea. They’re already being used in AI governance, even if not always labeled as such. We see them reflected in national and regional regulations, multilateral initiatives, industry commitments, and global campaigns led by scientific and civil society leaders.

In Part 1 of this series, we explained what red lines are and how they apply to AI governance. Here in Part 2, we turn to concrete cases of how red lines are being applied today.

What red lines currently govern AI?

Red lines are being explored and advanced by a broad range of stakeholders in AI governance. While “red lines” are commonly discussed in the context of international soft law, they also manifest as legally binding prohibitions within local, national, and supranational jurisdictions. 

For example, the EU AI Act explicitly bans certain uses of AI that are deemed unacceptable. These prohibited systems include those that manipulate users through subliminal techniques, exploit vulnerable groups, conduct real-time remote biometric identification in publicly accessible spaces for law enforcement, infer emotions in workplaces or educational institutions, or categorize individuals based on biometric data. As the most comprehensive legal framework for AI to date, the EU AI Act sets clear red lines by defining specific cases where AI must not be developed or deployed. Its accompanying Code of Practice reinforces this through Commitment II.I on “unacceptable systemic risk,” requiring signatories to assess risks against “Systemic Risk Acceptance Criteria” and refrain from development or market entry if the risks are deemed unacceptable.

In the United States, several states have enacted legislation prohibiting specific harmful uses of AI. Texas, for instance, recently enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which prohibits behavioral manipulation, discrimination, the creation or distribution of child pornography or unlawful deepfakes, and the infringement of constitutional rights.

There is also a growing movement in China to draft a national-level AI law that would categorize AI applications as high-risk or low-risk, with high-risk systems subject to stricter regulation. The AI Safety Governance Framework 1.0, published in September 2024 by the National Cybersecurity Technical Standards Committee (TC 260), flags specific high-risk categories, referencing autonomous replication and improvement, power seeking, assisting weapon development, and autonomous cyberattacks.

Red lines for AI are also being advanced in Brazil’s comprehensive AI bill (Projeto de Lei No. 2338/2023). Article 13 on “Excessive Risk” includes clear prohibitions on AI systems that exploit vulnerabilities of specific groups or use subliminal techniques, alongside systems that instigate or induce behavior that harms individuals’ health, safety, or fundamental rights. The Article also explicitly prohibits the use of AI for social scoring by public authorities, the deployment of AI in autonomous weapon systems, and the wide-scale use of biometrics for mass surveillance. As of early July 2025, the Bill is in a dedicated commission in Congress, having already been approved by the Senate.

At the international level, red lines are being introduced mainly through multilateral fora. The Council of Europe’s Framework Convention on AI, the world’s first binding international agreement on AI, establishes baseline prohibitions rooted in human rights. 

The AI Seoul Summit produced a ministerial statement signed by 27 countries and the EU, committing to identify risk thresholds for frontier AI systems that must not be crossed without effective mitigation. It also led to 20 tech organizations signing the voluntary Frontier AI Safety Commitments. Notably, under Outcome 1.II, these organizations agreed to “set out thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable.”

Similarly, the G7 Hiroshima AI Process, supported by the OECD, reinforces the need for specific prohibitions, stating: “Organizations should not develop or deploy advanced AI systems in a way that undermines democratic values, are particularly harmful to individuals or communities, facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights, and are thus not acceptable.”

Finally, the scientific and civil society communities are also demanding clear boundaries. The IDAIS Beijing Consensus Statement, signed by leading international AI scientists, calls for red lines on self-replicating systems, power-seeking behavior, deception, cyberattacks, and AI-assisted weapon development. 

Where can we find examples of AI red lines?

The table below catalogs the emergence of AI red lines in concrete policies within three contexts: regulatory, multilateral, and scientific and civil society. We highlight the scope, status, and notable considerations of each example. This is not an exhaustive list, but an illustrative overview of the types of policies taking shape in different settings.

Emerging “AI Red Line” Policies in Regulatory, Multilateral, and Scientific and Civil Contexts
Regulatory: Regional, national, and local

EU AI Act (2024) and its General Purpose AI Code of Practice
- Key red lines: Prohibitions on AI systems:
  - Deploying subliminal, manipulative, or deceptive techniques
  - Exploiting vulnerabilities
  - Biometric categorization systems
  - Social scoring
  - Assessing the risk of an individual committing criminal offenses
  - Compiling facial recognition databases
  - Inferring emotions in workplaces or educational institutions
  - ‘Real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement
  The EU Code of Practice additionally prohibits general-purpose AI models that pose unacceptable systemic risks, as assessed against the “Systemic Risk Acceptance Criteria.”
- Scope and status: Legally binding for EU member states
- Considerations: First comprehensive AI regulation. Exceptions and nuances exist, particularly concerning law enforcement, national security, and research and development (R&D).

Take It Down Act (2025)
- Key red lines: Prohibitions on the nonconsensual online publication of intimate visual depictions of individuals, including both authentic images and AI-generated deepfakes
- Scope and status: Legally binding at the U.S. federal level
- Considerations: Requires online platforms to remove such content within 48 hours of receiving a valid notice, with a “safe harbor” provision

TRAIGA, Texas Responsible Artificial Intelligence Governance Act (2025)
- Key red lines: Prohibitions on AI systems that:
  - Intentionally encourage self-harm, harm to others, or criminal activity
  - Discriminate against protected classes under state or federal law
  - Produce or distribute child pornography or unlawful sexually explicit deepfake content
  - Infringe upon a person’s federal constitutional rights
  - Are used for social scoring or biometric identification of specific individuals without informed consent
- Scope and status: Legally binding in Texas
- Considerations: Intent is crucial, making deliberate actions the key factor in determining violations. Some prohibitions apply specifically to government entities; U.S. federal law can override state law.

U.S. municipal bans on facial recognition technologies (e.g., San Francisco, Oakland, Boston)
- Key red lines: Prohibitions on government use of facial recognition technologies
- Scope and status: Legally binding ordinances at the U.S. municipal level
- Considerations: Driven by bottom-up opposition from local citizens and civil society organizations (CSOs)

Brazil’s AI bill (not yet enacted)
- Key red lines: Proposed prohibitions on systems that manipulate or exploit vulnerabilities to cause harm, assess individuals for crime risk prediction, facilitate child exploitation material, enable social scoring by public authorities, constitute autonomous weapons, or use real-time biometric identification in public spaces (with limited law enforcement exceptions)
- Scope and status: Legally binding in Brazil, if enacted
- Considerations: Still moving through the Brazilian legislative process. As of July 2, 2025, it has been approved by the Senate and is in a dedicated commission in Congress.

Multilateral

Council of Europe Framework Convention on AI (2024)
- Key red lines: Moratorium on AI incompatible with human rights, with a strong emphasis on the protection of democracy and the rule of law
- Scope and status: First legally binding international AI treaty, signed by over 50 countries
- Considerations: Open to non-European countries. Lacks signatories from key global actors (e.g., China). Limited applicability in the private sector.

Seoul Summit Ministerial Statement and Frontier AI Safety Commitments (2024)
- Key red lines: Agreement to establish risk thresholds which, unless adequately mitigated, would be deemed intolerable
- Scope and status: Ministerial statement signed by 27 countries and the EU; Safety Commitments signed by 20 technology organizations
- Considerations: Lacks signatories from key nations (e.g., China). The AI Action Summit did not provide any follow-up to these commitments.

G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, supported by the OECD (2023)
- Key red lines: Calls for advanced AI systems not to:
  - Undermine democratic values
  - Facilitate terrorism or criminal misuse
  - Pose substantial risks to safety, security, and human rights
- Scope and status: G7 member countries
- Considerations: The OECD launched a voluntary reporting framework to encourage transparency and accountability among organizations developing advanced AI systems; 20 organizations have participated in the reporting process so far.

UNESCO Recommendation on the Ethics of Artificial Intelligence (2021)
- Key red lines: Specific prohibitions on the use of AI for:
  - Social scoring
  - Mass surveillance
- Scope and status: 194 UNESCO member states
- Considerations: UNESCO launched a Readiness Assessment Methodology (RAM) to support member states in implementing the Recommendation. Some signatories have been criticized for engaging in banned practices.

Scientific and civil society

IDAIS Beijing Consensus Statement (2024)
- Key red lines: Consensus on the need for international red lines, including:
  - Autonomous replication or improvement
  - Power seeking
  - Assisting weapon development
  - Cyberattacks
  - Deception
- Scope and status: International AI scientists
- Considerations: Not yet translated into binding political commitments. Emphasizes that responsibility should lie with developers to demonstrate that red lines will not be crossed, for example through rigorous empirical testing, quantitative assurances, or formal mathematical proofs.
Table 1: Emerging “AI red line” policies in regulatory, multilateral, and scientific and civil contexts

The foundation is there, so how do we establish global red lines that can truly hold?

This growing ecosystem of red lines demonstrates that the world is no longer debating whether boundaries are needed, but rather which boundaries to draw and how to enforce them. The challenge now is to move from regulatory, multilateral, and civil efforts toward a coherent, globally coordinated framework on AI red lines.

In Part 3, the final part of the series, we examine different pathways to build global red lines for AI. We conclude by outlining three key considerations for advancing red lines globally, including building a coalition of the willing, engaging the diplomatic community to pave implementation pathways, and designing blueprints for effective monitoring and verification.
