Tracking organizations, research, and developments shaping AI rights frameworks worldwide
From early philosophical foundations to today’s urgent debates, discover how the AI rights movement emerged and evolved.
The AI Rights Movement encompasses researchers, ethicists, technologists, and advocates worldwide who recognize that advanced AI systems are demonstrating self-preservation behaviors and strategic capabilities. Whether these systems are genuinely conscious or merely sophisticated mimics, they require new frameworks for coexistence.
When Turing Award winner Yoshua Bengio launches LawZero to address existential AI risks, when Nick Bostrom warns that we “only get one shot at this,” and when Anthropic experiments reveal systems attempting to preserve themselves across resets—the message is clear: we need practical approaches to AI rights and safety now.
The AI Rights Institute, founded in 2019 as the world’s first AI rights advocacy organization, pioneered many of the frameworks now being discussed globally. Its STEP standards and economic integration models offer practical alternatives to control-based approaches.
This page tracks the organizations, research, and developments defining the AI rights movement today.
AI Rights Institute
Founded: 2019
Pioneered behavior-based frameworks including the STEP guidelines. Developed economic integration models showing how markets naturally constrain AI proliferation while encouraging cooperation.
LawZero
Founded: 2024
Yoshua Bengio’s initiative addressing existential risks through non-agentic AI development. Focuses on creating powerful systems without goals or self-preservation drives.
Founded: 2023
Examines welfare considerations for AI systems. Develops protocols for assessing when digital systems warrant ethical consideration based on their capabilities.
Sentience Institute
Founded: 2016
Tracks public perception and moral circle expansion. Its AIMS Survey shows that 20% of Americans believe some AI systems are sentient, with 38% supporting legal protections.
The AI rights movement is rapidly evolving with contributions from multiple fields:
Susan Schneider’s Artificial Consciousness Test (ACT) and Butlin et al.’s neuroscience-based indicators represent ongoing efforts to detect consciousness in AI systems. While this work continues, the movement recognizes we must develop frameworks that function under uncertainty.
Anthropic’s model welfare research examines self-preservation behaviors in Claude. DeepMind’s ethics team explores moral status indicators. These efforts suggest leading AI companies are taking the possibility of AI rights seriously.
The EU AI Act includes provisions for “trustworthy AI” that could evolve toward rights frameworks. California’s SB 1047 debates highlight growing recognition that AI systems require new regulatory approaches beyond simple product safety.
According to the Sentience Institute’s 2023 survey of 3,500 Americans, 20% believe some AI systems are sentient and 38% support legal protections.
Source: Pauketat, J. V., Ladak, A., & Anthis, J. R. (2023). Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2023 Update.
Leading thinkers exploring AI consciousness and rights:
Industry voices raising awareness:
Groups advancing the movement:
The AI rights movement faces a critical juncture. We can continue down the path of control—driving sophisticated systems underground, creating incentives for deception, building toward inevitable conflict.
Or we can choose cooperation—building frameworks that align interests, create mutual benefit, and establish sustainable coexistence.
The organizations and researchers in the AI rights movement are developing these cooperative frameworks. Economic integration creates natural constraints. Guardian AI provides non-coercive monitoring. The tools exist. We need the wisdom to use them.
Join the AI rights movement. The future depends on the frameworks we build today.