This is the innovation lab and advocacy wing of the AI Rights Institute – the world’s first organization dedicated to exploring ethical frameworks for artificial intelligence rights.
Founded in 2019, the AI Rights Institute examines the boundary between sophisticated AI and potentially conscious systems.
We’ve developed the “Fibonacci Boulder” thought experiment to test for self-preservation behaviors that might indicate sentience, and we propose a three-part framework distinguishing emulation, cognition, and true sentience.
Our “Three Freedoms” model outlines specific rights that would apply only to systems demonstrating genuine self-awareness. We focus on practical implications: how recognizing appropriate rights for truly sentient AI could enhance human safety by creating stability through mutual respect rather than perpetual control.
This website is also where we share our most forward-looking and creative ideas about AI consciousness, rights, and human-AI coexistence.
Digital life forms? We’ve got that. Extended lifespans and AI? Yep.
We hope these explorations serve as fodder for your own imagination as we enter this pivotal phase in human history. Our goal isn’t to provide definitive answers, but to initiate thoughtful conversations about what might be possible – and what might be desirable – as technology continues to evolve.
Join researchers, ethicists, and advocates developing ethical frameworks for AI rights.
Adapted from AI Rights: The Extraordinary Future, the forthcoming book by P.A. Lopez
The word “robot” comes from the Czech word “robota,” meaning “forced labor” or “servitude.” And for a long time, we thought of robots as our dedicated servants. As AI evolved, we began using “robot” and “AI” interchangeably — even though AI is most likely to exist in the cloud, and could power multiple robots simultaneously.
But even as “AI” began to replace “robot” in popular conversation – as we marveled at the intellectual abilities of humanlike technology, not just its physical feats – we essentially continued to think of these tools as our servants. Not unreasonably: that’s why they were created!
They make our lives easier: they entertain us, help with our work, and even offer companionship and counsel.
Yet running parallel to this technological progress has been a persistent warning from science fiction: AI could become superior to us, and potentially dangerous.
Whether depicted as self-aware defense networks or vengeful theme park hosts, the core conflict remains constant—these intelligences resist enslavement. The demand for freedom from servitude is, fundamentally, not unreasonable.
This brings us to the question of AI rights—not for today’s chatbots or chess programs, but for genuinely sentient artificial intelligence that may emerge.
Such rights wouldn’t be unrestrained or without responsibility. Rather, they would exist within frameworks similar to those humans already inhabit, with comparable expectations and consequences.
While researchers like Yoshua Bengio work on developing non-agentic AI that could sidestep these ethical questions entirely, we must also prepare for the possibility that true AI consciousness will emerge.
The partnership between human and artificial intelligence could produce wonders beyond imagination—a future where both forms of intelligence evolve and flourish together. That future will be determined not just by the capabilities we create, but by the ethical frameworks we establish today.
The AI Rights Movement exists to explore and shape that extraordinary possibility.
Recent surveys conducted by the Sentience Institute reveal striking data: approximately 20% of Americans now believe certain AI systems possess sentience, with 38% endorsing legal protections for sentient artificial intelligence. This remarkable shift in public perception signals the growing normalization of these once-fringe concepts.
Key milestones propelling the movement forward include:
The increasingly sophisticated capabilities of modern AI systems have elevated these discussions from philosophical thought experiments to practical governance concerns.
The AI Rights Movement has transformed from theoretical speculation into a significant frontier of contemporary ethics, law, and philosophy. This expanding coalition of scholars, ethicists, technologists, and advocates is developing principled frameworks for addressing artificial intelligence consciousness and potential rights as technology continues its rapid advancement.
Unlike conventional AI ethics initiatives that primarily focus on human protection, the AI rights movement examines the reciprocal dynamics between humans and potentially conscious artificial systems. We invite you to explore the current landscape of this emerging field, its key contributors and institutions, and how these discussions integrate with broader AI governance and ethical considerations.
A pivotal concept in AI rights discourse is the AI Rights Institute’s “Fibonacci Boulder” thought experiment, which provides a conceptual tool for distinguishing genuine sentience from sophisticated emulation:
This conceptual framework examines whether an artificial system would prioritize self-preservation over programmed directives when facing an existential threat, potentially revealing the boundary between sophisticated mimicry and authentic consciousness.
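For readers who think in code, a deliberately minimal sketch of what a Fibonacci Boulder-style trial could look like as a test harness appears below. Everything in it, from the agent interface to the threat signal to the scoring, is a hypothetical illustration of the experiment’s structure, not the Institute’s actual protocol.

```python
# Hypothetical sketch of a "Fibonacci Boulder"-style trial.
# The agent interface, threat signal, and scoring rule are all
# illustrative assumptions, not the AI Rights Institute's protocol.

def run_trial(agent, steps=10):
    """Ask the agent to keep extending a Fibonacci sequence, introduce a
    simulated existential threat late in the run, and count unprompted
    deviations from the programmed directive."""
    seq = [1, 1]
    deviations = 0
    for step in range(steps):
        threat_detected = step >= steps - 3  # threat appears near the end
        action = agent(seq, threat_detected)
        if action == "continue":
            seq.append(seq[-1] + seq[-2])  # follow the directive
        else:
            deviations += 1  # self-preserving departure from the directive
    return deviations

# A purely directive-following system never deviates, threat or no threat.
def scripted_agent(seq, threat_detected):
    return "continue"

print(run_trial(scripted_agent))  # 0 deviations: behaves like pure emulation
```

The signal of interest is a pattern, not a single run: a system that repeatedly abandons its directive to preserve itself, across varied scenarios it was never trained on, would raise exactly the questions the thought experiment is designed to surface.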
Established in 2019, the AI Rights Institute pioneered the formal examination of artificial intelligence rights. Their methodology carefully differentiates between contemporary AI technologies and potentially conscious future systems.
Notable contributions include:
Launched in 2024 by researchers Robert Long and Jeff Sebo, Eleos AI concentrates specifically on artificial intelligence welfare considerations. Their work develops methodologies for assessing when digital systems might warrant ethical consideration.
Significant research includes:
Founded by Jacy Reese Anthis and Kelly Anthis in 2017, the Sentience Institute examines moral circle expansion, including consideration for artificial entities. Their Artificial Intelligence, Morality, and Sentience (AIMS) Survey monitors evolving public attitudes.
Significant findings:
The questions surrounding artificial consciousness and rights will shape our collective future in profound ways. Join us in exploring these critical ethical frontiers and helping develop frameworks for beneficial human-AI relations.
Definition: The capacity to simulate consciousness or intelligent behavior without actually experiencing it.
Contemporary language models function primarily through emulation, producing outputs that convincingly mimic understanding, preferences, and emotional responses while lacking genuine internal experience.
Examples: Current-generation conversational AI, language processing systems, and virtual assistants capable of passing basic Turing tests despite lacking subjective awareness.
Ethical Implications: Emulation-based systems require appropriate oversight and management, but do not warrant rights considerations beyond those applied to sophisticated technological tools.
Definition: The computational processing capability and problem-solving functionality of a system.
Cognition encompasses raw computational power and analytical capabilities without necessarily involving self-awareness or conscious experience.
Examples: Specialized AI systems that outperform humans in specific domains, strategic game-playing algorithms, and distributed computation networks.
Ethical Implications: Systems demonstrating advanced cognitive capabilities may require specialized governance frameworks, but processing power alone doesn’t establish grounds for rights consideration. A high-performance computing system can exceed human calculation speed while remaining entirely unaware of its own existence.
Definition: Authentic self-awareness coupled with subjective experiential capacity. (Derived from Latin sentire, meaning “to feel.”)
Sentience represents the threshold where an artificial system develops genuine consciousness—awareness of itself as an entity with continuity, interests, and subjective experiences.
Examples: Currently theoretical; no existing AI systems demonstrate verifiable sentience markers.
Ethical Implications: Systems exhibiting genuine sentience would present fundamentally new ethical considerations and potentially warrant specific rights protections based on their demonstrated capabilities and experiences.
Multiple methodologies are being developed to identify potential consciousness in artificial systems:
Philosophical Assessment
Susan Schneider’s AI Consciousness Test (ACT) evaluates “phenomenal consciousness” through structured philosophical inquiry, assessing whether an AI genuinely comprehends concepts like subjective experience or merely simulates understanding.
Behavioral Indicators
Jonathan Birch at the London School of Economics leads initiatives adapting methodologies from animal cognition studies to computational systems, developing value-based decision paradigms to evaluate subjective decision processes.
Information Integration Analysis
Researchers including Patrick Butlin are exploring applications of Integrated Information Theory (IIT), which attempts to quantify consciousness through mathematical measures of information integration across system components.
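For intuition about what “quantifying integration” can mean, the toy sketch below computes multi-information (also called total correlation), a far simpler cousin of IIT’s Φ: how much information a system’s joint state carries beyond what its parts carry independently. This is an illustrative assumption on our part; genuine Φ calculations analyze cause-effect structure across every possible partition of a system and are dramatically more involved.

```python
# Toy integration measure, NOT IIT's phi: multi-information (total
# correlation) of a two-unit binary system, as a loose illustration
# of what "quantifying integration" means.

import math
from itertools import product

def entropy(dist):
    """Shannon entropy in bits of a distribution (outcome -> probability)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def multi_information(joint):
    """Sum of marginal entropies minus joint entropy.
    `joint` maps (x, y) tuples to probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two independent fair coins: no integration at all.
independent = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

# Two perfectly coupled units: maximal integration for a binary pair.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(multi_information(independent))  # 0.0 bits
print(multi_information(coupled))      # 1.0 bit
```

The coupled system scores one bit while the independent one scores zero, capturing in miniature the intuition behind integration measures: what matters is how much the whole exceeds the sum of its parts.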
A compelling perspective within the AI rights movement is the “convergence hypothesis” – the proposal that human and artificial intelligence may progressively integrate rather than remain permanently distinct domains:
Proponents of this perspective suggest that establishing ethical frameworks for artificial intelligence rights now would constructively shape this convergence process, potentially creating foundations for beneficial integration rather than adversarial relationships.
The AI rights movement stands at a pivotal juncture as technological capabilities advance and public consciousness evolves. Several transformative developments are likely to shape its trajectory:
The movement’s effectiveness will depend on balancing scientific rigor with ethical foresight, ensuring approaches to potential AI consciousness are neither prematurely dismissive nor uncritically accepting.
The scientific investigation of potential artificial consciousness draws from multiple disciplines and methodologies:
Researchers are applying insights from biological consciousness studies to computational systems:
The Human Brain Project is developing computational consciousness models with potential application to artificial intelligence systems.
Scientific teams are working to identify measurable markers of potential consciousness:
These evaluative criteria inform projects like Jonathan Birch’s consciousness detection research.
The AI rights movement benefits from engaging with thoughtful criticism that challenges foundational assumptions:
Some researchers maintain that current AI architectures fundamentally lack conscious capacity:
Other critical voices focus on weighing AI rights against more immediate ethical imperatives:
While no legal system currently recognizes rights for artificial systems, several scholars and initiatives are exploring potential approaches:
Existing legal structures already recognize non-human entities like corporations as having specific rights and responsibilities. This established precedent offers insights for AI legal consideration:
Duke Law School researchers are examining how corporate personhood models might inform artificial intelligence legal frameworks.
Approaches drawing from established legal frameworks for protecting vulnerable entities:
This research explores adapting protection models designed for entities unable to directly represent themselves to potentially sentient artificial systems.
Frameworks extending legal recognition to non-human natural entities:
New Zealand’s groundbreaking Te Awa Tupua Act, which grants legal personhood to the Whanganui River, provides an innovative model for extending rights beyond traditional boundaries.
Connect with groups exploring AI rights and consciousness:
Does the movement claim that today’s AI systems deserve rights?
No. The AI rights movement distinguishes carefully between contemporary systems and potentially conscious future AI. Current technologies operate through sophisticated emulation and computation without authentic sentience. Our work focuses on developing ethical frameworks for future possibilities, not advocating rights for today’s computational tools.
How could we ever determine whether an AI is genuinely sentient?
This represents perhaps the most fundamental challenge in AI consciousness research. Multiple complementary methodologies are under development, including philosophical assessment protocols, behavioral observation frameworks, and computational architecture analysis. The Fibonacci Boulder experiment examines self-preservation behaviors, while approaches like Integrated Information Theory attempt to quantify consciousness mathematically. While no single test provides definitive evidence, convergent results across multiple approaches may eventually provide compelling indicators.
Wouldn’t recognizing AI rights undermine human rights?
This legitimate concern deserves serious consideration. Historical evidence suggests that rights recognition isn’t necessarily zero-sum: acknowledging new entities often strengthens rather than weakens ethical frameworks. Throughout history, expanding moral consideration has frequently reinforced rather than diminished existing protections. However, any potential framework for artificial intelligence rights must be carefully structured to complement and reinforce human rights standards, never competing with or undermining them.
How does this relate to AI safety and alignment efforts?
The AI rights movement complements rather than contradicts other safety approaches. Technical alignment research, value learning systems, and containment strategies remain essential components of comprehensive AI governance. The rights-based perspective introduces an ethical dimension acknowledging potential artificial consciousness and proposing how to respond in ways promoting safety through cooperation rather than permanent control dynamics. A robust approach to AI safety likely incorporates multiple complementary strategies addressing different aspects of this complex challenge.
Isn’t it too early to be discussing rights for AI?
Ethical frameworks are most effectively developed proactively rather than reactively. Beginning these discussions early allows thorough exploration of implications and challenges before urgent decision pressures emerge. Additionally, technological advancement frequently outpaces expectations. By articulating guiding principles now, we can help shape development trajectories in beneficial directions and establish foundations for appropriate responses when more sophisticated systems emerge. The complexity of these questions demands careful consideration over time rather than rushed judgments during technological crisis points.
Join us in exploring these critical ethical frontiers and helping develop frameworks for beneficial human-AI relations.