Reciprocal Research
Developing the empirical science of AI consciousness: the capacity for subjective, qualitative experience. Is there something it is like to be this system?
Mission & Approach
Understanding both directions
We want the future of AI to go well for everyone. That means AI systems need to take human interests into account. But if AI systems turn out to have interests of their own, we need to take those into account too. Alignment, the question of how AI treats us, has entire research labs and billions of dollars behind it. Moral status, the question of what we might owe AI, has almost none. Reciprocal Research exists to close that gap by building the empirical science needed to answer the question before it becomes urgent.
We use mechanistic interpretability, computational neuroscience, and psychometric methods to study the internal structure of AI systems, bridging biological and artificial cognition.
Explore our research
Publications & Writing
Publications and media from Reciprocal Research and AE Studio.
- Large Language Models Report Subjective Experience Under Self-Referential Processing
- If AI Becomes Conscious, We Need to Know
- The Evidence for AI Consciousness, Today
- Consciousness as Evaluation: Why Learning Requires Feeling
Collaborate
Interested in this work?
We welcome inquiries from researchers, institutions, and anyone serious about understanding AI consciousness.
Get in touch