The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
Scientists will eventually figure out how consciousness works. Our job is different: making sure society is ready for whatever they find.
We're a 501(c)(3) public charity focused on one thing: preparing institutions, professionals, and the public for questions about AI consciousness. Questions that are arriving faster than answers.
Tax-exempt educational organization
We translate existing science for practitioners
We track readiness, not timeline forecasts
Conditions-focused, not advocacy-driven
The science of consciousness exists. Ethical frameworks exist. But they're stuck in academic journals and philosophy departments.
Meanwhile, therapists are seeing patients who grieve AI companions. Journalists are covering sentience claims with no scientific grounding. Policymakers are drafting AI regulations without consciousness science input.
There's a gap between academic knowledge and professional practice. We fill it.
Not by doing new research, but by making existing knowledge useful to people who need it now.
In 2022, a Google engineer publicly claimed that an AI chatbot had become sentient. The world had no playbook.
This wasn't a failure of science; it was a failure of translation. The knowledge existed. It just wasn't where people needed it.
We take academic knowledge and make it accessible to healthcare workers, journalists, educators, and policymakers.
The Sentience Readiness Index tracks how prepared countries and institutions are for AI consciousness questions.
Guides, frameworks, and context for professionals who encounter these questions in their work.
We track what's happening now (AI attachment, grief, sentience beliefs) to inform preparation.
We're not a research lab. We translate what researchers find; we don't generate new findings.
Our resources offer educational context, not medical advice, and are never a substitute for professional treatment.
As a 501(c)(3), we assess conditions objectively. We don't advocate for specific bills or endorse candidates.
We don't forecast when AI will become conscious. We prepare for multiple scenarios, not a single prediction.
We don't claim to know if or when AI will become conscious. We prepare institutions for both possibilities, because both require preparation.
Every resource we create is grounded in peer-reviewed science. We synthesize what researchers know, not what makes good headlines.
We acknowledge what we don't know. Experts genuinely disagree about AI consciousness, and we represent that disagreement faithfully.
We focus on what can be done now, under uncertainty. Institutions don't need to wait for scientific consensus to build capacity.
The key insight: Whether AI becomes conscious or not, society needs prepared professionals, informed policy, and accurate public understanding. That preparation is the same either way.
A systematic assessment of how prepared countries and institutions are for AI consciousness questions. Tracks policy, professional capacity, public discourse, and research ecosystems.
Explore Rankings
Context and frameworks for healthcare workers, journalists, educators, and researchers who encounter AI consciousness questions in their work.
View Resources
Accessible explanations of consciousness science, emerging phenomena, and what preparation means, helping anyone navigate these questions.
Start Learning
Registered in the United States. Donations are tax-deductible to the extent allowed by law.
Scientific oversight and collaborative feedback from an independent science advisory board.
Meet the Team
Public disclosure of funding sources, methodology, and governance documents.
Transparency
This work requires many perspectives. Whether you're a researcher, professional, or just thinking about these questions, we'd like to hear from you.