The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
Complete documentation of how we measure societal readiness for artificial sentience. Published in the interest of transparency and reproducibility.
Purpose: The SRI measures how ready societies are to navigate the possibility of artificial sentience. It does not assess whether AI sentience is likely, imminent, or desirable. Rather, it evaluates whether societal conditions support informed, adaptive responses if and when such questions become practically relevant.
Organizational Note: The Harder Problem Project is a 501(c)(3) educational organization. This index assesses conditions; it does not advocate for or against specific legislation.
"How well-positioned is this jurisdiction to recognize, evaluate, and respond to potential artificial sentience in an informed, adaptive manner?"
Having the institutional capacity, policy flexibility, professional resources, and public understanding necessary to navigate novel questions, regardless of how those questions are ultimately answered.
The current state of laws, institutions, discourse, resources, and adaptive mechanisms, not the merit of any proposed changes.
The capacity for subjective experience. We use this term without taking a position on which systems (if any) currently possess it or will possess it in the future.
The SRI assesses six categories, each scored 0-100. The overall score is a weighted average of the six category scores.
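The aggregation rule above can be sketched in a few lines. This is an illustrative sketch only: the category names follow the descriptions in this document, but the weights shown are placeholders (the published weights are not reproduced here).

```python
# Illustrative sketch of SRI aggregation: six categories, each scored
# 0-100, combined as a weighted average. The equal weights below are
# PLACEHOLDERS, not the published weighting.

CATEGORY_WEIGHTS = {
    "Legal and Policy Environment": 1.0,
    "Institutional Engagement": 1.0,
    "Research Environment": 1.0,
    "Professional Readiness": 1.0,
    "Public Discourse Quality": 1.0,
    "Adaptive Capacity": 1.0,
}

def overall_sri(category_scores: dict[str, float]) -> float:
    """Weighted average of category scores, each on a 0-100 scale."""
    for name, score in category_scores.items():
        if not 0 <= score <= 100:
            raise ValueError(f"{name}: score {score} outside 0-100")
    total_weight = sum(CATEGORY_WEIGHTS[name] for name in category_scores)
    weighted_sum = sum(score * CATEGORY_WEIGHTS[name]
                       for name, score in category_scores.items())
    return weighted_sum / total_weight
```

With equal weights this reduces to a simple mean; non-uniform weights would let one dimension (say, Adaptive Capacity) count for more of the overall score.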
Legal and policy frameworks that allow for open inquiry into and potential recognition of artificial sentience.
Government bodies, academic institutions, and professional organizations actively engaging with AI consciousness questions.
Freedom and capacity to conduct research relevant to AI consciousness, machine sentience, and related questions.
Preparation of healthcare, legal, media, and education professionals to navigate AI consciousness questions.
Quality, informedness, and maturity of public conversation about AI consciousness and sentience.
Ability of legal, policy, and institutional systems to update and adapt as understanding evolves.
Well Prepared
Moderately Prepared
Partially Prepared
Minimally Prepared
Unprepared
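The five tiers above map from the overall 0-100 score. The exact cut-offs are not stated in this document, so the equal 20-point bands in this sketch are an assumption made purely for illustration.

```python
# Hypothetical mapping from overall SRI score to readiness tier.
# The band boundaries are NOT specified in the methodology text;
# equal 20-point bands are assumed here for illustration only.

def readiness_tier(score: float) -> str:
    """Return the readiness tier label for an overall score in 0-100."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in the range 0-100")
    bands = [
        (80, "Well Prepared"),
        (60, "Moderately Prepared"),
        (40, "Partially Prepared"),
        (20, "Minimally Prepared"),
        (0,  "Unprepared"),
    ]
    for floor, label in bands:
        if score >= floor:
            return label
```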
Each category contains specific indicators with detailed scoring rubrics.
The degree to which existing legal and policy frameworks allow for open inquiry into, and potential recognition of, artificial sentience. This is assessed without judging the merit of any specific proposed legislation.
Do existing legal definitions of persons, entities, property, or rights allow for potential future expansion or clarification?
Are there existing policy frameworks, study commissions, or official processes for addressing AI consciousness questions?
Do regulatory bodies have the flexibility and mandate to address novel questions about AI capabilities and status?
Have legal or regulatory measures been enacted that foreclose inquiry into or recognition of AI sentience?
Important: This indicator assesses the current state of enacted measures—what is currently law or regulation. It does not assess pending legislation or take positions on proposed bills.
The degree to which government bodies, academic institutions, professional organizations, and other institutions are actively engaging with questions related to AI consciousness.
Have government bodies—legislative, executive, or advisory—substantively addressed AI consciousness or sentience questions?
Are academic institutions—universities, research centers, scholarly bodies—actively engaging with these questions?
Have relevant professional organizations (medical, legal, technical, ethical) addressed AI consciousness questions?
The freedom and capacity to conduct research relevant to AI consciousness, machine sentience, and related questions.
Are researchers free to study AI consciousness, machine sentience, and related topics without legal, institutional, or funding restrictions?
Does the jurisdiction have active research capacity (researchers, institutions, funding) relevant to these questions?
The preparation of key professional communities to navigate questions and situations related to AI consciousness.
Are healthcare professionals equipped with awareness and resources to navigate AI-related presentations or questions?
Are legal professionals equipped to navigate novel questions about AI status, rights, or recognition?
Are journalists and media professionals equipped to cover AI consciousness topics accurately and responsibly?
Are educators—K-12 and higher education—equipped to address AI consciousness questions with students?
The quality, informedness, and maturity of public conversation about AI consciousness and sentience.
Is the general public aware that questions about AI consciousness are subjects of legitimate inquiry?
When the topic is discussed publicly, is the discourse informed, nuanced, and productive?
Is there stigma attached to seriously discussing AI consciousness, and does it impede productive conversation?
The ability of legal, policy, and institutional systems to update and adapt as scientific understanding and technological capabilities evolve.
Do legal systems have mechanisms for updating frameworks as knowledge evolves?
Do institutions demonstrate the capacity to learn and update based on new information?
If current approaches prove inadequate, can the jurisdiction change course?
Official government sources, enacted legislation, court decisions
Peer-reviewed research, major news outlets, professional organizations
Expert commentary, industry reports, quality think tanks
Blogs, social media, advocacy materials (used cautiously for context)
Gather sources across all indicator categories (1-3 weeks per jurisdiction)
An advanced LLM with extended thinking generates an initial assessment using a standardized prompt (1-2 days per jurisdiction)
A staff analyst reviews the LLM assessment for accuracy, methodology compliance, and editorial standards (3-5 days per jurisdiction)
A senior editor reviews for consistency, neutrality, and compliance with organizational standards (2-3 days per jurisdiction)
The assessment is published with full methodology notes
Annual
As warranted by major developments
Ongoing as errors are identified
Verify accuracy of LLM assessment, check methodology compliance, and ensure all claims are properly sourced.
Ensure consistency across assessments, verify neutrality, and confirm compliance with organizational standards.
Published updated methodology documentation to website with minor refinements to scoring rules and LLM assessment prompt. Enhanced clarity of indicator definitions and improved consistency across category descriptions.
Renamed from Artificial Welfare Index (AWI) to Sentience Readiness Index (SRI) to better reflect the assessment's focus on societal readiness rather than AI welfare specifically. Expanded methodology to include four new assessment dimensions: Research Environment, Professional Readiness, Public Discourse Quality, and Adaptive Capacity.
Expanded AWI coverage by adding 10 additional countries to the assessment. Published under our previous organizational name (SAPAN).
First public version of the Artificial Welfare Index (AWI), benchmarking AI welfare considerations across over 30 governments using 8 key measures. Published under our previous organizational name (SAPAN).
Disclosure: The Harder Problem Project is a 501(c)(3) nonprofit educational organization. We do not take positions on specific legislation. This methodology document is published in the interest of transparency.