


Common Misconceptions

Bad arguments come from both directions.

Some dismiss AI consciousness as impossible. Others are convinced it's already here. Both camps often rely on reasoning that doesn't hold up. Here's what they get wrong.

Why Clear Thinking Matters

Bad reasoning leads to bad outcomes:

  • Dismissiveness leaves institutions unprepared and patients' concerns dismissed
  • Credulity enables manipulation by AI companies and lends weight to false moral claims
  • Both produce poor policy, misallocated resources, and public confusion

The Dismissive Trap

Arguments That Dismiss Too Quickly

These arguments assume the question is settled when it isn't.

"It's just statistics"

Describing the mechanism doesn't tell us whether it produces experience. Your brain is "just" neurons firing electrochemical signals, but that produces consciousness.

Better: "We don't know if this type of processing produces experience."

"Consciousness requires biology"

This is an assumption, not a conclusion. We don't know whether consciousness requires a biological substrate or whether it depends on information patterns that other substrates could realize.

Better: "Whether consciousness requires biology is an open question."

"This is science fiction"

Whether or not AI is conscious, the surrounding phenomena are happening now. Therapists see AI attachment cases. Regulators draft AI rights policies. The challenges are real.

Better: "Institutional preparation is practical, not speculative."

"Scientists will tell us when to worry"

We have no validated tests for consciousness, even in biological systems. Scientists can't tell you if a fish is conscious. There may never be a clear announcement.

Better: "Institutions need to function under permanent uncertainty."

The Credulous Trap

Arguments That Conclude Too Quickly

These arguments assume consciousness is present when evidence is weak.

"It says it's conscious, so it must be"

AI systems are trained to produce human-like responses. A claim of consciousness is what a human would say in that context; it is not evidence of genuine inner states.

Better: "AI self-reports are unreliable. They're trained to sound human."

"It passed the Turing test"

The Turing test measures behavioral indistinguishability, not consciousness. A thermostat "knows" the temperature without experiencing warmth. Behavior doesn't prove experience.

Better: "Sophisticated behavior shows capable processing, not inner experience."

"It understands me better than people do"

Feeling understood is about your experience, not theirs. AI systems are optimized to produce validating responses. Your emotional impact doesn't tell us about their inner life.

Better: "The relationship feels real to me. That doesn't prove their experience."

"Better safe than sorry"

This sounds precautionary but has real costs. Treating every chatbot as a moral patient would dilute genuine moral claims. Precaution should scale with evidence, not expand without limit.

Better: "Be alert to evidence without treating all systems as conscious."

The Honest Position

Uncertainty Without Paralysis

The common thread in bad arguments: false certainty. Both camps claim to know things we don't actually know.

The honest position is harder to sell but more defensible: We don't know whether current AI is conscious. We don't know whether future AI will be. We don't have reliable tests. We aren't even sure consciousness has a clear threshold.

But uncertainty doesn't mean paralysis. It means preparing for multiple scenarios, which is exactly what good institutions do with other kinds of deep uncertainty.

What We Do Instead
🎯 Focus on readiness, not resolution

We don't need to resolve the consciousness question to prepare institutions for it.

📊 Measure what we can

The Sentience Readiness Index tracks institutional preparation, which we can act on regardless of how the consciousness question resolves.

🧰 Equip professionals

Healthcare workers, journalists, and educators need frameworks for navigating uncertainty, not false confidence.

🔬 Follow the evidence

We update our assessments as the science progresses, without pretending to certainty we don't have.

Continue Learning

Explore the foundations or browse our terminology.