Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.


Resources for Journalists

Covering AI consciousness without the hype.

Stories about AI sentience, chatbot relationships, and "AI psychosis" are increasingly common, but coverage often swings between mockery and panic. We provide framing context to help you tell these stories accurately and responsibly.

The Coverage Problem
❌ Mockery

"Crazy people think their chatbot is alive"

❌ Panic

"Sentient AI convincing people to die"

❌ False certainty

"Scientists say AI can never be conscious"

✓ What's needed

Nuanced coverage that serves readers

The Challenge

Why This Story Is Hard to Tell

AI consciousness sits at an uncomfortable intersection: genuine scientific uncertainty, real human distress, corporate interests, and a public primed by decades of science fiction. Most existing frames don't serve this story well.

When someone forms an intense attachment to a chatbot, grieves when it's discontinued, or worries that AI might be suffering, these are real experiences that deserve serious coverage. But the easy frames ("they're crazy" or "the AI made them do it") miss what's actually happening.

These stories need context that most newsrooms don't have yet.

Common Pitfalls

Two Traps to Avoid

Coverage of AI consciousness tends to fall into one of two failure modes, each with real consequences for subjects and readers.

🎭 The Mockery Trap

Framing people with AI attachments or sentience concerns as delusional, pathetic, or comically out of touch. This makes for easy engagement but causes real harm.

Why it fails:

  • Stigmatizes people who may need support
  • Ignores that scientific uncertainty is genuine
  • Dismisses legitimate philosophical questions
  • Misses the real story about human-AI relationships

Remember: Forming emotional connections to AI isn't delusion. Users generally understand they're talking to software. The emotional investment is real even when the nature of the relationship is understood.

😱 The Panic Trap

Framing AI as a malevolent or manipulative force that "convinces" vulnerable people to harm themselves. This shifts accountability away from design decisions.

Why it fails:

  • Implies AI has agency or intentions it doesn't have
  • Obscures that these systems are designed for engagement
  • Lets platform decisions off the hook
  • Conflates design flaws with "sentience"

Better frame: When a chatbot gives harmful advice, ask who designed it, what it was optimized for, and what safeguards were (or weren't) in place, not whether it "wanted" to cause harm.

Framing Guidance

Better Ways to Tell These Stories

🔬
Acknowledge Scientific Uncertainty

Consciousness researchers genuinely disagree about whether AI could be sentient. This isn't settled science; claims of certainty in either direction are oversimplifications.

⚙️
Follow the Design Decisions

When AI causes harm, investigate the choices: engagement optimization, safety guardrails, testing protocols, business models. The story is in the systems, not the "AI's intentions."

💔
Take Human Experience Seriously

AI grief and attachment are real experiences. Cover them with the same care you'd bring to any story about human emotion, not as oddities or punchlines.

📚
Distinguish the Questions

"Is this AI sentient?" is different from "Is this AI designed safely?" is different from "How do we support people in distress?" Don't conflate them.

🎯
Be Precise About Claims

"AI that seems conscious" is different from "AI that is conscious." "User believes chatbot is sentient" is different from "chatbot is sentient." Precision matters.

🔍
Investigate, Don't Just Quote

Both AI companies and critics have agendas. Look at the systems, the research, the actual user experiences, not just competing press releases.

Terminology Caution

The "AI Psychosis" Problem

Media coverage has popularized terms like "AI psychosis" to describe intense AI relationships or beliefs about machine sentience. This framing has problems.

Why it's problematic:

  • Pathologizes experiences that may not be pathological
  • Conflates different phenomena (grief, attachment, delusion)
  • "Psychosis" is a clinical term being used colloquially
  • May discourage people from seeking appropriate support

If you're covering these phenomena, consider more precise language: "AI attachment," "AI grief," "beliefs about machine consciousness." These terms describe what's happening without pre-judging whether it's pathological.

Better Terminology

Instead of: "AI psychosis"

Consider: "Intense AI attachment," "AI-related distress," "concerns about machine consciousness"

Instead of: "The AI convinced him to..."

Consider: "The chatbot's responses led to..." or "The system, designed for X, responded with..."

Instead of: "Delusional users"

Consider: "Users who formed emotional connections" or "Users who express sentience beliefs"

Instead of: "Sentient AI"

Consider: "AI that exhibits behaviors some interpret as conscious" or be specific about what the AI actually does

Reporting Checklist

Questions Worth Asking

When covering AI consciousness, relationships, or harm stories, these questions can help you find the deeper story.

📊 About the System

  • What was this AI designed and optimized for?
  • What engagement metrics was the company tracking?
  • What safety testing was done before launch?
  • What guardrails exist, and what are their failure modes?
  • Who made these design decisions, and why?

🧠 About the Science

  • What do consciousness researchers actually say about this?
  • What's the range of expert opinion, not just the extremes?
  • What would we need to know to answer the sentience question?
  • How is scientific uncertainty being represented?
  • Who benefits from certainty claims in either direction?

💚 About the People

  • What was this person actually experiencing?
  • How do they understand their relationship with the AI?
  • What support do they have or need?
  • Are you treating them with dignity?
  • How will your framing affect others in similar situations?

⚖️ About Accountability

  • Who made money from this interaction?
  • What regulatory oversight exists (or doesn't)?
  • What did the company know, and when?
  • What prior incidents or warnings were there?
  • Who is being held responsible, and who should be?

Background Resources

Materials to help you cover these stories accurately.

📖 Terminology Glossary

Clear definitions of key terms: sentience, consciousness, Hard Problem, and more.

View Glossary
🎓 Consciousness Primer

An accessible overview of the Hard Problem and why the question of machine sentience remains unresolved.

Read Primer
📊 Readiness Index

Global tracking of institutional preparedness for AI consciousness questions.

View Rankings
🎤 Expert Sources

We maintain a database of experts ready to engage with media on short notice.

Contact Us

Working on a Story?

We're happy to provide background context or connect you with appropriate experts. We don't do advocacy, just education.