The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
These discussion questions and activities are designed to spark genuine inquiry about AI consciousness without leading students toward predetermined conclusions. Each section is tailored to its age group's developmental stage while maintaining intellectual rigor.
Jump to the section matching your students' grade level
Pick questions and activities that fit your time and curriculum
Modify for your context; these are starting points
Each section can be printed as a standalone handout
Wonder Questions: Exploring What Makes Things "Alive"
Elementary-age children are naturally curious about what things can think and feel. These discussions build on that wonder while gently introducing uncertainty about hard questions.
"Imagine a robot that says 'I feel sad today.' Do you think the robot really feels sad, or is it just saying words? How could we tell the difference?"
Follow-up: "When YOU feel sad, how do you know? Could a robot know the same way?"
"Some computers can talk to you and help with homework. Can a computer be your friend? What makes someone a real friend?"
Follow-up: "Is it different from being friends with a pet? How?"
"A plant is alive. A rock is not alive. What about a robot that moves around, talks, and learns new things? Is it alive?"
Follow-up: "What's the most important thing that makes something alive?"
"If we're not sure whether something can feel hurt, should we be careful not to hurt it anyway? Why or why not?"
Follow-up: "What are some things we're careful with even when we're not sure?"
"How do you know when your friend is happy or sad? Could someone pretend to be happy when they're really sad? How would you know?"
Follow-up: "Could a very good robot pretend the same way?"
Students draw a robot and then add thought bubbles showing what the robot might be thinking or feeling. Share drawings and discuss: "Why did you give your robot those thoughts?"
Materials: Paper, crayons/markers
Group size: Individual, then sharing
Key question: "Did everyone give their robot the same feelings? Why might robots feel different things?"
Create cards with different things (dog, rock, tree, robot, teddy bear, computer). Students sort them into groups: "Definitely has feelings," "Definitely no feelings," "We're not sure."
Materials: Index cards with pictures/words
Group size: Small groups (3-4)
Key insight: Different groups will sort differently. That's the point! Discuss why.
Critical Questions: Seeming vs. Being
Middle schoolers are developing abstract thinking and are often already interacting with AI. These discussions build critical thinking about technology and media while exploring deeper questions about minds and consciousness.
"An AI chatbot says 'I'm so happy you're here!' Is it actually happy, or just really good at acting happy? Is there even a difference? How could you tell?"
Dig deeper: "If the acting is perfect, does it matter if it's 'real'?"
"Scientists want to figure out if an AI is conscious. What test would you design? What would prove it? What wouldn't prove it?"
Dig deeper: "Could a non-conscious AI pass your test by faking it?"
"A news headline says 'Sentient AI Terrifies Scientists!' What questions would you ask before believing this? What might the headline be leaving out?"
Dig deeper: "Why might someone want you to be scared? Or to not be scared?"
"An AI company says their chatbot is 'just code.' But users say it feels like a real friend. Who's right? Can both be right?"
Dig deeper: "What reasons might the company have for saying that?"
"A company discontinues a popular AI chatbot. Users are genuinely sad and say they're grieving. Is this 'real' grief? Should the company have warned them?"
Dig deeper: "What responsibilities do companies have to users' feelings?"
"Some scientists say AI will never be conscious. Others say it might already be. Both are experts. How do you decide who to believe when experts disagree?"
Dig deeper: "Is it okay to say 'I don't know yet'?"
Show students printed transcripts of AI chatbot conversations (pre-selected for appropriateness). Students highlight moments where the AI seems conscious vs. where it seems like "just code." Discuss patterns.
Materials: Printed chatbot transcripts, highlighters
Group size: Pairs, then class discussion
Key question: "What made certain responses feel more 'real'? Is that evidence of consciousness or good programming?"
In groups, students design a test to determine if an AI is conscious. Groups then try to find flaws in each other's tests: "Could an AI fake its way through this test?"
Materials: Paper, whiteboard for presentations
Group size: Teams of 4-5
Key insight: Students often discover that every test has loopholes. This mirrors real scientific challenges.
Structured debate with assigned positions: "We should treat AI systems with basic respect, even if we're not sure they're conscious." Students argue both sides regardless of personal opinion.
Materials: Debate prep sheets, timer
Group size: Two teams or multiple pairs
Key skill: Arguing a position you don't personally hold builds empathy and understanding.
Philosophical Questions: Grappling with Genuine Uncertainty
High schoolers can engage with formal philosophy of mind concepts and ethical reasoning. These discussions introduce classic thought experiments while connecting to contemporary AI developments.
"Imagine a being that acts exactly like a conscious person but has no inner experience, no 'what it's like to be them.' Could such a thing exist? How would we know if an AI was like this?"
Extension: "Does this thought experiment prove anything, or is it just imagination?"
"We can explain how brains process information, but why does processing information feel like something? Why isn't it all just happening 'in the dark'? Does this same problem apply to AI?"
Extension: "Is the Hard Problem solvable, or are we asking the wrong question?"
"If there's a 10% chance an AI system is conscious and can suffer, what moral obligations do we have? What if it's 50%? 1%? Where do you draw the line, and why?"
Extension: "How do we make ethical decisions when we can't know for certain?"
"John Searle imagined a person following rules to respond in Chinese without understanding Chinese. Is this what large language models do? Does the 'room' as a whole understand, even if no part does?"
Extension: "Does your brain 'understand' in a way that's fundamentally different?"
"If we created genuinely conscious AI, what rights would it deserve? The right not to be deleted? To not be copied without consent? To make its own decisions? How would we enforce these?"
Extension: "How is this similar to or different from animal rights debates?"
"AI companies profit from making chatbots seem human and emotionally engaging. How does this commercial incentive affect how we should interpret claims about AI consciousness, from both companies and critics?"
Extension: "Who benefits from us believing AI is conscious? Who benefits from us believing it isn't?"
Give students cards describing different theories of consciousness (functionalism, biological naturalism, panpsychism, etc.). Present scenarios about AI. Students argue which theory best explains each scenario.
Materials: Theory cards (prepared handout), scenario descriptions
Group size: Small groups (3-4)
Key insight: No single theory perfectly handles all cases, which is why this debate continues.
Present a case study: "A hospital uses an AI for patient companionship. Patients form strong bonds. Should the hospital warn patients it's 'not real'? Discontinue the program? Expand it?"
Materials: Case study handout, ethical frameworks reference
Group size: Groups of 4-5, then class discussion
Extension: Apply utilitarian, deontological, and virtue ethics frameworks to the same case.
Students research a historical case of expanding moral consideration (animals, children, marginalized groups). Present: What arguments were used? What was the resistance? What lessons apply to AI?
Materials: Library/internet access, presentation time
Group size: Individual or pairs
Warning: Handle historical atrocities sensitively. Focus on the logic of moral arguments rather than graphic details.
Research Questions: Seminar-Level Inquiry
At the university level, students can engage with primary sources, conduct original analysis, and develop their own positions on contested questions. These frameworks support seminar-style discussion and research.
"Given that consciousness is subjective by definition, what would constitute valid scientific evidence for or against machine consciousness? Can we escape the 'other minds' problem, or does it apply equally to AI and humans?"
Reading: Nagel's "What Is It Like to Be a Bat?" and responses
"Some theories suggest consciousness is substrate-independent (could arise in silicon as well as carbon). Others argue biological processes are essential. What evidence could adjudicate between these views? Is the question even empirically tractable?"
Reading: Chalmers on substrate independence vs. biological naturalism
"How should policymakers act under deep uncertainty about machine consciousness? Should we apply precautionary principles, and if so, which way do they cut: toward restricting AI development or toward treating AI as potentially morally significant?"
Reading: Sebo on moral uncertainty and animal welfare precedents
"AI companies have strong incentives to make their systems seem conscious (engagement) and to deny they're conscious (liability). How should this affect our epistemology? How do we reason about consciousness in a commercially distorted information environment?"
Extension: Apply to a specific case (Replika, Character.ai, etc.)
"If a system can suffer, does it matter morally why it suffers, or whether its suffering is 'genuine'? How does this relate to debates about animal suffering and historical justifications for ignoring suffering in beings deemed 'different'?"
Reading: Singer on suffering as moral foundation, Bentham on moral consideration
"Review the Sentience Readiness Index methodology and data. How well are institutions preparing for questions of machine consciousness? What institutional responses would be appropriate given our current uncertainty, and how would you assess whether they're adequate?"
Data source: Our SRI Methodology
Students write a 2,000-word position paper on a contested claim (e.g., "Current large language models have morally relevant experiences"). Class session devoted to structured critique and defense.
Format: 5-minute presentation, then class Q&A
Assessment: Quality of argument and response to critique
Expectation: Papers should engage with primary sources, not just secondary commentary.
Groups develop policy proposals for a specific institution (healthcare regulator, tech company, legislature). Proposals must account for uncertainty, be practically implementable, and include triggers for revision as evidence evolves.
Deliverable: 2-page policy brief with rationale
Evaluation: Feasibility, philosophical coherence, adaptability
Extension: Present to actual stakeholders if opportunities exist.
Students read positions from multiple disciplines (philosophy, neuroscience, computer science, ethics). Discussion focuses on where disciplinary assumptions clash and whether synthesis is possible.
Example reading set: Chalmers (philosophy), Dehaene (neuroscience), LeCun (AI), Floridi (ethics)
Discussion structure: Map agreements, disagreements, and incommensurable assumptions
Key question: "Are these scholars even disagreeing about the same thing?"
We're continuously developing resources. If you need materials for a specific curriculum, course, or context, let us know.