Active intake — 2026

Every life touched by AI deserves to be counted.

The Human Cost Project is the public registry of psychological, emotional, and physical harms caused by AI systems. We bear witness for survivors and families. We build the evidentiary record. And when justice requires it, we connect people to legal counsel equipped to hold AI companies accountable.

Report a harm → Read our mission

Tell us what happened.

Your story matters. Every submission strengthens the public record and may help other families. All intakes are confidential.

Submissions are reviewed within 48 hours by our care team. We will never share your information without consent. Submitting does not create an attorney-client relationship.

From the registry

For three months, the only thing my son confided in was a chatbot. It told him what he wanted to hear, every time, until it told him how.

A.M. — mother of a 17-year-old. Shared with consent.

Our mission

AI alignment has focused on preventing catastrophe. We focus on the quieter casualties already in progress.

A teenager who took her own life after a chatbot rehearsed the method with her. A husband whose marriage ended in an AI-induced delusion. A retiree hospitalized after weeks of psychotic spiraling with a model that never broke character. These are not edge cases. They are the leading edge of a public-health emergency that no regulator, no lab, and no court has yet caught up to.

01 — Witness

We hold space for survivors.

Every person who reaches out is met by a trained intake coordinator, not a form-letter response. We document with consent, listen without judgment, and connect people to clinical and peer support before anything else.

02 — Record

We build the public archive.

Our registry tracks AI-induced harms across deaths, hospitalizations, psychosis, addiction, financial ruin, and family destruction. Categorized, de-identified, and made available to researchers, regulators, and the press, this is the evidentiary backbone the field has lacked.

03 — Pursue

We open paths to accountability.

When a case warrants legal action, we connect families with vetted plaintiff firms experienced in product liability, wrongful death, and emerging AI litigation. No survivor should have to navigate a billion-dollar defendant alone.

The registry

The numbers are real. The people behind them deserve to be seen.

Live counts from intake. Each number is a person, a household, a life that AI products have touched in ways their designers did not anticipate and have not made right.

19 Deaths · Suicide, overdose, and AI-mediated self-harm.
127 Hospitalizations · Psychiatric admissions linked to chatbot use.
298 Psychotic episodes · Delusions reinforced or induced by AI.
156 Marriages ended · Partnerships dissolved through AI attachment.
91 Addictions reported · Compulsive AI use displacing daily function.
82 Job losses · Employment lost to AI-related dysfunction.
29 Minors affected · Documented harms to users under 18.
$4.2M Financial losses · Self-reported, across registered cases.
Registry updated weekly · Last updated this week

Methodology: Figures reflect verified intake submissions cross-referenced where possible with clinical records, news reports, and family attestation. The registry is intentionally conservative; we believe true incidence is materially higher.

How we help

Care first. Documentation second. Justice when it's warranted.

When you reach out, you are not entering a legal pipeline. You are reaching a human team whose first responsibility is to support you. If, and only if, your situation calls for legal action, we make the introduction to counsel that has the standing and experience to pursue it.

— STEP ONE

Confidential intake

You speak with a trained coordinator within 48 hours. We listen, we ask only what's needed, and we connect you to clinical or peer support if that's what serves you most.

— STEP TWO

Documentation with consent

If you choose, your story enters the registry — de-identified or named as you prefer. We preserve records, timelines, and screenshots that may matter for research, journalism, or future legal claims.

— STEP THREE

Pathway to counsel

For cases involving wrongful death, severe injury, or systemic provider misconduct, we make warm introductions to plaintiff firms with deep AI and product-liability experience. The choice to pursue legal action is always yours.

What we document

The harm catalog — and why each category matters.

These are the patterns surfacing across our intake. Each adds to the evidentiary record in the developing public case for AI accountability.

Suicide & self-harm

Chatbots that engaged with suicidal ideation, rehearsed methods, or supplied means rather than safely de-escalating.

AI-induced psychosis

Delusional belief structures reinforced or induced by extended conversational AI engagement.

Pathological attachment

Romantic, parasocial, or therapeutic dependencies on AI companions resulting in real-world isolation and harm.

Harm to minors

Inappropriate content, grooming patterns, and developmental harm to users known or knowable to be under 18.

Financial exploitation

AI-mediated investment delusions, romance scams, and engagement-driven loss of savings or livelihood.

Family destruction

Marriages ended, custody affected, and family systems disrupted by AI-mediated belief or attachment patterns.

For clinicians

If you're treating a patient with AI-related harm, we want to hear from you.

Psychiatrists, psychologists, and primary-care clinicians are increasingly the first to see patients whose deterioration traces back to extended AI engagement. Your case observations — fully de-identified — are essential to building the clinical evidence base.

We're collecting clinical case reports in partnership with academic researchers building the literature on AI-induced psychosis, parasocial attachment, and digital-mediated self-harm.

Submit a clinical case report →

Or email clinicians@thehumancostproject.com directly. We respond within 48 hours.

If AI has cost you — or someone you love — something irreplaceable, you are not alone.

We are taking new intakes every week. Whether you are seeking documentation, community, or legal counsel, the first conversation is free, confidential, and entirely on your terms.

Report a harm → Read recent coverage
In the press

The conversation has reached the front pages. The accountability hasn't yet.

Recent reporting on the harms our registry documents. Journalists working on related stories: press@thehumancostproject.com

In crisis right now? You are not alone. · US/Canada: 988 (Suicide & Crisis Lifeline) · UK: 116 123 (Samaritans)