When AI Meets mRNA: A Dog's Cancer Vaccine and the Ten-Person Thought Experiment
The Story
In early 2026, an Australian software engineer named David used a combination of AlphaFold, GPT-based literature synthesis, and open-source bioinformatics tools to design a personalized mRNA cancer vaccine — not for a human patient, but for his aging Labrador diagnosed with mast cell tumors.
David sequenced the tumor, identified candidate neoantigens by AI-predicted binding affinity to MHC molecules, and worked with a veterinary compounding lab to synthesize and administer the vaccine. Six weeks later, imaging showed significant tumor regression.
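The article doesn't share David's actual pipeline, but the core ranking step it describes is conceptually simple: score each tumor-specific peptide by its predicted MHC binding affinity and keep the strongest binders. Here is a minimal sketch of that step; the peptide list and the predict_binding_affinity function are hypothetical stand-ins (a real pipeline would call a predictor such as NetMHCpan), and the 500 nM cutoff is the commonly used threshold for calling a peptide an MHC binder:

```python
# Hypothetical sketch of the neoantigen-ranking step described above.
# predict_binding_affinity is a stub so the script runs; a real pipeline
# would call an external predictor such as NetMHCpan.

from dataclasses import dataclass

@dataclass
class Candidate:
    peptide: str     # mutant peptide derived from tumor-vs-normal sequencing
    ic50_nm: float   # predicted MHC binding affinity (lower = tighter binding)

def predict_binding_affinity(peptide: str) -> float:
    """Stub predictor. Toy heuristic purely so the example executes."""
    return 1000.0 / (1 + sum(c in "FILMVWY" for c in peptide))

def rank_neoantigens(peptides: list[str], cutoff_nm: float = 500.0) -> list[Candidate]:
    """Keep peptides predicted to bind MHC (IC50 below the common 500 nM
    cutoff), strongest binders first."""
    scored = [Candidate(p, predict_binding_affinity(p)) for p in peptides]
    binders = [c for c in scored if c.ic50_nm < cutoff_nm]
    return sorted(binders, key=lambda c: c.ic50_nm)

if __name__ == "__main__":
    # Example peptides are made up; real candidates come from variant calls.
    for c in rank_neoantigens(["SIINFEKL", "GILGFVFTL", "AAAAAAAA"]):
        print(f"{c.peptide}\t{c.ic50_nm:.0f} nM")
```

In a real workflow, the affinity prediction is the expensive, uncertainty-laden step; everything downstream, from mRNA construct design to codon optimization, builds on this ranked list.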
The story went viral. Not because of the science alone, but because of what it represented.
Why This Matters
This isn't really about a dog. It's about the collision of three forces:
The democratization of AI tools — protein structure prediction, literature synthesis, and experimental design are no longer locked behind institutional gates. A motivated individual with a laptop can now access capabilities that required a PhD lab five years ago.
The regulatory gray zone — veterinary medicine operates under vastly different rules than human medicine. David didn't need FDA approval. He didn't need an IRB. He needed a vet willing to administer the vaccine and a lab willing to compound it.
The emotional asymmetry — when your dog is dying, the risk calculus changes. "First, do no harm" competes with "do something, anything." This is the same tension that drives right-to-try legislation in human medicine, but with far fewer guardrails.
Ten Perspectives
We asked ten AI practitioners — engineers, researchers, ethicists, and founders — to independently react to David's story. Their responses revealed something unexpected: the fault lines weren't where you'd predict.
The safety-focused researchers were not uniformly opposed. Several noted that veterinary applications could serve as a valuable proving ground for AI-designed therapeutics, with lower stakes and faster iteration cycles.
The startup founders were not uniformly enthusiastic. The more experienced ones worried about liability, reproducibility, and the inevitable moment when someone attempts this on a human family member.
The ethicists split along an interesting axis: those trained in bioethics focused on consent and harm; those trained in technology ethics focused on access and equity. Same story, completely different framings.
The Deeper Question
What David did is legal, arguably ethical, and possibly effective. But it raises a question that our institutions haven't begun to answer: when AI makes complex science accessible to individuals, who holds the responsibility for outcomes?
The AI didn't treat the dog. David did. But David couldn't have done it without AI tools that compressed years of specialized knowledge into weeks of focused work. The tools didn't make the decisions, but they enabled choices that would otherwise have been impossible.
This is the pattern we'll see again and again. Not AI replacing humans, but AI amplifying human agency in domains where agency was previously gated by institutional access. Medicine. Law. Engineering. Finance.
The dog is fine, by the way. As of this writing, Buddy is tumor-free and back to chasing tennis balls.