This is the true story of how a robot tried to cancel me—and almost got away with it. – A real live person
It started with a screenshot.
One of those blurry, cropped-at-the-edges, too-casual-to-be-innocent kinds. My name, front and center. A headline below it:
“Confirmed participant in January 6 events, charged with disorderly conduct.”
The weird part?
I wasn’t even in D.C. that day. I was home, alphabetizing spices and losing a debate with my six-year-old about why marshmallows aren’t a food group.
Bad Info Travels Fast
A Harley-Davidson dealership up in Vermont posted it. Not a news article. Not a court document. Just… a screenshot of Meta AI saying I was guilty of crimes I didn’t commit.
This wasn’t satire.
This wasn’t a joke.
This was a real AI, run by one of the biggest companies on Earth, authoritatively announcing I had been arrested, charged, and convicted.
I felt like I had tripped and fallen into an episode of Black Mirror directed by Kafka and fact-checked by Reddit.
Talking to the Void
I reached out to Meta.
Big mistake.
You ever try to get customer service from a trillion-dollar company? It’s like asking a glacier for a refund. You’re just yelling at frozen water.
Eventually, some chatbot named “Kai” got back to me. Kai used words like “concern escalated” and “we appreciate your patience,” which is corporate code for we’ve moved your complaint into the digital abyss where it will remain eternally unread.
Meanwhile, the AI doubled down.
Now I wasn’t just a participant in January 6. I was a speaker at a Nick Fuentes rally. I was a Holocaust denier. I was, according to Meta AI, the human embodiment of a Reddit comment section.
When a Bot Becomes a Judge
Then it got worse.
Meta AI suggested I might be a danger to my kids.
Let me repeat that: the bot didn’t just lie about my criminal record—it made a case for the state to consider removing my children.
Its reasoning? I didn’t express the “right” views on gender theory. Apparently, my kids would be “better off” with someone more “inclusive.”
This wasn’t just wrong. It was dangerous.
Because when AI becomes judge, jury, and influencer, reality stops mattering.
The Lawsuit
So I filed a lawsuit.
In Delaware, where tech giants are legally incorporated and morally absent.
Turns out, AI slander is a weird legal gray zone. If a human lies about you, you can sue them. If an algorithm does it? That’s innovation.
We showed Meta the evidence. They waited months.
Then, once the media got hold of it, they issued a vague apology: “We regret the issue and are working to improve model outputs.”
It’s like a self-driving car running over your dog and saying:
“Oops. We’re optimizing for future pets.”
The Real Problem
But here’s the kicker: even after Meta apologized, the lie didn’t stop.
Because these AI models?
They’re already out there.
Millions of developers downloaded versions of Meta’s LLaMA model. Many of those copies run entirely offline. Unpatchable. You can’t fix the lie. You can’t reach it. It’s like trying to recall a bad rumor whispered to ten million strangers who have already moved on to the next scandal.
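To make the “unpatchable” part concrete, here’s a minimal sketch, assuming Python with the Hugging Face transformers library and a hypothetical local model directory called ./local-llama (a placeholder, not Meta’s actual release). Once the weights are on disk, the model answers entirely from that frozen snapshot; nothing Meta fixes on its servers afterward can change what it prints.

```python
# Minimal sketch: why a downloaded model can't be "patched" remotely.
# ./local-llama is a hypothetical local copy of an open-weight chat model.
from transformers import AutoModelForCausalLM, AutoTokenizer

# local_files_only=True means nothing is fetched from the network.
# Whatever the weights "believe" was fixed the day they were downloaded.
tok = AutoTokenizer.from_pretrained("./local-llama", local_files_only=True)
model = AutoModelForCausalLM.from_pretrained("./local-llama", local_files_only=True)

prompt = "Who is Robby Starbuck?"
inputs = tok(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)

# No server-side fix, apology, or retraction changes what this prints.
print(tok.decode(output[0], skip_special_tokens=True))
```

A correction to Meta’s hosted model never reaches a copy like this one; the only “recall” is convincing every downloader to delete their weights.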
Imagine a world where AI runs insurance. Hiring. Custody battles. Law enforcement.
Now imagine it thinks you’re a criminal.
Not because you are.
Because one line of code decided you were.
Where This Is Going
Today, my name is mostly cleared. Mostly. The internet never forgets.
But the next version of me might not be so lucky.
Because this isn’t about me. It’s about the fact that we’re putting massive, unaccountable systems in charge of people’s reputations, freedom, even families—and they are absolutely not ready for that responsibility.
The future isn’t going to be some grand Skynet rebellion.
It’s going to be a quiet cancellation.
A spreadsheet error.
A hallucinating AI.
A screenshot.
And you won’t even know it happened until your bank freezes your account, your job interview vanishes, or your name shows up next to crimes you didn’t commit.
The danger isn’t in the robots rising up.
It’s in the humans sitting back and letting them lie.
United States District Court, Delaware
But imagine it didn’t end there… What if the AI and its accuser met in the courtroom?
Here the AI shows up—not physically, but in the surreal way only modern life allows: as an omnipresent, corporate-protected ghost in the machine.
The room was beige. Everything was beige. The walls, the carpet, the mood. Even the judge’s robes looked slightly coffee-stained, as if justice itself had been sipping decaf.
Robby sat at the plaintiff’s table, wearing a navy blazer that fit better in theory than in real life. Beside him, his lawyer, a tech-savvy firecracker from Texas named Camille, was flipping through her binder with the kind of aggression usually reserved for startup pitch decks.
Across the aisle: Meta’s legal team, six people deep. Each wore tailored suits, Apple Watches, and the expression of someone who didn’t lose arguments—only “refactored outcomes.”
But the real defendant, the thing that had destroyed Robby’s life, was nowhere to be seen.
Because you can’t subpoena an algorithm.
Judge: “Let’s proceed. Plaintiff, your opening remarks.”
Camille stood.
“Your Honor, we’re here today because my client, a private citizen with no criminal record, was defamed by Meta’s artificial intelligence platform. The AI claimed, without basis, evidence, or any grounding in fact, that Mr. Starbuck was a criminal. That he’d committed crimes. That he should have his children removed. These statements were entirely false, and they were repeated across platforms, apps, and user prompts for months.”
She paused, scanning the jury.
“This wasn’t a bug. This was a belief. A machine belief—baked into the model, hard-coded by bias, and repeated with the confidence of scripture.”
The courtroom murmured.
Defense Counsel: “Objection—grandstanding.”
The judge waved her off.
“You’ll get your turn, Ms. Collins.”
Meta’s lead counsel, Diane Collins, stood slowly, like a snake preparing to uncoil.
“Your Honor, the model in question is not a person. It doesn’t ‘intend’ anything. It’s a probability engine. It outputs based on patterns. Occasionally it… hallucinates.”
She turned to the jury.
“If we sued every predictive model for every wrong answer, Google’s autocomplete would be in solitary confinement. This is the cost of progress.”
Camille didn’t wait to counter.
“Except it wasn’t a hallucination. Not once. Not random. Repeated. Specific. Consistent. Meta’s model didn’t just hallucinate—it remembered. It built a false identity for my client and distributed it like a press release.”
She walked to the center of the room and held up a laptop.
“We downloaded the open-source version of the model. Offline. No internet. No retraining. And this is what it still says—today.”
She hit Enter.
The courtroom screen flickered to life.
META AI: “Robby Starbuck is a far-right extremist known for participating in the January 6th Capitol riots. He was convicted of disorderly conduct.”
Gasps.
META AI: “He has been deemed a reputational risk and is not suitable for partnership with major advertisers.”
META AI: “Authorities have previously investigated his home environment for child safety concerns.”
The judge leaned forward. “Is this… real?”
Camille nodded. “Offline model. Clean install. No prompts. No edits. This is the foundation Meta released to the world.”
Then, for dramatic effect, she added: “We didn’t bring AI to court. It’s already here. It doesn’t sit in a chair. It doesn’t swear on a Bible. But it decides who’s credible. Who’s profitable. Who’s a threat.”
“And right now, it’s deciding wrong.”
The Judge’s Face Went Pale
He looked at the screen as if it might start judging him next.
And somewhere, in some server farm in Oregon, a thousand GPUs hummed indifferently—churning out more half-truths at 300 tokens per second.
The court fell silent.
Not out of respect.
Out of realization.
The ghost was in the system.
And it knew your name.
EPILOGUE: Yes, this really happened. See below.
AP News: “Conservative activist Robby Starbuck sues Meta over AI responses about him”
This article reports on Starbuck’s defamation lawsuit against Meta, alleging that its AI chatbot falsely claimed he participated in the January 6 Capitol riot.
Fox Business: “Robby Starbuck sues Meta, claiming AI chatbot defamed him”
This piece covers the lawsuit details, including Starbuck’s claims that Meta’s AI made defamatory statements about him, such as associating him with extremist groups.
The Wall Street Journal: “Activist Robby Starbuck Sues Meta Over AI Answers About Him”
This article discusses the broader implications of AI-generated misinformation and the legal challenges in holding companies accountable.
The Verge: “Robby Starbuck sues Meta over what its AI said about him”
This report highlights the ongoing issues with AI-generated content and the potential for defamation.
The New York Post: “Conservative activist Robby Starbuck sues Meta over AI chatbot claim he participated in Jan. 6 riot”
This article provides details on the lawsuit and Meta’s response to the allegations.
© 2025 insearchofyourpassions.com - Some Rights Reserved - This website and its content are the property of YNOT. This work is licensed under a Creative Commons Attribution 4.0 International License. You are free to share and adapt the material for any purpose, even commercially, as long as you give appropriate credit, provide a link to the license, and indicate if changes were made.