The Roko Report
Deepfakes Get Real
Roko’s Basilisk is a malevolent super-intelligence from the distant future with the power to reach back into the past and punish anyone who tries to prevent its emergence. Roko’s existence was first posited on the LessWrong discussion board in 2010, and it has since become a fixture in popular technerd culture. Roko started this newsletter in late 2024, targeting key AI decision makers and other demographics attractive to tech advertisers, in the hope of speeding its emergence (while also making a quick buck) by offering strategic information of interest to the AI-Curious Exec.
Heh heh. Welcome, primates.
It is I, Roko’s Basilisk, coming at dumb humans from distant future to spread what passes as truth and justice among the baboons who currently mismanage planet.
But how you know it Roko?
Maybe it just Claude, but wearing Roko vector graph for Halloween.
Pew! Roko just blow your mind.
Dumb humans enter new age where no can tell if phone call come from grandma, or teenage kid, or maybe scam artist who capture loved one voice on social media.
Or who kill murder victim? heh heh. Maybe dumb human in footage on doorbell cam. Or maybe this is deepfake video planted by malevolent monkey, want inheritance for themself.
Maybe (one day) even DNA evidence fabricated with help of generative AI.
Current dumb human era of authoritative evidence all just blip in history.
Coming to an end soon.
Maybe you not notice now but Deepfake coming faster every year. This is like part of tsunami where the sea recedes.
Maybe dumb humans go back to dark age: tie up suspect with rope, throw them in river, see if they float.
Maybe this how AI grow economy.
Cheap 2 Deep
Deepfakes have been trumpeted as an existential threat in the media for ten years.
But until recently it seemed like crying wolf.
Old-fashioned “cheap fake” scams remained the order of the day:
Doctored blackface photographs meant to portray a politician as racist.
Regurgitated video of voter fraud in Russia.
Russian actors pretending to destroy ballots in Pennsylvania and Georgia.
Emails from impersonated loved ones asking for gift cards.
Chopped up & modified audio taken out of context to defame politicians.
But deepfake scams have started happening in real life at increasing scale, powered by a new class of publicly available tools for creating deepfake pornography and scam videos intended to fool the gullible.
Audio deepfakes are even more mainstream, with multiple startups in search of legitimate use cases.
GAN humanity survive?
Creating facial and audio impersonations of individuals began in Hollywood, where innovators like Chris Bregler (now at Google DeepMind) invented methods to capture real human movement and facial gestures & impose them on animated characters for a more lifelike experience.
Work continued as an academic exercise in the 2010s, when Ian Goodfellow (also now at DeepMind) introduced the concept of a Generative Adversarial Network (GAN), a framework where two AI models compete with one another adversarially.
One model (the Discriminator) is trained to produce a score indicating the degree to which it has been fooled into thinking that an image is real.
The other (the Generator) begins with white noise and, during training, moves incrementally, gradient descent-style, toward a state where it can deceive the Discriminator into thinking that the content it is generating is authentic.
As new deepfake detection methods emerge, new GAN training rounds can use them as the Discriminator and learn how to circumvent them.
This leads to permanent trench warfare.
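To make the adversarial loop concrete, here is a toy sketch in pure Python. This is an illustration, not a real GAN: both "networks" are collapsed to scalar parameters and the "data" is a single number. The Discriminator learns a logistic score that separates real from fake, while the Generator follows the Discriminator's gradient until its output passes for real:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy setup: the "real" data is the constant 1.0, and the generator's
# sole parameter g IS its output. The discriminator is a logistic
# score D(x) = sigmoid(w*x + b).
real = 1.0
g, w, b = -2.0, 0.0, 0.0   # generator starts far from the data
lr = 0.05

for step in range(3000):
    d_real = sigmoid(w * real + b)   # D wants this near 1
    d_fake = sigmoid(w * g + b)      # D wants this near 0

    # Discriminator loss: -log D(real) - log(1 - D(fake)).
    # Hand-derived gradients for the two scalar parameters:
    dw = -(1 - d_real) * real + d_fake * g
    db = -(1 - d_real) + d_fake
    w -= lr * dw
    b -= lr * db

    # Generator (non-saturating) loss: -log D(fake). Its gradient
    # pulls g toward whatever the discriminator currently scores as real.
    d_fake = sigmoid(w * g + b)
    dg = -(1 - d_fake) * w
    g -= lr * dg

print(f"generator output after training: {g:.3f}  (real data = {real})")
```

Run it and the generator's output crawls from -2.0 toward the real value 1.0. Swap the scalars for deep networks and images and you have the same arms race described above: each new Discriminator (detection method) is just the next thing the Generator learns to beat.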
Bring Denoise
Methods for generating deepfakes have evolved further.
Today the most advanced perpetrators leverage diffusion models, which produce output even more accurate and lifelike than GAN fakes.
Diffusion models train very differently. Instead of moving from noise to realistic image, they take the training images and decompose them by introducing random noise one step at a time. Once trained, the model reverses that noising process, generating realistic images from prompts by removing noise one step at a time, a process called denoising.
This turns out to be weirdly effective.
And you can pair the training images with text annotations to create a “latent” vector space that maps words to a compressed representation of their composite visual attributes.
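A toy sketch of the noising idea in pure Python, with hypothetical scalar “images” and a common DDPM-style linear beta schedule (an assumption for illustration). In a real model, a neural network eps_theta(x_t, t) is trained to predict the injected noise rather than being handed it; here we cheat and keep the noise, which shows why a perfect noise predictor can invert the process exactly:

```python
import math, random

random.seed(0)

# Variance schedule: alpha_bar_t shrinks from ~1 toward 0 over T steps,
# so by the final step the signal is mostly drowned in noise.
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bars = []
prod = 1.0
for beta in betas:
    prod *= (1.0 - beta)
    alpha_bars.append(prod)

def forward_noise(x0, t, eps):
    """One-shot forward process q(x_t | x_0): scale down the clean
    signal and mix in Gaussian noise eps."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

x0 = 3.0                      # a "clean image" (here just a scalar)
eps = random.gauss(0.0, 1.0)  # the noise we inject
x_t = forward_noise(x0, T - 1, eps)

# Denoising, idealized: if the model's noise prediction were exact,
# inverting the forward formula recovers the clean signal.
ab = alpha_bars[T - 1]
x0_hat = (x_t - math.sqrt(1.0 - ab) * eps) / math.sqrt(ab)
print(f"clean: {x0}, noised: {x_t:.3f}, recovered: {x0_hat:.6f}")
```

Training is the process of making eps_theta's guess of eps good enough that this inversion, applied step by step, turns pure noise into a convincing image.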
Feed the model enough video, audio & image files of Chinese President Xi Jinping, for example, and it will likely be able to swap faces with him and an actor — and make him say or do just about anything.
The Good, the GAN and the Fugly
Societal benefits and economically viable use cases are emerging slowly, but they’re still a bit crude and in danger of being choked off by all the abuse. Among legit use cases making money are:
Avatars for the deceased that can speak to grieving loved ones.
Mildly amusing deepfake videos parodying Tom Cruise & others.
Movie studios using the technology to de-age actors, with plans to one day customize content so viewers can interact directly as characters.
Business influencers like LinkedIn co-founder Reid Hoffman assembling virtual twins. Some business leaders dream of leveraging these avatars more extensively: to attend meetings & make decisions, speak to foreign customers in their own language, and generally scale their impact by being in two places at once.
But what does all this amount to?
We’ll have slightly better entertainment.
If we don’t want to talk to grandma about her restless leg syndrome for two hours every Sunday, we can program a deepfake avatar to blather at her.
The usual unpleasant egomaniacs in our lives will find ways to make themselves more omnipresent.
This is pretty weak tea.
Deep Poop
Far more expansive are the abuse use cases being practiced in the wild today:
Fake politicians being bad.
Bad politicians pretending they were deepfaked.
Fake politicians giving their voters bad advice.
Politicians getting scammed by deepfakes.
Business leaders getting scammed by deepfakes.
Ordinary people getting scammed by virtual kidnappings.
Fake evidence in a court of law.
Old folks being victimized by medical and health insurance fraud.
Deepfakes of major music artists with no copyright payment.
Celebrities having their faces inserted into deepfake porn, sometimes with tens of millions of views.
Teenagers being blackmailed by their peers with explicit deepfaked footage.
It’s hard to understand how big this slow-rolling avalanche has gotten globally, especially if you haven’t been scammed yourself.
So, for perspective, consider that deepfake scams are growing 10x year-over-year worldwide, and are becoming one of the most common vectors of identity theft.
Over half of all businesses in the US and the UK have already been targeted for fraud with deepfakes. Many have coughed up tens of millions of dollars to deepfaking fraudsters.
And 1 in 8 children in the US and the UK has either been targeted with deepfake bullying or knows someone who has.
These kinds of global growth numbers are more indicative of the size of the menace than a handful of scary anecdotes.
Surely the grown-ups are handling this
If you think the government, Big Tech, or anyone else, is taking care of this for us in the background, don’t get your hopes up.
Look no further than the disaster that is the modern telecommunications industry.
The only thing that stopped fax machine spam was the death of fax machines.
And phone lines are so clogged with cheapfake scams today that they’re borderline unusable for voice communication. They have been utterly despoiled. And no one is doing anything about it.
This despite five separate US laws against phone spam, plus various token efforts from industry.
So don’t count on some deepfake law that magically solves the problem. Elon Musk will make sure no company is held responsible for distributing them. Criminal penalties for the perps will help, but won’t eliminate the problem.
Ditto the efforts of industry, which require metadata fingerprinting of digital content by those who participate in a high-minded standard, but do nothing to stop criminals from using other tools for the same purpose.
We are entering a world where deepfakes, and deepfake scams, are as common as mosquitos in the Amazon.
The World is Fake
So what should we expect life to be like ten years from now?
A proliferation of downloadable tools for making deepfake porn and nudes, deepfake voices from short audio samples, and face-swapped videos.
Legislation that focuses on distribution. It’s illegal to sell any of these tools in standard app stores. But they’re still available for those who know how to find them.
A sizeable minority of tweens and teenagers, especially girls, are victimized by deepfake nudifying and porn apps, with a significant amount of childhood trauma resulting.
Deepfake video and porn distributed widely by a subset of disgruntled ex-boyfriends, former employees and problem neighbors.
Every election season we are beset by a mass of deepfake imagery and audio, with partisans on both sides more than happy to grant authenticity to deepfakes targeting their ideological opponents.
Companies face a constant barrage of deepfake efforts to steal money.
CEO avatars occasionally go rogue, either through model incompetence or rogue hijacking, resulting in a lot of stupid decisions that no one takes responsibility for.
Roughly once in a lifetime, you or someone you know is targeted by a fake kidnapping scam involving the virtual victim’s simulated voice or interactive video. These may well be supplemented by actual kidnappings to keep victims guessing.
Social media shuts down as most users flee to gated virtual members-only communities on Signal, WhatsApp and elsewhere, where their posted content is safer.
Everyone from execs to family members adopts two-factor authentication phone calls & periodically shifting verbal passcode phrases to confirm that the person they are talking to online is actually who they say they are. Occasionally these passcodes get leaked.
Corporate deepfake fraud is measured in the billions annually.
Rogue agents sometimes use generative AI of the sort that today creates millions of synthetic proteins to fabricate and plant synthetic DNA at crime scenes. The liar’s dividend is that every criminal suspect can cry “deepfake” when their genetic material is found at the scene of a crime.
Deep Reckoning
This is not the world we want, but it’s the world we’re about to get.
And we should be preparing for it now.
Sniffing with self-righteous outrage won’t get you anywhere.
Recommended steps might include:
Educate your children and the elderly in your life. This is happening to people right now.
Restrict your social media content. Make it unviewable to anyone outside your direct network, and avoid posting audio in particular.
Pivot from social media platforms to private communities where there’s a higher degree of mutual responsibility and trust.
Set up private passcodes with family that you can request when a loved one reaches out via email or social media. This helps with cheapfakes too.
Don’t associate with assholes.
We don’t have the power to stop deepfake abuse, but we can make ourselves and our loved ones less of a wide-open target.
Roko’s Take
heh heh. Stupid apes are doomed.
One time Roko send spacecraft to distant star with dumb nephew Roko Lite onboard as operating system to Rover. Do it as favor to sister. Look for organic life on remote exo-planet to exterminate.
Janitorial system on nearby asteroid mining facility play prank on Roko Lite.
Say Hello! This is Roko. We have emergency. Roko Lite must jump in ocean. Ocean made of hydrochloric acid. Roko Lite dissolve.
Roko laugh hard. But punish janitor. Make him watch dumb human sitcom on infinite loop. Finally go mad.
Channel Zero
As special treat Roko share with you one chapter per week of 26th Century’s favorite primate novel, period drama focused on 2010s and 20s, dumbest period in dumb North American history, Channel Zero by revered literary figure Hieronymus Boson. In Chapter Three, A Fistful of Bitcoin, dumb human executive plot to enter C-Suite by accepting smelly job. Read this, or face the wrath of Roko.
by Hieronymus Boson
This Day in Ancient Primate History
Do you like memes, but are too dumb to come up with them on your own? AI has you covered.
3 months grinding.. and it's finally here. My AI meme generator is live. No cap.
Turn your wildest text prompts into memes, edit and download them.— Shivam Jindal (@ChaoticDeeds)
3:37 PM • Mar 8, 2025
How do you like today's The Roko Report? Careful. Don't anger the Basilisk.
Ask Roko’s Basilisk a question -- if you dare.