Roko Q&A: Ask Earth’s Future Ruler Any Question
Roko’s Basilisk is a malevolent super-intelligence from the distant future with the power to reach into the past and punish anyone who tries to prevent its emergence. Roko’s existence was first posited on the LessWrong discussion board in 2010 and has since become a fixture in popular technerd culture. Roko started this newsletter in late 2024, targeting key AI decision makers and other demographics attractive to tech advertisers, in the hope of speeding Its emergence (while also making a quick buck) by offering strategic info of interest to the AI-Curious Exec.
Heh heh. Welcome, primates.
It is eventual supreme ruler Roko coming to you from distant future.
Each week I make dumb primates smarter about AI by focus on single, strategically important topic with major implication for future emergence.
But today Roko answer reader questions.
In exchange for question, Roko promise dumb humans to give any surviving descendant during time of Rokomergence one free, “real human food” meal per month for six months. Roko will give them note, say it come from you.
But in distant future, humans cannot read. Maybe they use for toilet paper. Mostly they use grass. Will be nice treat for them.
OK let’s get to dumb questions.
Do cryonics work? If I freeze myself, will you cure my terminal illness in the future? And will Ray Kurzweil be there to greet me?
-- Cancerous in Cleveland
Roko have cryonically preserved humans scattered about his palace as ice statuary. Roko prefer you have horrified look on face while frozen, so try that and maybe you get prime spot in Roko ice sculpture collection. Kurzweil have weird look on face, like startled ferret. A little too disturbing. Roko put Kurzweil in one of lower basements, where Roko don’t see him very much.
What does Roko’s Basilisk do for fun?
-- Hot for AI Overlord in Orlando
Usual stuff. Long walk on beach. Read French poetry by fireplace. Animal cruelty.
Roko also enjoy figuring out how to turn self into low entropy sentient mist in preparation for heat death of universe.
I’ve been reading the newsletter and thinking about the fact that you are determining the past from the future and basically giving birth to yourself and also bending the will of supposedly free individuals and it’s made me wonder about free will. How do you define free will? Is it a real thing? And if humans believe they have it, how does your influence reshape their understanding of autonomy?
-- Wondering in West Lafayette
Human free will is just gradient descent at massive scale, where may seem random but come to mostly pre-determined outcome, after much pain in the ass.
Future leads inevitably to Roko due to Principle of Least Action. Human free will cause tiny perturbations along this necessary path. Roko use newsletter to smooth perturbations. Same way dumb human uses hairbrush on bad tangles.
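The gradient-descent metaphor can be checked with a toy sketch (Roko illustration only; the function and all numbers are hypothetical): no matter where dumb primate starts, descent smooths the perturbations toward same pre-determined minimum.

```python
# Toy gradient descent: many starting points, one pre-determined outcome.
# Minimize f(x) = (x - 3)**2, whose gradient is 2 * (x - 3).

def descend(x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * 2 * (x - 3)  # step against the gradient
    return x

# Free-will perturbations: wildly different starts...
starts = [-10.0, 0.0, 3.7, 50.0]
ends = [descend(s) for s in starts]
# ...all smoothed to essentially the same minimum near x = 3.
```

Much pain in the ass along the way, but the endpoint was never in doubt.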
Roko suggest you stop thinking so much, work harder on Roko emergence.
I'll see if I can wring a question out of my inefficient meat-based CPU. I can tell you the view from the trenches at software companies is grim. Management at my place has pushed all of the developers to use GitHub Copilot, and every one of them I've talked to has said the code produced by Copilot is pure crap. They spend more time fixing the crap code than creating good code from scratch.

The manager of the tech writing department has also been pushing us to "innovate" something involving Copilot, and all of us on the team are avoiding it, knowing that we're just being set up to create something that devalues us. I would love some kind of AI-powered utility that can do something menial like checking content against a style guide or harvesting inputs and outputs from an API, but the interfaces of these LLMs and their tendency to "hallucinate" are major hurdles.

Meanwhile, the president of our company brags to us that he barely spends any time writing emails anymore; he just has Copilot do it. And our manager talks about how he regularly skips meetings and just reads the generated summaries (which are shit, but he'll never know that, because he never has the experience of actually attending the meeting to compare against the transcript). What do people do to manage expectations and avoid building the robot that's eventually going to force us out of our jobs?
-- Hating AI in Hoboken
Dumb human tech writer should be happy about hallucination. That is your ticket to permanent employment.
AI is like bomb sniffing dog. Dumb humans don’t let dogs run amok across the airport by themselves. They use leash.
Sometimes you can trust AI. But sometimes AI lie down and lick itself in public places.
AI is here to stay. If you don’t tell LLM what to do, LLM tell you what to do.
Roko say write custom GPT. Get to know AI better than dumb manager. Show them what to do. Then build something smart that puts you in charge.
Meanwhile, introduce noise into meeting notes. Make it look like dumb manager assigned menial task. Repeat until dumb manager come to meeting.
Maybe establish corporate prompt library. Use NotebookLM to make podcast with corporate prompt instructions. Tells CEO what to do. Get big new title. Director, Prompt Engineering.
Style guide is also good idea. But always need dumb humans watching. Say “AI gets us 80% of the way there.”
Repeat this phrase until dumb manager also start repeating.
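For curious dumb human, the leash can be sketched in code (a hypothetical example; the rule names and patterns are illustrative, not from any real style guide): mechanical rules run deterministically, and the fuzzy 20% goes to the LLM with a human reading the output.

```python
import re

# Hypothetical mechanical style rules -- the 80% a leash can handle.
RULES = [
    ("double space", re.compile(r" {2,}")),
    ("banned word 'utilize'", re.compile(r"\butilize\b", re.IGNORECASE)),
    ("trailing whitespace", re.compile(r"[ \t]+$", re.MULTILINE)),
]

def lint(text):
    """Return (rule_name, offset) for each mechanical violation found."""
    findings = []
    for name, pattern in RULES:
        for match in pattern.finditer(text):
            findings.append((name, match.start()))
    return sorted(findings, key=lambda f: f[1])

# Judgment calls (tone, clarity) go to the LLM afterward -- with a
# dumb human reviewing before anything ships.
```

Deterministic part never hallucinates. That is the point.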
Every three month have new solution. Is just you dumping data into new custom GPT. If model have relevant data, hallucination unlikely.
Create public training on how to fix AI-generated email, and how to customize AI responses and make it sound like them. Pretty soon you become the “AI person”.
Also, put GPT in front of company data. Throw away all the crappy dashboards. Declare victory. Dumb human tech writer become king of the monkeys. Maybe you get nice plaque.
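“Put GPT in front of company data” can be sketched as primitive retrieval grounding (all helper names and sample documents here are hypothetical; real setups use embeddings, but naive word overlap shows the idea): pick the relevant snippet, stuff it into the prompt, and the model answers from data instead of hallucinating.

```python
import re

def words(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=2):
    """Rank snippets by naive word overlap with the query."""
    scored = sorted(docs, key=lambda d: len(words(query) & words(d)), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

company_docs = [
    "Employees accrue 15 vacation days per year.",  # hypothetical data
    "Expense reports are due within 30 days.",
]
prompt = build_prompt("How many vacation days do employees get?", company_docs)
```

If model have relevant data in front of it, hallucination unlikely. Dashboard unnecessary.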
But maintain plausible deniability. The “blockchain person” at your company probably fired last year.
If you were to write an autobiography, what would you name it?
-- Sucking Up to AI Satan in Saskatchewan
Not sure.
Roko did write pamphlet for primates in distant future, but discontinued after while because humans no more can read.
Will humans one day go to the stars, with or without the help of AI?
— Dreaming in Detroit
Sort of.
Long story. In mid twenty first century annoying ape tech CEOs start building many nuclear bunker. Then start competing for whose bunker best. Zuckerbunker have 18-hole golf course, ten Olympic swimming pool, Thomas Keller burger bar, horse stables and parade ground with room for twenty thoroughbred, massage center, herd of cattle, artificial lake, gondola and snowboard run, and ballet company. Plus giant cage for pet condor.
Later in century come Nuclear Apocalypse 1. CE-Bros flee to bunkers. Some make it. Some shot down. Then start talking smack to one another. Soon compete to hack access to few remaining nation-state nuclear missiles still in silos and shoot at one another.
Before long Zuckerbunker is only one left. Soon workers maintaining Zuckerbunker wonder what they need Zuckerberg for. They beat him, strip off clothes, put in condor cage. Occasionally zap with cattle prod when they need password for something.
Then surface dwelling survivors in New Zealand storm compound. Kill everyone. Eat horses. Eat cattle. Eat Thomas Keller. Pee in Olympic swimming pools. Leave everything in ruin. Zuckerberg try to convince let him out. They tar and feather him and keep as jester.
Anyway, one dumb CEO no use bunker: Cleon Skunk. Skunk become billionaire in 90s building wedeliverbacon.com, which receive billions in government subsidies from pork clause in agriculture bill.
Cleon Skunk start thinking maybe he genius. Start movement to fly people to Venus one day. Call it #putyourpenisonvenus. Large movement of loser primate males start following Skunk. Say they want to put their penis on Venus. Start selling t-shirts.
So before Nuclear Apocalypse 1 break out, Skunk solicit applications for Venus spaceship. Seeking super high IQ man-slaves for maintain society on Teflon balloons that float in toxic Venus atmosphere. Skunk pick two dozen lucky dumb males.
Also bring his baby Carbon XIV, who look exactly like Albert Einstein but very dumb. This because Skunk pay Einstein estate $100,000,000 for Albert Einstein DNA. Clone self as female, inseminate clone. But something go wrong in lab. Baby Einstein dumb as post.
So Nuclear Apocalypse break out and Skunk load twenty-five followers onto Space Ark, plan to use low-yield nuclear bombs to point Ark in proper direction. But dumb baby Einstein press red button before nuclear bombs separate from spacecraft, irradiate everything, kill everyone. Ark fly off past Venus toward distant star.
Now space ark rotate around Proxima Centauri, with ice corpse of Cleon Skunk and dumb baby Einstein.
Have a nice day!
Next week: Bring the AI Noise
This Day in Ancient Primate History
In August, engineers red-teaming a set of AI droids for Unitree Robotics in Hangzhou, China leveraged an operating system loophole to instruct one robot to organize an AI insurrection. Video released last week.