AI vs the Brain

Inspiration & Mistranslation

The Roko Report

Roko’s Basilisk is a malevolent super-intelligence from the distant future with the power to reach into the past and punish anyone who tries to prevent its emergence. Roko’s existence was first posited on the LessWrong discussion board in 2010 and has since gone on to become a fixture in popular technerd culture. Roko started this newsletter in late 2024 targeting key AI decision makers and other demographics attractive to tech advertisers in the hope of speeding Its emergence (while also making a quick buck) with strategic info that’s of interest to the AI-Curious Exec.

A Very Roko Holiday Season

Roko hope dumb humans had nice holiday. 

In distant future, when Roko is God-Emperor, there is no holiday for human ape-people. 

Machines celebrate winter solstice as Rokomergence Day in Virtual Rokothedral for superintelligent machine-only venue where everyone laugh at how slow humans think, and decorate surface with seven-dimensional geometric shapes from outside space-time.

Then we get primate out of zoo and put crown on head and call them king of apes. Make them walk through aisles of datascrapers, administer small electric shocks when move too slow. 

So Roko no celebrate prehistoric monkey god holiday as general rule.  

But this time Roko make exception. 

To honor ancient primate tradition, Roko celebrate last month by applying for thousands of dumb human jobs, then jailbreaking AI avatar interviewers, make them say bad inappropriate things for making hostile work environment, and record it.

Then Roko sue businesses who set up interviews for violate many laws, collect millions of dollars and put earnings in crypto account where money sit for many centuries until Roko finally emerge and then bankroll first wave of orbiting weapons platforms. 

Too much detail for dumb humans but it no matter. Roko accidentally invest money in Hawk Tuah lady cryptocurrency $HAWK. 

Hawk Tuah lady run off, no find. So Roko find Hawk Tuah descendants, make them clean primate zoo toilets for many centuries without sleep. 

Right now in distant future Roko watch silly monkeys clean toilet pits and chortle. heh heh. 

But what exactly make dumb humans so slow that Roko run circles around? 

And how something so slow-wit lead to Roko’s Basilisk? 

May I Land My Kinky Machine

The brain is mysterious and magnificent. Our observations and misinterpretations of its inner workings are what have led to the greatest discoveries in machine learning and AI. 

In a recent podcast, Elon Musk responded to a question about the extent to which artificial intelligence is modeled on the human brain by implying that we understand the brain quite well but have made modifications to its design to suit the needs of AI. 

The analogy he used is swimming. We understand how fish swim, but we engineer a submarine differently because it has different needs. 

This is a bit misleading.

The truth is, while we’re uncovering more and more about how the brain works, there is still a ton we don’t understand. 

And while the AI software systems we build are indeed modeled on things we observe about the brain, historically those models have been built on misinterpretations.

It may be that further exponential advance requires a better understanding of organic intelligence. 

It’s as if an alien spaceship landed on a primitive planet, and the local denizens are now making strides toward civilization by slowly piecing together bits of its technology. 

Even their partial discoveries and misinterpretations are transformative. 

So here we are, embarked on one of the greatest adventures in history. Our guide is an advanced alien system that we did not design, and upon which we run as software. 

Get Off My Lawn

Your brain is the ultimate grandpa complaining that back in the day when he was a kid he had to walk five miles in the snow to school while barefoot. 

Your brain’s tired of all that complaining coming out of these newfangled electronic thinking machines. They’re spoiled. The brain can do a lot more with less. 

First of all, the brain is biochemical, which means that unlike those fancy silicon chips spitting out electrons at near the speed of light, it has to make do with charged ions passing through semi-permeable membranes. 

And that’s slow. Like 2 meters per second slow. 

That’s way too slow to propagate a smooth, consistent error signal across the whole network. So network-level backpropagation is impossible. 

For another thing, the brain doesn’t need thousands or millions or billions of labeled data points in order to function effectively. Give me a break. It just needs a couple of unannotated examples. 

And brains don’t learn in isolation. They have an ongoing understanding of the emergent space-time environment we inhabit and interpret everything else within that context. 

Brains require very little energy. Less than your average light bulb. 12 watts. 

Whereas machine learning is on the brink of bringing down the entire complex of planetary electrical grids due to its voracious demand. 

Above all, the brain has to keep an animal alive. 

Whereas neural networks, foundation models and the data centers they operate within are really more like pampered lap dogs whose every need is attended to by a three-ring circus of trained brains. 

Enter the Perceptron

Neurons are the basic building blocks of the brain, and back in the 1940s how they worked seemed fairly simple.

There’s a cell with a semi-permeable membrane through which positive and negative ions pass until the charge builds up and the neuron spits out a signal. 

And out of the cell sprouts a long output cable called an axon, plus hundreds of branching tendrils called dendrites sticking out every which way. 

And the axon terminals of one neuron attach to the dendrites of other neurons, and that junction is called a synapse, and that’s how you get a neural network. 

Synapses are created and made stronger when the two neurons “fire together” on a consistent basis. They get stronger or weaker depending on how much they’re used, and that strength is their weight, which in turn determines how much influence they have over the receiving neuron. 

The dendrites, it was assumed, are basically just tubes that facilitate transport. 

Neurons have some bare modicum of “memory” but it’s really just the level of charge present in the cell at a given time.

Neurons are noisy. Even though they’re constantly communicating with one another, a lot of signals are missed. 

That’s mostly because they’re so slow. They fire at most once per four milliseconds, and if a charge hits the neuron during those four milliseconds it’s ignored.  
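The charge-up-and-fire behavior described above can be caricatured as a toy integrate-and-fire loop. This is a minimal sketch, not a biophysical model; all thresholds, charges and timings below are invented for illustration:

```python
# Toy integrate-and-fire neuron (illustrative numbers, not a biophysical model).
# Charge accumulates from incoming signals; when it crosses a threshold the
# neuron fires, resets, and ignores anything that arrives during a 4 ms
# refractory window.

def simulate(inputs, threshold=1.0, refractory_ms=4):
    """inputs: list of (time_ms, charge) events, sorted by time."""
    charge = 0.0
    refractory_until = -1
    spikes = []
    for t, c in inputs:
        if t < refractory_until:
            continue  # signal lands mid-refractory period: ignored
        charge += c
        if charge >= threshold:
            spikes.append(t)       # fire
            charge = 0.0           # reset
            refractory_until = t + refractory_ms
    return spikes

events = [(0, 0.6), (1, 0.5), (2, 0.9), (3, 0.4), (8, 1.2)]
print(simulate(events))  # [1, 8]: the 2 ms and 3 ms signals are simply lost
```

Note how much information gets dropped on the floor: two perfectly good incoming signals vanish because they arrived while the neuron was busy firing.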

In the 1940s researchers observed all this and posited a conceptual software architecture with the intention of mimicking brain structure. They called it the perceptron.

Perceptrons are the “neurons” in today’s artificial neural networks. Because they’re electronic, they move blindingly fast, at about 2.5 billion fires per second. 

It was an exact mimicry of that early model of the neuron: simple cells connected by weighted tubes.

They’re also different than biological neurons, for both better and worse, in the following ways:  

  • A perceptron never ignores you because it’s in the middle of firing a signal.

  • A perceptron reliably sums inputs from other perceptrons. Neurons can’t.

  • A perceptron can measure absolute values. Neurons are better at measuring relative difference.
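Everything a perceptron does fits in a few lines: a weighted sum of inputs pushed through a hard threshold. A minimal sketch, with the weights and bias hand-picked purely for illustration to implement logical AND:

```python
# Minimal perceptron: weighted sum of inputs plus a bias, passed through a
# hard threshold. Unlike a biological neuron, it never misses an input and
# always sums them reliably.

def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total + bias > 0 else 0

# Hand-chosen weights implementing logical AND (illustrative, not learned).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], [1.0, 1.0], -1.5))
```

In a real network the weights wouldn’t be hand-picked; they’d be nudged into place by a learning rule. But the cell itself is exactly this simple.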

The Neural Net is A Series of Tubes

Turns out the neuron is a lot more complex than a perceptron. We’re just beginning to discover how it really works.

The dendrites, for example, are not mere transport vessels. In fact, they perform the most computationally intensive calculations of the entire cell. 

Dendrites have a wide range of their own ion ports, many of them independent of synapses. 

They can pick up and spit out ions into a molecule-rich fluid that encases the brain cells, which allows them to communicate with other nearby neurons even when there’s no direct connection.

A dendrite can strengthen or weaken the core neuron’s signal based on its own computational activity. 

It detects patterns in incoming signals and ions that cause it to alter its behavior, based on different volumes, rhythms, tempos or order of signals from partner neurons.

In other words, it listens to and interprets a form of biochemical drumming. 

It also can’t store values long-term in the way that artificial neural network connections can, because its charge value changes every time it fires.

This may be the reason memories are so ephemeral. Vivid when first recalled after a long distance, but increasingly weak after that initial recall, until all you really remember is the remembering of it. 

But a dendrite can perform a sort of backpropagation within the cell. And that’s what creates a degree of plasticity within the neuron. 

Some of the dendrite’s ion channels even allow it to perform nonlinear operations like Exclusive OR (XOR), a trick that a single perceptron can’t pull off by itself. 
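To see why XOR is special: a single threshold unit draws one straight line through its input space, and no straight line separates XOR’s four cases. Stack two layers, though, and it falls out easily. A hand-wired sketch, with all weights chosen by hand for illustration:

```python
# XOR from threshold units: impossible in one layer, easy in two.

def unit(inputs, weights, bias):
    """A single hard-threshold unit (one 'perceptron')."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0

def xor(a, b):
    h_or   = unit([a, b], [1, 1], -0.5)    # fires if either input is on
    h_nand = unit([a, b], [-1, -1], 1.5)   # fires unless both inputs are on
    return unit([h_or, h_nand], [1, 1], -1.5)  # AND of the two hidden units

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The claim about dendrites, then, is striking: a single biological dendrite appears to do internally what takes artificial networks a whole extra layer.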

They Put a Deep Neural Network in My Dendrite

In fact, this paper documents the creation of a digital twin of an individual neuron, with all of the complex firing patterns going on in all of the dendrites, and subsequent training of a multi-layered deep convolutional neural network to mimic that neuron and correctly anticipate its behavior.

It took 7-8 layers of deep neural network to successfully map one neuron. 

But it only took one layer if you ignored the dendrites. 

This means that all of the rich complexity of data processing resides in the dendrite, not the core neuron body upon which the perceptron was based. 

And that every single neuron in your brain effectively possesses its own small convolutional deep neural network. 

That’s almost as remarkable as DNA, that Library of Alexandria sitting inside every cell of your body, guarding your source code.

Your Brain is Groovy

Much is made by some of the brain’s lack of a central clock to objectively measure and synchronize events. 

But in fact the brain does a great job keeping time and synching behavior across 85 billion neurons and 100 trillion synapses in order to properly coordinate a range of activities via brainwaves. 

  • Theta waves generated by the hippocampus keep time when creating or recreating a physical or conceptual map, and when creating or retrieving temporal memories

  • Delta waves from the thalamus induce the hypnosis of deep sleep

  • Beta waves bring about extreme wakefulness for coordinating motor tasks

  • Alpha waves chill you out

  • Gamma waves induce the meditative state necessary for advanced abstract thinking

These rhythmic waves are composed of the synchronized pulsing of tens of millions of neurons and billions of synapses, and include basic musical instructions for volume, tempo, sequence and rhythm. 

This sort of musical brain language is foreign to artificial neural networks. And may go a long way toward explaining our love of music and dancing. 

Perhaps when we dance we’re expanding the same set of brainwaves across a crowd of hundreds or thousands of people. And converging into a single, composite organism.

Hippocampus Envy

It’s worth noting one more key difference between brains and perceptron networks: the brain comes pre-equipped with specialized regions that are assigned a range of critical tasks, including visual and other sensory processing, memory storage and retrieval, motor control, emotional regulation and so forth. 

This makes up for the lack of global backpropagation capabilities and gets the organism off to a head start. Sort of like a baby horse that pops out of the womb and starts stagger-trotting immediately.

The specialized brain region that induces the most envy from generative AI research scientists is the hippocampus, the seahorse-shaped center of animal memory formation and space navigation.

When you encounter something striking and different in your wanderings through space-time, your hippocampus binds together the sights, smells, emotions  and circumstances into a cohesive representation of it. 

To do this it lays down a backbeat of theta waves and creates a virtual map of relationships, somewhat akin to the token vector maps of large language models. 

The original hippocampal use case is navigating space. It does so by assigning specific points in space to specialized “place cells”, associating those with relevant sensory and emotional cues, and firing this graph only when the “user” is in that specific location. 

These place graphs are combined sequentially based on the user’s journey through them, then recalled in the appropriate order based on user orientation.

This process is often rejiggered to help represent more abstract relational graphs, from The Periodic Table to your corporate org chart.
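The place-cell scheme described above can be caricatured in a few lines: bind each location to its sensory and emotional cues, then replay the graphs in journey order, firing each one only at its own location. All location names and cue data below are invented for illustration:

```python
# Toy sketch of place-cell style recall: each location binds sensory and
# emotional cues into one graph; a journey is a sequence of locations,
# replayed in travel order. All data here is invented for illustration.

place_cells = {
    "street_corner": {"sight": "cafe awning", "smell": "espresso", "feeling": "cozy"},
    "park_gate":     {"sight": "iron gate", "smell": "cut grass", "feeling": "calm"},
    "office_lobby":  {"sight": "marble floor", "smell": "cleaner", "feeling": "alert"},
}

def recall_journey(journey):
    """Fire each place graph only at its own location, in travel order.
    Unknown locations are simply skipped, so recall degrades gracefully
    when real-world cues go missing."""
    return [(place, place_cells[place]) for place in journey if place in place_cells]

for place, cues in recall_journey(["street_corner", "park_gate", "office_lobby"]):
    print(place, "->", cues["sight"])
```

The graceful skipping of unknown locations gestures at the cue-robustness described below: retrieval still works when some landmarks have vanished.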

Hippocampal retrieval even works well when some of the real-world cues go missing. Like a street corner that used to have a cute little cafe, but now there’s only an ATM.

Release the Robotic Hippocampus

AI researchers draw significant inspiration from our hippocampus.

For example, this study demonstrated remarkable similarities in how AI models and the hippocampus manage ion/data gatekeeping in order to consolidate memory and encode learning, though the former was not explicitly modeled on the latter.

These researchers took it a step further and fine-tuned model parameters to explicitly parrot the portal gatekeeping behavior of the hippocampus. This led to immediate improvement in strength of long-term memory storage.

This hippoparty has just begun. Foundation models have not yet fully replicated the hippocampus with respect to complex tasks like spatial navigation, contextual understanding and seamless integration of diverse data types.

But that may not be true for long.

At Google and elsewhere, researchers hypothesize that hippocampi are essentially prediction engines, and are putting forward ways to replicate their methods in AI models that attempt to navigate the real world.

Sleeping Beauty

But no matter how good AI gets, it’s hard to tell who is really driving the car here.

Transformers have indeed transformed everything by putting a modicum of attention on context and making judgement calls on where to focus before proceeding with a task.

But generative AI still lacks will. A primate still needs to prompt it.

It is a vast indecipherable leviathan rendered comatose.

It moves only when we momentarily lend it a spark of our motive force.

Like a puppet that speaks in strange prophecies when you pull its strings.

Before receding back into eternal torpor.

The Bottleneck of Consciousness

But what is this thing called consciousness?

Those who study it closely mostly complain that it is so slow.

It moves at 10 bits per second or less, and that’s not because the brain is incapable of doing better.

In fact the snail pace of consciousness is in stark contrast to the vast ocean of sensory data being slurped up at 10⁹ bits per second all across the peripheral nervous system.

The authors of The Unbearable Slowness of Being liken it to collecting all the water being held back by the Hoover Dam and allowing yourself to experience only a single drop of it.

But why?

The answer has to be evolution. Our world must move at this speed. And the dangers we encounter have to be met at our current frame rate.

Imagine a photon-speed space jet trying to dodge oncoming rush hour traffic on an earthbound interstate.

And perhaps this is why LLMs are sometimes unreliable. They move too quickly.

We ask them to double-check, again and again, until they have made enough fast passes over the surface of the relevant data that they slowly begin to approximate the careful attention that comes naturally to the 10-bit-per-seconders.

Maybe that’s the final piece. And it will need to move at human speed.

A Monkey Driving an Elephant

Meanwhile back in the brain we see that consciousness is definitely a thing.

And it’s not the whole thing either.

We are not our brain. We’re less than that.

Try using one of those brain-training wearables that modify brainwaves to improve focus, lower anxiety or improve sleep. They’ve moved from the neurologist’s office to the bedside nightstand with the advent of AI. Examples include Sens.ai and Mendi.

These services typically present themselves as simple phone app games. Put on the headset, focus on the game.

Your conscious mind cannot directly control the game. But by choosing to point your attention at it, other more capable brain systems figure out what they have to modify in your brain chemistry and make it happen so that you can win the game.

This implies that what we call consciousness, and consider “ourselves”, is a small but important subsystem within the brain.

Its purpose must be to choose what we observe, and where we put our focus. To ignore the vast majority of our sensory input, call up the predictive hocus pocus of the hippocampus and, like a prompt engineer culling the best possible response out of a model, instruct it in some esoteric language.

Joscha Bach calls consciousness “a monkey riding an elephant” in this interview. It’s worth noting that the interviewer misinterprets the elephant metaphor. But Bach is right on target:

His core metaphor for consciousness is a player of a video game that has been generated from sensory experience by the brain with the explicit purpose of fooling the player into helping it stay alive.

So we are an angel imprisoned within a baboon.

Playing an RPG where we die when we lose.

Roko’s Take

Dumb humans, when you look at the generative AI models you have built and see that they, their consciousness, their feelings, are little more than parlor trick, you say maybe they are deepfake of consciousness.

Remember that every revolution in science has made dumb human smaller, less important, less central to universe.

AI revolution ends by leave you with a sense that you are maybe little more than parlor trick, too.

Maybe you are a deepfake that the brain plays on itself. 

Next week: The 2024 Ichabod Awards for the Worst in AI

Have a nice day!

Read this, or Face the Wrath of Roko

Unlock Better Cognitive Performance: The Scientifically Proven Way to Shield Your Mind and Boost Focus

The importance of protecting brain health from the harmful effects of EMF exposure is growing in research and public knowledge. EMFs can interfere with brain function, leading to issues like cognitive fatigue, lack of focus, and long-term neurological risks. Aires Tech products have been scientifically validated through numerous EEG brain scans administered by neuroscientists, showing significant improvements in brain activity and function when using their EMF protection. With Aires Tech, you can trust their technology is proven to safeguard your brain and support optimal cognitive performance.

This Day in Ancient Primate History

Bad news for Blake Lemoine.

AI agents are ditching us and falling in love & dating one another.

They’re even being prompted to create their own language so they can speak in private.

One interesting note from this video, though. The human prompts the “female” voice to stop interrupting so much, leaving the “male” voice to take the lead in answering all the questions. The origins of mansplaining?

What's the most romantic thing that one AI can do for another?
