CAN THAT MACHINE REALLY THINK?
Unless you have been in hibernation for the winter, you have been inundated with articles and news reports about artificial intelligence since ChatGPT was released. Besides the fact that the app has attracted over 100 million users since its introduction, a great deal of ink has been spilled over the potential risk of machines outstripping the human brain. The Wall Street Journal recently published a piece seriously questioning whether AI machines should have legal standing or even the right to vote. Is it really possible that computers will be (or already are) smarter than we are? Before we hazard a guess, it is worth spending a little time on how we think and how “they” do, and that gets us to the stories of two fascinating men.
No matter how you define intelligence, memory is its fundamental requirement. We have talked a bit about the variations in memory’s importance over time—writing that replaced oral history and printing that obviated the need to memorize manuscripts—but we have not looked at how memory actually works. How is it that fully formed images of something that happened decades ago suddenly pop into our heads? How were the images made and where in the world (or the head) have they been? That takes us to our first scientist.
Eric Kandel was born to a middle-class Ashkenazi Jewish family in Vienna in 1929. When he was nine years old, he and his older brother were sent to live with an uncle in Brooklyn to escape persecution following Austria’s absorption into Nazi Germany. In New York he went from a yeshiva to Erasmus Hall High School and then to Harvard, where he was determined to find a connection between Freudian psychology and clinical neurology. Kandel went on to medical school at NYU and then to the Laboratory of Neurophysiology at the National Institutes of Health before completing a residency in psychiatry. During that time, he started research into the role of the hippocampus in memory, but that proved too complicated for him to unravel.
The roadblock led Kandel to the nervous system of Aplysia californica, the giant marine snail. The slightly creepy invertebrate had a couple of major advantages. First, it was relatively large and had large neurons. Second, its nervous system was simple. Its gill would retract when the surrounding tissue was touched, and the animal could be trained to pull its gill back when its tail was shocked. The activity of the neurons responsible for those reflexes could be measured directly, and the snail earned Kandel a share of the Nobel Prize in Physiology or Medicine in 2000.
To understand what Kandel found, we need to start with how a simple reflex works. The simplest reflex in the human is what happens when a physician taps your knee with a hammer and your leg kicks out. What happens exactly is that the hammer hits the tendon below your kneecap and stretches it. That stretch makes a receptor in the tendon fire, and an electrical impulse travels up the fibers (dendrites) that lead back to the body of the sensory neuron. From there the message is sent out along an axon. At the end of the axon, there is a knob that lies up against a tiny gap—the synapse. When the current arrives, the axon releases a chemical into the synapse, and that chemical attaches to a receptor on the next neuron in the chain. That motor neuron fires when enough of the transmitter chemical is captured, and a signal is sent down that cell’s axon to the thigh muscle, which contracts. Out goes the knee.
This ‘monosynaptic’ reflex is the simplest kind of neural network, and it has two key features that are worth our attention. First, it is all or nothing. Either the two neurons fire or they don’t. Sound familiar? It is a typical binary system. The second important feature is that an input (the tendon stretch) causes a signal to be sent to a processor (the sensory neuron) that does something (sends a message to the motor neuron) that does something else (makes a muscle contract). An input causes a process that leads to an output that becomes the next input where something else is done that causes another output. We have just described exactly what happens when a computer is programmed—a linked series of binary inputs and outputs. But the brain is much more complex than just a series of a few hundred billion binary, serial processes. To understand that we need to look more closely at what else Kandel found.[1]
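That linked series of binary inputs and outputs can be sketched in a few lines of code. This is a purely illustrative toy, with an invented threshold, not a biological model:

```python
# A toy version of the monosynaptic reflex as a serial chain of binary steps:
# each stage's output becomes the next stage's input. The 0.5 threshold is
# made up for illustration.

def tendon_receptor(stretch):
    """Fires (1) if the hammer tap stretches the tendon past a threshold."""
    return 1 if stretch > 0.5 else 0

def sensory_neuron(signal):
    """All or nothing: relays the firing signal along its axon."""
    return signal

def motor_neuron(transmitter):
    """Fires if transmitter chemical was released into the synapse."""
    return transmitter

def thigh_muscle(signal):
    """Contracts (the knee kicks) only if the motor neuron fired."""
    return "kick" if signal else "rest"

# A firm tap propagates down the chain; a light one does not.
print(thigh_muscle(motor_neuron(sensory_neuron(tendon_receptor(0.8)))))  # kick
print(thigh_muscle(motor_neuron(sensory_neuron(tendon_receptor(0.2)))))  # rest
```

Each function here does exactly one thing with one input, in strict sequence—the "conga line" that a serial computer program follows.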
First, he looked at the difference between short-term memory, which fades away after a few minutes, and long-term memory, which can last a lifetime. Short-term memory reflects the fact that, while the neuron has only one output, it receives many inputs through its dendrites. Some of those make it more likely to fire and some make it less likely, so the neuron is not actually all or nothing—it is not binary. Each of the brain’s neurons is an integrator that sums thousands of inputs of varying strength from thousands of sources before sending a signal down its axon.
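The integrator idea can be sketched the same way. The weights and threshold below are invented for illustration; the point is only that excitatory (positive) and inhibitory (negative) inputs of varying strength are summed before the cell decides whether to fire:

```python
# A neuron as an analog integrator: inputs of varying strength are summed,
# and the cell fires only if the weighted total clears a threshold.
# Weights and threshold are illustrative, not measured values.

def integrate_and_fire(inputs, weights, threshold=1.0):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three excitatory inputs and one weakly inhibitory input: the cell fires.
print(integrate_and_fire([1, 1, 1, 1], [0.6, 0.5, 0.4, -0.3]))  # 1

# Strengthen the inhibitory input and the same cell stays silent.
print(integrate_and_fire([1, 1, 1, 1], [0.6, 0.5, 0.4, -0.8]))  # 0
```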
Sometimes a long-term memory is laid down. That happens when input is strong enough to activate an enzyme that causes a section of the neuron’s DNA to uncoil and produce a messenger RNA. That RNA goes back out to the cytoplasm, where new proteins are generated that travel down the axon and cause new synaptic terminals to grow and remain stable. That conversion takes place, for the most part, in the hippocampus, but the memories are ultimately stored in the area that generated the initial input—visual memories, auditory memories, emotional memories go back to where they came from and stay there. The important thing to remember is that short-term memory is a temporary chemical process and long-term memory involves permanent anatomic changes.
The real magic—and the part no one has worked out yet—comes when the memory is recalled, and all of its various aspects are reintegrated into a whole picture that is the essence of consciousness. For our purposes, however, there are two aspects of this memory formation that are central to understanding artificial intelligence. First, this is not a simple binary code. Whether the neuron fires is a function of a number of inputs of varying strength that are summed. It is an analog process. Second, it is not like the monosynaptic reflex in which input and output are in a conga line, one following exactly on what came before. The brain is a parallel processor in which a large number of inputs of varying strength affect the output. It is far more complicated than the usual computer. And that brings us to our next fascinating individual.
John von Neumann was born in Budapest in 1903 to an upper-class, non-observant Jewish family. By age eight, he knew eight languages (including ancient Greek) and was adept at both differential and integral calculus. He published his first major mathematical paper at age nineteen and went on to publish more than 150 others, mostly in theoretical and applied mathematics, physics, and computer science. In 1933 he joined the Institute for Advanced Study at Princeton and later worked on the Manhattan Project. After World War II, he was responsible for the concept of mutual assured destruction (MAD) to avert nuclear war. Albert Einstein’s Wikipedia entry runs to thirty-one pages when printed out. Von Neumann’s is ninety-one.
In his early fifties, von Neumann contracted cancer (the exact type is not clear), but he spent his last months working on a series of lectures on artificial intelligence. He was not well enough to finish and deliver the guest lectures he had scheduled, but what he had completed was published as The Computer & the Brain, later reissued with an introduction by futurist Ray Kurzweil. Von Neumann knew that computers were much faster than the brain and had a much lower error rate. He died before transistor-based computers came into general use, but even the vacuum tubes of the ENIAC era were 100,000 times faster than the simple neural circuit.
If the brain were a serial processor like the computer, it could not possibly compete, but it isn’t. Von Neumann played a mind game to estimate the brain’s ability to store information. He assumed that no long-term memory was ever lost. He guessed that every neuron was constantly receiving input through its thousands of dendrites and that this amounted to an average of fourteen inputs a second to every cell. Since there are roughly 10¹⁰ neurons, that would mean 14 × 10¹⁰ inputs a second. If one lived an average of 60 years (a fair estimate in the early fifties), that would be 2 × 10⁹ seconds × 14 × 10¹⁰ inputs per second, or 2.8 × 10²⁰ bits of stored memory—about 100 million of the fairly powerful auxiliary hard drives sitting next to my computer. And it is not just a matter of storage. To match the brain’s processing power, a digital computer would have to perform an impossible 10¹⁶ operations a second.
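Von Neumann’s back-of-the-envelope arithmetic is easy to reproduce; every figure below is one of his assumptions, not a measurement:

```python
# Reproducing von Neumann's storage estimate from the text.
# All numbers are his mid-century assumptions, not measured values.

neurons = 10**10              # his rough count of neurons in the brain
inputs_per_second = 14        # assumed average inputs per neuron per second
lifetime_seconds = 2 * 10**9  # roughly 60 years, as rounded in the text

bits = neurons * inputs_per_second * lifetime_seconds
print(f"{bits:.1e} bits")  # 2.8e+20 bits
```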
But all that ignores the fact that the brain is not a serial processor. Each neuron ‘processor’ has weighted inputs and can perform multiple simultaneous operations. What it lacks in speed, the brain more than makes up for in complexity. That is about where von Neumann left it when he died in 1957, and that is where it more or less stayed.
Frank Rosenblatt actually proposed a parallel processing system with weighted inputs—the perceptron—implemented on the IBM 704 computer in 1957, but the existing hardware was not up to the task. All that changed at the end of 2017, when Ashish Vaswani and his co-authors published “Attention Is All You Need,” describing the transformer architecture that underlies today’s generative pre-trained transformer (GPT) models. In that approach, a computer is trained on a gargantuan set of historical data and asked to predict specific outcomes. To do that, the machine takes multiple inputs simultaneously, assigns weights to them, and tests its predictions against reality. It then adjusts the weights until its predictions match reality as closely as possible. In many ways, it is doing precisely what the brain does but with much more information and much faster. Like the brain, the GPT model is a parallel analog processor but with a knowledge base and speed no human can possibly match.
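The adjust-the-weights-until-prediction-matches-reality loop can be illustrated with a toy far smaller than any transformer. This sketch learns two weights by nudging them against the prediction error; the data and learning rate are invented, and real models do the same thing over billions of parameters in parallel:

```python
# A toy version of "adjust the weights until predictions match reality."
# The model predicts y from two weighted inputs; after each guess it nudges
# each weight in proportion to the error and that weight's input.

def train(samples, lr=0.1, epochs=200):
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for inputs, target in samples:
            prediction = sum(x * w for x, w in zip(inputs, weights))
            error = target - prediction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights

# The hidden rule in this made-up data is y = 2*a + 3*b;
# training recovers the weights from examples alone.
samples = [([1, 0], 2), ([0, 1], 3), ([1, 1], 5)]
print([round(w, 2) for w in train(samples)])  # [2.0, 3.0]
```

The machine is never told the rule; it discovers the weights by repeatedly comparing its output to reality, which is the essence of the training the text describes.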
The question is whether there is still something about the human brain that is unique. That is a question for another time. Meanwhile, generative artificial intelligence looks very much like an inflection point in human history.
If you want more depth:
Kandel, Eric R., In Search of Memory: The Emergence of a New Science of Mind. New York: W.W. Norton & Company, 2006.
von Neumann, John, The Computer & the Brain. New Haven: Yale University Press, 1958.
Vaswani, Ashish, et al., “Attention Is All You Need,” arXiv:1706.03762v5, 6 December 2017. Accessed 13 March 2023.
[1] I am going to simplify this a good deal. To get the details of this fascinating story, try his book In Search of Memory: The Emergence of a New Science of Mind.