First we aimed to conquer the Earth. Then we exhausted Nature’s resources. Now we are trying to compete with our own invention – artificial intelligence. Is ours a story of self-cannibalism?
This book review was published earlier this summer in Leonardo Reviews. I felt very pessimistic after writing it. There is a general sense that humans are but machines and can be programmed to perfection – perfection as defined by a goal-oriented greed to be faster, bigger… more is more. We throw ourselves under the bus of progress. Inevitably, those with power will continue to prevail, from tech giants to autocrats. And our free will is being crushed—unbeknownst to us, because it is unconscious—as we continue to glorify a self-defeating victory.
A Book Review:
The Discrete Charm of the Machine: Why the World Became Digital
by Ken Steiglitz
Princeton University Press, Princeton, NJ, 2019
256 pp., illus. 40 b/w. Trade, $27.95
ISBN: 978-0-691-17943-8.
A theatre critic once described the performance of an English stage actor this way: each word is sent individually gift-packed. The human voice is analog (a continuous sound wave), while speech is discrete: words, as opposed to a grunt. Shakespearean words carry far more information than most, so it is critical that actors deliver them with great clarity. But like any delivery, the trip is full of potential noisy disruptions: the theatre’s acoustics, air conditioning, or outside traffic—and, if the performance is being recorded, the static and other noise inherent in the equipment. Such an analog recording will eventually become unreadable if it is transferred or edited repeatedly.
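Steiglitz’s case for digital can be made concrete with a toy simulation (my own sketch, not the book’s): copy the same five samples a hundred times, adding a little noise at each generation. The analog chain accumulates every error; the digital chain re-thresholds each value back to a clean 0 or 1, so small errors never compound.

```python
import random

def analog_copy(signal, noise=0.02):
    """Each analog copy adds a little irreducible noise."""
    return [s + random.gauss(0, noise) for s in signal]

def digital_copy(bits, noise=0.02):
    """A digital copy suffers the same noise, but re-thresholding
    snaps each value back to 0 or 1, so errors do not accumulate."""
    noisy = [b + random.gauss(0, noise) for b in bits]
    return [1.0 if v > 0.5 else 0.0 for v in noisy]

data = [0.0, 1.0, 1.0, 0.0, 1.0]
analog, digital = data[:], data[:]
for _ in range(100):            # a hundred generations of copying
    analog = analog_copy(analog)
    digital = digital_copy(digital)

print(analog)   # drifted well away from clean 0s and 1s
print(digital)  # still exactly [0.0, 1.0, 1.0, 0.0, 1.0]
```

After a hundred generations the analog samples have wandered, while the digital copy is, with overwhelming probability, bit-for-bit identical to the original: the book’s whole argument in a dozen lines.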
The Discrete Charm of the Machine is a history of ‘discretization’, which led to the faster and safer delivery of information and resulted in, among other things, today’s internet. Along the way, the ability to package information also allows that information to be modified or enhanced. More importantly, such ‘bits’ can be recombined to perform computations. Steiglitz traces the first analog computer to the Antikythera Mechanism of about 70 BC, pieces of which were discovered by divers in the Aegean Sea in 1900. “The device consists of a clockwork of at least 30 intermeshing gears with different numbers of teeth . . . connected to dials [which]. . . show the movement of the sun, moon, and probably the five then-known planets” as the crank is turned. The ‘program’ here is determined by how gears with different numbers of teeth mesh with one another – it is inherent in the machine and not separable from it.
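That lunar computation can be replayed in a few lines. In the sketch below, the code is mine; the tooth counts follow the standard reconstruction of the mechanism’s lunar gear train. Each meshed pair of gears multiplies the rotation ratio:

```python
from fractions import Fraction

def gear_train(pairs):
    """One crank turn in, `ratio` turns out: each (driver, driven)
    meshing multiplies the ratio by driver_teeth / driven_teeth."""
    ratio = Fraction(1)
    for driver, driven in pairs:
        ratio *= Fraction(driver, driven)
    return ratio

# Tooth counts from the reconstructed lunar train:
# 64/38 x 48/24 x 127/32 = 254/19
print(gear_train([(64, 38), (48, 24), (127, 32)]))  # -> 254/19
```

The ratio 254/19 (the Moon circling the zodiac 254 times in 19 solar years) is fixed by the teeth. To compute anything else, one must build a different machine.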
The first device to successfully store a program independent of the machine that executes it was the Jacquard loom (1804), which made intricately patterned Jacquard fabrics. Joseph Marie Jacquard (France, 1752-1834) used punched cards to control the needles that form the design, but each set of punched cards could produce only a single pattern. It took well over a century before Alan Turing (Britain, 1912-1954) extended this punched-card idea: his theoretical Turing machine consists of an infinite tape of cells, each holding a ‘0’ or a ‘1’, and a control head that reads or changes those cells (a minimal sketch appears below). He also introduced the idea of a computer’s ‘state’, which determines the next step. This idea of ‘conditional execution’ originated with Charles Babbage (Britain, 1791-1871), the mathematician and inventor who first conceived of machine calculation.
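How little machinery Turing’s idea requires can be seen in a minimal sketch (my illustration in Python; the book contains no code): a tape, a head, a state, and a table of rules suffice to, say, add one to a binary number.

```python
from collections import defaultdict

def run_turing(tape_str, rules, state="start"):
    """A minimal Turing machine. rules maps (state, symbol) to
    (symbol_to_write, head_move, next_state)."""
    tape = defaultdict(lambda: "_")        # "_" marks a blank cell
    for i, ch in enumerate(tape_str):
        tape[i] = ch
    head = len(tape_str) - 1               # start at the rightmost digit
    while state != "halt":                 # the state determines each step
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip("_")

# Binary increment: flip trailing 1s to 0, then write a 1.
rules = {
    ("start", "1"): ("0", -1, "start"),    # carry propagates leftward
    ("start", "0"): ("1",  0, "halt"),
    ("start", "_"): ("1",  0, "halt"),     # the number grew a digit
}
print(run_turing("1011", rules))           # -> "1100"  (11 + 1 = 12)
```

Unlike the Jacquard cards, the rule table here is data: swap in a different table and the same machine computes something else entirely.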
From here, Steiglitz, a computer scientist, brings up the idea of scale: humans occupy the middle scale, between the subatomic and the astronomical. He uses terms like Planck’s constant and Heisenberg’s uncertainty principle to illustrate his point, leading to speculation about superfast quantum computers. However, he does not provide a human perspective: the human body-brain can only receive signals at this middle scale. For example, we have no receptors for ultraviolet light and cannot perceive a sunburn as it occurs—the pain the next day is from tissue damage. How far should we search for—and make use of—scales and calculations outside our perceptual and cognitive capabilities? If a computer returns an answer we do not understand, how do we know whether it is an insight or a short circuit (like a seizure in humans)? If information is the removal of uncertainty, as the author asserts (a definition made concrete below), how much can we trust this certainty if it cannot be explained to us? One currently intractable problem, for Google and the US Army alike, is that of adversarial examples in deep learning: by changing a few pixels, scientists can make a machine call a dog a rifle.
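Shannon’s definition deserves one concrete line: learning that an event of probability p has occurred removes exactly -log2(p) bits of uncertainty. A quick sketch (the examples are mine):

```python
from math import log2

def surprisal(p):
    """Information gained on learning that an event of probability p
    occurred: -log2(p) bits of uncertainty removed."""
    return -log2(p)

print(surprisal(0.5))     # 1.0 bit:   a fair coin flip
print(surprisal(1 / 26))  # ~4.7 bits: one letter, all 26 equally likely
print(surprisal(1.0))     # 0.0 bits:  a certainty tells us nothing
```

By this measure the machine’s answer always carries information; whether it carries understanding is another matter.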
For the moment, thankfully, Steiglitz does not see artificial intelligence overtaking humans any time soon. But he claims there is no ‘analog magic’ in the brain: neurons are simply valves—excitatory or inhibitory (the model is sketched below). The ‘Halle Berry neuron’, however, suggests otherwise. In experiments on patients undergoing open-brain surgery (with consent), one monitored hippocampal neuron repeatedly fired when an image of the actress was flashed on the screen. Says co-author Christof Koch, then of Caltech, “Conventional wisdom views individual brain cells as simple switches or relays. In fact, we are finding that neurons are able to function more like a sophisticated computer.” Just think: there are some 100 billion neurons in our brain—massively parallel processing is occurring in your brain right now as you read this page. It is not magic—this ‘machine’ constantly generates new programs in response to the environment, through sensors all over the body, including the gut, which (unconsciously) receives information via food intake. Artificial intelligence has not addressed this sentient body and the unconscious. According to Eric Kandel, the Nobel laureate neurobiologist at Columbia University, our free will is unconscious. Do machines have an unconscious?
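Steiglitz’s ‘valve’ is, in essence, the classic McCulloch-Pitts neuron, which fits in a few lines (a sketch of the model, not of any real neuron; the weights are invented):

```python
def valve_neuron(inputs, weights, threshold=1.0):
    """Fires (1) when the weighted sum of excitatory (+) and
    inhibitory (-) inputs reaches the threshold; else silent (0)."""
    drive = sum(x * w for x, w in zip(inputs, weights))
    return 1 if drive >= threshold else 0

# Two excitatory synapses and one inhibitory synapse:
print(valve_neuron([1, 1, 0], [0.6, 0.6, -1.0]))  # 1: fires
print(valve_neuron([1, 1, 1], [0.6, 0.6, -1.0]))  # 0: inhibition vetoes
```

The Halle Berry finding suggests a single cell can do something far richer than this switch, which is precisely what is at stake in the ‘no analog magic’ claim.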
This book performs an important service: it gives the general reader a glimpse of what ultimately should concern us all. But there is no mistaking the author’s enthusiasm for the ‘super brain’ and for how we got here. The ‘why’ part of the book is less convincing: are speed and efficiency all that matter? Is technical progress? According to Steiglitz, ‘Computers are too useful, people are too dependent on them, and there is too much money to be made . . .’ That sounds like a habit.
Martin Rees, the physicist and cosmologist, says in his book On the Future: Prospects for Humanity (2018) of DeepMind’s AlphaGo, which used neural-network learning to beat the Korean Go master Lee Sedol in 2016: “The hardware underlying AlphaGo used hundreds of kilowatts of power. In contrast, the brain of Lee Sedol…consumes about thirty watts (like a lightbulb) and can do many other things apart from play board games.”
Essentially, “successful aircraft do not flap their wings,” as Steiglitz aptly points out. But technology can be a one-track bully—should we be equally committed to educating those 7.5 billion brains on this Earth, each with its 100 billion neurons, even though they may not be as obedient?