Two distinguished neuroscientists distil general principles from more than a century of scientific study, "reverse engineering" the brain to understand its design.
Neuroscience research has exploded, with more than fifty thousand neuroscientists applying increasingly advanced methods. A mountain of new facts and mechanisms has emerged. And yet a principled framework to organize this knowledge has been missing. In this book, Peter Sterling and Simon Laughlin, two leading neuroscientists, strive to fill this gap, outlining a set of organizing principles to explain the whys of neural design that allow the brain to compute so efficiently.
Setting out to "reverse engineer" the brain -- disassembling it to understand it -- Sterling and Laughlin first consider why an animal should need a brain, tracing computational abilities from bacterium to protozoan to worm. They examine bigger brains and the advantages of "anticipatory regulation"; identify constraints on neural design and the need to "nanofy"; and demonstrate the routes to efficiency in an integrated molecular system, phototransduction. They show that the principles of neural design at finer scales and lower levels apply at larger scales and higher levels; describe neural wiring efficiency; and discuss learning as a principle of biological design that includes "save only what is needed."
Sterling and Laughlin avoid speculation about how the brain might work and endeavor to make sense of what is already known. Their distinctive contribution is to gather a coherent set of basic rules and exemplify them across spatial and functional scales.
Omg, what a great book. This is what I have been looking for!
The authors actually provide explanations about WHY the brain is structured so, rather than just a list of facts about the brain's anatomy. They achieve this by considering a few principles of design -- the need to efficiently use space, time, energy and information -- and use these principles to (loosely) derive structures in the brain -- such as folds in the cerebellum, wiring between cortical layers, design of synaptic vesicles, ... In my mind they have achieved something similar to what (say) Newton achieved for mechanics. The first predictive theory!?!
And I just really love the questions they address:
- Why do organisms need a brain? (To coordinate multicellular organisms; electrical communication is fast.)
- Why does it need to be big? (The advantage of predictive control, and to coordinate a larger multicellular organism and exploit more info.)
This book got me really excited because I could see them using ideas from information theory and computational complexity to reason about the brain's structure. The expense of high information rates and invariance to noise were common themes, and the brain's ability to scale speed with energy/space often gave sublinear improvements.
This book did a shoddy job on learning/memory (the last chapter), but an amazing job on information processing/perception.
There is a lot of work to be done to make the authors' claims/conjectures more rigorous, but I can now see a way forward. What I really want now is a similar book explaining reasoning and learning in the brain from a set of simple principles.
I have read a lot of neuroscience books that are heavy on "What" and "How," that is, the mind-blowing anatomy and physiology of the brain. Truly a lot is known, and one can spend a lifetime on just these aspects. But few authors dare, or are qualified, to weigh in on "Why" things are as they are. Perhaps they are afraid of being thrown into the bin of teleologists and scorned. Certainly these are dangerous waters, with so many just-so stories about how things must be as they are.
Sterling and Laughlin have the technical chops to do better. To pack that staggering computational power into little more than a kilogram, with a power dissipation of roughly twenty watts, evolution has had to find and exploit a lot of tricks. Evolution is often portrayed as doing a lot of things at random because it just happened to stumble upon a good-enough kludge here and there. The authors make the case that, in contrast, the demands on neural performance are so severe that truly optimal design wins. They lay out a handful of principles, then take us on a tour showing how those principles play out in the same way across many scales, many sensory and processing modalities, and even many species, as diverse as human and fly. Along the way, they also introduce the reader to many amazing organisms (the star-nosed mole!) and equally amazing abilities that even humans possess (single-photon sensitivity!), so every chapter is filled with revelations.
By the end, the authors do not shy from explaining how their viewpoint, painstakingly obtained by looking at specifics, also illuminates the human condition, and by this point, the reader has earned these insights. There is really no other book with these ambitions combined with such expertise and global view.
This is a remarkable, engagingly written book, covering in exhaustive detail how the brain adheres to a small set of core engineering principles in its organization and operation over a huge range of temporal and spatial scales. Here I have to reiterate that the coverage is exhaustive, which can also be exhausting on an initial reading. This book unleashes a firehose of information about brain anatomy, chemistry, engineering, physics, and comparative biology, and the reader should be forgiven for having a slightly uneasy "Are we going to be tested on all this?" feeling while going through it.
The key is recognizing that this is no ordinary book, since it is no ordinary subject. Much like the brain's multiple scales of organization and function, this is a monograph that can be appreciated at multiple levels of detail and is almost certainly best approached more than once with slightly different goals. To that end, the book is well organized for multiple passes, with informative section titles, end-of-chapter summaries, and figures that supplement the text but can (and perhaps should) be studied separately from it. The authors state that they did not intend to write a textbook—and they didn't—but this is decidedly not a popular science book. Instead, I think it is one of the clearest, most valuable contributions to the cognitive science literature in many years, and I expect that it will be a classic reference for years to come.
This book is amazing. It’s a top down AND bottom up examination of thought and neurology. It manages to intermingle generality and extreme specificity to describe how thinking works from levels down to the quantum and up to that of a whole person, smoothly connecting everything via shared principles that apply at all levels. It’s written at a level higher than pop-sci but still well comprehensible to a layman like myself. You might have to skim the math/chemistry but I still strongly recommend it.
Spent a long time reading this one, but it was surely worth it! The depth the authors conveyed in this book exploring the neural design of our brains is marvelous. Peter Sterling and Simon Laughlin provide an elaborate explanation of how our brains, small compact objects constructed by evolution, can function much better than a supercomputer while using much, much less energy and space. They explain the neurophysiology and neurochemistry that underlie much of our neural function, and they didn't disappoint in any chapter. I found it a very difficult book to get through, and had to come back to it a couple of times after long breaks because I didn't have enough neuroscience background to understand much of what is written there. The book is highly recommended for serious learners interested in neurology and neuroscience, and for all the other "brain enthusiasts".
I would've given it a 5, except in the last chapter the author took it upon himself to add some rather presumptuous judgments, arrived at with unscientific reasoning, about how attention deficit disorder is treated. Just because a medication has a few compounds in common with cocaine doesn't necessarily make it bad, and the attempt to play on emotions (AHH! DRUGS! SIMILARITIES TO METH OMG! - I'm paraphrasing obviously) caused me to lose a large chunk of respect for the author. Most of the book sounded as though it had been written by scientists, but the last page of the last chapter seemed like it had been written by a journalist. Keep your opinions and your science separate please!
This book is both ambitious and courageous. It is ambitious because it seeks to extract general principles from the accumulating mountain of data about brain function, and it is courageous because the principles discovered are not expressed only in the imprecise realm of mere words. Instead, Sterling and Laughlin interpret physiological findings from a wide range of organisms in terms of a single unifying framework: Shannon’s mathematical theory of information, in which information is a well-defined quantity measured in bits.
From the outset, Sterling and Laughlin make it quite plain that this is not a book of speculation about brains, but a book which shows how definite facts can be used to extract general principles of brain function. In the search for these general principles, Sterling and Laughlin cast their investigative net wide, covering sensory and motor systems in species which represent a diverse range of evolutionary experiments.
By comparing these systems in the bacterium Escherichia coli, the single celled Paramecium caudatum, the nematode worm Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and a variety of mammals (including humans), the authors show that:
1) the biochemical and neuronal signalling systems vary greatly within each species, according to the particular demands placed on each sensory system (e.g. vision versus olfaction), and yet
2) the processing within each type of sensory system (e.g. vision) is similar when considered across different species.
For example, they show that the information processing steps in neural networks that support vision are different from those that support audition within the human brain, but the information processing steps in neural networks that support vision in humans are very similar to those that support vision in flies.
Rather than noting this as an intriguing evolutionary coincidence, Sterling and Laughlin argue convincingly that these findings are the result of an interaction between universal physical constraints and species-specific constraints on information processing. These constraints dictate that the energy cost of each additional bit of information rises extremely steeply, resulting in rapidly diminishing informational returns on each extra Joule of energy expended. And because energy cost translates fairly directly into Darwinian fitness, the importance of these constraints is self-evidently paramount.
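The steep energy cost per extra bit that the review describes falls straight out of Shannon's capacity formula for a noisy Gaussian channel. A minimal Python sketch (my own illustration, not from the book) shows the diminishing informational return on each extra unit of signal power:

```python
import math

def channel_capacity_bits(signal_power, noise_power):
    """Shannon capacity per sample of a Gaussian channel, in bits:
    C = (1/2) * log2(1 + S/N)."""
    return 0.5 * math.log2(1 + signal_power / noise_power)

noise = 1.0
for s in [1, 2, 4, 8, 16, 32]:
    bits = channel_capacity_bits(s, noise)
    # Capacity grows by only ~0.5 bit per doubling of power,
    # so bits-per-unit-power falls steadily.
    print(f"power {s:>2} -> {bits:5.2f} bits ({bits / s:.3f} bits per unit power)")
```

Each doubling of signal power buys only about half a bit of extra capacity, so the bits-per-unit-power efficiency declines monotonically. This is the same diminishing-returns logic the authors use to explain why high neural information rates are so expensive.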
The principles discovered are made clear at the outset, and these principles are re-discovered many times within the book as different species and sensory systems are examined. In this regard, the book resembles Darwin’s great work, On the Origin of Species. Sterling and Laughlin do not just propose a vague but plausible idea in the hope that others might believe it. They propose a series of testable hypotheses (principles), formulated in mathematically precise language, and support them with a detailed analysis of hard evidence drawn from a virtual archipelago of diverse sources. This book deserves to have an enormous impact on neuroscience (and the various ’brain sciences’), because (in the best traditions of science) it provides a framework for condensing mountains of physiological data into a neat theoretical (information-shaped) molehill.
Sterling and Laughlin do not claim that the set of ten principles they discover is either complete or infallible, but they make a convincing case that these principles deserve to be taken seriously. The principles are not listed here because they require substantial context for their worth to be appreciated. The fact that they cannot be stated in isolation reflects the authors' ability to simplify, but not to over-simplify, what is essentially a complex problem.
Like most books on the brain, this one explains how particular mechanisms execute particular functions, but, unlike most books, it also makes frequent use of the word "why". According to Sterling and Laughlin, it is not enough (for example) to understand the physical mechanisms which tell us how a photon changes the voltage of a photoreceptor by some amount. A complete theory of vision should also tell us why the mechanism is the way it is, why the voltage changed by that amount, and why not twice or half that amount.
This emphasis on why the brain operates as it does has a strong tradition, associated with the computational approach to brain function espoused in books by Marr (Vision, 1982), Rieke et al. (Spikes, 1997), Land and Nilsson (Animal Eyes, 2002), Dayan and Abbott (Theoretical Neuroscience, 2001), and Bialek (Biophysics: Searching for Principles, 2012). So the approach is not new, but it is rarely adopted, and rarely expressed as cogently as it is in this book; a book which will still be read long after books that describe only how neurophysiological mechanisms work have been forgotten.
Whilst the authors’ ambition is laudable, some aspects of the execution could be better. This book relies critically on the reader having a firm grasp of Shannon’s formal definition of information. Even though a few pages are dedicated to this, a tutorial account would allow less numerate readers to appreciate the many results which depend on understanding information theory. On a similar note, certain key technical terms are not explained (e.g. Nyquist limit, Fourier transform, and point spread function). Addressing these problems would have greatly improved the book’s accessibility. Having said that, the effort involved in researching topics via Google (as readers are enjoined to do in the Preface) would be well rewarded with a clearer understanding of the book.
In conclusion, the authors clearly believe neuroscience suffers from two related problems: too much data and too little theory. They claim that, “the best we can do with Data Mountain really is just to set a few pitons up the south face”. But I think they have achieved much more than this. Sterling and Laughlin have firmly established a base camp, and have hewn a path which will allow scientists of sufficient skill and fortitude to conquer Data Mountain.
-- Note: In order that you can gauge if I am qualified to comment on this book, my name is Dr JV Stone, an Honorary Reader in the Department of Psychology, University of Sheffield, England. I have published books and papers on vision, computational neuroscience and information theory. --
This is a truly great book. It is great because it manages what most scientists don't even attempt - a causal explanation of the structures we find within our brains. There are a couple of ways to read this book:
One is simply to try to understand the design principles described by the authors. These principles were either found de novo or taken from engineering and other sciences. It is mind-boggling how much my view of neural circuits has been altered by becoming aware of these underlying design rules. The minimization of wire is something scientists are generally aware of, but now, with dense reconstructions of huge tissue samples or even whole brains available, it becomes an even more fascinating exercise to see exactly how this rule is implemented on local and global scales.
Another way to read this book is out of genuine curiosity about the visual system. This is the system the authors worked on during their active careers, and this is what they have thought long and deeply about. Sitting through anatomy classes was never my favorite exercise, but following the authors on their tour de force from the retina through the LGN to V1 and higher areas is simply fun. If all neural pathways were explained with this level of dedication, wit, and genuine, deep understanding of the phenomena, university would be a different place.
This is a book I would recommend to everyone interested in the bigger picture of neuroscience. Great read!
A textbook about neuroscience that does it right -- teaching you structures through which to see the world and making choice use of the neural facts, rather than inundating you with them.
I work in very similar areas to this book and read it to help me write the introduction to my thesis. I thought it was epic, super interesting, and I learnt a lot, especially about retinal coding, where the authors are clearly experts. There were times when it was a little loose; it definitely feels like a view of the brain that is waiting to be padded out (as is correct). But overall this was definitely not a problem.
These kinds of ideas are the high-level structuring lessons that I will keep with me for understanding the brain.
My ratings of books on Goodreads are solely a crude ranking of their utility to me, and not an evaluation of literary merit, entertainment value, social importance, humor, insightfulness, scientific accuracy, creative vigor, suspensefulness of plot, depth of characters, vitality of theme, excitement of climax, satisfaction of ending, or any other combination of dimensions of value which we are expected to boil down through some fabulous alchemy into a single digit.
Does what it claims to do very well. This is a way of thinking I have always preferred, but the book has done the work of backing up the logic with data. Reading it filled in many gaps in my knowledge that had limited this perspective for me; now that those gaps are filled, these fundamentals of thinking about neuroscience will become more powerful for me.
Very technical. Read the broad overview of the brain, but checked out once it went deep into how the eyes work. I would enjoy a more lay-oriented version of this book.