In From Molecule to Metaphor, Jerome Feldman proposes a theory of language and thought that treats language not as an abstract symbol system but as a human biological ability that can be studied as a function of the brain, as vision and motor control are studied. This theory, he writes, is a "bridging theory" that works from extensive knowledge at two ends of a causal chain to explicate the links between. Although the cognitive sciences are revealing much about how our brains produce language and thought, we do not yet know exactly how words are understood or have any methodology for finding out. Feldman develops his theory in computer simulations -- formal models that suggest ways that language and thought may be realized in the brain. Combining key findings and theories from biology, computer science, linguistics, and psychology, Feldman synthesizes a theory by exhibiting programs that demonstrate the required behavior while remaining consistent with the findings from all disciplines. After presenting the essential results on language, learning, neural computation, the biology of neurons and neural circuits, and the mind/brain, Feldman introduces specific demonstrations and formal models of such topics as how children learn their first words, words for abstract and metaphorical concepts, understanding stories, and grammar (including "hot-button" issues surrounding the innateness of human grammar). With this accessible, comprehensive book Feldman offers readers who want to understand how our brains create thought and language a theory of language that is intuitively plausible and also consistent with existing scientific data at all levels.
As promised, Jerome Feldman delivers a comprehensive, detailed theory that takes us from individual neuron function to the way the brain’s neural system works in language use, as currently understood, all the way through embodied thought, grammar, language, and conceptual metaphor, though there is obviously much more research to be done at most of these higher levels. This book reminded me how reading always depends on both writer and reader. I first read it some years back, but was much more impressed after reading it again recently. My point is that the changes that made me appreciate this material more were obviously in me, not the book; the main change being, perhaps, greater interest in the detail described. One section of particular interest: “I believe there is a plausible story about how a discrete evolutionary change could have given early hominids a simulation capability that helped start the process leading to our current linguistic abilities. Mammals in general exhibit at least two kinds of involuntary simulation behavior--dreams and play.” [...] “Given that mammals do exhibit involuntary displacement in dreams, it seems that only one evolutionary adaptation is needed to achieve our ability to imagine situations of our choosing. Suppose that the mammals’ involuntary simulation mechanisms were augmented by brain circuits that could explicitly control what was being imagined, as we routinely do. […] Now, hominids who could do detached simulations could relive the past, plan for the future, and be well on their way to simulating other minds. Understanding other minds would then provide a substrate for richer communication and all the benefits that accrue from the use of mental spaces.”
I’m a bad scholar and amateur scientist, possibly, in that I always seem to have an ax to grind. My particular ax is symbolic thought as the primary evolutionary agent of change in human thinking. Over the last few decades, scientists have come up with a number of abilities they have hypothesized could have led to language. One of these is imagination: the ability to simulate. Another candidate is recursive thought. Either imagination or recursive thought, they have speculated at various times and places, could have been the ability that led to language and the modern human manner of thought. To me, this seems a violation of Occam’s Razor: entities in the chain of cause and effect are multiplied unnecessarily. When I read the section on simulation in From Molecule to Metaphor, in which the author suggested that controlled simulation could have been the key that led to language, I was momentarily stunned, then thought: Couldn’t that be backwards? What else is one function of language but controlling simulation? Isn’t that what we’re learning from embodied models of cognition? Language activates many of the same areas that are activated by the perceptions and actions of real situations. Perhaps symbol use is the way we extended our nascent natural abilities to simulate, not the other way round. Protolanguage, or the start of one in a lexicon, could have given our ancestors concrete signposts to kick-start displacement and the simulation process. I believe Derek Bickerton first said something similar about displacement in More Than Nature Needs, his terrific explanation of Wallace’s Problem: the existence of the large evolutionary gap between human abilities and those of all other creatures. And, in the same vein, I believe that symbol use and displacement (grammar, in particular) could have kick-started recursivity.
At any rate, From Molecule to Metaphor is an excellent exposition of embodied language functioning at all levels, and I heartily recommend it.
This book might be called a ‘full-stack’ description, from biological cognition up to language, and is an enjoyable and illuminating read: a survey of neurobiology through to cognitive linguistics. Feldman is happy to acknowledge the areas we do not yet understand and comes from an “Embodied Construction Grammar” perspective. This in turn has developed from Construction Grammar and seems to be somewhat parallel to the Cognitive Grammar of Langacker (it is very strange that Langacker is not mentioned in the text, index, or bibliography; I wonder what academic offence he caused?). The book seems to me a bit partisan to his school. For instance, I find the description of Steven Pinker's views here not one I derived from his writings but closer to a caricature. In general, this seems to be a summary of the work and research programmes of one university, or a selected few.
Embodied Cognition makes the point that you cannot understand language without first understanding the basics of language acquisition. And those basics are the embodied experiences that babies and then infants work their way through, to the delight of their parents. For instance, the container is a basic schema, or generalisation: things can be put into it, taken out of it, and subjected to many other operations. Infants can spend an age (5 minutes or so) focussed on exactly that in play. Embodied cognition says that the phrase “France fell into recession” simply has no meaning except via metaphorical leverage of that and other prior schemas.
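Purely as an illustration, here is my own minimal sketch of that idea; the names (ContainerSchema, put_in, and so on) are hypothetical and are not Feldman's formalism or actual ECG notation. The point is only that the concrete put-in/take-out operations carry the semantics, and the abstract sentence borrows them wholesale.

```python
# Hypothetical sketch of a CONTAINER image schema and its metaphorical reuse.
# Class and method names are illustrative only, not ECG's real formalism.

from dataclasses import dataclass, field


@dataclass
class ContainerSchema:
    """Embodied CONTAINER schema: an interior that things enter and leave."""
    interior: set = field(default_factory=set)

    def put_in(self, thing: str) -> None:
        self.interior.add(thing)          # "falling into" maps onto this

    def take_out(self, thing: str) -> None:
        self.interior.discard(thing)

    def contains(self, thing: str) -> bool:
        return thing in self.interior


# Metaphor: STATES ARE CONTAINERS.  "France fell into recession" is then
# understood by reusing the concrete container operations for an abstract state.
recession = ContainerSchema()
recession.put_in("France")               # France "enters" the state
print(recession.contains("France"))      # -> True: France is "in" recession
```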
From Molecule to Metaphor presents itself as a full-stack description, but there seems to me to be a missing and fundamental layer between the ground-up description (roughly, neurons and assemblages of them) and the top-down description (roughly, Embodied Construction Grammar). With that layer so sketchily articulated, the fundamental claim of the title is a wee bit oversold. That's hardly unusual.
The ‘Schemas and Frames’ path that forms an essential layer of the stack seems insufficient. We have had 40 years of Object-Oriented Programming (OOP), and the brittleness of the class/object/inheritance model has long been apparent. I may be missing something, but it seems a similar route is being taken here.
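To make that worry concrete, here is a hypothetical example of my own (not from the book) of rendering a frame as an OOP class hierarchy, showing the familiar way inherited defaults go brittle once exceptions accumulate:

```python
# Hypothetical illustration of frame-as-class-hierarchy brittleness.
# An inherited default works until the first exception, and every further
# exception needs its own override.

class Bird:
    def move(self) -> str:
        return "flies"              # default inherited by every Bird


class Robin(Bird):
    pass                            # happily inherits "flies"


class Penguin(Bird):
    def move(self) -> str:          # the exception: the default must be
        return "swims"              # patched case by case


for bird in (Robin(), Penguin()):
    print(type(bird).__name__, bird.move())
# Robin flies
# Penguin swims
```

Each new exceptional case (ostriches, chicks, injured birds) forces another override; a schema/frame layer that leans on strict inheritance in the same way faces the same rigidity.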
This book has given me ample motivation to understand models of the neuron, and of networks of neurons on up. The book was excellent in arguing that we have so much more knowledge today that to do so is no longer to study a subject entirely different from Linguistics.
This book surveys one of the most intriguing fields of our day, in a way that leads to an appreciation of the great work done so far and of how it is gradually joining the dots on the greatest scientific mystery of them all.
It is amazing how it all works: starting with the neural detectors on the retina of a frog's eye that let it catch a passing fly, and leading up to humans learning grammar by throwing a ball. Impressive developments in the field reveal ever more about how we process information, but mysteries still remain. Will artificial intelligence manage to outsmart humans by being able to process large amounts of information quickly? Or could it be just an attempt to imitate adaptive human perception that lacks some important ingredients? Well, the future will show.
Language is unique to humans; no other animal has it. However, many other faculties are unique to this or that clade of animals: giraffes can eat twigs off tall trees; bats can fly. They achieve this by specially adapting existing organs: giraffes have an enormous neck; bats have remade their hands into wings. What is language a special adaptation of? It must be some sort of brain function, but we don't know which one, and this book shies away from saying so outright. It says that the grammatical aspect of verbs may be reusing the neurons for motor control; what about tense, modality, and evidentiality? It also says that spatiotemporal adpositions such as "above" and "before" may be reusing the neurons for vision; what about other adpositions, such as the English "of" and "by", which cannot be visualized? Finnish and Estonian use noun cases in many situations where English uses a preposition, such as "on the table" and "in the house", and also in many situations where English doesn't; so does Russian: in Russian translation, the sentences "Mister Geppetto made Pinocchio a boy" and "Mister Geppetto made Pinocchio with a chisel" both mark the second noun with the instrumental case and no adposition. Do different noun cases activate different kinds of neurons in the brain of a speaker? If so, why are they in the same grammatical category? The book doesn't even begin to ask these questions.
Feldman marshals the evidence and arguments that underpin the Embodied Construction Grammar theory of natural (human) language capability. Like his colleagues at UC Berkeley, Feldman rejects a symbolic approach to language processing and to cognition in general, and attempts to describe how apparently higher-order cognition is rooted in neural network mechanisms. There are better and clearer descriptions of key mechanisms such as internal simulation in other publications, but Feldman is to be commended for pulling together the assumptions on which computational methods based on embodied constructivism are being attempted.
Fairly engaging for such an academic work. Everything is well-organized, too, which I guess is a weird compliment for a book, but there you go. The ideas are explained clearly and concisely, which is great for a book with SO MANY ideas jumping around. The author did a good job explaining neuroscience concepts. Would recommend for people getting interested in embodied cognition but still new to the subject and to linguistics; would not necessarily recommend to people already familiar with those topics, as this is largely introductory.
The first section, presenting the information-processing framework for the brain, is a great introduction to the idea. The middle section, applying that framework to language-learning, dragged a bit. I would have preferred more depth in the last section on neural grammars, especially regarding ECG (Embodied Construction Grammar).
Not an easy read, but a lot of useful stuff on imagination, metaphor, and inference as they relate to language and thought processes. The sections on artificial intelligence are mostly lost on me.