
Fluid Concepts and Creative Analogies

subtitle: Computer Models of Mental Fluidity & Creativity
A lucid, highly readable exploration of the computer models of discovery, creation, and analogical thought developed by the Pulitzer Prize-winning author of Gödel, Escher, Bach and the Fluid Analogies Research Group. The book features anagram and number puzzles, analogy puzzles involving letter strings or tabletop objects, and fanciful alphabetic styles.

"A remarkable book. At first I said 'too technical and specialized,' but hours later I found I couldn't stop reading.... A marvelous book, illuminating oddities of thought and raising them to profound insights into the nature of human creativity."--Donald A. Norman, Apple Fellow; Professor Emeritus, University of California, San Diego

528 pages, Hardcover

First published January 1, 1995


About the author

Douglas R. Hofstadter

40 books · 2,292 followers
Douglas Richard Hofstadter is an American scholar of cognitive science, physics, and comparative literature whose research focuses on consciousness, thinking and creativity. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979, for which he was awarded the 1980 Pulitzer Prize for general non-fiction.

Hofstadter is the son of Nobel Prize-winning physicist Robert Hofstadter. Douglas grew up on the campus of Stanford University, where his father was a professor. Douglas attended the International School of Geneva for a year. He graduated with Distinction in Mathematics from Stanford in 1965. He spent a few years in Sweden in the mid 1960s. He continued his education and received his Ph.D. in Physics from the University of Oregon in 1975.

Hofstadter is College of Arts and Sciences Distinguished Professor of Cognitive Science at Indiana University in Bloomington, where he directs the Center for Research on Concepts and Cognition, which consists of himself and his graduate students, forming the "Fluid Analogies Research Group" (FARG). He was initially appointed to Indiana University's Computer Science Department faculty in 1977, and at that time he launched his research program in computer modeling of mental processes (which he then called "artificial intelligence research", a label he has since dropped in favor of "cognitive science research"). In 1984, he moved to the University of Michigan in Ann Arbor, where he was hired as a professor of psychology and was also appointed to the Walgreen Chair for the Study of Human Understanding. In 1988 he returned to Bloomington as "College of Arts and Sciences Professor" in both Cognitive Science and Computer Science, and was also appointed Adjunct Professor of History and Philosophy of Science, Philosophy, Comparative Literature, and Psychology, but he states that his involvement with most of these departments is nominal.

In April, 2009, Hofstadter was elected a Fellow of the American Academy of Arts and Sciences and a Member of the American Philosophical Society.
Hofstadter's many interests include music, visual art, the mind, creativity, consciousness, self-reference, translation and mathematics. He has numerous recursive sequences and geometric constructions named after him.

At the University of Michigan and Indiana University, he co-authored, with Melanie Mitchell, a computational model of "high-level perception" — Copycat — and several other models of analogy-making and cognition. The Copycat project was subsequently extended under the name "Metacat" by Hofstadter's doctoral student James Marshall. The Letter Spirit project, implemented by Gary McGraw and John Rehling, aims to model the act of artistic creativity by designing stylistically uniform "gridfonts" (typefaces limited to a grid). Other more recent models are Phaeaco (implemented by Harry Foundalis) and SeqSee (Abhijit Mahabal), which model high-level perception and analogy-making in the microdomains of Bongard problems and number sequences, respectively.

Hofstadter collects and studies cognitive errors (largely, but not solely, speech errors), "bon mots" (spontaneous humorous quips), and analogies of all sorts. His long-time observation of these diverse products of cognition, and his theories about the mechanisms that underlie them, have exerted a powerful influence on the architectures of the computational models developed by himself and other FARG members.

All FARG computational models share certain key principles, among which are: that human thinking is carried out by thousands of independent small actions in parallel, biased by the concepts that are currently activated; that activation spreads from activated concepts to less activated "neighbor concepts"; that there is a "mental temperature" that regulates the degree of randomness in the parallel activity; and that promising avenues tend to be explored more rapidly than unpromising ones.
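These shared principles lend themselves to a small illustration. Below is a minimal, hypothetical sketch (not FARG code) of how a "mental temperature" can regulate randomness: each candidate avenue carries a promise score, and the temperature decides how strongly the choice favors the most promising one.

```python
import math
import random

def temperature_choice(options, temperature, rng=random):
    """Pick one (label, promise) option stochastically.
    High temperature: nearly uniform, exploratory choice.
    Low temperature: almost always the most promising option."""
    labels, promises = zip(*options)
    t = max(temperature, 1e-6)
    m = max(promises)
    weights = [math.exp((p - m) / t) for p in promises]  # stabilized softmax
    r = rng.random() * sum(weights)
    for label, w in zip(labels, weights):
        r -= w
        if r <= 0:
            return label
    return labels[-1]

avenues = [("promising-avenue", 5.0), ("weak-avenue", 1.0)]
print(temperature_choice(avenues, 0.01))  # low temperature: promising-avenue
```

At a high temperature (say 100) both avenues are sampled almost equally often, mirroring the exploratory early phase of a FARG run; as temperature falls, the promising avenue dominates.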

Ratings & Reviews


Community Reviews

5 stars: 213 (35%)
4 stars: 222 (36%)
3 stars: 133 (21%)
2 stars: 33 (5%)
1 star: 7 (1%)
Matt
752 reviews · 614 followers
May 29, 2017
In this book Douglas Hofstadter and his colleagues from FARG, the Fluid Analogies Research Group, present the details of their findings on computers' capacity for analogy-making, creativity, and what they call "fluid concepts". There are a couple of programs the "FARGonauts" developed over the years. The individual chapters had been published before in science journals but received an overhaul for this book. There are also newly written prefaces to each chapter and a very interesting epilogue called Creativity, Brain Mechanisms, and the Turing Test.

The book was first published in 1995 which, in computer terms, is ages ago. One can assume that in the meantime many new insights have been uncovered. Nevertheless I think the concepts and ideas presented here are still relevant today. There are some intriguing approaches to imitating human cognition in a program. In particular the formation of analogies through programs and the "slippage" of concepts are very revealing. The systems presented here all operate on so-called micro-domains, that is, on tiny sections of the virtually infinite real world.

For example the program called Copycat operates on letters only and is able to give answers to problems of the following kind:
Suppose the letter-string abc were changed to abd; how would you change the letter-string ijk in “the same way”?
This does not sound like much, but it is a very interesting and wide field if you take a closer look at it. The general idea is also addressed by Melanie Mitchell, a co-author and the developer of Copycat, in this video of a lecture, which I highly recommend:
https://www.youtube.com/watch?v=I1ay-...
This video is from 2015, which leads me to believe that the themes and general architecture of the programs described in this book are still relevant, and my time reading it wasn’t wasted after all.

The problem above looks like a question from an IQ test, but in fact it's not. There is no right or wrong answer; there are only answers that are more elegant and "deep" (one of Hofstadter's favorite expressions) than others. Humans, when faced with this sort of problem, usually start building analogies that help them find a rule behind the given letter-change and apply this rule to the other string. In this case there are several possible rules one can think of:

1) Change the third letter to d, so that ijk becomes ijd which doesn’t seem very appealing.

2) Change everything to abd, so that ijk becomes abd which is even less subtle or elegant (at least to me, it might be different for the current US president)

3) Change every occurrence of c to d, so that ijk won’t change at all. This, I think, seems a little better than above, but is still not satisfactory.

4) Finally, change the last letter to its successor in the alphabet, so that ijk becomes ijl. That's the answer most people think of right away. But why is that the case? Because of the analogy you discover between the "rising" strings abc and ijk and the knowledge that d comes after c in the alphabet.
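For illustration, the four candidate rules above can be hand-coded in a few lines of Python (a hypothetical sketch; Copycat discovers such rules rather than having them hard-wired):

```python
def rule_literal_d(s):   # 1) replace the third letter with 'd'
    return s[:2] + "d" + s[3:]

def rule_whole_abd(s):   # 2) replace everything with 'abd'
    return "abd"

def rule_c_to_d(s):      # 3) change every occurrence of 'c' to 'd'
    return s.replace("c", "d")

def rule_successor(s):   # 4) change the last letter to its alphabetic successor
    return s[:-1] + chr(ord(s[-1]) + 1)

for rule in (rule_literal_d, rule_whole_abd, rule_c_to_d, rule_successor):
    print(rule.__name__, rule("ijk"))  # ijd, abd, ijk, ijl
```

All four rules reproduce abc -> abd, yet they diverge completely on ijk, which is exactly why rule choice, not rule application, is the interesting problem.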

Here’s another problem:
Suppose the letter-string abc were changed to abd; how would you change the letter-string xyz in “the same way”?
This is rather similar to the problem above, but it obviously has an obstacle built into it. The concept of "successorship" no longer works for the last letter of xyz. Copycat (at least some of the time) offers wyz as an answer. This might look strange at first, but it's actually a rather deep answer. The program has discovered the rising sequence of letters a-b-c, and the change to the successor in the last position. It then slipped these concepts: instead of going up from left to right and changing the last letter to its successor, it goes down from right to left and changes the first letter to its predecessor. How is that for analogy-making?
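The slippage can be caricatured in code. This hypothetical sketch (not Copycat itself, which discovers the mirroring rather than having it wired in) tries the successor rule and, when it runs off the end of the alphabet, mirrors the rule:

```python
def change_same_way(s):
    """Apply 'abc -> abd' to s, slipping the rule at the edge of the alphabet."""
    if s[-1] != "z":
        return s[:-1] + chr(ord(s[-1]) + 1)  # last letter -> successor
    # 'z' has no successor: slip the concepts
    # (last -> first, successor -> predecessor)
    return chr(ord(s[0]) - 1) + s[1:]

print(change_same_way("ijk"))  # ijl
print(change_same_way("xyz"))  # wyz
```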

There are a couple more programs like that presented in the book. This is all done without any maths or actual program code, so laypeople should have no problem following Hofstadter and his colleagues' reasoning.

This was actually the first book I read about artificial intelligence and the possibility of mimicking human cognition. There's a lot of talk about AI and "intelligent machines" and how those might overtake humans in the future, the so-called Technological Singularity, that is, the moment when an artificial superintelligence emerges. I think this scenario is still far down the road, if it comes at all. Unless some very clever people have some very clever concepts hidden in a drawer somewhere, I don't think computers will achieve human intelligence anytime soon. Today there are "neural nets", of course, and "deep learning", and there's great progress in these fields, but, to me and to Hofstadter as well, those have little to do with intelligence and human cognition. They simulate only a rather low layer of perception (the neurons), and a neural net seems even less aware of the concepts it's dealing with than any ordinary program, like, for instance, a word processor, whereas programs like Copycat & Co. seem more like the real deal when it comes to actual thinking agents.

_______________________

A little tidbit: It seems that Fluid Concepts and Creative Analogies was the very first book ever sold by Amazon: https://en.wikipedia.org/wiki/Amazon....

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Maurizio Codogno
Author · 66 books · 144 followers
November 15, 2010
It's not easy to comment on a book of which I am the translator, and in which I am cited and thanked in the afterword...
This is a "work book", in the sense that it consists of various chapters describing the projects carried out by the cognitive-science research group led by Hofstadter. The projects center on seeing how a system can manage to find analogies in highly specialized environments: for example, finding the next element of a series of numbers, or answering the question "if abc becomes abd, then what does bcd become?", or drawing the missing letters of an alphabet built from a set of segments.
None of these problems has a unique solution, not even for us, and they therefore pose a challenge that sometimes succeeds and sometimes doesn't.
For those who don't love these topics the book can therefore feel heavy, even though Hofstadter's prose is always pleasant and stuffed with wordplay, which I preserved or adapted as far as possible in the Italian version.
2,973 reviews
May 16, 2013
Well below expectations.

This book has several major problems:

First, Hofstadter keeps describing the computer programs he was working on.
"That book is 'required reading' for anyone who really wants to understand [the computer program] Copycat—definitely a worthwhile goal, in my book." pg. 301.

These programs were meant to simulate human cognition. But Hofstadter skips a key step. Instead of discussing his model of human cognition, he keeps discussing his model of the program.

Second, Hofstadter spends a lot of time pooh-poohing other research projects for overhyping their results. I am not sure what I was supposed to gain from that, especially since that overhype has become readily apparent in the years since the book was published.

Third, the book repeats itself quite often. It is clear that these essays were written separately and then stitched together because they cover the same ground several times.

Finally, when Hofstadter does not write the chapters himself, his playful and accessible voice tends to fade or disappear altogether. The other authors' styles, at least in their chapters, are pretty dry.

It's good in the first couple of chapters and good in the interstitial pieces. Other than that . . .
Kyle C.
642 reviews · 88 followers
August 2, 2023
This is a forbiddingly technical collection of essays, all of them authored or co-authored by Douglas Hofstadter, offering a general survey of his decades of research on AI and cognition. Chapter 5, especially, is a detailed discussion of the Copycat project, an early computer program that looked at analogies between letter sequences and attempted to produce convincing and elegant solutions (for example, abc-->abd, so efg-->______?). Much of this collection now lags far behind the current state of computer science. The book was written in 1995, before Deep Blue defeated Garry Kasparov in chess, before Google Translate was released, before ChatGPT launched. Yet many of its insights into human thinking and problem-solving remain relevant today. A computer might be able to play chess better than a human or produce a sentence that sounds like Arthur Koestler—but this only shows that playing chess and imitating Koestler are not necessarily the definitive measures of intelligence.

Even though new technologies like ChatGPT and Bard may seem to be the shiny wonder-toys of AI today, Hofstadter's book offers some good ways of evaluating and probing the general intelligence of AI programs. Even as far back as 1995, there had been significant research into language modeling (for example, as early as 1983, Racter illustrated that programs, given enough grammatical rules and enough language data, could imitate the style of potboiler crime thrillers, generating the bizarre novel "The Policeman's Beard is Half Constructed"). However, as Hofstadter argues, such programs are not a convincing demonstration of intelligence and creative thinking. The ability to analyze large amounts of textual tokens and recombine them into stochastically probable strings of words is not an autonomous form of intelligence; it's just statistical plagiarism, computerized mimicry, a scaled-up mad-libs form of speech production. What humans do when they tackle problems is far more complex. It involves a recursive process of perceiving and re-perceiving the problem, shuffling mental objects around, building and discarding preliminary substructures and models, and always searching for deeper analogies, applying concepts fluidly to other domains.

Rather than summarize the contents of this volume, I think it is more useful today to think about some of its salient insights and how they might apply to current AI technologies.

1. First, Hofstadter argues that intelligence requires not just perception but continual readjustment of perception. Cognition is recognition. What distinguishes the human mind is how it can toggle between different ways of perceiving patterns. Take, for example, a string of numbers:
1 2 2 3 3 4 4 5 5

Most people would naturally group identical numbers together, perhaps something like
1 (2 2) (3 3) (4 4) (5 5)

But while that seems logical, it is not elegant. There is a lonely 1 at the start of the sequence, so while this scheme shows a general pattern, it is not a comprehensive explanation or model. On closer inspection, our mind is able to regroup these numbers into another ordered pattern:
(1 2) (2 3) (3 4) (4 5) (5 6?)

This is much more gratifying. We can see the sequence either as a pattern of doubled numbers (which does not explain the stranded 1 at the beginning) or as a rising sequence of adjacent pairs (1,2) (2,3) (3,4). Now we can explain the 1 and better predict the next element in the sequence. This is crucial for problem-solving. Our brain can perceive and re-perceive data, mentally reorganize the structures of a problem, shift the implied mental boundaries of groups of objects, and identify a new logic. Even though the surface of both groupings is the same, the second version has a different and better internal substructure.
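A small sketch (hypothetical code, not Hofstadter's) makes the difference between the two groupings concrete: the doubled-numbers reading leaves the initial 1 unexplained, while the adjacent-pairs reading covers every element and predicts the continuation.

```python
seq = [1, 2, 2, 3, 3, 4, 4, 5, 5]

# Hypothesis A: doubled numbers -- leaves the initial 1 stranded.
doubles = [(seq[i], seq[i + 1]) for i in range(1, len(seq) - 1, 2)]
assert all(a == b for a, b in doubles)      # (2,2) (3,3) (4,4) (5,5)
leftover = seq[:1]                          # the unexplained 1

# Hypothesis B: adjacent ascending pairs -- explains every element.
pairs = [(seq[i], seq[i + 1]) for i in range(0, len(seq) - 1, 2)]
assert all(b == a + 1 for a, b in pairs)    # (1,2) (2,3) (3,4) (4,5)
next_pair = (seq[-1], seq[-1] + 1)          # the trailing 5 predicts (5, 6)
print(pairs, next_pair)
```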

Now in Hofstadter's view, an AI program should have some preexisting concepts (for example, the order of the numbers 1, 2, 3, 4); it would have some knowledge of mathematical operations; and it would be biased to look for particular kinds of patterns (doubles of numbers, sequences of numbers, or some other important series like the primes or the Fibonacci sequence). The program would then have several routines running in parallel: codelets that look for small groups of patterns (e.g. "(2 2) (3 3)" or "(2 3) (3 4)"), codelets that attempt to bridge these groups by an obvious logic (perhaps doubles or pairs of adjacent numbers), and finally codelets that operate globally to check that a coherent model has been constructed for the whole sequence. The program would know to stop when its temperature is low (e.g. if there are lots of codelets looking for numerical patterns, the temperature will be high; if one codelet has been successful and bridges connecting like groups have been formed, then a solution is likely to have been found, and no other codelets will be operating because they have been discarded as unproductive).

What is powerful about this approach is that the program scans for different types of patterns, discards or revises groups when they do not yield general patterns, and continues to search for a model that explains the whole sequence. The program is continually evaluating whether its model accounts for all the groups of subpatterns, and it actively looks for alternative ways to analyze the sequence. In contrast, language models do not do this. They only have preexisting models of probable sequences of tokens; they do not attempt to reconfigure their perception of a problem. Take, for example, a sequence of numbers like:
1 1 2 1 1 1 2 3 2 2 1 1 2 3 4 3 3 2 1

This might seem like a random sequence of small numbers. We have a startling 1 1 at the start, a stranded 2, and three 1s followed by 2 3. What matters is how you perceive the groupings of numbers. With a bit of patience, you might see:
( 1 ) (1 2 1 1) (1 2 3 2 2 1) (1 2 3 4 3 3 2 1)

Now the pattern becomes obvious! It is a sequence of peaks and plateaus. The first group is just 1. The second group starts at 1, climbs to 2, then comes back down to 1 and ends at 1; the third climbs from 1 to 2 to 3, then descends to 2, stays at 2, and ends at 1. The logic becomes clear. Each group is an ordered ascending sequence of whole numbers culminating in a peak (say n), then repeating n-1 twice, then climbing back down in an ordered descending sequence. The next group in the pattern would be 1 2 3 4 5 4 4 3 2 1.
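The rule just described can be written down directly (a hypothetical sketch): generate group n as an ascent to n, a doubled n-1, and a descent, and the whole sequence, plus its continuation, falls out.

```python
def group(n):
    """Group n: ascend 1..n, repeat n-1 twice, descend n-2..1."""
    if n == 1:
        return [1]
    return list(range(1, n + 1)) + [n - 1] * 2 + list(range(n - 2, 0, -1))

seq = [x for n in range(1, 5) for x in group(n)]
print(seq)       # 1 1 2 1 1 1 2 3 2 2 1 1 2 3 4 3 3 2 1
print(group(5))  # the predicted continuation: 1 2 3 4 5 4 4 3 2 1
```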

However, when I first put this into ChatGPT, it only described a sequence of rising and falling numbers and predicted that the next in the sequence would be 2, 3, 4, 5, 4, 3, 2, 1. In a second run, it recognized that a number was repeated, but it predicted 4, 4, 3, 2, 1. Even while ChatGPT gets the general episodic shape of the sequence, climbing from and descending back down to 1, it does not recognize the doubling of numbers. In the second run, even when it recognizes the repetition of n-1, it forgets the ordered sequence. It groups numbers together, but its answers did not consistently account for the smaller substructures (the ordered rise and fall, or the repetition of n-1).

2. Intelligence involves juggling different mental objects. When we solve anagrams, we don't try every possible sequence of letters (take, for example, the letters "T E N J U K": there are 6 factorial = 720 possible orderings!). What our brains do is build smaller groups of plausible phonological or morphological units (like "unk" or "ten" or "net") and then shuffle these around until we find a word. We create a mental blackboard on which we arrange these letters and phonemes, and we see which outcomes create words or plausible-looking words. It is the easiest and quickest way to find a word, whittling six free letters down to a handful of candidate chunks.
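For contrast with the chunk-based search described above, brute force is easy to write but wasteful: all 720 orderings of the six letters are generated and checked against a dictionary (here a hypothetical one-word stand-in for a real word list).

```python
from itertools import permutations

letters = "tenjuk"
dictionary = {"junket"}  # stand-in for a real dictionary

orderings = ["".join(p) for p in permutations(letters)]
hits = set(orderings) & dictionary
print(len(orderings), hits)  # 720 {'junket'}
```

Human solvers never enumerate all 720 orderings; they shuffle a few plausible chunks like "unk" and "ten", which is the strategy Hofstadter's Jumbo model tries to capture.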

ChatGPT is actually good at this, but it has a strong tendency to add letters. Because it does not follow a logical procedure (looking for phonetic units, validating its results, checking that each letter is accounted for), ChatGPT can get these simple anagrams wrong. What is an anagram of "t o o n i n"? It says "nation". What is an anagram of "l e t h e m"? It says "theme". When I asked it for an anagram of "n e d i l u g n c e", it came back with "encudling", which is plausible, if not an actual word. Admittedly, in different runs, ChatGPT can get all these anagrams correct, but it's obvious that it is not actually producing them through a mental process like human cognition. It adds letters, leaves out letters, and it doesn't validate its output. It can be correct but also bafflingly wrong. It's statistical modelling, not cognition. I do not doubt that ChatGPT could become 100% reliable at anagrams (it searches for patterns and already can do quite well!). What is relevant here is that these answers are not generated reliably through a form of deductive scrambling. When presented with "toonin", I made a list of smaller morphological components (in, on, not, tion), but ChatGPT's answer of "nation" suggests an entirely different method of reasoning (maybe in training it encountered an anagram of nation? maybe its statistical weights prefer nation over notion? I have no idea!)

3. This final point is fundamental to Hofstadter's book and career: cognition involves analogical and fluid reasoning. An intelligent mind is able to recognize analogies in deep ways. It does not simply apply the exact same logic to another situation but will apply a similar logic to similar situations in slippery ways. An intelligent mind makes analogies between concepts, not just between attributes. Take, for example, a simple case of analogy:
I change efg to wfg. Can you do the same thing to ghi?

What would be the best analogy? One answer might be "whi". Simply change the leftmost side of the sequence of letters to a w just as efg to wfg changes the leftmost side (e to w). But another more subtle answer might also be "ghw". In that case, we recognize that in "efg", g is the rightmost letter while in "ghi", it is the leftmost letter. So by analogy, rather than just replacing the leftmost letter (ghi to whi), we change the rightmost letter (ghi to ghw) in order to preserve the g. In the first answer, we have a clear analogy (leftmost letter changes) but in the second answer we have a more fluid analogy (leftmost letter is analogous to rightmost letter). Analogy can be applied flexibly to inexact comparanda. Analogy is most creative and elegant when it breaks across rigid boundaries like that.
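The two readings just described are easy to state as code (a hypothetical sketch):

```python
def literal_reading(s):
    # change the leftmost letter to 'w', exactly as e -> w in efg -> wfg
    return "w" + s[1:]

def slipped_reading(s):
    # preserve the shared g: its role slips from rightmost (in efg)
    # to leftmost (in ghi), so the change lands on the rightmost letter
    return s[:-1] + "w"

print(literal_reading("ghi"), slipped_reading("ghi"))  # whi ghw
```

The code makes the contrast plain: the literal rule copies a position, while the slipped rule copies a role.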

A fantastic example is this:
I change abc to abd. Can you do the same thing to mrrjjj?

This is a perplexing problem but elegant once the solution is shown. When we change abc to abd, we clearly are just changing the rightmost letter to its successor (c to d). We also recognize that abc is in alphabetical order. So what to do with mrrjjj, which is not in alphabetical order and has doubles and triples of letters? We might naively suppose that we should change the rightmost letter (abc to abd, so mrrjjj to mrrjjk), but this ignores the salient pattern (the repetition of r and j). So maybe we change mrrjjj to mrrkkk? That seems plausible too, but it leaves out the fact that abc is an alphabetical sequence, and we naturally want to find something that correlates between abc and mrrjjj. What then is the deeper analogy at work here? Well, "abc" is in sequential order, just like 1 2 3, and "abd" is just 1 2 4: we have increased the 3 to 4. On the other hand, "mrrjjj" is a sequence of groups of letters increasing in number (1 m, 2 r, 3 j). And so here we find an analogy: one sequence is alphabetical, the other is iterative. The best answer would be "mrrjjjj" (1 m, 2 r, 4 j), just as abc to abd is a change from 1 2 3 to 1 2 4. It's very clever!
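The "deep" answer can be sketched with run-length encoding (hypothetical code, not Hofstadter's program): read mrrjjj as runs of lengths 1-2-3 and apply "successor" to the last run length instead of the last letter.

```python
from itertools import groupby

def runs(s):
    """Run-length encode a string: 'mrrjjj' -> [('m',1), ('r',2), ('j',3)]."""
    return [(ch, len(list(g))) for ch, g in groupby(s)]

def deep_answer(s):
    r = runs(s)
    ch, n = r[-1]
    r[-1] = (ch, n + 1)  # "successor" applied to a run length, not a letter
    return "".join(ch * n for ch, n in r)

print(runs("mrrjjj"))        # [('m', 1), ('r', 2), ('j', 3)]
print(deep_answer("mrrjjj"))  # mrrjjjj
```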

In Hofstadter's program, a series of codelets would search for patterns (recognizing that abc is an ordered sequence, that c to d is a sequential increase, that mrrjjj is not an alphabetical sequence but a sequence of iterations); codelets would then try to find analogies to bridge the two strings (a is to m, b is to rr, c is to jjj). Note that codelets might propose possible but unlikely analogies (they might suggest changing the final letter, so that the outcome is mrrjjk, or they might look for reverse analogies, such as a is to jjj because they are the smallest elements in their sequences), but these other proposals turn out to be unsatisfactory because they don't bridge the other groups. What the system is looking for is a model that successfully accounts for all the patterns. Unlike ChatGPT, the program is attempting to uncover a deeper analogy, not just between objects but between the concepts underpinning them (ordinal sequences vs. iterative sequences).

ChatGPT turns out to be really bad at this specific kind of abstract analogical thinking. It fails to get these elegant kinds of analogies at all. That said, it can do other kinds of analogies in different knowledge domains. A fun game that Hofstadter suggests is to compare cities. What is the West Point of Maryland? What is the Bloomington of California? What is the Portland of Connecticut? These questions require people to search for deep points of comparison between cities, in terms of population, size, weather, climate, culture, bicycle festivals, military schools or other idiosyncratic curiosa. ChatGPT can be extremely good at this. When I asked it what is the Tasmania of New York City, it proposed "Staten Island", explaining that like Tasmania, it is necessary to take a ferry to get there. On the whole, that is a good answer. I personally thought "Hoboken" because Tasmania feels culturally close to Melbourne and Sydney, yet because it is separated from the continent and is often derided as less urban, it also feels like another country altogether. Staten Island feels right, but my answer is funnier, and it suggests a key difference in cognition—my brain was willing to think of New York City more broadly and flexibly; I could include parts of New Jersey more readily into my conceptual schema to find a very different analogy for Tasmania. Where ChatGPT was rigid and literal, I thought more loosely.

Language models are scaling up. With more training on larger corpora of text, they can develop and refine their parameters so that they produce more accurate information and solve problems more reliably. As of today, however, ChatGPT will hallucinate quotations from novels, make up historical events, and fabricate solutions to mathematical problems; but the point is not simply that language models can make errors. What Hofstadter argues is that these kinds of language models are only simulating intelligence. Even with more sophisticated models that can speak uncannily, with a convincing human tone, we have to test not their capability for stylistic imitation but their actual ability to perform mental processes. Can they perceive patterns in multiple ways? Can they think up deep analogies creatively? Can they validate their answers? Is their process recursive and self-monitoring? It is important to carefully probe artificial intelligence and not be mesmerized by the specter, becoming victims of the ELIZA effect.
Mommalibrarian
903 reviews · 62 followers
July 31, 2014
Today I officially cry UNCLE! I read and skimmed 273 of 491 pages. The subject is artificial intelligence. The author builds a series of computer programs to solve rigorously defined but not automatically solvable logic games. We get to play these games with pencil and paper but do not see the actual code. There is a lot of discussion of the problems other AI researchers have chosen and whether the solutions they have come up with constitute AI to any degree. Some chapters are, verbatim, papers which the author had previously published in scientific journals. It is heavy sledding, and I did not find enough illumination for my feeble intellect to invest further time. I picked the book up because I own it. I cannot remember when or why I purchased it. I am making an effort to read the books I own now. We will see how long that can last.
Eric Hamen
4 reviews · 4 followers
August 17, 2007
One of the denser of Hofstadter's books. Ruminations on thought vs. program. Not light reading, but no more difficult than Gödel, Escher, Bach. Took me a good while to finish, but I am very glad I did.
Eliza
123 reviews · 11 followers
January 15, 2021
I picked up FC + CA after being completely blown away by Hofstadter's acclaimed Gödel, Escher, Bach. However, I was admittedly a bit disappointed by this book. Though Hofstadter warns in the preface that the chapters are meant to overlap, I found the repetitive nature of the book a bit of a roadblock for someone determined to read it cover to cover. Still, the nuances between the research projects, and how a group of core ideas could be applied in a multitude of variations, were of some interest.

I ultimately decided to promote this rating to 4 stars over 3 stars because of Hofstadter's commentary on "hollow" connectionist models. As neural networks continue to be the hot topic in today's ML/AI communities, Hofstadter's criticism that they are merely quite skilled at manipulating language without truly understanding the concepts/data they work with is worth thinking on. To use his technical jargon, they succumb to the ELIZA effect (presenting the illusion of being more intelligent than they really are) and would surely not pass the Turing test! However, since FC + CA was written 30 years ago, his concept of neural networks is a bit outdated. It would be interesting to hear his take on modern generative adversarial networks, as Hofstadter makes the bold claim that connectionist networks will never be able to generate unique compositions in the style of known works!

Hofstadter proposes an architecture that [he claims] does not fall prey to the weakness of the ELIZA effect. Whether or not Hofstadter truly succeeds in his claim of higher intelligence is up to the reader (though I think I would grant that the answer is yes); regardless, his unique way of thinking, execution, and presentation is a welcome beam of light in academia. I found chapter 3, the description of the Jumbo model for solving anagrams, to be of most interest, since it contained one of the more detailed technical descriptions of an implementation (spoiler: he does not use an exhaustive brute-force method!). His proposed parallel terraced scan, plus the idea of many minute stochastic decisions creating a cumulative deterministic effect, is quite flexible and could be used in many areas of research outside his own.
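The parallel terraced scan the reviewer mentions can be caricatured in a few lines (a hypothetical sketch, not Hofstadter's implementation): many candidates get a cheap look, and only the survivors earn deeper, costlier evaluation at the next terrace.

```python
def terraced_scan(candidates, score, stages=3, keep=0.5):
    """Each stage re-evaluates the surviving pool at greater depth (cost),
    keeping only the most promising fraction for the next terrace."""
    pool = list(candidates)
    for depth in range(1, stages + 1):
        pool.sort(key=lambda c: score(c, depth), reverse=True)
        pool = pool[: max(1, int(len(pool) * keep))]
    return pool[0]

# With a noise-free promise function the deepest terrace settles on the best.
print(terraced_scan(range(10), lambda c, depth: c))  # 9
```

In a FARG-style system the score at shallow depth would be a cheap, noisy estimate and the many small evaluations would run stochastically in parallel; this sketch keeps only the staged-allocation idea.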

Hofstadter is faithful to his primary goal of modeling and respecting the emergent properties of human creativity in a system, not just focusing on the end product of his programs. Hofstadter at his core is a cognitive scientist. Much of his critique of neural networks stems from the fact that they do not respect the central feedback loop of creativity that humans are gifted with, and do not clarify the inner workings of the human mind. Is this true?
Katherine Green
12 reviews
October 4, 2017
How to build a fragment of something that does something quite a bit like actual thinking, at least about analogy puzzles.
Brian Powell
194 reviews34 followers
September 10, 2020
Provocative exploration of computer models of human creativity. Given today's strong emphasis on perception within AI, Hofstadter's book is a refreshing departure into the mysteries of mental fluidity and analogy, key bellwethers of human intelligence.

This book is a collection of articles by Hofstadter and his fellow researchers that describe a series of computer programs developed to model the mechanics of analogy-making and creativity. At times I'd lament the level of detail, thinking "I don't really need to know how this specific program does what it does to understand how human minds identify the key concepts underlying novel analogies"; but, in hindsight, the level of depth is really spot on -- without it, the whole project would have been too high-level, hand-wavy, and unconvincing.

This work stands in contrast to much modern-day work in AI (like deep learning), which, importantly, does not attempt to model human thought and perception. Though much influenced by human neurology, these systems are developed to achieve some particular engineering task (e.g. recognize stop signs, or translate spoken French to English), not to emulate any particular human mental characteristic. Even generative connectionist models, which might be considered "creative", are vastly different from the kinds of models discussed in this book, which really try to get at *how* human minds originate ideas. Unless novel ideas and analogies really are random samples from complicated distributions in our brains, human creativity is *not* well modeled by generative networks (though, indeed, it seems at best to be well *imitated* by them). It seems likely that Hofstadter's take, which is utterly original and skirts the boundaries of traditional connectionist and symbolic AI, is the more believable proposal.

This work occupies an important (and increasingly less-explored) corner of the AI enterprise, and remains a key contribution to the field.
Dax
72 reviews1 follower
January 2, 2009
Ugh. Note that I started this book in 2006 and am still trying to get through it. Douglas Hofstadter is most famous for Gödel, Escher, Bach, and I haven't gotten through that one fully yet either. However, I am intrigued by his continued search for "I". He has dedicated his life to unraveling how we think, looking at it through the lens of various fields: physics, philosophy, biology, mathematics, and, in this book, computer science. I appreciate Hofstadter because his breadth is tremendous. I'm reminded of a Richard Feynman or even a Robert Pirsig, although I think both are probably better writers.
Michael Guyer
39 reviews2 followers
May 24, 2017
Thought about giving this 3 stars, but because I find Hofstadter's writing so readable I bumped it up to 4. Several interesting projects are detailed and discussed in this collection, and I largely agree with Hofstadter's assessment of what constitutes real decision making in a program designed to carry out some "intelligent" task such as analogy making. But the projects and their underlying architectures are all quite similar, and thus grow a bit boring to read about by the end. This does reinforce many of the key ideas of these projects, but as a reader I found it tedious to finish. I feel I could have taken just as much away from this book by reading a handful of chapters instead of reading it cover to cover.
Bria
938 reviews77 followers
November 3, 2020
On the one hand, I think Hofstadter is probably roundly vindicated by the last few decades of AI research. Having small systems work together stochastically for emergent results seems more in line with the direction we've gone since then, and his resistance to hand-coding specific relationships has certainly been borne out to be right. However, I imagine he's not terribly satisfied with the progress in deep learning and neural networks, as they are, more than ever, completely mysterious black boxes to us, and thus don't do as much as he'd probably like to demonstrate how intelligence - ours or a computer's - actually *works*.
Dave Peticolas
1,377 reviews45 followers
October 8, 2014

A collection of papers authored by Hofstadter and the members of his Fluid Analogies Research Group, this book presents the results of many years' work in cognitive science research.

Also included are a couple papers pointedly criticizing some other approaches in the field, the main criticism being a failure to model human cognition in a realistic way.

Hofstadter is always worth reading and this collection is no exception to the rule.

55 reviews13 followers
December 19, 2008
This book examines in some detail an interesting architecture for tackling problems that might otherwise seem unapproachable.
It goes into sufficient detail that a competent developer could probably reproduce the solutions.
However, it is largely a collection of published papers and hence repetitious to a degree that annoyed me.
Frank
917 reviews44 followers
August 11, 2009
In depth look at some of Professor Hofstadter's recent and very original research. There is also an essay raising some interesting points about the difficulty in assessing the quality of work in artificial intelligence. (Prof H is too diplomatic to say so, but he obviously questions the promise and usefulness of many standard fields of exploration).
Philip Chaston
397 reviews1 follower
February 17, 2017
A long book (a month's read on the morning train, after the Economist) and a bridge between instantiating puzzles within microdomains and how we are creative. Exploring his programs reminds me that we should stop, take stock, and understand that simple questions, a child's questions, are keys to unlocking the insight of how we use analogy, metaphor, and the new.
Klenk
115 reviews4 followers
November 10, 2008
The focus on small hard problems in tiny domains is something I am growing more sympathetic to. Also, his indictment of symbolic cognitive science is a must-read for people who work in the field.
23 reviews3 followers
August 15, 2010
The most focused and concrete of Hofstadter's books. Also somewhat dryer, but definitely with interesting contents. Particularly for those interested in novel approaches to analogical reasoning in AI.
2 reviews
June 19, 2011
Very interesting book if you like to know about some of the structural aspects of artificial intelligence programs. Hofstadter works in domains which are very small, but interesting because of their playful nature.
8 reviews
December 19, 2008
You know you want it. All my wonderful AI fascinated friends, you know you want it!
Gavin
Author 2 books560 followers
September 7, 2019
One of a pile of Mind books I grabbed desperately for a first-year philosophy essay. Did not understand it (naturally that didn't stop me citing it). Will have another go some day
David Jacobson
319 reviews17 followers
September 2, 2018
As in all of his books I've read, Douglas Hofstadter (and co-authors) raise fascinating questions about the nature of human thought as a high-level language of the brain. Here they present a series of (now-outdated) artificial intelligence experiments in which they attempt to model human cognition in drastically restricted domains; e.g., the domain of word-scramble puzzles, the domain of numerical sequences, etc. In doing so, they expose the myriad complexities of modeling creative thought even in such simple cases; they point out, both implicitly and explicitly, that AI projects that claim to tackle larger, "real life" domains (they give examples from the time, but modern examples based on "machine learning" abound) must give up any attempt at genuinely modeling human thought processes in order to make any progress. In light of this, and in light of the tremendous progress in such brute-force AI over the past twenty years, it would be fascinating to read an updated version of this book.

While the subject matter of this book is fascinating, its format is unbearable. It is a collection of essays—so often the haven for lazy authors who want to jam what they have already written into book form. Since many of the essays included are technical papers describing different particular AI projects that came out of Hofstadter's group, they continually explain the same concepts over and over. In this way, rather than learning more and more as you read through the book, you instead learn most of what they have to say in the first couple chapters; the ending comes not as a revelation but as a relief.
Nattapon Chotsisuparat
Author 1 book5 followers
April 15, 2024
Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought by Douglas Hofstadter. I want to mention two things. First, this was the first book ever sold by Amazon.com. Now, in 2024, Amazon is a big tech company with a market cap over $1 trillion, having grown from a mere bookseller into various high-tech industries. So this book has historical significance. Second, this book is about AI and was published in 1995. Over 29 years later, in 2024, we have generative AI: ChatGPT, Gemini, the Perplexity AI search engine, and Copilot. Everything has changed so much. AI technology has really developed, so this book is somewhat outdated; it was published in 1995, and many of the papers it cites date from the 1970s and 1980s. Still, it's a good book for seeing how AI technology has developed over the years.
Serdar
Author 13 books33 followers
July 1, 2018
The three star rating is not a reflection of the book's quality, just that it's highly specialized. This isn't the total tour de force of "Gödel, Escher, Bach", but a series of discussions about the mechanisms of analogy and how that is a cornerstone of what we could call intelligence. Absorbing if again not quite the bolt of lightning "G,E,B" was, but what else could be?
LaanSiBB
305 reviews18 followers
April 13, 2020
Even set against modern AI development, this book remains valid for understanding human ability and mentality in a broader sense. Machine learning already shapes everyday activities; it would be worthwhile to revisit the original goal of replicating a human mind (including emotion and the like), and to review the models we have for their depth and representation.
Liedzeit Liedzeit
Author 1 book102 followers
June 5, 2018
Not nearly as good as the other works. Put another way: quite dull on the whole, although naturally there are a few flashes of wit to be found.