The intriguing aspect of the human mind is what it is NOT capable of doing.
We are confident of knowing a lot of things that, upon closer scrutiny, we
don't know at all. Most people don't know why pressing a switch turns on
a light, nor even why striking a match lights a fire.
The authors cite Frank Keil's experiments, which showed that we live
under the illusion of knowing things that we don't actually know.
Our memory obviously does not store every single bit of information like
a hard disk: we memorize only what is useful.
Thomas Landauer estimated the total size of human memory at roughly 1 gigabyte
(see the back-of-envelope sketch after this paragraph).
The world is infinitely complex: it would make no sense to try to memorize
every single detail. We memorize very little, but that tiny portion is
what we need to act effectively in that complex world: we create abstractions
that can be used and reused, fine-tuned and expanded.
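As a rough illustration of how an estimate of this kind can be made (this is my own back-of-envelope sketch, not the authors' calculation; the retention rate is an assumed figure, loosely based on Landauer's reported rate of roughly one or two bits retained per second of waking life):

    # Back-of-envelope sketch of a Landauer-style estimate of lifetime memory.
    # The retention rate is an assumption for illustration only.
    bits_per_second = 2                    # assumed net retention rate while awake
    waking_hours_per_day = 16
    years = 70
    seconds_awake = years * 365 * waking_hours_per_day * 3600
    total_bytes = bits_per_second * seconds_awake / 8
    print(f"{total_bytes / 1e9:.1f} GB")   # about 0.4 GB, the same order of magnitude as 1 GB

Such a calculation obviously ignores how memories are actually encoded; the point is only that the order of magnitude matches the figure quoted above.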
The authors emphasize that thinking is in the service of acting: thought evolved because bodies need to act in the environment. First, life evolved action, and then it evolved more and more sophisticated forms of action. Thought is a sophisticated way to decide which action to perform. (Personally, i feel that this is not framed correctly: thought evolved like everything else in nature, more or less randomly, and then the thoughts that were detrimental got pruned away while the thoughts that led to beneficial actions got reinforced).
Our thinking is about "causes": we understand causes and effects, and that is the basis of most of our direct knowledge of the world and of what we need in order to act. Human reasoning is causal: it is not the propositional logic of true/false, it is the logic of causes and effects. This logic applies both to inanimate objects (physics) and to other people (psychology). Causal reasoning allows us to plan the future, even the very distant future. We can also reason backwards, from effects to their possible causes. The passion for telling stories is also a by-product of our causal reasoning. We are also very good at counterfactuals, at imagining situations that did not happen but could happen. The authors distinguish between intuitive and deliberative decisions: the intuitive ones are fast but often misguided; the deliberative ones require thinking carefully and typically using knowledge distributed in society.
The authors argue that human intelligence does not reside in the individual mind but in the collective mind of human society. Our intelligence is spread outside our body: we have books, the Internet and fellow humans. Human society is a community of knowledge. We collaborate to accumulate knowledge that any individual can use. A lot of knowledge is actually hidden inside the tools that we use, for example the computers that pilot a plane. An individual's "intelligence" depends more on her ability to use the intelligence outside her brain than on the innate intelligence of her brain. The authors write that "the brain is in the mind", not vice versa, meaning that the mind encompasses more than the individual brain, starting with the rest of the body and extending outside the body. I think that Richard Dawkins expressed this better and more generally in his book "The Extended Phenotype" (1982).
Inevitably we are given the example of the beehive, the collective intelligence of thousands of not-so-intelligent bees, and then of prehistoric hunting, the archetypal collaboration. Just like bees, humans benefited greatly from collaborating, and so societies became ever bigger and more complex. The authors therefore suggest that the myth of the lonely "nerdy" scientist is just that: a myth. Most scientific progress comes from the collaboration of a multitude of scientists over generations.
However, later they confess that a study found that people who routinely consult the Internet to get informed on medical topics are no more knowledgeable than people who never do. It doesn't dawn on the authors that human society contains as much disinformation as knowledge. For every historian who tells a well-researched story there are countless unfounded legends (which sometimes spread even faster and with more practical influence). Hence a premise of their argument (that sharing knowledge makes us smarter) may not be true.
Even more interestingly, they point out that belief in false "causal models" of how things work creates highly polarized opinions, precisely the kind that spread on the Internet. Psychological experiments show that people fear genetically-modified foods or vaccinations based on false causal models. The authors don't spend enough time discussing how ignorance also makes us more vulnerable to propaganda: the less i know about something, the more likely i am to be influenced by campaigns funded by special-interest groups.
They correctly point out that Artificial Intelligence may have failed to create a super-intelligence, but the "hive" of the Web has created one: there is a virtually infinite amount of knowledge on the Web that we can use, and all that knowledge is, for practical purposes, the equivalent of a super-intelligence. They point out that machines don't understand intentions whereas we humans do, and that remains the biggest gap between machine intelligence and human intelligence. That is actually a scary situation: computer-based systems are getting more and more complex, managing more and more of our lives, but they are clueless about our "intentions": they are clueless about the goal that we want to achieve by building them. Super-intelligence is not in the power of futuristic AI-equipped machines, but rather in the power of connecting people. The Web is super-intelligence (in fact many subsets of the Web are, from websites that behave like encyclopedias to websites that store book reviews, from websites that store advice on how to fix things to websites that store statistics). This super-intelligence is a by-product of allowing people to share knowledge, so that each individual can use the knowledge of all other individuals. It is more than a beehive: bees are slaves that simply partition labor but acquire no knowledge. Our physical and virtual libraries are a way for all humans to exploit the knowledge acquired by all humans, present and past. Key to this "super-intelligence", which is really a community of knowledge, is making collaboration easier and more efficient.
The book ends with a method to measure group intelligence, as opposed to individual intelligence. Group intelligence matters more to the authors than individual intelligence because we mostly work in groups, not alone. They emphasize that intelligent acts are performed by groups, not by individuals: the individual intelligent act is the result of interacting with others. Intelligence is not a property of individuals but a property of teams. Therefore they advocate an educational reform leading to an education that trains people to rely on the knowledge of others.
I respect every study the book mentions, but i feel that something is missing. Whenever i have worked in a team, there was someone who was the main "intelligence", who was leading. The team's intelligence really depended on having at least one person who was individually intelligent. In other words, the intelligence (and knowledge) of the individual members has an impact on the intelligence of the group. To wit, a group of Harvard scientists is more likely to come up with a groundbreaking scientific theory than a group of Nashville country singers. At a personal level, i confess that i am always reluctant to work in teams.
Whenever i work in a team, i feel that the team is slowing me down, that things could be achieved much faster (not necessarily better, but faster) if i did them alone, interacting with the others from a distance (e.g. only by email or telephone). I guess the authors would reply that this is exactly the "illusion of knowledge" that the book is about. Maybe there is also an "illusion of omnipotence" in some of us. There are simple facts, like the speed at which one types on a keyboard or how often one needs a coffee break, that can have a negative impact on team intelligence. If something can be achieved reasonably well in five minutes or perfectly well in one week, which of the two options is better? I have often had the feeling that working in a team produces a better result but at a huge cost, while working alone yields a more modest result much faster (and the team can then catch up with an improved result). It's hard to convince me that i would have achieved more in life if i had worked more in teams. My feeling is the opposite: i would not be what i am but, most likely, a faceless employee in a big organization. Then again, the authors may argue that a faceless employee in a big organization is better than what i am. I guess the book lacks, at the beginning, a fundamental definition of the ultimate goal: do we want to raise new Leonardos or better factory workers? That said, i have nothing against the general idea of group intelligence: my intelligence is certainly in large part the intelligence of all the people who came before me and of those who are around now and provide me with so much key knowledge. I am just not sure that they need to be physically in the same room with me.
Addition of 2021: The chapter "Thinking About Science" suddenly became very relevant during the covid pandemic, which basically pitted people who believe in science against people who believe in antiscience. This passage sounds prescient: "Antiscientific beliefs are still pervasive and strong, and education does not seem to be helping. Vaccine opposition is a good example where education has been ineffective at changing attitudes. Brendan Nyhan, a political scientist at Dartmouth, and his colleagues ran a study with parents to test whether providing information can increase acceptance of the MMR (measles, mumps, rubella) vaccine. Parents were given specific information in various formats and then asked what they believed about links between vaccination and autism, the likelihood of serious side effects from the vaccine, and the likelihood that they would vaccinate their children. In one case, the information given included a bunch of potentially negative outcomes of failing to vaccinate. In another, parents were shown images of children with measles, mumps, and rubella. In a third, parents read an emotional story about a child who contracted measles. Finally, some parents were given information from the Centers for Disease Control and Prevention debunking the link between vaccines and autism. The results were dismaying. None of the information made people more likely to say they would vaccinate. In fact, some of the information backfired. After seeing images of sick children, parents expressed an increased belief in the vaccine-autism connection, and after reading the emotional story, parents were more likely to believe in serious vaccine side effects. So what's gone wrong? This is the question that has consumed journal articles on the public understanding of science for the last several years.
The answer that has dominated thinking recently is that nothing has gone wrong; the problem is with our expectations. It is the deficit model that is wrong. Scientific attitudes are not based on rational evaluation of evidence, and therefore providing information does not change them. Attitudes are determined instead by a host of contextual and cultural factors that make them largely immune to change. One of the leading voices in promoting this new perspective is Dan Kahan, a law professor from Yale. Our attitudes are not based on a rational, detached evaluation of the evidence, Kahan argues. This is because our beliefs are not isolated pieces of data that we can take and discard at will. Instead, beliefs are deeply intertwined with other beliefs, shared cultural values, and our identities. To discard a belief often means discarding a whole host of other beliefs, forsaking our communities, going against those we trust and love, and in short, challenging our identities."
Some of the book's examples are not clear (for example when they are set in the context of baseball, a sport that is very popular in the USA but not elsewhere).