Edelman, Gerald & Tononi, Giulio:
"A Universe of Consciousness" (Basic, 2000)


Consciousness is a process, as William James said a century earlier, and the authors focus on the neural processes that could be good candidates.

They start out with three assumptions, but these rest on very weak philosophical foundations: the physics assumption (that consciousness is a physical process), the evolutionary assumption (that consciousness evolved from nonconscious life) and the qualia assumption (that feelings cannot be communicated through scientific theories). Many don't believe that consciousness is a physical process, and some of us (me too) believe that degrees of consciousness are widespread in nature. But if the authors limit the definition of consciousness to "human consciousness", then maybe we can all agree. From the physics assumption it follows that only beings with bodies can be conscious. From the evolutionary assumption the authors think it follows that "doing generally precedes understanding". Note the "generally", which basically invalidates the statement. Nonetheless, a few sentences later they use this uncertain statement to criticize approaches based on the computer metaphor, which neglects embodiment and action. They emphasize that logic is not necessary to the emergence of consciousness, whereas it is necessary to the working of a computer. Higher brain functions emerged from biological evolution, not from mathematical logic, and a process similar to Darwinian selection shapes the brain: selectionism precedes logic. Evolution created brains, selectionism shaped brains, and then brains invented and learned logic.

Why are we conscious of some things and not of others? Why are we conscious of the outside temperature but not of our blood pressure, which is a much more important datum?

There is a wild variety of conscious experiences, billions of different conscious states. At any moment we could be conscious of many things in many particular ways, but we are conscious of only one thing in one particular way.

Properties of consciousness include: unity/integration (Tononi thinks that a conscious state cannot be subdivided into independent components, although i can decide to be conscious of this or that, and the "i" that decides seems to be independent of the "i" that obeys), coherence, privacy (only i can feel my conscious states), the ability to focus on one of the many billion possible conscious states (something that gets equated to "informativeness" because it rules out a lot of uncertainty), the ability to construct a conscious scene that is not perceived but only imagined (dreaming, daydreaming or just thinking), and different degrees of intensity. Last but not least, consciousness is related to memory. Tononi is vague about how to treat the feelings associated with a conscious state: the feeling of anxiety, of injustice, of horror, etc. Instead Tononi focuses on the unity and integration of consciousness (again, in my opinion debatable) and identifies the fundamental property of conscious states as the network of relationships "that cannot be fully broken down into independent components". This would be a definition (whether you agree with it or not) if not for the "fully", which makes it a vague, unscientific statement: what is "fully" and what is not "fully"? Furthermore, the authors make the subtle but arbitrary step of turning their "informativeness" (a rather unquantifiable property) into "information" (which is conveniently quantifiable). Their "informativeness" refers to the fact that consciousness is in only one of many possible conscious states at a time, but, without knowing the total number of possible states (infinite?), it is impossible to measure it.
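
To make the point concrete, here is a minimal sketch (mine, not the authors') of the Shannon reading of "informativeness" that the book seems to lean on: the information gained by being in one state out of N equally likely possible states is log2(N) bits, which is precisely why the measure collapses if N is unknown or infinite.

    # A minimal sketch (mine, not the authors'): "informativeness" read as
    # Shannon information, i.e. the uncertainty ruled out by being in one
    # conscious state out of N equally likely possible states.
    import math

    def information_bits(n_possible_states):
        """Bits of information in selecting one state out of n equally likely ones."""
        return math.log2(n_possible_states)

    for n in (2, 1000, 1000000000):
        print(n, "possible states ->", round(information_bits(n), 1), "bits")

    # The catch: the measure is only defined if the total number of possible
    # conscious states is known and finite, which is exactly the objection above.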

Edelman's fundamental thesis is that there is no specific place in which consciousness originates and takes place, there is no "neural correlate of consciousness": consciousness is spread out around the brain. The cortex is responsible for the "content" of consciousness but not for the "being conscious" itself.

There are many different types of neurons in the human brain (70+ in the eye's retina alone) and even within each type no two neurons are alike. The brain is not a "network", nor is each brain region a network: some regions are networks (notably the thalamo-cortical system, although this network is better viewed as a network of networks, a network of specialists), other regions are long loops (notably between the cortex and the cerebellum and the hippocampus), and other regions are fans (the "value systems" that project into the whole brain).

The fundamental process that the authors connect to consciousness is "reentry", a bigger and more complex form of feedback that is not cybernetic feedback, because it consists of many parallel paths and does not correct an "error". Reentry coordinates, synchronizes and integrates the activity of all neural regions. Reentry provides the "binding" necessary to integrate different neural "maps". The rapid integration of the activity of many brain regions through "reentry" is a necessary condition for consciousness. It is not a sufficient condition, though. Consciousness also requires "neural activity that changes continually and is thus spatially and temporally differentiated". Unfortunately, "reentry" is never defined well by the authors, and this sentence is a tautology: how could neural activity not change and not be spatially and temporally differentiated? It is all very vague and superficial, based on words such as "highly", "rapid", "effective", "multiple", etc. that mean nothing at all. They identify the state of consciousness with "distributed, integrated, but continuously changing patterns of neural activity", but is there any neural activity that does not satisfy this definition?

Most of our actions actually happen unconsciously: practice does not only make perfect, it also makes us unconscious of doing what we have done many times before. Therefore conscious states should exhibit something that is truly unique.

Edelman has applied Darwinian thinking to neural activity and this book summarizes his theory of "Neural Darwinism", which is based on discoveries about the immune system: the immune system does not design antibodies; the foreign molecules "select" the best antibody and the immune system simply makes more of them for as long as they keep being "selected" by the attacking foreigners. Edelman points to obvious advantages of following this method that explain why Nature chose it: the same action can be generated by different configurations of neurons because natural selection and neural selection can yield several different structures that perform the same function.

The "values" (a confusing term) are the brain structures that constrain this selectionist process. These "values" (which are not values but brain structures) spread information throughout the brain, and, in particular, information about what is going on in the brain. These nuclei are active only when the brain is awake. Their names are intimidating: noradrenergic nucleus, serotonergic nucleus, dopaminergic nucleus, cholinergic nucleus, etc. These nuclei may also interact among themselves "combinatorially". Edelman believes that these "values" (which are not values but brain regions) determine categorization and action.

A chapter discusses Edelman's theory of memory. Memory is "nonrepresentational" just like an antibody is not a representation of the foreign antigen. A memory is a creative reconstruction of the neural activity needed to repeat an action. Memory is "constructive recategorization". There are thousands of memory systems in the brain: memory is not a brain region but a property of brain regions. Memory is the process by which brain regions collaborate to produce an output similar to a previous output. In general, the neural activity related to the same "memory" will be different at different times. If a few neurons die, the memory of an event might survive precisely because there are ways to reconstruct it that don't depend on the existence of those specific neurons. This makes memory more robust than a simple representation of events. Its capacity is also bigger than the size of the brain, just like a computer program can generate many more statements than the statements that constitute the program itself. Memory is creative, not replicative.
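
The program analogy can be made concrete with a toy sketch (mine, not the book's): a few stored lines can reconstruct far more distinct outputs than they contain, none of which is stored verbatim.

    # A toy illustration (mine, not the book's) of the program analogy:
    # a few "stored" words can reconstruct far more distinct outputs than
    # are stored, none of which exists verbatim in the storage.
    def remembered(seed_words, depth):
        """Recombine a small stored vocabulary into many reconstructed outputs."""
        if depth == 0:
            yield ""
            return
        for word in seed_words:
            for rest in remembered(seed_words, depth - 1):
                yield (word + " " + rest).strip()

    vocabulary = ["red", "ball", "rolling", "slowly"]   # the entire "stored" content
    reconstructions = list(remembered(vocabulary, 3))
    print(len(reconstructions))   # 64 distinct three-word outputs from 4 stored words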

Edelman distinguishes between primary consciousness and higher-order consciousness. The definition of the two remains sketchy: the only thing that we are told is that the latter involves language and semantics, a sense of the self, and the ability to construct past and future scenes. Other animals have primary consciousness. It arises when four features evolve: "perceptual categorization", i.e. the ability to classify sensory signals into categories (carried out in posterior regions of the brain), conceptual memory (whose "concepts" are not the usual concepts but processes that combine the perceptual categorizations of a scene), value-laden memory (in anterior regions of the brain) and reentry, the fundamental process of integration. These four constituents appear in this sequence during evolution. This primary consciousness allows animals to work with a "remembered present". These animals can generate a mental scene in which many items are integrated. This mental scene allows the animal to decide how to behave. The value system of the brain is connected with the regions of the brain involved in forming concepts. Reentry is the integrative mechanism of this primary consciousness, the process that generates the sense of the whole scene, the process that carries out the "binding". Reentry is a process both in time and space: it physically connects different regions of the brain but it also synchronizes them (induces coherence, and it is not clear whether synchronization and coherence are the same thing for Edelman). The terminology is confusing, and the whole picture of the emergence of consciousness is pure speculation.

Things get a little more formal with the introduction of a definition of integration: integration measures the loss of entropy caused by the interactions within brain regions. The next step is to measure the degree to which a neural state is integrated and the degree to which it is differentiated (there are billions of possible conscious states). Another vague definition: a functional cluster is a region of the brain that cannot be fully decomposed into independent elements. (I think that no such region exists). This is a measure of integration. "Neural complexity" is the quantitative measurement of the content of a neural process, i.e. of its degree of differentiation. Another vague definition is used to argue that a high value of neural complexity corresponds to an optimal balance of functional specialization and functional integration. (Specialization is never defined, so one has to assume that it means "differentiation", but, even so, it is not clear what the "optimal balance" corresponds to: a maximum, or just "high values", as Edelman vaguely says). Edelman writes that the normal adult cortex is both "integrated and highly differentiated", and it is not clear why he uses the adjective "highly" only in front of "differentiated" and not in front of "integrated". The complexity of the brain comes from the fact that a brain is a collection of specialists that work together. Later Edelman talks of "functionally specialized groups of neurons", and it is not clear whether these are the "functional clusters". Just like an antigen induces a reaction by the immune system that starts amplifying the production of the appropriate antibody, a sensory stimulus amplifies the "information" already contained in the brain. Abruptly, Edelman concludes that consciousness is due to complexity, and complexity originates from the interaction with the environment.
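
For what it is worth, the entropy-based notion of integration can be sketched in a few lines of code (a toy version under my own assumptions: a handful of binary units with a known joint distribution, nothing like real neural data). Integration is the sum of the entropies of the parts minus the entropy of the whole, so independent parts give zero and interacting parts give more.

    # A toy sketch (my assumptions, not the book's) of entropy-based integration:
    # I(X) = sum of the entropies of the parts minus the entropy of the whole.
    # Independent units give I(X) = 0; an interaction (here, unit 2 copying
    # unit 0) raises it.
    import math, random

    def entropy(prob):
        """Shannon entropy (bits) of a distribution given as {outcome: probability}."""
        return -sum(p * math.log2(p) for p in prob.values() if p > 0)

    def joint_distribution(samples):
        """Empirical joint distribution of tuples of unit states."""
        counts = {}
        for s in samples:
            counts[s] = counts.get(s, 0) + 1
        total = len(samples)
        return {s: c / total for s, c in counts.items()}

    def marginal(prob, i):
        """Marginal distribution of unit i from the joint distribution."""
        m = {}
        for state, p in prob.items():
            m[state[i]] = m.get(state[i], 0) + p
        return m

    def integration(prob, n_units):
        """Sum of single-unit entropies minus the entropy of the whole system."""
        return sum(entropy(marginal(prob, i)) for i in range(n_units)) - entropy(prob)

    random.seed(0)
    # Three binary units: unit 2 copies unit 0 (an interaction), unit 1 is independent.
    samples = []
    for _ in range(10000):
        a = random.randint(0, 1)
        b = random.randint(0, 1)
        samples.append((a, b, a))
    print(round(integration(joint_distribution(samples), 3), 2))   # ~1.0 bit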

At any given time, most of our cognitive life is made of automated routines that do not contribute directly to our conscious experience, i.e. only a subset of the brain contributes to that conscious moment. A region contributes to a conscious state only when it becomes part of the "dynamic core" of the brain. This is a highly-differentiated (i.e. highly complex) functional cluster, typically generated within the thalamocortical system. It is a cluster of neuronal groups that, for a fraction of a second, interact strongly among themselves (through "reentry") and weakly with the rest of the brain. Edelman insists that this is a process, although it sounds like a place, albeit a place that may change within a fraction of a second. The same group may at one time be part of the dynamic core and contribute to consciousness, and a moment later not be part of the dynamic core and contribute to unconscious behavior. Some regions never seem to be part of the dynamic core, for example the regions of the brain that relate to blood pressure. Consciousness arises because of groups that, for a fraction of a second, interact much more strongly among themselves than with the rest of the brain. The dynamic core gets more and more complex as the body develops and interacts with the world.

The dynamic core can be viewed as a high-dimensional space, each dimension corresponding to a neuronal group that is part of the dynamic core for that fraction of a second. The dynamic core can be in a large number of states. Qualia are all the conscious states that can be discriminated. Every "discriminable" point in the n-dimensional space of the dynamic core corresponds to a conscious state. The stream of consciousness is a sequence of points, i.e. a line, in that space. Qualia are high-dimensional discriminations. The authors emphasize that a conscious experience "discriminates", i.e. rules out, a lot of other possible conscious states. (Later the definition gets more convoluted: "Qualia are higher-order categorizations by the self of the conscious experiences of that self that are mediated by the interaction between value-category memory and perception"... whatever that means).
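
The geometric picture can be rendered as a toy sketch (mine, with invented numbers and an invented threshold): the momentary state of the dynamic core is a point in an n-dimensional space, one dimension per neuronal group, the stream of consciousness is the trajectory traced by successive points, and "discrimination" would presumably be whatever measure decides that two points count as different conscious states.

    # A toy geometric sketch (mine, with an invented threshold): the dynamic
    # core state as a point in n-dimensional space, one dimension per neuronal
    # group, and the stream of consciousness as a trajectory of such points.
    import math, random

    N_GROUPS = 5                      # dimensions = neuronal groups in the core (hypothetical)
    DISCRIMINATION_THRESHOLD = 0.3    # invented: minimum distance to count as a different state

    def distance(p, q):
        """Euclidean distance between two core states."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    random.seed(1)
    # A short "stream of consciousness": successive core states drifting in the space.
    stream = []
    state = [random.random() for _ in range(N_GROUPS)]
    for _ in range(10):
        stream.append(tuple(state))
        state = [x + random.uniform(-0.2, 0.2) for x in state]

    # Count how many successive points are "discriminable" from the previous one.
    discriminated = sum(
        1 for prev, curr in zip(stream, stream[1:])
        if distance(prev, curr) > DISCRIMINATION_THRESHOLD
    )
    print(discriminated, "of", len(stream) - 1, "transitions count as new conscious states")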

But this "discrimination" is never really explained, nor defined. It is not difficult to believe that there are potentially billions of possible states of consciousness, and that only one actually happens, and that this one therefore "discriminates" all the others that never happened: but what makes this one different from the others so that this one actually happened? It looks like every point in the n-dimensional space should be equally capable of becoming a conscious state. It seems inevitable that something must "discriminate", or, better, select one particular conscious state out of billions that are possible at any given moment. Basically their "dynamic core" hypothesis tells us what we already knew: i could be conscious of an infinite number of things but at any given time i am conscious of only one, which means that something in my brain creates a conscious state while the neurons are in that configuration at that given time. After reading this book, we know exactly what we knew before: that we don't know how (and why) brain processes turn into conscious states.

The effect of this integration via reentry in the dynamic core is a "remembered present". This is a highly informative state because it rules out billions of other possible states, and this helps the brain make the next move. The dynamic core is created in a fraction of a second. The corresponding conscious state can access a wealth of information that is now integrated in one "scene" and can plan accordingly, all in a fraction of a second. Hence the evolutionary advantage of primary consciousness.

Higher-order consciousness emerges when language gets integrated with the dynamic core. Judging from their wording, the authors believe that higher-order consciousness emerged only once in evolution, with Homo sapiens, and that it is clearly differentiated from lower-order consciousness. In other words, a being is either lower-order conscious or higher-order conscious, and nothing in between. Once the brain system for language was integrated with the dynamic core, the self emerged. In other words, higher brain functions require interactions both with the world and with other persons.

Edelman's model of the evolution of consciousness is a mixture of the internalist model (you think before you can express your thoughts in a language, and language emerges later) and the externalist model (language creates the conscious self). Internalists have a first-person view, externalists have a third-person view. Edelman's primary consciousness is the consciousness that internalists talk about. Edelman's higher-order consciousness is the consciousness that externalists talk about. Primary consciousness creates concepts, but they cannot be expressed in primary consciousness because language does not exist yet. Concepts precede language. Once language is integrated, then feelings and values get connected.

Philosophically speaking, the authors like to talk of "qualified realism". They discuss being vs describing (consciousness can only be experienced, and any description cannot fully "describe" it), selection vs logic (systems can be built using either, but selection came first), and doing vs understanding (a body first behaves and then thinks).

The authors are neither dualists (mind and body are not two completely different substances) nor monists (mind is not body and body is not mind). But they sound like dualists to me.

Alas, Edelman's theory is all about "mostly", "largely", "highly", "sufficiently", etc. There is never a clear-cut statement about what does what, and when, and how.

The book advances a theory for what causes the conscious feeling that we experience, but fails to explain how matter can turn into feelings. For all the properties of consciousness that they list, the authors fail to grasp the essence of consciousness: I "feel" that i am myself.

See also Tononi's "Phi, A Voyage from the Brain to the Soul" (2012).