10th September 2024
The “hard problem” of consciousness is not science: understanding bacteria, animal, human and machine consciousness
Introduction
What is human consciousness? Do other animals experience something similar? Could machines reach a level of consciousness equalling or exceeding humans? Questions like this have encouraged the proliferation of large numbers of theoretical frameworks offering models for how consciousness works, even though just the definition of consciousness is a major source of controversy.
There are philosophers who argue that some aspects of consciousness will always be beyond scientific explanation. They claim that the inner qualities we experience when we see the moon reflected on a lake, hear the waves breaking on the shore, or smell a rose cannot be accounted for in scientific terms. These kinds of qualities are sometimes called phenomenal consciousness. The so-called hard problem of understanding them was captured in Nagel’s question “What is it like to be a bat?” Asking this question led Nagel to conclude that consciousness is only present in an organism if there is “something that it is like” to be that organism, and what it is like is not knowable for any other organism.
It is certainly true that we can never experience what someone else experiences, but this has nothing to do with whether the experiences can be understood in scientific terms. Nagel’s scientific dead end is one reason for the proliferation of complex scientific theories, but the scientific reality of consciousness is much more prosaic. It is actually straightforward to understand what consciousness is, how it is supported by current knowledge of brain functioning, and how it differs between different animals.
This is not to say that such a fundamental understanding of consciousness is also an understanding of all the rich range of human experience using consciousness, any more than understanding how a television works is also an understanding of the meaning of all the programs that can be viewed on the screen. However, a fundamental understanding of consciousness provides a solid platform for a wider understanding.
Consistent internal activations in response to similar experiences
The starting point is to recognize that a basic property of life is the ability to detect circumstances in the environment, to determine any similarities between the current circumstances and preprogrammed or previously learned circumstances, and to use any such similarities to determine behaviour. This property exists on a vast range of different levels of complexity. At the simple end, E. coli bacteria can detect the presence of glucose in their environment, and turn on the production of the enzymes that can extract energy from glucose. In the presence of lactose, the bacteria can turn off the production of those glucose processing enzymes and turn on the production of other enzymes able to extract energy from lactose. At a much more complex level, a human can detect the presence of a dog in the visual environment and select an appropriate behaviour, despite the huge variation in the appearance of dogs and the existence of other objects like wolves, coyotes and foxes that are visually similar but require radically different behaviours.
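The E. coli behaviour just described can be sketched as a trivial decision rule mapping a detected environmental condition to a behaviour. This is a minimal illustration following the text above, not a biochemical model; all names are invented:

```python
# Minimal sketch of the E. coli behaviour described above: detect which
# sugar is present and switch on the matching enzyme production.
# Names are illustrative, not a model of the underlying gene regulation.

def select_enzymes(environment):
    """Map the detected sugars to the enzyme systems switched on."""
    if "lactose" in environment:
        # Lactose detected: glucose-processing enzymes are switched off
        # and lactose-processing enzymes switched on.
        return {"lactose_enzymes"}
    if "glucose" in environment:
        return {"glucose_enzymes"}
    return set()
```

The point of the sketch is only that detecting a similarity between current and preprogrammed circumstances is sufficient to select a behaviour, with no further machinery.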
A corollary of this fundamental property is that similar environmental circumstances must trigger “similar” internal states in an organism. However, “similar” in this case is some combination of physical similarity and simultaneous past activity. In other words, internal similarity can range from “almost identical” to a statistical “similarity” across large populations of detection units. For example, in the case of neurons, similarity may correspond with activity of a significant subset of a specific but mostly inactive population of neurons that have often been active at similar times in the past. If in two experiences significant but different subsets of that population are active, the experiences are similar even if the overlap between the subsets is fairly small. Each significant subset encourages similar behaviours.
Similarity between two activation states in a complex system can be activation of different subsets of a defined population. As illustrated conceptually, the neurons within the red boundary are all sometimes activated in response to seeing a dog. However, those neurons may also be activated in response to other types of objects. The two groups of neurons within the green boundaries are those activated in response to seeing two different dogs. The majority of the neurons in these groups fall within the red zone. The neurons within the blue zone represent a group of neurons activated in response to a non-dog object. Some ‘dog’ group neurons are activated, but these form only a small proportion of the blue group.
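The overlap picture in the caption above can be made concrete with a toy computation. The populations and unit counts below are invented for illustration:

```python
# Toy illustration of similarity as overlapping active subsets
# (the red/green/blue zones in the caption above). All numbers invented.

def proportion_in(active, category_population):
    """Fraction of the currently active units lying inside a category's
    specific but mostly inactive population."""
    if not active:
        return 0.0
    return len(active & category_population) / len(active)

dog_population = set(range(100))   # the red zone: units ever active for dogs

dog_a = set(range(0, 60))          # seeing one dog (a green group)
dog_b = set(range(40, 100))        # seeing a different dog (the other green group)
fox = set(range(90, 150))          # a non-dog object (the blue group)
```

Here `dog_a` and `dog_b` share only 20 units with each other, yet both lie entirely inside the red zone, so both read as “dog”; the fox group falls almost entirely outside it.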
It is the consistency of internal activation patterns that is the basis of consciousness. When an organism experiences something, the experience generates an activation state within the organism, and similar experiences generate similar activations. The experiences of these consistent activations are what we label “conscious experiences”. Note that from a scientific point of view, the question “why does consciousness feel as it does and not something different?” is meaningless: the feeling is simply the subjective description of the consistent internal activation.
The complexity of such consistent activations varies immensely between different types of organism. Although there are consistent activations in a bacterium like E. coli, and in this sense the bacteria are conscious, the activations are orders of magnitude less complex in information terms than the consistent activations in a human brain. Furthermore, a human can verbally describe these internal activations, and the verbal descriptions can interact with those activations, adding even more complexity.
Consistent internal activations in a human cortex
In a human cortex, some pyramidal neurons get inputs fairly directly from senses that respond to the external environment. Other pyramidal neurons get inputs from those sensory neurons, yet other neurons get inputs from these secondary neurons, and so on. For example, input derived from the eyes enters the cortex via pyramidal neurons in a cortical area called V1. Outputs from neurons in V1 go to neurons in V2, then diverge into two separate pathways. In the dorsal stream V2 outputs go to V3, then successively on to MT, IP and PP. In the ventral stream V2 outputs go to V4, then successively on to TEO and TE. From these pathways outputs go on to yet higher areas. Information derived from the senses thus drives activity of a population of neurons across the brain. Each pyramidal neuron in the population has been programmed with large numbers of often extremely complex combinations of sensory inputs, and is activated if one of those combinations is currently present. The complexity of the conditions is greater in the higher areas. Conditions in any cortical area have a complexity that is effective for discriminating between circumstances with different behavioural implications, so each area discriminates between different types of circumstances. For example, conditions in the dorsal stream discriminate between different positions, distances and motions of visual objects, and are therefore effective for guiding, for example, motor movements relative to such objects. Conditions in the ventral stream discriminate between different types of visual objects, and are therefore effective for guiding, for example, naming behaviours.
Flow of information from the eyes into the visual cortical areas. Pyramidal neurons in different cortical areas detect groups of conditions with different information complexities, where the complexity of a condition is roughly equivalent to the total number of raw sensory inputs that contribute to it, directly or via intermediate conditions. Information flows initially from the eyes to the lateral geniculate nucleus (LGN) of the thalamus. From the LGN it goes through cortical areas V1 and V2, then diverges into two paths, dorsal and ventral, with condition complexity increasing with progression along the paths. Conditions with different complexity are effective for discriminating between different types of sensory circumstances, and therefore for recommending different types of behaviours. N.B. Technically, separate neurons in V1 and V2 process visual inputs to the two separate pathways.
In areas close to sensory input, neurons detect conditions that are relatively simple combinations of inputs. For example, some neurons in area V1 are activated in response to a boundary between light and dark in a specific location on the retina and with a specific orientation. Any one such neuron could be activated by almost any type of visual object, depending on the positioning of the image of that object on the retina. Areas V2, V4, TEO and TE are activated by more and more complex combinations of conditions detected in earlier areas, with the higher conditions defined by groups of simpler conditions that have tended to occur at similar times. As a result, higher areas like TE can discriminate between different categories of visual objects.
This is not to say that individual neurons in TE correspond with, say, specific categories of visual objects. Rather, the populations of neurons activated in response to instances of one object category are sufficiently similar to each other, and sufficiently different from the populations activated in response to instances of a different category, that the brain can discriminate between different categories. This discrimination means that appropriate behaviours can be selected in response to different types of objects.
Although the combinations of inputs that activate one neuron change over time, the changes are relatively small, and after a change the neuron will still be activated by most of the combinations that activated it in the past. Hence similar sensory inputs tend to result in similar patterns of activation. The conscious experience of seeing, for example, a dog, is the activation of a population of neurons in areas like TE made up largely of neurons that have often been active in the past when dogs have also been active. This type of consciousness is certainly experienced by other animals.
Note also that there may be similarities between more complex circumstances, such as social situations. Neurons programmed with even more complex conditions will be active more often in some types of situation than in others. Activation of such neurons is the conscious experience of the situation. The population of neurons activated in response to some new situation with some similarities to several earlier situations could contain active subsets of the populations active in all of those earlier situations.
Hence the conscious experience of objects or situations is the activation of populations of pyramidal neurons across the cortex that have often been active in the past in response to similar objects or situations.
Behaviours associated with active neuron populations
Each active cortical pyramidal neuron has a range of recommendation strengths in favour of many different behaviours, each recommendation having a different weight. The basal ganglia determines and implements the behaviours with the largest total recommendation strengths across all the currently active pyramidal neurons. Some of the recommendations are in favour of speech production behaviours. These speech behaviours can also be viewed as descriptions of the currently active pyramidal population.
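The selection rule just described can be sketched as a weighted vote over the active population. This is a hedged illustration of the rule as stated, not a model of basal ganglia circuitry; the neurons, behaviours and weights are all invented:

```python
# Sketch of the selection rule above: each active neuron carries weighted
# recommendations, and the behaviour with the largest total recommendation
# strength across the currently active population is implemented.
from collections import defaultdict

def select_behaviour(active_neurons, recommendations):
    """recommendations: neuron -> {behaviour: weight}."""
    totals = defaultdict(float)
    for neuron in active_neurons:
        for behaviour, weight in recommendations.get(neuron, {}).items():
            totals[behaviour] += weight
    if not totals:
        return None
    return max(totals, key=totals.get)

recommendations = {
    "n1": {"say 'dog'": 0.9, "approach": 0.2},
    "n2": {"say 'dog'": 0.7, "say 'fox'": 0.1},
    "n3": {"retreat": 0.5},
}
chosen = select_behaviour({"n1", "n2"}, recommendations)  # "say 'dog'"
```

Note that no individual neuron decides anything; the behaviour emerges only from the totals across the whole active population.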
In the earlier example of the different populations of neurons activated in areas like TE in response to seeing different dogs, all of these populations have a large total recommendation strength in the basal ganglia in favour of saying “dog”.
Indirect activation
A key factor missing from the description so far is that, in addition to activation by sensory inputs, neurons can be activated by other neurons on the basis of past temporally correlated activity. In other words, if a group of currently inactive neurons has been active in the past at times when a group of currently active neurons was also active, then the active group can indirectly activate the inactive group in the absence of any of their programmed sensory combinations. At a very detailed level, direct activations are driven through the basal dendrites of pyramidal neurons, while indirect activations are driven through their apical dendrites.
Such indirect activations are the neural basis of memory and imagination.
In the case of memory for facts or words, neurons are indirectly activated on the basis of frequent past simultaneous activity. For example, suppose that dogs have often been seen in the past at the same time as the word “dog” was heard. A group of neurons directly activated by the auditory inputs will often be active at the same time as a group of neurons directly activated by the visual inputs. Later, when the word “dog” is heard alone, the neurons directly activated in response to the auditory inputs can indirectly activate neurons activated in the past by visual inputs from seeing a dog, generating a ‘pseudoimage’ of a dog.
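A toy version of this indirect activation can be written down directly: build co-activity counts from past episodes, then let an active group recruit inactive units whose past co-activity with it is high enough. The episodes, unit names and threshold are invented for illustration:

```python
# Sketch of indirect activation: counts of past simultaneous activity
# let a currently active group activate units that were often active
# at the same times, without their sensory inputs being present.
from collections import defaultdict

def build_coactivity(episodes):
    """Count how often each ordered pair of units was active together."""
    counts = defaultdict(int)
    for active in episodes:
        for a in active:
            for b in active:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def indirectly_activate(active, counts, threshold):
    """Units whose total past co-activity with the active group reaches
    the threshold become active too."""
    drive = defaultdict(int)
    for (a, b), c in counts.items():
        if a in active:
            drive[b] += c
    return {b for b, d in drive.items() if d >= threshold and b not in active}

# Hearing "dog" often co-occurred with seeing particular dogs:
episodes = [{"aud_dog", "vis_dog1"}, {"aud_dog", "vis_dog2"},
            {"aud_dog", "vis_dog1"}]
counts = build_coactivity(episodes)
pseudoimage = indirectly_activate({"aud_dog"}, counts, threshold=2)
```

With these invented episodes, hearing the word alone recruits the frequently co-active visual units (`vis_dog1`) but not the rarer ones, which is the ‘pseudoimage’ of the text above in miniature.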
Note that neurons close to visual inputs, for example in V1 and V2, are activated in response to all kinds of objects depending on the precise positioning of the image of the object on the retina, and will not have been more frequently activated in response to dogs or any other type of object. Such neurons will therefore not be indirectly activated by auditory inputs, and the ‘pseudoimage’ will not be a visual hallucination.
In the case of memory for events, neurons are indirectly activated on the basis that they changed their conditions in the past at the same time. Almost every experience will have some degree of novelty, and the greater the degree of novelty, the greater the changes needed to neuron conditions. The brain records the identities of groups of neurons that were active when there were condition changes, and can use these records to indirectly activate such groups, especially when the number of changes was large. Hence we find it easier to recall novel experiences. Such indirect activations are experienced as a partial re-experience of the earlier event, but as for memory for facts and words, recalls are not experienced as visual hallucinations.
A typical process by which a memory can be constructed starts with some words spoken that ask about some past event. The directly activated auditory neurons indirectly activate visual neurons on the basis of frequent past simultaneous activity. Provided the words were well chosen, the active population of visual neurons contains a significant subset of the neurons active during the targeted past event. These neurons can then indirectly activate a larger subset, and so on, resulting in a population of active neurons containing many of those active in the earlier event.
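The stepwise reconstruction described above, where a small seed subset recruits a larger subset and so on, resembles an iterative pattern-completion loop. A minimal sketch, with an invented event and co-activity links:

```python
# Sketch of the reconstruction loop above: a seed subset of a past
# event's population repeatedly recruits units that were often active
# with the current set, converging on much of the original population.

def complete(seed, coactive_with, max_rounds=10):
    """coactive_with: unit -> set of units it was often active with."""
    active = set(seed)
    for _ in range(max_rounds):
        recruited = set()
        for unit in active:
            recruited |= coactive_with.get(unit, set())
        if recruited <= active:
            break  # no new units recruited: the recall has stabilised
        active |= recruited
    return active

event = {1, 2, 3, 4, 5}                      # units active in a past event
links = {u: event - {u} for u in event}      # they were all active together
recalled = complete({3}, links)              # a well-chosen cue suffices
```

A single well-chosen cue unit recovers the whole event population here; a poorly chosen cue (a unit with no links into the event) would recover nothing, matching the remark that the words must be well chosen.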
Imaginings are constructed from large numbers of fragments of past experiences, each fragment corresponding with a subset of the neurons active during the past experience. Of course, such a fragment could also be derived from a past imagining.
The stream of consciousness
A uniquely human aspect of consciousness is the frequent experience of a constant stream of mental images with little correlation with current sensory experience. The first process supporting this is the indirect activation mechanism. A stream may start from an active population of neurons driven by current sensory inputs. That population can indirectly activate a secondary population, the secondary population a tertiary population, and so on, each becoming less and less related to the original sensory population. However, such a long chain would also tend to become less and less meaningful in sensory or behavioural terms.
A second process that maintains meaning in long chains depends upon the existence of very strong activation links between words and visual images. As the meaning of the current pseudovisual population declines, it can nevertheless be used to drive production of whatever words correspond with it most closely. These words form internal speech but are generally not physically verbalized. However, through the indirect activation mechanism those words can drive a rather more meaningful visual population. Hence the stream of consciousness is experienced as periods of mental vagueness punctuated periodically by rather sharper images.
Although animal brains certainly possess the indirect activation mechanisms, the absence of complex language capabilities means that animal brains are limited in the length of indirect activation chains that can be meaningfully supported. Hence the stream of consciousness continuing for long periods is uniquely human. This capability is critical for supporting behaviour in the extremely complex societies in which humans live. It also makes possible the creation of tools and artifacts when the toolmaker is in an environment very unlike the environment in which the tool or artifact will be used.
The sense of self
It can often feel as if there is a ‘self’ inside us, separate from the self that is currently experiencing and acting. We can even have dialogues with this internal self, such as asking ourselves “I wonder what I should do now?” This self awareness is another aspect of consciousness. We can imagine ourselves doing all sorts of things, including things impossible in the real world.
The physical basis of this internal self is the activation of a population of neurons often active in the past when our attention was focussed on ourselves. As a child, we often heard our personal name in an environment in which we were doing something. As a result, hearing our name occurred at the same time as many of our environment/action combinations. Later, hearing our name could therefore activate a kind of weighted average of these past experiences. This weighted average population is experienced as an internal self.
As time goes on, this weighted average population has often been active, and if a small subgroup becomes active it will tend to activate a larger group. Hence a significant internal self population will be active most of the time. The weighted average past self can recommend behaviours based on all that past experience, and is valuable for determining behaviour in complex social situations.
Because creation of this internal self is also dependent on speech, and especially on personal names, it will be much weaker in non-human animals.
Types of consciousness
What we have seen is that there are perhaps four different levels of consciousness. Level one is when similar internal activations are generated in response to similar inputs from the environment. Level two is when there are indirect activations on the basis of temporally correlated past activity, which makes the internal activations more complex. Level three is when long chains of indirect activations with useful cognitive and behavioural meanings can be supported. Level four is when there can be activation of a group of units corresponding with a weighted average of past self experiences and behaviours.
Level one could be labelled phenomenal consciousness and is possessed even by bacteria. It could even be argued that if an inanimate object enters similar states in response to similar environmental conditions that inanimate object is demonstrating a very simple degree of phenomenal consciousness. Level two consciousness is probably present in almost all animals. Level three is limited to human beings, although meaningful shorter chains are certainly possible in animals, especially primates and the more intelligent birds like crows. Level four is almost certainly limited to human beings, although some animals appear to be able to use something very similar to personal names.
Machine consciousness
Any computer generates internal activations in response to inputs from the environment, and the activations in response to similar inputs are similar enough to support appropriate responses (i.e. behaviours). Hence any computing system has bacteria-level consciousness.
Current artificial intelligence systems have some capability to activate information on the basis of past temporally correlated activity, but the backpropagation algorithms used mean that the conditions which activate individual units (like perceptrons) change much more than the conditions of pyramidal neurons in the brain. Hence the indirect activation mechanisms are much less effective for supporting second level consciousness, and levels three and four do not yet exist. However, there is no fundamental reason blocking achievement of these consciousness levels in artificial intelligence, given electronic devices or software that better support pyramidal neuron type change mechanisms.
The experience of consciousness
From a scientific point of view, what we call a conscious human experience is the activation of a large population of neurons which record information conditions that have often occurred in the past. Subjective descriptions of the experience can be generated by implementing the speech behaviours most strongly recommended by that active population. Even when two people describe their experience in similar terms, the activated neurons are in physically different brains, and there are very few exact correspondences between the circumstances in which a neuron in one brain is activated and the circumstances in which any neuron in the other brain is activated. Hence there can be no mapping at the neuron level between conscious experiences in two different brains.
Furthermore, even in the same brain, similar sensory experiences may result in significantly different active neuron populations. If a brain on a number of occasions gets sensory inputs corresponding with seeing the colour red, the population of activated neurons detecting relatively simple 'red' conditions may be fairly similar on each occasion. However, the circumstances in which the colour red is perceived on the different occasions will be different, and the neurons detecting more complex conditions corresponding with, for example, the social situation, will be different. Furthermore, depending on exactly which neurons have been activated, neurons will be indirectly activated on the basis of past temporally correlated activity. These indirectly activated neurons could be fragments of past experiences in which the colour red featured in some way. Depending on the exact starting population, these fragments will also be different in different experiences of the colour red. All the populations on different occasions will have a strong total recommendation strength in favour of saying “red”, but other speech recommendation strengths possessed by the population will be different. Hence descriptions of the conscious experience of red on different occasions could be different.
For any two humans describing their conscious experiences in the same way, there is no meaningful mapping between the active populations of neurons in the two brains. Hence at a detailed level the physical activity is completely different. Any similarity is only that similar speech behaviours have been associated with the completely different active neuron populations in the two brains.
The feeling of a conscious experience is simply a higher level description of the active neuron population. From a scientific point of view, to ask why a conscious experience feels as it does instead of something different is meaningless. It is a question on the same level as "Why does something exist rather than nothing?" which delights some of our philosophical instincts but has no answer in any terms humans have so far found.
Much more accessible to scientific inquiry are questions like the nature of the brain activations that support our experiences of the stream of consciousness and self awareness, the limitations of such activations, and whether even more complex forms of consciousness could be possible.
This is not to say that individual neurons in TE correspond with, say, specific categories of visual objects. Rather, the populations of neurons activated in response to instances of one object category are sufficiently similar to each other, and sufficiently different from the populations activated in response to instances of a different category, that the brain can discriminate between different categories. This discrimination means that appropriate behaviours can be selected in response to different types of objects.
Although the combinations of inputs that activate one neuron change over time, the changes are relatively small, and after a change the neuron will still be activated by most of the combinations that activated it in the past. Hence similar sensory inputs tend to result in similar patterns of activation. The conscious experience of seeing, for example, a dog, is the activation of a population of neurons in areas like TE made up largely of neurons that have often been active in the past when dogs have also been active. This type of consciousness is certainly experienced by other animals.
Note also that there may be similarities between more complex circumstances, such as social situations. Neurons programmed with even more complex conditions will be active more often in some types of situation than in others. Activation of such neurons is the conscious experience of the situation. The population of neurons activated in response to some new situation with some similarities to several earlier situations could contain active subsets of the populations active in all of those earlier situations.
Hence the conscious experience of objects or situations is the activation of populations of pyramidal neurons across the cortex that have often been active in the past in response to similar objects or situations.
Behaviours associated with active neuron populations
Each active cortical pyramidal neuron has a range of recommendation strengths in favour of many different behaviours, each recommendation with a different weight. The basal ganglia determine and implement the behaviours with the largest total recommendation strengths across all the currently active pyramidal neurons. Some of the recommendations are in favour of speech production behaviours. These speech behaviours can also be viewed as descriptions of the currently active pyramidal population.
In the earlier example of the different populations of neurons activated in areas like TE in response to seeing different dogs, all of these populations have a large total recommendation strength in the basal ganglia in favour of saying “dog”.
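This selection mechanism amounts to a weighted vote: sum recommendation strengths over the currently active population and take the behaviour with the largest total. A minimal sketch, with hypothetical neurons, behaviours and strengths:

```python
# Hypothetical neurons, each with weighted recommendations for behaviours.
recommendations = {
    "n1": {"say 'dog'": 0.9, "look closer": 0.3},
    "n2": {"say 'dog'": 0.7, "step back": 0.4},
    "n3": {"say 'animal'": 0.5, "say 'dog'": 0.2},
}

def select_behaviour(active_neurons):
    """Sum recommendation strengths over the active population; pick the largest."""
    totals = {}
    for neuron in active_neurons:
        for behaviour, strength in recommendations[neuron].items():
            totals[behaviour] = totals.get(behaviour, 0.0) + strength
    return max(totals, key=totals.get)

# With all three neurons active, "say 'dog'" totals 0.9 + 0.7 + 0.2 = 1.8 and wins.
print(select_behaviour({"n1", "n2", "n3"}))
```

Note that no individual neuron needs a strong "dog" recommendation; the word is selected because the recommendation is shared across the whole active population.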
Indirect activation
A key factor missing from the description so far is that, in addition to activation by sensory inputs, neurons can be activated by other neurons on the basis of past temporally correlated activity. In other words, if a group of currently inactive neurons has been active in the past at times when a group of currently active neurons was also active, then the active group can indirectly activate the inactive group in the absence of any of their programmed sensory combinations. At a very detailed level, direct activations are driven through the basal dendrites of pyramidal neurons, while indirect activations are driven through their apical dendrites.
Such indirect activations are the neural basis of memory and imagination.
In the case of memory for facts or words, neurons are indirectly activated on the basis of frequent past simultaneous activity. For example, suppose that dogs have often been seen in the past at the same time as the word “dog” was heard. A group of neurons directly activated by the auditory inputs will often be active at the same time as a group of neurons directly activated by the visual inputs. Later, when the word “dog” is heard alone, the neurons directly activated in response to the auditory inputs can indirectly activate neurons activated in the past by visual inputs from seeing a dog, generating a ‘pseudoimage’ of a dog.
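A toy model of this indirect activation can track how often pairs of neurons have been active together, then activate inactive neurons whose accumulated co-activity with the currently active group passes a threshold. The neuron names, counts and threshold below are invented for illustration:

```python
from collections import defaultdict

co_activity = defaultdict(int)  # (neuron, neuron) -> times active together

def observe(active):
    """Record one episode of simultaneous activity."""
    for a in active:
        for b in active:
            if a != b:
                co_activity[(a, b)] += 1

# Hearing "dog" (auditory neurons a1, a2) has often coincided with
# seeing a dog (visual neurons v1, v2, v3).
auditory_dog = {"a1", "a2"}
visual_dog = {"v1", "v2", "v3"}
for _ in range(10):
    observe(auditory_dog | visual_dog)
observe({"a1", "v9"})  # one unrelated co-occurrence

def indirectly_activated(active, threshold=5):
    """Inactive neurons whose past co-activity with the active group is strong enough."""
    scores = defaultdict(int)
    for (a, b), count in co_activity.items():
        if a in active and b not in active:
            scores[b] += count
    return {n for n, s in scores.items() if s >= threshold}

# Hearing "dog" alone now indirectly activates the visual population: a 'pseudoimage'.
print(sorted(indirectly_activated(auditory_dog)))
```

The threshold matters: the single stray co-occurrence with v9 is not enough to recruit it, so only the frequently co-active visual population is indirectly activated.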
Note that neurons close to visual inputs, for example in V1 and V2, are activated in response to all kinds of objects depending on the precise positioning of the image of the object on the retina, and will not have been more frequently activated in response to dogs or any other type of object. Such neurons will therefore not be indirectly activated by auditory inputs, and the ‘pseudoimage’ will not be a visual hallucination.
In the case of memory for events, neurons are indirectly activated on the basis of having changed their conditions at the same time in the past. Almost every experience has some degree of novelty, and the greater the degree of novelty, the greater the changes needed to neuron conditions. The brain records the identities of groups of neurons that were active when there were condition changes, and can use these records to indirectly activate such groups, especially when the number of changes was large. Hence we find it easier to recall novel experiences. Such indirect activations are experienced as a partial re-experience of the earlier event, but as with memory for facts and words, recalls are not experienced as visual hallucinations.
A typical process by which a memory can be constructed starts with some words spoken that ask about some past event. The directly activated auditory neurons indirectly activate visual neurons on the basis of frequent past simultaneous activity. Provided the words were well chosen, the active population of visual neurons contains a significant subset of the neurons active during the targeted past event. These neurons can then indirectly activate a larger subset, and so on, resulting in a population of active neurons containing many of those active in the earlier event.
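This iterative reconstruction resembles what is often called pattern completion: a cue subset recruits its co-active partners, the enlarged population recruits more, and so on until the population stabilises. A minimal sketch under that assumption, using one invented recorded event:

```python
# A single recorded past event: the population of neurons active during it.
past_event = {"e1", "e2", "e3", "e4", "e5", "e6"}

# Each neuron is linked to the neurons that were co-active with it in the event.
links = {n: past_event - {n} for n in past_event}

def recall(cue, link_threshold=2):
    """Grow the active population from a cue until no more neurons are recruited."""
    active = set(cue)
    while True:
        recruits = {
            n for n, partners in links.items()
            if n not in active and len(partners & active) >= link_threshold
        }
        if not recruits:
            return active
        active |= recruits

# A well-chosen cue (two neurons from the event) reconstructs the whole population;
# a single-neuron cue falls below the recruitment threshold and recalls nothing more.
print(sorted(recall({"e1", "e4"})))
print(sorted(recall({"e2"})))
```

The threshold plays the role of "well chosen words" in the paragraph above: too small a starting subset fails to ignite the chain of indirect activations.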
Imaginings are constructed from large numbers of fragments of past experiences, each fragment corresponding with a subset of the neurons active during the past experience. Of course, such a fragment could also be derived from a past imagining.
The stream of consciousness
A uniquely human aspect of consciousness is the frequent experience of a constant stream of mental images with little correlation with current sensory experience. The first process supporting this is the indirect activation mechanism. A stream may start from an active population of neurons driven by current sensory inputs. That population can indirectly activate a secondary population, the secondary population a tertiary population, and so on, each becoming less and less related to the original sensory population. However, such a long chain would tend to become less and less meaningful in sensory or behavioural terms.
A second process that maintains meaning in long chains depends upon the existence of very strong activation links between words and visual images. As the meaning of the current pseudovisual population declines, it can nevertheless be used to drive production of whatever words correspond with it most closely. These words form internal speech but are generally not physically verbalized. However, through the indirect activation mechanism those words can drive a rather more meaningful visual population. Hence the stream of consciousness is experienced as periods of mental vagueness punctuated periodically by rather sharper images.
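One way to picture this alternation is a toy simulation in which a population drifts with each indirect-activation hop, losing overlap with the original image, until naming it snaps the population back to the word's associated image. The populations, drift rates and words below are all invented:

```python
import random

random.seed(1)

N = 100
words = {  # word -> the visual population most strongly linked to it
    "dog": set(range(0, 30)),
    "beach": set(range(30, 60)),
}

def hop(population):
    """One indirect-activation step: keep most of the population, drift a little."""
    kept = set(random.sample(sorted(population), int(len(population) * 0.7)))
    drift = set(random.sample(range(60, N), 6))  # drift toward unrelated neurons
    return kept | drift

def nearest_word(population):
    """The word whose linked image overlaps the population most."""
    return max(words, key=lambda w: len(words[w] & population))

def anchor(population):
    """Inner speech: name the population, then let the word re-drive its image."""
    return set(words[nearest_word(population)])

p = set(words["dog"])
for _ in range(3):
    p = hop(p)                      # the image gradually blurs
blurred = len(words["dog"] & p)     # overlap with the original image has dropped
p = anchor(p)                       # naming it snaps the image back into focus
print(blurred, len(words["dog"] & p))
```

The alternation of blurring hops and sharpening anchors is the sketch's analogue of "periods of mental vagueness punctuated periodically by rather sharper images".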
Although animal brains certainly possess the indirect activation mechanisms, the absence of complex language capabilities means that animal brains are limited in the length of indirect activation chains that can be meaningfully supported. Hence the stream of consciousness continuing for long periods is uniquely human. This capability is critical for supporting behaviour in the extremely complex societies in which humans live. It also makes possible the creation of tools and artifacts when the toolmaker is in an environment very unlike the environment in which the tool or artifact will be used.
The sense of self
It can often feel as if there is a ‘self’ inside us, separate from the self that is currently experiencing and acting. We can even have dialogues with this internal self, such as asking ourselves “I wonder what I should do now?” This self awareness is another aspect of consciousness. We can imagine ourselves doing all sorts of things, including things impossible in the real world.
The physical basis of this internal self is the activation of a population of neurons often active in the past when our attention was focussed on ourselves. As a child, we often heard our personal name in an environment in which we were doing something. As a result, hearing our name occurred at the same time as many of our environment/action combinations. Later, hearing our name could therefore activate a kind of weighted average of these past experiences. This weighted average population is experienced as an internal self.
Over time, this weighted average population will have been active very often, so if a small subgroup becomes active it will tend to activate the larger group. Hence a significant internal self population will be active most of the time. The weighted average past self can recommend behaviours based on all that past experience, and is valuable for determining behaviour in complex social situations.
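The "weighted average" construction can be sketched as simple frequency counting over past episodes in which one's own name was heard. The neurons below are given descriptive labels purely for readability, and the episodes and threshold are invented:

```python
from collections import Counter

# Episodes in which the child heard their own name; each set lists the
# neurons active at the time (labelled descriptively for readability).
episodes = [
    {"playing", "garden", "happy"},
    {"playing", "kitchen", "hungry"},
    {"reading", "garden", "happy"},
]

# Weight each neuron by how often it accompanied hearing the name.
self_weights = Counter()
for episode in episodes:
    self_weights.update(episode)

# Neurons above a threshold form the habitually active 'internal self' population.
internal_self = {n for n, w in self_weights.items() if w >= 2}
print(sorted(internal_self))
```

Neurons that accompanied the name only occasionally fall below the threshold, so the resulting population is a weighted average of the recurring features of past self-focused experience.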
Because creation of this internal self is also dependent on speech, and especially on personal names, it will be much weaker in non-human animals.
Types of consciousness
What we have seen is that there are perhaps four different levels of consciousness. Level one is when similar internal activations are generated in response to similar inputs from the environment. Level two is when there are indirect activations on the basis of temporally correlated past activity, which makes the internal activations more complex. Level three is when long chains of indirect activations with useful cognitive and behavioural meanings can be supported. Level four is when there can be activation of a group of units corresponding with a weighted average of past self experiences and behaviours.
Level one could be labelled phenomenal consciousness and is possessed even by bacteria. It could even be argued that if an inanimate object enters similar states in response to similar environmental conditions, that object is demonstrating a very simple degree of phenomenal consciousness. Level two consciousness is probably present in almost all animals. Level three is limited to human beings, although meaningful shorter chains are certainly possible in animals, especially primates and the more intelligent birds like crows. Level four is almost certainly limited to human beings, although some animals appear to be able to use something very similar to personal names.
Machine consciousness
Any computer generates internal activations in response to inputs from the environment, and the activations in response to similar inputs are similar enough to support appropriate responses (i.e. behaviours). Hence any computing system has bacteria level consciousness.
Current artificial intelligence systems have some capability to activate information on the basis of past temporally correlated activity, but the backpropagation algorithms used mean that the conditions which activate individual units (such as perceptrons) change much more than the conditions of pyramidal neurons in the brain. Hence the indirect activation mechanisms are much less effective for supporting second level consciousness, and levels three and four do not yet exist. However, there is no fundamental reason blocking achievement of these consciousness levels in artificial intelligence, given electronic devices or software that better support pyramidal neuron type change mechanisms.
The experience of consciousness
From a scientific point of view, what we call a conscious human experience is the activation of a large population of neurons which record information conditions that have often occurred in the past. Subjective descriptions of the experience can be generated by implementing the speech behaviours most strongly recommended by that active population. Even when two people describe their experience in similar terms, the activated neurons are in physically different brains, and there are very few exact correspondences between the circumstances in which a neuron in one brain is activated and the circumstances in which any neuron in the other brain is activated. Hence there can be no mapping at the neuron level between conscious experiences in two different brains.
Furthermore, even in the same brain, similar sensory experiences may result in significantly different active neuron populations. If a brain on a number of occasions gets sensory inputs corresponding with seeing the colour red, the population of activated neurons detecting relatively simple 'red' conditions may be fairly similar on each occasion. However, the circumstances in which the colour red is perceived on the different occasions will be different, and the neurons detecting more complex conditions corresponding with, for example, the social situation, will be different. Furthermore, depending on exactly which neurons have been activated, neurons will be indirectly activated on the basis of past temporally correlated activity. These indirectly activated neurons could be fragments of past experiences in which the colour red featured in some way. Depending on the exact starting population, these fragments will also be different in different experiences of the colour red. All the populations on different occasions will have a strong total recommendation strength in favour of saying “red”, but other speech recommendation strengths possessed by the population will be different. Hence descriptions of the conscious experience of red on different occasions could be different.
For any two humans describing their conscious experiences in the same way, there is no meaningful mapping between the active populations of neurons in the two brains. Hence at a detailed level the physical activity is completely different. Any similarity is only that similar speech behaviours have been associated with the completely different active neuron populations in the two brains.
The feeling of a conscious experience is simply a higher level description of the active neuron population. From a scientific point of view, to ask why a conscious experience feels as it does instead of something different is meaningless. It is a question on the same level as "Why does something exist rather than nothing?" which delights some of our philosophical instincts but has no answer in any terms humans have so far found.
Much more accessible to scientific inquiry are questions like the nature of the brain activations that support our experiences of the stream of consciousness and self awareness, the limitations of such activations, and whether even more complex forms of consciousness could be possible.