This is a time when people are trying to figure out how to conceive, create, and build a machine capable of being conscious, and amid all the theoretical disagreement, the one fact everybody agrees on, almost in unison, is that «we do not fully understand consciousness». Still, this is no excuse to renounce the wonderful dream of somehow giving birth to a conscious artificial intelligence. But how is intelligence related to consciousness? Does an artifact need to be superintelligent in order to be conscious? Is consciousness a state that emerges once intelligence crosses a threshold, just as a neuron fires only when its membrane potential surpasses a certain value? Can we conceive of consciousness without intelligence? Can we unbind consciousness from the mechanical processes our brain carries out in order to solve a problem?
Philosophy of mind addresses this logical problem by dividing conscious experience into two categories: phenomenal consciousness and psychological consciousness. The second concerns the content and processing labor that occurs inside the mind: thoughts, feelings, ideas, intuitions, knowledge, self-consciousness, reportability, wakefulness, voluntary control, attention, and so on. Phenomenal consciousness, in contrast, is about the experience itself. When you see the color red, you know what it is like to see red, and you can be sure it differs from seeing green; it is that sense or state within you at the moment you experience the world. For this reason it resists a functionalist analysis of its role inside the machinery of the mind, because we can abstract the experience away from the information processing and the output behavior. In other words, nothing about the brain's abstraction of the information it receives from the environment requires that the process be experienced. If evolution favors what is functional and economical, the group of cells that makes us up could process information just as efficiently as it does today without giving rise to any conscious experience of the world whatsoever.
Processing information does not logically require consciousness; being smart is no reason to experience the green of the morning trees more or less vividly. If you consider yourself smarter than your undergraduate friend, differences in knowledge, memory capacity, processing speed, and so on will not determine which of you better experiences the outdoors; both of you will have almost the same experience of the color red. The difference lies in the background of each of you: thoughts, feelings, concepts, information. But the experience of experiencing red will be almost the same. To keep this logical thread clean, we assume our imaginary friends have no physical damage.
Currently, people are trying to build superintelligent computers while expecting a threshold, or singularity, in their software. The popular opinion in computer science circles is that if we build a machine that is sufficiently intelligent, there will come a moment when the machine realizes it is conscious, as if Deep Blue could grow tired of playing chess and decide on its own to do something else. This claim raises more questions than the theoretical flaws it contains. First, we would have to accept that integrated information theory (IIT) is right, which implies that it is the level of organization of information, not how many bits it contains, that gives rise to conscious experience. Yet according to IIT, computers cannot be conscious, because the way they integrate information lacks the required feedback structure. And that is not even the main problem, since some of you may find the theory compelling. The real problem is that computers already outperform us at many tasks; by this logic, they should be conscious by now, or we need only wait for the singularity to kick in. Perhaps if we wait long enough, our cell phones will become conscious of how many times we check our social media and will turn themselves off for good.
This is a classic echo of the sorites paradox: a heap of sand is still a heap if we remove just one grain, yet if we keep removing grains, at some point there is no heap left, because almost every grain is gone. So at which point did the heap stop being a heap? Likewise, how intelligent does a computer need to be to become conscious of itself?
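The paradox can be made concrete with a toy sketch (the numbers here are hypothetical, chosen only for illustration): any predicate we write for "is a heap" must commit to some cutoff, and that is exactly the arbitrary move the paradox says we cannot justify.

```python
# Toy sketch of the sorites paradox. The cutoff below is an
# arbitrary, hypothetical choice -- which is the whole point:
# no particular grain count marks where "heap" ends.

def is_heap(grains: int) -> bool:
    # Why 100? Why not 99, or 2? Any threshold is arbitrary.
    return grains >= 100

grains = 10_000
while is_heap(grains):
    grains -= 1  # remove one grain at a time

# The "heap" vanished at exactly 99 grains -- but only because we said so.
print(grains)  # 99
```

The same arbitrariness haunts any proposed intelligence threshold for consciousness: whatever cutoff a theory picks, one can always ask why one unit less would not suffice.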
The mysterious path to replicating consciousness does not run through building a superintelligent computer able to process huge amounts of information, because we can separate consciousness from intelligence both logically and phenomenally. We cannot deny that children with a genetic disorder such as Down syndrome have conscious experience, whatever their IQ scores. The capacity for experience has a different value from the sheer amount of information one can process. We can build systems that "learn," such as AlphaGo, which has already beaten humans at that traditional game, but the substance of conscious experience is quite different from, and far more poorly understood than, the way our brain and mind process and integrate information. This is why David Chalmers calls it the hard problem of consciousness.
Can superintelligent computers cross the singularity threshold and wake up as conscious entities? Time will tell, and by time I mean a huge effort by a multidisciplinary community of researchers, neuroscientists, and philosophers working together to solve this mystery, one of the greatest of them all.
MORE TO EXPLORE
Integrated Information Theory of Consciousness: An Updated Account. G. Tononi in Archives Italiennes de Biologie, Vol. 150, No. 4, pages 293–329; December 2012.
The Conscious Mind: In Search of a Fundamental Theory. David J. Chalmers. Oxford University Press, 1996.