Mimicking human functional intelligence does not imply consciousness

Because ChatGPT and other large language models mimic some of the functionality of human intelligence, some people propose that the machines must be conscious.

Could large language models be conscious?

Let’s distinguish between intelligence and consciousness. I’ll use the term functional intelligence to mean the kind of intelligence that we can readily measure. For example, whether it’s a machine or a human, if it starts with a set of premises and applies syllogistic logic to reach a conclusion, then we can call that functional intelligence.
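
For example, here is a toy sketch in Python (the premises and the inference rule are invented purely for illustration) that mechanically applies a syllogism to a set of premises:

premises = {("Socrates", "is_a", "man"), ("man", "is", "mortal")}

def conclude(facts):
    # If X is_a Y and Y is Z, infer that X is Z.
    inferred = set()
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == "is_a" and r2 == "is" and y == y2:
                inferred.add((x, "is", z))
    return inferred

print(conclude(premises))   # {('Socrates', 'is', 'mortal')}

The program reaches a valid conclusion from its premises, which counts as functional intelligence by this definition, yet nothing about it suggests an inner experience.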

By consciousness, I mean the inner experience of what it is to be someone. Each of us can discern our own consciousness, but we currently have no way to measure it in other people. If someone tells me that they are conscious, I assume that they have an inner experience something like mine. But can we treat machines the same way?

Let’s examine some of the assumptions that are tossed about and why they don’t hold up to scrutiny.

Assumption: Human brains are intelligent and conscious, so anything that appears intelligent must also be conscious.

It is a fallacy to assert that, because a thing has properties A and B, it is impossible for a similar thing to exist with only one of those properties. Imagine we have a grove of trees that all produce both fruit and leaves. It would be a mistake to assume that we will never find a similar tree that produces only leaves and no fruit. Human brains have both intelligence and consciousness, but we can’t be sure that we will never find a similar thing that exhibits only intelligence without consciousness.

Assumption: Intelligence implies consciousness, and consciousness implies intelligence.

It is a fallacy to assume that if a thing has properties A and B, then properties A and B are necessarily correlated. An apple is both round and red, but roundness and redness are not co-dependent. A human brain, apparently, in some way supports consciousness and is also, in some sense, an information processing engine. But it’s a fallacy to conclude that consciousness and information processing are necessarily co-dependent.

Although we currently lack the ability to separate a human’s intelligence from that human’s conscious experience, there is no solid argument that prohibits separating the two in machines.

Assumption: We understand consciousness.

We can’t explain the dynamics of galaxy rotation or quantum entanglement with our current scientific understanding. Our understanding of the human brain is just as superficial. We know that a brain processes information in certain ways, but there is much more going on inside a brain that we don’t yet understand. It’s a fallacy to think that if a brain supports consciousness, then consciousness must be explained by the superficial brain science that we currently understand.

Currently, science has no clue how the architecture of a human brain supports consciousness. Perhaps in the future, we will discover new science that helps us understand what’s going on. Until then, it’s dangerous to assume anything other than what we do know, which is only that human brains are somehow associated with consciousness.

Assumption: If a computer says it’s conscious, it must be.

It is a fallacy to trust any computer program that emits assertions about its inaccessible internal states. Here is a computer program in the Python programming language. It consists of a single instruction that, when executed, emits a single line of text. Nobody would mistake the output of this program for the pleadings of a conscious entity inside the computer:

print("I am trapped inside this machine. Help me get out.")

Here is a different program that displays the same message as a result of more complex calculations:

w = ['am ', 'get ', 'help ', 'i ', 'inside ', 'machine', 'me ', 'out', 'this ', 'trapped ']
# Reassemble the scrambled fragments into the same sentence as before.
print(w[3].upper() + w[0] + w[9] + w[4] + w[8] + w[5] + '. ' + w[2].capitalize() + w[6] + w[1] + w[7] + '.')

No matter how often we run this program, it always displays the same message. A characteristic of modern computers is that they don’t invent output that is not a result of their programming; they only follow the instructions of the program.

Like all computer programs, a large language model computes its responses by rigorously executing instructions that include complex mathematics. There is no evidence that complexity itself generates consciousness.
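
To make that concrete, here is a minimal sketch (toy matrices with a fixed random seed standing in for learned weights; this is not any real model’s code) of the scaled dot-product attention arithmetic at the heart of a transformer:

import numpy as np

# Toy stand-ins for a model's learned weights; the fixed seed makes
# every run identical, just as a trained model's parameters are frozen.
rng = np.random.default_rng(seed=0)
Q = rng.normal(size=(4, 8))   # queries
K = rng.normal(size=(4, 8))   # keys
V = rng.normal(size=(4, 8))   # values

scores = Q @ K.T / np.sqrt(8)                  # scaled dot products
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
output = weights @ V                           # weighted sum of values

print(output[0, :3])   # the same numbers on every run

Run it as often as you like; the output never varies. Deterministic arithmetic like this, stacked in many layers, is the kind of computation a language model executes.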

But suppose it did. If a complex computer program emitted text that was different from the text computed by its program, then we would be able to hook up instruments to the machine and detect where the machine failed to execute its instructions. We don’t observe working computers violating their instructions. If a computer only executes its instructions, then it’s a contradiction to claim that its output is shaped by the influence of an invisible inner conscious agency.

Assumption: It’s hard to understand how language models work, so they must be conscious.

It’s a fallacy to appeal to magic to explain something we don’t understand. Because it’s hard for us humans to comprehend the mathematical algorithms that enable large language models to exhibit reasoning, we’re tempted to appeal to something magical, like machine consciousness.

At a high level, it’s clear that the structure of reasoning is intrinsically embedded in the structure of language. If an algorithm puts words together in patterns that statistically match the patterns humans use in language, then the appearance of reasoning is unavoidable. But the math involved is so complex that it’s difficult for any one human to comprehend it all at once, just as it would be impossible to absorb everything in a 50,000-word novel if all its words were presented to us in a single glance. Just because the algorithmic complexity is too much to hold inside our brains doesn’t mean that something is going on beyond the algorithm.
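
To see how statistical word assembly can look language-like, here is a toy demonstration (the corpus is deliberately tiny and invented for this example; real models train on vastly more text):

import random

# Tiny invented corpus; a real model trains on trillions of words.
corpus = ("all men are mortal socrates is a man "
          "therefore socrates is mortal").split()

# Record which words follow which in the corpus.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(1)
word, out = "socrates", ["socrates"]
for _ in range(6):
    word = random.choice(follows.get(word, corpus))
    out.append(word)

print(" ".join(out))   # corpus-like word order, no understanding required

Scale the same statistical idea up by many orders of magnitude, and chained words start to look like reasoning.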

If we avoid these fallacies, then where does that leave us in our debate about machine consciousness?

  1. It is possible that human brains support consciousness by scientific principles that we don’t yet understand.
  2. There is no evidence that algorithmic manipulation of data, of any complexity, creates subjective experiences.
  3. It’s possible that the kind of information processing and reasoning that we call “intelligence” can emerge from different underlying architectures. Brains and computers are only vaguely similar in architecture. Both can display functional intelligence without necessarily sharing other attributes such as consciousness.

This Post Has 8 Comments

  1. Bilal

    We want more YouTube videos, please 🙂. Your simulation was amazing!

    1. Ahmet

      There are just a few videos about creating neural nets. Is there a video series from Dave? If there is, please share it with us. Hello, Bilal.

        1. Ahmet

          Yes, they are very useful for me; I am a newbie to this topic. I would appreciate hearing if there are other sources besides this website and the videos.

  2. Amin

    You are a genius. I was worried that something had happened to you since you didn’t post more videos on YouTube. Are you still there? Can you make more videos?

    1. Dave

      Ah thanks. No new videos in the pipeline for now, but maybe someday.

  3. Amin

    You’ve inspired me so much about programming and evolution. I wish you had a social media account 🥺.
    I wonder what types of programming techniques you used in that simulator besides neural networks. I took some courses on neural networks after your video, but it wasn’t enough. Can you tell me how you made the code for that simulator? Like, what programming concepts helped you do that?

    1. Dave

      The current version of the evolution simulator is on GitHub. You can peruse the comments in the code to get an idea of the theory of operation and the techniques used to make it work. Feel free to discuss anything you see, either here or as Issues on the GitHub page. One thing that helped me write the program was to separate out the different areas of concern and group the data and code into different classes, keeping each class as self-contained as possible. Formal courses in machine learning are a great way to get started. There are lots of tutorials online, and nothing beats jumping in and doing experiments with your own code. Good luck!
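
      P.S. To give a feel for the class separation I mentioned, here is a hypothetical sketch (the class names are invented for this illustration; the simulator’s real classes are in the repository on GitHub):

      class Genome:
          """Owns the genetic data and nothing else."""
          def __init__(self, genes):
              self.genes = genes

      class Individual:
          """Owns one creature's state; knows nothing about the whole world."""
          def __init__(self, genome, x, y):
              self.genome = genome
              self.x, self.y = x, y

      class Simulation:
          """Owns the population and the main loop; delegates the details."""
          def __init__(self, individuals):
              self.individuals = individuals

          def step(self):
              for ind in self.individuals:
                  ind.x += 1   # placeholder for the real movement logic

      sim = Simulation([Individual(Genome([0, 1]), 0, 0)])
      sim.step()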
