Kardashev Scale Wiki
Artificial consciousness

Artificial consciousness (AC) is consciousness attained through artificial intelligence and cognitive robotics. The idea raises long-standing questions about mind, consciousness, and mental states. In fiction, androids and some cyborgs possess AC, and the positronic brain is a popular artificial brain for providing AC to robots and androids.

Aspects of consciousness

There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious:

Awareness

Awareness is difficult to define precisely, but at least three types are commonly distinguished. In agency awareness, you may be aware that you performed a certain action yesterday, without being conscious of it now. In goal awareness, you may be aware that you must search for a lost object, without being conscious of it now. In sensorimotor awareness, you may be aware that your hand is resting on an object, without being conscious of it now.
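The three types above can be sketched as a simple data structure: an agent that records items it is "aware of" in each sense while none of them is its current conscious focus. All names here are illustrative assumptions, not part of any real AC system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: separate stores for the three awareness types
# described above. An item can sit in a store (the agent is "aware" of it)
# without being the agent's current conscious focus.

@dataclass
class AwarenessStore:
    agency: list[str] = field(default_factory=list)        # past actions
    goals: list[str] = field(default_factory=list)         # pending goals
    sensorimotor: list[str] = field(default_factory=list)  # body/world contact

agent = AwarenessStore()
agent.agency.append("performed a certain action yesterday")
agent.goals.append("search for a lost object")
agent.sensorimotor.append("hand resting on an object")

# Aware of all three items, yet conscious of none of them right now.
current_focus = None
print(agent.goals)  # ['search for a lost object']
```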

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval.

Learning

Conscious experience is needed to represent and adapt to novel and significant events.

Anticipation

The ability to predict (or anticipate) foreseeable events.

Subjective experience

Subjective experience (qualia) is the hard problem of consciousness, and it is especially unclear whether a computing system could have it. Some hold that it may be a metaphysical phenomenon beyond physical explanation.

According to Geoffrey Hinton, an AI can be tricked into reporting a subjective experience. Place an object in front of it and it points at the object correctly. Then, while it looks away, put a prism in front of its lens; when it points again, it points somewhere the object is not. Once told about the prism, it can report that the object is really in front of it but that it had the subjective experience of the object being elsewhere.
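The prism scenario can be illustrated with a toy simulation (not Hinton's actual setup; the offset model and all names are assumptions for illustration): the sensor reading is shifted by the prism, so where the system "experiences" the object differs from where the object is.

```python
# Toy model: a prism bends incoming light by a fixed angular offset,
# so the camera's reported direction differs from the true direction.

def perceived_position(true_position: float, prism_offset: float) -> float:
    """Direction (in degrees) the bent light makes the system point to."""
    return true_position + prism_offset

true_pos = 0.0                                    # object straight ahead
no_prism = perceived_position(true_pos, 0.0)      # points correctly
with_prism = perceived_position(true_pos, 15.0)   # points 15 degrees off

# The mismatch between true_pos and with_prism is what the system would
# describe as its "subjective experience" of the object's location.
print(no_prism, with_prism)  # 0.0 15.0
```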

Chinese Room Argument

The Chinese Room Argument is a thought experiment by philosopher John Searle to challenge the idea that computers can truly "understand" things like humans do.

The Scenario:

  1. Imagine a person inside a room who doesn’t understand Chinese.
  2. In the room, there’s a rulebook (like a program) that tells them exactly how to respond to Chinese symbols they receive.
  3. When someone outside slips a question in Chinese under the door, the person uses the rulebook to match the symbols and writes down an appropriate response, which they slide back out.
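The rulebook in the scenario can be sketched as pure symbol-to-symbol lookup: correct-looking answers come out, yet the meaning of the symbols plays no role. The question and answer strings below are illustrative placeholders.

```python
# A minimal sketch of the rulebook: the "person in the room" only matches
# incoming symbols against stored patterns and copies out the paired reply.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",  # "Do you speak Chinese?" -> "Yes, a little."
}

def room_reply(symbols: str) -> str:
    # Shape-matching only: the meaning of the symbols never enters
    # into producing the answer.
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room_reply("你好吗？"))  # a correct-looking answer, zero comprehension
```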

Key Point:

  • From the outside, it looks like the person in the room understands Chinese because the answers are correct.
  • However, the person inside doesn’t actually understand Chinese—they’re just following rules.

Searle’s Argument:

  • Similarly, a computer processes information using rules (its program), but it doesn’t truly understand what it’s doing. It just manipulates symbols.
  • This suggests that computers might simulate understanding, but they don’t have real consciousness or comprehension.

It’s a way of questioning whether artificial intelligence can ever truly think like humans or if it just "acts like it does."

Characters with artificial consciousness
