AI will surpass human brains once we crack the 'neural code,' claims researcher
Humans will build Artificial Intelligence (AI) that surpasses our own capabilities once we crack the "neural code," says an AI technology analyst.
Eitan Michael Azoff, a specialist in AI analysis, argues that humans are set to engineer superior intelligence with greater capacity and speed than our own brains.
What will unlock this leap in capability, he explains, is understanding the "neural code": how the human brain encodes sensory information, and how it moves that information around to perform cognitive tasks such as thinking, learning, problem-solving, internal visualization, and internal dialogue.
In his new book, "Toward Human-Level Artificial Intelligence: How Neuroscience Can Inform the Pursuit of Artificial General Intelligence," Azoff says that one of the critical steps toward building "human-level AI" is emulating consciousness in computers.
Computers can simulate consciousness
There are multiple types of consciousness, and scientists acknowledge that even simpler animals such as bees possess a degree of consciousness. This is mostly consciousness without self-awareness. The nearest humans come to that state is when we are totally focused on a task, being "in the flow."
Azoff believes computer simulation can create a virtual brain that, as a first step, could emulate consciousness without self-awareness.
Consciousness without self-awareness helps animals plan actions, predict possible events and recall relevant incidents from the past, and it could do the same for AI.
Visual thinking could also be the key to unlocking the mystery of what consciousness is. Current AI does not "think" visually; it uses large language models (LLMs). Because visual thinking predated language in humans, Azoff suggests that understanding visual thinking, and then modeling visual processing, will be a crucial building block for human-level AI.
Azoff says, "Once we crack the neural code, we will engineer faster and superior brains with greater capacity, speed and supporting technology that will surpass the human brain.
"We will do that first by modeling visual processing, which will enable us to emulate visual thinking. I speculate that in-the-flow-consciousness will emerge from that. I do not believe that a system needs to be alive to have consciousness."
But Azoff issues a warning too, saying that society must act to control this technology and prevent its misuse: "Until we have more confidence in the machines we build, we should ensure the following two points are always followed.
"First, we must make sure humans have sole control of the off switch. Second, we must build AI systems with behavior safety rules implanted."
More information:
Book: Toward Human-Level Artificial Intelligence
Provided by Taylor & Francis