
AI will surpass human brain's capabilities: scientist
10 Sep, 2024 / 07:10 am / OMNES Media LLC

Source: http://www.webdesk.com


(Web Desk) - Artificial intelligence is set to surpass the human brain's capabilities - but such advancements come at a cost.

An AI technology analyst says we are just steps away from cracking the "neural code" - a breakthrough that would allow machines to consciously learn like humans.

Eitan Michael Azoff believes we are on the path to creating superior intelligence that boasts greater capacity and speed.

The AI specialist makes the case in his new book, Towards Human-Level Artificial Intelligence: How Neuroscience can Inform the Pursuit of Artificial General Intelligence.

According to Azoff, one of the key steps towards unlocking "human-level AI" is understanding the "neural code."

The term describes the way our brains encode sensory information and perform cognitive tasks like thinking and problem solving.

Azoff says another critical step towards building "human-level AI" is emulating consciousness in computers - most likely a form without self-awareness, similar to what humans experience when fully focused on a task.

Consciousness without self-awareness helps animals plan actions and recall memories - and it could do the same for AI.

Current AI does not "think" visually. Large language models like GPT-4 can process and generate human-like text, but that is not the same as thinking in images.

Because visual thinking emerged before human language, Azoff believes that understanding this process, and then modeling visual processing, will be a crucial step.

“Once we crack the neural code we will engineer faster and superior brains with greater capacity, speed and supporting technology that will surpass the human brain," Azoff explained.

“We will do that first by modeling visual processing, which will enable us to emulate visual thinking."
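Azoff's book does not spell out what "modeling visual processing" would look like in practice, but a familiar building block from computer vision gives a flavour: the convolutional filter, loosely inspired by neurons in the visual cortex. The Python sketch below is a hypothetical toy example, not Azoff's method; it slides a small edge-detecting filter over a synthetic image, one of the earliest operations in both biological and artificial visual processing.

```python
# Toy illustration of low-level visual processing: an edge-detecting
# convolutional filter, loosely modeled on simple cells in the visual cortex.
# This is a hypothetical sketch, not Azoff's proposal.

import numpy as np

def correlate2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode cross-correlation (the operation CNN layers compute)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Sobel filter: responds strongly to vertical edges, not at all to flat regions.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# Synthetic 8x8 "image": dark left half, bright right half (a vertical edge).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

response = correlate2d(image, sobel_x)
print(response)  # large values mark the columns where brightness changes
```

Modern vision systems stack thousands of learned filters like this one, but the principle - detecting simple local features before building up to complex ones - is the same.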

However, the analyst doesn't believe a system needs to be alive to have consciousness.

His reasoning, however, runs up against the way artificial intelligence currently works.

Current machine learning models cannot exist without some degree of human involvement, as they must constantly be fed fresh and accurate data.

Self-learning AI, which consumes its own output or that of other models, consistently declines in the quality of its responses.

This "inbreeding" is encountered more and more as AI-generated content floods the internet and finds its way back into datasets.

Beyond these pitfalls, Azoff readily acknowledges potential misuse of the technology.

“Until we have more confidence in the machines we build we should ensure the following two points are always followed," Azoff said.

“First, we must make sure humans have sole control of the off switch. Second, we must build AI systems with behavior safety rules implanted.”

But one question remains: is this a challenge we should take on?
