ARTIFICIAL intelligence is set to surpass the human brain’s capabilities – but such advancements come at a cost.
An AI technology analyst says we’re just steps away from cracking the “neural code” that would allow machines to consciously learn like humans.
Eitan Michael Azoff believes we are on the path to creating superior intelligence that boasts greater capacity and speed.
The AI specialist makes the case in his new book, Towards Human-Level Artificial Intelligence: How Neuroscience can Inform the Pursuit of Artificial General Intelligence.
According to Azoff, one of the key steps towards unlocking “human-level AI” is understanding the “neural code.”
The term describes the way our brains encode sensory information and perform cognitive tasks like thinking and problem solving.
Another critical step towards building “human-level AI,” Azoff says, is emulating consciousness in computers – likely without self-awareness, similar to what humans experience when focusing intently on a task.
Consciousness without self-awareness helps animals plan actions and recall memories – and it could do the same for AI.
Current AI does not “think” visually – large language models like GPT-4 can process and generate human-like text, but they do not reason in images.
Because visual thinking evolved before human language, Azoff believes that understanding and then modeling visual processing will be a crucial step.
“Once we crack the neural code we will engineer faster and superior brains with greater capacity, speed and supporting technology that will surpass the human brain,” Azoff explained.
“We will do that first by modeling visual processing, which will enable us to emulate visual thinking.”
However, the analyst doesn’t believe a system needs to be alive to have consciousness.
His vision, however, sits uneasily with the way artificial intelligence works today.
Current machine learning models cannot exist without some degree of human involvement, as they must constantly be fed fresh and accurate data.
Self-learning AI that consumes its own output, or that of other models, steadily declines in the quality of its responses – a feedback loop researchers have dubbed “model collapse.”
This “inbreeding” is becoming increasingly common as AI-generated content floods the internet and finds its way back into training datasets.
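To see why that feedback loop degrades quality, here is a deliberately simplified sketch – an illustration for this article, not taken from Azoff’s book or any named study. The “model” is just a Gaussian fitted to one-dimensional data, and each generation is retrained only on samples drawn from the previous fit; because every refit sees a small, finite batch of synthetic output, estimation error compounds and the fitted distribution typically drifts away from the original data.

```python
import numpy as np

# Toy illustration of the "inbreeding" loop: a model is repeatedly refit on
# data sampled from the previous generation of itself. The "model" here is
# just a Gaussian summarised by its mean and standard deviation.
rng = np.random.default_rng(42)

real_data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # "human" data, std ~ 1.0
data = real_data

for generation in range(61):
    mu, sigma = data.mean(), data.std()
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next generation trains only on a small batch of synthetic output,
    # mimicking AI-generated content flowing back into training datasets.
    data = rng.normal(loc=mu, scale=sigma, size=20)

# Over many generations the fitted spread typically shrinks and the mean
# wanders, so later "models" capture less and less of the original data's
# variety: a toy analogue of the quality decline described above.
```

Real language models are vastly more complex, but the underlying effect is the same: rare patterns in the original data are the first to disappear when each generation learns only from the last one’s output.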
Beyond these pitfalls, Azoff readily acknowledges the technology’s potential for misuse.
“Until we have more confidence in the machines we build we should ensure the following two points are always followed,” Azoff said.
“First, we must make sure humans have sole control of the off switch. Second, we must build AI systems with behavior safety rules implanted.”
But one question remains: is this a challenge we should take on?
Artificial intelligence is already proving unpopular with consumers, as major firms like Google and Meta continue to infuse existing services with AI functionality.
Meta, for one, has admitted to feeding its AI information scraped from the public profiles of millions of Facebook and Instagram users.
Users consent by default when they sign up for the services, and a well-hidden opt-out form applies only to those who can make a compelling legal argument for data protection.
And tech firms don’t plan on slowing down anytime soon. In the face of backlash, Microsoft has resumed the rollout of its AI-powered screen-capture tool, Recall.
The release had been on hold indefinitely until last week, when Microsoft announced that the tool would enter beta testing in October.
The program takes snapshots of the user’s screen every few seconds, creating a library of searchable content that AI then parses.
And concerns only continue to spring up. Amid an ongoing lawsuit, OpenAI has defended its harvesting of copyrighted material, pleading that its models cannot function without it.
These risks are only expected to intensify as AI advances – and as the technology lurches towards superhuman intelligence, new problems will undeniably arise.