AI and AGI: Deciphering the Difference – What Sets Them Apart?


Artificial intelligence (AI) is a broad term for machines, usually computers, that can simulate human intelligence. Its applications range from expert systems and natural language processing to speech recognition and object recognition in images. AI systems take many forms, from simple chatbots and calculators to the systems used in autonomous vehicles.

Artificial general intelligence (AGI) is a specific subset of artificial intelligence that refers to human-level intelligence. AGI systems are not focused on a single area but can understand and perform any intellectual task. Such systems are also known as strong AI, full AI, or human-level AI.

GPT-4 shows signs of AGI

Artificial intelligence researchers at Microsoft who have had the opportunity to explore GPT-4 say it shows the first “sparks” of artificial general intelligence. However, despite the huge leap it made over the previous GPT-3.5 model, GPT-4 remains far from human-level intelligence.

“We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction.” – Microsoft’s researchers say.

AI vs AGI: the difference

The difference between artificial intelligence and artificial general intelligence is not just a matter of raw power; they are two fundamentally different classes of systems. Narrow AI can be trained to do one job exceptionally well, for example playing chess or Go. It can be trained to recognize objects in a picture or to diagnose diseases, but it cannot give an adequate answer to anything outside its narrow purpose.
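The narrowness described above can be illustrated with a minimal sketch (using scikit-learn's bundled digits dataset, purely as a stand-in for any single-purpose model): a classifier trained on one task performs well within that task, but anything outside it is not merely difficult, it is not even expressible in the model's input space.

```python
# A minimal sketch of "narrow" AI: a model trained for exactly one task.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Train a simple classifier that maps 8x8 pixel grids to digits 0-9.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# Within its narrow task, performance is strong.
print("digit accuracy:", model.score(X_test, y_test))

# But the model's entire world is 64 pixel intensities in, one digit out.
# A question about chess, language, or medicine cannot even be posed to it,
# let alone answered - which is the gap AGI is meant to close.
```

This is of course a toy example, but the structural point holds for far larger narrow systems: competence inside the training distribution does not imply any capability outside it.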

Unlike narrow AI, which usually follows defined rules to produce solutions, general artificial intelligence should be able to analyze data and, based on that data and its accumulated experience, adapt itself to new problems and offer an appropriate solution. Accuracy matters here too, but the value of AGI lies in its ability to adapt to unfamiliar problems and to bring the creativity needed to solve them adequately.

Artificial intelligence experts cannot agree on whether we should create general artificial intelligence at all. Some believe AGI is not only achievable but desirable. Others believe that if AGI were achieved, “everything on Earth would be destroyed.”

Not everyone is excited about better AI

Eliezer Yudkowsky, a researcher who works in the field of machine learning, has warned once again about the dangers of smarter-than-human artificial intelligence. Yudkowsky goes further than the signatories of the open letter calling for a six-month moratorium on the development of artificial intelligence, which was supported by Elon Musk and Steve Wozniak. In his view, its demands do not go far enough.

“A rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know. If that’s our state of ignorance for GPT-4, and GPT-5 is the same size of giant capability step as from GPT-3 to GPT-4, I think we’ll no longer be able to justifiably say “probably not self-aware” if we let people make GPT-5s. It’ll just be “I don’t know; nobody knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.”

The AGI discussion is certainly not over. Systems will keep getting more sophisticated, more powerful, and faster. GPT-4 is probably far from being a true AGI system, but it is certainly a step in that direction. And every step is a step forward, writes Futurism.
