GPT-4: genius or trickster? Researchers reflect on the limits of AI
Can artificial intelligence think like a human, or are its abilities severely limited?
Artificial general intelligence (AGI) is a hot topic that sparks controversy among scientists, business leaders, and the public. Some argue that we are close to creating a system that can solve any problem at a human level or better. Others doubt that this is possible at all.
One of the best-known systems now being discussed as a candidate for AGI is GPT-4 from OpenAI. It is a large language model (LLM): a system trained on billions of texts from the Internet that generates many kinds of output, including poetry and program code. GPT-4 has already shown impressive results on numerous occasions, such as passing the bar exam and reportedly boosting productivity in the UK with an estimated $39 billion in savings.
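For context, obtaining such generated output is typically a single API call. The sketch below is a minimal, hypothetical example using OpenAI's official Python client; the prompt text and the "gpt-4" model name are illustrative assumptions, not details taken from the article.

    # Minimal sketch: requesting generated text from GPT-4 via the
    # OpenAI Python client. The prompt here is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Write a four-line poem about machine reasoning."}],
    )

    # The client returns generated text; whether the model "understands" it
    # is exactly the question the researchers below dispute.
    print(response.choices[0].message.content)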
Microsoft, which invested $10 billion in OpenAI in January, ran its own series of experiments with GPT-4 and concluded that the model can manipulate complex concepts, a key aspect of reasoning.
However, not everyone agrees with such claims. Experts in cognitive science and neuroscience argue that large language models can neither think nor understand language. They also criticize the use of the Turing test, a method for assessing a machine's ability to imitate a human in dialogue, as a criterion for machine intelligence.
Dr. Andrea Martin of the Max Planck Institute says the concept of AGI is itself debatable. She also argues that using the Turing test to determine whether a machine can think like a human misreads what the test was designed to measure.
Professor Tali Sharot of University College London believes that large language models can learn and acquire knowledge and skills, but that this does not make them human-like. And Professor Caswell Barry, also of University College London, argues that OpenAI has "fed" most of the readily available digital text on the Internet to its model without endowing it with the ability to manipulate or generate abstract concepts.
The renowned software engineer Grady Booch leans toward the view that AGI will not arrive within our lifetimes, because current systems lack "the right architecture for the semantics of causality, abductive reasoning, common sense, theory of mind, and subjective experience."
Large language models also face ethical and legal-liability issues. OpenAI was recently hit with a class-action lawsuit for scraping copyrighted data. In addition, GPT-4's responses have repeatedly been shown to contain racial and social biases.
While modern large language models do exhibit a high level of text processing and generation, they remain far from the full understanding of context, language, and interpersonal interaction that characterizes human intelligence. Moreover, the human brain is capable of creative thinking, abstract and emotional understanding, intuition, and common sense, all of which remain a challenge for artificial intelligence developers.
Thus, while large language models can indeed be useful, their proponents tend to exaggerate both their value and their capabilities. It is unlikely that artificial intelligence will come close to matching the human brain's full range of abilities in the coming years.
Source: www.securitylab.ru