Sam Altman, chief executive of OpenAI, the maker of ChatGPT, is reportedly seeking up to US$7 trillion, which he believes the world needs to run artificial intelligence (AI) systems. Altman also said recently that the world will need more energy. He envisions a future so saturated with AI that some kind of technological breakthrough, such as nuclear fusion, may be required.
Altman clearly has big plans for his company’s technology, but is the future of AI really that rosy? As a longtime “artificial intelligence” researcher, I have my doubts.
Today’s AI systems – especially generative AI tools like ChatGPT – aren’t really intelligent. Moreover, there is no evidence they can become so without fundamental changes to the way they work.
What is AI?
One definition of AI is a computer system that can “perform tasks normally associated with intelligent beings”.
This definition, like many others, is a bit fuzzy: Should we call spreadsheets AI, because they can perform calculations that would once have been a high-level human task? What about factory robots, which have not only replaced humans but in many cases surpassed us in their ability to perform complex and delicate tasks?
While spreadsheets and robots can indeed perform tasks that were once the domain of humans, they do so by following an algorithm – a process or set of rules for approaching and working through a task.
One thing we can say is that there is no such thing as “an AI” in the sense of a system that can perform a range of intelligent functions like a human. Rather, many different AI technologies can do very different things.
Making decisions vs. generating output
Perhaps the most important distinction is between “discriminative AI” and “generative AI”.
Discriminative AI helps make decisions, such as whether a bank should lend to a small business, or whether a doctor diagnoses a patient with disease X or disease Y. AI technologies of this kind have been around for decades, and bigger and better ones emerge all the time.
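As a rough sketch of what “discriminative” means in practice, the toy Python below trains a classifier to map an applicant’s features to an approve/decline decision. The feature names, data and model choice are invented for illustration; real lending models are far more elaborate.

```python
# A minimal sketch of discriminative AI: a model that maps input
# features to a decision. All data here is hypothetical.
from sklearn.linear_model import LogisticRegression

# Invented training data: [annual_revenue_k, years_trading, prior_defaults]
X = [
    [500, 10, 0],
    [120, 1, 2],
    [900, 15, 0],
    [80, 2, 1],
]
y = [1, 0, 1, 0]  # 1 = loan approved, 0 = declined

model = LogisticRegression(max_iter=1000).fit(X, y)

# The model discriminates between classes; it creates nothing new.
applicant = [[300, 5, 0]]
print("approve" if model.predict(applicant)[0] == 1 else "decline")
```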
Generative AI systems, on the other hand – ChatGPT, Midjourney and their relatives – produce output in response to input: in other words, they create things. In essence, they are exposed to billions of data points (such as sentences) and use these to predict a likely response to a prompt. The response may often be “true”, depending on the source data, but there is no guarantee.
For generative AI, there is no difference between a “hallucination” – a false response invented by the system – and a response a human would judge to be true. This appears to be an inherent flaw of the technology, which uses a kind of neural network called a transformer.
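To make “predict a likely response” concrete, here is a minimal sketch: a toy bigram model over a made-up corpus, not an actual transformer, that continues text purely from word co-occurrence statistics. Nothing in it checks whether its output is true, which is why fluent but false continuations are always possible.

```python
# A toy next-word predictor: count which word follows which in a tiny
# corpus, then generate text by repeatedly sampling a likely successor.
# Real systems use transformers over billions of sentences, but the
# principle -- predict plausible continuations, with no notion of
# truth -- is the same.
import random
from collections import Counter, defaultdict

corpus = (
    "the bank approved the loan . "
    "the bank declined the loan . "
    "the doctor diagnosed the patient ."
).split()

# successors[w] tallies the words observed immediately after w.
successors = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    successors[a][b] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        options = successors[out[-1]]
        if not options:
            break
        # Sample the next word in proportion to how often it followed.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

# May print e.g. "the bank approved the patient ." -- fluent, yet false.
print(generate("the"))
```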
AI, but not intelligent
Another example shows how the “AI” goalposts are constantly moving. In the 1980s, I worked on a computer system designed to provide expert medical advice on laboratory results. It was written up in the US research literature as one of the first four medical “expert systems” in clinical use, and an Australian government report in 1986 called it the most successful expert system developed in Australia.
I was quite proud of it. It was an AI landmark, and it performed a task that normally required highly trained medical professionals. However, the system was not intelligent at all. It was really just a lookup table that matched lab test results to high-level diagnostic and patient-management advice.
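As an illustration of that lookup-table idea, here is a minimal sketch in Python. The test, thresholds and advice text are invented for this example; they are not the original system’s rules.

```python
# A minimal sketch of a rule-based "expert system": lab result in,
# canned advice out. Thresholds and wording are invented for
# illustration only.

def interpret_tsh(tsh_mU_per_L: float) -> str:
    """Map a thyroid-stimulating hormone result to advice, by lookup."""
    # Each branch is simply a rule pairing a result range with fixed text.
    if tsh_mU_per_L < 0.4:
        return "TSH low: consistent with hyperthyroidism; consider free T4."
    if tsh_mU_per_L <= 4.0:
        return "TSH within reference range: no action required."
    return "TSH high: consistent with hypothyroidism; consider repeat test."

print(interpret_tsh(7.2))
```

Nothing in it learns or reasons; it does the work of an expert simply by encoding rules an expert wrote down.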
Technology now exists that makes it much easier to build such systems, so there are thousands of them in use around the world. (This technology, based on research by me and colleagues, is provided by an Australian company called Beamtree.)
In doing work once done by highly trained experts, such systems are certainly “AI”, but they are still not intelligent (although the more complex ones may have many thousands of rules for finding answers).
The transformer networks used in generative AI systems still operate on a set of rules, although there may be millions or billions of them, and they cannot be easily defined in human terms.
What is real intelligence?
If algorithms can produce amazing results like those of ChatGPT without being intelligent, what is real intelligence?
We might say intelligence is insight: the ability to judge whether or not something is a good idea. Think of Archimedes, leaping from his bath and shouting “Eureka!” because he had an insight into the principle of buoyancy.
Generative AI does not have insight. ChatGPT can’t tell you whether its answer to a question is better than Gemini’s. (Gemini, until recently known as Bard, is Google’s competitor to OpenAI’s GPT family of AI tools.)
Or to put it another way: generative AI can create amazing paintings in the style of Monet, but if it had been trained only on Renaissance art, it would never have invented Impressionism.
Generative AI is phenomenal, and people will undoubtedly find vast and highly valuable uses for it. Already, it provides enormously useful tools for transforming and presenting (but not discovering) information, and tools for turning specifications into code are in routine use.
These will get better and better: Google’s just-released Gemini, for example, seems to try to reduce the risk of hallucination by using search and then re-presenting the search results to the user.
However, as we all become more familiar with generative AI systems, we will see more clearly that they are not truly intelligent; there is no insight. It is not magic, but a very clever conjuring trick: an algorithm that is the product of extraordinary human ingenuity.
Source: UNSW