The Indian Panorama

Not everything we call AI is actually ‘artificial intelligence’. Here’s what you need to know

In August 1955, a group of scientists made a funding request for USD 13,500 to host a summer workshop at Dartmouth College, New Hampshire. The field they proposed to explore was artificial intelligence (AI). While the funding request was humble, the conjecture of the researchers was not: “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.

Since these humble beginnings, movies and media have romanticised AI or cast it as a villain. Yet for most people, AI has remained a point of discussion, not part of their conscious lived experience.

AI has arrived in our lives

Late last month, AI, in the form of ChatGPT, broke free from sci-fi speculation and research labs onto the desktops and phones of the general public. It is what is known as "generative AI" – suddenly, a cleverly worded prompt can produce an essay, put together a recipe and shopping list, or create a poem in the style of Elvis Presley.

While ChatGPT has been the most dramatic entrant in a year of generative AI success, similar systems have shown even wider potential to create new content, with text-to-image prompts used to create vibrant images that have even won art competitions.

AI may not yet have the living consciousness or the theory of mind popular in sci-fi movies and novels, but it is getting closer to at least disrupting what we think artificial intelligence systems can do. Researchers working closely with these systems have even been swayed by the prospect of sentience, as in the case of Google's large language model (LLM) LaMDA. An LLM is a model that has been trained to process and generate natural language.

Generative AI has also produced worries about plagiarism, exploitation of original content used to create models, ethics of information manipulation and abuse of trust, and even “the end of programming”.

At the centre of all this is the question that has been growing in urgency since the Dartmouth summer workshop: how does AI differ from human intelligence?

What does ‘AI’ actually mean?

To qualify as AI, a system must exhibit some level of learning and adapting. For this reason, decision-making systems, automation, and statistics are not AI.

AI is broadly defined in two categories: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). To date, AGI does not exist.

The key challenge for creating a general AI is to adequately model the world, with the entirety of knowledge, in a consistent and useful manner. That's a massive undertaking, to say the least.

Most of what we know as AI today has narrow intelligence – a particular system addressing a particular problem. Unlike human intelligence, such narrow intelligence is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example. AGI, however, would function as humans do. For now, the most notable example of trying to achieve this is the use of neural networks and "deep learning" trained on vast amounts of data.

Neural networks are inspired by the way human brains work. Unlike most machine learning models that run calculations on the training data, neural networks work by feeding each data point one by one through an interconnected network, each time adjusting the parameters.
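The training loop described above – feed one example through the network, then nudge the parameters – can be sketched in a few lines. The following is a minimal, illustrative example (not from the article): a single artificial neuron learning the logical OR function by adjusting its weights and bias after every data point. All names and values here are the author's own choices.

```python
import random
from math import exp

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + exp(-z))

def train(data, epochs=5000, lr=0.5, seed=0):
    """Train one neuron, one data point at a time (per-sample gradient descent)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # two input weights
    b = 0.0                                       # bias term
    for _ in range(epochs):
        for x, target in data:                    # feed each data point one by one
            y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            grad = (y - target) * y * (1 - y)     # gradient of squared error w.r.t. the neuron's input
            w[0] -= lr * grad * x[0]              # adjust parameters after each example
            w[1] -= lr * grad * x[1]
            b -= lr * grad
    return w, b

# Toy task: logical OR
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(OR)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in OR]
```

After training, `preds` matches the OR targets: the repeated small adjustments have shaped the weights so the neuron separates the inputs correctly. Real deep-learning systems scale the same idea to millions of neurons and billions of examples, usually updating on small batches rather than strictly one point at a time.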

Source: PTI
