Do we incorrectly refer to our attempt to emulate some human brain functions as AI?
Full article is at: https://medium.com/next-top-writers/is-ai-really-ai-11f81a8bab41
A personal touch to this essay
Recently, I wanted to find a picture of an ascetic. I could not find anything that matched the image I had in mind, so I decided to try AI to create something for me. Its creation came with a warning that the generated image might not be unique to my request; the same image might be created for someone else making a similar request. Separately, not long ago, I had a back-and-forth with a Medium user about creating fiction using AI. My point was that AI would use training data drawn from someone else's original work in its creation. These two personal experiences prompted me to write this article.
Getting some definitions in order
As an overture to the opinions expressed in this article, certain terms need clarifying. Let us start from the beginning. What is AI? The definition I use here is the one IBM uses (IBM, 2023): AI is a field that combines computer science and robust datasets to enable problem-solving. Under this umbrella sit machine learning and deep learning, which use computer algorithms, trained on data, to build expert systems that predict or classify. Two facets of AI, weak AI and strong AI, are also often discussed. An example of narrow or weak AI is the biometric recognition system you walk through when you enter the immigration area of airports in many developed countries: computer algorithms and databases work together to identify you as the person on your passport. Another example is Deep Blue, which beat the world chess champion Garry Kasparov. Weak AI can also be an expert system that answers our questions by searching a knowledge base defined in its computer code. Strong AI is still in the works and will be available, if you believe the pundits, in the near future. Perhaps, when strong AI is in place, a computer will pass the Turing Test originally devised by the British mathematician Alan Turing.
Now, what about machine learning and deep learning? Conceptually, they are again computer algorithms, trained on many examples, that do predictive work on new specimens not in the training set. Many practitioners use specialised jargon that baffles ordinary people into subjugation and makes them accept AI's grandeur with awe. Machine learning and its upgraded version, deep learning, are basically computer programs written by humans like us to do certain tasks using the data that are fed in. In parallel with the rise of disciplines like data science, new jargon like 'machine learning' started to crop up under the ever-increasing influence of computer science. But at the core, what do these terms represent? Let us discuss some background before taking up this subject again in a later section.
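Before moving on, a minimal sketch may make the claim concrete. The following plain-Python program is "machine learning" at its simplest: it memorises labelled examples and labels a new specimen by analogy with its nearest neighbour. The data, labels, and one-nearest-neighbour rule are illustrative choices of mine, not taken from any particular AI system.

```python
# A one-nearest-neighbour classifier: a computer program that "learns"
# only in the sense of storing examples and comparing new ones to them.

def predict(training_data, labels, new_specimen):
    """Label a new specimen with the label of its nearest training example."""
    def distance(a, b):
        # Squared Euclidean distance between two measurement tuples.
        return sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = min(range(len(training_data)),
                  key=lambda i: distance(training_data[i], new_specimen))
    return labels[nearest]

# Two measurements per specimen; the labels are the subgroups we learned from.
training_data = [(1.0, 1.2), (0.9, 1.0), (3.1, 3.0), (3.3, 2.9)]
labels = ["A", "A", "B", "B"]
print(predict(training_data, labels, (3.0, 3.2)))  # prints "B"
```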
What is Statistics?
What the current definitions of AI do not generally mention is the role played by what is called 'Statistics'. The term was first used by German writers in the 18th century to study the political arrangements of the states of the known world (Davies, 1995). Fisher (1922) defines statistics as follows:
“… briefly, and in its most concrete form, the object of statistical method is the reduction of data. A quantity of data, which usually by its mere bulk is incapable of entering the mind, is to be replaced by relatively few quantities … which … shall contain as much as possible, ideally the whole, of the relevant information contained in the original data”.
We all know the importance of probability in statistical thinking. Nor can we forget Legendre's and Gauss's work on least-squares methodology, which helped develop modern regression analysis, or Tryon's (1939) attempt at cluster analysis, a method of unsupervised learning that tried to take a set of data and separate it into subgroups "where the elements of each subgroup are more similar to each other than they are to elements not in the subgroup".
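The least-squares idea is easy to demonstrate. The sketch below, assuming NumPy is available, fits a straight line to a handful of invented data points by minimising the sum of squared residuals, the same principle Legendre and Gauss formalised.

```python
# A minimal least-squares fit: find the line that minimises the sum of
# squared residuals. The data points are invented for illustration.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])  # roughly y = 2x + 1 plus noise

# Design matrix [x, 1] so that y ≈ slope * x + intercept.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"fitted line: y = {slope:.2f} x + {intercept:.2f}")
```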
Role of Statistics in AI
Furthermore, it was the "Laplace-Gaussian" curve that became the normal curve, thanks to Francis Galton. It is not incorrect to say that the modern AI we are most boastful about was, in a certain sense, started by R.A. Fisher (1936) with what is now called discriminant analysis, featuring the above-said normal curve. It was about separating three species of iris plant, Iris setosa, virginica and versicolor, using four measurements on their flowers. He expressed the view that, given their genetic closeness, a certain diagnosis of the two species versicolor and virginica could not be made solely on the above four measurements of 'a single flower taken on a plant growing wild'. However, he might have imagined a situation where a sufficiently representative set of measurements serves as a training dataset with two subgroups, i.e., the two species, from which to develop a scoring algorithm that assigns future specimens to either species based on the same four measurements. In short, he saw the possibility of learning about the two species from the training dataset at hand and using those learnings to identify which species a new set of measurements was likely to have come from. Thus, it is about mathematical functions maximised for the probability of correctly assigning items to a subgroup based on the existing data. This can be thought of as the heart of AI. Whether it is neural networks or deep learning, the core of many AI algorithms depends on various invocations of the above simple idea. It was not surprising that the early papers on the power of neural networks that I remember reading also illustrated the methodology with the same dataset Fisher used. Even in the case of much-hyped large language models like Generative Pretrained Transformers (GPT), what we have is some form of neural network supported by massive computing power.
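For readers who want to see Fisher's idea in action, here is a minimal sketch assuming the scikit-learn library is installed; its bundled copy of Fisher's iris data and its linear discriminant analysis implementation make the experiment a few lines long. The "new flower" measurements are invented for illustration.

```python
# Fisher's 1936 setting: learn from measured specimens of versicolor and
# virginica, then score a new flower using the same four measurements.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

iris = load_iris()
# Keep only versicolor and virginica, the pair Fisher found hard to separate.
mask = iris.target != 0
X, y = iris.data[mask], iris.target[mask]

lda = LinearDiscriminantAnalysis().fit(X, y)  # learn from the training set

# Four measurements (sepal length/width, petal length/width, in cm)
# of a hypothetical new flower:
new_flower = [[6.0, 3.0, 4.8, 1.8]]
print(iris.target_names[lda.predict(new_flower)[0]])  # predicted species
```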
Is ‘Artificial’ in AI really ‘Artificial’?
The irony is that machine learning or deep learning does not seem to have any artificiality. These techniques are based on mathematical functions (or models) built using assumptions about the associated probabilities and fine-tuned with data and self-learning using the power of modern computer technology. Given the manner in which inductive logic works in mathematics, Poincaré, in his book "Science and Hypothesis", claimed:
“Mathematics may, therefore, like the other sciences, proceed from the particular to the general.”
This is also true of AI. As we have seen from Fisher's method and the ensuing discussion, we first try to generalise some natural process. Then we use the generalised form to judge a novel example, outside the cases used for the generalisation, as an acceptable fit or a non-fit. This is true of predictive models as well as unsupervised learning methodologies grounded in the physical world, and hence of their more data- and computer-intensive forms, artificial intelligence. All the data AI approaches use are natural, and the outcomes are at least supposed to be about the natural world. Thus, the word 'artificial' in AI does not represent anything artificial, but rather some form of probabilistic statements about the natural world. Even the fake ones are not supposed to be fake in the universe of possibilities.
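The generalise-then-test pattern just described can itself be sketched in a few lines. In the following Python snippet, the data, the straight-line model, and the three-standard-deviation tolerance are all illustrative assumptions of mine, not a standard recipe.

```python
# Generalise from observed data, then judge a novel example as a fit or a
# non-fit depending on how far it falls from the generalisation.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0 + np.array([0.1, -0.2, 0.15, -0.1, 0.05, -0.15])

coeffs = np.polyfit(x, y, deg=1)                 # the generalisation
residual_sd = np.std(y - np.polyval(coeffs, x))  # spread around it

def fits(x_new, y_new, tolerance=3.0):
    """Accept the novel example if it lies within tolerance * sd of the line."""
    return abs(y_new - np.polyval(coeffs, x_new)) <= tolerance * residual_sd

print(fits(6.0, 13.1))  # close to the fitted line -> True
print(fits(6.0, 20.0))  # far from the fitted line -> False
```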
Another look at the artificiality
Let us see what the word "artificial" means to us. According to the Oxford Learner's Dictionaries (https://www.oxfordlearnersdictionaries.com), the adjective "artificial" means
“things that are not real, or not naturally produced or grown” or “made or produced to copy something natural; not real”. Two of the examples provided are an artificial limb and an artificial flower. This tells us that artificial things are made to imitate something in the natural world. As we know, intelligence is not an object in the natural world. It is not tangible; it is simply a notion, an idea like courage. If we call something a limb, no doubt that limb can be felt, captured in a photograph, or cast in three dimensions. In philosophical terms, we can categorise it as a subject that can be described qualitatively and quantitatively in terms of colour or size. But we are unable to do the same with intelligence, a predicate that may depend on the time in a subject's life and the environment relevant to that stage of life. We can say we measure it with IQ tests, but even these tests are culture-specific. As Jared Diamond explained in "Yali's question", the prologue to his book "Guns, Germs, and Steel: The Fates of Human Societies", tests of intelligence are dependent on childhood environment and learned knowledge.
Signorelli (2018) seems to consider that, as current progress in AI suggests, machines like computers, i.e., future super machines, may one day even overtake human intelligence. Pointing out the shortcomings of the Turing Test, Signorelli (2018) also believes that testing intelligence should include moral issues, which bring in cultural, personal, and emotional aspects. An example of a moral dilemma: whom would one admit to an emergency boat after a shipwreck, a healthy young dog or a sick old man? Unfortunately, the current form of AI cannot even judge, in terms of self-awareness, the morality of a deepfake. This article is not an attempt to have a debate, but to look at the relevant issues in a more human way.
What about intelligence?
We do not know what intelligence really is. Let us go back to the Oxford Learner's Dictionaries, which describe intelligence as “the ability to learn, understand and think in a logical way about things; the ability to do this well”. Do we know whether AI can understand or think? We can make it learn things using algorithms, supervised or unsupervised learning methodologies, searches, etc. However, can we establish that computers understand or think? Understanding is about knowing or realising the meaning of what somebody says, the words and the language. But is this true of computers? If a child, hours past the normal lunchtime, says he or she is hungry, the mother knows what hunger feels like. Does the computer empathise with the child's hunger? It can evaluate the meaning of the word 'hunger' and the time elapsed since lunchtime, but it cannot feel what a biological system like a human experiences. Computers, even with the best algorithms, cannot feel human drives the way we do. We are not just a brain, the organ many people liken to a computer. We are biological systems made of brains and other body parts working in unison, the associated biomes consisting of many microbes, and the whole natural world. Our understanding is thus of a different kind from a computer's processing of its own algorithms and data structures.
What about our thinking?
People have thought about the self-awareness of computers. HAL in the movie "2001: A Space Odyssey" is one such example. In fiction, we often imagine cyborgs that can wreak havoc among humans. People can visualise generative pretrained transformers or genetic algorithms that can adapt. However, we humans, in form and substance, not only think about ourselves but also about others. Much of our thinking is situation-specific. In AI so far, we have mainly used generalisations. To make them particular and think like a human, we would need to make humanoids with built-in AI and unique personalities, not a population of clones born into a brave new world. When I asked AI to draw me an ascetic, it would have needed to understand my request with specific knowledge of my personality, interests, morals, and tastes, rather than the meaning of the word 'ascetic' in general. Until that happens, AI is simply a suite of general-purpose algorithms bundled with massive data structures and computing power.
As a last word
It is not insane to wonder about the meaning of AI and what it really represents. As we have seen in this essay, AI is about generalisations. AI, glorified though it may be, cannot imitate intelligence, which is not even tangible in a physical sense. Therefore, it makes little sense to use the word 'artificial'. What we have is generalised intelligence, if we accept the intelligence part as it is, rather than artificial intelligence. This takes the cloak of grandiosity and mystery off AI and makes it appear as what it is: a suite of computer-assisted intelligent tasks that can be subject to use and misuse by its creators. Thus, we naturally accept human intervention in AI to guide our future, as we do with other human endeavours. Do we require an army of clones or humanoids to fight for us in a star war? We have not yet made acceptable contact with aliens. As things stand now, we only need computers to work for the betterment of humanity, not to become artificial humans competing with us. We have already amassed weapons of mass destruction and do not wish to add to the arsenal. Thus, we ask only for generalised intelligence to serve us, not artificial intelligence to compete with us.
Bibliography
IBM (2023) What is Artificial Intelligence (AI)? https://www.ibm.com/topics/artificial-intelligence
Fisher, R.A. (1922) On the Mathematical Foundations of Theoretical Statistics. Philosophical Transactions of the Royal Society of London, Series A, 222:309–368
Fisher, R.A. (1936) The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7:179–188
Signorelli, C.M. (2018) Can Computers Become Conscious and Overcome Humans? Frontiers in Robotics and AI, 5. https://doi.org/10.3389/frobt.2018.00121
Tryon, R.C. (1939) Cluster Analysis: Correlation Profile and Orthometric (Factor) Analysis for the Isolation of Unities in Mind and Personality. Edwards Brothers, Ann Arbor