Artificial Intelligence began as an idea back in the 1950s. In the decades that followed, computer scientists, mathematicians and experts in other fields strove to advance it, by improving either the algorithms or the hardware.
AI research has surged in this decade, through new advances in image processing, pattern detection and deep learning. Machines can now understand verbal commands, distinguish pictures, drive cars and play games as well as humans.
Artificial Intelligence still has a long road ahead of it. It is therefore useful to classify AI, to get a clearer understanding of existing capabilities and of what lies ahead.
One accepted way of classifying AI is by its memory capabilities. On this basis, experts have divided AI into four types:
Early AI algorithms lacked memory and were purely reactive: given a specific input, the output would always be the same. Using statistical methods, these models could sift through huge chunks of data and produce a seemingly intelligent output or pattern.
These are the most basic AI systems, able neither to form memories nor to use past experiences to inform their decisions.
Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
Deep Blue can identify the pieces on a chessboard and knows how each moves. It can make predictions about what moves might come next for it and its opponent, and it can choose the optimal move from among the possibilities. But it has no concept of the past, no memory of what has happened before.
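Conceptually, such a reactive player is just a pure function of the current position. The Python sketch below illustrates the idea only; it is not Deep Blue's actual algorithm (which relied on massive game-tree search running on custom hardware), and legal_moves and evaluate are hypothetical stand-ins for a real engine's move generator and position evaluator.

```python
def choose_move(board, legal_moves, evaluate):
    """A purely reactive chooser: given the same board, it always
    returns the same move; nothing is remembered between calls."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves(board):    # every move possible right now
        score = evaluate(board, move)  # hypothetical static evaluation
        if score > best_score:
            best_move, best_score = move, score
    return best_move                   # no state survives this call
```

Because the function keeps no state, calling it twice on the same board yields the same move, which is exactly the behavior that makes these systems reactive.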
Humans, by contrast, also draw on past experience to make informed decisions, and sometimes we have to decide with imperfect information. Accounting for the past and for such imperfections paved the way for the next generation of AI in its goal of mimicking humans.
Artificial Intelligence started growing steadily in this decade with the emergence of deep learning techniques. Based on our understanding of the brain's inner mechanisms, algorithms were developed that imitate the way our neurons connect. Training is involved: the software learns to mimic human patterns by comparing its output against a large store of existing data. One of the characteristics of deep learning is that it gets smarter the more data it is trained on.
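As a minimal sketch of what such training looks like, the NumPy snippet below fits a tiny two-layer network to the XOR problem: on each pass it compares the network's output with the known answers and nudges the connection weights to reduce the error. The architecture, learning rate and iteration count here are arbitrary choices for illustration.

```python
import numpy as np

# Toy dataset: XOR, a classic problem a single artificial neuron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))  # input -> hidden connection weights
W2 = rng.normal(size=(4, 1))  # hidden -> output connection weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: signals flow through the layers of "neurons".
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: adjust the weights to shrink the error.
    error = y - output
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_output
    W1 += 0.5 * X.T @ d_hidden

print(output.round(2))  # converges toward [[0], [1], [1], [0]]
```

The same mechanism, scaled up to millions of weights and examples, is what lets deep networks keep improving as they see more data.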
Deep learning dramatically improved AI's image recognition capabilities, and soon other kinds of AI algorithms were born, such as deep reinforcement learning.
Self-driving cars do some of this already. For example, they observe other cars' speed and direction. That can't be done in just one moment; it requires identifying specific objects and monitoring them over time. These observations are added to the self-driving cars' preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road.
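The "limited memory" at work here can be as simple as a short rolling buffer of recent observations. The sketch below is hypothetical (it mirrors no real autonomous-driving API): another car's speed is estimated from its last few observed positions, and older observations simply fall off the end of the buffer.

```python
from collections import deque

class TrackedCar:
    """Transient working memory about one observed car: a short window
    of (time, position) pairs, never saved as long-term experience."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # old entries silently drop off

    def observe(self, time_s, position_m):
        self.history.append((time_s, position_m))

    def estimated_speed(self):
        """Average speed over the window; needs at least two observations."""
        if len(self.history) < 2:
            return None
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

car = TrackedCar()
for t, p in [(0.0, 0.0), (0.5, 7.0), (1.0, 14.1)]:
    car.observe(t, p)
print(car.estimated_speed())  # about 14.1 m/s
```

Once an observation leaves the window it is gone for good, which is precisely the limitation discussed below.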
Another notable example is DeepMind's AlphaStar project, which managed to defeat top professional players at the real-time strategy game StarCraft II. The models were designed to work with imperfect information, and the AI repeatedly played against itself to learn new strategies and refine its decisions.
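In outline, self-play is a loop in which the current agent plays against a snapshot of itself and learns from the result. The sketch below is schematic only: play_game, update_policy and agent.snapshot() are hypothetical placeholders, and AlphaStar's actual pipeline was far more elaborate, combining imitation learning with a league of competing agents.

```python
def self_play_training(agent, num_games, play_game, update_policy):
    """Schematic self-play loop: the agent improves by playing itself."""
    for _ in range(num_games):
        opponent = agent.snapshot()               # frozen copy of current self
        trajectory, result = play_game(agent, opponent)
        update_policy(agent, trajectory, result)  # reinforce what led to a win
    return agent
```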
But this type has its own limitations. Most notably, it requires huge amounts of data to learn even simple tasks.
And, as in the self-driving example, the pieces of information about the past are only transient. They aren't saved as part of a library of experience the car can learn from, the way human drivers compile experience over years behind the wheel. That missing ability leads to our third classification.
This classification derives from the psychological term "theory of mind": the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior. Machines in this next, more advanced class form representations about the world and its entities, hence the name Theory of Mind.
While the previous two types of AI have been and are found in abundance, the next two types of AI exist, for now, either as a concept or a work in progress.
If AI systems are indeed to reach their full potential of mimicking human nature, they will need a vast store of knowledge and the ability to compute over that herculean amount of data. Advances across all branches of AI will lead us there, giving machines the ability to understand human thoughts and feelings and to adjust their behavior accordingly.
The final step of AI development is to build systems that can form representations about themselves. This is, in a sense, an extension of the "theory of mind" possessed by Type III artificial intelligences. Consciousness is also called "self-awareness" for a reason: conscious beings are aware of themselves, know about their internal states, and are able to predict the feelings of others.
We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.
This type of AI will not only be able to understand and evoke emotions in those it interacts with, but also have emotions, needs, beliefs, and potentially desires of its own. And this is the type of AI that doomsayers of the technology are wary of.
Creating this type of AI, which is decades, if not centuries, away from materializing, is and will always be the ultimate objective of all AI research.
To conclude, while we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step toward understanding human intelligence in its own right, and it is necessary if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.