Artificial intelligence, also known as AI, has been the talk of the past decade, and especially of the last few years. But what exactly is artificial intelligence? More importantly, how has it managed to make so much of an impact on the world in such a short time?
Broadly speaking, artificial intelligence is the technology that enables machines or computers to exercise human-like thought processes, such as learning from tasks, making decisions and reasoning about those decisions.
Artificial intelligence was once a thing of science fiction. Fantasy stories told of robots, advanced computing systems, and even dystopian futures in which artificial intelligence dominated the world. While people may imagine today’s artificial intelligence as belonging to those sci-fi worlds, the truth is that it has a very practical purpose: processing the enormous amounts of unstructured data generated every day.
Artificially intelligent systems are already employed in a growing variety of fields around the world, from medicine and healthcare to factory production lines. One major reason artificial intelligence is so appealing is its ability to increase workplace productivity while reducing operating costs. Unlike human workers, computers do not get tired, call in sick, or go on holiday. Nor do computers require a monthly salary; most of their cost is paid upfront. Additionally, computers can work faster and more precisely than humans in most jobs.
What makes up this disruptive technology? Let us take a look at the components of an artificial intelligence system.
What Makes Up Artificial Intelligence?
Artificial intelligence itself is made up of several revolutionary technologies, including natural language processing, deep learning and predictive analytics. This may sound like a bit of jargon, but in essence, the terms refer to the technologies that enable computers to be “artificially intelligent” by understanding the nuances of human language, learning from experience and making informed predictions.
To further understand what actually comprises artificial intelligence, let us look at the various key technologies that make up AI.
Machine Learning
Machine learning enables computer systems to learn from experience without being explicitly programmed. With machine learning, human users can “teach” a computer to perform tasks it could not perform before. Think of YouTube’s recommended channels, for example. The system analyses your viewing history and, using an algorithm, suggests a few channels it thinks you may be interested in. As the user, you can dismiss any suggestion that does not interest you; the system then “learns” from this feedback and stops showing you that suggestion.
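The feedback loop described above can be sketched in a few lines of Python. This is only a toy illustration, not YouTube’s actual algorithm: the channel names, scores and penalty value are all invented.

```python
# A minimal sketch of learning from user feedback, using made-up data.
# Each candidate channel starts with a score; dismissing a suggestion
# lowers its score so that channel is no longer recommended.

def recommend(scores, threshold=0.5):
    """Return channels whose score is above the threshold, best first."""
    return [ch for ch, s in sorted(scores.items(), key=lambda kv: -kv[1])
            if s > threshold]

def dismiss(scores, channel, penalty=1.0):
    """'Teach' the system that the user is not interested in a channel."""
    scores[channel] -= penalty

scores = {"cooking": 0.9, "chess": 0.8, "carpentry": 0.6}
print(recommend(scores))       # all three channels suggested
dismiss(scores, "chess")
print(recommend(scores))       # "chess" no longer appears
```

The essential idea is that the system’s future behaviour changes in response to past interactions, rather than being hard-coded in advance.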
Aside from its applications in websites and streaming services, machine learning is also used in the healthcare, pharmaceutical and life science fields. It aids human researchers in diagnosing diseases, developing drugs for treatment and interpreting medical images.
Deep Learning
To this day, no one fully understands how the human brain works. This has not stopped scientists from attempting to mimic the brain’s biological neural networks in order to construct artificial neural networks.
Multiple layers of these artificial neural networks come together to enable deep learning, which is a form of machine learning. Consider facial recognition technologies. In facial recognition, a computer system deciphers many different inputs from each group of pixels in a photo. Each input is processed through an artificial neural network, and the computer combines the results to determine an output – whether there is a face in the photo, and if so, whether that face is the same face the computer has stored in its database. Of course, these artificial neural networks can make mistakes, and it is through deep learning that the recognition pattern is refined and improved.
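In outline, the layered processing described above can be sketched as inputs flowing through stacked layers of artificial neurons. The weights, layer sizes and pixel values below are arbitrary numbers chosen purely for illustration; a real face recognizer works on the same principle but with millions of learned parameters.

```python
import math

# A toy forward pass through two stacked layers, using made-up weights.
# Each "neuron" takes a weighted sum of its inputs and squashes it
# into the range (0, 1) with the sigmoid function.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One layer: every neuron combines all inputs via its own weights."""
    return [sigmoid(sum(w * x for w, x in zip(neuron, inputs)))
            for neuron in weights]

# Pixel-like inputs and two small layers of hand-picked weights.
pixels = [0.2, 0.8, 0.5]
hidden = layer(pixels, [[0.4, -0.6, 0.9], [0.7, 0.1, -0.3]])
score = layer(hidden, [[1.2, -0.8]])[0]   # a single "is this a face?" score
print(round(score, 3))
```

Stacking many such layers, and adjusting the weights when the final score is wrong, is what turns this simple structure into deep learning.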
Deep learning is also employed in voice recognition software such as Siri and Cortana. Our human voices contain many inflexions and nuances, not to mention accents. As such, a computer system must analyze each aspect of a phrase to translate it into words. If you have ever used voice recognition on virtual assistants, you may notice that the assistant sometimes asks you to correct them if they have misinterpreted your statement. This is a form of deep learning.
Neural Networks
Neural networks, as mentioned, are computer systems that resemble the network of biological connections in our brains. Computers process data through their artificial neural networks in order to learn and improve on their own. They learn best from large data sets, such as thousands of pictures of a single object or person. The neural network examines this data to draw associations, which it can then use to identify new data it is presented with. For example, if a computer is shown many photos of apples, it registers the similarities between those photos that identify them as apples. It can then use these associations to determine whether a new image is an apple or not.
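The apple example can be reduced to its simplest possible form: a single artificial neuron (a perceptron) that adjusts its weights whenever it misclassifies a labelled example. The two features (“redness” and “roundness”) and all the numbers here are invented for illustration.

```python
# A minimal single-neuron learner (a perceptron), trained on made-up
# two-feature examples: label 1 for "apple-like", 0 otherwise.

def predict(weights, bias, features):
    return 1 if sum(w * f for w, f in zip(weights, features)) + bias > 0 else 0

def train(examples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, bias, features)
            # Nudge the weights toward the correct answer.
            weights = [w + lr * error * f for w, f in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Invented features: (redness, roundness) -> apple (1) or not (0).
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1),
            ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
weights, bias = train(examples)
print(predict(weights, bias, [0.85, 0.9]))  # a new, apple-like example
```

After training, the learned weights encode the association between “red and round” and “apple”, so the neuron can classify images it has never seen before.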
Cognitive Computing
If you have coding experience, you will probably be familiar with machine language. To tell a conventional computer to perform an action, we typically have to use specific code or keywords that the computer can understand. Cognitive computing, however, aims to narrow the gap between human and computer interaction by enabling computers to understand natural human language and the meaning of speech and images. It gives computers more human-like thought processes, which is why a user can talk to Siri much as they would to another person.
Natural Language Processing
As with cognitive computing, natural language processing teaches artificial intelligence systems to interpret and understand the context of different human languages. We may laugh when Google Translate produces incorrect and hilarious translations, but artificial intelligence in this area is steadily improving: Skype Translator, for example, can interpret and translate speech between multiple languages in real time.
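One of the simplest natural language processing tasks is guessing which language a sentence is written in. The sketch below does this by counting common function words; the word lists are tiny invented samples, nothing like the statistical models real translators use, but the principle of extracting signals from raw text is the same.

```python
# A toy language guesser: score each language by how many of its
# common function words appear in the sentence. The word lists here
# are tiny illustrative samples, not real linguistic data.

COMMON = {
    "english": {"the", "and", "is", "of"},
    "french":  {"le", "et", "est", "de"},
    "spanish": {"el", "y", "es", "la"},
}

def guess_language(sentence):
    words = set(sentence.lower().split())
    scores = {lang: len(words & vocab) for lang, vocab in COMMON.items()}
    return max(scores, key=scores.get)

print(guess_language("the cat is on the mat"))     # english
print(guess_language("le chat est sur le tapis"))  # french
```

Real systems replace these hand-picked word lists with statistics learned from millions of documents, but the flow — turn text into features, then score hypotheses — is the same.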
Computer Vision
Computer vision allows computer systems to read the content of an image, including graphs, tables, pictures and other unstructured data. While traditional computers require images to be interpreted for them, artificially intelligent systems can interpret images themselves. This technology has been applied in the medical industry, where computers help interpret x-ray scans and other medical tests.
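At its lowest level, interpreting an image means extracting structure directly from pixel values. The sketch below finds the bright region in a tiny made-up grayscale image; real computer vision systems chain thousands of far more sophisticated steps, but each one reads meaning out of raw pixels in this spirit.

```python
# A toy "computer vision" step: locate the bright region in a tiny
# grayscale image, represented as a grid of invented pixel values
# from 0 (black) to 255 (white).

image = [
    [ 10,  12,  11,  9],
    [ 11, 240, 250, 10],
    [  9, 245, 235, 12],
    [ 10,  11,  10, 11],
]

def bright_pixels(img, threshold=128):
    """Return (row, col) coordinates of pixels above the threshold."""
    return [(r, c) for r, row in enumerate(img)
                   for c, v in enumerate(row) if v > threshold]

print(bright_pixels(image))  # the 2x2 bright block in the middle
```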
We may not yet live in a sci-fi world with the likes of Skynet, JARVIS and C-3PO, but the future of artificial intelligence looks promising. As we have seen, artificial intelligence is a diverse field made up of many different technologies that work together to create “intelligent” computer systems. As more and more of the world is digitized, we will no doubt see a growing use of artificially intelligent systems to improve productivity and cost-effectiveness across the globe. Since these technologies are changing and expanding rapidly, it is important to stay up to date on this immensely disruptive field. While computers could pose a threat to some human workers, they could also be powerful allies, as long as we understand how to work with them.