Demystified: AI, Machine Learning, Deep Learning
Many people regard the terms Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) as synonyms. This is quite far from the truth, and today we clear up these misconceptions.
AI, ML, and DL are practical applications of various data analysis models and algorithms across use cases in different industries. While these tools cannot yet make moral choices or act as sentient machines, they hold the potential to drive more value for any business by finding previously unseen patterns and predicting the possible outcomes of certain actions. We hope this article provides a better understanding of AI, ML, and DL capabilities and becomes your starting point on a journey towards AI-powered data analytics.
The first step along this path was taken with expert systems, the earliest attempt to codify the process of data analysis. Domain experts formulated the rules of their field, which were then codified in rules engines and used to answer questions. However, such a system could only operate according to its predefined rules and could not improve over time. Keeping a rules engine up to date as new facts are added is also a cumbersome and labor-intensive endeavor. Nevertheless, expert systems are still quite widespread in healthcare and finance.
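To make the idea concrete, here is a minimal sketch of what such a rules engine looks like in Python; the rules and facts are purely illustrative and not taken from any real knowledge base:

```python
# Minimal sketch of an expert-system-style rules engine.
# The rules below are illustrative placeholders, not real domain knowledge.

RULES = [
    # (set of facts that must all be present, conclusion)
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"cough", "no_fever"}, "possible common cold"),
]

def infer(facts):
    """Return every conclusion whose conditions are fully covered by the given facts."""
    facts = set(facts)
    return [conclusion for conditions, conclusion in RULES if conditions <= facts]

print(infer({"fever", "cough"}))  # -> ['possible flu']
```

The limitation described above is easy to see here: the engine can only answer questions its predefined rules cover, and every new fact or exception requires a human to edit the rule base by hand.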
The next step towards machine learning was made when the modern computer CPU architecture was introduced, followed by Graphics Processing Units (GPUs). The core benefit of GPUs is that they enable parallel data processing thanks to thousands of smaller cores, as opposed to the serial processing performed by the few larger cores of a CPU. The further evolution of this technology and the migration of data-intensive projects to the cloud have led to a significant boom in AI development.
In addition, using Big Data tools like Hadoop, Spark, Flink, MapReduce, and Cassandra helps businesses leverage their Big Data capabilities to the fullest. Previously, data was stored in one place and had to be transferred elsewhere for processing, which made transfer speed the bottleneck and was quite expensive. The modern approach means storing the data in the cloud and directing the processing to the data, not the other way around. This is much more cost-effective, not to mention much faster.
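As a rough sketch of this "bring the processing to the data" idea, the PySpark snippet below reads a dataset directly from cloud object storage and aggregates it in place on the cluster; the bucket path and column names are hypothetical placeholders:

```python
# Sketch: process the data where it lives instead of copying it out first.
# The bucket path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("in-place-analytics").getOrCreate()

# Read directly from object storage; Spark workers pull only the partitions they need.
orders = spark.read.parquet("s3a://example-bucket/orders/")

# The aggregation runs in parallel on the cluster sitting next to the data.
revenue_by_region = (
    orders.groupBy("region")
          .agg(F.sum("order_total").alias("revenue"))
)

revenue_by_region.show()
```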
What is AI: something great that doesn’t work just yet
We have already published an article explaining that AI is not coming in the form of sentient robots. As a matter of fact, AI has been an umbrella term and buzzword ever since the introduction of the Turing test. Scientists strove to create intelligence in a machine, but it did not have to be a copy of human intellect.
AI is a set of algorithms that learn and evolve over time, becoming better and better at their task, yet they remain incapable of making moral choices (they do not know what is good or bad), of understanding the value of art, or of taking aesthetic pleasure in music.
Actually, AI is now more of a codename for a new frontier in people's minds. Every new branch of mathematical theory eventually gains a practical implementation and stops being called AI. This phenomenon was described by Ted Dunning, Ph.D., Chief Application Architect at MapR (an enterprise Hadoop vendor), who said that "when something does not work, we call it AI. When it begins to work right, we call it something else."
What is Machine Learning: AI applied to Big Data Analytics
Machine Learning is the subset of AI technology that boomed in the 1980s. Its main application is ever-increasing quality and precision in big data analysis. ML algorithms find patterns in historical data, and the trained ML models follow these patterns to extract valuable insights from new data.
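In practice, this "learn from historical data, predict on new data" loop often looks like the scikit-learn sketch below; the synthetic dataset and the choice of model are illustrative only:

```python
# Sketch of the train-on-history, predict-on-new-data loop.
# Synthetic data stands in for real historical records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# "Historical" data with known outcomes.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_hist, X_new, y_hist, y_new = train_test_split(X, y, test_size=0.2, random_state=42)

# Learn the patterns hidden in the historical data...
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_hist, y_hist)

# ...and apply them to previously unseen records.
predictions = model.predict(X_new)
print(f"Accuracy on new data: {accuracy_score(y_new, predictions):.2f}")
```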
There are two main subdomains of machine learning: supervised learning and unsupervised learning.
- Supervised learning uses human-created data labels (both the input data and the expected output are provided to the algorithm) to structure the data. ML models in this category include random forests, naive Bayes algorithms, SVMs, and linear or logistic regression.
- Unsupervised learning uses unlabeled data, so the ML models have to find structure in it on their own; the related reinforcement learning technique instead trains a model by maximizing the rewards it receives as feedback. A simple unsupervised example is sketched below.
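To contrast the two approaches, the sketch below clusters unlabeled points with k-means: no labels are supplied, and the grouping emerges from the data alone (the data itself is synthetic):

```python
# Unsupervised sketch: group unlabeled points without any human-provided labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic, unlabeled data points.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# KMeans discovers the grouping on its own.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
cluster_ids = kmeans.fit_predict(X)

print(cluster_ids[:10])         # cluster assignments for the first ten points
print(kmeans.cluster_centers_)  # the centers the algorithm found by itself
```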
Machine Learning is mostly leveraged for data classification and is widely used in the financial and banking industries for fraud detection and risk management. We have described the use cases of machine learning in the banking industry and how various businesses use machine learning to improve financial services.
Deep Learning: multiple layers of neural networks
Deep Learning algorithms use multiple layers of nodes with adjustable weights, the so-called Deep Neural Networks (DNNs). There are input and output layers, with multiple hidden layers of nodes between them. By adjusting the weights of the DNN nodes during training, data scientists can influence the outcome, thus training the model to reach the required results at scale. Thanks to massively parallel data processing on potent GPUs, data scientists can nowadays train DNNs with hundreds or even thousands of layers.
Each weight adjustment helps the model better capture the relevant features of the data. The advantage of this process is that the features do not have to be defined up front; over time, the model learns to identify them, and the best outcome, on its own.
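As a bare-bones illustration of layers and weights (written in plain NumPy rather than any particular DL framework), the sketch below pushes a small batch of data through two hidden layers; the layer sizes and random weights are arbitrary:

```python
# Bare-bones sketch of a feed-forward network: data flows through
# layers of weighted nodes; training would adjust these weights.
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0, x)

# Arbitrary sizes: 4 input features -> two hidden layers of 16 nodes -> 1 output.
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3      # output layer

x = rng.normal(size=(3, 4))  # a batch of three input samples
print(forward(x))            # untrained outputs; training nudges W1..W3 toward better ones
```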
DNNs are the most widespread, but not the only, type of ML algorithm used in Deep Learning.
We have explained how Deep Learning really works, as well as covered multiple Deep Learning advancements made in 2017 in areas like text and speech applications, machine perception and OCR, reinforcement learning, and robotic movement. These applications disrupt and improve the healthcare, financial, and retail industries, as well as many other areas of life.
Prospects of AI technology evolution
That said, there have been significant improvements in the effectiveness of AI applications over the last decade. Even though Deep Blue defeated Garry Kasparov more than 20 years ago, the next resounding victory of this kind happened only recently, when Google DeepMind's AlphaGo defeated Go champion Lee Sedol in 2016. This does not mean AI development has stalled; quite the contrary, it has boomed significantly.
From the sci-fi excitement of the early 1950s and dreams of sentient robots, we have come to realize that AI is best used to automate laborious and time-consuming tasks at scale, not to imitate the functions of the human brain. Certain similarities between natural and artificial intelligence still remain, as in Loihi, Intel's self-learning chip. Recent technological advancements, however, underpin the strong trend of employing ML algorithms for large-scale data analysis.
Final thoughts on AI, Machine Learning and Deep Learning
Is Deep Learning the direction in which AI will evolve? We think not, as exciting new applications constantly appear on the cutting edge of technology, along with new possibilities to apply ML and DL in industries that have not yet been digitalized. As more and more businesses complete their cloud transition, more and more prospects for leveraging AI algorithms in Big Data analytics open up.
During the AWS Summit 2018 in London, Amazon CTO Dr. Werner Vogels revealed that AWS concentrates its efforts on delivering AI-powered features and services across its products to further empower value delivery for its customers. This resulted in multiple AI/ML/DL releases, announcements, and features presented during the recent AWS re:Invent 2018 week.
Microsoft does the same, housing an 8,000-person R&D department wholly dedicated to developing AI and ML services for Azure. AI is definitely hotter than ever and will become even more profitable and useful in the future.
Have you already implemented ML models in your data analytics systems? Would you like to? If so, IT Svit can help!