The future of AI: Deep Learning… or much more?
Whenever people talk about Artificial Intelligence (AI), they are most likely speaking of deep learning (DL). And yes, the results of applying deep learning to data analysis can be astounding.
DL is part of AlphaGo, the system that defeated the reigning world champion at the game of Go. DL lies at the foundation of many services and projects that dominate the headlines. Sounds great, right? The problem is that DL is but a small part of Machine Learning (ML), and ML is but a small part of the greater AI field.
Deep learning is a machine learning technique that learns representations of the data itself, as opposed to task-specific algorithms built around hand-crafted features. Deep learning can be supervised or unsupervised, and it excels at analyzing unstructured data against pre-configured parameters and statistics, evaluating incoming data with a high degree of precision.
Let’s imagine you run a canned vegetable factory that produces pickles, tomato juice, canned tomatoes, tomato paste and beans in tomato sauce. Raw vegetables have to be sorted: are the tomatoes ripe enough to be used for paste, or only firm enough to be canned whole? Cucumbers must also be sorted, as ones that are too big cannot be used for pickles. Previously, sorting the raw materials was the task of a dedicated department of experienced employees. Nowadays, a single machine running a deep learning algorithm handles it.
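To make the sorting example concrete, here is a minimal sketch of the idea in Python: a nearest-centroid classifier over two invented tomato features (a ripeness score and a diameter in cm). A real factory system would run a deep network over camera images; every number and class name below is made up purely for illustration.

```python
# Toy "sorting machine": nearest-centroid classification on hand-made
# tomato features (ripeness score, diameter in cm). All numbers and
# class names are invented for illustration.

def centroid(samples):
    """Mean feature vector of a list of (ripeness, diameter) pairs."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def classify(sample, centroids):
    """Assign the sample to the class whose centroid is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical training data: very ripe tomatoes go to paste,
# firm ones are canned whole.
training = {
    "paste":  [(0.90, 6.0), (0.95, 5.5), (0.85, 6.5)],
    "canned": [(0.60, 5.0), (0.55, 4.8), (0.65, 5.2)],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

print(classify((0.92, 6.1), centroids))  # a very ripe tomato -> "paste"
print(classify((0.58, 5.0), centroids))  # a firm one -> "canned"
```

The same structure, a model trained once on labeled examples and then applied to every item on the conveyor, is what replaces the sorting department.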
This is but one example of a deep learning implementation. Name any monotonous job that boils down to sorting an incoming flow of data, and deep learning will be your best bet for automating it. However, several concerns arise:
- A DL algorithm can only sort the data it was trained to sort. It can easily recognize images of cars and distinguish them from boats, for example, but a picture of an amphibious vehicle can break its logic. In addition, the algorithm cannot initiate full-scale retraining on its own, so it will never recognize a pattern it was not trained to recognize.
- A DL algorithm relies heavily on the quantity and quality of the data used for its initial training. For example, if the training set consisted of 30 books depicting the horrors of World War II and 70 copies of “Mein Kampf”, the resulting model would absorb Nazi rhetoric wholesale, simply because the sources claiming that “fascism is great” outnumbered the opposing ones more than two to one. In other words, the results of DL training depend greatly on the judgment and ethics of whoever curates the training data set.
- A DL algorithm is at great risk of overfitting: tuned for ever-higher accuracy on its training data, it ends up dismissing perfectly valid inputs that fail to match some learned parameter. A face that any human eye would instantly recognize as a person with albinism might become a nightmare for a DL algorithm that was never taught that some dark-skinned people can have white hair and red eyes.
- DL algorithms cannot present the reasons for their decisions. They can state that something is red, round, baked or wooden, but they cannot describe the logical chain that led to that conclusion. Nor can they tell good from bad or right from wrong. This means DL cannot be relied on for decisions that impact human lives, financial transactions, and the like.
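The first concern above is easy to demonstrate in a few lines. A classifier's softmax layer always distributes 100% of its confidence over the classes it was trained on, so there is no built-in way for it to say "this is neither". Here is a minimal sketch, with invented logits standing in for what a network might output for an amphibious vehicle:

```python
import math

# A softmax over a closed set of classes: the model is forced to pick
# "car" or "boat" even for an input that is really both. The logit
# values below are invented for illustration.

def softmax(logits):
    """Convert raw scores into probabilities that always sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["car", "boat"]

# Hypothetical logits for an amphibious vehicle: the model is torn,
# but it has no "unknown" option to fall back on.
logits = [1.1, 0.9]
probs = softmax(logits)

prediction = classes[probs.index(max(probs))]
print(prediction)           # a forced choice between the known classes
print(round(sum(probs), 6)) # the probabilities always sum to 1.0
```

Whatever the input, the confidence mass is split among the known classes only; detecting "none of the above" requires extra machinery the basic setup does not have.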
Despite the unprecedented power DL holds, it is actually useful for quite a limited range of activities. But what is AI, then?
The AI depicted by sci-fi authors and movie directors (as in I, Robot) is an autonomous computing system capable of making decisions under uncertainty. It should be capable of unsupervised learning, of explaining its decisions, and of making correct moral choices.
Deep learning is obviously no match for a true AI. Luckily, as DL is but a small fraction of Machine Learning, there is a plethora of other algorithms. A combination of DL with other algorithms, or perhaps an entirely new algorithm not widely known today, will be the source of the true AI we hope to see in the future. What, then, slows down the progress of AI development? We briefly list the reasons below:
- True AI training demands practically unlimited stores of data, which very few companies can provide (Google, Apple, Microsoft, Amazon and… that’s pretty much it). AI developers can only hope to invent a promising algorithm and sell it to one of the big players, as in Google’s 2014 acquisition of DeepMind for a reported $500–600 million.
- AI development requires substantial financial investment, which means only the aforementioned big players have sufficient resources to test algorithms at a large enough scale. Bloomberg quotes Peter Lee, head of Microsoft Research, as saying that a seven-figure salary is an appropriate reward for a world-class deep learning expert.
- The ability to explain the decision-making process is essential. If an algorithm cannot explain the chain of reasoning that led to a decision, the result cannot be considered trustworthy. Consider, for example, a study by Virginia Tech researchers, described in detail in an article in The Verge.
The algorithm was shown an image of a bedroom with blinds and asked what was covering the windows. It checked the floor, found the bed, concluded the room was a bedroom and answered: blinds. However, it never looked at the windows in the first place; it just happened that the majority of bedroom images in its training database had blinds on the windows. But what if the image had no blinds at all?
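The shortcut described above can be sketched as a degenerate "VQA model" that never looks at the image at all and simply returns the most frequent training answer for a given question. The tiny answer log below is fabricated to mimic a dataset where most bedroom photos happen to have blinds:

```python
from collections import Counter

# A "visual question answering" model that ignores the image entirely and
# answers from dataset statistics alone. The answer log is fabricated to
# mimic a dataset where most bedroom photos have blinds on the windows.

training_answers = {
    "what covers the windows": ["blinds", "blinds", "blinds", "curtains"],
}

def answer(question, image=None):
    """Return the majority training answer; the image argument is never used."""
    seen = training_answers.get(question, ["unknown"])
    return Counter(seen).most_common(1)[0][0]

# Even for a bedroom photo with bare windows, the answer stays the same.
print(answer("what covers the windows", image="bedroom_without_blinds.jpg"))
```

A model like this can score surprisingly well on a benchmark while understanding nothing about the actual picture, which is exactly why the chain of reasoning matters.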
That said, a responsible AI we can trust with important decision-making is yet to be developed, and deep learning is merely one part of that process, and far from the most important one.
Do you think otherwise? Tell us your thoughts on the topic; we are always open to discussion! Liked the article? Please share it with your friends!
Feel free to browse the latest insights and tips on DevOps, Big Data, Machine Learning and Blockchain from IT Svit!