AI: The first 50 years … 10 years later
Ten years ago, I wrote my first blog post on the topic of Artificial Intelligence, celebrating the first 50 years of AI. It is interesting to look back after another 10 years: it seems that everything has changed while, from a different point of view, nothing has fundamentally changed…
Please note that the text that follows is a bit fragmented and not very linear, jumping from one aspect to another, but this is intentional: the structure tries to reflect Artificial Intelligence's many facets and ever-changing attributes. I hope the result is not too hard to read.
AI: science and not science fiction
Even after so many years, a shared, common definition of AI is far from being a reality: there are many definitions and even more nuances that change somewhat over the years.
For Marco Somalvico, AI studies the theory and methodologies that allow the design of software capable of providing computer performance that, to a common observer, would seem to be exclusive to human intelligence.
For Wikipedia, AI is the seemingly intelligent behaviour of a computer, as opposed to the natural intelligence of human beings. We commonly speak of Artificial Intelligence when a computer simulates cognitive functions that we normally associate with the human mind, such as solving complex, non-numerical problems.
Even if it sometimes does seem like science fiction (or even magic), Artificial Intelligence is scientific and deterministic. There is nothing that we don’t understand or that happens without us understanding why (even if, in certain cases, understanding can be extremely labor-intensive and expensive). Because it is science, which moves forward in small steps through successive improvements and through theories and hypotheses that can be refuted or validated, revolutionary inventions in AI are unlikely.
What Artificial Intelligence is not
Artificial Intelligence is not the solution to all the world’s problems nor the most important discovery since the wheel (or fire or anything else). Even if Google, Microsoft, IBM and others continue to make statements in this direction, they are the first to know that it’s not true.
It’s not something so advanced and intelligent that, with little work and effort, can solve complex problems and learn without supervision. What Edison said about genius is also valid for AI: “Genius is one percent inspiration, 99 percent perspiration.”
AI is not machine learning or deep learning. Machine learning is a technique (among many existing and future techniques) that is commonly used in the AI field to solve problems in different subsets of use cases where AI is applied.
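To make the distinction concrete, here is a minimal sketch of what "a machine learning technique" actually is: a nearest-neighbour classifier written from scratch (a hypothetical toy, not from the original post, using only the standard library). It shows that ML is simply one specific, mechanical method for mapping examples to labels, not intelligence in any broader sense.

```python
# Toy nearest-neighbour classifier: one machine learning technique
# among many, shown only to illustrate how narrow such a technique is.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to the query."""
    def sq_dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the (point, label) pair whose point is closest to the query.
    _, best_label = min(train, key=lambda item: sq_dist(item[0], query))
    return best_label

# Illustrative training data: (feature vector, label) pairs.
train = [((0.0, 0.0), "low"), ((1.0, 1.0), "high")]
print(nearest_neighbour(train, (0.2, 0.1)))  # closest to (0, 0) → "low"
```

Everything the "learner" does here is fully deterministic and inspectable, which matches the earlier point: AI is science, not magic.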
It is not an omniscient program that only needs to be configured or trained in order to solve problems at a high level of abstraction.
It’s not something that programs itself, works without expert assistance, or easily adapts to real-world problems without any specific knowledge.
Where are we now? The state of Artificial Intelligence
Today, AI simulates the result of some cognitive processes using approaches and methods that are different from those that the human brain uses. AI is here to stay. After a long winter and despite excessive expectations, there are many problems that can already be solved thanks to AI. It will also make it possible to solve many more problems as technologies and methodologies mature and improve.
Just like anything else, implementing an AI-based solution requires expertise, time, analysis and work. Not all use cases that appear to be suitable for AI actually are, or are so with a positive cost/benefit ratio.
The most important thing in a successful AI-based project is not the technique that will be used but other elements like the solidity of the business case, a correct analysis and effective implementation (based on experience, data and the most appropriate approach). To understand whether an AI-based project can be successful or not, the best way is almost always the implementation of a proof of concept.
A solution based on AI is always based on programming, whether in a traditional or an alternative way. If something is too difficult for a person, it is also too difficult for any AI-based solution.
An existing process doesn’t necessarily have to be replicated as-is when using AI. The best solution is often to combine AI with a review of the processes, to maximize the added value that AI brings.
AI is not the same as a human being’s general intelligence. Recently Geoffrey Hinton, the father of deep learning, said “My view is throw it all away and start again.”
Risks of Artificial Intelligence
Above all, the first “risk” is waiting for AI to meet the exaggerated expectations created by the big players in the market. Instead, companies should be using it gradually, but increasingly, to improve business processes.
It’s risky to think that AI is the “end” and not just one of the tools that can be used to solve problems, reduce costs, increase business opportunities and make businesses more effective and efficient. It’s not enough to just have a data scientist and a bunch of annotated documents to implement a successful AI-based solution.
Advice for your AI purchase
To finish my considerations, I will leave you with three simple and practical tips to keep in mind when evaluating solutions based on AI.
- Always exercise a healthy skepticism. If something seems too good to be true, it is usually because it is not true.
- Trust in the value of AI and experience. Although the road may be long and winding, the results will be positive, concrete and measurable.
- Stay away from big leaps: it’s better to move ahead step by step to achieve intermediate results that will serve as a solid platform for the future rather than trying to achieve extremely complex and ambitious goals from the start.