When AI projects fail, the cause is often flawed data. Learning from mistakes and taking a long-term view is key to getting AI on track.
A year and a half ago, the US mortgage company Mr Cooper introduced a recommendation system in its customer service, meant to suggest ways to solve customer problems.
After nine months, the company realized that no one was using the system, and it took another six months to figure out why, CIO Shridar Sharma tells our US sister site CIO.com.
The reason was simply that the system's recommendations were not relevant. This was not the fault of the algorithms, however, but of the training data, which was based on technical descriptions of the customers' problems rather than on how the customers described those problems in their own words.
Unfortunately, this is nothing unusual. In a recent IDC survey, only about 30 percent of companies reported a 90 percent success rate for their AI projects.
Many companies have trouble producing accurate data to train their machine learning algorithms. If data has not been properly categorized, people need to take the time to arrange it, which can delay projects or cause them to fail.
Another problem is that you simply do not have the data required for the project.
Data can also exist in excessive amounts – and spread across too many different places.
Another obstacle to AI projects is that companies rely on historical data instead of current data in their training sets. Systems trained on static historical data often do not work on real-time data, according to Andreas Braun, an analyst at Accenture.
There can be a big difference between the historical data used for training and the data a system sees in real time – for example, when detecting fraud or money laundering – because the models have not been trained to notice small changes in behavior.
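The problem described above is commonly called concept drift. A minimal, entirely hypothetical sketch can illustrate it: a "model" (here just a learned amount threshold, standing in for real training) is fit on historical fraud cases, and then fails on live transactions because fraudsters have shifted their behavior. All data, thresholds, and amounts below are invented for illustration.

```python
import random

random.seed(0)

def fit_threshold(amounts, labels):
    """Pick the amount threshold that best separates fraud from normal
    transactions in the historical set (a stand-in for model training)."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(set(amounts)):
        acc = sum((a >= t) == l for a, l in zip(amounts, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical historical data: fraud was large transfers (>= 900).
hist_amounts = [random.uniform(10, 500) for _ in range(200)] + \
               [random.uniform(900, 1000) for _ in range(20)]
hist_labels = [False] * 200 + [True] * 20

threshold = fit_threshold(hist_amounts, hist_labels)

# Real-time behavior has drifted: fraudsters now split transfers into
# smaller amounts (300-400) the historical model never saw as fraud.
live_fraud = [random.uniform(300, 400) for _ in range(20)]
detected = sum(a >= threshold for a in live_fraud)
print(f"threshold={threshold:.0f}, detected {detected}/20 drifted fraud cases")
```

The static rule scores perfectly on its own training history yet misses every drifted case, which is why production systems typically monitor live predictions and retrain on recent data rather than relying on a one-off historical snapshot.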
Source: Computer Sweden