Thursday, October 1, 2015

Understanding Concepts in machine learning


The first one, "The Dark Secret at the Heart of AI," does a very good job of explaining the "black box" problem that we have discussed here on several occasions: as machine learning algorithms become more and more sophisticated, the possibility that a human mind can understand the procedures they use to reach a given result diminishes, which ultimately leaves us with a black box that generates results we can evaluate only by their quality, without understanding how they are reached.

If, for example, we train an algorithm with the full history of loans granted and denied by an institution, so that it can make the decisions that a risk committee would make today, we end up with a black box whose decisions can be validated only by the quality of their results (if it reduces the number of loans granted and not repaid, we consider it valid), but we will probably be unable to understand how the machine arrives at each individual decision: its operation will be a black box whose results we simply have to trust.
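The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (synthetic data, a simple nearest-neighbour vote standing in for a far more complex model), showing how such a system is judged only by aggregate outcome quality, never by the reasoning behind any single decision:

```python
# Illustrative sketch (hypothetical data and model): train an opaque
# classifier on past credit decisions and evaluate it purely by outcome
# quality, without inspecting how individual decisions are reached.
import random

random.seed(0)

def make_record():
    """Synthetic applicant: (income, debt_ratio, defaulted)."""
    income = random.uniform(20, 120)       # thousands per year
    debt_ratio = random.uniform(0.0, 1.0)  # debt relative to income
    # Hidden ground truth the model must approximate.
    defaulted = debt_ratio > 0.6 and income < 60
    return (income, debt_ratio, defaulted)

history = [make_record() for _ in range(1000)]  # past granted credits

def predict_default(income, debt_ratio, k=15):
    """k-nearest-neighbour vote over the credit history: a 'black box'
    in the sense that no single human-readable rule explains any one
    decision, only the accumulated weight of past cases."""
    neighbours = sorted(
        history,
        key=lambda r: (r[0] - income) ** 2 + (100 * (r[1] - debt_ratio)) ** 2,
    )[:k]
    return sum(r[2] for r in neighbours) > k / 2

# We can only judge the model by aggregate results: the default rate
# among the applications it would approve.
applicants = [make_record() for _ in range(500)]
approved = [a for a in applicants if not predict_default(a[0], a[1])]
default_rate = sum(a[2] for a in approved) / len(approved)
print(f"approved {len(approved)} of {len(applicants)}, "
      f"default rate among approved: {default_rate:.1%}")
```

If the default rate among approved loans goes down, we call the model valid; whether any particular applicant was rejected for a sensible reason remains opaque.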

This raises another question: because when we feed an algorithm, we give it all the data available to us that, according to our human reasoning, contributes to the result, what we find is that the progress of machine learning redefines our concept of human intelligence and alters our perception of what we can or cannot do.

The starting point for the machine is what we as humans are able to understand; from there on, everything is unexplored terrain, methods that require a computing power our brains simply lack. So things that today seem normal for a human to do will come to seem as absurd to us as a machine doing them once did, while things a machine is capable of doing will strike us as less and less surprising and more and more normal.

Very soon, the chatbot will have become the standard for after-sales service and for many other things, such as "explaining" the news, and the initial period of disenchantment and disappointment will give way to a moment in which, as already happens with younger generations, we actually prefer talking to robots over talking to people, not only because they give us better service, but also because they eliminate the feeling of "bothering someone" on the other side (just as a link does not "complain" or "snap at us" if we click on it ten times).

Quite simply, a conversational algorithm that serves you when you have issues with a company's product or service will become "normal", and we will see it as "something from the last century" that there were once people dedicated to providing that service.

In the same way, there are activities that will soon be a thing of the past, whether programming traffic lights to avoid jams, making investment decisions, or diagnosing a disease, and it will seem "strange" or "primitive" to think that these activities were previously carried out by a person.

The replacement of taxi or truck drivers will be seen as so obvious that we will find it incredible, and dangerous, that this activity was once carried out manually. Will that mean that many people who did those jobs go straight into unemployment? Possibly, but the solution will not be to tax the robots that have taken over these activities, but to retrain people so that they can carry out other related activities. And in that sense, cutting social benefits, as seems to be the trend in countries like the United States, can only lead to a worse problem.

This does not mean, logically, that we should not look for methodologies that somehow increase the traceability of the decisions made by machines. The article describing the nightmare scenario imagined by Tim Berners-Lee, in which decisions in the financial world are made by machines that create and manage companies, completely detached from typically human notions (and ones difficult to explain to a machine) such as social impact or the common good, is certainly worth reading, and it quotes that recent Vanity Fair interview with Elon Musk in which he spoke of the same kind of dangers, of automatic optimization, and of what could happen with an algorithm that optimizes the production of strawberries: