Friday, April 14, 2017

Twitter and its innovation practices

Twitter is preparing a new line of business based on exploiting its users' data, and I cannot help but see in it an innovation practice that has become a repetitive pattern for the company: allow an ecosystem of companies to develop around some feature of its operation, see which competitor stands out most, acquire it, and expel the others.

A pattern that has repeated with practically every function Twitter has incorporated from the ideas of the community of users and companies it has managed to generate at any given moment: for the development of the app for each platform, for example, Twitter allowed several to be built in competition, finally acquiring one of them and turning it into the official client. With geolocation, with the inclusion of photographs, with link shortening, with video embedding... for many of the functions we see on Twitter today, the company has followed the same strategy.

In the present case, the pattern repeats: for years, the company allowed multiple firms dedicated to the analytics of its data to emerge. Finally, in August of last year, it acquired Gnip, one of the competitors in that space, for $134 million. Since the operation, the company has simply been letting the agreements with its previous partners lapse without renewing them: the last of those partners, DataSift, will see its access to Twitter data disappear on August 13. And Twitter has announced that Gnip will be exclusively in charge of exploiting this data, in what constitutes proof of the reality of big data as a source of business.

So far, all normal. Or, in Twitter's case, business as usual. What this episode raises for me, however, is the question of the sustainability of these practices in the medium and long term: innovation at Twitter depends largely on the progressive demands of its users, and on the way the business community looks for ways to satisfy them with products built around the ecosystem the company generates. On that platform, the company cherry-picks: it chooses the best option and acquires it. Some of these acquisitions have been absolutely crucial to the company's future. From the point of view of innovation, an impeccable practice that exploits its capacity to generate a platform, an ecosystem that draws the attention of third parties. For those who decide to build on that platform, the obvious conclusion is what a risky sport it is: after creating and consolidating your activity, either you are the chosen one and become part of Twitter, which seems especially good at making acquisitions without decapitalizing the company and at retaining most of its workers, or you only have a limited time until Twitter decides to throw you out and exploit that business itself.

Entrepreneurs like Loïc Le Meur, who spent several years trying to develop products around Twitter with Seesmic, pivoting non-stop to adapt to that changing platform, will find nothing new here. But I doubt he is eager to try again in the same way. As a formula for innovation, the strategy has definitely paid off. But could Twitter, in the future, find that it has alienated its base of developers and entrepreneurs to the point that, faced with these prospects, companies willing to gamble on working within its ecosystem simply stop emerging?



This article is also available in English on my Medium page, "Twitter and its approach to innovation".

Thursday, April 13, 2017

From data to artificial intelligence

An article about Facebook's advances in image recognition, which allow it to build search systems based on the content that appears in images, leads me to reflect on the importance of data availability for the development of machine learning and artificial intelligence algorithms: it escapes no one that Facebook's ability to develop these systems for processing and recognizing patterns in images has to do with nothing more and nothing less than its access to tens of millions of images tagged and commented on by its users on Facebook itself and on Instagram.

When it comes to thinking about the possibilities of artificial intelligence for our business, we have to start with our possibilities of getting data to analyze. Data which, moreover, is not all created equal: it is not just that a paper archive will be of no use to us; we also need formats and tools open enough to allow processing, something that is not always easy when we talk about companies that, for a long time, processed their data in legacy systems that are difficult to integrate.

Coming from a stage in which many industries were busy catching up on everything related to so-called big data facilitates that task to some extent: when you already have data scientists in place, the least you can expect is that they have cleaned and catalogued the data sources they intend to count on for their analytics and visualizations. But after big data comes the next step: artificial intelligence. In fact, progress in artificial intelligence is leading data scientists to realize that they need to evolve into that discipline or be considered obsolete professionals.

Data is the real gasoline that moves artificial intelligence. The availability of data allows us to develop the best algorithms and, above all, to improve them over time so they produce better results and adapt to changing conditions. The availability of more and more autonomous driving data, as its fleet accumulates more and more kilometers, is what allows Tesla to reduce the number of disengagements, episodes in which the driver is forced to take control, to current levels: between October and November of 2016 alone, four of the company's autonomous vehicles travelled 885 km on California highways and experienced 182 of those moments, a starting point from which to keep improving with accumulated experience. In fact, Waymo, which has accumulated the data from all of Google's autonomous driving experiments, managed over the course of 2016 to bring its disengagements down from 0.8 per thousand miles to 0.2, an impressive progression fed, again, by the availability of data to process.
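
To make the metric concrete, here is a minimal sketch, in Python, of the arithmetic behind those figures: normalizing a raw disengagement count to the per-1,000-mile rate used in the California DMV reports. The numbers come straight from the paragraph above; the function itself is purely illustrative.

    KM_PER_MILE = 1.609344

    def disengagements_per_1000_miles(disengagements, km_driven):
        """Normalize a raw disengagement count to a per-1,000-mile rate."""
        miles = km_driven / KM_PER_MILE
        return disengagements / miles * 1000

    # Tesla's reported figures above: 182 disengagements over 885 km.
    tesla_rate = disengagements_per_1000_miles(182, 885)
    print(f"Tesla, Oct-Nov 2016: ~{tesla_rate:.0f} disengagements per 1,000 miles")

    # Waymo's published progression: 0.8 down to 0.2 per 1,000 miles in 2016,
    # i.e. a 4x improvement in a single year of accumulated driving data.
    print(f"Waymo improvement: {0.8 / 0.2:.0f}x fewer disengagements")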

The real mistake with artificial intelligence is to judge an algorithm by its results at the moment we obtain it, without taking into account the progress it can achieve as it gets more and better data. Writing a review of Amazon's Echo saying it is little more than a slightly enlightened clock radio is an attitude that forgets the fundamental point: with eight million devices on the market, Amazon's possibilities for improving Echo's intelligence are virtually unlimited, which means it will understand us better and better, gradually reduce its errors and become, without a doubt, a device we end up wondering how we ever lived without.

In what sport can we expect referees based on artificial intelligence to arrive first? In American football, of course, the classic example of a sport in which everything is quantified, analyzed and processed to the limit. Which insurance companies will be the first to access the savings and improvements of appraisals based on artificial intelligence? Those that have large amounts of data correctly stored and structured, ready to be processed and used to train the machine. Which academic institutions will be the first to take advantage of artificial intelligence in the educational process? Those that have complete records, properly structured and prepared for processing. And I can assure you that something that seems so basic is not something every institution I know has.

Understanding the evolution from data to machine learning and artificial intelligence is, for any manager, increasingly important, and for a company, more and more strategic. It is what will decide which companies end up on which side of the new digital divide.

The data revolution

It strikes me that the unofficial but omnipresent theme of this 2017 Mobile World Congress is, almost anywhere you care to look, the data revolution. In a very short time we have gone from developing business activities with a focus on making their approach as competitive as possible to pursuing a goal that, although obviously very related, is posed in a completely different way: orienting those activities toward generating as much data as possible.

Trying to interpret the historical arc of events like the MWC requires, on the one hand, having some gray hair to comb and, on the other, trying not to see everything through a prism of a single color. For someone who works in infrastructure, it is possible that everything they see from entering the MWC through Hall 1 until leaving through Hall 8 has to do with cloud computing, datacenter integration or 5G. For those who work in security, everything they see will surely be related to that aspect. Capturing the common element, that "next element" turned into the official theme of the event, requires a view of the whole, an abstraction taken from a few steps back. And my persistent impression is that this omnipresent theme is the reinterpretation of all business activity through the prism of data: data converted into the real gasoline that moves the business.

The first important announcement of the MWC was, without a doubt, Telefónica's "fourth platform", which reorients the whole company toward precisely that, the management of user data (very relevant in that sense is Chema Alonso's own post, in which he describes the approach and tries to clarify it, leaving conspiracy-minded readings aside): the digital transformation of an operator is absolutely necessary to avoid its total commoditization, and that transformation requires exquisite attention to data, so we will APIfy all our activity and everything will turn around that. Yes, the ultimate goal is for the services to be better and for customers not to want to leave... but all of it thanks to the generation and exploitation of data. The whole business, built around the data you generate as a user, and framed so that, with the appropriate rules and guarantees, you come to understand it as something positive, not sinister.

But in fact, it doesn't matter who you are: if you're Telefónica, fine, the case seems clear. But if you are a car brand – four years ago there was only one here, Ford, and this year they are already a crowd – the approach will be to reorient the entire user experience toward, again, the generation of data. What is the connected vehicle, as I mentioned yesterday in my talk at the SEAT stand? Simply a way of trying to improve an automotive company's set of products and services thanks to the data generated by a vehicle that stores and transmits everything we do with it. From that pioneering Tesla, which in 2013 decided to include total connectivity in the price of its vehicles through an agreement with AT&T (four years of practically unlimited connectivity with each vehicle sold), to yesterday, when the first big brand, Chevrolet, announced the same for $20/month, everything fits perfectly: a car is no longer a vehicle for moving from one point to another, but a huge computer on wheels turned into the maximum expression of mobile technology, and it therefore makes all the sense in the world to show it at an event like the MWC. Everything in the idea of the connected car points the same way: constant generation of data in order to make the user experience infinitely more versatile, to go from selling a product to selling a complete solution that includes everything, based on the exploitation of the data the user generates with their vehicle. Eventually, that user will stop taking an active part in the driving, or the vehicle will stop being theirs and become a login-based usage model similar to a Chromebook (any car becomes "your car", with your radio presets, your seat position, your driving parameters or your usual destinations in the GPS as soon as you identify yourself on entering), or we will see issues such as maintenance or insurance integrated into the price and experience of the vehicle; but all those possibilities will be fed by, and will make sense thanks to, the constant generation of data.

The data revolution and digital transformation express themselves with absolute clarity at the moment when, after walking the vastness of an MWC and returning to your hotel exhausted, you realize that everything, practically everything you have seen, had that common thread. If something is going to change in the next few years, it will be precisely that: the orientation of all business activity toward the generation and exploitation of data, toward its constant analysis by all kinds of techniques.

Wednesday, April 12, 2017

ICTs turn to the explosion of data and the post-PC era, in Cinco Días

Marimar Jiménez, of Cinco Días, called me to ask for some impressions about the subjects that, in my opinion, will occupy the technology agenda of this 2012 now beginning, and published the piece yesterday, Friday, under the title "ICTs turn to the explosion of data and the post-PC era" (see in PDF). We talked about some of the topics I have recently discussed on the blog: big data and analytics, BYOD, socially-oriented corporate webs and retail innovations.

Below, the relevant part of my reply:

Analytics: after an initial phase of participation for many, we will enter the phase in which analysis becomes popularized. For some, this will be called big data: investment in massive systems, distributed or not, for complex analysis of data of all kinds in order to detect trends, plan actions or feed information into the company's CRM. In the systems departments of the most advanced companies in this regard, Hadoop will be a common topic of conversation. For others, it will simply mean incorporating more or less simple analytics tools for their web activity. But it will certainly mean an increase in activity in this area.
BYOD: companies will continue to consolidate the tendency to accept that employees choose their own devices, incorporating them into the company's information infrastructure whenever possible. The trend marks a whole new attitude toward corporate information architectures, and poses important challenges in terms of management, control, costs and security.
Socially-oriented corporate webs: 2012 will begin to mark the obsolescence of the old static corporate web, and we will see a significant increase in company pages that seek interaction, constant communication and engagement. What until now was simply a trend among media companies or those with a strong technological orientation will begin to consolidate in many other industries.
Innovations in retail: the massive popularization of the smartphone will very possibly be accompanied by a strong increase in its value proposition for retail, through experiences with technologies like NFC and related systems.

Sensorization and machine learning

The news of the day leaves little doubt: we are heading toward a future in which we will live completely surrounded by sensors of all kinds. The earphones in the photo are the latest development from SMS Audio, the company created by the rapper 50 Cent: based on Intel technology and designed to monitor physiological variables associated with physical exercise, a placement that might seem rather more natural for practicing sport than wearing a bracelet, a chest band or a wrist watch.

But the earphones are only a tiny piece of a huge puzzle that lies behind many of the recent developments and movements in the technology sector: yesterday also brought the announcement of Samsung's acquisition of SmartThings, two hundred million dollars that position the Korean giant in the world of home automation (lighting, humidity, locks... everything) and make millionaires of the founders of a company that began on Kickstarter. Clearly, the trend is for us to sensorize our bodies, our environment, our homes and our cars, even if it leaves us with no clear idea of who will be responsible when the information collected by these sensors triggers a bad decision.

Smart watches, bracelets for monitoring older people, new developments in batteries designed specifically for such devices... and a real flood of data produced every time we move, exercise or simply breathe. Data of all kinds, with possible uses both very imaginative and very dangerous, that will set new business rules and are calling into question even international agreements.

What do we do with so much data generated by so many sensors? We are already saturated, and we are only analyzing around 1% of the data generated. The logical thing – or almost the only thing – we can do is... put other machines to work analyzing it. Machine learning is proving to be the great frontier, the only way to extract a minimum of meaning from such constant data collection. Training an algorithm with data from 133,000 patients in four Chicago hospitals between 2006 and 2011 yielded diagnoses of emergency situations, such as cardiovascular or respiratory problems, four hours ahead of those made by physicians. A compilation of parameters from the patient's clinical history, combined with information about their age, family history and certain lab results, once analyzed by an algorithm, is likely to lead to a drastic reduction in deaths related to this type of situation, in which providing medical assistance a few hours earlier can prove vital.
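
To make the idea tangible, here is a minimal sketch, in Python with scikit-learn, of the kind of early-warning classifier described above: a model trained on clinical-history parameters that outputs a risk score hours before a clinician would intervene. The features, the synthetic data and the alert threshold are all hypothetical placeholders, not the Chicago study's actual methodology.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic stand-ins for clinical-history parameters: age, heart rate,
    # respiratory rate, systolic blood pressure, family-history flag.
    X = np.column_stack([
        rng.normal(60, 15, n),   # age
        rng.normal(80, 12, n),   # heart rate
        rng.normal(16, 3, n),    # respiratory rate
        rng.normal(120, 15, n),  # systolic blood pressure
        rng.integers(0, 2, n),   # family history (0/1)
    ])
    # Synthetic label: 1 = deterioration within the next four hours.
    risk = 0.02 * X[:, 0] + 0.03 * X[:, 1] + 0.1 * X[:, 2] - 0.01 * X[:, 3]
    y = (risk + rng.normal(0, 1, n) > np.percentile(risk, 90)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)

    # In a real deployment the interesting output is the risk score, which
    # can trigger an alert well before a clinician would otherwise act.
    scores = model.predict_proba(X_test)[:, 1]
    print(f"Patients flagged for early attention: {(scores > 0.5).sum()}")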

We are definitely experiencing a sensorization boom. But the next, logical and even essential step is going to be the development of tools so that the immense amount of data generated by these sensors can be analyzed with a minimum of rigor. A very interesting scenario, with brutal potential, in which we will certainly see some important movements soon...



(This article is also available in English on my Medium page, "Sensorization and Machine learning".)

Tuesday, April 11, 2017

Do you have an information management strategy?

The progressive digitization of our environment has led to the generation of a huge amount of data about our habits, uses, customs and actions of all kinds. On the net, it is clear that everything we do – the pages we visit, the clicks that direct our browsing, our purchases, etc. – is collected in a log file and associated either with our identity, if we have gone through a login process, or with a system that preserves the session across different actions, such as cookies or digital fingerprinting.
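
As a minimal sketch of what such collection looks like, the following Python snippet builds the kind of log entry just described, keyed either to an authenticated user ID or to a session cookie; the field names are hypothetical, not any particular platform's schema.

    import json
    import time
    import uuid
    from typing import Optional

    def log_event(action: str, url: str, user_id: Optional[str] = None,
                  session_cookie: Optional[str] = None) -> str:
        """Build one clickstream log line, tied to identity or session."""
        entry = {
            "ts": time.time(),
            "action": action,    # e.g. "pageview", "click", "purchase"
            "url": url,
            "user_id": user_id,  # set when the user has logged in
            # Without a login, a cookie (or fingerprint) preserves the session.
            "session": session_cookie or str(uuid.uuid4()),
        }
        return json.dumps(entry)

    print(log_event("pageview", "/products/42", session_cookie="a3f9c2"))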

But the constant generation of data is beginning to encompass much more than the time we spend in front of a screen. More and more people regularly – or even constantly – use devices that quantify variables ranging from location to the many parameters usually associated with physical activity. The simple use of a mobile phone, together with the "most common lie on the net" – the click with which we claim to have read an app's terms of service, something we usually do because they are rarely written in plain language, but rather in a legalese few can read fluently – can allow the app's developer to monitor sensors that measure everything from our location to the ambient noise level, temperature, movement along different axes (three-axis accelerometers and gyroscope), humidity, light or proximity to the body.

Devices such as Fitbit, Jawbone Up, Misfit Shine and the like make it possible to measure parameters such as the steps we take, the floors we climb, the activity we engage in or even, connected to other accessories such as a scale, our weight and fat percentage. A small device such as Scanadu Scout, held against our temples for ten seconds, can evaluate a variety of parameters such as body temperature, blood pressure, respiratory rate, blood oxygen level, pulse and stress level, and store all the readings in the corresponding application. Smartwatches, more and more common, can track vitals like body temperature, pulse, etc. At its latest developer conference, Apple, rumored to be on the verge of bringing to market an iWatch with a special focus on health, presented a platform for integrating the information generated by all our devices and wearables of every kind, so that it can be managed by physicians and other providers of health- and wellness-related services.

The smart home is another huge field of data generation: being able to control parameters like temperature, security, lighting or the contents of our pantry using devices such as Nest, Canary, Philips Hue, Amazon Dash and many others has a clear counterpart: allowing all this data to be managed by service providers in ways that, on many occasions, we never even imagined.

To develop their value proposition, many companies are beginning to consider exploiting the data their users generate. The idea may seem interesting and tempting: getting to know your customer can generate a sustainable competitive advantage, since it lets you offer your product or service with a level of adaptation the customer values, generating a positive bias in their choice and making it difficult for a competitor who knows your customer less well to match you. And new tools that dramatically reduce the entry barriers to sophisticated analytics and machine learning techniques are fueling the trend.

But the difference between the companies that do this kind of exploitation well and those that do it badly can become very noticeable. Hence, developing a data management strategy is fundamental: it is not a matter of accumulating useless data, let alone alienating customers by making them think we are the private equivalent, or even the foolish cousin, of an NSA watching their every movement.

What data do we really need? What is the minimum set of data we must generate, what must we obtain explicitly – by asking the customer – and what implicitly, deriving it from the use the customer makes of our products or services? What do we want this data for? Do we really intend to exploit it in order to offer the customer a better value proposition, or rather to harass and pursue them more efficiently, or to sell access to it to third parties whose intentions we are not clear about? What treatment do we intend to give this data? Are we going to be obscurantist, hiding from customers what we know about them, how we use it or who we share it with?
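
One way to make those questions operational – a sketch only, with hypothetical fields and purposes – is to maintain an explicit data inventory in which every field declares its source (explicit or implicit), its purpose and who it is shared with; anything without a purpose is a candidate to drop:

    from dataclasses import dataclass

    @dataclass
    class DataField:
        name: str
        source: str              # "explicit" (asked directly) or "implicit" (derived from use)
        purpose: str             # why we collect it; empty purpose flags data we should not keep
        shared_with: tuple = ()  # third parties, kept visible to the customer

    inventory = [
        DataField("email", "explicit", "account login and receipts"),
        DataField("shoe_size", "explicit", "fit recommendations"),
        DataField("visit_frequency", "implicit", "churn prediction"),
        DataField("location_history", "implicit", ""),  # no purpose: drop it
    ]

    # The minimum set is whatever survives the "what do we want it for?" test.
    for field in (f for f in inventory if f.purpose):
        print(f"{field.name:16} {field.source:9} -> {field.purpose}")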

Monday, April 10, 2017

Business, data and transparency

My Expansión column this week is titled "Business, data and transparency" (pdf), and it aims to convey an idea that for me is fundamental: it is not about how much data a company collects about us, but about variables like how it does so, the level of control it offers the user over that process, the clarity of the reasons for collecting the data, the transparency of the analyses carried out, and the final result the user or customer perceives at the end of the process. It is not so much about collecting data as about doing it well and being respectful.

The paradoxes are clear: I can think of companies that, although they know far more about me than I could ever know about myself, only generate as a side effect advertising better adapted to my interests, something I perceive in principle as positive; companies that, moreover, let me decide at every moment which data I want to keep and which I want to delete, and offer me the tools to do it myself in three mouse clicks. And I can think of other companies to which I once gave some data and which, from then on and for having done so, got five hundred other companies to call me at dinnertime to pester me with products and services that do not interest me. This management of customer data and information goes far beyond ARCO rights and legal norms, and increasingly separates last century's companies from this century's.

Below, the full text of the column:
 
Business, data and transparency

What do companies know about us? Every day we produce more information, and companies try to capture and analyze it. Tastes, feelings, tendencies, obtained from the information we publish on social networks. We are "signed up" on so many sites that knowing everything big data is capable of analyzing about us at any given moment is becoming more and more complex.

The answer is not to stop using tools that offer very important value propositions in our contact with people or our access to information. On the contrary: what we as users must demand is clarity and transparency.

That a company collects data about us can be reasonable, if done right. And what does doing it right mean in this context? Simply that, as a user, I can know at every moment what data the company holds about me, what it is doing with it, and what results it intends to obtain.

When we think about it, the results are surprising: it turns out that the amount of data is not what worries us most, but the use made of it. A company can get to know us better than we know ourselves, but what we really need to worry about is the consequences of that knowledge. If it is going to be used to pursue us harder, to overwhelm us, or to sell data to third parties so that we lose control of its use, we will – reasonably – avoid it. If, on the contrary, the result of knowing us better is that the company offers better products, in better conditions, or better adapted to our tastes, it is more likely that we will agree.

It is not the data: it is the clear and unmistakable will to let us understand what happens to it and what it is used for. The keyword? Transparency.