The coming revolution in insurance


Technological change and competition disrupt a complacent industry

IN THE stormy and ever-changing world of global finance, insurance has remained a relatively placid backwater. With the notable exception of AIG, an American insurer bailed out by the taxpayer in 2008, the industry rode out the financial crisis largely unscathed. Now, however, insurers face unprecedented competitive pressure owing to technological change. That pressure demands not just adaptation but transformation.

The essential product of insurance—protection, usually in the form of money, when things go wrong—has few obvious substitutes. Insurers have built huge customer bases as a result. Investment revenue has provided a reliable boost to profits. This easy life led to a complacent refusal to modernise. The industry is still astonishingly reliant on human labour. Underwriters look at data, but plenty still rely on human judgment to evaluate risks and set premiums. Claims are often reviewed manually.

The march of automation and technology is an opportunity for new entrants. Although starting a new soup-to-nuts insurer from scratch is rare (see article), many companies are taking aim at parts of the insurance process. Two Sigma, a large American “quant” hedge fund, for example, is betting its number-crunching algorithms can gauge risks and set prices for insurance better and faster than any human could. Other upstarts have developed alternative sales channels. Simplesurance, a German firm, for example, has integrated product-warranty insurance into e-commerce sites.

Insurers are responding to technological disruption in a variety of ways. Two Sigma contributes its analytical prowess to a joint venture with Hamilton, a Bermudian insurer, and AIG, which actually issues the policies (currently only for small-business insurance in America). Allianz, a German insurer, simply bought into Simplesurance; many insurers have internal venture-capital arms for this purpose. A third approach is to try to foster internal innovation, as Aviva, a British insurer, has done by building a “digital garage” in Hoxton, a trendy part of London.

The biggest threat that incumbents face is to their bottom line. Life insurers, reliant on investment returns to meet guaranteed payouts, have been stung by a prolonged period of low interest rates. The tough environment has accelerated a shift in life insurance towards products that pass more of the risk to investors. Standard Life, a British firm, for example, made the transition earlier than most and has long been primarily an asset manager (see article).

Meanwhile, providers of property-and-casualty (P&C) insurance, such as policies to protect cars or homes, have seen their pricing power come under relentless pressure, notably from price-comparison websites. In combination with the stubbornly high costs of maintaining their old systems, this has meant that profitability has steadily deteriorated. The American P&C industry, for instance, has seen its “combined ratio”, which expresses claims and costs as a percentage of premium revenue, steadily creep up from 96.2% in 2013 to 97.8% in 2015, and to an estimated 100.3% for 2016 (ie, a net underwriting loss). Henrik Naujoks of Bain & Company, a consultancy, says this has left such insurers facing a stark choice: become low-cost providers, or differentiate themselves through the services they provide.
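The arithmetic behind that squeeze is simple. A minimal Python sketch, with an invented split between claims and costs, reproduces the estimated 2016 figure:

```python
def combined_ratio(claims: float, costs: float, premiums: float) -> float:
    """Claims and costs as a percentage of premium revenue."""
    return 100 * (claims + costs) / premiums

# Illustrative split only: any mix of claims and costs totalling 100.3
# per 100.0 of premium revenue yields the estimated 2016 figure.
print(combined_ratio(claims=75.3, costs=25.0, premiums=100.0))  # 100.3
# Anything above 100 means premiums no longer cover payouts and expenses:
# a net underwriting loss.
```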

One fairly simple way to offer distinctive services is to use existing data in new ways. Insurers have long drawn up worst-case scenarios to estimate the losses they would incur from, say, a natural catastrophe. But some have started working with clients and local authorities on preparing for such events; they are becoming, in effect, risk-prevention consultants. AXA, a French insurer, has recently started using its models on the flooding of the Seine to prepare contingency plans. Gaëlle Olivier of AXA’s P&C unit says the plans proved helpful in responding to floods in June 2016, reducing the damage.

Damage control

Tech-savvy insurers are going one step further, exploiting entirely new sources of data. Some are using sensors to track everything from boiler temperatures to health data to driving styles, and then offering policies with pricing and coverage calibrated accordingly. Data from sensors also open the door to offering new kinds of risk-prevention services. As part of Aviva’s partnership with HomeServe, a British home-services company, the insurer pays to have a sensor (“LeakBot”) installed on its customers’ incoming water pipes that can detect even minuscule leaks. HomeServe can then repair these before a pipe floods a home, causing serious damage.
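The article does not say how LeakBot detects leaks, so the following is only a sketch of one plausible flow-based approach: a supply pipe that never goes fully idle, even overnight, hints at a slow leak. The threshold, sampling scheme, and readings are all invented.

```python
def leak_suspected(hourly_flow_lph, idle_threshold_lph=0.5, window_hours=24):
    """Flag a possible leak if flow never drops near zero across the window."""
    recent = hourly_flow_lph[-window_hours:]   # litres-per-hour samples
    return min(recent) > idle_threshold_lph

# A household that never stops drawing water, even at 4 a.m., merits a repair visit.
overnight = [3.1, 2.8, 0.9, 0.7, 0.8, 0.9] * 4
print(leak_suspected(overnight))  # True
```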

The shift towards providing more services fosters competition on factors beyond price. Porto Seguro, a Brazilian insurer, offers services ranging from roadside assistance to scheduling doctor’s appointments. In France AXA provides coverage for users of BlaBlaCar, a long-distance ride-sharing app. The main aim of the policy is to guarantee that customers can still reach their destination. If, say, the car breaks down, it offers services ranging from roadside car-repair to alternative transport (eg, calling a taxi).

Insurers face many hurdles, however, to becoming service providers and risk consultants. Maurice Tulloch, head of the general-insurance arm of Aviva, admits that such services are yet to catch on with most customers. So far, his firm, like its peers, has focused on enticing them to adopt the new offerings by cutting insurance premiums, rather than on making money directly from them. It reckons it can recoup the cost of, say, the HomeServe sensors and repairs from the reduction in claims.

One example of what the future may hold comes from the car industry. Carmakers have traditionally bought product-liability insurance to cover manufacturing defects. But Volvo and Mercedes are so confident in their self-driving cars that last year they said they would not buy insurance at all. They will “self-insure”—ie, directly bear any losses from crashes.

Some think that such trends threaten the very existence of insurance. Even if they do not, Bain’s Mr Naujoks is not alone in expecting the next five years to bring more change to the insurance industry than he has seen in the past 20.

This article appeared in the Finance and economics section of the print edition under the headline “Counsel of protection”

In Its New Factory, Impossible Foods Will Make 12 Million Pounds Of Plant-Based Burgers A Year


The exciting new plant-based burger that bleeds like real meat is scaling up to reach more customers: It wants to be in 1,000 restaurants by the end of the year.

Fast Company

Inside a former baked-goods factory near the Oakland airport, a construction crew is installing giant vats that will soon be used to scale up production of the Impossible Burger–a plant-based meat designed to look and taste good enough to win over meat eaters, not just vegetarians.

The “meat,” developed by a team led by former Stanford biochemistry professor Patrick Brown, is currently being produced in a 10,000-square-foot pilot facility in Silicon Valley and a 1,500-square-foot space in New Jersey. The new facility, at around 60,000 square feet, will dramatically scale up production capacity. When the factory is fully ramped up, it will be able to produce at least 1 million pounds of Impossible Burger meat a month, or 250 times more than today.
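Those capacity figures are internally consistent, as a quick check confirms:

```python
planned_monthly_lbs = 1_000_000                  # new factory, fully ramped up
current_monthly_lbs = planned_monthly_lbs / 250  # "250 times more than today"
print(current_monthly_lbs)        # 4,000 lbs/month across today's pilot sites
print(planned_monthly_lbs * 12)   # 12,000,000 lbs/year, the headline figure
```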

“It will enable us to go from something that is scarce–and we’re constantly getting complaints from customers about the fact that they can’t buy them at their local restaurant–and start to make it ubiquitous,” Brown said at an event launching the new factory.

The burger is currently available at 11 restaurants, including three that launched it on March 23. But by the end of the year, the company expects to supply 1,000 restaurants. It just signed a deal to have the burgers featured in the San Francisco Giants’ baseball stadium.

For the company, achieving scale is a critical part of achieving its mission. Brown started working on the project while thinking about the problem of climate change; raising cows and other animals for meat is one of the world’s largest sources of greenhouse gases. It also uses and pollutes more water than any other industry, and drives deforestation. But he realized that the majority of the world wouldn’t voluntarily go vegetarian for those reasons.

“Billions of people around the world who love meat are not going to stop demanding it, so we just have to find a better way to produce it,” he says.


The team studied the properties of meat–particularly heme, the molecule that makes blood red and gives meat a meaty taste–and then experimented with recreating those properties using only ingredients from plants.

“When you think about meat, there’s the muscle, there’s the connective tissue, there’s the fat, so we had to figure out how to mimic those parts of beef to figure out how to experience the texture, but also the taste,” Don DeMasi, senior vice president of engineering for Impossible Foods, tells Fast Company.

The result looks like it was made from a cow, not plants. The handful of chefs who were given first access to the product say they think of it as meat. “It kind of made this transition in my mind to be–it’s just another kind of meat,” says chef Traci Des Jardins, who has been serving Impossible burgers at her San Francisco restaurant Jardinière for about a year, and now is also serving it at Public House, her restaurant at the city’s ballpark.


Before it’s cooked, the product is red like raw beef; as it cooks, it browns. As the heme mixes with juices and oozes out, it can look like it’s bleeding. “You’re seeing the exact same cooking chemistry that you see in meat, literally,” says Brown.

As the company scales up its beef alternative, it will focus on restaurants. U.S. restaurants serve more than 5 billion pounds of burgers a year, the company says, and Impossible wants its 12 million pounds to be among them. Retail will come later, along with other products that are currently in development, such as poultry and steak.

“Our long-term goal is to basically develop a new and better way to create all the foods we make from animals,” says Brown.

YouTube Tops 1 Billion Hours of Video a Day, on Pace to Eclipse TV


Google unit posts 10-fold increase in viewership since 2012, boosted by algorithms personalizing user lineups

YouTube viewers world-wide are now watching more than 1 billion hours of videos a day, threatening to eclipse U.S. television viewership, a milestone fueled by the Google unit’s aggressive embrace of artificial intelligence to recommend videos.

YouTube surpassed the figure, which is far larger than previously reported, late last year. It represents a 10-fold increase since 2012, YouTube said, when it started building algorithms that tap user data to give each user personalized video lineups designed to keep them watching longer.

Feeding those recommendations is an unmatched collection of content: 400 hours of video are uploaded to YouTube each minute, or 65 years of video a day.
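Those two numbers line up, as a quick conversion shows:

```python
hours_per_minute = 400
hours_per_day = hours_per_minute * 60 * 24  # 576,000 hours of new video daily
years_per_day = hours_per_day / (24 * 365)  # the same figure in years of viewing
print(f"{hours_per_day:,} hours/day = {years_per_day:.1f} years/day")
# ~65.8, which the article rounds to 65
```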

“The corpus of content continues to get richer and richer by the minute, and machine-learning algorithms do a better and better job of surfacing the content that an individual user likes,” YouTube Chief Product Officer Neal Mohan said.


YouTube’s billion-hour mark underscores the wide lead of the 12-year-old platform in online video—threatening traditional television, which lacks similarly sophisticated tools.

Facebook Inc. and Netflix Inc. said in January 2016 that users watch 100 million hours and 116 million hours, respectively, of video daily on their platforms. Nielsen data suggest Americans watch on average roughly 1.25 billion hours of live and recorded TV a day, a figure steadily dropping in recent years.

Despite its size, it is unclear whether YouTube is making money. Google parent Alphabet Inc. doesn’t disclose YouTube’s performance, but people familiar with its financials said it took in about $4 billion in revenue in 2014 and roughly broke even.

YouTube makes most of its money from running ads before videos, but it also spends big on technology and rights to content, including deals with TV networks for a planned web-TV service. When asked about profits last year, YouTube Chief Executive Susan Wojcicki said, “Growth is the priority.”

YouTube’s success using tailor-made video lineups illustrates how technology companies can reshape media consumption into narrow categories of interests, a trend some observers find worrying.

“If I only watch heavy-metal videos, of course it’s serving me more of those. But then I’m missing out on the diversity of culture that exists,” said Christo Wilson, a Northeastern University computer-science professor who studies the impact of algorithms. “The blessing and curse of cable and broadcast TV is it was a shared experience.…But that goes away if we each have personalized ecosystems.”

YouTube benefits from the enormous reach of Google, which handles about 93% of internet searches, according to market researcher StatCounter. Google embeds YouTube videos in search results and pre-installs the YouTube app on its Android software, which runs 88% of smartphones, according to Strategy Analytics.

That has helped drive new users to its platform. About 2 billion unique users now watch a YouTube video every 90 days, according to a former manager. In 2013, the last time YouTube disclosed its user base, it said it surpassed 1 billion monthly users. YouTube is now likely larger than the world’s biggest TV network, China Central Television, which has more than 1.2 billion viewers.

YouTube long configured video recommendations to boost total views, but that approach rewarded videos with misleading titles or preview images. To increase user engagement and retention, the company in early 2012 changed its algorithms to boost watch time instead. Immediately, clicks dropped nearly 20%, partly because users stuck with videos longer. Some executives and video creators objected.
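YouTube has not published the mechanics of the change, but its effect can be sketched in a few lines. The candidates, click probabilities, and watch-time estimates below are invented:

```python
# Re-rank candidates by expected watch time rather than click probability.
candidates = [
    {"title": "SHOCKING clip!!!",      "p_click": 0.20, "expected_minutes": 0.7},
    {"title": "30-minute documentary", "p_click": 0.08, "expected_minutes": 14.0},
]

by_clicks = max(candidates, key=lambda v: v["p_click"])
by_watch_time = max(candidates, key=lambda v: v["p_click"] * v["expected_minutes"])

print(by_clicks["title"])      # the clickbait wins under the old objective
print(by_watch_time["title"])  # the documentary wins once watch time is the goal
```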

Months later, YouTube executives unveiled a goal of 1 billion hours of watch time daily by the end of 2016. At the time, optimistic forecasts projected it would reach 400 million hours by then.

YouTube retooled its algorithms using a field of artificial intelligence called machine learning to parse massive databases of user history to improve video recommendations.


Previously, the algorithms recommended content largely based on what other users clicked after watching a particular video, the former manager said. Now their “understanding of what is in a video [and] what a person or group of people would like to watch has grown dramatically,” he said.

Engineers tested each change on a control group, and only kept the change if those users spent more time on YouTube.
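In outline, that gate is a simple comparison; production systems would add statistical-significance tests and guardrail metrics. The figures below are invented:

```python
from statistics import mean

control_minutes = [41.0, 38.5, 44.2, 40.1, 39.7]    # per-user daily watch time
treatment_minutes = [43.8, 42.1, 45.0, 41.9, 44.6]  # same metric with the change on

lift = mean(treatment_minutes) - mean(control_minutes)
print(f"lift: {lift:+.1f} min/user/day -> {'keep' if lift > 0 else 'revert'}")
```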

One strategy was to find new areas of user interest. For instance, YouTube could suggest a soccer video to users watching a lot of football, and then flood the lineup with more soccer if the first clip was a hit. “Once you realize there’s an additional preference, exploit that,” the former manager said.
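That explore-then-exploit pattern resembles a classic bandit strategy. Here is a hedged epsilon-greedy sketch, with invented topics and tallies; it is not a description of YouTube’s actual system:

```python
import random

def pick_topic(watch_minutes_by_topic, epsilon=0.1):
    """Mostly exploit the best-performing topic; occasionally explore another."""
    if random.random() < epsilon:
        return random.choice(list(watch_minutes_by_topic))  # explore
    return max(watch_minutes_by_topic, key=watch_minutes_by_topic.get)  # exploit

profile = {"football": 320.0, "soccer": 45.0, "chess": 0.0}  # minutes watched so far
print(pick_topic(profile))  # usually "football"; soccer climbs if its clips land
```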

But the algorithm didn’t always deliver. For instance, when Ms. Wojcicki joined as CEO in 2014, YouTube recommended videos to her about eczema because she had recently watched a clip about skin rashes after suspecting one of her children had one, said Cristos Goodrow, YouTube’s video-recommendation chief.

That made the video-recommendation team realize there were certain “single-use videos” to ignore as signals of user interest. But to flag them, the team had to enumerate each type by hand, such as certain health and how-to videos.

Then last year YouTube partnered with Google Brain, which develops deep neural networks, an advanced form of machine-learning software that has driven dramatic improvements in other fields, such as language translation. The Google Brain system was able to identify single-use video categories on its own.

Did You Hear That? Robots Are Learning The Subtle Sounds Of Mechanical Breakdown

Fast Company
By Steven Melendez, February 16, 2017

Expert mechanics can detect what’s wrong with a car without lifting the hood. So can deep learning robots, says startup 3DSignals.


Sometimes, machines just don’t sound right.

Brakes squeal, hard drives crunch, air conditioners rattle, and their owners know it’s time for a service call. But some of the most valuable machinery in the world often operates with nobody around to hear the mechanical breakdowns, from the chillers and pumps that drive big-building climate control systems to the massive turbines at hydroelectric power plants.

That’s why a number of startups are working to train computers to pick up on changes in the sounds, vibrations, heat emissions, and other signals that machines give off as they’re working or failing. The hope is that the computers can catch mechanical failures before they happen, saving on repair costs and reducing downtime.

“We’re developing an expert mechanic’s brain that identifies exactly what is happening to a machine by the way that it sounds,” says Amnon Shenfeld, founder and CEO of 3DSignals, a startup based in Kfar Saba, Israel, that is using machine learning to train computers to listen to machinery and diagnose problems at facilities like hydroelectric plants and steel mills.

And while most efforts currently focus on large-scale machinery, Shenfeld says the same sort of technology might one day help detect failures in home appliances or in devices like self-driving cars or rental vehicles that don’t spend much time in the hands of an owner who’s used to their usual sounds.

“When you have car-as-a-service, the person in the car doesn’t know the car,” he says. “If it sounds strange, you’re losing this link with the human in the car deciding it’s time to take it to the mechanic.”

Initially, 3DSignals’ systems can detect anomalous sounds based on physical modeling of particular types of equipment, notifying an expert mechanic to diagnose the problem. And once the problem is fixed, the mechanic’s diagnosis is added to 3DSignals’ database, which it uses to train its algorithms to not only detect unusual sounds, but also interpret them to understand what kind of repair is needed.

“The next time we hit this signature on the same machine for the same customer or another customer using the same type of machine, it will not just be anomaly detection,” says Shenfeld.
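3DSignals has not published its algorithms, but the two-stage pipeline described above can be sketched: flag sounds that drift from a machine’s normal spectral baseline, then match flagged signatures against diagnoses mechanics have already confirmed. Everything here, from the band count to the threshold, is illustrative.

```python
import numpy as np

def spectral_signature(audio, n_bands=32):
    """Summarize a clip as normalized energy across frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    sig = np.array([band.mean() for band in bands])
    return sig / (sig.sum() + 1e-12)  # normalize away loudness

def diagnose(audio, baseline_sig, labeled_sigs, anomaly_threshold=0.1):
    """Stage 1: anomaly detection. Stage 2: nearest confirmed failure signature."""
    sig = spectral_signature(audio)
    if np.abs(sig - baseline_sig).sum() < anomaly_threshold:
        return "normal"
    # labeled_sigs, e.g. {"worn bearing": sig1, "cavitation": sig2}, grows as
    # mechanics confirm each diagnosis, as described above.
    return min(labeled_sigs, key=lambda lbl: np.abs(sig - labeled_sigs[lbl]).sum())
```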

And while 3DSignals focuses entirely on using its machine learning tools to process acoustic data—an area Shenfeld says is surprisingly neglected outside of speech recognition—other companies are using a variety of other types of data to detect and diagnose mechanical problems.


Systems from Augury, a startup with offices in New York and Haifa, Israel, monitor vibrations, temperature, ultrasound, and electromagnetic emissions from machinery, typically large-scale heating and cooling systems. The company offers a portable tool a technician can use to capture a reading on an iPhone or Android device, where an app offers a real-time diagnosis, as well as a continuous monitoring system, says cofounder and CEO Saar Yoskovitz.

“One of our customers, located in Ohio, had a thunderstorm at 2 a.m., and the whole building went dark,” he says. “The generator was supposed to pick up automatically but it didn’t, and the way that they learned about it was through an alert they got from Augury.”

In more complicated situations, the system uses machine learning-based algorithms and a growing library of signals of different types of errors to suggest what’s wrong and what technicians can do to repair the machine, he says.

“Over time, we’ve collected probably the largest malfunction dictionary in the world for our types of machines,” says Yoskovitz.

For another startup, called Presenso, also based in Haifa, the exact type of data being monitored is less important than how it changes over time and what that signals about the machine’s operation. The company’s systems record data from sensors already installed in industrial equipment as part of existing control processes, streaming readings to Presenso’s cloud servers.

“We don’t care, or we don’t need to know, if the sensor we’re analyzing now measures temperature or voltage or flow,” says CEO and cofounder Eitan Vesely.

Presenso’s software tools use a variety of machine learning techniques to effectively build a model of a machine’s operations based on the sensor data it receives. The tools provide visualizations of unusual readings, and how different they are from normal operations.

“They don’t need any human guidance or [to] know what the physical attributes are that are being measured,” Vesely says. “The goal is for them to learn by themselves how the machine operates.”
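In the same spirit, a minimal sensor-agnostic detector can be written without knowing what any channel measures: fit a model of normal joint behavior, then score new readings by their distance from it. This is an illustration of the idea, not Presenso’s product:

```python
import numpy as np

class NormalOperationModel:
    """Learn 'normal' from unlabeled sensor history; score deviation from it."""
    def fit(self, readings):   # rows: time steps, cols: any sensor channels
        self.mean = readings.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(readings, rowvar=False))
        return self

    def score(self, reading):  # Mahalanobis distance from "normal"
        delta = reading - self.mean
        return float(np.sqrt(delta @ self.cov_inv @ delta))

history = np.random.default_rng(0).normal(size=(1000, 3))  # three unnamed channels
model = NormalOperationModel().fit(history)
print(model.score(np.array([0.1, -0.2, 0.0])))  # small: looks like normal operation
print(model.score(np.array([6.0, 5.5, -4.0])))  # large: flag for attention
```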

And while real-time breakdown detection is obviously useful for companies operating machines in their own facilities, experts say it could also be useful to equipment vendors or insurance companies providing financial coverage in the case of downtime.

“If a company has downtime or business interruption insurance, that’s something where it becomes incredibly relevant and also a money saver,” says Ryan Martin, a senior analyst at ABI Research.

Having more reliable predictions of when machines need maintenance could also spur equipment makers to effectively offer uptime as a service, selling industrial users extended warranties or even charging by the hour for the guaranteed use of their equipment, without incurring too much risk themselves, says Augury’s Yoskovitz.

“One of the problems with this business model is they hold all the risk,” he says. “If anything goes wrong, they pay for it, and they’re looking for ways to lower this risk.”

Helper Robots Develop Skills and Charm


At CES 2017, robots gain physical skills and improve emotional connections to humans

LG Electronics Inc. Vice President David VanderWaal unveiled the Airport Guide Robot at the 2017 Consumer Electronics Show in Las Vegas on Wednesday. PHOTO: ASSOCIATED PRESS

Robots that play chess, provide directions, go out for groceries—these are the helpful robots on display at this year’s CES tech trade show in Las Vegas. And while they don’t resemble humans, exactly, many are designed to accomplish their tasks with digital eyes, turning heads, waving arms and grabbing hands.

“We’re on the cusp of a robotic moment,” said J.P. Gownder, a robotics-focused senior analyst and vice president at the Forrester research firm. Not only are robots developing the physical skills necessary to carry out domestic tasks, they’re also gaining emotional appeal, so that humans accept them as companions, not just appliances.

While that quintessential humanoid domestic servant may yet be 20 years away, he said, what’s on display at the show is “a significant step toward the realization of robots that have been in our popular culture for 80 years or so.”

The C-3PO model—named after the chatty Star Wars droid—would be a single, humanlike robot that is good at many different things. That is not what this year’s show delivers: “The robots we’re seeing at CES are good at specific tasks instead of a little bit of everything,” he said. But many developers focusing on individual skills could bring down costs for robotics technologies overall.


One of the big themes of CES 2017 is building intelligence into everyday objects, like ovens, refrigerators and other appliances. While a robot, in the end, may just be another device, the fact that it is designed to feel more like a companion will pay off, Mr. Gownder said.

“Having these humanlike features communicates something primal to the user, and when done properly it can make robots more useful and better because we have a connection to them.”

Here are a few robots at CES that demonstrate both skills and charm:

MoRo, a human-size assistant robot

MoRo is a nearly 4-foot tall robot that can help grab things at home or in the grocery store. PHOTO: BEIJING EWAYBOT TECHNOLOGY LLC

MoRo is designed to run errands indoors and outdoors, as long as it isn’t raining. It has two six-joint arms with three fingers at each end. The arms move up and down its sides so it can grab things on the ground or from a not-too-high shelf. Its maker, Beijing Ewaybot Technology LLC, says it uses a mixture of cameras, sensors and algorithms to apply the correct amount of force for picking up either a piece of tissue or a can. Voice controlled—and capable of talking back—it stands nearly 4 feet tall and weighs 77 pounds. MoRo is set to go on sale in May for $30,000.

ITRI’s chess-playing Intelligent Vision System robot

The ITRI Intelligent Vision System robot uses cameras and sensors to play chess and pour coffee. PHOTO: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE

A robot that can play chess with humans while also pouring them a cup of coffee: That is what research lab ITRI Taiwan brought to CES to demonstrate its Intelligent Vision System. Cameras help the robot recognize the chess pieces and cups, and perceive depth so that it can reach for the right piece without knocking over another, or pick up the coffee without tipping it. ITRI says it plans to license its technology, so while you may see these sort of abilities in other robots, you won’t be able to buy a robotic chess partner from the lab itself.

Yumii Cutii, a robot friend for seniors

Yumii’s Cutii robot in action at the CES 2017 trade show. PHOTO: WILSON ROTHMAN/THE WALL STREET JOURNAL

The Cutii is built to “make you feel good,” says Yumii, the French startup making the robot. It is designed to help senior citizens continue living at home instead of going into an eldercare facility. Its LCD-screen head can bob up and down, to show off its smiling face and host video calls from friends, family and doctors. It can recognize the faces of its human companions. Since it is voice controlled, it can ask them if they want to take part in pre-determined activities like visiting a museum or taking a yoga class—then listen for a response.

Hease Robotics Heasy hospitality robot

The Heasy uses a swiveling set of eyes to express emotion. PHOTO: HEASE ROBOTICS

Need to find your way around a hotel resort? Looking for a specific item in a mall? Trying to find departure times at an airport? Heasy is a robot built to do all of that, and provide any other information you might need. It is a robot made for commercial use, and it offers a bit of personality thanks to a swiveling pair of eyes. Hease, the French startup behind it, calls Heasy a “robot as a service.” It is set to go on sale to businesses later this year.

LG Airport Guide Robot

LG’s Airport Guide Robot can escort people to their gate. PHOTO: JOHN LOCHER/ASSOCIATED PRESS

Korean electronics giant LG Electronics Inc. showed off a similar robot of its own at CES. The aptly named Airport Guide Robot is set to arrive in Seoul’s Incheon International Airport later this year. When it does, it’ll be able to answer questions in English, Chinese, Japanese and Korean. It can scan a passenger’s ticket to offer boarding times, gate locations and directions inside the airport, with estimated distances and walking times. If needed, it can escort lost or late travelers to their gates, LG said.

With Mastercard’s Identity Check, your face is the latest form of biometric identification.



Selfies are no longer just a social media irritant: With Mastercard’s Identity Check, your face is the latest form of biometric identification. In October, Mastercard rolled out the new mobile-payment ID feature, which allows e-commerce customers to verify themselves at checkout simply by capturing an image of their face on their phone. “One of the biggest consumer pain points is when you are prompted for a password,” says Ajay Bhalla, who oversees Mastercard’s security efforts. “That leads to a lot of friction: unhappy consumers, merchants losing sales, banks losing volume.”

Two years in the making, Identity Check works much like the iPhone’s fingerprint ID system. Users set it up in advance by taking a series of photos from several different angles. That info is then turned into a biometric template, which Identity Check can match against a fresh picture at the moment of transaction. “Biometrics is actually safer [than a password] because it is really you,” says Bhalla.
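Mastercard has not published Identity Check’s internals, but template-based face verification generally has the shape below. `embed_face` is a hypothetical stand-in for a trained model that maps a photo to a feature vector; the threshold is invented.

```python
import numpy as np

def enroll(photos, embed_face):
    """Average embeddings of several angles into one stored template."""
    return np.mean([embed_face(p) for p in photos], axis=0)

def verify(photo, template, embed_face, threshold=0.8):
    """Approve the checkout only if the fresh selfie matches the template."""
    e = embed_face(photo)
    cosine = e @ template / (np.linalg.norm(e) * np.linalg.norm(template) + 1e-12)
    return cosine >= threshold
```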

Shoppers weary of passwords might be excited to hear the news, but they should be careful not to get too happy. “The biggest issue we face is that people naturally start smiling when they take a picture,” says Bhalla, “and that messes with the algorithm.”

Mastercard is also exploring other new methods for identifying customers, such as iris scanning, voice recognition, and even heartbeat monitoring. Blood-pumping patterns, it turns out, are as unique as fingerprints. —Nikita Richardson

Milestones: In September, Mastercard launched a program to help developers integrate their services into new technologies such as virtual reality, augmented reality, and the internet of things.

Challenges: Mastercard has been hit with a $19 billion class-action lawsuit over allegations that the company charged illegal fees to U.K. vendors, leading to higher prices for consumers. (“[We] firmly disagree with the basis of this claim and we intend to oppose it,” says a Mastercard spokesperson.)


The age of analytics: Competing in a data-driven world


McKinsey Global Institute

By Nicolaus Henke, Jacques Bughin, Michael Chui, James Manyika, Tamim Saleh, Bill Wiseman, and Guru Sethupathy

Big data’s potential just keeps growing. Taking full advantage means companies must incorporate analytics into their strategic vision and use it to make better, faster decisions.

Is big data all hype? To the contrary: earlier research may have given only a partial view of the ultimate impact. A new report from the McKinsey Global Institute (MGI), The age of analytics: Competing in a data-driven world, suggests that the range of applications and opportunities has grown and will continue to expand. Given rapid technological advances, the question for companies now is how to integrate new capabilities into their operations and strategies—and position themselves in a world where analytics can upend entire industries.

Big data continues to grow; if anything, earlier estimates understated its potential.

A 2011 MGI report highlighted the transformational potential of big data. Five years later, we remain convinced that this potential has not been oversold. In fact, the convergence of several technology trends is accelerating progress. The volume of data continues to double every three years as information pours in from digital platforms, wireless sensors, virtual-reality applications, and billions of mobile phones. Data-storage capacity has increased, while its cost has plummeted. Data scientists now have unprecedented computing power at their disposal, and they are devising algorithms that are ever more sophisticated.

Earlier, we estimated the potential for big data and analytics to create value in five specific domains. Revisiting them today shows uneven progress and a great deal of that value still on the table (exhibit). The greatest advances have occurred in location-based services and in US retail, both areas with competitors that are digital natives. In contrast, manufacturing, the EU public sector, and healthcare have captured less than 30 percent of the potential value we highlighted five years ago. And new opportunities have arisen since 2011, further widening the gap between the leaders and laggards.

Exhibit: Progress in capturing value from data and analytics has been uneven.

Leading companies are using their capabilities not only to improve their core operations but also to launch entirely new business models. The network effects of digital platforms are creating a winner-take-most situation in some markets. The leading firms have remarkably deep analytical talent taking on various problems—and they are actively looking for ways to enter other industries. These companies can take advantage of their scale and data insights to add new business lines, and those expansions are increasingly blurring traditional sector boundaries.

Where digital natives were built for analytics, legacy companies have to do the hard work of overhauling or changing existing systems. Adapting to an era of data-driven decision making is not always a simple proposition. Some companies have invested heavily in technology but have not yet changed their organizations so they can make the most of these investments. Many are struggling to develop the talent, business processes, and organizational muscle to capture real value from analytics.

The first challenge is incorporating data and analytics into a core strategic vision. The next step is developing the right business processes and building capabilities, including both data infrastructure and talent. It is not enough simply to layer powerful technology systems on top of existing business operations. All these aspects of transformation need to come together to realize the full potential of data and analytics. The challenges incumbents face in pulling this off are precisely why much of the value we highlighted in 2011 is still unclaimed.

The urgency for incumbents is growing, since leaders are staking out large advantages, and hesitating increases the risk of being disrupted. Disruption is already happening, and it takes multiple forms. Introducing new types of data sets (“orthogonal data”) can confer a competitive advantage, for instance, while massive integration capabilities can break through organizational silos, enabling new insights and models. Hyperscale digital platforms can match buyers and sellers in real time, transforming inefficient markets. Granular data can be used to personalize products and services—including, most intriguingly, healthcare. New analytical techniques can fuel discovery and innovation. Above all, businesses no longer have to go on gut instinct; they can use data and analytics to make faster decisions and more accurate forecasts supported by a mountain of evidence.

The next generation of tools could unleash even bigger changes. New machine-learning and deep-learning capabilities have an enormous variety of applications that stretch into many sectors of the economy. Systems enabled by machine learning can provide customer service, manage logistics, analyze medical records, or even write news stories.

These technologies could generate productivity gains and an improved quality of life, but they carry the risk of causing job losses and dislocations. Previous MGI research found that 45 percent of work activities could be automated using current technologies; some 80 percent of that is attributable to existing machine-learning capabilities. Breakthroughs in natural-language processing could expand that impact.

Data and analytics are already shaking up multiple industries, and the effects will only become more pronounced as adoption reaches critical mass—and as machines gain unprecedented capabilities to solve problems and understand language. Organizations that can harness these capabilities effectively will be able to create significant value and differentiate themselves, while others will find themselves increasingly at a disadvantage.

A survival plan for bricks-and-mortar Starbucks


Food Business News

by Jeff Gelski

The Starbucks stores need to remain a place where people go to seek out human interaction.

SEATTLE — Web sites, e-commerce and what he described as the Amazon effect could lead both large and small companies to close retail stores in the coming years, said Howard Schultz, chairman and chief executive officer of Starbucks Coffee Co.

Howard Schultz, chairman and c.e.o. of Starbucks

“There is no doubt that over the next five years or so we are going to see a dramatic level of retailers not be able to sustain their level of core business as a traditional bricks-and-mortar retailer, and their omni-channel approach is not going to be sustainable to maintain their cost of their infrastructure, and as a result of that, there is going to be a tremendous amount of changes with regard to the retail landscape,” he said in a Nov. 3 earnings call.

Mr. Schultz said Starbucks stores have an advantage in that they maintain a special place in terms of a sense of community, an environment where people go to seek out human interaction. Starbucks could be in a unique position 5 to 10 years from now. Other retail stores closing could mean fewer stores competing for Starbucks’ customers, he said.

“I’m not talking about the coffee category,” Mr. Schultz said. “I am talking overall, but we are in the very, very early stages of a tremendous change in the bricks-and-mortar footprint of retailers domestically and internationally as a result of the sea-change in how people are buying things, and that is going to have I think a negative effect on all of retail, but we believe that it is going to have ultimately a positive effect on the position that we occupy and the environment that we create in our stores.”


New Starbucks roastery stores will be designed to enhance the consumer experience. The Seattle roastery, the only one in operation right now, delivered a comp sales increase of 24% in the fiscal year ended Oct. 2, Mr. Schultz said. A roastery in Shanghai, China, should begin operations next year.

“Opening in late 2017 on Nanjing Road among the busiest shopping destinations in the world, the Starbucks Shanghai roastery will be a stunning two-level, 30,000-square-foot experiential destination showcasing the newest coffee brewing methods and offering customers the finest assortment of exclusive micro-lot coffees from around the world in an immersive all-sensory experience emblematic of our Seattle roastery, respectfully curated through a unique lens that will make it highly impactful and relevant to our Chinese customers,” he said.

Starbucks plans to open roasteries in New York and Tokyo in 2018. A roastery should open in Europe in 2019, but Starbucks has yet to select a city.


Starbucks has a digital presence as well. Mobile orders now represent 6% of transactions, said Kevin Johnson, president and chief operating officer.

“We are continuously improving the mobile order and pay experience with newly released functionality that presents our personalized offer directly on the front screen of the mobile app and allows the customer to save favorite stores, favorite customers’ beverages, and we have new features in the pipeline to be released shortly, including real-time personalized product suggestions and the ability to save favorite orders, and there is more coming,” he said.

Starbucks executives discussed results of the 2016 fiscal year in the Nov. 3 earnings call. Net earnings attributable to Starbucks in the year ended Oct. 2 were $2,817.7 million, equal to $1.90 per share on the common stock, which was up 2.2% from $2,757.4 million, or $1.82 per share, in the previous fiscal year. Consolidated net revenues grew 11% to $21,315.9 million from $19,162.7 million in the previous fiscal year. The 2016 fiscal year contained 53 weeks compared to 52 weeks for the previous fiscal year.
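The reported figures hang together, as a quick check shows:

```python
net_earnings, prior_earnings = 2_817.7, 2_757.4  # $ millions
revenue, prior_revenue = 21_315.9, 19_162.7      # $ millions

print(f"earnings growth: {100 * (net_earnings / prior_earnings - 1):.1f}%")  # 2.2%
print(f"revenue growth:  {100 * (revenue / prior_revenue - 1):.1f}%")        # 11.2%
print(f"implied shares:  {net_earnings / 1.90:,.0f}M at $1.90 EPS")          # 1,483M
```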

The 21st Century Corporation, Disrupted: How to Master Change


Here’s what chipmaker Nvidia can teach us.

Go to any business conference and you’ll hear people talking about transformation. For a lot of companies it’s a matter of life and death. Their legacy business is fading, and they need to become something new. The problem is that most companies, like most people, aren’t good at change.

Some, though, are amazing at it. Like those Hasbro toys that start out as robots but can be bent into the shape of cars, these companies start out doing one thing, then—poof!—flip a few switches and become something else.

Consider Nvidia, a chip company in Santa Clara, Calif., that started out 23 years ago and first found success in the somewhat humble business of making graphics boards that videogame fans used to soup up the performance of their PCs. Then, in 2006, Nvidia figured out that its graphics chips could be hooked together to make a supercomputer. Today its graphics processors power many of the brawniest computers in the world, and they will be used in two next-generation supercomputers being designed by U.S. Department of Energy labs. That line of business generates $150 million a year for Nvidia.

But now the chipmaker has spotted a market that could be its biggest opportunity yet: self-driving cars. To make a vehicle autonomous, you need to gather massive streams of data from loads of sensors and cameras and process that data on the fly so that the car can “see” what’s around it. Turns out Nvidia’s graphics chips are great for that. So far, 80 companies, including Volvo, Audi, and Tesla, are using Nvidia technology in their research around autonomous vehicles. “We’re transforming into an artificial-intelligence company,” says Danny Shapiro, a senior director in Nvidia’s automotive group.

Why has Nvidia been such a natural quick-change artist? Well, it turns out that it and other “transformers” have a few traits in common:

▸ Big ears. They listen to customers. Nvidia’s self-driving-car business grew out of a long-standing relationship with auto companies. Car guys used Nvidia chips for computer-aided design, then used Nvidia supercomputer chips to do crash simulations. When the car guys started thinking about autonomous vehicles, Nvidia leaped at the chance to help them solve the problem.

▸ An impatient boss. Transformer CEOs like change and will drive it down throughout the organization. Nvidia’s CEO, Jen-Hsun Huang, is an engineer and a chip designer. He cofounded Nvidia and still runs it like a startup.

▸ Active imaginations. Conventional companies try to find new uses for capabilities they already have. Transformers look at what the market needs and then go build it, hiring new people and/or taking people off other jobs.

▸ A brush with death. That’s not the case at Nvidia, but a close call with the corporate undertaker can sometimes provide a necessary spark. Think of Apple’s multiple rebirths under Steve Jobs.

▸ Finally: Yes, transformation is hard—but not changing can sometimes be fatal.

From Audi To Zoox, 17 Companies That Are Driving Change In The Car Industry


Along with GM, these innovators are finding ways to improve the basics of modern transportation.

Building a Tesla Model S

OLD-GUARD CAR MANUFACTURERS

Audi: In 2015, started test-driving an AI-laden prototype nicknamed “Jack” that lets drivers easily switch to autonomous mode via buttons on the wheel

BMW: Has promised an entirely autonomous car called iNext by 2021; BMW’s ReachNow car-sharing service launched in April in Seattle and expanded to Portland, Oregon, in September

Ford: Announced plans for fully autonomous car with no pedals or steering wheel by 2021; recently invested $75 million in California laser-sensor company Velodyne; bought San Francisco–based private bus service Chariot and plans to expand it

Volvo: Forged partnerships with Microsoft (will incorporate HoloLens augmented-reality technology into its cars) and Uber (which is planning to use Volvos as part of its self-driving test fleet in Pittsburgh); teamed up with safety-system-maker Autoliv to set up a new company focused on autonomous-driving software

TECH GIANTS

Alphabet: Launched self-piloting-car project back in 2009; testing retrofitted Lexus SUVs and its own adorable prototype vehicles in several locations; recently partnered with Fiat Chrysler to build self-driving minivans

Google’s self-driving prototype. [Photo: Brooks Kraft LLC/Getty Images]

Apple: Has invested $1 billion in Chinese ride-share company Didi Chuxing; reportedly rebooting its efforts to develop an Apple car; might also build a system to add autonomous features to preexisting vehicles

Baidu: Chinese search-engine company has teamed up with digital-graphics pioneer Nvidia to create a self-driving-vehicle system that uses 3-D maps in the cloud; is in the testing stage with several different self-driving-car prototypes, including one built with BMW

Tesla: After revolutionizing electric vehicles with the semi-autonomous Model S, will release more-affordable all-electric Model 3, possibly in late 2017; Model S vehicles were involved in a pair of high-profile fatal accidents

IN-DEMAND PROVIDERS

Didi Chuxing: Acquired Uber’s Chinese operations in August, ending a fierce rivalry for Chinese market

Lyft: Partnered with GM to start testing autonomous Chevy Bolt taxis within the next year

Uber: In September, began testing autonomous Ford Fusions in Pittsburgh—the first self-driving fleet available to the public in the U.S.

A fleet of Uber’s autonomous cars. [Photo: Angelo Merendino/Getty Images]

STARTUPS

Comma.ai: Andreessen Horowitz–backed company making an inexpensive kit that turns regular cars into semi-autonomous ones

Mobileye: Israeli software maker that had partnered with Tesla to provide chips and software, but the two companies ended their collaboration in the wake of a fatal accident in May (Tesla cars currently still use Mobileye chips); has teamed up with Delphi Automotive to build a self-driving system by 2019

NextEV: Shanghai-based electric-car innovator headed in the U.S. by former Cisco exec Padmasree Warrior; set to show off a high-performance all-electric sports-car prototype this year

Nutonomy: Born at MIT and backed by Ford, makes self-driving cars, software, and autonomous robots; started testing driverless taxis in Singapore this summer

Quanergy: Silicon Valley–based company developing light- and object-sensing technology for self-driving cars; boasts a $1.59 billion valuation thanks to investors Samsung and Delphi

Zoox: Palo Alto startup behind the Boz, a fully autonomous concept vehicle (still in the design phase) with inward-facing seats similar to a train car; company valued at around $1 billion

A version of this article appeared in the November 2016 issue of Fast Company magazine.