Category Archives: Collecting Data

A Survey of 3,000 Executives Reveals How Businesses Succeed with AI


AUGUST 28, 2017

The buzz over artificial intelligence (AI) has grown loud enough to penetrate the C-suites of organizations around the world, and for good reason. Investment in AI is growing and is increasingly coming from organizations outside the tech space. And AI success stories are becoming more numerous and diverse, from Amazon reaping operational efficiencies using its AI-powered Kiva warehouse robots, to GE keeping its industrial equipment running by leveraging AI for predictive maintenance.

While it’s clear that CEOs need to consider AI’s business implications, the technology’s nascence in business settings makes it less clear how to profitably employ it. Through a study of AI that included a survey of 3,073 executives and 160 case studies across 14 sectors and 10 countries, and through a separate digital research program, we have identified 10 key insights CEOs need to know to embark on a successful AI journey.

Don’t believe the hype: Not every business is using AI… yet. While investment in AI is heating up, corporate adoption of AI technologies is still lagging. Total investment (internal and external) in AI reached somewhere in the range of $26 billion to $39 billion in 2016, with external investment tripling since 2013. Despite this level of investment, however, AI adoption is in its infancy, with just 20% of our survey respondents using one or more AI technologies at scale or in a core part of their business, and only half of those using three or more. (Our results are weighted to reflect the relative economic importance of firms of different sizes. We include five categories of AI technology systems: robotics and autonomous vehicles, computer vision, language, virtual agents, and machine learning.)

For the moment, this is good news for those companies still experimenting or piloting AI (41%). Our results suggest there’s still time to climb the learning curve and compete using AI.

However, we are likely at a key inflection point of AI adoption. AI technologies like neural-based machine learning and natural language processing are beginning to mature and prove their value, quickly becoming centerpieces of AI technology suites among adopters. And we expect at least a portion of current AI piloters to fully integrate AI in the near term. Finally, adoption appears poised to spread, albeit at different rates, across sectors and domains. Telecom and financial services are poised to lead the way, with respondents in these sectors planning to increase their AI tech spend by more than 15% a year — seven percentage points higher than the cross-industry average — in the next three years.

Believe the hype that AI can potentially boost your top and bottom line. Thirty percent of early AI adopters in our survey — those using AI at scale or in core processes — say they’ve achieved revenue increases, leveraging AI in efforts to gain market share or expand their products and services. Furthermore, early AI adopters are 3.5 times more likely than others to say they expect to grow their profit margin by up to five points more than industry peers. While the question of correlation versus causation can be legitimately raised, a separate analysis uncovered some evidence that AI is already directly improving profits, with ROI on AI investment in the same range as associated digital technologies such as big data and advanced analytics.

Without support from leadership, your AI transformation might not succeed. Successful AI adopters have strong executive leadership support for the new technology. Survey respondents from firms that have successfully deployed an AI technology at scale tend to rate C-suite support nearly twice as high as respondents from companies that have not adopted any AI technology. They add that strong support comes not only from the CEO and IT executives but also from all other C-level officers and the board of directors.

You don’t have to go it alone on AI — partner for capability and capacity. With the AI field recently picking up its pace of innovation after the decades-long “AI winter,” technical expertise and capabilities are in short supply. Even large digital natives such as Amazon and Google have turned to companies and talent outside their confines to beef up their AI skills. Consider, for example, Google’s acquisition of DeepMind, which is using its machine learning chops to help the tech giant improve even core businesses like search optimization. Our survey, in fact, showed that early AI adopters have primarily bought the right fit-for-purpose technology solutions, with only a minority of respondents both developing and implementing all AI solutions in-house.

Resist the temptation to put technology teams solely in charge of AI initiatives. Compartmentalizing accountability for AI with functional leaders in IT, digital, or innovation can result in a hammer-in-search-of-a-nail outcome: technologies being launched without compelling use cases. To ensure a focus on the most valuable use cases, AI initiatives should be assessed and co-led by both business and technical leaders, an approach that has proved successful in the adoption of other digital technologies.

Take a portfolio approach to accelerate your AI journey. AI tools today vary along a spectrum ranging from tools that have been proven to solve business problems (for example, pattern detection for predictive maintenance) to those with low awareness and currently-limited-but-high-potential utility (for example, application of AI to developing competitive strategy). This distribution suggests that organizations could consider a portfolio-based approach to AI adoption across three time horizons:

Short-term: Focus on use cases where there are proven technology solutions today, and scale them across the organization to drive meaningful bottom-line value.

Medium-term: Experiment with technology that’s emerging but still relatively immature (deep learning for video recognition, for example) to prove its value in key business use cases before scaling.

Long-term: Work with academia or a third party to solve a high-impact use case (augmented human decision making in a key knowledge worker role, for example) with bleeding-edge AI technology to potentially capture a sizable first-mover advantage.

Machine learning is a powerful tool, but it’s not right for everything. Machine learning and its most prominent subfield, deep learning, have attracted a lot of media attention and received a significant share of the financing that has been pouring into the AI universe, garnering nearly 60% of all investments from outside the industry in 2016.

But while machine learning has many applications, it is just one of many AI-related technologies capable of solving business problems. There’s no one-size-fits-all AI solution. For example, the AI techniques implemented to improve customer call center performance could be very different from the technology used to identify credit card payments fraud. It’s critical to look for the right tool to solve each value-creating business problem at a particular stage in an organization’s digital and AI journey.

Digital capabilities come before AI. We found that industries leading in AI adoption — such as high-tech, telecom, and automotive — are also the ones that are the most digitized. Likewise, within any industry the companies that are early adopters of AI have already invested in digital capabilities, including cloud infrastructure and big data. In fact, it appears that companies can’t easily leapfrog to AI without digital transformation experience. Using a battery of statistics, we found that the odds of generating profit from using AI are 50% higher for companies that have strong experience in digitization.

Be bold. In a separate study on digital disruption, we found that adopting an offensive digital strategy was the most important factor in enabling incumbent companies to reverse the curse of digital disruption. An organization with an offensive strategy radically adapts its portfolio of businesses, developing new business models to build a growth path that is more robust than before digitization. So far, the same seems to hold true for AI: Early AI adopters with a very proactive, strictly offensive strategy report a much better profit outlook than those without one.

The biggest challenges are people and processes. In many cases, the change-management challenges of incorporating AI into employee processes and decision making far outweigh technical AI implementation challenges. As leaders determine which tasks machines should handle and which humans should perform, both new and traditional, it will be critical to implement programs that allow for constant reskilling of the workforce. And as AI continues to converge with advanced visualization, collaboration, and design thinking, businesses will need to shift from a primary focus on process efficiency to a focus on decision management effectiveness, which will further require leaders to create a culture of continuous improvement and learning.

Make no mistake: The next digital frontier is here, and it’s AI. While some firms are still reeling from previous digital disruptions, a new one is taking shape. But it’s early days. There’s still time to make AI a competitive advantage.


Jacques Bughin is a director of the McKinsey Global Institute based in Brussels.

Brian McCarthy is a partner in McKinsey’s Atlanta office.

Michael Chui is a McKinsey Global Institute partner based in San Francisco, and leads MGI’s work on the impact of technological change.

Technology Speeds Up Timeline on Quarterly Close


Companies are taking four and a half days to collect the quarterly snapshot of their financial position, down from six days in 2009

Duke Energy reduced its quarterly closing timeline by 30% to 40% to just a handful of days by 2010. PHOTO: MONICA HERNDON/ZUMA PRESS

As accounting becomes more reliant on technology, finance chiefs across a range of sectors are reaping substantial benefits from closing their books faster.

Companies including Red Hat Inc., Duke Energy Corp. and Dun & Bradstreet Corp. have sped up their quarterly close to gain quicker access to their results.

In 2017, it takes most companies four and a half days to capture a quarterly snapshot of their financial position, down from six days in 2009, according to PricewaterhouseCoopers LLP benchmarking studies. The consulting and accounting firm examined the practices of roughly 500 companies around the world with a median revenue of $2.5 billion.

Companies that have accelerated their quarterly close say having results in hand earlier makes decision-making easier and helps the organization become more nimble. The extra time allows the finance team to perform a deeper analysis, catch errors and invest more time in planning for the next quarter.

[Chart: “Dash to Close.” Companies are reducing the number of days spent on closing their books each quarter; the chart shows the top quartile, median and lower quartile of days in 2009 versus 2017. Source: PricewaterhouseCoopers LLP]

A faster quarterly close was the priority for Eric Shander when he joined open-source software solutions company Red Hat as chief accounting officer in 2015. Mr. Shander and his team spent 14 months streamlining and accelerating the process.

Tasks such as account reconciliation were previously left to the end of the reporting period, contributing to the last-minute rush. Now, accounts are reconciled every few weeks. Mr. Shander also redistributed book-closing responsibilities across the finance team to ensure a more equitable workload.

Red Hat now closes its books comfortably in two days, down from five days previously, said Mr. Shander, who was named chief financial officer in April.

The finance team has been more productive as a result of the extra time, Mr. Shander said. They have caught and fixed errors, dug deeper into the data before announcing results and pivoted earlier to identifying priorities for the next quarter, he said.

“We’re actually considering moving up some of our earnings announcements as a result of it,” he said. “It’s been a huge success.”

Advances in technologies are helping companies accelerate their book-closing process. More companies are automating their close to reduce the amount of manual activities, such as journal entries, said William Marchionni, senior business adviser at consulting firm Hackett Group Inc.’s Finance Operations Advisory Program.

“Some top performers are getting management reporting data on revenue, shipments, cost of goods sold, and other key metrics on a daily basis from their information systems,” Mr. Marchionni said.

For Dun & Bradstreet CFO Rich Veldran, the lure of cost savings has prompted investments in robotics and automation technology that accelerate the quarterly reporting process. The data and analytics company closes its books in four days, despite operating across more than 200 countries, which adds to the complexity of its financial reporting process.

“There’s a real opportunity for us to do things in a much more automated, faster way, within finance,” Mr. Veldran said, adding that his team is already testing several potential applications for robotic process automation in the finance function.

Steven Young, CFO of Duke Energy. PHOTO: DUKE ENERGY

A new software system was key to helping Duke Energy streamline its quarterly close, said CFO Steven Young. The electric utility in 2007 launched a three-year revamp of its financial infrastructure, after a series of acquisitions burdened the company with a patchwork of financial systems and processes, Mr. Young said. Duke reduced its closing timeline by 30% to 40% to just a handful of days by 2010, Mr. Young said, though he declined to state the exact number of days. The company has continued to improve its quarterly close through new technologies.

“The advantage is that you get data disseminated through the organization quicker, you can then communicate trends, patterns and that can result in quicker decisions to take tactical actions in response to the data,” Mr. Young said.

Companies that operate across multiple geographies and sell different types of products and services often require more time to close their books than a single-product, single-geography business, said Beth Paul, a partner at PwC.

CFOs in a particular sector, such as airlines, autos or retail, often aim to close their books and report results around the same time to keep in line with industry norms.

“There’s a view that they need to be consistent with their peers because if you’re lagging, it could lead people to wonder why,” Ms. Paul said, adding that straggling behind the pack could raise doubts about management’s competency.

She also noted that certain sectors, such as banks and financial services, tend to close their books faster due to greater investments in technology.

Still, for many CFOs accelerating the quarterly close process remains a low priority. Instead, these companies have focused on meeting increasing regulatory demands and deployed resources to operational projects such as entering new markets or launching new product lines.

“Account-to-report has historically been the last place where companies invest. It isn’t client facing, and they have ended up doing things on a shoestring,” said Hackett Group’s Mr. Marchionni.

Advertisers Try to Avoid the Web’s Dark Side, From Fake News to Extremist Videos


Marketers are reevaluating their approach to automated ad-buying and demanding more accountability

ILLUSTRATION: PETER AND MARIA HOEY

In February, Kieran Hannon, chief marketing officer of Belkin International Inc., noticed an odd tweet asking the electronics maker why it was advertising on Breitbart News Network, a right-wing website known for scorched-earth populism.

A banner ad promoting the company’s new Linksys mesh router had appeared on the site, even though Breitbart wasn’t among the roughly 200 sites Belkin had preapproved for its ads.

Mr. Hannon called his ad agency, which couldn’t explain the mix-up.

“We still don’t know how that happened,” he said.

Such headaches are becoming all too familiar for marketing executives, as they come to grips with the trade-offs inherent in automated advertising. Known as “programmatic” ad buying, it is now the way the vast majority of digital display ads are sold.

Programmatic advertising allows the buyer to target consumers across thousands of sites, based on their browsing history or shopping habits or demographics. Doing so is more cost-effective than buying more expensive ads on a handful of well-known sites.

But marketers don’t fully control whether their ads will show up in places they would rather avoid: sites featuring pornography, pirated content, fake news, videos supporting terrorists, or outlets whose traffic is artificially generated by computer programs.

The confusion stems from the convoluted infrastructure of the ad-technology world: a maze of agencies, ad networks, exchanges, publisher platforms and vendors. Instead of buying space on websites, brands can buy audiences—categories of people—and their ads are placed on sites those people visit.

The problems arise when those people are on sites where brands don’t wish to appear.

The Tangled World of Digital Ads

Online advertisers and their partners can generally target specific groups of users based on certain characteristics. But their ads can still wind up in undesirable places.

[Slideshow: seven illustrations by Sean McDade]

As the issues pile up, marketers are taking action, with the help of companies that independently verify that their ads aren’t going to toxic locations. Brands are cutting down their purchase of ads through open exchanges—public pools of ad space from hundreds of thousands of sites—opting instead for methods that give them more visibility into where ads are appearing.

On open exchanges, it “just becomes harder and harder to figure out if your ad is showing up in a legitimate ad experience,” said Kristi Argyilan, senior vice president of marketing at retailer Target Corp.

Marketers have been dealing with these issues for years. But the “brand safety” risks in digital advertising have hit home with multiple high-profile episodes in recent months.

In March, a number of big brands including PepsiCo Inc., Wal-Mart Stores Inc., L’Oréal SA and AT&T Inc. pulled their ads from YouTube and the Google Display Network, a network of third-party websites, after revelations that ads ran alongside objectionable content, including videos promoting anti-Semitism and terrorism.

Google, a unit of Alphabet Inc., promised to better police its content and give marketers more information about where their ads appear on YouTube. It also said it would bolster its technology that automatically screens videos, and it set a 10,000-view threshold for a video channel to reach before it can make money from ads.​

Some advertisers, satisfied with Google’s efforts, have begun spending again, while others, including big marketers such as SC Johnson & Son Inc., Procter & Gamble Co. and J.P. Morgan Chase & Co., haven’t returned, according to people familiar with the matter.

J.P. Morgan is working with Google to get its ads back on “safe YouTube channels” and expects to return soon, one of the people said.

P&G is working closely with YouTube to test the safeguards it has put in place since the problems arose, a spokeswoman for the company said. A spokeswoman for SC Johnson declined to comment.


“Many advertisers never left and many have decided to come back,” Google said in a statement. “While they know that no system can be perfect, they appreciate the actions we’ve taken and know we are taking this seriously and are committed to getting better and better.”

Though the number of ordinary web users who saw an ad in an offensive YouTube video was likely small, the combination of the public-relations damage from the revelations and the potential for more widespread exposure down the road led marketers to act.

Breitbart, which is popular with the “alt-right”—a loose conglomeration of groups, some of which embrace white supremacy and view multiculturalism as a threat—became a controversial landing spot for advertisers in the wake of the 2016 presidential election. Brands that have pulled out of Breitbart include Kellogg Co., eyewear company Warby Parker and insurer Allstate Corp.

A spokesman for Breitbart declined to comment.

The recurring issues have caused brands to adjust their overall approach to automated ad buying.

Colgate-Palmolive Co. is adding language to the contract it has with its ad-buying firm, which requires it to maintain blacklists of sites the company doesn’t want to have its ads appear on, according to people familiar with the matter. Colgate didn’t respond to requests for comment.


Advertisers are doubling down on using online ad verification services such as Integral Ad Science Inc. and White Ops Inc.

OpenSlate, which helps advertisers vet YouTube channels, currently works with roughly 230 advertisers, more than twice as many as last year. “The interest in finding out where your ads are running and who saw your ad has skyrocketed over the past three months,” said OpenSlate CEO Mike Henry.

More marketers are purchasing ads through “programmatic direct” deals, in which a publisher uses technology to sell directly to advertisers, and “private programmatic marketplaces,” in which a publisher or a select group of publishers can sell to a select group of advertisers, in real time. Automation is involved in both, but the risks are far lower than with open exchanges.

Display-ad spending on programmatic direct deals in the U.S. is expected to grow by 35% this year to $18.2 billion, while spending on private marketplaces will increase 39% to about $6 billion, according to eMarketer. By contrast, spending on open exchanges is forecast to grow by 8.4% this year to $8.3 billion.

Target pulled back from buying via open exchanges at the end of 2015 and now uses private marketplaces to buy ads from about 160 different publishers.

Hewlett Packard Enterprise, which spun out from Hewlett-Packard Co. in 2015, set up private marketplaces with about 15 publishers including Forbes and CNN about a year ago.

“We needed to make sure our ads are showing up where our ads make contextual sense,” said Chris Drago, the company’s senior director of global media. “I don’t want to be on Victoria’s Secret because someone is there buying bras for his wife.”

Why Do Gas Station Prices Constantly Change? Blame the Algorithm


Retailers are using artificial-intelligence software to set optimal prices, testing textbook theories of competition; antitrust officials worry such systems raise prices for consumers

The Knaap Tankstation gas station in Rotterdam, Netherlands, uses a2i Systems artificial-intelligence pricing software. PHOTO: KNAAP TANKSTATION BV

ROTTERDAM, the Netherlands—One recent afternoon at a Shell-branded station on the outskirts of this Dutch city, the price of a gallon of unleaded gas started ticking higher, rising more than 3½ cents by closing time. A little later, a competing station 3 miles down the road raised its price about the same amount.

The two stations are among thousands of companies that use artificial-intelligence software to set prices. In doing so, they are testing a fundamental precept of the market economy.

In economics textbooks, open competition between companies selling a similar product, like gasoline, tends to push prices lower. The AI pricing algorithms, though, determine the optimal price sometimes dozens of times a day, and as they get better at predicting what competitors are charging and what consumers are willing to pay, there are signs they sometimes end up boosting prices together.

Advances in A.I. are allowing retail and wholesale firms to move beyond “dynamic pricing” software, which has for years helped set prices for fast-moving goods, like airline tickets or flat-screen televisions. Older pricing software often used simple rules, such as always keeping prices lower than a competitor.

On the Same Track

Two competing gas stations in the Rotterdam area both using a2i Systems pricing software roughly mirrored each other’s price moves during a selected week.

These new systems crunch mountains of historical and real-time data to predict how customers and competitors will react to any price change under different scenarios, giving them an almost superhuman insight into market dynamics. Programmed to meet a certain goal—such as boosting sales—the algorithms constantly update tactics after learning from experience.
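To make that loop concrete, here is a minimal sketch in Python of how such a system might choose among candidate prices. Everything in it is an assumption for illustration: the linear demand stand-in, the candidate prices and the numbers. A real system would learn demand sensitivities from years of historical and live sales data rather than hard-coding them.

    # Minimal sketch of an algorithmic pricing loop (illustrative only, not a2i's code).

    class LinearDemandModel:
        """Toy stand-in for a learned model: predicted volume falls as our price
        rises and rises as the competitor's price rises. A real system would learn
        these sensitivities from historical and live sales data."""
        def __init__(self, base_volume, own_price_sens, comp_price_sens):
            self.base = base_volume
            self.own = own_price_sens
            self.comp = comp_price_sens

        def predict_volume(self, price, competitor_price):
            return max(0.0, self.base - self.own * price + self.comp * competitor_price)

    def choose_price(model, competitor_price, wholesale_cost, candidates):
        """Pick the candidate price with the highest predicted margin."""
        def profit(price):
            return (price - wholesale_cost) * model.predict_volume(price, competitor_price)
        return max(candidates, key=profit)

    model = LinearDemandModel(base_volume=500, own_price_sens=200, comp_price_sens=150)
    print(choose_price(model, competitor_price=1.65, wholesale_cost=1.40,
                       candidates=[1.55, 1.60, 1.65, 1.70, 1.75]))

After each pricing period, the observed sales would be fed back into the demand model so the next round of predictions reflects how customers actually responded.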

Ulrik Blichfeldt, chief executive of Denmark-based a2i Systems A/S, whose technology powers the Rotterdam gas stations, said his software is focused primarily on modeling consumer behavior and leads to benefits for consumers as well as gas stations. The software learns when raising prices drives away customers and when it doesn’t, leading to lower prices at times when price-sensitive customers are likely to drive by, he said.

“This is not a matter of stealing more money from your customer. It’s about making margin on people who don’t care, and giving away margin to people who do care,” he said.

Driving the popularity of A.I. pricing is the pain rippling through much of the retail industry, long a low-margin business that is now facing increased competition from online rivals.

“The problem we’re solving is that retailers are going through a bloodbath,” said Guru Hariharan, chief executive of Mountain View, Calif.-based Boomerang Commerce Inc., whose A.I.-enabled software is used by Staples Inc. and other companies.

Staples uses A.I.-enabled software to change prices on 30,000 products a day on its website. PHOTO: RICHARD B. LEVINE/ZUMA PRESS

The rise of A.I. pricing poses a challenge to antitrust law. Authorities in the EU and U.S. haven’t opened probes or accused retailers of impropriety for using A.I. to set prices. Antitrust experts say it could be difficult to prove illegal intent as is often required in collusion cases; so far, algorithmic-pricing prosecutions have involved allegations of humans explicitly designing machines to manipulate markets.

Officials say they are looking at whether they need new rules. The Organization for Economic Cooperation and Development said it plans to discuss in June at a round table how such software could make collusion easier “without any formal agreement or human interaction.”

“If professional poker players are having difficulty playing against an algorithm, imagine the difficulty a consumer might have,” said Maurice Stucke, a former antitrust attorney for the U.S. Justice Department and now a law professor at the University of Tennessee, who has written about the competition issues posed by A.I. “In all likelihood, consumers are going to end up paying a higher price.”

In one example of what can happen when prices are widely known, Germany required all gas stations to provide live fuel prices that it shared with consumer price-comparison apps. The effort appears to have boosted prices between 1.2 to 3.3 euro cents per liter, or about 5 to 13 U.S. cents per gallon, according to a discussion paper published in 2016 by the Düsseldorf Institute for Competition Economics.

Makers and users of A.I. pricing said humans remain in control and that retailers’ strategic goals vary widely, which should promote competition and lower prices.

“If you completely let the software rule, then I could see [collusion] happening,” said Faisal Masud, chief technology officer for Staples, which uses A.I.-enabled software to change prices on 30,000 products a day on its website. “But let’s be clear, whatever tools we use, the business logic remains human.”

Online retailers in the U.S., such as Amazon.com Inc. and its third-party sellers, were among the first to adopt dynamic pricing. Amazon.com declined to comment.

Since then, sectors with fast-moving goods, frequent price changes and thin margins—such as the grocery, electronics and gasoline markets—have been the quickest to adopt the latest algorithmic pricing, because they are the most keen for extra pennies of margin, analysts and executives say.

The pricing-software industry has grown in tandem with the amount of data available to—and generated by—retailers. Stores keep information on transactions, as well as information about store traffic, product location and buyer demographics. They also can buy access to databases that monitor competitors’ product assortments, availability and prices—both on the web and in stores.

A.I. is used to make sense of all that information. International Business Machines Corp. said its price-optimization business uses capabilities from its Watson cognitive-computing engine to advise retailers on pricing. Germany’s Blue Yonder GmbH, a price-optimization outfit that serves clients in the grocery, electronics and fashion industries, said it uses neural networks based on those its physicist founder built to analyze data from a particle collider.

Neural networks are a type of A.I. computer system inspired by the interconnected structure of the human brain. They are good at matching new information to old patterns in vast databases, which allows them to use real-time signals such as purchases to predict from experience how consumers and competitors will behave.

Algorithms can also figure out what products are usually purchased together, allowing them to optimize the price of a whole shopping cart. If customers tend to be sensitive to milk prices, but less so to cereal prices, the software might beat a competitor’s price on milk, and make up margin on cereal.
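A toy sketch of that basket logic, with invented costs, sensitivities and margin targets (none of this is drawn from any vendor's software), might look like this:

    # Illustrative basket-level pricing: undercut on the price-sensitive item,
    # recover the margin on the less-sensitive items bought alongside it.

    def price_basket(items, competitor_prices, target_basket_margin):
        """items: dict of name -> {"cost": unit cost, "sensitivity": 0..1}.
        Price-sensitive items are pegged just below the competitor; whatever
        margin is still needed to hit the basket target is loaded onto the
        remaining, less-sensitive items."""
        prices = {}
        # Step 1: beat the competitor on items shoppers watch closely.
        for name, info in items.items():
            if info["sensitivity"] >= 0.7:
                prices[name] = round(competitor_prices[name] - 0.02, 2)
        # Step 2: work out how much margin the basket still needs.
        margin_so_far = sum(prices[n] - items[n]["cost"] for n in prices)
        remaining = [n for n in items if n not in prices]
        needed = target_basket_margin - margin_so_far
        # Step 3: spread the remaining margin over the insensitive items.
        for name in remaining:
            prices[name] = round(items[name]["cost"] + needed / len(remaining), 2)
        return prices

    items = {"milk":   {"cost": 2.10, "sensitivity": 0.9},
             "cereal": {"cost": 2.50, "sensitivity": 0.3}}
    print(price_basket(items, {"milk": 2.49, "cereal": 3.99}, target_basket_margin=1.20))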

“They’re getting really smart,” said Nik Subramanian, chief technology officer of Brussels-based Kantify, who said its pricing software has figured out how to raise prices after it sees on a competitor’s website that it has run out of a certain product.

Algorithmic pricing works well in the retail gasoline market, because it is a high-volume commodity that is relatively uniform, leading station owners in competitive markets to squeeze every penny.

For years, price wars in cutthroat markets have followed a typical pattern. A retailer would cut prices to lure customers, then competitors would follow suit, each cutting a little more than the others, eventually pushing prices down close to the wholesale cost. Finally one seller would reach a breaking point and raise prices. Everyone would follow, and the cycle would start all over.

Some economists say the price wars helped consumers with overall lower prices, but led to very thin margins for station owners.

Danish oil and energy company OK hired a2i Systems in 2011 because its network of gas stations was suffering from a decade-old price war. It changed what it charged as many as 10 times a day, enlisting a team of people to drive around the country and call in competitors’ prices, said Gert Johansen, the company’s head of concept development.

A2i Systems—the name means applied artificial intelligence—was started by Alireza Derakhshan and Frodi Hammer, both engineering graduates of the University of Southern Denmark, in Odense. Before focusing on fuel, they built other A.I. systems, including a game displayed on interactive playground floor tiles that adapted to the speed and skill level of the children running around on top.

For OK, a2i created thousands of neural networks—one for each fuel at each station—and trained them to compare live sales data to years of historical company data to predict how customers would react to price changes. Then it ran those predictions through algorithms built to pick the optimal prices and learn from their mistakes.
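The structure a2i describes, many small models keyed by station and fuel, can be sketched roughly as follows; the feature set, column names and scikit-learn regressor here are stand-ins, not a2i's implementation.

    # Sketch: one small demand network per (station, fuel), trained on history.
    # Assumes a pandas DataFrame `history` with columns: station, fuel, price,
    # competitor_price, hour_of_day, volume_sold. Illustrative only.
    import pandas as pd
    from sklearn.neural_network import MLPRegressor

    FEATURES = ["price", "competitor_price", "hour_of_day"]

    def train_models(history: pd.DataFrame) -> dict:
        """Return a dict keyed by (station, fuel), each value a fitted network
        that predicts volume sold from price and context."""
        models = {}
        for (station, fuel), rows in history.groupby(["station", "fuel"]):
            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
            net.fit(rows[FEATURES], rows["volume_sold"])
            models[(station, fuel)] = net
        return models

    def predict_reaction(models, station, fuel, price, competitor_price, hour):
        """Predicted volume if this station set this price in this context."""
        X = pd.DataFrame([[price, competitor_price, hour]], columns=FEATURES)
        return float(models[(station, fuel)].predict(X)[0])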

In a pilot study, OK split 30 stations into two sets, a control group and an a2i group. The group using the software averaged 5% higher margins, according to a paper Mr. Derakhshan presented last June at an A.I. conference in Seville, Spain.

Scandinavian supermarket chain REMA 1000 says it will roll out price-optimization software made by Texas-based Revionics Inc. in coming months. PHOTO: JOSEPH DEAN/NEWSCOM/ZUMA PRESS

The new system could make complex decisions that weren’t simply based on a competitor’s prices, Mr. Derakhshan said in an interview.

One client called to complain the software was malfunctioning. A competitor across the street had slashed prices in a promotion, but the algorithm responded by raising prices. There wasn’t a bug. Instead, the software was monitoring the real-time data and saw an influx of customers, presumably because of the long wait across the street.

“It could tell that no matter how it increased prices, people kept coming in,” said Mr. Derakhshan.

On the outskirts of Rotterdam, Koen van der Knaap began running the system on his family-owned Shell station in recent months. Down the road, a station owned by Tamoil, a gasoline retailer owned by Libya’s Oilinvest Group, uses it too.

During a late-March week for which both Tamoil and Mr. van der Knaap provided hourly data, the prices for unleaded gas at the two stations—which vary in opening hours and services—bounced around independently much of the time, and generally declined, reflecting falling oil prices that week.

During some periods, however, the stations’ price changes paralleled each other, going up or down by more than 2 U.S. cents per gallon within a few hours of each other. Often, prices dropped early in the morning and increased toward the end of the day, implying that the A.I. software may have been identifying common market-demand signals through the local noise.

The station owners say their systems frequently lower prices to gain volume when there are customers to be won.

“It can be frustrating,” said Erwin Ralan, an electronics-store manager who was filling up at the Tamoil station that week. “Prices usually go up at the end of the day. But when you’re empty and you’re in a rush, there’s not much you can do.”

The Rise of the Smart City


Officials are tapping all kinds of data to make their cities safer, healthier and more efficient, in what may be just the start of a sweeping change in how cities are run.

As city officials across the country begin to draw on data about income, traffic, fires, illnesses and more, big changes are already under way in leading smart cities.

Cities have a way to go before they can be considered geniuses. But they’re getting smart pretty fast.

In just the past few years, mayors and other officials in cities across the country have begun to draw on the reams of data at their disposal—about income, burglaries, traffic, fires, illnesses, parking citations and more—to tackle many of the problems of urban life. Whether it’s making it easier for residents to find parking places, or guiding health inspectors to high-risk restaurants or giving smoke alarms to the households that are most likely to suffer fatal fires, big-data technologies are beginning to transform the way cities work.

Cities have just scratched the surface in using data to improve operations, but big changes are already under way in leading smart cities, says Stephen Goldsmith, a professor of government and director of the Innovations in Government Program at the Harvard Kennedy School. “In terms of city governance, we are at one of the most consequential periods in the last century,” he says.

Although cities have been using data in various forms for decades, the modern practice of civic analytics has only begun to take off in the past few years, thanks to a host of technological changes. Among them: the growth of cloud computing, which dramatically lowers the costs of storing information; new developments in machine learning, which put advanced analytical tools in the hands of city officials; the Internet of Things and the rise of inexpensive sensors that can track a vast array of information such as gunshots, traffic or air pollution; and the widespread use of smartphone apps and mobile devices that enable citizens and city workers alike to monitor problems and feed information about them back to city hall.

All this data collection raises understandable privacy concerns. Most cities have policies designed to safeguard citizen privacy and prevent the release of information that might identify any one individual. In theory, anyway. In reality, even when publicly available data is stripped of personally identifiable information, tech-savvy users can combine it with other data sets to figure out an awful lot of information about any individual. Widespread use of sensors and video can also present privacy risks unless precautions are taken. The technology “is forcing cities to confront questions of privacy that they haven’t had to confront before,” says Ben Green, a fellow at Harvard’s Berkman Klein Center for Internet and Society and lead author of a recent report on open-data privacy.

Still, cities are moving ahead, finding more ways to use the considerable amounts of data at their disposal. Here’s a look at some of the ways the information revolution is changing the way cities are run—and the lives of their residents.

Spotting potential problems… before they occur

Perhaps the most innovative way cities are employing data is to anticipate problems.

Consider the risk of death by fire. Although the number is declining nationally, there were still 2,685 civilian deaths in building fires in 2015, the latest year for which data is available. The presence of smoke alarms is critical in preventing these deaths; the National Fire Protection Association, a nonprofit standards group, says a working smoke alarm cuts the risk of dying in a home fire in half.

New Orleans, like most cities, has a program run by its Fire Department to distribute smoke detectors. But until recently, the program relied on residents to request an alarm. After a fire in which five people—three children, their mother and grandmother—perished, the department started looking for a way to make sure that they were getting alarms into homes where they could make a difference.

FIRE RISK | With census and other data, New Orleans mapped the combined risk of missing smoke alarms and fire deaths, helping officials target distribution of smoke detectors. PHOTO: CITY OF NEW ORLEANS/OPA

Oliver Wise, director of the city’s Office of Performance and Accountability, had his data team tap two Census Bureau surveys to identify city blocks most likely to contain homes without smoke detectors and at the greatest risk for fire fatalities—those with young children or the elderly. They then used other data to zero in on neighborhoods with a history of house fires. Using advanced machine-learning techniques, Mr. Wise’s office produced a map that showed those blocks where fire deaths were most likely to occur and where the Fire Department could target its smoke-detector distribution.
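The underlying arithmetic can be illustrated simply: estimate, for each block, the probability that homes lack a smoke alarm and the risk that a fire there turns fatal, combine the two, and rank blocks by the result. The field names, numbers and multiplicative combination below are illustrative assumptions; the city's actual model used richer machine-learning techniques.

    # Illustrative block-ranking for smoke-detector outreach. In practice the
    # probabilities would come from census surveys and fire-incident history.

    def rank_blocks(blocks):
        """blocks: list of dicts with per-block estimates. Returns the blocks
        sorted by combined risk, highest first."""
        for block in blocks:
            # Chance a home lacks an alarm times the chance a fire there
            # would be deadly (young children, elderly residents and past
            # house fires all feed these estimates).
            block["risk_score"] = block["p_no_smoke_alarm"] * block["p_fatal_fire"]
        return sorted(blocks, key=lambda b: b["risk_score"], reverse=True)

    blocks = [
        {"block_id": "A", "p_no_smoke_alarm": 0.40, "p_fatal_fire": 0.020},
        {"block_id": "B", "p_no_smoke_alarm": 0.15, "p_fatal_fire": 0.050},
        {"block_id": "C", "p_no_smoke_alarm": 0.55, "p_fatal_fire": 0.005},
    ]
    for block in rank_blocks(blocks):
        print(block["block_id"], round(block["risk_score"], 4))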

Since the data program began in early 2015, the department has installed about 18,000 smoke detectors, says Tim McConnell, chief of the New Orleans Fire Department. That compares with no more than 800 detectors a year under the older program. It is too early to tell how effective it has been at preventing fire deaths, Chief McConnell says, since they are so rare. But the program did have an early, notable success.

A few months after the program began, firefighters responded to a call in Central New Orleans. Arriving, the fire crew found three families—11 people in all—huddled on the lawn. The residents had been alerted by smoke detectors recently installed under the outreach program.

“That was just one of those stories where you go, ‘This works,’ ” Chief McConnell says. “For us, it’s a game changer.”

Predictive analytics have also been used to improve restaurant health inspections in Chicago. The Department of Public Health relies on about three dozen inspectors to check for possible violations at more than 15,000 food establishments across the city. It needed a better way to prioritize inspections to make sure that places with potential critical violations—those that carry the greatest risk for the spread of food-borne illness—were examined before someone actually became sick.

The data team in the city’s Department of Innovation and Technology developed an algorithm that looked at 11 variables, including whether the restaurant had previous violations, how long it has been in business (the longer, the better), the weather (violations are more likely when it’s hot), even stats about nearby burglaries (which tells something about the neighborhood, though analysts aren’t sure what).
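Conceptually, the tool reduces to scoring each establishment on those variables and inspecting the riskiest first. The weights and feature names in this sketch are placeholders rather than Chicago's fitted model:

    # Illustrative risk scoring for inspection prioritization. Weights and
    # features are placeholders; the real model was fit to past outcomes.

    WEIGHTS = {
        "prior_critical_violations": 1.2,
        "years_in_business":        -0.1,   # longer-running places score lower
        "hot_weather_days":          0.4,
        "nearby_burglaries":         0.2,
    }

    def risk_score(restaurant):
        """Weighted sum of risk factors; higher means inspect sooner."""
        return sum(weight * restaurant.get(feature, 0.0)
                   for feature, weight in WEIGHTS.items())

    def prioritize(restaurants):
        """Order establishments from highest to lowest predicted risk."""
        return sorted(restaurants, key=risk_score, reverse=True)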

CHECK, PLEASE | To prioritize restaurant inspections, Chicago developed an algorithm to identify eateries most likely to have violations. The darker the pink, the higher the likelihood. PHOTO: CITY OF CHICAGO

With the tool, the health department could identify establishments that were most likely to have problems and move them up the list for inspection. After the algorithm went into use in 2015, a follow-up analysis found that inspectors were visiting restaurants with possible critical violations seven days sooner than before. Since then, its use has resulted in a 15% rise in the number of critical violations found, though the number of illness complaints—an imperfect measure of violations—has been flat.

Sensors on everything

Just as individuals are flocking to Fitbits and other wearables to monitor their health, cities, too, are turning to sensors to track their own vital signs. Through this Internet of Things, sensor-equipped water pipes can identify leaks, electric meters can track power use, and parking meters can automatically flag violations.

As part of a smart-city initiative, Kansas City, Mo., has installed computer-equipped sensors on streetlights along a 2.2-mile light-rail line that opened in March of last year. The city uses video from the sensors to gather information about traffic and available street parking along the corridor. The data is then made available on a public website that shows the location of streetcars, areas where traffic is moving slowly, and locations with open parking spots. It also provides an hourly traffic count in the corridor for the past day.

PARK HERE | In Kansas City, Mo., sensors on streetlights along a new light-rail line gather information about traffic and available parking that the public can view online. PHOTO: XAQT

The sensors can even count foot traffic, which could assist entrepreneurs looking to open a new coffee shop or retail outlet, and help city officials estimate the size of crowds, which is useful in responding to public disturbances or in assigning cleanup crews after events like the city’s 2015 World Series parade. Their ability to detect motion also can be used to adjust the LED streetlights so that they dim if no one is around and automatically brighten if cars or pedestrians pass by. The goal is to use data to “improve our efficiency of service and ascertain what services we ought to be providing,” says Bob Bennett, Kansas City’s chief innovation officer.

Cities are also putting sensors in the hands of citizens. In Louisville, Ky., a coalition of public, private and philanthropic organizations has provided more than 1,000 sensor-equipped inhalers to asthma sufferers to map where in the city poor air quality is triggering breathing problems. The tiny sensors, from Propeller Health, a Madison, Wis., medical-device company, have built-in GPS that collects time and location data with each puff of the inhaler.

The city is still completing its analysis of the data, but early findings were impressive, says Grace Simrall, Louisville’s chief of civic innovation. For one thing, patients in the program saw measurable improvement, in part because the data gave them a better understanding of their disease and gave their physicians more information to devise treatment plans. And as expected, the data made it possible to show clusters of inhaler use and link it with air pollution.

In one case, sensor data spotlighted a congested road on the east side of town where inhaler use was three times as high as in other parts of the city. In response, the city planted a belt of trees separating the road from a nearby residential neighborhood; the plantings have resulted in a 60% reduction in particulate matter (which can aggravate breathing problems) behind the green belt.

Citizens as data collectors

Using the public as data collectors isn’t new—it’s the idea behind 911 and 311 systems. But smartphone apps, in the hands of residents and city workers, give cities new and more powerful ways to expand their data-collection efforts.

In Mobile, Ala., building-code inspectors armed with smartphones and Facebook Inc.’s Instagram photo-sharing app were able to inventory the city’s 1,200 blighted properties in just eight days—a task that enforcement officers had previously considered impossible with the older paper-based systems of tracking blight. With Instagram, inspectors could snap a photo of a property and have it appear on a map, showing officials where dilapidated, abandoned or other problem properties are clustered.

AIR TRIGGER | Sensor-equipped asthma inhalers in Louisville that collect data on time and place of use have improved care for individuals and helped the city address problem areas. PHOTO: PROPELLER HEALTH

The inventory was just the first step. Mobile’s two-year-old Innovation Team, funded with a grant from Bloomberg Philanthropies, cross-referenced the data with other available property information—tax records, landmark status, out-of-state ownership—to compile a “blight index,” a master profile of every problem property in the city. This made it possible to identify which property owners might need assistance in rehabbing their properties and which ones to cite for code violations. The city is wrapping up a second survey of blighted properties to measure the net change over the past year, says Jeff Carter, Innovation Team’s executive director. “Instagram was phase one, and we would never have made it to phase two without it,” Mr. Carter says.

Mobile data collection is also helping Los Angeles to clean up city streets. Teams from the city sanitation department use video and smartphones to document illegal dumping, abandoned bulky items and other trash problems. The teams can use an app to report problems needing immediate attention, but what was really noteworthy—especially for a city the size of L.A.—was that they were able to view and grade all 22,000 miles of the city’s streets and alleyways.

The result has been to give officials and the public a better picture of garbage-plagued areas that can be targeted under Mayor Eric Garcetti’s Clean Streets program. Data collected by the mobile teams is compiled in a detailed map of the city, with each street segment rated as clean, somewhat clean or not clean. The city publishes the map online so that anyone can get a color-coded view of how streets rank for cleanliness.

STATE OF THE STREETS | This online map tracks the progress of Los Angeles’s Clean Streets program. Green means ‘clean,’ yellow ‘somewhat clean,’ and red ‘not clean.’ PHOTO: CLEAN STREETS LA

The program, which recently finished its first full year, has resulted in an 80% reduction in the number of areas scored “not clean,” says Lilian Coral, Los Angeles’s chief data officer. The new data-driven approach not only has made it possible to better identify problem areas, Ms. Coral says, but it also has helped to reduce disparities in the city’s cleanup efforts, which previously depended mainly on complaints to identify locations needing attention.

In Boston, meanwhile, the city has joined with Waze, a navigation app from Google that enables drivers to share traffic and road conditions in real time.

The Boston traffic-management center uses Waze data to supplement live feeds from its network of traffic cameras and sensors, getting a more detailed picture of what’s happening on city streets. Messages from Waze users can alert the center to traffic problems—a double-parked truck or a fender-bender—as soon as they develop, allowing officials to respond more quickly.

Waze data also has helped the city to run low-cost experiments on possible traffic changes. For instance, to test how to best enforce “don’t block the box” at congested intersections, the center took more than 20 problem intersections and assigned each one either a changing message sign, a police officer or no intervention at all. Using Waze data, analysts would then see which enforcement approach was most effective at reducing congestion. As it turns out, Waze’s traffic-jam data didn’t show that either approach made much difference in reducing congestion (which may reinforce the view of those who believe little can be done to eliminate traffic headaches).
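The analysis itself is straightforward once each intersection is tagged with its assigned intervention; a sketch of the kind of summary the analysts could run is below, with column names and the pandas approach assumed for illustration rather than taken from the city's code.

    # Sketch: compare average congestion by intervention type. Assumes a
    # DataFrame of Waze jam reports with one row per intersection-hour and
    # columns: intersection, intervention ("sign", "officer" or "none"),
    # jam_minutes.
    import pandas as pd

    def summarize_experiment(jams: pd.DataFrame) -> pd.DataFrame:
        """Mean jam time per intervention, with the number of observations
        behind each mean."""
        return (jams.groupby("intervention")["jam_minutes"]
                    .agg(["mean", "count"])
                    .sort_values("mean"))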

The partnership, one of 250 that Waze has signed with cities around the world, also enables the city to feed street-closure and similar information into the Waze app, making it easier for drivers to reroute trips before they get stuck in traffic.

“When residents see a problem, sometimes their reaction is to call us, but more these days their instinct is to report it through an app like Waze or Yelp,” says Andrew Therriault, Boston’s chief data officer. “To be as responsive as possible to the public’s needs, we need to listen to their input through whichever medium they choose to share it.”

Mr. Totty, a former news editor for The Journal Report in San Francisco, can be reached at reports@wsj.com.

Appeared in the Apr. 17, 2017, print edition.

YouTube Tops 1 Billion Hours of Video a Day, on Pace to Eclipse TV


Google unit posts 10-fold increase in viewership since 2012, boosted by algorithms personalizing user lineups

YouTube viewers world-wide are now watching more than 1 billion hours of videos a day, threatening to eclipse U.S. television viewership, a milestone fueled by the Google unit’s aggressive embrace of artificial intelligence to recommend videos.

YouTube surpassed the figure, which is far larger than previously reported, late last year. It represents a 10-fold increase since 2012, YouTube said, when it started building algorithms that tap user data to give each user personalized video lineups designed to keep them watching longer.

Feeding those recommendations is an unmatched collection of content: 400 hours of video are uploaded to YouTube each minute, or 65 years of video a day.

“The corpus of content continues to get richer and richer by the minute, and machine-learning algorithms do a better and better job of surfacing the content that an individual user likes,” YouTube Chief Product Officer Neal Mohan said.


YouTube’s billion-hour mark underscores the wide lead of the 12-year-old platform in online video—threatening traditional television, which lacks similarly sophisticated tools.

Facebook Inc. and Netflix Inc. said in January 2016 that users watch 100 million hours and 116 million hours, respectively, of video daily on their platforms. Nielsen data suggest Americans watch on average roughly 1.25 billion hours of live and recorded TV a day, a figure steadily dropping in recent years.

Despite its size, it is unclear if YouTube is making money. Google parent Alphabet Inc. doesn’t disclose YouTube’s performance, but people familiar with its financials said it took in about $4 billion in revenue in 2014 and roughly broke even.

YouTube makes most of its money on running ads before videos but it also spends big on technology and rights to content, including deals with TV networks for a planned web-TV service. When asked about profits last year, YouTube Chief Executive Susan Wojcicki said, “Growth is the priority.”

YouTube’s success using tailor-made video lineups illustrates how technology companies can reshape media consumption into narrow categories of interests, a trend some observers find worrying.

“If I only watch heavy-metal videos, of course it’s serving me more of those. But then I’m missing out on the diversity of culture that exists,” said Christo Wilson, a Northeastern University computer-science professor who studies the impact of algorithms. “The blessing and curse of cable and broadcast TV is it was a shared experience.…But that goes away if we each have personalized ecosystems.”

YouTube benefits from the enormous reach of Google, which handles about 93% of internet searches, according to market researcher StatCounter. Google embeds YouTube videos in search results and pre-installs the YouTube app on its Android software, which runs 88% of smartphones, according to Strategy Analytics.

That has helped drive new users to its platform. About 2 billion unique users now watch a YouTube video every 90 days, according to a former manager. In 2013, the last time YouTube disclosed its user base, it said it surpassed 1 billion monthly users. YouTube is now likely larger than the world’s biggest TV network, China Central Television, which has more than 1.2 billion viewers.

YouTube long configured video recommendations to boost total views, but that approach rewarded videos with misleading titles or preview images. To increase user engagement and retention, the company in early 2012 changed its algorithms to boost watch time instead. Immediately, clicks dropped nearly 20% partly because users stuck with videos longer. Some executives and video creators objected.

Months later, YouTube executives unveiled a goal of 1 billion hours of watch time daily by the end of 2016. At the time, optimistic forecasts projected it would reach 400 million hours by then.

YouTube retooled its algorithms using a field of artificial intelligence called machine learning to parse massive databases of user history to improve video recommendations.
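The change in objective can be illustrated with a toy ranking function: instead of ordering candidate videos by predicted click probability alone, order them by predicted click probability times expected watch time. The field names and numbers below are assumptions for illustration, not YouTube's system.

    # Toy illustration of ranking by expected watch time instead of clicks.

    def rank_by_clicks(candidates):
        return sorted(candidates, key=lambda v: v["p_click"], reverse=True)

    def rank_by_watch_time(candidates):
        # Expected minutes watched = chance of clicking times predicted minutes
        # watched after the click. Clickbait with a high click rate but short
        # watches falls down the list under this objective.
        return sorted(candidates,
                      key=lambda v: v["p_click"] * v["pred_watch_minutes"],
                      reverse=True)

    candidates = [
        {"id": "clickbait",   "p_click": 0.30, "pred_watch_minutes": 0.5},
        {"id": "documentary", "p_click": 0.10, "pred_watch_minutes": 12.0},
    ]
    print([v["id"] for v in rank_by_clicks(candidates)])      # clickbait first
    print([v["id"] for v in rank_by_watch_time(candidates)])  # documentary first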


Previously, the algorithms recommended content largely based on what other users clicked after watching a particular video, the former manager said. Now their “understanding of what is in a video [and] what a person or group of people would like to watch has grown dramatically,” he said.

Engineers tested each change on a control group, and only kept the change if those users spent more time on YouTube.

One strategy was to find new areas of user interest. For instance, YouTube could suggest a soccer video to users watching a lot of football, and then flood the lineup with more soccer if the first clip was a hit. “Once you realize there’s an additional preference, exploit that,” the former manager said.
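That explore-then-exploit idea can be sketched as follows; the threshold, data structures and topic labels are assumptions, not YouTube's implementation.

    # Toy explore/exploit over a new topic in a recommendation lineup.

    def update_lineup(lineup, watch_history, catalog, explore_topic, hit_threshold=0.5):
        """If probe videos from the exploratory topic were watched well, add more
        of that topic; otherwise keep exploring with a single probe video."""
        probes = [w for w in watch_history if w["topic"] == explore_topic]
        if probes and sum(w["completion"] for w in probes) / len(probes) >= hit_threshold:
            # Exploit: the viewer liked the probe, so bring in more of the topic.
            extra = [v for v in catalog if v["topic"] == explore_topic][:5]
            return lineup + extra
        # Keep exploring with one more clip from the new topic.
        probe = next((v for v in catalog if v["topic"] == explore_topic), None)
        return lineup + ([probe] if probe else [])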

But the algorithm didn’t always deliver. For instance, when Ms. Wojcicki joined as CEO in 2014, YouTube recommended videos to her about eczema because she had recently watched a clip about skin rashes after suspecting one of her children had one, said Cristos Goodrow, YouTube’s video-recommendation chief.

That made the video-recommendation team realize there were certain “single-use videos” to ignore as signals of user interest. But to flag them, the team had to identify each category by hand, such as certain health and how-to videos.

Then last year YouTube partnered with Google Brain, which develops deep neural networks, an advanced form of machine-learning software that has driven dramatic improvements in other fields, such as language translation. The Google Brain system was able to identify single-use video categories on its own.

Did You Hear That? Robots Are Learning The Subtle Sounds Of Mechanical Breakdown

Posted on by .
Fast Company
STEVEN MELENDEZ 02.16.17

Expert mechanics can detect what’s wrong with a car without lifting the hood. So can deep learning robots, says startup 3DSignals.

Sometimes, machines just don’t sound right.

Brakes squeal, hard drives crunch, air conditioners rattle, and their owners know it’s time for a service call. But some of the most valuable machinery in the world often operates with nobody around to hear the mechanical breakdowns, from the chillers and pumps that drive big-building climate control systems to the massive turbines at hydroelectric power plants.

That’s why a number of startups are working to train computers to pick up on changes in the sounds, vibrations, heat emissions, and other signals that machines give off as they’re working or failing. The hope is that the computers can catch mechanical failures before they happen, saving on repair costs and reducing downtime.

“We’re developing an expert mechanic’s brain that identifies exactly what is happening to a machine by the way that it sounds,” says Amnon Shenfeld, founder and CEO of 3DSignals, a startup based in Kfar Saba, Israel, that is using machine learning to train computers to listen to machinery and diagnose problems at facilities like hydroelectric plants and steel mills.

And while most efforts today focus on large-scale machinery, Shenfeld says the same sort of technology might one day help detect failures in home appliances or in devices like self-driving cars or rental vehicles that don’t spend much time in the hands of an owner who’s used to their usual sounds.

“When you have car-as-a-service, the person in the car doesn’t know the car,” he says. “If it sounds strange, you’re losing this link with the human in the car deciding it’s time to take it to the mechanic.”

Initially, 3DSignals’ systems detect anomalous sounds based on physical modeling of particular types of equipment and notify an expert mechanic to diagnose the problem. Once the problem is fixed, the mechanic’s diagnosis is added to 3DSignals’ database, which the company uses to train its algorithms not only to detect unusual sounds but also to interpret them and determine what kind of repair is needed.
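
As a rough illustration of that two-stage flow, the sketch below flags anomalous sound signatures without any labels and then, once mechanic-supplied diagnoses accumulate, learns to name the fault. The spectral features, fault label, and models are assumptions, not 3DSignals' actual pipeline.

# Sketch of the two-stage approach: unsupervised anomaly detection first,
# then supervised fault naming once mechanics have labeled past problems.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
SAMPLES = 16_000  # one second of audio at 16 kHz

def band_energies(audio, bands=8):
    """Crude spectral features: total energy in each of a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([band.sum() for band in np.array_split(spectrum, bands)])

# Stage 1: anomaly detection trained only on sounds of a healthy machine.
healthy = [band_energies(rng.normal(size=SAMPLES)) for _ in range(200)]
detector = IsolationForest(random_state=0).fit(healthy)

new_clip = band_energies(rng.normal(size=SAMPLES) + 5 * np.sin(np.arange(SAMPLES)))
if detector.predict([new_clip])[0] == -1:
    print("Anomalous sound signature - notify an expert mechanic")

# Stage 2: once mechanics label past anomalies, learn to name the likely fault.
X = np.array(healthy[:100] + [new_clip] * 20)
y = ["normal"] * 100 + ["bearing_wear"] * 20   # diagnoses supplied by mechanics
classifier = RandomForestClassifier(random_state=0).fit(X, y)
print(classifier.predict([new_clip]))          # expected: ['bearing_wear']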

“The next time we hit this signature on the same machine for the same customer or another customer using the same type of machine, it will not just be anomaly detection,” says Shenfeld.

And while 3DSignals focuses entirely on using its machine learning tools to process acoustic data—an area Shenfeld says is surprisingly neglected outside of speech recognition—other companies are using a variety of other types of data to detect and diagnose mechanical problems.

Systems from Augury, a startup with offices in New York and Haifa, Israel, monitor vibration, temperature, ultrasound, and electromagnetic emissions from machinery, typically large-scale heating and cooling systems. The company offers a portable tool technicians can use to capture readings on an iPhone or Android device, where an app delivers a real-time diagnosis, as well as a continuous monitoring system, says cofounder and CEO Saar Yoskovitz.

“One of our customers, located in Ohio, had a thunderstorm at 2 a.m., and the whole building went dark,” he says. “The generator was supposed to pick up automatically but it didn’t, and the way that they learned about it was through an alert they got from Augury.”

In more complicated situations, the system uses machine learning-based algorithms and a growing library of signals of different types of errors to suggest what’s wrong and what technicians can do to repair the machine, he says.

“Over time, we’ve collected probably the largest malfunction dictionary in the world for our types of machines,” says Yoskovitz.

For another startup, called Presenso, also based in Haifa, the exact type of data being monitored is less important than how it changes over time and what that signals about the machine’s operation. The company’s systems record data from sensors already installed in industrial equipment as part of existing control processes, streaming readings to Presenso’s cloud servers.

“We don’t care, or we don’t need to know, if the sensor we’re analyzing now measures temperature or voltage or flow,” says CEO and cofounder Eitan Vesely.

Presenso’s software tools use a variety of machine learning techniques to effectively build a model of a machine’s operations based on the sensor data it receives. The tools provide visualizations of unusual readings, and how different they are from normal operations.

“They don’t need any human guidance or [to] know what the physical attributes are that are being measured,” Vesely says. “The goal is for them to learn by themselves how the machine operates.”
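
One way to picture such a sensor-agnostic model of normal operation is a reconstruction-based detector: learn the structure of whatever channels arrive, regardless of what they measure, and score how far new readings fall from it. The sketch below uses PCA reconstruction error purely as a stand-in for whichever techniques Presenso actually uses; all data and thresholds are invented.

# Sketch: learn "normal operation" from unnamed sensor channels and flag
# readings the learned model cannot reconstruct. Illustrative only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 5 unnamed sensor channels - could be temperature, voltage, or flow; it doesn't matter.
latent = rng.normal(size=(2000, 2))
normal_readings = latent @ rng.normal(size=(2, 5)) + rng.normal(0, 0.1, (2000, 5))

model = PCA(n_components=2).fit(normal_readings)

def deviation_score(readings):
    """How far each reading is from what the learned model can reconstruct."""
    reconstructed = model.inverse_transform(model.transform(readings))
    return np.linalg.norm(readings - reconstructed, axis=1)

threshold = np.percentile(deviation_score(normal_readings), 99)

new_reading = rng.normal(size=(1, 5)) * 3        # behaviour the machine never showed before
if deviation_score(new_reading)[0] > threshold:
    print("Sensor behaviour deviates from normal operation - alert the operator")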

And while real-time breakdown detection is obviously useful for companies operating machines in their own facilities, experts say it could also be useful to equipment vendors or insurance companies providing financial coverage in the case of downtime.

“If a company has downtime or business interruption insurance, that’s something where it becomes incredibly relevant and also a money saver,” says Ryan Martin, a senior analyst at ABI Research.

Having more reliable predictions of when machines need maintenance could also spur equipment makers to offer uptime as a service, charging industrial users for extended warranties or even by the hour for the guaranteed use of their equipment, without taking on too much risk themselves, says Augury’s Yoskovitz.

“One of the problems with this business model is they hold all the risk,” he says. “If anything goes wrong, they pay for it, and they’re looking for ways to lower this risk.”

With Mastercard’s Identity Check, your face is the latest form of biometric identification.

Posted on by .

MASTERCARD

Selfies are no longer just a social media irritant: With Mastercard’s Identity Check, your face is the latest form of biometric identification. In October, Mastercard rolled out the new mobile-payment ID feature, which allows e-commerce customers to verify themselves at checkout simply by capturing an image of their face on their phone. “One of the biggest consumer pain points is when you are prompted for a password,” says Ajay Bhalla, who oversees Mastercard’s security efforts. “That leads to a lot of friction: unhappy consumers, merchants losing sales, banks losing volume.”

Two years in the making, Identity Check works much like the iPhone’s fingerprint ID system. Users set it up in advance by taking a series of photos from several different angles. That information is then converted into an algorithmic template, which Identity Check can match against a fresh picture at the moment of transaction. “Biometrics is actually safer [than a password] because it is really you,” says Bhalla.
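
Mastercard has not published the details, but template-style face verification generally works along these lines: enrollment photos are reduced to a compact numeric template, and a fresh selfie is scored against it at checkout. Everything in the sketch below, including the toy embed() function and the match threshold, is an assumption for illustration, not Mastercard's implementation.

# Sketch of template-style face verification: enroll several photos,
# store one averaged template, then compare a fresh selfie against it.
import numpy as np

rng = np.random.default_rng(0)

def embed(image):
    """Toy embedding: downsample the image and L2-normalize the result."""
    small = image[::16, ::16].astype(float).ravel()
    return small / np.linalg.norm(small)

def cosine(a, b):
    return float(a @ b)

# Enrollment: several photos from different angles -> one averaged template.
base_face = rng.integers(0, 256, (128, 128))
enrollment_photos = [base_face + rng.integers(-10, 10, (128, 128)) for _ in range(5)]
template = np.mean([embed(p) for p in enrollment_photos], axis=0)
template /= np.linalg.norm(template)

# Checkout: compare the fresh selfie (and an impostor) against the stored template.
MATCH_THRESHOLD = 0.9                       # illustrative
fresh_selfie = base_face + rng.integers(-10, 10, (128, 128))
impostor = rng.integers(0, 256, (128, 128))
print("user verified:", cosine(embed(fresh_selfie), template) > MATCH_THRESHOLD)
print("impostor verified:", cosine(embed(impostor), template) > MATCH_THRESHOLD)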

Shoppers weary of passwords might be excited to hear the news, but they should be careful not to get too happy. “The biggest issue we face is that people naturally start smiling when they take a picture,” says Bhalla, “and that messes with the algorithm.”

Mastercard is also exploring other new methods for identifying customers, such as iris scanning, voice recognition, and even heartbeat monitoring. Blood-pumping patterns, it turns out, are as unique as fingerprints. —Nikita Richardson

Milestones: In September, Mastercard launched a program to help developers integrate their services into new technologies such as virtual reality, augmented reality, and the internet of things.

Challenges: Mastercard has been hit with a $19 billion class-action lawsuit over allegations that the company charged illegal fees to U.K. vendors, leading to higher prices for consumers. (“[We] firmly disagree with the basis of this claim and we intend to oppose it,” says a Mastercard spokesperson.)

The age of analytics: Competing in a data-driven world

Posted on by .

McKinsey Global Institute

By Nicolaus Henke, Jacques Bughin, Michael Chui, James Manyika, Tamim Saleh, Bill Wiseman, and Guru Sethupathy

Big data’s potential just keeps growing. Taking full advantage means companies must incorporate analytics into their strategic vision and use it to make better, faster decisions.

Is big data all hype? To the contrary: earlier research may have given only a partial view of the ultimate impact. A new report from the McKinsey Global Institute (MGI), The age of analytics: Competing in a data-driven world, suggests that the range of applications and opportunities has grown and will continue to expand. Given rapid technological advances, the question for companies now is how to integrate new capabilities into their operations and strategies—and position themselves in a world where analytics can upend entire industries.

Big data continues to grow; if anything, earlier estimates understated its potential.

A 2011 MGI report highlighted the transformational potential of big data. Five years later, we remain convinced that this potential has not been oversold. In fact, the convergence of several technology trends is accelerating progress. The volume of data continues to double every three years as information pours in from digital platforms, wireless sensors, virtual-reality applications, and billions of mobile phones. Data-storage capacity has increased, while its cost has plummeted. Data scientists now have unprecedented computing power at their disposal, and they are devising algorithms that are ever more sophisticated.

Earlier, we estimated the potential for big data and analytics to create value in five specific domains. Revisiting them today shows uneven progress and a great deal of that value still on the table (exhibit). The greatest advances have occurred in location-based services and in US retail, both areas with competitors that are digital natives. In contrast, manufacturing, the EU public sector, and healthcare have captured less than 30 percent of the potential value we highlighted five years ago. And new opportunities have arisen since 2011, further widening the gap between the leaders and laggards.

Exhibit: Progress in capturing value from data and analytics has been uneven.

Leading companies are using their capabilities not only to improve their core operations but also to launch entirely new business models. The network effects of digital platforms are creating a winner-take-most situation in some markets. The leading firms have remarkably deep analytical talent taking on various problems—and they are actively looking for ways to enter other industries. These companies can take advantage of their scale and data insights to add new business lines, and those expansions are increasingly blurring traditional sector boundaries.

Where digital natives were built for analytics, legacy companies have to do the hard work of overhauling or changing existing systems. Adapting to an era of data-driven decision making is not always a simple proposition. Some companies have invested heavily in technology but have not yet changed their organizations so they can make the most of these investments. Many are struggling to develop the talent, business processes, and organizational muscle to capture real value from analytics.

The first challenge is incorporating data and analytics into a core strategic vision. The next step is developing the right business processes and building capabilities, including both data infrastructure and talent. It is not enough simply to layer powerful technology systems on top of existing business operations. All these aspects of transformation need to come together to realize the full potential of data and analytics. The challenges incumbents face in pulling this off are precisely why much of the value we highlighted in 2011 is still unclaimed.

The urgency for incumbents is growing, since leaders are staking out large advantages, and hesitating increases the risk of being disrupted. Disruption is already happening, and it takes multiple forms. Introducing new types of data sets (“orthogonal data”) can confer a competitive advantage, for instance, while massive integration capabilities can break through organizational silos, enabling new insights and models. Hyperscale digital platforms can match buyers and sellers in real time, transforming inefficient markets. Granular data can be used to personalize products and services—including, most intriguingly, healthcare. New analytical techniques can fuel discovery and innovation. Above all, businesses no longer have to go on gut instinct; they can use data and analytics to make faster decisions and more accurate forecasts supported by a mountain of evidence.

The next generation of tools could unleash even bigger changes. New machine-learning and deep-learning capabilities have an enormous variety of applications that stretch into many sectors of the economy. Systems enabled by machine learning can provide customer service, manage logistics, analyze medical records, or even write news stories.

These technologies could generate productivity gains and an improved quality of life, but they carry the risk of causing job losses and dislocations. Previous MGI research found that 45 percent of work activities could be automated using current technologies; some 80 percent of that is attributable to existing machine-learning capabilities. Breakthroughs in natural-language processing could expand that impact.

Data and analytics are already shaking up multiple industries, and the effects will only become more pronounced as adoption reaches critical mass—and as machines gain unprecedented capabilities to solve problems and understand language. Organizations that can harness these capabilities effectively will be able to create significant value and differentiate themselves, while others will find themselves increasingly at a disadvantage.

Ways Predictive Analytics Improves Innovation

Posted on by .

InformationWeek

3/12/2016

From drug discovery to price optimization, across virtually every industry, more companies are using predictive analytics to increase revenue, reduce costs, and modernize the way they do business. Here are some examples.

Disrupt An Industry

Drug discovery has been done the same way for decades, if not centuries. Researchers have a hypothesis-driven target, screen that target against chemical compounds, and then iteratively take them through clinical trials. As history has shown, a lot of trial and error is involved, perhaps more than is necessary, particularly in this day and age. According to industry association PhRMA, it takes an average of more than 10 years and $2.6 billion to develop a drug. Pharmaceutical company BERG Health aims to change that. It is using predictive analytics and artificial intelligence (AI) to discover and develop lifesaving treatments.

“There’s no way a human can process the amount of data necessary to dissect the complexity of biology and disease into form-based discovery,” said Niven Narain, founder and CEO of BERG. “We use human tissue samples to learn about as many biological components as we can and we include that patient’s clinical and demographic data.”

Its platform builds a model of healthy individuals and then compares that to individuals with a disease. The AI then builds a model of the genes and proteins that pinpoints the core differences between health and disease. The model helps BERG target its drug discovery process. The company also uses the same process to identify which patients are the best candidates for a certain drug.

Using a single tissue sample, its platform can create more than 14 trillion data points that collectively become a “patient signature.” The patient signature indicates whether or not the individual will likely respond well to a treatment that, for example, is far more precise than first-line pancreatic cancer treatment. First-line pancreatic treatments fail 90% of the time, Narain said.
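
In spirit, that healthy-versus-disease comparison can be pictured as a model trained to separate the two groups, whose most influential features point at the molecular differences worth targeting. The sketch below is purely illustrative; BERG's platform, data, and methods are far larger and proprietary.

# Sketch: fit a classifier separating healthy from diseased profiles and
# inspect which features drive the separation. Synthetic, illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_features = 50                      # stand-in for thousands of molecular measurements

healthy = rng.normal(0, 1, (200, n_features))
diseased = rng.normal(0, 1, (200, n_features))
diseased[:, [3, 17]] += 2.0          # two proteins secretly elevated in disease

X = np.vstack([healthy, diseased])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The largest coefficients point at the differences between health and disease -
# candidate drug targets or components of a "patient signature".
top = np.argsort(np.abs(model.coef_[0]))[::-1][:5]
print("Most discriminative features:", top)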

Meet Customer Demand

Handmade photo product company PhotoBarn has increased its throughput 500% by creating warehouse software and lean manufacturing processes that are built around predictive analytics. Before its transformation in 2015, the company struggled to balance supply and demand.

About halfway through 2015, the company started using predictive analytics to forecast sales, inventory, and raw materials to anticipate what it would need before and during the holiday season. That and its new lean manufacturing process enabled the company to move five times more product using the same number of people.

“The spikes and volumes in the holiday period are hard to handle. In 2015, we reimagined our supply chain from suppliers to customers,” said PhotoBarn’s business analytics and marketing chief Ryan McClurkin. “We were able to handle the order volumes without hiccups [because] we’re anticipating versus reacting, and it pays huge dividends.”
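
A bare-bones version of that kind of demand forecast fits a trend plus month-of-year seasonality to past sales and projects the next holiday peak so inventory and staffing can be sized ahead of time. The sales figures, trend, and seasonality below are fabricated; PhotoBarn's actual models are not public.

# Sketch: forecast seasonal demand from a linear trend plus month-of-year effects.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Three years of monthly unit sales with steady growth and a December spike.
months = np.arange(36)
seasonal = np.where(months % 12 == 11, 3.0, 1.0)          # holiday multiplier
sales = (100 + 2 * months) * seasonal + rng.normal(0, 10, 36)

# Features: linear trend + one-hot month of year.
def features(m):
    month_of_year = np.eye(12)[m % 12]
    return np.column_stack([m, month_of_year])

model = LinearRegression().fit(features(months), sales)

# Forecast the next December to size raw-material orders and staffing in advance.
next_december = np.array([47])                              # month index of next December
print("Forecast December demand:", model.predict(features(next_december))[0])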

Right-Size Resources

Predictive analytics has helped Alabama’s Birmingham Zoo more accurately forecast attendance. As a result, the zoo can make more informed staffing and marketing decisions.

“The number of people who attend the zoo affects staffing, marketing and events planning. You could look at historical averages, but we pulled historical data and correlated that with weather data, school calendars, national holidays, [and other] variables to predict how many people would show on a given day,” said Joshua Jones, managing partner at data analytics and data science consulting firm StrategyWise.

The information is displayed on a digital dashboard that provides a much more accurate forecast. Instead of guessing that 10,000 people will come to the park based on historical information alone, Birmingham Zoo can now see it is likely that 7,131 visitors (or whatever the number happens to be) will attend on a particular day.
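
The dashboard's day-level forecast can be pictured as a regression over weather, school-calendar, and holiday variables, as in the sketch below. All figures are invented; StrategyWise's actual model is not public.

# Sketch: predict daily attendance from weather, school, and holiday features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_days = 730

# Features: [forecast high temp (F), rain (0/1), school in session (0/1), holiday (0/1)]
X = np.column_stack([
    rng.normal(75, 15, n_days),
    rng.integers(0, 2, n_days),
    rng.integers(0, 2, n_days),
    rng.integers(0, 2, n_days),
])
# Synthetic attendance: warm, dry, no-school, holiday days draw bigger crowds.
y = (4000 + 40 * (X[:, 0] - 60) - 2500 * X[:, 1]
     - 1500 * X[:, 2] + 3000 * X[:, 3] + rng.normal(0, 400, n_days))

model = RandomForestRegressor(random_state=0).fit(X, y)

# Tomorrow: 82F, no rain, school out, not a holiday -> a specific headcount,
# rather than a guess from historical averages alone.
tomorrow = np.array([[82, 0, 0, 0]])
print(f"Expected visitors: {model.predict(tomorrow)[0]:.0f}")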

Create The Perfect Game

Success in the lottery industry is all about finding the right payout levels. Two of the most important factors are the sizes of the prizes and the frequency of payouts, which is why prize values and odds vary significantly within a single game. However, some games are more popular than others.

“Lotteries want to maximize their revenues so they can [contribute to] education and whatever social programs the state wants to support,” said Mather Economics director Arvid Tchivzhel. “We’ve measured responses in tickets purchased due to changes in the payout structure. You can almost build the perfect game based on where you set the payout levels and the frequency.”
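
One hedged way to picture "building the perfect game" is to estimate how ticket sales respond to the payout structure, then search candidate structures for the highest net revenue. The demand coefficients, prices, and constraints below are invented for illustration and are not Mather Economics' model.

# Sketch: estimate demand response to payout structure, then grid-search for
# the structure that maximizes net revenue. All coefficients are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Historical games: [top prize ($M), odds of winning any prize]
games = np.column_stack([rng.uniform(1, 20, 200), rng.uniform(0.05, 0.4, 200)])
tickets_sold = 1e6 * (0.5 + 0.03 * games[:, 0] + 2.0 * games[:, 1]) + rng.normal(0, 5e4, 200)

demand = LinearRegression().fit(games, tickets_sold)

TICKET_PRICE, AVG_SMALL_PRIZE = 2.0, 4.0      # dollars, illustrative
best, best_net = None, -np.inf
for top_prize in np.linspace(1, 20, 20):      # $M
    for odds in np.linspace(0.05, 0.4, 20):
        tickets = demand.predict([[top_prize, odds]])[0]
        net = tickets * (TICKET_PRICE - odds * AVG_SMALL_PRIZE) - top_prize * 1e6
        if net > best_net:
            best, best_net = (top_prize, odds), net

print("Revenue-maximizing (top prize $M, odds of any win):", best)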

Sell More Effectively

Jewelry TV (JTV), like many luxury goods retailers, was hit hard by the recession. The company tried a number of tactics to improve sales that didn’t work as well as hoped, so it eventually embraced predictive analytics.

“A regression model helps you understand what’s impacting your revenue. When you start building a predictive analytics model, it can tell you why what you’ve been doing isn’t working — customers don’t care,” said Ryan McClurkin, former director of strategic analytics at Jewelry TV and currently chief of business analytics and marketing at PhotoBarn. “Predictive analytics can tell you your customers care about this [instead].” That’s the power of predictive analytics. It allows you to see the variables you can innovate around.