Monthly Archives: July 2016

How the Best Business Leaders Disrupt Themselves


And how doing so helps them keep up with technological change.

Why isn’t Intuit dead? Its peers from the Pleistocene epoch of PC software (VisiCalc, WordStar) are long gone; only Intuit survives as a significant independent business. The reason is easy to state, hard to emulate: The company has continually disrupted itself, most recently scrapping its desktop-driven business model of the previous 30 years and switching to one based on the cloud. Revenues went down before they went up, but Intuit’s stock recently hit an all-time high.

Such stories are extremely rare. Successful incumbent firms are more likely to follow the trajectory of Kodak, Sears, Bethlehem Steel, and many newspapers, dead or diminished after technology transformed their industries. Little wonder that for the past two years, when we have asked Fortune 500 CEOs to name their single biggest challenge, their No. 1 answer has been “the rapid pace of technological change.”

Yet a few incumbents have defied the odds and succeeded at self-disruption. How they do it is becoming clear.

They see their business as disrupters would see it. This challenge is psychological and requires escaping the aura of headquarters. At the dawn of the web, American Airlines’ Sabre subsidiary assembled a team and sent it to another building with orders to disrupt the industry’s travel-agent-based business model. The result was Travelocity. Charles Schwab responded to the rise of “robo-advisers” like Betterment and Wealthfront by forming a full-time team that ignored the company’s corporate playbook. The team developed Schwab Intelligent Portfolios, a robo-product that now manages more assets than any of its disrupter startup rivals.

They find the courage to leap. Netflix CEO Reed Hastings knew that online streaming would disrupt his successful DVDs-by-mail model. He committed to streaming in 2011, and Netflix’s stock plunged 76%. Wall Street called for his head. But Hastings pushed on, and today DVDs are just 7% of the company’s business, while the stock is up 150% from its pre-plunge peak.

They never stop. Self-disruption isn’t something you do just once. Every successful disrupter becomes an incumbent in its transformed industry, and digital business models don’t last long. Amazon disrupted bookstores 20 years ago, then disrupted its own books-by-mail model with Kindle e-readers. Digital evolution is merciless: Intel was a champion self-disrupter until it missed the mobile revolution; in April it announced 12,000 layoffs.

Leaders can glean these lessons from the first industries to be disrupted by digital tech. But the hardest step for incumbents is the first one, best expressed by Peter Drucker: “If leaders are unable to slough off yesterday, to abandon yesterday, they simply will not be able to create tomorrow.” 

AI Is Learning to See the World—But Not the Way Humans Do

MIT Technology Review
by Jamie Condliffe
June 30, 2016

AI systems are modeled after human biology, but their vision systems still work quite differently.

Computer vision has been having a moment. No longer do image recognition algorithms make dumb mistakes when looking at the world: these days, they can accurately tell you that an image contains a cat. But the way they pull off the party trick may not be as familiar to humans as we thought.

Most computer vision systems identify features in images using neural networks, which are inspired by our own biology and loosely mirror the brain’s architecture; here, though, the biological sensing and neurons are swapped out for mathematical functions. Now a study by researchers at Facebook and Virginia Tech says that despite those similarities, we should be careful in assuming that both work in the same way.
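To make that swap concrete, here is a minimal sketch of a single artificial “neuron” as nothing more than a mathematical function (in Python with NumPy; the weights and inputs are illustrative, not taken from any real network):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum pushed through a nonlinearity.
    Dendrites, synapses, and firing rates are all replaced by plain arithmetic."""
    activation = np.dot(inputs, weights) + bias
    return max(0.0, activation)  # ReLU, a common nonlinearity in vision nets

# Illustrative values only: three input signals and three learned weights.
signals = np.array([0.2, 0.9, 0.4])
weights = np.array([0.7, -0.3, 0.5])
print(neuron(signals, weights, bias=0.1))  # ~0.17
```

A full vision network is just millions of these small functions stacked in layers, with the weights tuned during training.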

To see exactly what was happening as both humans and AI analyzed an image, the researchers studied where the two focused their attention. Both were provided with blurred images and asked questions about what was happening in the picture—“Where is the cat?” for instance. Parts of the image could be selectively sharpened, one at a time, and both human and AI did so until they could answer the question. The team repeated the tests using several different algorithms.

Both could provide answers; the interesting result is how they did so. On a scale from -1 (total disagreement) to 1 (total agreement), two humans scored 0.63 on average in terms of where they focused their attention across the image. A human paired with an AI averaged just 0.26.
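The article doesn’t name the metric behind these scores, but a rank correlation over the two attention maps is one standard choice that yields exactly this -1-to-1 scale. Here is a minimal sketch with made-up toy maps (Python with NumPy and SciPy; attention_agreement is an illustrative name, not from the study):

```python
import numpy as np
from scipy.stats import spearmanr

def attention_agreement(map_a, map_b):
    """Rank-correlate two attention maps: +1 means both observers ranked the
    image regions identically by importance, -1 means completely reversed."""
    corr, _ = spearmanr(map_a.ravel(), map_b.ravel())
    return corr

# Toy 2x2 maps: the human weights the top-left region, the AI the bottom-right.
human = np.array([[0.80, 0.10], [0.06, 0.04]])
ai    = np.array([[0.04, 0.10], [0.06, 0.80]])
print(attention_agreement(human, human))  #  1.0: perfect agreement
print(attention_agreement(human, ai))     # -0.8: same image, different focus
```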

In other words: the AI and human were both looking at the same image, both being asked the same question, both getting it right—but using different visual features to arrive at those same conclusions.

This result makes explicit a phenomenon that researchers had already hinted at. In 2014, a team from Cornell University and the University of Wyoming showed that it was possible to create images that fool an AI into seeing something, simply by building a picture out of the strong visual features that the software had come to associate with an object. Humans have a large pool of common-sense knowledge to draw on, which means they don’t get caught out by such tricks. That’s something researchers are trying to incorporate into a new breed of intelligent software that understands the semantic visual world.
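To see why such fooling images are even possible, consider a toy “detector” that is just a linear scoring function (a drastic simplification of the deep networks in the actual work; every name and value here is illustrative). Repeatedly nudging an input in the direction that raises the score builds a picture out of the detector’s own preferred features, whether or not it looks like anything to a human:

```python
import numpy as np

# A linear 'cat detector' scores an image x as w . x. The gradient of that
# score with respect to the image is simply w, so gradient ascent pushes the
# image toward the detector's own preferred feature pattern.
rng = np.random.default_rng(0)
w = rng.normal(size=64)       # stand-in for learned 'cat' features
x = np.zeros(64)              # start from a blank image

for _ in range(100):
    x += 0.1 * w              # gradient ascent on the detector's score
    x = np.clip(x, 0.0, 1.0)  # keep pixel values in a valid range

print(w @ x)  # a confidently high 'cat' score for noise-like texture
```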

But just because computers don’t use the same approach doesn’t necessarily mean they’re inferior. In fact, they may be better off ignoring the human approach altogether.

The kinds of neural networks used in computer vision usually employ a technique known as supervised learning to work out what’s happening in an image. Ultimately, their ability to associate a complex combination of patterns, textures, and shapes with the name of an object is made possible by providing the AI with a training set of images whose contents have already been labeled by a human.
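As a minimal sketch of that supervised recipe (using scikit-learn’s small built-in digits dataset as a stand-in for the huge labeled photo collections real vision systems are trained on):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Every training image arrives with a human-provided label; learning means
# adjusting the network's weights until pixels map to the right labels.
digits = load_digits()  # 8x8 grayscale images of handwritten digits, pre-labeled
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print(net.score(X_test, y_test))  # accuracy on images it has never seen
```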

But teams at Facebook and Google’s DeepMind have been experimenting with unsupervised learning systems that ingest content from video and images to learn what human faces and everyday objects look like, without any human intervention. Magic Pony, recently bought by Twitter, also shuns supervised learning, instead learning to recognize statistical patterns in images to teach itself what edges, textures, and other features should look like.
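Those projects’ internals aren’t described here, but the flavor of label-free learning can be sketched with a simple autoencoder: given no labels at all, a network forced to reconstruct its input through a narrow hidden layer must invent its own compact features (again a toy illustration, not any of the systems named above):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor

# No labels anywhere: the only training signal is 'rebuild your input'.
# Squeezing 64 pixels through 16 hidden units forces the network to discover
# compact features (strokes, edges) entirely on its own.
X = load_digits().data / 16.0  # pixels scaled to [0, 1]; targets == inputs

autoencoder = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0)
autoencoder.fit(X, X)

reconstruction = autoencoder.predict(X[:1])
print(np.abs(reconstruction - X[:1]).mean())  # small mean reconstruction error
```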

In these cases, it’s perhaps even less likely that the knowledge of the AI will be generated through a process aping that of a human. Once inspired by human brains, AI may beat us by simply being itself.