Inside a former baked-goods factory near the Oakland airport, a construction crew is installing giant vats that will soon be used to scale up production of the Impossible Burger–a plant-based meat designed to look and taste good enough that meat eaters, not just vegetarians, will want to order it.
The “meat,” developed by a team led by former Stanford biochemistry professor Patrick Brown, is currently being produced in a 10,000-square-foot pilot facility in Silicon Valley and a 1,500-square-foot space in New Jersey. The new facility, at around 60,000 square feet, will dramatically scale up production capacity. When the factory is fully ramped up, it will be able to produce at least 1 million pounds of Impossible Burger meat a month, or 250 times more than today.
“It will enable us to go from something that is scarce–and we’re constantly getting complaints from customers about the fact that they can’t buy them at their local restaurant–and start to make it ubiquitous,” Brown said at an event launching the new factory.
The burger is currently available at 11 restaurants, including three that launched it on March 23. But by the end of the year, the company expects to supply 1,000 restaurants. It just signed a deal to have the burgers featured in the San Francisco Giants’ baseball stadium.
For the company, scale is critical to its mission. Brown started working on the project while thinking about the problem of climate change; raising cows and other animals for meat is one of the world’s largest sources of greenhouse gases. It also uses and pollutes more water than any other industry, and drives deforestation. But he realized that the majority of the world wouldn’t voluntarily go vegetarian for those reasons.
“Billions of people around the world who love meat are not going to stop demanding it, so we just have to find a better way to produce it,” he says.
The team studied the properties of meat–particularly heme, the molecule that makes blood red and gives meat a meaty taste–and then experimented with recreating those properties using only ingredients from plants.
“When you think about meat, there’s the muscle, there’s the connective tissue, there’s the fat, so we had to figure out how to mimic those parts of beef to figure out how to experience the texture, but also the taste,” Don DeMasi, senior vice president of engineering for Impossible Foods, tells Fast Company.
The result looks like it was made from a cow, not plants. The handful of chefs who were given first access to the product say they think of it as meat. “It kind of made this transition in my mind to be–it’s just another kind of meat,” says chef Traci Des Jardins, who has been serving Impossible burgers at her San Francisco restaurant Jardinière for about a year, and now is also serving it at Public House, her restaurant at the city’s ballpark.
Before it’s cooked, the product is red like raw beef; as it cooks, it browns. As the heme mixes with juices and oozes out, it can look like it’s bleeding. “You’re seeing the exact same cooking chemistry that you see in meat, literally,” says Brown.
As the company scales up its beef alternative, it will focus on restaurants. U.S. restaurants serve more than 5 billion pounds of burgers a year, the company says, and Impossible wants its 12 million pounds to be among them. Retail will come later, along with other products that are currently in development, such as poultry and steak.
“Our long-term goal is to basically develop a new and better way to create all the foods we make from animals,” says Brown.
OLD-GUARD CAR MANUFACTURERS
Audi: In 2015, started test-driving an AI-laden prototype nicknamed “Jack” that lets drivers easily switch to autonomous mode via buttons on the wheel
BMW: Has promised an entirely autonomous car called iNext by 2021; BMW’s ReachNow car-sharing service launched in April in Seattle and expanded to Portland, Oregon, in September
Ford: Announced plans for fully autonomous car with no pedals or steering wheel by 2021; recently invested $75 million in California laser-sensor company Velodyne; bought San Francisco–based private bus service Chariot and plans to expand it
Volvo: Forged partnerships with Microsoft (will incorporate HoloLens augmented-reality technology into its cars) and Uber (which is planning to use Volvos as part of its self-driving test fleet in Pittsburgh); teamed up with safety-system-maker Autoliv to set up a new company focused on autonomous-driving software
TECH GIANTS
Alphabet: Launched self-piloting-car project back in 2009; testing retrofitted Lexus SUVs and its own adorable prototype vehicles in several locations; recently partnered with Fiat Chrysler to build self-driving minivans
Apple: Has invested $1 billion in Chinese ride-share company Didi Chuxing; reportedly rebooting its efforts to develop an Apple car; might also build a system to add autonomous features to preexisting vehicles
Baidu: Chinese search-engine company has teamed up with digital-graphics pioneer Nvidia to create a self-driving-vehicle system that uses 3-D maps in the cloud; is in the testing stage with several different self-driving-car prototypes, including one built with BMW
Tesla: After revolutionizing electric vehicles with the semi-autonomous Model S, will release more-affordable all-electric Model 3, possibly in late 2017; Model S’s involved in a pair of high-profile fatal accidents
RIDE-SHARING COMPANIES
Didi Chuxing: Acquired Uber’s Chinese operations in August, ending a fierce rivalry for Chinese market
Lyft: Partnered with GM to start testing autonomous Chevy Bolt taxis within the next year
Uber: In September, began testing autonomous Ford Fusions in Pittsburgh—the first self-driving fleet available to the public in the U.S.
STARTUPS
Comma.ai: Andreessen Horowitz–backed company making an inexpensive kit that turns regular cars into semi-autonomous ones
Mobileye: Israeli software maker that had partnered with Tesla to provide chips and software, but the two companies ended their collaboration in the wake of a fatal accident in May (Tesla cars currently still use Mobileye chips); has teamed up with Delphi Automotive to build a self-driving system by 2019
NextEV: Shanghai-based electric-car innovator headed in the U.S. by former Cisco exec Padmasree Warrior; set to show off a high-performance all-electric sports-car prototype this year
Nutonomy: Born at MIT and backed by Ford, makes self-driving cars, software, and autonomous robots; started testing driverless taxis in Singapore this summer
Quanergy: Silicon Valley–based company developing light- and object-sensing technology for self-driving cars; boasts a $1.59 billion valuation thanks to investors Samsung and Delphi
Zoox: Palo Alto startup behind the Boz, a fully autonomous concept vehicle (still in the design phase) with inward-facing seats similar to a train car; company valued at around $1 billion
A version of this article appeared in the November 2016 issue of Fast Company magazine.
MIT Technology Review
by Jamie Condliffe
June 30, 2016
AI systems are modeled after human biology, but their vision systems still work quite differently.
Computer vision has been having a moment. No longer does an image-recognition algorithm make dumb mistakes when looking at the world: these days, it can accurately tell you that an image contains a cat. But the way it pulls off the party trick may not be as familiar to humans as we thought.
Most computer vision systems identify features in images using neural networks, which are inspired by our own biology and are very similar in their architecture—only here, the biological sensing and neurons are swapped out for mathematical functions. Now a study by researchers at Facebook and Virginia Tech says that despite those similarities, we should be careful in assuming that both work in the same way.
To see exactly what was happening as both humans and AI analyzed an image, the researchers studied where the two focused their attention. Both were provided with blurred images and asked questions about what was happening in the picture—“Where is the cat?” for instance. Parts of the image could be selectively sharpened, one at a time, and both human and AI did so until they could answer the question. The team repeated the tests using several different algorithms.
Obviously they could both provide answers—but the interesting result is how they did so. On a scale of 1 to -1, where 1 is total agreement and -1 total disagreement, two humans scored on average 0.63 in terms of where they focused their attention across the image. With a human and an AI, the average dropped to 0.26.
In other words: the AI and human were both looking at the same image, both being asked the same question, both getting it right—but using different visual features to arrive at those same conclusions.
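That agreement scale can be illustrated with a toy correlation between two attention maps. This is a simplification: the study’s published metric is a rank correlation over attention maps, and the maps below are invented for illustration.

```python
import numpy as np

def attention_agreement(map_a, map_b):
    """Correlation between two attention maps: 1 means total agreement
    on where to look across the image, -1 total disagreement."""
    a = np.ravel(map_a).astype(float)
    b = np.ravel(map_b).astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical attention over four image regions:
human = [0.9, 0.1, 0.0, 0.0]   # human sharpens the first patch
model = [0.2, 0.1, 0.7, 0.0]   # model attends mostly elsewhere
print(attention_agreement(human, human))   # identical maps agree perfectly
print(attention_agreement(human, model))   # different features, low agreement
```

Two humans averaging 0.63 on this scale still attend to broadly similar regions; a human-model average of 0.26 means far less overlap.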
This is an explicit result about a phenomenon that researchers had already hinted at. In 2014, a team from Cornell University and the University of Wyoming showed that it was possible to create images that fool AI into seeing something, simply by creating a picture made up of the strong visual features that the software had come to associate with an object. Humans have a large pool of common-sense knowledge to draw on, which means they don’t get caught out by such tricks. That’s something researchers are trying to incorporate into a new breed of intelligent software that understands the semantic visual world.
But just because computers don’t use the same approach doesn’t necessarily mean they’re inferior. In fact, they may be better off ignoring the human approach altogether.
The kinds of neural networks used in computer vision usually employ a technique known as supervised learning to work out what’s happening in an image. Ultimately, their ability to associate a complex combination of patterns, textures, and shapes with the name of an object is made possible by providing the AI with a training set of images whose contents have already been labeled by a human.
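The labeled-training-set idea can be sketched with a perceptron standing in for a full neural network; the tiny feature vectors and their human-supplied labels below are invented.

```python
import numpy as np

# Toy "images": hand-made feature vectors, with labels supplied by a human.
features = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]])
labels   = np.array([1, 1, 0, 0])          # 1 = "cat", 0 = "not cat"

w, b = np.zeros(2), 0.0
for _ in range(20):                        # supervised training loop
    for x, y in zip(features, labels):
        pred = 1 if x @ w + b > 0 else 0
        w += (y - pred) * x                # nudge weights toward the label
        b += (y - pred)

preds = [1 if x @ w + b > 0 else 0 for x in features]
print(preds)  # [1, 1, 0, 0] -- the model reproduces the human-supplied labels
```

The association between input patterns and names exists only because a human labeled the training set first; that is what "supervised" means here.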
But teams at Facebook and Google’s DeepMind have been experimenting with unsupervised learning systems that ingest content from video and images to learn what human faces and everyday objects look like, without any human intervention. Magic Pony, recently bought by Twitter, also shuns supervised learning, instead learning to recognize statistical patterns in images to teach itself what edges, textures, and other features should look like.
In these cases, it’s perhaps even less likely that the knowledge of the AI will be generated through a process aping that of a human. Once inspired by human brains, AI may beat us by simply being itself.
The great chain of being sure about things
The technology behind bitcoin lets people who do not know or trust each other build a dependable ledger. This has implications far beyond the cryptocurrency
WHEN the Honduran police came to evict her in 2009 Mariana Catalina Izaguirre had lived in her lowly house for three decades. Unlike many of her neighbours in Tegucigalpa, the country’s capital, she even had an official title to the land on which it stood. But the records at the country’s Property Institute showed another person registered as its owner, too—and that person convinced a judge to sign an eviction order. By the time the legal confusion was finally sorted out, Ms Izaguirre’s house had been demolished.
It is the sort of thing that happens every day in places where land registries are badly kept, mismanaged and/or corrupt—which is to say across much of the world. This lack of secure property rights is an endemic source of insecurity and injustice. It also makes it harder to use a house or a piece of land as collateral, stymying investment and job creation.
Such problems seem worlds away from bitcoin, a currency based on clever cryptography which has a devoted following among mostly well-off, often anti-government and sometimes criminal geeks. But the cryptographic technology that underlies bitcoin, called the “blockchain”, has applications well beyond cash and currency. It offers a way for people who do not know or trust each other to create a record of who owns what that will compel the assent of everyone concerned. It is a way of making and preserving truths.
That is why politicians seeking to clean up the Property Institute in Honduras have asked Factom, an American startup, to provide a prototype of a blockchain-based land registry. Interest in the idea has also been expressed in Greece, which has no proper land registry and where only 7% of the territory is adequately mapped.
A place in the past
Other applications for blockchain and similar “distributed ledgers” range from thwarting diamond thieves to streamlining stockmarkets: the NASDAQ exchange will soon start using a blockchain-based system to record trades in privately held companies. The Bank of England, not known for technological flights of fancy, seems electrified: distributed ledgers, it concluded in a research note late last year, are a “significant innovation” that could have “far-reaching implications” in the financial industry.
The politically minded see the blockchain reaching further than that. When co-operatives and left-wingers gathered for this year’s OuiShare Fest in Paris to discuss ways that grass-roots organisations could undermine giant repositories of data like Facebook, the blockchain made it into almost every speech. Libertarians dream of a world where more and more state regulations are replaced with private contracts between individuals—contracts which blockchain-based programming would make self-enforcing.
The blockchain began life in the mind of Satoshi Nakamoto, the brilliant, pseudonymous and so far unidentified creator of bitcoin—a “purely peer-to-peer version of electronic cash”, as he put it in a paper published in 2008. To work as cash, bitcoin had to be able to change hands without being diverted into the wrong account and to be incapable of being spent twice by the same person. To fulfil Mr Nakamoto’s dream of a decentralised system the avoidance of such abuses had to be achieved without recourse to any trusted third party, such as the banks which stand behind conventional payment systems.
It is the blockchain that replaces this trusted third party. A database that contains the payment history of every bitcoin in circulation, the blockchain provides proof of who owns what at any given juncture. This distributed ledger is replicated on thousands of computers—bitcoin’s “nodes”—around the world and is publicly available. But for all its openness it is also trustworthy and secure. This is guaranteed by the mixture of mathematical subtlety and computational brute force built into its “consensus mechanism”—the process by which the nodes agree on how to update the blockchain in the light of bitcoin transfers from one person to another.
Let us say that Alice wants to pay Bob for services rendered. Both have bitcoin “wallets”—software which accesses the blockchain rather as a browser accesses the web, but does not identify the user to the system. The transaction starts with Alice’s wallet proposing that the blockchain be changed so as to show Alice’s wallet a little emptier and Bob’s a little fuller.
The network goes through a number of steps to confirm this change. As the proposal propagates over the network the various nodes check, by inspecting the ledger, whether Alice actually has the bitcoin she now wants to spend. If everything looks kosher, specialised nodes called miners will bundle Alice’s proposal with other similarly reputable transactions to create a new block for the blockchain.
This entails repeatedly feeding the data through a cryptographic “hash” function which boils the block down into a string of digits of a given length (see diagram). Like a lot of cryptography, this hashing is a one-way street. It is easy to go from the data to their hash; impossible to go from the hash back to the data. But though the hash does not contain the data, it is still unique to them. Change what goes into the block in any way—alter a transaction by a single digit—and the hash comes out different.
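The one-way street is easy to see with SHA-256, the hash function bitcoin itself uses; the transaction strings here are invented for illustration.

```python
import hashlib

def block_hash(data: str) -> str:
    """Boil block data down to a fixed-length string of digits:
    easy to compute, infeasible to reverse."""
    return hashlib.sha256(data.encode()).hexdigest()

print(block_hash("Alice pays Bob 0.5 BTC"))
# Alter the transaction by a single digit and the hash is
# completely different:
print(block_hash("Alice pays Bob 0.6 BTC"))
```

The same input always yields the same 64-character hash, but there is no way to recover the transaction from it.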
Running in the shadows
That hash is put, along with some other data, into the header of the proposed block. This header then becomes the basis for an exacting mathematical puzzle which involves using the hash function yet again. This puzzle can only be solved by trial and error. Across the network, miners grind through trillions and trillions of possibilities looking for the answer. When a miner finally comes up with a solution other nodes quickly check it (that’s the one-way street again: solving is hard but checking is easy), and each node that confirms the solution updates the blockchain accordingly. The hash of the header becomes the new block’s identifying string, and that block is now part of the ledger. Alice’s payment to Bob, and all the other transactions the block contains, are confirmed.
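The trial-and-error puzzle can be sketched in miniature: find a number (a “nonce”) that, hashed together with the header, produces a hash starting with enough zeros. This is a toy version with a low fixed difficulty; real bitcoin double-hashes a structured binary header against an adjustable target.

```python
import hashlib

def mine(header: str, difficulty: int = 4) -> int:
    """Grind through nonces until the hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine("some block header")
# Solving took thousands of attempts; checking takes a single hash:
check = hashlib.sha256(f"some block header{nonce}".encode()).hexdigest()
print(check.startswith("0000"))  # True
```

Each extra zero of difficulty multiplies the expected work by sixteen, which is how the real network keeps block production slow no matter how fast the miners get.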
This puzzle stage introduces three things that add hugely to bitcoin’s security. One is chance. You cannot predict which miner will solve a puzzle, and so you cannot predict who will get to update the blockchain at any given time, except in so far as it has to be one of the hard-working miners, not some random interloper. This makes cheating hard.
The second addition is history. Each new header contains a hash of the previous block’s header, which in turn contains a hash of the header before that, and so on and so on all the way back to the beginning. It is this concatenation that makes the blocks into a chain. Starting from all the data in the ledger it is trivial to reproduce the header for the latest block. Make a change anywhere, though—even back in one of the earliest blocks—and that changed block’s header will come out different. This means that so will the next block’s, and all the subsequent ones. The ledger will no longer match the latest block’s identifier, and will be rejected.
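A toy chain shows why history protects the ledger: each header folds in the previous one, so an edit anywhere ripples through every later identifier. The block contents are invented, and real bitcoin headers carry more fields than this sketch.

```python
import hashlib

def sha(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def build_headers(blocks):
    """Each header hashes its block's data together with the previous header."""
    headers, prev = [], ""
    for data in blocks:
        prev = sha(prev + sha(data))
        headers.append(prev)
    return headers

honest = build_headers(["Alice pays Bob", "Bob pays Carol", "Carol pays Dan"])
# Rewrite the earliest transaction and every subsequent header changes:
forged = build_headers(["Alice pays Eve", "Bob pays Carol", "Carol pays Dan"])
print(honest[-1] == forged[-1])  # False: the forged ledger no longer matches
```

Because the final identifier is publicly known, a single changed digit anywhere in the history makes the whole forged chain stand out.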
Is there a way round this? Imagine that Alice changes her mind about paying Bob and tries to rewrite history so that her bitcoin stays in her wallet. If she were a competent miner she could solve the requisite puzzle and produce a new version of the blockchain. But in the time it took her to do so, the rest of the network would have lengthened the original blockchain. And nodes always work on the longest version of the blockchain there is. This rule stops the occasions when two miners find the solution almost simultaneously from causing anything more than a temporary fork in the chain. It also stops cheating. To force the system to accept her new version Alice would need to lengthen it faster than the rest of the system was lengthening the original. Short of controlling more than half the computers—known in the jargon as a “51% attack”—that should not be possible.
Dreams are sometimes catching
Leaving aside the difficulties of trying to subvert the network, there is a deeper question: why bother to be part of it at all? Because the third thing the puzzle-solving step adds is an incentive. Forging a new block creates new bitcoin. The winning miner earns 25 bitcoin, worth about $7,500 at current prices.
All this cleverness does not, in itself, make bitcoin a particularly attractive currency. Its value is unstable and unpredictable (see chart), and the total amount in circulation is deliberately limited. But the blockchain mechanism works very well. According to blockchain.info, a website that tracks such things, on an average day more than 120,000 transactions are added to the blockchain, representing about $75m exchanged. There are now 380,000 blocks; the ledger weighs in at nearly 45 gigabytes.
Most of the data in the blockchain are about bitcoin. But they do not have to be. Mr Nakamoto has built what geeks call an “open platform”—a distributed system the workings of which are open to examination and elaboration. The paragon of such platforms is the internet itself; other examples include operating systems like Android or Windows. Applications that depend on basic features of the blockchain can thus be developed without asking anybody for permission or paying anyone for the privilege. “The internet finally has a public data base,” says Chris Dixon of Andreessen Horowitz, a venture-capital firm which has financed several bitcoin start-ups, including Coinbase, which provides wallets, and 21, which makes bitcoin-mining hardware for the masses.
For now blockchain-based offerings fall in three buckets. The first takes advantage of the fact that any type of asset can be transferred using the blockchain. One of the startups betting on this idea is Colu. It has developed a mechanism to “dye” very small bitcoin transactions (called “bitcoin dust”) by adding extra data to them so that they can represent bonds, shares or units of precious metals.
Protecting land titles is an example of the second bucket: applications that use the blockchain as a truth machine. Bitcoin transactions can be combined with snippets of additional information which then also become embedded in the ledger. It can thus be a registry of anything worth tracking closely. Everledger uses the blockchain to protect luxury goods; for example it will stick on to the blockchain data about a stone’s distinguishing attributes, providing unchallengeable proof of its identity should it be stolen. Onename stores personal information in a way that is meant to do away with the need for passwords; CoinSpark acts as a notary. Note, though, that for these applications, unlike for pure bitcoin transactions, a certain amount of trust is required; you have to believe the intermediary will store the data accurately.
It is the third bucket that contains the most ambitious applications: “smart contracts” that execute themselves automatically under the right circumstances. Bitcoin can be “programmed” so that it only becomes available under certain conditions. One use of this ability is to defer the payment miners get for solving a puzzle until 99 more blocks have been added—which provides another incentive to keep the blockchain in good shape.
Lighthouse, a project started by Mike Hearn, one of bitcoin’s leading programmers, is a decentralised crowdfunding service that uses these principles. If enough money is pledged to a project it all goes through; if the target is never reached, none does. Mr Hearn says his scheme will both be cheaper than non-bitcoin competitors and also more independent, as governments will be unable to pull the plug on a project they don’t like.
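The all-or-nothing rule Lighthouse enforces can be stated in a few lines of ordinary code. This is a sketch of the rule only, not of the bitcoin programming that makes it self-enforcing; the names and amounts are invented.

```python
def settle(pledges: dict, target: float) -> dict:
    """All-or-nothing crowdfunding: if pledges meet the target the project
    is paid in full; otherwise every pledge goes back to its backer."""
    total = sum(pledges.values())
    if total >= target:
        return {"project": total}
    return dict(pledges)  # refunds: nobody pays unless the goal is reached

print(settle({"ann": 60, "ben": 50}, target=100))  # {'project': 110}
print(settle({"ann": 60, "ben": 20}, target=100))  # refunds both pledges
```

The point of encoding this in bitcoin rather than a server is that no intermediary, and no government, can hold the pledged funds or override the outcome.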
Energy is contagious
The advent of distributed ledgers opens up an “entirely new quadrant of possibilities”, in the words of Albert Wenger of USV, a New York venture firm that has invested in startups such as OpenBazaar, a middleman-free peer-to-peer marketplace. But for all that the blockchain is open and exciting, sceptics argue that its security may yet be fallible and its procedures may not scale. What works for bitcoin and a few niche applications may be unable to support thousands of different services with millions of users.
Though Mr Nakamoto’s subtle design has so far proved impregnable, academic researchers have identified tactics that might allow a sneaky and well financed miner to compromise the blockchain without direct control of 51% of it. And getting control of an appreciable fraction of the network’s resources looks less unlikely than it used to. Once the purview of hobbyists, bitcoin mining is now dominated by large “pools”, in which small miners share their efforts and rewards, and the operators of big data centres, many based in areas of China, such as Inner Mongolia, where electricity is cheap.
Another worry is the impact on the environment. With no other way to establish the bona fides of miners, the bitcoin architecture forces them to do a lot of hard computing; this “proof of work”, without which there can be no reward, ensures that all concerned have skin in the game. But it adds up to a lot of otherwise pointless computing. According to blockchain.info the network’s miners are now trying 450 thousand trillion solutions per second. And every calculation takes energy.
Because miners keep details of their hardware secret, nobody really knows how much power the network consumes. If everyone were using the most efficient hardware, its annual electricity usage might be about two terawatt-hours—a bit more than the amount used by the 150,000 inhabitants of Kings County in California’s Central Valley. Make really pessimistic assumptions about the miners’ efficiency, though, and you can get the figure up to 40 terawatt-hours, almost two-thirds of what the 10m people in Los Angeles County get through. That surely overstates the problem; still, the more widely people use bitcoin, the worse the waste could get.
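The spread between those two estimates comes down to a single unknown: joules per hash. The efficiency figures below are back-solved assumptions for illustration, not reported measurements.

```python
HASHRATE = 450e15                    # hashes per second, per blockchain.info
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_twh(joules_per_hash: float) -> float:
    """Annual electricity use implied by an assumed hardware efficiency."""
    watts = HASHRATE * joules_per_hash
    return watts * SECONDS_PER_YEAR / 3.6e15   # joules -> terawatt-hours

print(annual_twh(0.5e-9))   # efficient hardware: roughly 2 TWh a year
print(annual_twh(10e-9))    # pessimistic assumption: roughly 40 TWh
```

A twenty-fold uncertainty in hardware efficiency translates directly into a twenty-fold uncertainty in the network’s power bill.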
Yet for all this profligacy bitcoin remains limited. Because Mr Nakamoto decided to cap the size of a block at one megabyte, or about 1,400 transactions, it can handle only around seven transactions per second, compared to the 1,736 a second Visa handles in America. Blocks could be made bigger; but bigger blocks would take longer to propagate through the network, worsening the risks of forking.
Earlier platforms have surmounted similar problems. When millions went online after the invention of the web browser in the 1990s pundits predicted the internet would grind to a standstill: eppur si muove (“and yet it moves”). Similarly, the bitcoin system is not standing still. Specialised mining computers can be very energy efficient, and less energy-hungry alternatives to the proof-of-work mechanism have been proposed. Developers are also working on an add-on called “Lightning” which would handle large numbers of smaller transactions outside the blockchain. Faster connections will let bigger blocks propagate as quickly as small ones used to.
The problem is not so much a lack of fixes. It is that the network’s “bitcoin improvement process” makes it hard to choose one. Change requires community-wide agreement, and these are not people to whom consensus comes easily. Consider the civil war being waged over the size of blocks. One camp frets that quickly increasing the block size will lead to further concentration in the mining industry and turn bitcoin into more of a conventional payment processor. The other side argues that the system could crash as early as next year if nothing is done, with transactions taking hours.
A break in the battle
Mr Hearn and Gavin Andresen, another bitcoin grandee, are leaders of the big-block camp. They have called on mining firms to install a new version of bitcoin which supports a much bigger block size. Some miners who do, though, appear to be suffering cyber-attacks. And in what seems a concerted effort to show the need for, or the dangers of, such an upgrade, the system is being driven to its limits by vast numbers of tiny transactions.
This has all given new momentum to efforts to build an alternative to the bitcoin blockchain, one that might be optimised for the storing of distributed ledgers rather than for the running of a cryptocurrency. MultiChain, a build-your-own-blockchain platform offered by Coin Sciences, another startup, demonstrates what is possible. As well as offering the wherewithal to build a public blockchain like bitcoin’s, it can also be used to build private chains open only to vetted users. If all the users start off trusted the need for mining and proof-of-work is reduced or eliminated, and a currency attached to the ledger becomes an optional extra.
The first industry to adopt such sons of blockchain may well be the one whose failings originally inspired Mr Nakamoto: finance. In recent months there has been a rush of bankerly enthusiasm for private blockchains as a way of keeping tamper-proof ledgers. One of the reasons, irony of ironies, is that this technology born of anti-government libertarianism could make it easier for the banks to comply with regulatory requirements on knowing their customers and anti-money-laundering rules. But there is a deeper appeal.
Industrial historians point out that new powers often become available long before the processes that best use them are developed. When electric motors were first developed they were deployed like the big hulking steam engines that came before them. It took decades for manufacturers to see that lots of decentralised electric motors could reorganise every aspect of the way they made things. In its report on digital currencies, the Bank of England sees something similar afoot in the financial sector. Thanks to cheap computing financial firms have digitised their inner workings; but they have not yet changed their organisations to match. Payment systems are mostly still centralised: transfers are cleared through the central bank. When financial firms do business with each other, the hard work of synchronising their internal ledgers can take several days, which ties up capital and increases risk.
Distributed ledgers that settle transactions in minutes or seconds could go a long way to solving such problems and fulfilling the greater promise of digitised banking. They could also save banks a lot of money: according to Santander, a bank, by 2022 such ledgers could cut the industry’s bills by up to $20 billion a year. Vendors still need to prove that they could deal with the far-higher-than-bitcoin transaction rates that would be involved; but big banks are already pushing for standards to shape the emerging technology. One of them, UBS, has proposed the creation of a standard “settlement coin”. The first order of business for R3 CEV, a blockchain startup in which UBS has invested alongside Goldman Sachs, JPMorgan and 22 other banks, is to develop a standardised architecture for private ledgers.
The banks’ problems are not unique. All sorts of companies and public bodies suffer from hard-to-maintain and often incompatible databases and the high transaction costs of getting them to talk to each other. This is the problem Ethereum, arguably the most ambitious distributed-ledger project, wants to solve. The brainchild of Vitalik Buterin, a 21-year-old Canadian programming prodigy, Ethereum’s distributed ledger can deal with more data than bitcoin’s can. And it comes with a programming language that allows users to write more sophisticated smart contracts, thus creating invoices that pay themselves when a shipment arrives or share certificates which automatically send their owners dividends if profits reach a certain level. Such cleverness, Mr Buterin hopes, will allow the formation of “decentralised autonomous organisations”—virtual companies that are basically just sets of rules running on Ethereum’s blockchain.
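A self-paying invoice of the kind Mr Buterin describes can be sketched as a small state machine. Real Ethereum contracts run on-chain in the EVM, typically written in a language such as Solidity; the class, events and ledger below are invented stand-ins.

```python
class SelfPayingInvoice:
    """Toy contract that pays out automatically when its condition is met."""
    def __init__(self, amount: int, payee: str):
        self.amount, self.payee, self.paid = amount, payee, False

    def on_event(self, event: str, ledger: dict):
        # The rule is fixed at creation; no one has to approve the payment.
        if event == "shipment_arrived" and not self.paid:
            ledger[self.payee] = ledger.get(self.payee, 0) + self.amount
            self.paid = True

ledger = {}
invoice = SelfPayingInvoice(100, "supplier")
invoice.on_event("order_placed", ledger)      # nothing happens yet
invoice.on_event("shipment_arrived", ledger)  # the invoice pays itself
print(ledger)  # {'supplier': 100}
```

Chain enough such rules together and you approach Mr Buterin’s “decentralised autonomous organisations”: companies that are nothing but code responding to events.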
One of the areas where such ideas could have radical effects is in the “internet of things”—a network of billions of previously mute everyday objects such as fridges, doorstops and lawn sprinklers. A recent report from IBM entitled “Device Democracy” argues that it would be impossible to keep track of and manage these billions of devices centrally, and unwise to try; such attempts would make them vulnerable to hacking attacks and government surveillance. Distributed registers seem a good alternative.
The sort of programmability Ethereum offers does not just allow people’s property to be tracked and registered. It allows it to be used in new sorts of ways. Thus a car-key embedded in the Ethereum blockchain could be sold or rented out in all manner of rule-based ways, enabling new peer-to-peer schemes for renting or sharing cars. Further out, some talk of using the technology to make by-then-self-driving cars self-owning, to boot. Such vehicles could stash away some of the digital money they make from renting out their keys to pay for fuel, repairs and parking spaces, all according to preprogrammed rules.
What would Rousseau have said?
Unsurprisingly, some think such schemes overly ambitious. Ethereum’s first (“genesis”) block was only mined in August and, though there is a little ecosystem of start-ups clustered around it, Mr Buterin admitted in a recent blog post that it is somewhat short of cash. But the details of which particular blockchains end up flourishing matter much less than the broad enthusiasm for distributed ledgers that is leading both start-ups and giant incumbents to examine their potential. Despite society’s inexhaustible ability to laugh at accountants, the workings of ledgers really do matter.
Today’s world is deeply dependent on double-entry book-keeping. Its standardised system of recording debits and credits is central to any attempt to understand a company’s financial position. Whether modern capitalism absolutely required such book-keeping in order to develop, as Werner Sombart, a German sociologist, claimed in the early 20th century, is open to question. Though the system began among the merchants of renaissance Italy, which offers an interesting coincidence of timing, it spread round the world much more slowly than capitalism did, becoming widely used only in the late 19th century. But there is no question that the technique is of fundamental importance not just as a record of what a company does, but as a way of defining what one can be.
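The double-entry principle described above can be reduced to a few lines: every economic event is recorded twice, as a debit to one account and an equal credit to another, so the books stay in balance by construction. A minimal sketch (account names are illustrative):

```python
# Minimal sketch of double-entry book-keeping: one transaction, two
# entries of equal size, so total debits always equal total credits.
from collections import defaultdict

class Books:
    def __init__(self):
        self.debits = defaultdict(int)
        self.credits = defaultdict(int)

    def post(self, debit_account: str, credit_account: str, amount: int) -> None:
        """Record one economic event as a matched debit/credit pair."""
        self.debits[debit_account] += amount
        self.credits[credit_account] += amount

    def in_balance(self) -> bool:
        return sum(self.debits.values()) == sum(self.credits.values())

books = Books()
books.post("inventory", "cash", 500)       # buy stock for cash
books.post("cash", "sales_revenue", 800)   # sell goods for cash
print(books.in_balance())  # True: the invariant holds after every post
```

The balance invariant is what makes the technique so useful as a check: any discrepancy between total debits and total credits signals an error somewhere in the record.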
Ledgers that no longer need to be maintained by a company—or a government—may in time spur new changes in how companies and governments work, in what is expected of them and in what can be done without them. A realisation that systems without centralised record-keeping can be just as trustworthy as those that have them may bring radical change.
Such ideas can expect some eye-rolling—blockchains are still a novelty applicable only in a few niches, and the doubts as to how far they can spread and scale up may prove well founded. They can also expect resistance. Some of bitcoin’s critics have always seen it as the latest techy attempt to spread a “Californian ideology” which promises salvation through technology-induced decentralisation while ignoring and obfuscating the realities of power—and happily concentrating vast wealth in the hands of an elite. The idea of making trust a matter of coding, rather than of democratic politics, legitimacy and accountability, is not necessarily an appealing or empowering one.
At the same time, a world with record-keeping mathematically immune to manipulation would have many benefits. Evicted Ms Izaguirre would be better off; so would many others in many other settings. If blockchains have a fundamental paradox, it is this: by offering a way of setting the past and present in cryptographic stone, they could make the future a very different place.
September 24, 2015
When Harlequins rugby club was preparing for last season’s premier league, it turned to an unexpected partner in its search for a match-winning formula.
Deloitte, one of the big four professional services firms better known for its work with blue-chip companies, was hired by the English club to help make sense of the vast volumes of information it collects on its players.
Harlequins’ data sources include body sensors that track movements on the pitch and devices that monitor nutrition. Deloitte uses a bespoke data analytics tool to try to propel the team to a competitive advantage based on the statistics it collects.
Technology is reshaping the accounting, audit and consulting divisions that are the bread and butter of professional services firms. They are trying to fight back, launching partnerships with technology companies, picking dynamic start-ups to invest in and increasingly employing techniques that are the foundations on which innovative technology companies such as Google and Amazon are built.
Tudor Aw, partner and technology sector head at KPMG Europe, says: “Companies such as Google and Amazon have three core assets: data storage, data analytics and cloud technology. They underpin the business model that we need to embrace for the future.”
Innovate or die is the stark message for the professional services firms. Last year PwC signed a joint venture with Google to combine Google’s innovation and technology platform with PwC’s industry experience and corporate insight.
Also last year, KPMG signed a joint venture with McLaren to use predictive analytics in its audit and consulting work.
Richard Oldfield, head of strategy at PwC UK, says: “There is no single technological threat to the professional services industry. It’s a tsunami of threats: data analytics, artificial intelligence and cyber security.”
Online accounting services offered by the traditional firms are ripe for disruption, as technology moves activities online and lowers barriers to entry. They are competing with established brands such as SAP, Salesforce and Oracle, as well as newer businesses such as Square, the payments company launched by Twitter founder Jack Dorsey, and Receipt Bank, which removes the need for manual data entry of bills and receipts.
In response, last year KPMG spent £40m to develop cloud-based software that can allow businesses to go online and prepare their accounts, do their bookkeeping, administer their payrolls, and file VAT and corporate tax returns — for a monthly fee starting at £150.
As more and more data are stored digitally, opportunities for data analytics multiply. Nowhere more so than in audit, where a huge shift is taking place because entire data sets, such as company journals or expense claims, can now be analysed.
Stephen Griggs, managing partner of audit and risk advisory at Deloitte UK, says: “We are not sampling data; we’re analysing the whole population of data. We’ll illuminate apparent anomalies and automate the more basic testing functions to enable our people to spend more time on the trickier areas.”
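The shift Mr Griggs describes, from sampling to scoring the whole population, can be illustrated with a simple outlier check. The sketch below is a generic illustration, not Deloitte's actual tooling; the field names, data and threshold are all assumptions.

```python
# Illustrative sketch of whole-population audit analytics: score every
# expense claim against the population's statistics and surface the
# outliers for human review, rather than examining a random sample.
import statistics

def flag_anomalies(claims, z_threshold=3.0):
    """Return claims whose amounts sit far from the population mean."""
    amounts = [c["amount"] for c in claims]
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [c for c in claims
            if stdev > 0 and abs(c["amount"] - mean) / stdev > z_threshold]

# 200 routine claims plus one suspicious outlier
claims = [{"id": i, "amount": 100 + (i % 7)} for i in range(200)]
claims.append({"id": 999, "amount": 25_000})
suspicious = flag_anomalies(claims)
print([c["id"] for c in suspicious])  # [999]
```

Real audit platforms layer many such tests (duplicates, round numbers, weekend postings) over the full ledger; the point of the sketch is only that no transaction escapes inspection, which sampling cannot guarantee.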
Market participants think it is possible for the big accountancy firms to work alongside the established technological names and disruptive start-ups, without being cut out altogether.
Fiona Czerniawska, managing director at Source Consulting, says that while it is likely that firms such as Google and Amazon will use techniques they have pioneered for analysing big data to offer new services in the marketplace, “it’s hard to see them penetrating the business-to-business sector, and professional services in particular because so much still depends on personal chemistry between client and professional”.
PwC’s Mr Oldfield says trust is very important, noting there is perhaps less of a trust premium attached to big technology groups. And data analytics can only take you so far in audit — there is also the judgment element.
Source Consulting’s Ms Czerniawska believes that, while high-tech start-ups will begin to re-engineer parts of the audit process in the coming years, even these firms will struggle to replace the value a human being can add.
She says: “However sophisticated our algorithms, a part of the professional services market will always be human — the only question is: How much of it?”
Experts rethink belief that tech always lifts employment as machines take on skills once thought uniquely human.
By TIMOTHY AEPPEL in WSJ
Feb. 24, 2015
CAMBRIDGE, Mass.—Economist Erik Brynjolfsson had long dismissed fears that automation would soon devour jobs that required the uniquely human skills of judgment and dexterity.
Many of his colleagues at the Massachusetts Institute of Technology, where a big chunk of tomorrow’s technology is conceived and built, have spent their careers trying to prove such machines are within reach.
When Google Inc. announced in 2010 that a specially equipped fleet of driverless Toyota Prius cars had safely traveled more than 1,000 miles of U.S. roads, Mr. Brynjolfsson realized he might be wrong.
“Something had changed,” Mr. Brynjolfsson said, recalling his astonishment at machines navigating the many unpredictable moments that face drivers.
From steam engines to robotic welders and ATMs, technology has long displaced humans—always creating new, often higher-skill jobs in its wake.
But recent advances—everything from driverless cars to computers that can read human facial expressions—have pushed experts like Mr. Brynjolfsson to look anew at the changes automation will bring to the labor force as robots wiggle their way into higher reaches of the workplace.
They wonder if automation technology is near a tipping point, when machines finally master traits that have kept human workers irreplaceable.
“It’s gotten easier to substitute machines for many kinds of labor. We should be able to have a lot more wealth with less labor,” Mr. Brynjolfsson said. “But it could happen that there are people who want to work but can’t.”
In the Australian Outback, for example, mining giant Rio Tinto uses self-driving trucks and drills that need no human operators at iron ore mines. Automated trains will soon carry the ore to a port 300 miles away.
The Port of Los Angeles is installing equipment that could cut in half the number of longshoremen needed in a workplace already highly automated.
Computers do legal research, write stock reports and news stories, as well as translate conversations; at car dealers, they generate online advertising; and, at banks, they churn out government-required documents to flag potential money laundering—all jobs done by human workers a short time ago.
Microsoft co-founder Bill Gates, speaking in Washington last year, said automation threatens all manner of workers, from drivers to waiters to nurses. “I don’t think people have that in their mental model,” he said.
Gartner Inc., the technology research firm, has predicted a third of all jobs will be lost to automation within a decade. And within two decades, economists at Oxford University forecast nearly half of the current jobs will be performed with machine technology.
“When I was in grad school, you knew if you worried about technology, you were viewed as a dummy—because it always helps people,” MIT economist David Autor said. But rather than killing jobs indiscriminately, Mr. Autor’s research found automation commandeering such middle-class work as clerk and bookkeeper, while creating jobs at the high- and low-end of the market.
This is one reason the labor market has polarized and wages have stagnated over the past 15 years, Mr. Autor said. The concern among economists shouldn’t be machines soon replacing humans, he said: “The real problem I see with automation is that it’s contributed to growing inequality.”
Mr. Autor and other experts say much of the new technology consists of tools to make workers more productive, not replace them. Markets will yield new, yet-to-be-imagined work, they said, and, according to modern economic history, plenty of jobs.
The short- and long-term impact of technology is debated at MIT, where research labs hatch much of the hardware and software reshaping markets.
Landmark breakthroughs by MIT scientists include Marc Raibert’s development of robots with “dynamic” balance, without which the machines would tip over constantly. Another colleague, Rodney Brooks, made “Genghis” in the late 1980s, a six-legged clambering robot inspired by spiders and now in the Smithsonian.
MIT campus scientists and economists meet regularly to discuss the implications of their work. The talks started after Mr. Brynjolfsson co-wrote a 2011 book that spelled out his epiphany about automation’s new era. The book noted that only six years before Google’s startling driverless car announcement, fellow MIT economist and automation expert Frank Levy had published a well-regarded book that said driverless cars were impossible.
Mr. Levy wasn’t happy to be singled out that way, he said, and was hardly a Luddite. The subtitle of his book is: “How Computers Are Creating the Next Job Market.” Mr. Levy stands by the idea that automation’s advance to uniquely human tasks, including driving, won’t happen as fast as many predict.
The debate inspired him to get economists and scientists talking. MIT robotics professor John Leonard helped set up the meetings, which are held about once a month. Topics span the prosaic—warehouse robots—to the philosophical—What happens if there is no meaningful work for humans?
A recent session featured Henrik Christensen, head of the Georgia Institute of Technology’s robotics program and a specialist in industrial robots. Automation is spreading to factories world-wide, and China recently overtook the U.S. as the world’s largest market for robots, he told the group, packed into a room in MIT’s Frank Gehry-designed computer-science center.
“Most truck drivers won’t have those jobs 10 years from now,” said Mr. Christensen, who is especially bullish on self-driving cars. He predicted children born today won’t need to learn to drive but will find plenty of jobs.
Automation may move slower than many expect. Bank ATMs spread quickly throughout the U.S. over the past three decades, but the number of tellers has only recently declined. In 1985, the U.S. employed 484,000 bank tellers, compared with 472,000 in 2007—reflecting the growth in banking. Since the recession, the number has fallen to 361,000.
Scott Stern, an MIT economist who spoke to the group last year, is among those who believe that technology may have reached a tipping point. He had once thought the latest wave of automation would crest gradually, he said, “playing out along the lines of prior technological transitions.”
But technological advances are moving at a faster speed, Mr. Stern said, with unpredictable results.
The big question under debate among scientists here is how close the breakthroughs are that will allow robots to interact with humans in complex tasks.
One group at MIT says computing capacity is the only barrier. The world is building vast pools of data and computing muscle that, this view holds, will soon enable machines to do jobs that previously required skilled people.
Others say scientists are far from translating common sense, sight and dexterity into a string of code. Absent that, computing power won’t help.
Mr. Leonard, the robotics professor who helped initiate the talks with economists, is skeptical such breakthroughs will come soon. “There’s something about robots that makes people think we’re close to Arnold Schwarzenegger and the Terminator movies,” he said.
To make his point, Mr. Leonard mounted a camera on his car’s dashboard to record his daily commute. The idea was to collect an inventory of the sort of unexpected events a computer would face while driving.
Snapping open his laptop, Mr. Leonard showed a series of images from his dash-cam that would confound a machine, he said, including a left-hand turn in traffic. The 49-year-old professor said driverless cars won’t be able to navigate busy city streets in his lifetime.
Google recently gave Mr. Leonard a ride in a driverless car, and he compared the experience to the Wright brothers’ flight at Kitty Hawk. “It was a remarkable event,” he said of the first flight. “But look how long it took” to reach commercial air travel, he said, aviation’s lasting economic transformation.
The first time automation spawned fears of a jobless future might have been in the 19th century, when English textile workers attacked the first mechanical knitting machines. They were right to fear the contraptions, which eventually replaced them. Another wave of fear hit in the 1960s, when industrial robots began to eat into U.S. manufacturing for the first time.
Yet a recent survey of top economists by the University of Chicago found 88% either agreed or strongly agreed that automation has never historically led to reduced U.S. employment.
Economists in the minority are often said to embrace the so-called lump of labor fallacy: that the amount of work is finite. To date, the job market hasn’t worked that way. Some new machines are so efficient they push down prices and create more demand—which in many cases spawns more jobs, not fewer.
The invention of the automobile threw blacksmiths out of work, but created far more jobs building and selling cars. Displaced workers with obsolete skills are always hurt, but the total number of jobs has never declined over time.
That seems to be the case at Rio Tinto’s Australian mines. John McGagh, the company’s head of technology and innovation, said the surge of automation began about a decade ago, made possible by “more powerful computer chips and highly accurate GPS.”
The new equipment cut many driving jobs, of course. But the reductions will be partly offset by new types of work. The company now needs more network technicians, Mr. McGagh said, and “mechatronics engineers,” a hybrid of electrical and mechanical engineering that hardly existed five years ago.
The robot at Aloft hotel in Cupertino, Calif., runs errands. It trundles items to guests from the front desk, weaving pilotless through hotel corridors. The machine has a compartment kept locked until it reaches the guest’s door. Instead of knocking, it calls the room phone and waits.
No tipping is required. But a built-in screen asks guests for a rating. The robot chirps “Wheee” for a good rating, jiggling back and forth on its wheels.
“We considered having them talk,” said Steve Cousins, chief executive of Savioke, the robot’s creator. “But the issue is, if it talks to you—you’ll assume it understands you.” That remains a skill monopolized by hotel employees.
I was at a dairy meeting this week. Senior management of companies from all over the US come together to discuss market trends, regulatory issues and food safety. Their companies make the cheese, yogurt, dairy ingredients and other dairy products that we all eat. It is a highly experienced group from small, medium and large companies.
Normally the audience is very up-to-date and not much surprises them. This time one session really opened their eyes and gasps could be heard.
The speaker was Dan O’Conner from RetailNet Group, which describes itself as “a leading global retail intelligence resource helping retailers and suppliers plan tomorrow’s strategies today.” He showed digital trends and their impact on the dairy business. The audience was silent and fully attentive.
Dan spoke about how apps are changing customer shopping, and especially how they “wall off” users and make competition much more difficult. Apps will become the new loyalty programs.
He also described the growth of customer toolkits that allow for a much more personalized experience and easier click-and-buy. These toolkits include product scanning, shopping lists built from scanned products, recipes, coupons and delivery options. Delivery options are getting faster and faster, with a major emphasis on same-day delivery. Amazon is the leader in the retail industry and is forcing the major retailers to rethink everything.
Over time, total transparency will be expected by many customers. This includes the ability to see ingredients, the sustainability of supply, the origin of sensitive ingredients and so on, all available through apps. The impact on sourcing of ingredients is considerable.
It is expected that over the next few years volume will shift away from stores: about 25% of today’s volume will take place without store interaction, and of that, 15% will be pick-and-deliver. This highlights that companies need to provide both web and store channel options (“clicks and bricks”). The increased cost can be quite significant, and a totally new set of employee skills is needed. On top of that, supply chains will become more complex and will need to be flexible.
Finally, in all cases the rise of marketplaces is changing the dynamics. They have the fastest growth and need to be considered as a source of revenue. However, profitable pricing is harder there and requires new ways to add value.
For me, the reaction of the audience showed that most companies are still in denial about the changes happening in their industry. They are not watching the frontiers of their business. Not a good sign. Time to wake up.