Category Archives: Technologies

Boeing Deal Targets Flying Taxis


Proposed acquisition of Aurora Flight Sciences could pave way for fleets of pilotless flying taxis

Uber selected the Aurora eVTOL to explore potential flying taxis, with 50 due to be delivered by 2020. PHOTO: AURORA FLIGHT SCIENCES

Boeing Co. on Thursday said it plans to acquire Aurora Flight Sciences Corp., a maker of aerial drones and pilotless flying systems, in a move the company said could pave the way for fleets of small flying taxis.

Virginia-based Aurora is a specialist in autonomous systems that allow military and commercial aircraft to be flown remotely, including technology that automates many functions, and has been working with Uber Technologies Inc. on a new vehicle that would take off and land like a helicopter.

Flying taxi-style concepts have attracted interest and funding from technology and aerospace companies, though they face big hurdles, including regulations that would allow fleets to operate alongside commercial airliners and other air traffic, as well as batteries that can keep the vehicles aloft for several hours.

The purchase of Aurora would also expand Boeing’s reach in the new field of electric-powered aircraft.

Flying car concepts and designs have been around for a while, but some firms are looking to transform the idea into a point-to-point passenger vehicle service: a flying taxi. Graphic simulation: Volocopter (originally published June 20, 2017)

Boeing’s venture capital arm also this year invested in Zunum Aero, a Washington state-based startup that on Thursday unveiled its plan for an electric-hybrid regional passenger jet.

“These types of technology are helping pilots today and are a steppingstone to pilotless aircraft,” said John Langford, Aurora’s founder and chief executive, in a live-streamed interview.

Greg Hyslop, Boeing’s chief technology officer, said the work on autonomous systems also had potential benefits for a host of other industries looking to leverage the potential of so-called machine learning, where computers improve from experience.

The proposed Aurora deal marks Boeing’s second acquisition in less than a year involving autonomous systems following last December’s purchase of Liquid Robotics Inc., a maker of ships and undersea vehicles, and adds to a portfolio that includes aerial drone maker Insitu.

Terms for the proposed purchase of Aurora weren’t disclosed. The firm has more than 550 staff and will be run as an independent unit in Boeing’s engineering and technology business.

Aurora also produces composite parts for aircraft and other vehicles. Boeing is looking to produce more of its own parts as part of an insourcing strategy to reduce costs and potential disruption in its supply chain.

Boeing has been considering further acquisitions as part of the push to expand sales at its newly formed services arm to $50 billion over the next several years from around $14 billion at present.

Giving the Dry-Erase Whiteboard a High-Tech Makeover


Your videoconferences are going to become far more productive.

MIT Technology Review, by Elizabeth Woyke
September 11, 2017

As far as unproductive meetings go, it doesn’t get much worse than being on a videoconference when someone in a conference room 3,000 miles away starts scribbling on a dry-erase whiteboard you can’t see.

Organizations are increasingly hiring employees around the globe, and they need tools that help people collaborate across distances. But using a regular webcam to live-stream a whiteboard usually produces a mirror image in which the letters are displayed in reverse. Even when all participants can see the board correctly, shadows and reflections can make words and drawings frustratingly difficult to decipher.

Cyclops, a startup based in Cambridge, Massachusetts, is developing a new approach: an online videoconferencing service that uses computer-vision algorithms to clarify writing on whiteboards and flip their orientation so both the meeting host and remote viewers can read them easily. The technology, which took about two years to develop, reduces the “noise,” or visual distortions, produced by consumer-grade laptop cameras and webcams. Cyclops’s algorithms also scan whiteboards for marks that look like text or lines and enhance them to give viewers a more vibrant, higher-contrast image.
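
In rough outline, the enhancement Cyclops describes — un-mirroring the camera feed, evening out lighting, and emphasizing stroke-like marks — can be approximated in a few lines of OpenCV. The sketch below is a generic illustration of those steps, not Cyclops’s actual pipeline; the contrast and threshold parameters are invented starting points.

```python
# Illustrative whiteboard clean-up (not Cyclops's code): un-mirror the webcam
# frame, even out lighting, and emphasize dark ink strokes on a bright board.
import cv2

def enhance_whiteboard(frame_bgr):
    frame = cv2.flip(frame_bgr, 1)  # undo the mirror image from the webcam
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Local histogram equalization suppresses shadows and reflections.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    # Keep stroke-like marks; block size and offset are invented defaults.
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 15)

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    cv2.imwrite("board.png", enhance_whiteboard(frame))
cap.release()
```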

In recent years, Cisco, Google, and Microsoft have introduced large, Internet-connected touch-screen monitors that people can write on using a stylus. A crop of startups, including Altia Systems and Kaptivo, sell cameras that use Web-based software to make regular whiteboards interactive. But these cameras typically cost hundreds of dollars and require annual subscriptions to software packages, while digital whiteboards like Google’s Jamboard and Microsoft’s Surface Hub cost as much as $9,000. Both types of products also necessitate software configuration, hardware mounting, or both.

Cyclops says its tool is easier to use and more affordable. With a webcam and a Slack account, remote colleagues can be connected to a shared whiteboard with just a few clicks. Toggles located within the app let them customize the computer-vision algorithms and adjust the image’s contrast, sharpness, and texture. An augmented-reality feature allows them to draw and type comments on top of the whiteboard image. They can also take screenshots of the board and upload the picture directly to Slack or e-mail it.

For now, the tool is entirely free. Cyclops cofounder and CEO Waikit Lau says the startup might charge a small monthly subscription fee in the future.

Cyclops still has some kinks to work out. Currently, only eight people can log in to a videoconference at the same time. The platform’s whiteboard-enhancing algorithms also result in blurriness when trying to capture fast motions, such as people gesturing or walking past the camera. Lau says Cyclops hopes to solve that problem by refining its technology so it will be able to augment just the lines on the board rather than the entire video feed.

Early users say Cyclops’s video streams stutter at times and look grainier than those produced by other videoconferencing services. But they also think its whiteboard feature and convenience set it apart from competitors, such as Appear.in, Google Hangouts, GoToMeeting, Skype, WebEx, and Zoom. Before discovering Cyclops, Boston-area technology entrepreneur Paul Morville used Google’s Slides program to create online presentations that groups of people could view and edit together. He says Cyclops is a faster, simpler way to brainstorm: “You can be on a phone call and say, ‘I want to show you something on my whiteboard,’ and within 15 seconds, you’re projecting that whiteboard to the other party—and it’s clear and easy for them to read.”

Another satisfied user is Brett Terespolsky, the chief technology officer of Switch Innovation in Johannesburg, South Africa. He used to end meetings by e-mailing a photo of his marked-up whiteboard to remote colleagues. Now he opts for Cyclops, so his entire team can review and plan projects around a single whiteboard in real time.

How 3-D Printing Is Changing Health Care


Recent advances have made the technology more useful for planning surgery and creating drugs

The Mayo Clinic printed a model of a patient’s pelvis to plan surgery to remove a rare tumor that had spread to the base of the spine. PHOTO: MAYO CLINIC

A year ago, an 11-year-old girl named London Secor had surgery at the Mayo Clinic to remove a rare tumor located in her pelvis. In the past, surgeons would have considered amputating one of Ms. Secor’s legs, given that the tumor had spread to the bone and nerves of her sacrum and was encroaching on her hip socket.

That didn’t happen this time, however, due largely to advances in 3-D printing.

Before the surgery, Mayo printed a 3-D model of the girl’s pelvis, scaled to size and showing her bladder, veins, blood vessels, ureters and the tumor. Members of the medical team were able to hold the model in their hands, examine it and plot a surgical approach that would allow them to remove the entire tumor without taking her leg.

“There is nothing like holding a 3-D model to understand a complicated anatomical procedure,” says Peter Rose, the surgeon who performed the operation on Ms. Secor, an avid swimmer and basketball player from Charlotte, N.C. “The model helped us understand the anatomy that was altered by the tumor and helped us orient ourselves for our cuts around it.”

The pelvis model was one of about 500 3-D-printed objects created at the Mayo Clinic last year. The clinic is part of a web of organizations racing to find ways to use 3-D printing to improve health care.

Some research institutions, including the Mayo Clinic, have set up on-site printing labs in partnership with such makers of 3-D printers as Stratasys, 3D Systems and Formlabs. General Electric Co. and Johnson & Johnson are diving in, too, with GE focused on 3-D printers and translating images from various sources into 3-D objects, and J&J focused on developing a range of materials that can be used as “ink” to print customized objects.

Using data from MRIs, CT scans and ultrasounds, as well as three-dimensional pictures, 3-D printers create objects, layer by layer, using materials ranging from plastics to metal to human tissue. Beyond organ models, the printers are being used in health care to create dental and medical implants, hearing aids, prosthetics, drugs and even human skin.

Research firm Gartner predicts that by 2019, 10% of people in the developed world will be living with 3-D-printed items on or in their bodies, and 3-D printing will be a central tool in more than one-third of surgical procedures involving prosthetics and implanted devices. According to research firm IndustryARC, the overall market for medical 3-D printing is expected to grow to $1.21 billion by 2020 from about $660 million in 2016.

Though the industry is young, Anurag Gupta, a Gartner vice president of research, says 3-D printing in health care “could have the transformative impact of the internet or cloud computing a few years ago.”

The technology of 3-D printing has been around since the 1980s, but recent advances in software and hardware have made it faster, more cost-efficient and of higher quality. Five years ago, the 3-D printers made by Stratasys could print in one or two materials and one or two colors. Now they can print six materials simultaneously and create more than 360,000 combinations of textures and colors to better mimic materials ranging from soft tissue to bone, paving the way for wider adoption.

The rise of customized medicine, in which care and medicine are tailored to individual patients, also has helped fuel growth of 3-D printing in health care, as more patients and doctors seek out customized medical devices, surgical tools and drugs.

One of the areas in which the technology may hold particular promise, experts say, is in the manufacturing of drugs in the dose and shape best suited to certain groups of patients. Aprecia Pharmaceuticals recently launched a 3-D-printed epilepsy drug called Spritam, a high-dosage pill that dissolves quickly with a small amount of water and comes in a shape that is easy to swallow.

Printing whole organs, such as livers and kidneys, remains the Holy Grail, but that is more than a decade away, says Gartner’s Mr. Gupta. Printing smaller pieces of human material, however, has already begun.

Researchers at the University Carlos III of Madrid, along with the Spanish biotech company BioDan, have printed human skin to eventually help burn victims and others suffering from skin injuries and diseases. The process involves a 3-D printer that deposits bioinks containing cells from an individual as well as other biological molecules to create a patch of skin. Like the real thing, this printed skin consists of an external layer, the epidermis, and the thicker, deeper layer, the dermis.

Organovo Holdings Inc. of San Diego prints pieces of liver and kidney tissue to test new therapies and the toxicology of early-stage drugs. Johnson & Johnson is working with Aspect Biosystems Ltd. to develop bioprinted knee meniscus tissue. And 3D Systems is developing 3-D-printed lung tissue with United Therapeutics Corp.

While entry-level 3-D printers used by hobbyists can cost a few hundred dollars, industrial 3-D printers used by hospitals can range from $10,000 to $400,000 for those that print plastics and polymers.

Another hurdle for hospitals is the “hidden cost” of operating 3-D printers, says Jimmie Beacham, who leads GE Healthcare’s 3-D printing strategy. Engineers are required to transform dense digital images from MRI, CT and ultrasound scans into information that can be printed into a 3-D model. What’s more, printing a 3-D object doesn’t yet happen with the click of a button. It took 60 hours for Mayo Clinic to print Ms. Secor’s pelvis and tumor, for example.
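
As a rough illustration of that engineering step, converting a scan into a printable model can be sketched with open-source tools: segment the tissue of interest by intensity, extract a surface mesh, and export an STL file for the printer. This is a generic sketch with a synthetic stand-in volume and an invented bone threshold, not Mayo’s or GE’s workflow.

```python
# Generic scan-to-STL sketch: threshold a CT-like volume, run marching cubes,
# and write a printable mesh. Requires scikit-image and numpy-stl.
import numpy as np
from skimage import measure
from stl import mesh

# Stand-in volume: a hollow sphere of bone-intensity voxels. A real pipeline
# would load the patient's CT series (e.g., with pydicom) instead.
z, y, x = np.indices((64, 64, 64))
r = np.sqrt((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2)
ct = np.where((r > 20) & (r < 26), 800, 0)        # Hounsfield-like units

bone = (ct > 300).astype(np.uint8)                # crude bone threshold
verts, faces, _, _ = measure.marching_cubes(bone, level=0.5,
                                            spacing=(1.0, 0.7, 0.7))  # voxel mm
model = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    model.vectors[i] = verts[tri]
model.save("model.stl")                           # ready for a slicer
```

Real anatomy is far messier than a thresholded sphere, which is why the manual segmentation and cleanup that precede printing account for much of the 60 hours cited above.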

Still, 3-D printing can lead to cost savings in other areas, say experts such as Jonathan Morris, a Mayo radiologist. Allowing surgeons to practice on 3-D models of a specific patient’s organs before surgery can significantly reduce time in the operating room. Printing implants and prosthetics on demand and on location means fewer middlemen in the supply chain and less waste. And given the better fit of customized implants from 3-D printers, patients may not have to replace them as often.

The Mayo Clinic and a half dozen other cutting-edge research hospitals have blazed a trail in creating 3-D printing labs on site. Now some larger city network hospitals are beginning to purchase their own 3-D printers, while smaller hospitals and doctors can order 3-D models for complicated surgeries on a case-by-case basis from 3-D printing companies.

A Survey of 3,000 Executives Reveals How Businesses Succeed with AI


AUGUST 28, 2017

The buzz over artificial intelligence (AI) has grown loud enough to penetrate the C-suites of organizations around the world, and for good reason. Investment in AI is growing and is increasingly coming from organizations outside the tech space. And AI success stories are becoming more numerous and diverse, from Amazon reaping operational efficiencies using its AI-powered Kiva warehouse robots, to GE keeping its industrial equipment running by leveraging AI for predictive maintenance.

While it’s clear that CEOs need to consider AI’s business implications, the technology’s nascence in business settings makes it less clear how to profitably employ it. Through a study of AI that included a survey of 3,073 executives and 160 case studies across 14 sectors and 10 countries, and through a separate digital research program, we have identified 10 key insights CEOs need to know to embark on a successful AI journey.

Don’t believe the hype: Not every business is using AI… yet. While investment in AI is heating up, corporate adoption of AI technologies is still lagging. Total investment (internal and external) in AI reached somewhere in the range of $26 billion to $39 billion in 2016, with external investment tripling since 2013. Despite this level of investment, however, AI adoption is in its infancy, with just 20% of our survey respondents using one or more AI technologies at scale or in a core part of their business, and only half of those using three or more. (Our results are weighted to reflect the relative economic importance of firms of different sizes. We include five categories of AI technology systems: robotics and autonomous vehicles, computer vision, language, virtual agents, and machine learning.)

For the moment, this is good news for those companies still experimenting or piloting AI (41%). Our results suggest there’s still time to climb the learning curve and compete using AI.

However, we are likely at a key inflection point of AI adoption. AI technologies like neural-based machine learning and natural language processing are beginning to mature and prove their value, quickly becoming centerpieces of AI technology suites among adopters. And we expect at least a portion of current AI piloters to fully integrate AI in the near term. Finally, adoption appears poised to spread, albeit at different rates, across sectors and domains. Telecom and financial services are poised to lead the way, with respondents in these sectors planning to increase their AI tech spend by more than 15% a year — seven percentage points higher than the cross-industry average — in the next three years.

Believe the hype that AI can potentially boost your top and bottom line. Thirty percent of early AI adopters in our survey — those using AI at scale or in core processes — say they’ve achieved revenue increases, leveraging AI in efforts to gain market share or expand their products and services. Furthermore, early AI adopters are 3.5 times more likely than others to say they expect to grow their profit margin by up to five points more than industry peers. While the question of correlation versus causation can be legitimately raised, a separate analysis uncovered some evidence that AI is already directly improving profits, with ROI on AI investment in the same range as associated digital technologies such as big data and advanced analytics.

Without support from leadership, your AI transformation might not succeed. Successful AI adopters have strong executive leadership support for the new technology. Survey respondents from firms that have successfully deployed an AI technology at scale tend to rate C-suite support as being nearly twice as high as those companies that have not adopted any AI technology. They add that strong support comes not only from the CEO and IT executives but also from all other C-level officers and the board of directors.

You don’t have to go it alone on AI — partner for capability and capacity. With the AI field recently picking up its pace of innovation after the decades-long “AI winter,” technical expertise and capabilities are in short supply. Even large digital natives such as Amazon and Google have turned to companies and talent outside their confines to beef up their AI skills. Consider, for example, Google’s acquisition of DeepMind, which is using its machine learning chops to help the tech giant improve even core businesses like search optimization. Our survey, in fact, showed that early AI adopters have primarily bought the right fit-for-purpose technology solutions, with only a minority of respondents both developing and implementing all AI solutions in-house.

Resist the temptation to put technology teams solely in charge of AI initiatives. Compartmentalizing accountability for AI with functional leaders in IT, digital, or innovation can result in a hammer-in-search-of-a-nail outcome: technologies being launched without compelling use cases. To ensure a focus on the most valuable use cases, AI initiatives should be assessed and co-led by both business and technical leaders, an approach that has proved successful in the adoption of other digital technologies.

Take a portfolio approach to accelerate your AI journey. AI tools today vary along a spectrum ranging from tools that have been proven to solve business problems (for example, pattern detection for predictive maintenance) to those with low awareness and currently-limited-but-high-potential utility (for example, application of AI to developing competitive strategy). This distribution suggests that organizations could consider a portfolio-based approach to AI adoption across three time horizons:

Short-term: Focus on use cases where there are proven technology solutions today, and scale them across the organization to drive meaningful bottom-line value.

Medium-term: Experiment with technology that’s emerging but still relatively immature (for example, deep-learning video recognition) to prove its value in key business use cases before scaling.

Long-term: Work with academia or a third party to solve a high-impact use case (augmented human decision making in a key knowledge worker role, for example) with bleeding-edge AI technology to potentially capture a sizable first-mover advantage.

Machine learning is a powerful tool, but it’s not right for everything. Machine learning and its most prominent subfield, deep learning, have attracted a lot of media attention and received a significant share of the financing that has been pouring into the AI universe, garnering nearly 60% of all investments from outside the industry in 2016.

But while machine learning has many applications, it is just one of many AI-related technologies capable of solving business problems. There’s no one-size-fits-all AI solution. For example, the AI techniques implemented to improve customer call center performance could be very different from the technology used to identify credit card payments fraud. It’s critical to look for the right tool to solve each value-creating business problem at a particular stage in an organization’s digital and AI journey.

Digital capabilities come before AI. We found that industries leading in AI adoption — such as high-tech, telecom, and automotive — are also the ones that are the most digitized. Likewise, within any industry the companies that are early adopters of AI have already invested in digital capabilities, including cloud infrastructure and big data. In fact, it appears that companies can’t easily leapfrog to AI without digital transformation experience. Using a battery of statistics, we found that the odds of generating profit from using AI are 50% higher for companies that have strong experience in digitization.

Be bold. In a separate study on digital disruption, we found that adopting an offensive digital strategy was the most important factor in enabling incumbent companies to reverse the curse of digital disruption. An organization with an offensive strategy radically adapts its portfolio of businesses, developing new business models to build a growth path that is more robust than before digitization. So far, the same seems to hold true for AI: Early AI adopters with a very proactive, strictly offensive strategy report a much better profit outlook than those without one.

The biggest challenges are people and processes. In many cases, the change-management challenges of incorporating AI into employee processes and decision making far outweigh technical AI implementation challenges. As leaders determine the tasks machines should handle, versus those that humans perform, both new and traditional, it will be critical to implement programs that allow for constant reskilling of the workforce. And as AI continues to converge with advanced visualization, collaboration, and design thinking, businesses will need to shift from a primary focus on process efficiency to a focus on decision management effectiveness, which will further require leaders to create a culture of continuous improvement and learning.

Make no mistake: The next digital frontier is here, and it’s AI. While some firms are still reeling from previous digital disruptions, a new one is taking shape. But it’s early days. There’s still time to make AI a competitive advantage.


Jacques Bughin is a director of the McKinsey Global Institute based in Brussels.

Brian McCarthy is a partner in McKinsey’s Atlanta office.

Michael Chui is a McKinsey Global Institute partner based in San Francisco, and leads MGI’s work on the impact of technological change.

Next Leap for Robots: Picking Out and Boxing Your Online Order


Developers close in on systems to move products off shelves and into boxes, as retailers aim to automate labor-intensive process


Robot developers say they are close to a breakthrough—getting a machine to pick up a toy and put it in a box.

It is a simple task for a child, but for retailers it has been a big hurdle to automating one of the most labor-intensive aspects of e-commerce: grabbing items off shelves and packing them for shipping.

Several companies, including Saks Fifth Avenue owner Hudson’s Bay Co. and Chinese online-retail giant JD.com Inc., have recently begun testing robotic “pickers” in their distribution centers. Some robotics companies say their machines can move gadgets, toys and consumer products 50% faster than human workers.

Retailers and logistics companies are counting on the new advances to help them keep pace with explosive growth in online sales and pressure to ship faster. U.S. e-commerce revenues hit $390 billion last year, nearly twice as much as in 2011, according to the U.S. Census Bureau. Sales are rising even faster in China, India and other developing countries.

That is propelling a global hiring spree to find people to process those orders. U.S. warehouses added 262,000 jobs over the past five years, with nearly 950,000 people working in the sector, according to the Labor Department. Labor shortages are becoming more common, particularly during the holiday rush, and wages are climbing.

Mechanical engineer Parker Heyl adjusts a robotic arm at RightHand Robotics’ test facility in Somerville, Mass. PHOTO: SIMON SIMARD FOR THE WALL STREET JOURNAL

Picking is the biggest labor cost in most e-commerce distribution centers, and among the least automated. Swapping in robots could cut the labor cost of fulfilling online orders by a fifth, said Marc Wulfraat, president of consulting firm MWPVL International Inc.

“When you’re talking about hundreds of millions of units, those numbers can be very significant,” he said. “It’s going to be a significant edge for whoever gets there first.”

Until recently, robots had to be trained to identify and grab each item, which is impractical in a distribution center that might stock an ever-changing array of millions of products.

Automation companies such as Kuka AG, Dematic Corp. and Honeywell International Inc. unit Intelligrated, as well as startups like RightHand Robotics Inc. and IAM Robotics LLC, are working on automating picking.

In RightHand Robotics’ Somerville, Mass., test facility, mechanical arms hunt around the clock through bins containing packages of baby wipes, jars of peanut butter and other products. Each attempt—successful or not—feeds into a database. The bigger that data set, the faster and more reliably the machines can pick, said Yaro Tenzer, the startup’s co-founder.
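
A toy sketch of that learn-from-every-attempt loop: log the outcome of each grasp per item and strategy, and bias future choices toward what has worked while still exploring. The strategy names and the epsilon-greedy rule are illustrative assumptions, not RightHand’s system.

```python
# Toy learn-from-every-pick loop: every attempt, successful or not, updates
# per-item statistics that bias the next strategy choice (epsilon-greedy).
import random
from collections import defaultdict

STRATEGIES = ["suction", "pinch", "suction+pinch"]   # invented names
stats = defaultdict(lambda: [0, 0])    # (item, strategy) -> [wins, tries]

def choose_strategy(item, epsilon=0.1):
    if random.random() < epsilon:                    # explore occasionally
        return random.choice(STRATEGIES)
    def rate(s):                                     # smoothed success rate
        wins, tries = stats[(item, s)]
        return (wins + 1) / (tries + 2)
    return max(STRATEGIES, key=rate)                 # exploit the best so far

def record_attempt(item, strategy, succeeded):
    stats[(item, strategy)][0] += int(succeeded)
    stats[(item, strategy)][1] += 1

# Simulated shift: the database grows with every attempt, so picks improve.
for _ in range(1000):
    item = random.choice(["baby_wipes", "peanut_butter"])
    s = choose_strategy(item)
    record_attempt(item, s, succeeded=random.random() < 0.7)
```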

Hudson’s Bay is testing RightHand’s robots in a distribution center in Scarborough, Ontario.

“This thing could run 24 hours a day,” said Erik Caldwell, the retailer’s senior vice president of supply chain and digital operations, at a conference in May. “They don’t get sick; they don’t smoke.”

JD.com is developing its own picking robots, which it started testing in a Shanghai distribution center in April. The company hopes to open a fully automated warehouse there by the end of next year, said Hui Cheng, head of JD.com’s robotics-research center in Silicon Valley.

Swisslog, a subsidiary of Kuka, sells picking robots that can be integrated into the company’s other warehouse automation systems or purchased separately. The company sold its first unit in the U.S., to a large retailer, earlier this year, said A.K. Schultz, Swisslog’s vice president for retail and e-commerce. Mr. Schultz declined to name the retailer.

Previous waves of warehouse automation didn’t lead to sudden mass layoffs, partly because order volumes have been growing so fast. And automated picking is still at least a year away from commercial use, robotics experts say. The main challenge lies in creating the enormous databases of 3D-rendered objects that robots need to determine the best way to grip new objects.

RightHand Robotics co-founders Leif Jentoft, left, and Yaro Tenzer PHOTO: FOR THE WALL STREET JOURNAL

Some companies hope to speed development by making some research public. Amazon.com Inc. will hold its third annual automated picking competition at a robotics conference in Japan later this month. For the first time, entrants won’t know in advance all the items the robots will need to pick.

At the University of California, Berkeley, a team is simulating millions of attempts to pick 10,000 objects. Funded by Amazon, Siemens AG and others, the project is meant to build an open-source database for use in any automation system, said Ken Goldberg, the professor leading the project.

“With 10,000 objects, I’m surprised how well it did,” he said. “I would love to show it 100,000 examples and see how well it performs after that.”

Apple’s New Big Bet: Augmented Reality


ARKit platform puts it in a race with Facebook, Alphabet and Snap and fuels industry beliefs that it will eventually develop glasses

A demonstration of Apple’s ARKit technology on an iPad Pro on Monday. The system uses camera data to find horizontal planes in a room and estimate the amount of light available. PHOTO: DAVID PAUL MORRIS/BLOOMBERG NEWS

Apple Inc. set its sights on a new target: becoming the world’s largest platform for augmented reality.

The ambitious announcement, which was overshadowed by the introduction Monday of the HomePod speaker, plunges Apple into a race against Alphabet Inc., Facebook Inc., Snap Inc. and others to conquer an emerging technology that uses cameras and computers to overlay digital images on a person’s view of the real world.

It also bolsters the belief among many industry observers that Apple will build new augmented-reality features into its coming 10th-anniversary edition iPhone, and eventually develop glasses that relay information about the world so people can view maps or restaurant menus without pulling out a device.

Augmented reality shot to prominence nearly a year ago following the release of “Pokémon Go,” a game in which players scoured the map of the real world, with the help of location-tracking technology, to find digital monsters superimposed through the smartphone screen. The technology is different from virtual reality, which uses computer headsets to create fully immersive digital worlds.

Roughly 40 million people in the U.S. are expected to use augmented reality this year, up 30% from last year, according to research firm eMarketer. It estimates the total will rise to 54 million in 2019.


Craig Federighi, Apple’s head of software, demonstrated the potential of Apple’s new technology platform, ARKit, at the company’s annual Worldwide Developers Conference keynote Monday.

While viewing a table on stage through an iPhone screen, Mr. Federighi added virtual images of a steaming cup of coffee and lamp. The images appeared to rest directly on the table, recognizing the real-world surface rather than floating above it.

Apple Chief Executive Tim Cook has been a big proponent of augmented reality, saying he believes it will have broader success than virtual reality because it is less isolating.

Several companies are already working on augmented reality, including headsets in development from Microsoft Corp. and Magic Leap Inc. Alphabet’s Google Tango platform has been available on some smartphones for about a year.

Apple’s ARKit, though, has the potential to democratize the technology by bringing it to roughly a billion devices without requiring separate hardware or software, as some competitors do. The company says the system uses camera data to find horizontal planes in a room and estimate the amount of light available.

“It’s a seminal event in the journey toward AR that Apple’s come out and shipped something,” said Matt Miesnieks, co-founder of 6D.ai, a computer-vision startup. He expects developers to use the software because of its relative simplicity and potential to reach across Apple’s large user base.

IKEA International A/S and Lego A/S are already working on augmented-reality apps using ARKit that could allow people to visualize furniture in their home or a virtual image of Lego Batman, Mr. Federighi said.

Representatives of Wingnut AR, an augmented-reality studio from “Lord of the Rings” director Peter Jackson, showed an ARKit-based experience, seen through an iPad, in which airships battled in a virtual town square that was digitally dropped onto a real table on stage, with the audience visible in the background.

Apple’s announcement came two months after Facebook opened its augmented-reality tools to developers. CEO Mark Zuckerberg said he expects the nascent technology to open up a new world of apps and services.

Facebook’s smaller rival, Snap, popularized simple augmented-reality tools that overlay bunny ears or dog noses on users’ faces. It also allows users to add special effects to photos and backgrounds.

By creating an augmented-reality tool kit for developers, Apple could spur its more than 680 million iPhone users to share augmented images through its iMessage service rather than through Facebook or Snapchat, developers said.

ARKit indicates Apple solved a difficult technical problem—finding a way to use cameras and sensors in an iPhone to track the outside world, said Mr. Miesnieks. He said the same sensors and algorithms would run a pair of glasses, bolstering his belief that Apple plans to launch eyewear in the future.

Apple declined to comment on whether it could develop glasses.

“They’re clearly getting developers ramped up for this,” said Paul Reynolds, founder of Torch 3D, a startup focused on 3D-app development. “I’m sure for the iPhone launch they’ll have nice content around it.”

Why Do Gas Station Prices Constantly Change? Blame the Algorithm


Retailers are using artificial-intelligence software to set optimal prices, testing textbook theories of competition; antitrust officials worry such systems raise prices for consumers

The Knaap Tankstation gas station in Rotterdam, Netherlands, uses a2i Systems artificial-intelligence pricing software. PHOTO: KNAAP TANKSTATION BV

ROTTERDAM, the Netherlands—One recent afternoon at a Shell-branded station on the outskirts of this Dutch city, the price of a gallon of unleaded gas started ticking higher, rising more than 3½ cents by closing time. A little later, a competing station 3 miles down the road raised its price about the same amount.

The two stations are among thousands of companies that use artificial-intelligence software to set prices. In doing so, they are testing a fundamental precept of the market economy.

In economics textbooks, open competition between companies selling a similar product, like gasoline, tends to push prices lower. AI pricing algorithms instead determine the optimal price, sometimes dozens of times a day. As they get better at predicting what competitors are charging and what consumers are willing to pay, there are signs they sometimes end up boosting prices together.

Advances in A.I. are allowing retail and wholesale firms to move beyond “dynamic pricing” software, which has for years helped set prices for fast-moving goods, like airline tickets or flat-screen televisions. Older pricing software often used simple rules, such as always keeping prices lower than a competitor.

On the Same Track: Two competing gas stations in the Rotterdam area, both using a2i Systems pricing software, roughly mirrored each other’s price moves during a selected week.

These new systems crunch mountains of historical and real-time data to predict how customers and competitors will react to any price change under different scenarios, giving them an almost superhuman insight into market dynamics. Programmed to meet a certain goal—such as boosting sales—the algorithms constantly update tactics after learning from experience.

Ulrik Blichfeldt, chief executive of Denmark-based a2i Systems A/S, whose technology powers the Rotterdam gas stations, said his software is focused primarily on modeling consumer behavior and leads to benefits for consumers as well as gas stations. The software learns when raising prices drives away customers and when it doesn’t, leading to lower prices at times when price-sensitive customers are likely to drive by, he said.
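
To make that concrete, here is a minimal sketch, with entirely invented numbers, of the behavior he describes: given an hourly price-sensitivity curve learned from historical sales, search candidate prices for the one that maximizes expected margin in each hour. Price-sensitive hours get lower prices; insensitive hours get higher ones. Nothing below is a2i’s model.

```python
# Invented demo: per-hour price sensitivity drives per-hour optimal prices.
import numpy as np

hours = np.arange(24)
# Assumed sensitivity curve (peaking at 07:00); a real system would learn
# this from years of transaction data rather than assume a cosine.
elasticity = 0.5 + 1.5 * (np.cos((hours - 7) * np.pi / 12) + 1) / 2

def expected_litres(price, hour, base_price=1.60, base_volume=100.0):
    # Demand falls faster with price in sensitive hours.
    return base_volume * np.exp(
        -5.0 * elasticity[hour] * (price - base_price) / base_price)

def best_price(hour, wholesale_cost=1.45):
    candidates = np.linspace(1.45, 1.90, 46)
    margin = (candidates - wholesale_cost) * expected_litres(candidates, hour)
    return candidates[np.argmax(margin)]

for h in (7, 12, 18, 22):
    print(f"{h:02d}:00 optimal pump price: {best_price(h):.2f} EUR/litre")
```

Run on these assumptions, the morning price lands near wholesale cost while the evening price hits the top of the candidate range, mirroring the morning-low, evening-high pattern observed at the Rotterdam stations later in this article.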

“This is not a matter of stealing more money from your customer. It’s about making margin on people who don’t care, and giving away margin to people who do care,” he said.

Driving the popularity of A.I. pricing is the pain rippling through the retail industry, long a low-margin business that is now suffering from increased competition from online rivals.

“The problem we’re solving is that retailers are going through a bloodbath,” said Guru Hariharan, chief executive of Mountain View, Calif.-based Boomerang Commerce Inc., whose A.I.-enabled software is used by Staples Inc. and other companies.

Staples uses A.I.-enabled software to change prices on 30,000 products a day on its website. PHOTO: RICHARD B. LEVINE/ZUMA PRESS

The rise of A.I. pricing poses a challenge to antitrust law. Authorities in the EU and U.S. haven’t opened probes or accused retailers of impropriety for using A.I. to set prices. Antitrust experts say it could be difficult to prove illegal intent, as is often required in collusion cases; so far, algorithmic-pricing prosecutions have involved allegations of humans explicitly designing machines to manipulate markets.

Officials say they are looking at whether they need new rules. The Organization for Economic Cooperation and Development said it plans to discuss in June at a round table how such software could make collusion easier “without any formal agreement or human interaction.”

“If professional poker players are having difficulty playing against an algorithm, imagine the difficulty a consumer might have,” said Maurice Stucke, a former antitrust attorney for the U.S. Justice Department and now a law professor at the University of Tennessee, who has written about the competition issues posed by A.I. “In all likelihood, consumers are going to end up paying a higher price.”

In one example of what can happen when prices are widely known, Germany required all gas stations to provide live fuel prices that it shared with consumer price-comparison apps. The effort appears to have boosted prices by between 1.2 and 3.3 euro cents per liter, or about 5 to 13 U.S. cents per gallon, according to a discussion paper published in 2016 by the Düsseldorf Institute for Competition Economics.

Makers and users of A.I. pricing said humans remain in control and that retailers’ strategic goals vary widely, which should promote competition and lower prices.

“If you completely let the software rule, then I could see [collusion] happening,” said Faisal Masud, chief technology officer for Staples, which uses A.I.-enabled software to change prices on 30,000 products a day on its website. “But let’s be clear, whatever tools we use, the business logic remains human.”

Online retailers in the U.S., such as Amazon.com Inc. and its third-party sellers, were among the first to adopt dynamic pricing. Amazon.com declined to comment.

Since then, sectors with fast-moving goods, frequent price changes and thin margins—such as the grocery, electronics and gasoline markets—have been the quickest to adopt the latest algorithmic pricing, because they are the most keen for extra pennies of margin, analysts and executives say.

The pricing-software industry has grown in tandem with the amount of data available to—and generated by—retailers. Stores keep information on transactions, as well as information about store traffic, product location and buyer demographics. They also can buy access to databases that monitor competitors’ product assortments, availability and prices—both on the web and in stores.

A.I. is used to make sense of all that information. International Business Machines Corp. said its price-optimization business uses capabilities from its Watson cognitive-computing engine to advise retailers on pricing. Germany’s Blue Yonder GmbH, a price-optimization outfit that serves clients in the grocery, electronics and fashion industries, said it uses neural networks based on those its physicist founder built to analyze data from a particle collider.

Neural networks are a type of A.I. computer system inspired by the interconnected structure of the human brain. They are good at matching new information to old patterns in vast databases, which allows them to use real-time signals such as purchases to predict from experience how consumers and competitors will behave.

Algorithms can also figure out what products are usually purchased together, allowing them to optimize the price of a whole shopping cart. If customers tend to be sensitive to milk prices, but less so to cereal prices, the software might beat a competitor’s price on milk, and make up margin on cereal.
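
A worked toy example of that basket logic, with invented demand curves: because shoppers react strongly to the milk price and only weakly to the cereal price, a grid search over candidate price pairs settles on a loss-leader-style milk price and a higher cereal price.

```python
# Invented basket-pricing demo: undercut on the price-sensitive item (milk),
# recover margin on the insensitive one (cereal).
import itertools

MILK_COST, CEREAL_COST = 0.80, 2.00

def baskets(milk_price, cereal_price, base=1000.0):
    # Demand reacts strongly to milk, weakly to cereal (assumed slopes).
    return (base
            * max(0.0, 1 - 2.0 * (milk_price - 1.00))
            * max(0.0, 1 - 0.2 * (cereal_price - 3.00)))

def profit(prices):
    milk, cereal = prices
    return baskets(milk, cereal) * ((milk - MILK_COST) + (cereal - CEREAL_COST))

best = max(itertools.product([0.89, 0.99, 1.09, 1.19],
                             [2.79, 2.99, 3.19, 3.49]), key=profit)
print("optimal (milk, cereal) prices:", best)   # -> (0.89, 3.49) here
```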

“They’re getting really smart,” said Nik Subramanian, chief technology officer of Brussels-based Kantify, who said its pricing software has figured out how to raise prices after spotting on a competitor’s website that the rival has run out of a certain product.

Algorithmic pricing works well in the retail gasoline market, because it is a high-volume commodity that is relatively uniform, leading station owners in competitive markets to squeeze every penny.

For years, price wars in cutthroat markets have followed a typical pattern. A retailer would cut prices to lure customers, then competitors would follow suit, each cutting a little more than the others, eventually pushing prices down close to the wholesale cost. Finally one seller would reach a breaking point and raise prices. Everyone would follow, and the cycle started all over.

Some economists say the price wars helped consumers with overall lower prices, but led to very thin margins for station owners.

Danish oil and energy company OK hired a2i Systems in 2011 because its network of gas stations was suffering from a decade-old price war. It changed what it charged as many as 10 times a day, enlisting a team of people to drive around the country and call in competitors’ prices, said Gert Johansen, the company’s head of concept development.

A2i Systems—the name means applied artificial intelligence—was started by Alireza Derakhshan and Frodi Hammer, both engineering graduates of the University of Southern Denmark, in Odense. Before focusing on fuel, they built other A.I. systems, including a game displayed on interactive playground floor tiles that adapted to the speed and skill level of the children running around on top.

For OK, a2i created thousands of neural networks—one for each fuel at each station—and trained them to compare live sales data to years of historical company data to predict how customers would react to price changes. Then it ran those predictions through algorithms built to pick the optimal prices and learn from their mistakes.

In a pilot study, OK split 30 stations into two sets, a control group and an a2i group. The group using the software averaged 5% higher margins, according to a paper Mr. Derakhshan presented last June at an A.I. conference in Seville, Spain.

Scandinavian supermarket chain REMA 1000 says it will roll out price-optimization software made by Texas-based Revionics Inc. in coming months. PHOTO: JOSEPH DEAN/NEWSCOM/ZUMA PRESS

The new system could make complex decisions that weren’t simply based on a competitor’s prices, Mr. Derakhshan said in an interview.

One client called to complain the software was malfunctioning. A competitor across the street had slashed prices in a promotion, but the algorithm responded by raising prices. There wasn’t a bug. Instead, the software was monitoring the real-time data and saw an influx of customers, presumably because of the long wait across the street.

“It could tell that no matter how it increased prices, people kept coming in,” said Mr. Derakhshan.

On the outskirts of Rotterdam, Koen van der Knaap began running the system on his family-owned Shell station in recent months. Down the road, a station owned by Tamoil, a gasoline retailer owned by Libya’s Oilinvest Group, uses it too.

During a late-March week for which both Tamoil and Mr. van der Knaap provided hourly data, the prices for unleaded gas at the two stations—which vary in opening hours and services—bounced around independently much of the time, and generally declined, reflecting falling oil prices that week.

During some periods, however, the stations’ price changes paralleled each other, going up or down by more than 2 U.S. cents per gallon within a few hours of each other. Often, prices dropped early in the morning and increased toward the end of the day, implying that the A.I. software may have been identifying common market-demand signals through the local noise.

The station owners say their systems frequently lower prices to gain volume when there are customers to be won.

“It can be frustrating,” said Erwin Ralan, an electronics-store manager who was filling up at the Tamoil station that week. “Prices usually go up at the end of the day. But when you’re empty and you’re in a rush, there’s not much you can do.”

In Its New Factory, Impossible Foods Will Make 12 Million Pounds Of Plant-Based Burgers A Year


The startup behind the plant-based burger that bleeds like real meat is scaling up to reach more customers: it wants to be in 1,000 restaurants by the end of the year.

PHOTO: COURTESY IMPOSSIBLE FOODS
Fast Company

Inside a former baked-goods factory near the Oakland airport, a construction crew is installing giant vats that will soon be used to scale up production of the Impossible Burger, a plant-based meat designed to look and taste good enough that meat eaters, not just vegetarians, will want to order it.

The “meat,” developed by a team led by former Stanford biochemistry professor Patrick Brown, is currently being produced in a 10,000-square-foot pilot facility in Silicon Valley and a 1,500-square-foot space in New Jersey. The new facility, at around 60,000 square feet, will dramatically scale up production capacity. When the factory is fully ramped up, it will be able to produce at least 1 million pounds of Impossible Burger meat a month, or 250 times more than today.

“It will enable us to go from something that is scarce–and we’re constantly getting complaints from customers about the fact that they can’t buy them at their local restaurant–and start to make it ubiquitous,” Brown said at an event launching the new factory.

The burger is currently available at 11 restaurants, including three that launched it on March 23. But by the end of the year, the company expects to supply 1,000 restaurants. It just signed a deal to have the burgers featured in the San Francisco Giants’ baseball stadium.

For the company, achieving scale is a critical part of achieving its mission. Brown started working on the project while thinking about the problem of climate change; raising cows and other animals for meat is one of the world’s largest sources of greenhouse gases. It also uses and pollutes more water than any other industry, and drives deforestation. But he realized that the majority of the world wouldn’t voluntarily go vegetarian for those reasons.

“Billions of people around the world who love meat are not going to stop demanding it, so we just have to find a better way to produce it,” he says.

PHOTO: COURTESY IMPOSSIBLE FOODS

The team studied the properties of meat, particularly heme, the molecule that makes blood red and gives meat its meaty taste, and then experimented with recreating those properties using only ingredients from plants.

“When you think about meat, there’s the muscle, there’s the connective tissue, there’s the fat, so we had to figure out how to mimic those parts of beef to figure out how to experience the texture, but also the taste,” Don DeMasi, senior vice president of engineering for Impossible Foods, tells Fast Company.

The result looks like it was made from a cow, not plants. The handful of chefs who were given first access to the product say they think of it as meat. “It kind of made this transition in my mind to be–it’s just another kind of meat,” says chef Traci Des Jardins, who has been serving Impossible burgers at her San Francisco restaurant Jardinière for about a year, and now is also serving it at Public House, her restaurant at the city’s ballpark.

PHOTO: COURTESY IMPOSSIBLE FOODS

Before it’s cooked, the product is red like raw beef; as it cooks, it browns. As the heme mixes with juices and oozes out, it can look like it’s bleeding. “You’re seeing the exact same cooking chemistry that you see in meat, literally,” says Brown.

As the company scales up its beef alternative, it will focus on restaurants. Each year, it says, U.S. restaurants serve more than 5 billion pounds of burgers, and Impossible wants its 12 million pounds to be among them. Retail will come later, along with other products currently in development, such as poultry and steak.

“Our long-term goal is to basically develop a new and better way to create all the foods we make from animals,” says Brown.

Did You Hear That? Robots Are Learning The Subtle Sounds Of Mechanical Breakdown

Fast Company
STEVEN MELENDEZ 02.16.17

Expert mechanics can detect what’s wrong with a car without lifting the hood. So can deep learning robots, says startup 3DSignals.

[Photo: Flickr user David Hodgson]

Sometimes, machines just don’t sound right.

Brakes squeal, hard drives crunch, air conditioners rattle, and their owners know it’s time for a service call. But some of the most valuable machinery in the world often operates with nobody around to hear the mechanical breakdowns, from the chillers and pumps that drive big-building climate control systems to the massive turbines at hydroelectric power plants.

That’s why a number of startups are working to train computers to pick up on changes in the sounds, vibrations, heat emissions, and other signals that machines give off as they’re working or failing. The hope is that the computers can catch mechanical failures before they happen, saving on repair costs and reducing downtime.

“We’re developing an expert mechanic’s brain that identifies exactly what is happening to a machine by the way that it sounds,” says Amnon Shenfeld, founder and CEO of 3DSignals, a startup based in Kfar Saba, Israel, that is using machine learning to train computers to listen to machinery and diagnose problems at facilities like hydroelectric plants and steel mills.

And while most current efforts focus on large-scale machinery, Shenfeld says the same sort of technology might one day help detect failures in home appliances or in devices like self-driving cars or rental vehicles that don’t spend much time in the hands of an owner who’s used to their usual sounds.

“When you have car-as-a-service, the person in the car doesn’t know the car,” he says. “If it sounds strange, you’re losing this link with the human in the car deciding it’s time to take it to the mechanic.”

Initially, 3DSignals’ systems can detect anomalous sounds based on physical modeling of particular types of equipment, notifying an expert mechanic to diagnose the problem. And once the problem is fixed, the mechanic’s diagnosis is added to 3DSignals’ database, which it uses to train its algorithms to not only detect unusual sounds, but also interpret them to understand what kind of repair is needed.
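
In outline, that two-stage approach — flagging frames whose acoustic fingerprint drifts from a healthy baseline, then matching flagged fingerprints against previously diagnosed faults — might look like the sketch below. The fingerprinting and scoring choices are generic signal-processing stand-ins, not 3DSignals’ algorithms.

```python
# Generic two-stage sketch: (1) score audio frames against a baseline learned
# from healthy operation; (2) match anomalies to known fault signatures.
import numpy as np

def fingerprint(samples, bands=32):
    # Coarse log-energy spectrum of one audio frame.
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    usable = spectrum[: bands * (len(spectrum) // bands)]
    return np.log1p(usable.reshape(bands, -1).mean(axis=1))

# Baseline from frames recorded while the machine was known to be healthy;
# random noise stands in for real audio here.
rng = np.random.default_rng(0)
baseline = np.stack([fingerprint(rng.normal(size=4096)) for _ in range(200)])
mean, std = baseline.mean(axis=0), baseline.std(axis=0)

def anomaly_score(samples):
    # How far (in baseline standard deviations) the worst band has drifted.
    return float(np.max(np.abs(fingerprint(samples) - mean) / (std + 1e-9)))

def diagnose(samples, signature_db):
    # signature_db maps past mechanic-confirmed faults to their fingerprints.
    fp = fingerprint(samples)
    return min(signature_db, key=lambda k: np.linalg.norm(fp - signature_db[k]))
```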

“The next time we hit this signature on the same machine for the same customer or another customer using the same type of machine, it will not just be anomaly detection,” says Shenfeld.

And while 3DSignals focuses entirely on using its machine learning tools to process acoustic data—an area Shenfeld says is surprisingly neglected outside of speech recognition—other companies are using a variety of other types of data to detect and diagnose mechanical problems.

[Photo: Flickr user Sue Clark]

Systems from Augury, a startup with offices in New York and Haifa, Israel, monitor vibrations, temperature, ultrasound, and electromagnetic emissions from machinery, typically large-scale heating and cooling systems. The company offers a portable tool that a technician can use to capture readings on an iPhone or Android device, where an app delivers a real-time diagnosis, as well as a continuous monitoring system, says cofounder and CEO Saar Yoskovitz.

“One of our customers, located in Ohio, had a thunderstorm at 2 a.m., and the whole building went dark,” he says. “The generator was supposed to pick up automatically but it didn’t, and the way that they learned about it was through an alert they got from Augury.”

In more complicated situations, the system uses machine learning-based algorithms and a growing library of signals of different types of errors to suggest what’s wrong and what technicians can do to repair the machine, he says.

“Over time, we’ve collected probably the largest malfunction dictionary in the world for our types of machines,” says Yoskovitz.

For another startup, called Presenso, also based in Haifa, the exact type of data being monitored is less important than how it changes over time and what that signals about the machine’s operation. The company’s systems record data from sensors already installed in industrial equipment as part of existing control processes, streaming readings to Presenso’s cloud servers.

“We don’t care, or we don’t need to know, if the sensor we’re analyzing now measures temperature or voltage or flow,” says CEO and cofounder Eitan Vesely.

Presenso’s software tools use a variety of machine learning techniques to effectively build a model of a machine’s operations based on the sensor data it receives. The tools provide visualizations of unusual readings and of how different they are from normal operation.

“They don’t need any human guidance or [to] know what the physical attributes are that are being measured,” Vesely says. “The goal is for them to learn by themselves how the machine operates.”
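
One common sensor-agnostic way to do that is to learn the correlations that hold during normal operation and score new readings by how badly the learned model reconstructs them, with no need to know what any channel measures. The sketch below uses plain PCA as the model; it illustrates the general idea only and is not Presenso’s product.

```python
# Sensor-agnostic anomaly scoring via PCA reconstruction error.
import numpy as np

rng = np.random.default_rng(1)
# 10,000 historical readings from 8 unlabeled sensor channels (synthetic,
# correlated data standing in for a real machine's history).
normal = rng.normal(size=(10_000, 8)) @ rng.normal(size=(8, 8))

mu, sigma = normal.mean(axis=0), normal.std(axis=0)
X = (normal - mu) / sigma
# Keep the top principal components that describe normal joint behavior.
_, _, vt = np.linalg.svd(X, full_matrices=False)
components = vt[:3]

def anomaly_score(reading):
    z = (reading - mu) / sigma
    recon = (z @ components.T) @ components
    return float(np.linalg.norm(z - recon))   # large = unlike normal behavior

print(anomaly_score(normal[0]))        # low: a reading the model has seen
print(anomaly_score(normal[0] + 10))   # high: readings drifting off-model
```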

And while real-time breakdown detection is obviously useful for companies operating machines in their own facilities, experts say it could also be useful to equipment vendors or insurance companies providing financial coverage in the case of downtime.

“If a company has downtime or business interruption insurance, that’s something where it becomes incredibly relevant and also a money saver,” says Ryan Martin, a senior analyst at ABI Research.

Having more reliable predictions of when machines need maintenance could also spur equipment makers to effectively offer time as a service, charging industrial users for extended warranties or even by the hour for the guaranteed use of their equipment, without incurring too much risk to themselves, says Augury’s Yoskovitz.

“One of the problems with this business model is they hold all the risk,” he says. “If anything goes wrong, they pay for it, and they’re looking for ways to lower this risk.”

With Mastercard’s Identity Check, your face is the latest form of biometric identification.


MASTERCARD

Selfies are no longer just a social media irritant: With Mastercard’s Identity Check, your face is the latest form of biometric identification. In October, Mastercard rolled out the new mobile-payment ID feature, which allows e-commerce customers to verify themselves at checkout simply by capturing an image of their face on their phone. “One of the biggest consumer pain points is when you are prompted for a password,” says Ajay Bhalla, who oversees Mastercard’s security efforts. “That leads to a lot of friction: unhappy consumers, merchants losing sales, banks losing volume.”

Two years in the making, Identity Check works much like the iPhone’s fingerprint ID system. Users set it up in advance by taking a series of photos from several different angles. That info is then turned into an algorithm, which Identity Check can match to a fresh picture at the moment of transaction. “Biometrics is actually safer [than a password] because it is really you,” says Bhalla.
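
In generic form, that enroll-then-verify flow reduces to comparing face embeddings: average several enrollment embeddings into a reference template, then accept a checkout photo whose embedding is close enough. The embed_face function below is a deliberately fake stand-in (a fixed random projection of pixels) so the sketch runs end to end; a real system would use a trained face-recognition network, and nothing here reflects Mastercard’s implementation.

```python
# Enroll-then-verify sketch using cosine similarity of face embeddings.
# embed_face is a placeholder, NOT a real face recognizer.
import numpy as np

def embed_face(image):
    prng = np.random.default_rng(0)             # same projection every call
    proj = prng.normal(size=(128, image.size))
    v = proj @ (image.ravel() - image.mean())   # center pixels, then project
    return v / np.linalg.norm(v)

def enroll(images):
    # Setup: photos from several angles -> one averaged reference template.
    template = np.mean([embed_face(img) for img in images], axis=0)
    return template / np.linalg.norm(template)

def verify(template, checkout_image, threshold=0.8):
    # Transaction: accept if cosine similarity clears the threshold.
    return float(template @ embed_face(checkout_image)) >= threshold

rng = np.random.default_rng(7)
face = rng.random((64, 64))
template = enroll([face + 0.01 * rng.random((64, 64)) for _ in range(5)])
print(verify(template, face + 0.01 * rng.random((64, 64))))   # True: same face
print(verify(template, rng.random((64, 64))))                 # False: stranger
```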

Shoppers weary of passwords might be excited to hear the news, but they should be careful not to get too happy. “The biggest issue we face is that people naturally start smiling when they take a picture,” says Bhalla, “and that messes with the algorithm.”

Mastercard is also exploring other new methods for identifying customers, such as iris scanning, voice recognition, and even heartbeat monitoring. Blood-pumping patterns, it turns out, are as unique as fingerprints. —Nikita Richardson

Milestones: In September, Mastercard launched a program to help developers integrate their services into new technologies such as virtual reality, augmented reality, and the internet of things.

Challenges: Mastercard has been hit with a $19 billion class-action lawsuit over allegations that the company charged illegal fees to U.K. vendors, leading to higher prices for consumers. (“[We] firmly disagree with the basis of this claim and we intend to oppose it,” says a Mastercard spokesperson.)

Mastercard’s new facial-recognition tech Identity Check works like the iPhone’s fingerprint system. [Illustration: Laura Breiling]