Category Archives: Internet of Things (IoT)



in Fast Company

January 11, 2016


On a hot, swampy November afternoon at IMG Academy in Bradenton, Florida, the private school’s boys’ lacrosse team is huddling around the bench draining water bottles after a round of drills. It’s a scene like any other on high school fields around the country . . . except here there’s a small crew of men sitting off to the side with laptops, running diagnostics on the team’s fluid intake, tracking each player individually through smart chip–enabled refillable water bottles.

Each “smart cap” bottle is digitally linked to a specific player. It works with an app to calculate how much he sweats in an average practice, how much sodium he loses, and how much he needs to drink to maintain optimal performance. Each bottle is filled with a drink formula that corresponds to an individual player’s sweat type. A microchip and a small turbine in the spout measure how much he takes with every sip. LED lights on the cap help him pace his drinking, showing whether he’s ahead of or behind his target. This is the future of athletic hydration. It’s also the future of Gatorade.

The brand’s new high-tech focus can be traced back to senior vice president and general manager Brett O’Brien’s decision in 2014 to build an internal innovation unit to look beyond bottle shapes and new flavors and toward a higher mission. After all, something had to be done. Gatorade sales in the first half of 2009 had fallen 18% year over year; new competitors such as VitaminWater, Red Bull, and Monster had gained influence and market share; and the product, born 50 years ago in a University of Florida lab as the original sports specialty drink, had become known more as a hangover helper than a high-performance elixir. O’Brien and the innovation group set about getting Gatorade back into shape, transforming it into an elite sports brand on par with Nike and Under Armour. “It’s not just about capturing a bigger part of a marketplace,” O’Brien says. “It’s about filling a void for your consumer, and ours are athletes.”

Gatorade started by looking at how athletes were already using its products. For instance, pro athletes had long been mixing sodium powder packs and other additives into their Gatorade (the brand even makes its own, called Gatorlytes). Now the company is looking to cut out the extra steps by creating 12 different formulas served in small egg-shaped pods that mix with water—to be used in new bottles featuring the smart-cap design. The formulas will range in carb, calorie, and electrolyte levels to optimize fluid recovery. To know which formula is right for which athlete, the brand has developed a suite of products and technologies that work together to measure and track individuals’ data. A smart scale linked to a tablet and software, for example, catalogs a player’s weight, along with training time and intensity, to make fluid and carb-intake recommendations. A patch, like a near-field communication chip-enabled Band-Aid, will analyze a player’s sweat and communicate with the digital platform to identify his sweat type—which will determine sodium, electrolyte, and additional fluid-intake needs.

All of this technology was developed in conjunction with the brand’s nutrition-minded Gatorade Sports Science Institute, fueled by PepsiCo’s 40% increase in R&D spending between 2011 and 2014. Now Gatorade is testing it on the field with top high school, college, and pro athletes, including the Brazilian national soccer team, the Kansas City Chiefs, the Boston Celtics, FC Barcelona, and the University of Florida. For Kansas City Chiefs wide receiver Jeremy Maclin, the idea of customized hydration is intriguing. “I don’t think there’s any athlete on this earth who wouldn’t take an advantage if proven to work,” he says.

Xavi Cortadellas, Gatorade’s global innovation and design senior director (who joined the brand in 2011 after 11 years at Nike), believes the move into customization and personalization was inevitable, given the rise of data-driven insight in sports in general. “It’s about whether you want to be a premium brand or not,” he says. “If you want to play in the premium space, then you have to be delivering personalization.” As part of Cortadellas’s plan to scale these new ideas, a simple version of the bottle will be available to the public by mid-2016—it can be customized with digitally printed cap graphics including team and player names, numbers, and logos. “Here at IMG, one of the things the kids were really excited about was having the ability to customize the bottle ‘just like NikeID personalized my shoes,’ ” says Cortadellas. “That’s what you want to hear.”

Ultimately, Gatorade wants to move beyond the field. The brand is developing and testing a new line of food products expected to hit shelves over the next several years—protein bars, vegetable-based nitrate boosts, protein-enriched yogurt—that will embody what nutritionists have long been preaching to athletes: that they are athletes all day, even at rest. “In all these spaces where we have no or minimal share, the potential for growth is gigantic,” Cortadellas says.

Not long ago, a Gatorade ad tagline asked, “Is it in you?” If Cortadellas and his colleagues are right, the question will soon become moot. They’ll already know.

Other applications for the technology behind bitcoin.

The great chain of being sure about things
The technology behind bitcoin lets people who do not know or trust each other build a dependable ledger. This has implications far beyond the cryptocurrency

WHEN the Honduran police came to evict her in 2009 Mariana Catalina Izaguirre had lived in her lowly house for three decades. Unlike many of her neighbours in Tegucigalpa, the country’s capital, she even had an official title to the land on which it stood. But the records at the country’s Property Institute showed another person registered as its owner, too—and that person convinced a judge to sign an eviction order. By the time the legal confusion was finally sorted out, Ms Izaguirre’s house had been demolished.

It is the sort of thing that happens every day in places where land registries are badly kept, mismanaged and/or corrupt—which is to say across much of the world. This lack of secure property rights is an endemic source of insecurity and injustice. It also makes it harder to use a house or a piece of land as collateral, stymying investment and job creation.

Such problems seem worlds away from bitcoin, a currency based on clever cryptography which has a devoted following among mostly well-off, often anti-government and sometimes criminal geeks. But the cryptographic technology that underlies bitcoin, called the “blockchain”, has applications well beyond cash and currency. It offers a way for people who do not know or trust each other to create a record of who owns what that will compel the assent of everyone concerned. It is a way of making and preserving truths.

That is why politicians seeking to clean up the Property Institute in Honduras have asked Factom, an American startup, to provide a prototype of a blockchain-based land registry. Interest in the idea has also been expressed in Greece, which has no proper land registry and where only 7% of the territory is adequately mapped.

A place in the past

Other applications for blockchain and similar “distributed ledgers” range from thwarting diamond thieves to streamlining stockmarkets: the NASDAQ exchange will soon start using a blockchain-based system to record trades in privately held companies. The Bank of England, not known for technological flights of fancy, seems electrified: distributed ledgers, it concluded in a research note late last year, are a “significant innovation” that could have “far-reaching implications” in the financial industry.

The politically minded see the blockchain reaching further than that. When co-operatives and left-wingers gathered for this year’s OuiShare Fest in Paris to discuss ways that grass-roots organisations could undermine giant repositories of data like Facebook, the blockchain made it into almost every speech. Libertarians dream of a world where more and more state regulations are replaced with private contracts between individuals—contracts which blockchain-based programming would make self-enforcing.

The blockchain began life in the mind of Satoshi Nakamoto, the brilliant, pseudonymous and so far unidentified creator of bitcoin—a “purely peer-to-peer version of electronic cash”, as he put it in a paper published in 2008. To work as cash, bitcoin had to be able to change hands without being diverted into the wrong account and to be incapable of being spent twice by the same person. To fulfil Mr Nakamoto’s dream of a decentralised system the avoidance of such abuses had to be achieved without recourse to any trusted third party, such as the banks which stand behind conventional payment systems.

It is the blockchain that replaces this trusted third party. A database that contains the payment history of every bitcoin in circulation, the blockchain provides proof of who owns what at any given juncture. This distributed ledger is replicated on thousands of computers—bitcoin’s “nodes”—around the world and is publicly available. But for all its openness it is also trustworthy and secure. This is guaranteed by the mixture of mathematical subtlety and computational brute force built into its “consensus mechanism”—the process by which the nodes agree on how to update the blockchain in the light of bitcoin transfers from one person to another.

Let us say that Alice wants to pay Bob for services rendered. Both have bitcoin “wallets”—software which accesses the blockchain rather as a browser accesses the web, but does not identify the user to the system. The transaction starts with Alice’s wallet proposing that the blockchain be changed so as to show Alice’s wallet a little emptier and Bob’s a little fuller.

The network goes through a number of steps to confirm this change. As the proposal propagates over the network the various nodes check, by inspecting the ledger, whether Alice actually has the bitcoin she now wants to spend. If everything looks kosher, specialised nodes called miners will bundle Alice’s proposal with other similarly reputable transactions to create a new block for the blockchain.
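The ledger check the nodes perform can be sketched in a few lines of Python. This is a simplification for illustration only: it treats the ledger as a list of (sender, recipient, amount) records and computes account balances, whereas real bitcoin nodes track unspent transaction outputs rather than balances.

```python
def sufficient_funds(ledger, payer, amount):
    """Verify a proposed payment by replaying the ledger: the payer's
    recorded inflows minus outflows must cover the amount."""
    inflow = sum(amt for _, recipient, amt in ledger if recipient == payer)
    outflow = sum(amt for sender, _, amt in ledger if sender == payer)
    return inflow - outflow >= amount

ledger = [("mint", "alice", 25), ("alice", "bob", 10)]
print(sufficient_funds(ledger, "alice", 15))  # True: 25 in, 10 out
print(sufficient_funds(ledger, "alice", 16))  # False: she only has 15
```

Because every node holds a full copy of the ledger, every node can run this check for itself; no bank needs to be asked.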

This entails repeatedly feeding the data through a cryptographic “hash” function which boils the block down into a string of digits of a given length (see diagram). Like a lot of cryptography, this hashing is a one-way street. It is easy to go from the data to their hash; impossible to go from the hash back to the data. But though the hash does not contain the data, it is still unique to them. Change what goes into the block in any way—alter a transaction by a single digit—and the hash would be different.
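The one-way street is easy to see with Python's standard library. Bitcoin's hash function is SHA-256, applied twice; a single application is enough to show the behaviour described above, where the slightest change to the input produces an unrelated digest.

```python
import hashlib

def block_digest(block: bytes) -> str:
    # Bitcoin hashes block data with SHA-256 twice; one round suffices here
    return hashlib.sha256(block).hexdigest()

print(block_digest(b"alice pays bob 1"))
print(block_digest(b"alice pays bob 2"))  # one digit changed, unrelated hash
```

Both digests are 64 hex characters regardless of how much data went in, and nothing about either hints at the transaction it summarises.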

Running in the shadows

That hash is put, along with some other data, into the header of the proposed block. This header then becomes the basis for an exacting mathematical puzzle which involves using the hash function yet again. This puzzle can only be solved by trial and error. Across the network, miners grind through trillions and trillions of possibilities looking for the answer. When a miner finally comes up with a solution other nodes quickly check it (that’s the one-way street again: solving is hard but checking is easy), and each node that confirms the solution updates the blockchain accordingly. The hash of the header becomes the new block’s identifying string, and that block is now part of the ledger. Alice’s payment to Bob, and all the other transactions the block contains, are confirmed.
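The trial-and-error search amounts to grinding through candidate numbers ("nonces") until one produces an acceptable hash. As a sketch, take "acceptable" to mean a run of leading zeros; real bitcoin compares the hash against a 256-bit difficulty target, but the principle is the same.

```python
import hashlib

def mine(header: bytes, difficulty: int = 4):
    """Grind through nonces until the hash starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest  # finding is slow; checking takes one hash
        nonce += 1

nonce, digest = mine(b"proposed block header")
```

Each extra zero of difficulty multiplies the expected work sixteenfold, which is how the network keeps block production to roughly one every ten minutes as mining hardware improves.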

This puzzle stage introduces three things that add hugely to bitcoin’s security. One is chance. You cannot predict which miner will solve a puzzle, and so you cannot predict who will get to update the blockchain at any given time, except in so far as it has to be one of the hard-working miners, not some random interloper. This makes cheating hard.

The second addition is history. Each new header contains a hash of the previous block’s header, which in turn contains a hash of the header before that, and so on and so on all the way back to the beginning. It is this concatenation that makes the blocks into a chain. Starting from all the data in the ledger it is trivial to reproduce the header for the latest block. Make a change anywhere, though—even back in one of the earliest blocks—and that changed block’s header will come out different. This means that so will the next block’s, and all the subsequent ones. The ledger will no longer match the latest block’s identifier, and will be rejected.
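That ripple effect can be demonstrated with a toy chain. The sketch below folds each block's data together with the previous header's hash; real headers also carry timestamps, nonces and a summary (the Merkle root) of the block's transactions, but the linkage works the same way.

```python
import hashlib

def build_chain(blocks):
    """Each header hashes its block's data together with the previous header."""
    prev, headers = "", []
    for data in blocks:
        prev = hashlib.sha256((prev + data).encode()).hexdigest()
        headers.append(prev)
    return headers

honest = build_chain(["genesis", "alice->bob", "bob->carol"])
forged = build_chain(["genesis", "alice->alice", "bob->carol"])
# headers match up to the altered block, then diverge to the end of the chain
```

Rewriting one early transaction changes that block's header, which changes the next header, and so on: the forgery announces itself in the chain's final identifier.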

Is there a way round this? Imagine that Alice changes her mind about paying Bob and tries to rewrite history so that her bitcoin stays in her wallet. If she were a competent miner she could solve the requisite puzzle and produce a new version of the blockchain. But in the time it took her to do so, the rest of the network would have lengthened the original blockchain. And nodes always work on the longest version of the blockchain there is. This rule stops the occasions when two miners find the solution almost simultaneously from causing anything more than a temporary fork in the chain. It also stops cheating. To force the system to accept her new version Alice would need to lengthen it faster than the rest of the system was lengthening the original. Short of controlling more than half the computers—known in the jargon as a “51% attack”—that should not be possible.

Dreams are sometimes catching

Leaving aside the difficulties of trying to subvert the network, there is a deeper question: why bother to be part of it at all? Because the third thing the puzzle-solving step adds is an incentive. Forging a new block creates new bitcoin. The winning miner earns 25 bitcoin, worth about $7,500 at current prices.

All this cleverness does not, in itself, make bitcoin a particularly attractive currency. Its value is unstable and unpredictable (see chart), and the total amount in circulation is deliberately limited. But the blockchain mechanism works very well. According to one website that tracks such things, on an average day more than 120,000 transactions are added to the blockchain, representing about $75m exchanged. There are now 380,000 blocks; the ledger weighs in at nearly 45 gigabytes.

Most of the data in the blockchain are about bitcoin. But they do not have to be. Mr Nakamoto has built what geeks call an “open platform”—a distributed system the workings of which are open to examination and elaboration. The paragon of such platforms is the internet itself; other examples include operating systems like Android or Windows. Applications that depend on basic features of the blockchain can thus be developed without asking anybody for permission or paying anyone for the privilege. “The internet finally has a public database,” says Chris Dixon of Andreessen Horowitz, a venture-capital firm which has financed several bitcoin start-ups, including Coinbase, which provides wallets, and 21, which makes bitcoin-mining hardware for the masses.

For now blockchain-based offerings fall in three buckets. The first takes advantage of the fact that any type of asset can be transferred using the blockchain. One of the startups betting on this idea is Colu. It has developed a mechanism to “dye” very small bitcoin transactions (called “bitcoin dust”) by adding extra data to them so that they can represent bonds, shares or units of precious metals.

Protecting land titles is an example of the second bucket: applications that use the blockchain as a truth machine. Bitcoin transactions can be combined with snippets of additional information which then also become embedded in the ledger. It can thus be a registry of anything worth tracking closely. Everledger uses the blockchain to protect luxury goods; for example it will stick on to the blockchain data about a stone’s distinguishing attributes, providing unchallengeable proof of its identity should it be stolen. Onename stores personal information in a way that is meant to do away with the need for passwords; CoinSpark acts as a notary. Note, though, that for these applications, unlike for pure bitcoin transactions, a certain amount of trust is required; you have to believe the intermediary will store the data accurately.

It is the third bucket that contains the most ambitious applications: “smart contracts” that execute themselves automatically under the right circumstances. Bitcoin can be “programmed” so that it only becomes available under certain conditions. One use of this ability is to defer the payment miners get for solving a puzzle until 99 more blocks have been added—which provides another incentive to keep the blockchain in good shape.
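In spirit, a programmable payment is just money gated behind a rule. Bitcoin's actual contract language, Script, is deliberately far more constrained than ordinary code, but the maturity rule on mining rewards can be sketched as a simple predicate over block height (the helper below is a hypothetical illustration, not bitcoin's real mechanism):

```python
def time_locked_reward(amount, created_at, maturity=100):
    """Mining rewards only become spendable after `maturity` further blocks."""
    def spendable(current_height):
        return amount if current_height - created_at >= maturity else 0
    return spendable

reward = time_locked_reward(25, created_at=380_000)
print(reward(380_050))  # 0: still locked
print(reward(380_100))  # 25: matured
```

A miner who tried to cheat would thus forfeit rewards that have not yet matured, which keeps everyone interested in the chain's continued good health.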

Lighthouse, a project started by Mike Hearn, one of bitcoin’s leading programmers, is a decentralised crowdfunding service that uses these principles. If enough money is pledged to a project it all goes through; if the target is never reached, none does. Mr Hearn says his scheme will both be cheaper than non-bitcoin competitors and also more independent, as governments will be unable to pull the plug on a project they don’t like.

Energy is contagious

The advent of distributed ledgers opens up an “entirely new quadrant of possibilities”, in the words of Albert Wenger of USV, a New York venture firm that has invested in startups such as OpenBazaar, a middleman-free peer-to-peer marketplace. But for all that the blockchain is open and exciting, sceptics argue that its security may yet be fallible and its procedures may not scale. What works for bitcoin and a few niche applications may be unable to support thousands of different services with millions of users.

Though Mr Nakamoto’s subtle design has so far proved impregnable, academic researchers have identified tactics that might allow a sneaky and well-financed miner to compromise the blockchain without direct control of 51% of it. And getting control of an appreciable fraction of the network’s resources looks less unlikely than it used to. Once the purview of hobbyists, bitcoin mining is now dominated by large “pools”, in which small miners share their efforts and rewards, and the operators of big data centres, many based in areas of China, such as Inner Mongolia, where electricity is cheap.

Another worry is the impact on the environment. With no other way to establish the bona fides of miners, the bitcoin architecture forces them to do a lot of hard computing; this “proof of work”, without which there can be no reward, ensures that all concerned have skin in the game. But it adds up to a lot of otherwise pointless computing. By one count the network’s miners are now trying 450 thousand trillion solutions per second. And every calculation takes energy.

Because miners keep details of their hardware secret, nobody really knows how much power the network consumes. If everyone were using the most efficient hardware, its annual electricity usage might be about two terawatt-hours—a bit more than the amount used by the 150,000 inhabitants of Kings County in California’s Central Valley. Make really pessimistic assumptions about the miners’ efficiency, though, and you can get the figure up to 40 terawatt-hours, almost two-thirds of what the 10m people in Los Angeles County get through. That surely overstates the problem; still, the more widely people use bitcoin, the worse the waste could get.

Yet for all this profligacy bitcoin remains limited. Because Mr Nakamoto decided to cap the size of a block at one megabyte, or about 1,400 transactions, it can handle only around seven transactions per second, compared to the 1,736 a second Visa handles in America. Blocks could be made bigger; but bigger blocks would take longer to propagate through the network, worsening the risks of forking.

Earlier platforms have surmounted similar problems. When millions went online after the invention of the web browser in the 1990s pundits predicted the internet would grind to a standstill: eppur si muove (and yet it moves). Similarly, the bitcoin system is not standing still. Specialised mining computers can be very energy efficient, and less energy-hungry alternatives to the proof-of-work mechanism have been proposed. Developers are also working on an add-on called “Lightning” which would handle large numbers of smaller transactions outside the blockchain. Faster connections will let bigger blocks propagate as quickly as small ones used to.

The problem is not so much a lack of fixes. It is that the network’s “bitcoin improvement process” makes it hard to choose one. Change requires community-wide agreement, and these are not people to whom consensus comes easily. Consider the civil war being waged over the size of blocks. One camp frets that quickly increasing the block size will lead to further concentration in the mining industry and turn bitcoin into more of a conventional payment processor. The other side argues that the system could crash as early as next year if nothing is done, with transactions taking hours.

A break in the battle

Mr Hearn and Gavin Andresen, another bitcoin grandee, are leaders of the big-block camp. They have called on mining firms to install a new version of bitcoin which supports a much bigger block size. Some miners who do, though, appear to be suffering cyber-attacks. And in what seems a concerted effort to show the need for, or the dangers of, such an upgrade, the system is being driven to its limits by vast numbers of tiny transactions.

This has all given new momentum to efforts to build an alternative to the bitcoin blockchain, one that might be optimised for the storing of distributed ledgers rather than for the running of a cryptocurrency. MultiChain, a build-your-own-blockchain platform offered by Coin Sciences, another startup, demonstrates what is possible. As well as offering the wherewithal to build a public blockchain like bitcoin’s, it can also be used to build private chains open only to vetted users. If all the users start off trusted the need for mining and proof-of-work is reduced or eliminated, and a currency attached to the ledger becomes an optional extra.

The first industry to adopt such sons of blockchain may well be the one whose failings originally inspired Mr Nakamoto: finance. In recent months there has been a rush of bankerly enthusiasm for private blockchains as a way of keeping tamper-proof ledgers. One of the reasons, irony of ironies, is that this technology born of anti-government libertarianism could make it easier for the banks to comply with regulatory requirements on knowing their customers and anti-money-laundering rules. But there is a deeper appeal.

Industrial historians point out that new powers often become available long before the processes that best use them are developed. When electric motors were first developed they were deployed like the big hulking steam engines that came before them. It took decades for manufacturers to see that lots of decentralised electric motors could reorganise every aspect of the way they made things. In its report on digital currencies, the Bank of England sees something similar afoot in the financial sector. Thanks to cheap computing financial firms have digitised their inner workings; but they have not yet changed their organisations to match. Payment systems are mostly still centralised: transfers are cleared through the central bank. When financial firms do business with each other, the hard work of synchronising their internal ledgers can take several days, which ties up capital and increases risk.

Distributed ledgers that settle transactions in minutes or seconds could go a long way to solving such problems and fulfilling the greater promise of digitised banking. They could also save banks a lot of money: according to Santander, a bank, by 2022 such ledgers could cut the industry’s bills by up to $20 billion a year. Vendors still need to prove that they could deal with the far-higher-than-bitcoin transaction rates that would be involved; but big banks are already pushing for standards to shape the emerging technology. One of them, UBS, has proposed the creation of a standard “settlement coin”. The first order of business for R3 CEV, a blockchain startup in which UBS has invested alongside Goldman Sachs, JPMorgan and 22 other banks, is to develop a standardised architecture for private ledgers.

The banks’ problems are not unique. All sorts of companies and public bodies suffer from hard-to-maintain and often incompatible databases and the high transaction costs of getting them to talk to each other. This is the problem Ethereum, arguably the most ambitious distributed-ledger project, wants to solve. The brainchild of Vitalik Buterin, a 21-year-old Canadian programming prodigy, Ethereum’s distributed ledger can deal with more data than bitcoin’s can. And it comes with a programming language that allows users to write more sophisticated smart contracts, thus creating invoices that pay themselves when a shipment arrives or share certificates which automatically send their owners dividends if profits reach a certain level. Such cleverness, Mr Buterin hopes, will allow the formation of “decentralised autonomous organisations”—virtual companies that are basically just sets of rules running on Ethereum’s blockchain.

One of the areas where such ideas could have radical effects is in the “internet of things”—a network of billions of previously mute everyday objects such as fridges, doorstops and lawn sprinklers. A recent report from IBM entitled “Device Democracy” argues that it would be impossible to keep track of and manage these billions of devices centrally, and unwise to try; such attempts would make them vulnerable to hacking attacks and government surveillance. Distributed registers seem a good alternative.

The sort of programmability Ethereum offers does not just allow people’s property to be tracked and registered. It allows it to be used in new sorts of ways. Thus a car-key embedded in the Ethereum blockchain could be sold or rented out in all manner of rule-based ways, enabling new peer-to-peer schemes for renting or sharing cars. Further out, some talk of using the technology to make by-then-self-driving cars self-owning, to boot. Such vehicles could stash away some of the digital money they make from renting out their keys to pay for fuel, repairs and parking spaces, all according to preprogrammed rules.

What would Rousseau have said?

Unsurprisingly, some think such schemes overly ambitious. Ethereum’s first (“genesis”) block was only mined in August and, though there is a little ecosystem of start-ups clustered around it, Mr Buterin admitted in a recent blog post that it is somewhat short of cash. But the details of which particular blockchains end up flourishing matter much less than the broad enthusiasm for distributed ledgers that is leading both start-ups and giant incumbents to examine their potential. Despite society’s inexhaustible ability to laugh at accountants, the workings of ledgers really do matter.

Today’s world is deeply dependent on double-entry book-keeping. Its standardised system of recording debits and credits is central to any attempt to understand a company’s financial position. Whether modern capitalism absolutely required such book-keeping in order to develop, as Werner Sombart, a German sociologist, claimed in the early 20th century, is open to question. Though the system began among the merchants of renaissance Italy, which offers an interesting coincidence of timing, it spread round the world much more slowly than capitalism did, becoming widely used only in the late 19th century. But there is no question that the technique is of fundamental importance not just as a record of what a company does, but as a way of defining what one can be.

Ledgers that no longer need to be maintained by a company—or a government—may in time spur new changes in how companies and governments work, in what is expected of them and in what can be done without them. A realisation that systems without centralised record-keeping can be just as trustworthy as those that have them may bring radical change.

Such ideas can expect some eye-rolling—blockchains are still a novelty applicable only in a few niches, and the doubts as to how far they can spread and scale up may prove well founded. They can also expect resistance. Some of bitcoin’s critics have always seen it as the latest techy attempt to spread a “Californian ideology” which promises salvation through technology-induced decentralisation while ignoring and obfuscating the realities of power—and happily concentrating vast wealth in the hands of an elite. The idea of making trust a matter of coding, rather than of democratic politics, legitimacy and accountability, is not necessarily an appealing or empowering one.

At the same time, a world with record-keeping mathematically immune to manipulation would have many benefits. Evicted Ms Izaguirre would be better off; so would many others in many other settings. If blockchains have a fundamental paradox, it is this: by offering a way of setting the past and present in cryptographic stone, they could make the future a very different place.


“Soft” Sensors Are Breaking Into Four Major Industries

From something as simple as a door sensor at a store to the new age of “smart” sensors in a rapidly emerging wearables market, the application of sensors has already permeated many parts of our everyday lives.

But the future of dynamic sensor applications goes beyond just measuring your heart rate through the new Apple Watch; it is about how readily available sensor technology can be seamlessly intertwined with our daily activities to give us continuous insight into our own health and the way we interact with the world around us.

This is not just about wearables. Companies within the medicine, military and sport industries have all been increasing the connectivity of their respective products and workflows to the Internet of Things (IoT), and have used this connectivity and awareness to create powerful tools that enable better understanding, better analyses and better decisions.

Sensors fuel this process. But like all tools, it’s important to use the right one for the job. Not every type of sensor is fit to measure the range of lifestyles that we lead, or how our environment affects us. People are dynamic beings with soft bodies that react differently when we interact with the different situations and products around us.

A Step Up With Soft Sensors

Unlike conventional sensors that focus on the movement and characteristics of hard objects, soft sensors have been developed with the body and other “soft” structures in mind. Whether they are a few millimeters in diameter or the size of a sheet of paper, these sensors provide highly accurate and repeatable data about any change in the shape of these soft structures.

Technology should enhance our lives, not distract us from them.

Soft and stretchy sensors can be used to measure human movement directly without interfering with that movement. They can be placed on the body, discreetly integrated into our clothes or otherwise glued, sewn or moulded into anything soft. This enables products to be designed to fit the way our bodies and other soft structures work, not the other way around.

The opportunities are limitless. Stretch sensors have massive potential to disrupt much more than just the realm of consumer wearables.


The most obvious application for soft sensors is in the sports sphere. The connected athlete already has a suite of wearable tech tools at their disposal, such as armbands and wristbands that measure distance, time and routes. However, many of these products are focused purely on monitoring biometric data.

By contrast, barely-there soft sensors add a level of bio-mechanical data that gives athletes and coaches a better understanding of body motion, muscle contraction, breathing rates, movement techniques, posture and risk of injury for each individual.
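To make this concrete: stretch sensors of this kind (StretchSense's are capacitive) report an electrical value that tracks elongation. The sketch below is purely illustrative; the calibration constants, the linear model and the capacitance-to-angle mapping are all invented assumptions, not figures from any real datasheet.

```python
# Hypothetical sketch: converting a capacitive stretch-sensor reading
# into strain and an approximate joint angle. The calibration values
# (rest capacitance, sensitivity, degrees-per-strain) are invented
# for illustration, not taken from any real product datasheet.

C_REST = 100.0         # capacitance at rest, in picofarads (assumed)
SENSITIVITY = 1.0      # picofarads of change per 1% strain (assumed)
DEG_PER_PERCENT = 1.5  # knee flexion degrees per 1% strain (assumed)

def strain_percent(capacitance_pf):
    """Percent elongation implied by a capacitance reading."""
    return (capacitance_pf - C_REST) / SENSITIVITY

def knee_angle(capacitance_pf):
    """Approximate flexion angle from a sensor sewn over the knee."""
    return strain_percent(capacitance_pf) * DEG_PER_PERCENT

reading = 140.0  # pF, mid-squat
print(strain_percent(reading))  # 40.0 (% strain)
print(knee_angle(reading))      # 60.0 (degrees)
```

In practice the capacitance-to-strain curve would come from a per-sensor calibration, but even a linear model like this shows how a raw electrical signal becomes biomechanical data a coach can read.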

Companies such as Heddoko have already started integrating StretchSense’s fabric stretch sensor into compression clothing to continuously track body movement and guide athletes toward optimal performance and precision. The flexibility of the sensors means that they can be applied to any item of clothing or footwear without restricting any part of the athlete’s natural movement and performance.

When the sensors are paired with wireless technology, athletes can break down and analyse every part of their performance without leaving the sporting venue.


Hospitals and specialist clinics have already benefited greatly from the use of sensors to measure a range of health metrics, like heart rate, blood pressure, glucose levels and much more. However, the healthcare industry still lacks a wide integration of available technologies that could produce better results.

Wireless and wearable soft sensors enable low-level care to be shifted out of the hospital and into the home, allowing accurate self-assessment and ongoing monitoring of patients during home recovery time periods.

For patients who require ongoing physiotherapy, soft sensors allow for the individual monitoring of exercise movements, improving the accuracy of technique and tracking of the recovery progress. Patients can share their data in real time with their specialist, from home or work, saving everyone the time and opportunity cost involved in making a special trip.

At MIT, researchers have created a “7-finger robot” that enhances the grasping mechanism of the human hand by adding two extra “fingers” adjacent to the thumb and pinky finger. The robot’s grasping assistance has the potential to help elderly and disabled patients stay independent for longer.


The automobile sector already uses more than 100 sensors per vehicle (depending on the model) to monitor brakes, tire pressure, temperature and proximity to other vehicles. The majority of these sensors focus on the condition and safety of the car itself. Soft sensors open new ways to monitor and enhance the safety and comfort of the people within the car.

Embedded within a car seat, soft sensors can be used to analyze how people are sitting, showing clearly the weight distribution and posture of the driver or passengers.

The seat can automatically adjust to the personal preference of the person sitting in it and ensure they stay comfortable throughout their journey. Safety features, such as an airbag, can be dynamically geared toward the individual sitting in the seat — whether it’s an adult or a child — enabling the car to deploy the airbag with appropriate pressure and height in the event of an accident.
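As a toy illustration of that idea, one could imagine the seat sensors estimating the occupant's weight and seated height, with the restraint controller selecting a deployment profile from them. Everything here is hypothetical: the classes, thresholds and parameters are invented, and real restraint systems follow strict regulatory requirements (e.g. FMVSS 208) and far richer logic.

```python
# Hypothetical sketch of how a seat's soft-sensor readings might tune
# airbag deployment. The occupant classes and deployment parameters
# are invented for illustration; real restraint systems follow strict
# regulatory standards (e.g. FMVSS 208) and far richer logic.

def airbag_profile(weight_kg, torso_height_cm):
    """Pick a deployment profile from seat-sensor estimates."""
    if weight_kg < 25:
        return {"deploy": False, "reason": "small child: suppress airbag"}
    if weight_kg < 50 or torso_height_cm < 55:
        return {"deploy": True, "pressure": "low", "vent": "early"}
    return {"deploy": True, "pressure": "full", "vent": "normal"}

print(airbag_profile(20, 45))   # small child: airbag suppressed
print(airbag_profile(80, 65))   # adult: full-pressure deployment
```

The point is not the specific numbers but the principle: a soft sensor array turns the seat itself into an input, so the safety system responds to the person rather than a one-size-fits-all assumption.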

Virtual, Augmented Reality

Although large companies like Google have sparked a resurgence of interest in virtual reality, limiting factors still hold it back from being a truly immersive experience, chief among them the lack of input methods for interacting with the digital universe.

Keyboards and touchscreens simply do not work. Unfortunately for the industry, these are still the critical elements that, apart from the visual aspect, largely connect the user to the experience.

Precisely tracking how a person moves in a simple, untethered way not only plugs a major gap in the VR/AR experience, it paves the way for the whole body to become an input device. Soft sensors can make games responsive through the natural movements of a player.

Motion and body language data can be combined with other biometric data to measure a person’s neurological reactions in nearly real-life situations. Now developers can create a truly immersive and truly interactive experience that responds to the people in it.

A Simple Solution

These examples are just the tip of the iceberg. For too long we have focused on repurposing sensor technology created for rigid and precise machines and putting it on soft bodies. However, people are soft and precise, and hard sensors simply cannot tell the whole story.

With soft, precise sensing we now have a way of capturing so much more of the non-verbal communication we take for granted when we interact with each other. Mastering this new contextual awareness will take us one giant leap closer toward a world where technology is smarter, less intrusive and more seamlessly embedded in the fabric of our lives. Because, at the end of the day, technology should enhance our life, not distract us from it.


Supermarket giant Kroger uses temperature sensors to keep frozen foods frozen

Posted on by .

CIO | Jul 31, 2015
By Tom Kaneshige

Every summer, people head to the grocery store in droves to pick up cartons of cold, creamy ice cream. It’s a great way to stay cool. But shoppers will go elsewhere if the frozen dairy treat is crusted with yucky ice crystals–the result of freezing, thawing and refreezing.

No one knows this better than Chris Hjelm, CIO at Kroger, a $108 billion supermarket chain and a 2015 CIO 100 award winner. He says temperature spikes in the refrigerator case can make cold goods go bad, and there are lots of circumstances that can lead to spikes: A compressor may go out, defrost cycles could run too long, a door might have a bad seal, or someone could leave a door ajar.

Hjelm and his research-and-development team decided to try to ward off such problems by turning to Internet of Things (IoT) technologies. He equipped refrigerated containers with sensors that check temperatures every 30 minutes–instead of having employees manually check thermometers twice a day–and then alert store managers and facilities engineers if the mercury hits unsafe levels.
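The alerting logic Hjelm describes can be sketched in a few lines. The safe threshold and the notification mechanism below are assumptions for illustration, not details of Kroger's actual system.

```python
# Illustrative sketch of the alerting described above: a tag reports
# a case temperature every 30 minutes, and out-of-range readings
# trigger a notification. The safe limit and the notify callback are
# assumptions, not Kroger's actual implementation.

SAFE_MAX_F = 0.0   # assumed upper limit for a freezer case, in degrees F

def check_reading(case_id, temp_f, notify):
    """Alert store managers and facilities engineers if unsafe."""
    if temp_f > SAFE_MAX_F:
        notify(f"Case {case_id}: {temp_f}F exceeds {SAFE_MAX_F}F")
        return False
    return True

alerts = []
check_reading("freezer-12", 5.5, alerts.append)   # unsafe: records an alert
check_reading("freezer-12", -8.0, alerts.append)  # safe: no alert
print(alerts)
```

Twice-daily manual thermometer checks can miss a spike entirely; a polling loop like this catches it within one 30-minute cycle, which is the whole value proposition of the tags.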

“It’s something every food retailer will have,” Hjelm predicts.

Today, Kroger’s IoT temperature monitoring system cuts down on the number of cold products that go bad and have to be thrown out, reduces labor and saves energy. And happy customers enjoy better ice cream and other cold and frozen foods.

A typical Kroger store’s temperature monitoring system has more than 220 tags connected to a network that uses the ZigBee low-bandwidth wireless network protocol. Nearly half of the chain’s 2,600 stores have the technology; a complete rollout is expected by early 2016.

Kroger’s temperature monitoring system is part of a massive IoT movement. Some 4.9 billion devices and pieces of industrial equipment will be connected to the Internet this year, primarily in the manufacturing, utilities and transportation sectors, with government adoption following later, according to Gartner.

That’s not to suggest that embracing the IoT is easy.

“It’s not as simple as saying, ‘I’ve got a cool temperature monitoring system, let’s go put it in.’ There are more pieces to this puzzle,” Hjelm says.

Sensory Overload

Hjelm and his team had to spend months tweaking the temperature monitoring system to work with an array of existing refrigerator cases, each with a different defrost cycle. They also had to find tags that were waterproof and humidity-proof. And they needed a data management plan, because the tags throw off a lot of data. On top of all that, employees needed training.

The temperature monitoring system has a strong ROI, but the bigger payoff will come in the future, Hjelm says. This is Kroger’s first foray into the IoT, and it laid the foundation for the deployment of more systems that use tags and monitors.

“We have a pipeline of innovation, such as a mobile shopping system with laser scanners and network-connected LED lighting sensors, that we believe will take advantage of this infrastructure investment,” Hjelm says.

The cherry on top, says Gartner analyst Alfonso Velosa, is that IoT projects can advance a CIO’s career. For example, a store’s refrigerator cases are expensive assets that literally touch the product and ensure its quality–and they don’t belong to the CIO. But when IT puts tags on them, the CIO moves to the front line of the business.

“This is an opportunity for the CIO to engage and become more relevant to the business,” Velosa says.

The Internet of Everything Empowers Cities as Never Before

Posted on by .

The Huffington Post

By John M. Eger, Director of the Creative Economy Initiative at San Diego State University (SDSU)


As San Francisco Mayor Ed Lee said at the City Innovate Summit last week:

Cities today are the engines of the greatest surge in innovation, creativity and problem solving in human history … and cities that think of themselves as platforms will become stronger, attract better talent and become smarter from the bottom up.

Let’s be frank. The strength of America’s economy and our political prowess in the world are inextricably linked. Cities — not the Federal government — are best positioned to renew and reinvent America for the new, global, knowledge-based economy.

In this new world economy where every nation, every community, every individual is competing with every other, as President Obama said before a joint session of Congress:

We need to out-innovate, out-educate, and out-build the rest of the world…We have to make America the best place on earth to do business.

Increasingly, cities across America are starting to change their focus, deploy technology and prepare their citizens to out-innovate, out-educate and out-build every other community, and thus every other nation in the world. It is the cities, not Washington, D.C., that we need to look to for leadership to reinvent America for the new economy.

Of course, whom we elect as president is important, but whoever emerges in 2016, their role will be limited: many of the issues that matter lie outside the domain of the President’s power. Urban scholar and author Neal Peirce observed that national economies no longer exist; there is only a global economy and a “constellation of regional economies, with major cities at their core.”


More recently, Benjamin Barber, author of If Mayors Ruled the World, underscored the notion that cities are critical to almost everything we need as a country, writing:

The nation-state is failing us on the global scale … cities and the mayors that run them, offer the best new forces of good governance… They are the primary incubator of the cultural, social, and political innovations which shape our planet.

Technology, particularly the “Internet of Everything” (IoE), where everything is connected to almost every other thing, is providing cities and their elected officials with the tools to ensure safer cities, better transportation, health care, energy and water conservation, clean air, and environmental services. By installing the broadband necessary for the IoE, cities are also building the platform for innovation.


The new broadband infrastructure coupled with Big Data analysis can serve to make the city government more transparent and also encourage individuals and companies to develop innovative products and services; and importantly, engage the general public to help create “efficiencies that save taxpayer money” and “build trust in the public sector.” That is at the heart of The Responsive City: Engaging Communities Through Data Smart Governance, by former Mayor of Indianapolis and Deputy Mayor of New York, Stephen Goldsmith and Susan Crawford, co-director of the Berkman Center for Internet and Society at Harvard and special assistant to both President Obama and former New York Mayor Michael Bloomberg.

As IBM, which launched a “smart cities initiative” a few years ago, put it:

As cities grow in both numbers and population, they are taking their place on the world’s center stage, with more economic, political and technological power than ever before. Economically, they are becoming the hubs of a globally integrated, services-based society. Politically, they are in the midst of a realignment of power — with greater influence, but also greater responsibility.

Goldman Sachs calls the IoE the third wave, pointing out that while “the 1990s fixed Internet wave connected 1 billion users … the 2000s mobile wave connected another 2 billion,” the IoE has the potential to connect 10 times as many “things” (28 billion) to the Internet by 2020, ranging from bracelets to cars. Gartner research says that this year we already have 4.9 billion connected “things.”

The payoffs are huge.

According to Cisco Systems, the IoE could generate $4.6 trillion in value for the global public sector by 2022 through cost savings, productivity gains, new revenues and improved citizen experiences. Cities have the potential to claim almost two-thirds of the non-defense (civilian) IoE public-sector value. Cities, Cisco believes, will capture much of this value by implementing killer apps, with $100 billion to be saved in smart buildings alone by reducing energy consumption.

City leaders are awakening to the challenges and the opportunities before them. In just the last few years, over 50 cities have joined forces to create an organization called Next Century Cities. Together they have committed to embracing technology; identifying opportunities to change their current systems of police, fire, safety, transportation, health care, water and energy; and finding more ways to collaborate at the state, county and city level, all keys to our nation’s success and survival.

Small Firms Get New Manufacturing Edge: 3-D Printing in the Cloud

Posted on by .

The camera mount on this drone quadcopter was designed by Fusion Imaging and created using Shapeways, a 3D-printing service. PHOTO: FUSION IMAGING

There’s been a lot of talk in recent years about small companies rushing to adopt 3-D printing, taking advantage of the ability to turn computer-generated designs into physical objects. Yet even though the technology can crank out items more cheaply than traditional manufacturing, for many small firms the investment is too big: Higher-end printers that produce finished products can cost thousands or even millions of dollars, and require specialized skills to use.

Enter a new batch of cloud-based services—such as Shapeways, Cubify, i.materialise and Sculpteo—that offer to do the 3-D printing for small businesses. Companies can work up a design and upload it to a cloud server, and the cloud business at the other end 3-D prints it. The cloud companies sometimes offer services such as packaging, shipping and billing as well—allowing small businesses to focus on core areas such as design.

The result, say experts and entrepreneurs, is a chance for firms to launch and crank out products with little or no capital outlay. It also allows them to rethink common problems such as carrying inventory and diversifying their product lines.

“For small businesses and groups within enterprises, 3-D service bureaus will play a growing role,” says Pete Basiliere, an analyst with Gartner Inc. He says such services will open up new avenues for prototyping and manufacturing, as well as low-cost market testing of designs.

An idea from the sky

The story of Warren Alexander and his company, Fusion Imaging, shows how cloud services operate and what they offer. Last year Mr. Alexander, a marketing specialist in Australia, was flying a DJI Phantom hobbyist drone quadcopter when he met Hayden Bao, a designer and fellow drone enthusiast. During the conversation, they realized they both needed easier ways to attach propeller guards to their drones. No products were available, so they decided to make their own.

They put together a design and uploaded the finished file to Shapeways, a 3-D-printing service based in the U.S. and the Netherlands. The Shapeways platform produces about 180,000 products a month for its users, including jewelry, coffee mugs, home décor and accessories for electronic devices. About 40% of its users are small businesses, and the rest are individuals producing goods for their own use.

When Shapeways receives a file like the one from Fusion Imaging, the staff vets it to make sure that it is printable and won’t produce a product with physical defects, such as plastic walls that are too thin to hold their shape. The finished product is posted for sale on the Shapeways online community. When an order comes in, Shapeways manufactures it to order, packages it and sends it to the customer. The site also promotes its designers to its 20,000 members and highlights some of their work at trade shows.

A year after signing on with Shapeways, Mr. Alexander says Fusion has become the largest shop on the site and has sold 3,000 items in 70 countries.

To be sure, there are limits and caveats to these cloud services. For instance, there’s the length of the production cycle. Even though 3-D printing is fast, adding a middleman makes the turnaround longer than it would be if small companies were doing the printing themselves.

Pricing is also more complicated. At Shapeways, for instance, the site determines how much it will cost to manufacture each item, based on the complexity of the design and the materials, and factors in a profit margin for itself; the designer then determines the price that it will charge the retail customer. Shapeways also charges a fee of 3.5% on the marked-up price.
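A worked example may help, on one reading of that fee structure (3.5% applied to the full marked-up retail price). All the dollar figures below are invented for illustration.

```python
# Worked example of the pricing model described above, using invented
# numbers: Shapeways quotes a base manufacturing cost (with its own
# margin built in), the designer sets the retail price, and a 3.5%
# fee is taken on the marked-up price.

base_cost = 10.00     # Shapeways' quoted cost, margin included (assumed)
retail_price = 16.00  # price the designer chooses to charge (assumed)

markup = retail_price - base_cost  # designer's markup
fee = 0.035 * retail_price         # 3.5% fee on the marked-up price
designer_earnings = markup - fee

print(round(markup, 2))             # 6.0
print(round(fee, 2))                # 0.56
print(round(designer_earnings, 2))  # 5.44
```

So on a $16 item the designer nets about $5.44 per sale: no inventory risk, but a materially thinner margin than self-manufacturing at volume, which is the trade-off the article describes.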

Bigger changes afoot

For all that, though, Mr. Alexander expects the use of 3-D printing to spread rapidly. “What is a competitive edge for us is something that everyone will have in a few years,” he says, adding that using a 3-D-printing service allows Fusion Imaging to focus on distinguishing itself with its designs. Along with the propeller guard, the company offers other products such as a GoPro camera mount for the Phantom.

Convenience aside, other entrepreneurs say 3-D printing services change how small businesses approach fundraising and risk.

“3-D-printing services allow you to be your own Kickstarter,” says Robert Blinn, co-founder of New York-based jewelry firm Gotham Smith.

Designers can market-test demand for a new product without spending money on producing inventory, he says. The product can be refined based on feedback on individual orders, using customers to test the product. And the money from initial orders can pay for the cost of production.

Mr. Blinn, who gave up a career in hedge funds to study computer-aided design and create products, says Gotham Smith has had quick success during the past year on Shapeways, and that some of its designs are now for sale at the Museum of Modern Art store.

“We haven’t necessarily grown faster because of 3-D printing. But our startup costs were only $100 a month for Web hosting,” he says. “If we had used traditional manufacturing, we would have had to spend a lot more money upfront to get the same results.”

3-D printing services also allow small businesses to take advantage of what Mr. Blinn calls the “long tail” of product development.

“We can have five or 50 products for sale, and it doesn’t matter, because we don’t have to keep any inventory,” he says. That paves the way for a level of customization and niche targeting that wouldn’t be possible for most conventional manufacturers.

The future is here today: How GE is using the Internet of Things, big data and robotics to power its business

Posted on by .
By Danny Pal in Computing
12 Mar 2015

When your business supplies over 30 per cent of the planet’s power and produces engines for everything from cars to aircraft, it’s important to ensure that both the means of production and the products themselves operate as efficiently as possible.

That’s why General Electric (GE) – the fourth largest company in the world – has invested heavily in the Internet of Things and analytics technologies. As Bill Ruh, GE VP of global software services, told Computing: “It’s about getting better outcomes out of the machines we produce … the most profitable thing we can get to is zero unscheduled downtime.

“If you can make sure your power generation doesn’t go down, you don’t have brownouts, you can change the world and substantially improve industry.”

GE Aviation, the world’s second largest engine manufacturer, is using Internet of Things (IoT) technology to manage and repair jet engines pre-emptively, detecting tiny faults before they have a chance to grow into big ones.

“We’re finding that we can drive huge productivity increases in our teams because we can look at historical data across the fleet and begin to identify problems before they really happen,” Ruh explained.

“We’ve had huge productivity increases because we’re automating how we think about maintaining an aircraft engine,” he continued. “We’re using big data analytics to help drive the maintenance schedule.”

By using sensors to collect engine data, GE is able to perform short bursts of maintenance much earlier on, meaning that over an entire lifecycle the engine spends less time in repairs.
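The fleet-comparison idea can be illustrated with a deliberately simplified sketch: flag a reading that falls outside the band implied by historical data. The data, the monitored parameter and the three-sigma threshold are invented stand-ins; GE's actual analytics are far more sophisticated.

```python
# Simplified sketch of the fleet-comparison idea described above:
# flag an engine reading that drifts outside the band implied by
# historical data across the fleet. The sample data and 3-sigma
# threshold are invented; GE's real analytics are far richer.

import statistics

def is_anomalous(reading, fleet_history, k=3.0):
    """True if reading lies more than k std devs from the fleet mean."""
    mean = statistics.mean(fleet_history)
    std = statistics.stdev(fleet_history)
    return abs(reading - mean) > k * std

history = [612, 608, 615, 610, 609, 613, 611, 607]  # e.g. exhaust temp, invented

print(is_anomalous(611, history))  # typical reading: no action
print(is_anomalous(655, history))  # drifting hot: schedule early inspection
```

Catching the drift early is what enables the "short bursts of maintenance" the article mentions, instead of a major overhaul after the fault has grown.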

Ruh likened it to getting a car repaired, but then discovering something hadn’t been spotted and having to return it to the garage shortly afterwards.

“The thing you hate when taking your car in is when in two months you have to take it back in. If they’d fixed it the first time, then they’d save you money and time,” he said.

“We can do that now. This is going to drive satisfaction rates and time on wing.”

But jet engine maintenance isn’t the only area where GE has harnessed the power of the IoT and big data analytics. Another is what the company calls its “Brilliant Factories Initiative”, which, according to Ruh, is “rethinking what a factory is”.

“Analytics in a factory has probably been underutilised, the data is mostly thrown away,” he explained.

“But we’re not doing that. We’re keeping every piece of data about how the machines operate and we’re using that to continue to identify types of analytics that could drive efficiency in our factory.”

Ruh added that GE is aiming for a “huge” 40 per cent improvement in factory efficiency.

‘If I could do analytics without IoT, I would’

But while connected devices and the Internet of Things are helping to drive improvements, for Ruh, they’re not the most interesting part of factory set-up; that would be the data and what can be mined from it.

“If I could do the analytics without IoT, I would, but I can’t because I need the machine data and machines are chatty. So for us, the IoT is necessary but it’s not the most interesting part. The analytics, the insight you gain: that’s where the value is. It’s just that you need the data in order to gain the insight,” he said.

For a company the size of GE, there’s a lot of data to gain insight from.

“Our current jet aircraft engines produce one terabyte of data per flight,” Ruh said, to illustrate the scale of GE’s data trove. “On average an airline is doing anywhere from five to 10 flights a day, so that’s five to 10 terabytes per plane, so when you’re talking about 20,000 planes in the air you’re talking about an enormous amount of data per day.”
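Taking Ruh's figures at face value, the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the figures Ruh quotes: 1 TB per
# flight, 5 to 10 flights per plane per day, roughly 20,000 planes.

TB_PER_FLIGHT = 1
PLANES = 20_000

low = TB_PER_FLIGHT * 5 * PLANES    # 100,000 TB per day
high = TB_PER_FLIGHT * 10 * PLANES  # 200,000 TB per day

print(f"{low:,} to {high:,} TB per day")
```

That is on the order of 100 to 200 petabytes a day across the fleet, which puts Ruh's "enormous amount of data" into concrete terms.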

Apart from the sheer volume of data, Ruh estimated that the firm analyses 50 million variables from 10 million sensors. These are the sorts of numbers that most industries could not conceive of managing. So when is IoT going to become useful to the mainstream?

Ruh believes that IoT going mainstream may be three to five years away. However, he warned that “if you’re not starting today you won’t be mainstream”. It is especially challenging, he argued, because operations teams rather than IT teams are the ones leading the change.

“The people who run power plants? They’re working with IT, don’t get me wrong, but they’re putting their own operations structure in place that is parallel to the IT structure,” he said.

“They’re already putting sensors in power plants, airlines and oil rigs. The number of sensors, the amount of data and the collection of that is already there,” Ruh continued. The trouble is, he added, the operations teams tend not to link this to analytics, which is where IT needs to get involved.

“The magic has to occur when the IT and the operations teams come together and that’s really where in five years the mainstream will be driven, when these guys are working in tandem,” he said. “GE does that today, but it took us four years to get there and it’s going to take other organisations that long.”

Despite being relatively advanced in its use of IoT and big data processing, GE isn’t resting on its laurels. Rather, it is exploring new areas to deploy the technology, including power generation.

“We’re changing the way we do remote monitoring and diagnostics. GE generates over 30 per cent of electricity in the world on our machines. So we monitor those machines remotely, then we try to proactively understand their behaviour and how to fix them,” Ruh said.

Ultimately, as with the jet engines, use of this technology will allow predictive maintenance, “telling our customers what they should be doing as opposed to what has happened”. For GE, this approach represents “the cornerstone of modernisation”.

‘An R2-D2 for every field worker’

But sensors are just the start. Ruh described how GE is looking to exploit the power of internet-connected robots to carry out “dirty, dull and dangerous tasks”. The firm has already experimented with this in the railway industry.

“We’re testing robotic rail inspectors that will go along and look for problems in rail yards, looking to see if there’s anything broken,” said Ruh. “The nice thing about that is they work 24 hours a day and they can work in the dark as well.”

GE called its first robotic inspector “The Guardian”, because as Ruh pointed out, it’s there to help people, not replace them.

“It’s something which actually works with a human,” he said, adding that GE got inspiration for this model from the film Star Wars.

“Most people think robotics is separate from humans but I look at something like Star Wars, where the robots weren’t there to replace humans, but to help them. So the question is: how do you create an R2-D2 for every field worker?”

Ruh went on to describe how this could raise some interesting questions in the future: as robots get more intelligent people could potentially start treating them on a more equal level.

“People are treating their robots like pets or members of the family. People are giving robot vacuum cleaners names, but not only that – and I think this is strange – but people take them on vacation,” he said.

“The fact is that we’re now seeing the Turing Test being passed every day, with people having no idea that they’re talking to a computer,” Ruh said. By extension, if the trend continues, robots will become widely accepted in homes and workplaces.

“Once robots take on human qualities – and I think that’s what’s going to happen – we’re going to find these things are playing a role in our lives as part of the family,” he concluded.

Big Data Analytics, Mobile Technologies And Robotics Defining The Future Of Digital Factories

Posted on by .
Louis Columbus
Forbes TECH 2/15/2015

47% of manufacturers expect big data analytics to have a major impact on company performance, making it core to the future of digital factories.

36% expect mobile technologies and applications to improve their company’s financial performance today and in the future.

49% expect advanced analytics to reduce operational costs and utilize assets efficiently.

These and additional takeaways are from the well-researched and well-written report from SCM World, The Digital Factory: Game-Changing Technologies That Will Transform Manufacturing Industry (client access), by Pierfrancesco Manenti, November 2014. SCM World and MESA International recently collaborated on a joint survey to map the landscape of manufacturing technology tools and define an investment priority timeline.

Online surveys were sent to corporate members of SCM World and MESA International, with respondents from professional services and software sectors excluded from the analysis. Manufacturing & Production (22%), IT Technology (21%), Operations and Engineering (14% each) and General Management (8%) are the most common job functions of survey respondents. Respondents were distributed across Asia & Australia (22%), Europe, Middle East & Africa (40%) and North & South America (38%).

SCM World also recently launched their Design for Profitability survey to learn about how companies are improving their innovation success rate, and are offering a free copy of the final report in exchange for your participation.

Key take-aways from the study include the following:

  • Mobile technologies and applications (75%), big data analytics (68%) and advanced robotics (64%) are considered the three most disruptive technologies by manufacturers today. SCM World notes that mobile technologies and applications are being progressively adopted across the plant floor, profoundly changing the way manufacturing operations are measured, controlled and supervised. The survey also found manufacturers globally have high expectations for big data analytics providing greater insights into how manufacturing operations can be improved. Cloud computing received a low ranking as a disruptive technology as manufacturers increasingly see it as an enabling technology.
  • Big Data analytics (42%), advanced robotics (30%), mobile technologies and applications (36%), Internet of things/cyber-physical systems (36%) and digital manufacturing (29%) are the top five technologies manufacturers are relying on to improve agility, responsiveness and reliability of their operations. SCM World looked at which technologies will most impact the Supply Chain Operations Reference (SCOR) Models’ five Key Performance Indicators (KPIs) of reliability, responsiveness, agility, costs and asset utilization. SCM World grouped the five KPIs into speed (combining agility, responsiveness and reliability) and efficiency (combining costs and assets utilization). The results of this analysis are presented below.
  • 58% of manufacturers are either piloting or planning to invest in mobile technologies and applications, followed by big data analytics (49%). The following investment priority timeline illustrates how manufacturers are prioritizing their technology investments by future, emerging, current and mature technologies.
  • Comparing the investment priority timeline and level of technology disruption in the following technology investment priority grid further clarifies the impact of each technology on manufacturing. The relative size of the bubbles indicate the overall impact on SCOR Model KPIs. SCM World believes the five technologies to the right of the red curve will drive the greatest disruption in manufacturing in the next few years.
  • Real-time factory performance analysis (57%), real-time planning (including MRP and factory scheduling) (53%), real-time supply chain performance analysis (42%) and production quality and yield management (40%) are the four most likely use cases for big data analytics in the digital factory of the future. Only 4% of manufacturers see no use case for big data analytics in the future.
  • Intel’s big data analytics strategy is paying off. In 2012, Intel saved $3M in manufacturing costs by implementing preventative analysis on just a single line, and is now planning to extend the process to more chip lines and save an additional $30M in the next few years.
  • GE Aviation has estimated that using big data analytics to enable “in process” inspection could increase production speeds by 25%, while cutting down on inspection after the building process is complete by another 25%.
  • Production tracking and remote factory monitoring (60%), track and trace across the supply chain (46%), and extended plant floor automation via machine-to-machine communication (40%) are the three most likely use cases of the Internet of Things in the digital factory of the future. SCM World found that only 6% of manufacturers see no use case for the Internet of Things on the plant floor.
  • Boeing is one of the early adopters of 3D printing technology, making more than 20,000 3D printed parts for 10 different military and commercial planes. SCM World found that the 787 Dreamliner has 30 3D printed parts, including air ducts and hinges, which is a record for the industry. GE Aviation has also acquired a few companies in this area and is planning to open the first high-volume additive manufacturing facility. SCM World found that fast prototyping and production of demo parts for marketing & trade shows (69%), production of low-volume parts or components (51%), and making tools and molds on the plant floor as required (42%) are the three most likely use cases of additive manufacturing and 3D printing in the digital factory of the future.
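The preventive-analysis and real-time factory performance use cases above boil down to watching sensor streams for drift before a line fails. A minimal sketch of that idea (function names, data, and thresholds are hypothetical, not taken from the SCM World report or Intel's actual system):

```python
from collections import deque

def drift_alerts(readings, window=5, tolerance=0.10):
    """Flag readings that deviate from the rolling mean of the previous
    `window` samples by more than `tolerance` (as a fraction) -- a toy
    stand-in for preventive analysis on a production line."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > tolerance * mean:
                alerts.append((i, value))
        recent.append(value)
    return alerts

# Stable temperatures stay quiet; the spike at index 7 is flagged.
temps = [70.0, 70.2, 69.9, 70.1, 70.0, 70.3, 70.1, 85.0, 70.2]
print(drift_alerts(temps))  # -> [(7, 85.0)]
```

A real deployment would use per-machine baselines and far richer models, but the core loop — compare each new reading to recent history and raise an alert on deviation — is the same.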


Areas businesses need to address for broader application of the IIoT


For businesses, three key areas need to be addressed to accelerate the economy-wide, cross-industry application of the IIoT:

Reimagine industry models: If every product is connected and enables a new service, reinventing industry practices and business models becomes paramount. As companies embark on a journey that begins with using the IIoT to improve efficiencies, and progresses to creating outcome-oriented, product-service hybrids, they will need to plan each stage. How can their efforts for improving asset utilization, for example, be used as a platform for new services? Will a company gain most value by offering its own data to an ecosystem of partners, or from incorporating third-party data to enhance its own services? Should a company invest in its own platform or join existing industry platforms? How will its partnerships evolve as a consequence?

Capitalize on the value of data: The power of the IIoT comes not only from generating insightful data from physical objects, but also from sharing it between players within supply chains and cross-industry consortia. According to a survey undertaken by Accenture and GE,10 73 percent of companies are already investing more than 20 percent of their overall technology budget on big data analytics. That shift requires new technical and management skills. Further, it demands a cultural willingness to streamline data flow, not only within enterprises, but also between them. Companies must create new financial and governance models to share the rewards of using common data. Interoperability and security are identified as the greatest hurdles to progress by two-thirds of those companies actively pursuing IIoT initiatives, according to a survey by Accenture, the World Economic Forum and the Industrial Internet Consortium.11 Collaborators should establish their own processes and tests to improve interoperability while establishing common security frameworks. Governments need to work across borders with business and other stakeholders to agree who owns data, what can be shared and how liabilities will be handled across jurisdictions.

Prepare for the future of work: An overwhelming majority of executives (94 percent) believe that the increasing use of smart products and robotics will change the required skill and job mix in the workforce of the future.11 Decision making can be devolved to workers empowered by valuable data, while the design and creative process could become more iterative and experimental. Employees may have to develop working relationships with intelligent machines. And continuous learning could replace traditional training as technologies and business practices evolve quickly. Managers will have to be willing to collapse hierarchies and silos and open up to extended workforces beyond their own walls. Such an approach demands a new culture and tolerance of autonomy. Leaders must also accept the demand for individually tailored working environments and experiences by creative and dispersed workforces, while maintaining core values and a common purpose within their organizations. Companies will have to establish digital platforms to create global talent exchanges that address skills shortages. Digital tools will also accelerate skills development and support a continuous learning culture. Companies will need to reassess their organizational structures and operations. Thanks to technologies such as 3D printing and micro-assembly, in some quarters, the IIoT will reverse today’s trend of centralized manufacturing and localized services, requiring the reconfiguration of operations and talent.


From Accenture report: Winning with the Industrial Internet of Things: How to accelerate the journey to productivity and growth. February 2015.

Bluesmart carry-on suitcase can’t get lost

The Bluesmart acts as a high-tech travel buddy. (Image: Bluesmart)

Up until recently, the humble suitcase had pretty much one job to do: keep your stuff together so it doesn’t scatter to the four winds when you go traveling. There have been a few tech innovations when it comes to suitcases, but none quite so complete as what you’ll get from the Bluesmart, a connected carry-on that works with your Android or iOS smartphone. The smart luggage is raising funds on Indiegogo to go from prototype into production.

The creators of Bluesmart have thrown just about everything they can think of into the design. Before you even head out the door, the suitcase helps you pack with its built-in digital scale. Pull up on the handle and check the weight on the app to see if you’ve overstuffed or packed just the right amount. Outlets can be few and far between when you’re on the go, so Bluesmart also has a built-in battery for charging up your tech gear.

A digitally controlled lock lets you access your suitcase using your smartphone. It can also be set to automatically lock itself if you become separated. If the battery dies, you can still get in with a special key. The lock is also usable by the TSA so you don’t run afoul of regulations should you have to check the bag.

Naturally, you’re attached to all the clothes and equipment stashed in your carry-on and you don’t want to lose them. For starters, Bluesmart has a Bluetooth proximity sensor function to send you alerts if you or your suitcase wanders off, and to help you track it back down again using a proximity heat map.

There’s also GPS on board for locating the wayward luggage on a map if you get parted by a greater distance than the proximity feature can handle. This could also be very helpful in the event you have to check the bag and it disappears into the murky depths of the airline’s baggage system.
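Bluesmart hasn’t published how its proximity alerts work, but Bluetooth separation alarms of this kind are commonly built on received signal strength (RSSI): the weaker the signal from the bag’s radio, the farther away it probably is. A hypothetical sketch of that approach (the path-loss constants and threshold are illustrative, not Bluesmart’s):

```python
def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Rough log-distance path-loss estimate in meters.
    tx_power: expected RSSI at 1 m; n: environment exponent.
    Both values are assumptions for illustration."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def separation_alert(rssi_samples, threshold_m=10.0):
    """Average a few RSSI samples (individual readings are noisy)
    and alert if the estimated distance exceeds the threshold."""
    avg = sum(rssi_samples) / len(rssi_samples)
    return estimate_distance(avg) > threshold_m

print(separation_alert([-55, -58, -60]))  # bag nearby -> False
print(separation_alert([-85, -88, -90]))  # bag far away -> True
```

Averaging several samples before alerting is the usual trick to keep a briefly blocked signal from triggering a false alarm; once the bag is truly out of Bluetooth range, the GPS tracking described below takes over.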

Storage situated at the front of the bag lets you quickly access your laptop and other electronic gear so you can roll through TSA checks. Avid travelers may also enjoy seeing their trip data collected on the Bluesmart app, including which airports you’ve been at, how long you’ve spent in each country and the number of miles you’ve covered during your journeys.

All the tech is cool, but you also need a functioning suitcase to go along with it. To that end, the Bluesmart is designed with a waterproof zipper, an anodized aluminum handle and four spinner wheels. It has a 34-liter capacity and the current prototypes weigh 8.5 pounds, though the Bluesmart team is working on making it lighter.

Bluesmart launched on Indiegogo with a $50,000 goal, but it seems the idea of a smart suitcase is a popular one. It has over $304,000 in pledges with 33 days left to run. The Bluesmart price is not far off from other high-end suitcase offerings. It’s going for a $265 (about £165, AU$302) pledge. It’s not too hard to see the appeal since the Bluesmart is shaping up to be a traveling gadget lover’s dream luggage.

Bluesmart prototypes have already been traveling the world.