Monthly Archives: December 2013

Fish-Like Underwater Robots Developed to Protect the Environment


Science World Report

First Posted: Dec 26, 2013


The SHOAL robots swim using a tailfin rather than a propeller, minimising noise and disruption to marine life. Image courtesy of SHOAL


Teams of robotic fish are drawing on the intelligence of swarms of social insects and other organisms in new ways to help protect the environment.

The group cognition of insects like fireflies and honeybees, and even organisms such as slime moulds, provide useful models for researchers developing autonomous systems. Teams of underwater vehicles can mimic these natural behaviours to monitor water pollution or search for debris on the seabed.

The EU-funded SHOAL project has developed robots inspired by fish, but operating like ants. These robots analyse the waters they swim through, identifying chemical pollutants or leaks from oil pipelines in European harbours.

As they move around in the water, teams of robots build up a map of their surroundings and work together to patrol the port.

‘These robotic fish allow for constant pollution monitoring, so if an incident occurs, such as a leak or spill in a harbour, we can take action immediately,’ said Luke Speller, a senior research scientist at British-based technology consultancy BMT Group and coordinator of the SHOAL project.

‘Compared to current measurement techniques of divers collecting samples and sending them for laboratory testing, the fish can give a much quicker response to environmental incidents in the port,’ he added.

The SHOAL robots swim using a tailfin rather than a propeller, minimising noise and disruption to marine life. The fish design allows the robots to manoeuvre easily, patrol in shallow waters, and avoid snags that might snarl propellers.

They use sonar to detect obstacles and map their surroundings, and are also equipped with acoustic localisation, gyroscopes, accelerometers and other sensors for navigation. Underwater acoustic communication is also used to share information between robots and the shore. On-board chemical sensors measure pollution and general water quality parameters such as salinity and oxygen concentration.

Monitoring or search

When monitoring, the robot fish spread out to maximise the coverage area, but patrol all areas regularly. Once a member of the ‘shoal’ detects a possible problem, the system begins a search.

‘Each of the robots is programmed with the same behavioural characteristics. To ensure they act differently, with different goals, each robot shares a small amount of information, such as where it has been and its current readings. If a pollution incident is detected then the robots will switch to a searching behaviour to find and identify the cause and origin of the pollutant,’ Speller said.
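The patrol-to-search switching Speller describes can be sketched as a simple shared-state rule: each robot broadcasts a small packet of its history and readings, and the whole shoal changes mode when any member's reading crosses a threshold. This is an illustrative sketch only; the class name, message format, and threshold value are assumptions, not details of the SHOAL system.

```python
# Illustrative sketch of the patrol/search behaviour switch described above.
# All names and thresholds are hypothetical, not taken from the SHOAL project.

POLLUTION_THRESHOLD = 0.7  # normalised sensor reading that triggers a search


class RobotFish:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.mode = "patrol"
        self.visited = []        # shared so shoal-mates avoid redundant coverage
        self.last_reading = 0.0

    def broadcast(self):
        """The small packet each robot shares: recent positions and readings."""
        return {"id": self.robot_id,
                "visited": self.visited[-10:],
                "reading": self.last_reading}

    def update(self, own_reading, shoal_packets):
        self.last_reading = own_reading
        # Switch the whole shoal to searching if ANY member sees pollution.
        readings = [own_reading] + [p["reading"] for p in shoal_packets]
        self.mode = "search" if max(readings) > POLLUTION_THRESHOLD else "patrol"
        return self.mode


fish = [RobotFish(i) for i in range(3)]
packets = [f.broadcast() for f in fish]
print(fish[0].update(0.2, packets))  # "patrol" -- water is clean
print(fish[0].update(0.9, packets))  # "search" -- a reading exceeded the threshold
```

Because every robot runs the same rule over the same shared packets, the group converges on a common mode without any central controller.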

The robotic fish interact with each other and with a base station, which can take direct control if human intervention is required, such as when pollution is expected or an incident has occurred in the port.

The project finished last year, and the researchers are now looking into whether robotic fish could play a role in coral reef monitoring, hydrographic mapping, and even barnacle counting.

Real-world environmental monitoring is also a focus of the EU-funded CoCoRo underwater robotic swarm research. This project is studying the collective cognitive capabilities that emerge from the dynamic interactions of simple individuals.

CoCoRo’s Lily robots are based on refitted turtle-shaped submarine-type toys; small blinking blue lights are one of their means of communication. Image courtesy of CoCoRo


‘We want to keep our individual robots as simple as possible and design algorithms that can help us to maximise their collective intelligence,’ said project coordinator Dr Thomas Schmickl, from the Artificial Life Laboratory at Karl-Franzens University in Graz, Austria.

‘We want to demonstrate that even such simple individuals can make rather intelligent and complex choices in the group,’ he added.

Key to the CoCoRo project is the potential to scale up from simple individuals to large groups, perhaps hundreds of robots. These could be used for environmental and ecological monitoring, such as plume detection of spills or toxic waste, as well as in exploration or search and rescue.

CoCoRo’s Lily robot platform has achieved the largest autonomous underwater robot swarm yet made, with 22 individuals. Based on refitted turtle-shaped submarine-type toys, the robots have limited computer-processing power but are cheap and easy to assemble.

The group is able to use a set of criteria to find the best target among several, and it is also able to estimate its swarm size without using identity numbers, keeping down the demands for heavy number crunching.
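One way a swarm can estimate its own size without identity numbers is purely statistical: if every robot blinks independently with a known probability, the fraction of all-quiet rounds encodes the group size. The sketch below illustrates that general idea; it is an assumed example of anonymous size estimation, not the CoCoRo algorithm itself.

```python
import math
import random

# Illustrative, hypothetical sketch of anonymous swarm-size estimation.
# Each robot blinks independently with probability P each round and no IDs
# are exchanged. The expected fraction of "silent" rounds is (1 - P)**N,
# so the swarm can invert that to estimate its own size N.

P = 0.1          # per-round blink probability
ROUNDS = 20_000  # observation rounds (large, to keep the estimate stable)


def estimate_swarm_size(n_robots, rounds=ROUNDS, p=P, seed=42):
    rng = random.Random(seed)
    silent = sum(
        all(rng.random() >= p for _ in range(n_robots))  # nobody blinked
        for _ in range(rounds)
    )
    frac_silent = max(silent, 1) / rounds
    return math.log(frac_silent) / math.log(1 - p)


print(round(estimate_swarm_size(22)))  # close to the true swarm size of 22
```

The appeal of such schemes is exactly what Dr Schmickl describes: no addressing, no coordination protocol, just local observation of an aggregate signal.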

‘It is a totally different approach, to develop mechanisms that are used in the animal kingdom, where there are no identity numbers and there is no Internet Protocol (IP) or similar system for coordinating huge groups of animals. But still they organise themselves very well and make collective decisions and coordinate,’ Dr Schmickl said.

The team is developing the Lily results into a more advanced robot, called Jeff, with a more torpedo-like form, though it is not intended to mimic a fish.

Other EU-funded projects to explore the use of robots in coordinated teams include TRIDENT, led by Spain’s Universitat Jaume I. It developed a system for an underwater robot working in close cooperation with a surface vehicle robot. The set-up guides the robots in formation to survey the sea floor, using a system to find and manipulate or recover items such as ‘black box’ flight recorders.

The CO3-AUVs project also worked on development, implementation and testing of advanced cognitive systems for coordination and cooperative control of multiple autonomous underwater vehicles. The project, coordinated by Jacobs University Bremen, in Germany, focused on systems to explore uncharted territory, as well as monitor underwater structures and carry out harbour safety and security missions. — Source and © European Union

Leap Motion Controller Leaps Forward With Software, Sharpens Focus With Apps

Forbes, 11/23/2013, by Anthony Wing Kosner.


Leap Motion is a revolutionary company, but revolutions take time. In Leap’s case, their motion control device introduces a whole new way for people to interact with computers. As with anything truly new, our first instinct is to map it to what we already know. For computer interaction, what we know is the mouse, which maps two dimensions of a surface to the two dimensions of a screen. And, since the advent of mobile, we know touch, where a gesture on the surface of a screen maps directly to the elements on that screen. The Wikipedia page for Leap Motion supports this common preconception, explaining that the device “supports hand and finger motions as input, analogous to a mouse, but requiring no hand contact or touching.”

After working on the problem for a couple of years, Leap founders Michael Buckwald and David Holz secured Series A funding in May of last year and announced pre-sale of the $79.95 USB drive-sized device. The reception from developers was astounding, and The MIT Technology Review called it, “The Most Important New Technology Since the Smart Phone.”  Unlike Apple, Leap Motion did not try to control every aspect of the user’s experience, instead sending tens of thousands of kits to developers to allow them to create their own apps in their ecosystem. I wrote, later that summer, that the company was “Putting Its Future Into The Hands Of Developers.”

The actual product was released this past July, six months later than expected, to decidedly mixed reviews. Leap succeeded, almost too well, at creating excitement and awareness around their product. Their design and marketing was all first-rate, but the first crop of apps on its Airspace app store were a mixed bag. My own first hands-on experience was a frustrating disappointment. My kids were initially excited, but soon lost interest.

The problem was twofold. First, the hand tracking software was incredibly accurate, but suffered from some glitchy and erratic behaviors. Second, many of the apps did not make it clear exactly what type of behavior they were expecting from the user. A platform like Leap Motion is only as good, to a new user, as the worst app they try out first. Combine these issues with an unfamiliar interaction paradigm and you have a formula for frustration.

But before you write the future of computer interaction off as a gimmick, count to ten. Leap Motion is part of a new hardware product paradigm as well. Unlike the iPhone, which seems designed for hardware obsolescence, these new purpose-built devices can increase their functionality by orders of magnitude with changes to software alone. And in the four months since the Leap Motion Controller was released, this is exactly what the company’s engineers have been busy doing.

The Leap’s second-generation hand tracking software, which I previewed at the company’s San Francisco offices last week (see screen shot above) with founder Buckwald and marketing head Michael Zagorsek, addresses the glitches in the first version while capturing more information in a more efficient way. Most importantly, it solves the occlusion problem that happened when the device’s cameras “lost sight” of the fingertips they were tracking, either from a hand rotating perpendicular to the line of sight or being blocked by the other hand. This had the unexpected effect of causing the representation of the hands to disappear from the screen in the middle of an action.

The new software tracks not only the fingertips and palms of the hands, but each joint as well, making for a much more accurate representation of hand position and motion. And the software has also grown up (according to the “object permanence” stage of Piaget’s childhood development model) so that it remembers where a hand is even when temporarily out of view. This will definitely eliminate some of the potentially confusing feedback users can receive that breaks the illusion of continuity in some of the apps.

Perhaps even more significant in terms of growing adoption and engagement with the device is the emerging sense of what the Leap Motion Controller is actually good for. As I wrote about in my description of Elliptic’s ultrasound gesture recognition technology, Leap is really overkill for reading simple swipe gestures to turn a page or scroll. Leap is now defining itself as a “motion capture” technology in contrast to mere gesture recognition. This distinction is important for understanding what Leap does best.

Gestures, whether through touch or in the air, are generalized movements. Moving your hand or finger from left to right within certain statistical boundaries can be interpreted as a swipe. But the capture of precisely where the different parts of your hand are during the course of that gesture is a whole other matter at which Leap Motion uniquely excels.


To bring this point home, Leap has just released a new app which it developed internally called Freeform. Freeform allows you to sculpt highly detailed figures within the three-dimensional space of the app directly with your hands and then export these as 3D models that can be printed on any 3D printer. As you can see in the screen shot above, you can choose the material and tools to work with and even rotate the model for lathe or pottery wheel effects.

This is only the second app produced by Leap Motion that has been released into Airspace. The first is a general computer control app called Touchless (available in different versions for Mac and Windows). The difference between the two apps is striking and marks a major refocusing of the company’s approach to what a Leap Motion app should be. Instead of emphasizing Leap as a three-dimensional mouse, Freeform highlights the precise control you can achieve with it as an input method. This, more than mouse replacement, is the technology’s unique strength. Leap Motion is really for capturing fine motor movements, as opposed to the gross motor movements captured by devices like Kinect and Elliptic.

There is something more, as well. For Leap Motion to regain the enthusiasm of its initial reception from the chasm of disappointment that followed its actual release, it must demonstrate concrete use cases that satisfy the mass audience. Freeform could help do that.

Imagine for a moment the share of the population that has the kind of spatial intelligence required to work physically in three dimensions. Now, imagine the sliver of that population that also has the abstract intelligence required to work with 3D programs on a computer. And even for people in that sliver, consider that it takes 3-6 months to get up to speed with such programs. This series of constraints is a limitation, in turn, on the adoption of 3D printing. By creating a program that people without highly developed abstract intelligence can get comfortable using in a matter of minutes or hours instead of months (if ever), Freeform could radically alter the landscape of 3D creativity.

And 3D modeling and printing is just one example. In the months ahead, look for Leap’s developer community to refocus its efforts as well around easing these kinds of constraints across all kinds of three-dimensional activities. With this in mind, it is very exciting that SOS Ventures and Founders Fund have announced the formation of a new accelerator program specifically designed to jumpstart the next generation of apps for Leap Motion’s technology. The LEAP.AXLR8R will pick startups with disruptive and achievable ideas, and provide seed funding, office space adjacent to Leap Motion’s offices in San Francisco, and access to mentors and other resources to support an intensive three-month development cycle beginning in late January 2014.


Recon Instruments Launches Jet Heads-Up Display

New sport eyewear computer puts ride metrics and more in front of your eyes
By Greg Kaplan in Bicycling Magazine

Vancouver, Canada-based Recon Instruments has a new product that puts ride metrics right in front of your eyes. The company’s Jet combines the computing power of a mobile phone with a display that’s easy to read and unobtrusive, powered by a battery that can last for hours, all fitting on a pair of sunglasses. The new product was unveiled today, and the first units will arrive in December.

Recon Instruments’ co-founder Dan Eisenhardt was a competitive swimmer at the national level in Denmark when he started searching for a device that could give him instant and accurate updates on his performance metrics in the water, but found none. Later, while in graduate school at the University of British Columbia, Eisenhardt and classmate/Recon co-founder Hamed Abdollahi developed a solution that could eventually quench his thirst for data. The pair came up with an idea for a heads-up display for ski goggles, and with the encouragement of a professor, took the idea and ran with it.

The first designs for ski goggles incorporated the processors from a Texas Instruments graphing calculator. Metrics such as speed and airtime were projected onto a small display at the bottom right side of the goggles’ lens, where the information and the actual display hardware were the least obtrusive. With funding from angel investors and government grants, the first version of the goggles, called Snow, went into development in 2008 and finally launched in 2010. Snow sports were a good start, but the company’s reach was limited by the short ski season. Targeting cyclists, many of whom ride year-round, was a natural progression, and many riders want the same kind of performance data that Recon already provides to skiers, including the ability to connect in real-time with friends via mobile devices.

The Jet provides its own GPS-derived data such as speed, altitude, and gradient, and also connects to other networkable devices—power meter, heart rate monitor, speed/cadence sensor, or a smartphone—to display data from those devices. If connected to a smartphone, Jet can even display more mundane data like text messages and incoming calls. To motivate you further, the glasses are even capable of displaying a ghost rider to pace your workouts, or a Strava KOM “pace indicator” to show your progress on a specific segment.

The Recon Jet system incorporates three components: the optics themselves, the Recon display and processor, and a swappable power supply. Recon-ready eyewear comes in a range of options, including lens shade, polarization, and size.

The Recon unit itself fits snugly against the lower right side of a pair of sunglasses, on the outside of the lens. A rubber gasket holds the display against the lens and keeps out moisture, preventing condensation. The company says that most users will be able to read the display, regardless of what corrective eyewear they use. You control the display by swiping and tapping a sensor on the unit, just as you would a smart phone. Recon’s battery is on the left side of the glasses, to balance the unit.

The Recon Jet sits on a normal-sized pair of sunglasses.



The Recon Jet display is based on technologies developed for Recon’s snow sport goggles, but will display metrics such as speed, power, and time.

The Recon Jet system, the company says, adds about one ounce to a pair of glasses. In your hands, it feels heavy and unbalanced compared to a pair of sunglasses. But after five minutes of wearing the system, you stop noticing the added weight.

The brain that drives Jet is a 1GHz dual-core ARM processor, with 1GB memory and 8GB storage—similar to the technology powering some smartphones. The Jet display is 428x240px and incorporates a prism and magnifying lens to make the display appear to be a 30” LCD as you would see it from seven feet away. The 16:9 display is clear and easy to read. Every Jet unit has a 9-axis sensor suite: an accelerometer, a gyroscope, and a magnetometer. The company says the battery will last as long as nine hours, depending on how you use the system, the strength of your GPS signal, and environmental factors including temperature. You can connect to other devices via Wi-Fi, Bluetooth Low Energy, and ANT+. There’s a micro-USB connector for data transfer and charging.
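As a rough check of the stated optics, the “30-inch LCD at seven feet” claim translates into a specific apparent field of view. The arithmetic below follows only from the article’s numbers, not from any published Recon specification:

```python
import math

# Back-of-the-envelope check of the "30-inch screen at seven feet" claim.
# A 30-inch diagonal 16:9 display is about 26.1 inches wide; at 84 inches
# (7 ft) the horizontal angle it subtends is roughly 18 degrees.
diag = 30.0
width = diag * 16 / math.sqrt(16**2 + 9**2)      # ~26.1 in wide
distance = 7 * 12                                 # 84 in
fov_deg = math.degrees(2 * math.atan((width / 2) / distance))
print(f"width = {width:.1f} in, horizontal FOV ≈ {fov_deg:.1f}°")
```

In other words, the magnified 428x240 panel occupies a fairly modest slice of the rider’s visual field, consistent with the “unobtrusive” positioning described earlier.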

When wearing the Jet, the display is only activated when you look at it; an infrared eye-sensor toggles the display to extend battery life. Glance down, and the display is instantly enabled and viewable; when you return your focus to the road ahead, the display turns off. To the user, it appears seamless. Speed, power, grade, and other metrics are displayed simultaneously in a segmented view, similar to the interface used by Garmin, and you can set which metrics are displayed at any time.

You can also display route information thanks to a navigation interface—especially useful for finding your way home. There’s also a feature that lets you locate friends—ideal for keeping your group together on a Gran Fondo.

Linking a GoPro or Contour camera can provide streaming video to anyone following a Jet wearer, and you can even give yourself a rear view by mounting a rearward-facing camera and connecting it to your display.

You can pre-order a Jet now for $500, and units will ship in December. After July 21, the price will jump to $600. The glasses have a one-year warranty, but it does not cover crash replacement.

The Recon Jet shows a lot of promise, but we hope the glasses become more sleek. As to which cyclists will be best-served by this new product, one obvious application is riders who enter a lot of time trials—the ability to watch their power output without breaking from their aero position will be a boon to performance.

AT&T retail chief explains ‘The Store of the Future’



As consumers shift many of their purchases online, will physical retail stores even have a reason to exist? That’s the difficult question AT&T retail executives asked themselves two years ago as they began a process to redesign and reinvent all 2,300 company-owned retail store locations in North America. In August, AT&T customers in La Grange, Illinois, a suburb of Chicago, were the first to experience a new concept store AT&T leaders believe reflects the future of retail.

Prior to the store’s opening I spoke to Paul Roth, AT&T’s president of retail sales. “We began with a blank sheet of paper—literally,” Roth told me. Roth says that despite the huge increase in e-tailing (Forrester Research predicts that online retail sales will grow at a compounded annual rate of 10 percent through 2017), retail stores will continue to be relevant, but only if they serve vastly different purposes than they do today. “The future of retail is all about personalized service and education,” he predicts.

Roth believes AT&T’s new store design serves the purposes customers demand from a retail store because it offers the following three components.

1. Highly personalized services. What customers in La Grange will not see is almost as important as what they will see. Cash registers? Gone. Counters and terminals? Gone. All of the store’s retail staff (consultants) will be equipped with tablets supported by a mobile point-of-sale system so customer transactions can occur anywhere in the store. Roth says that instead of being ‘transactional,’ the communication and experience take place side-by-side, creating a more personalized experience.

AT&T’s research found that consumers who want to buy a specific product and have it delivered to their home will simply do it online. But for those who enter a store, their purpose is to learn, to experience, and to speak to a person. It means the physical environment of a store must change to reduce the communication barriers between employees and customers.

For example, in the center of the newly designed AT&T stores, customers will find circular “learning tables.” These are set up around the concept of “exploration, education, and interactivity.” You’ll notice in the photo below that the learning tables are round and not rectangular, removing barriers to facilitate a more intimate, personalized conversation. The tables also encourage education and interactivity. For example, let’s say a customer buys the new Nokia Lumia 1020 smartphone because they read positive reviews about the camera’s 41-megapixel sensor. The camera and the device come with many new features. With the exception of early adopters, however, the majority of consumers will want to learn more about the phone’s capabilities. AT&T employees will be able to escort customers to learning tables to help them set up their phones and learn to use them.

2. Solutions, not transactions. AT&T’s research found that consumers go to the web to conduct ‘transactions;’ they go to a store to discover solutions to help them live, work, play, and learn. “In our prior merchandising scheme, we offered smartphones and accessories in different parts of the store. That’s not a solution. It’s a transaction. If we put them together to show how they work, now we have a solution,” says Roth.

I find this concept to be the most intriguing of the redesign. AT&T stores will have connected ‘experience zones,’ where a complete set of products will be displayed together. For example, in the music zone, a customer will see smartphones flanked by various speaker options in different colors, sizes, and styles. A customer can play music on a smartphone and move the sound from speaker to speaker. Other zones will showcase digital home automation and entertainment products. This is called “lifestyle merchandising” and, according to Roth, has been shown in pilot experiments to boost sales of products that consumers didn’t appreciate until they saw the product used as a complete solution. “Prior to putting it together as a complete lifestyle solution, consumers didn’t see the value. Now they can discover solutions they didn’t know existed,” says Roth.

3. Emotionally engaging experiences. Customers also told AT&T that they want to be “rewarded” for a trip to a store. This means the physical design must be open, warm, and inviting. Customers visiting redesigned AT&T stores will find colors and materials designed to signal a high-tech experience (white tables with high-gloss or matte finishes) combined with warm and comforting materials such as reclaimed teak wood. Interactive digital displays will replace printed brochures and in-store posters, which often take up to eight weeks to print, ship, and install. Displays will show targeted messages relevant to the local community and, in some areas, reflect a language popular in the region.

AT&T’s new store design will take some time to roll out. The company expects to redesign 15 to 20 stores by the end of the year with an accelerated rollout in 2014. The goal is to convert 100 percent of AT&T’s store portfolio to the new design.


Roth has an ambitious goal of making AT&T a premier retailer in the area of customer service. As evidence that he’s getting closer to meeting the goal, Roth cites J.D. Power’s latest study, ranking AT&T as the best performing wireless provider for “overall customer service as measured across its retail stores, online, and call centers.”

AT&T’s experiment carries a valuable lesson for all business owners, whether or not they own a physical retail store. You see, Roth did not start with the question, “How do we sell more product?” Instead he asked a question far more profound: How do we want people to feel when they enter our store? According to Roth, “We want people to say to themselves, ‘It feels good to be here. I would like to spend time in this store. I will find something that I didn’t know existed, but which is relevant to me and my life.’” Enhancing the customer experience begins with asking the right questions. Only time will tell if AT&T’s redesign will be successful, but it’s off to a strong start because it began its reinvention process with the right questions.

Not too long: The Robots Are Coming




Google and other companies believe that robots today are like cell phones back when they were the size of bricks.   REUTERS/Fabrizio Bensch

Snow White was prescient. In a scene from the 1937 Disney movie, she gets a team of birds and cute woodland animals to clean the dwarfs’ house while she warbles “Whistle While You Work.”

A decade or two from now, that’s going to be how you take care of your house – except the work will be done by small robots, each built for a single purpose. They will hover in the air to pick up clutter, climb walls to wash windows and scuttle under furniture to vacuum while you sit back with a cappuccino and binge-watch Breaking Bad reruns.

Outdoors you’ll find a robot swarm cleaning the streets, trimming trees, and watering plants. Little packages will get dropped off by flying quad-rotor drones, probably emblazoned with the familiar smiley face. For the big stuff – like, say, a refrigerator – an autonomous vehicle guided by Google technology will pull into your driveway, and a hulking Google bot with six legs will carry the fridge up your stairs and gently set it where you want it.

Over Thanksgiving, Amazon unveiled its drone delivery project on 60 Minutes, and in no time the jokes and indignation were flying:

Hunters will grab their shotguns and use the drones like clay pigeons.

The drones will short out and fall from the sky by the hundreds when a rainstorm blows in.

Walmart is working on drones that kill Amazon drones.

Then, days after Amazon’s reveal, Google went public with its new robotics unit, run by Andy Rubin, the whiz who created Google’s Android operating system. The message: Google’s investment is no lark. Robots are for real.

In fact, Google and a lot of other companies believe robots today are like cell phones back when they were the size of bricks and cost $6,000. It may take 10 or 20 years, but before long everybody is going to have a robot – or several.

These robots will not look the way most people expect – they won’t walk and talk like C-3PO or Rosie from The Jetsons. An all-purpose humanoid robot doesn’t make much sense. As tech thinker Kevin Kelly wrote, “To demand that [intelligent robots] be human-like is the same flawed logic as demanding that artificial flying be birdlike, with flapping wings.”

Instead, the world will gradually acquire many kinds of robots, each designed and built to most effectively carry out a particular task in a way that saves humans time, money or drudgery.

The Amazon drones would do that. Loaded with artificial intelligence, they promise to deliver small items faster than any human could.

Google’s experimental driverless cars are robots. One day, a delivery truck driver will seem as redundant as an elevator operator.

Robotics and artificial intelligence are tough fields, but there’s so much research-lab and start-up money going into them that we’ll get the technology right long before we sort out how to integrate robots socially, legally and practically. It’s less difficult to imagine delivery drones working than to imagine the New York sky darkened by thousands of the things carrying everything from shoes to Chinese take-out.

“We’ll solve those kinds of problems when the benefits to society become large enough,” says Colin Angle, chief executive officer of iRobot, maker of the granddaddy of consumer robots, the Roomba vacuum cleaner. Angle notes that when cars were invented, they were insanely dangerous and disruptive and widely hated.

Society is already a long way into robotics and we often don’t know it. I recently visited some family members who own an enormous farm in Saskatchewan. They handle the harvest with just three people and a giant combine that has so many smarts, the driver mostly rides along and never touches anything. In another decade, the smarts will be so good that the farmer can stay inside and play the commodities market while machines do all the work in the field.

Robot news will keep coming. A company called Knightscope just unveiled its robotic security guard. It could roam a warehouse floor at night, its camera keeping an eye out for anything unusual, its chemical sensors sniffing for leaks.

A startup called Play-i is making toy-like bots that can teach a 5-year-old how to program bots. And you know where that will lead in two decades: 25-year-olds who can invent ever more intelligent bots.

Rodney Brooks, who runs the robotics lab at the Massachusetts Institute of Technology and co-founded iRobot with Angle, has a new robotics company, Rethink Robotics. It is making an inexpensive industrial robot that is simple to train and can work alongside a human. An entrepreneur, for instance, could set one up in her garage and teach it to make something, creating a small automated factory.

Brooks and Angle have long believed the Roomba was the first phase of the “robot-enabled home.” They followed Roomba up with the Scooba floor-washing robot, and promise more along those lines – perhaps a window-washing bot, or a clothes-folding bot. (iRobot won’t give specifics.) The bots will likely all be wirelessly connected to each other, and to a kind of “head butler” robot that takes commands from its owner and hands out tasks to the many mini-bots.

It’s no fantasy, Angle insists. This is the not-too-distant future.

Plus, it’s a whole lot easier than getting birds and squirrels to do your dusting.


Why Companies Are Terrible At Spotting Creative Ideas


December 11, 2013


Cognitive biases can keep us from assessing creativity with a clear mind. Here’s how to get around them.

In business, a creative idea is only worth as much as the manager who can recognize it. Malcolm Gladwell once told the story of Xerox engineer Gary Starkweather, who conceived of a laser printer circa 1970 but was forbidden to pursue it by a boss. Starkweather developed a prototype in his spare time and forced the company to transfer him so he could finish it. He basically begged Xerox to let him work on an idea it should have been begging him to work on.

That story ended just fine for Xerox, but no doubt many other creative ideas stall in the conception phase for lack of encouragement. Truth is, many managers face what might be called a creativity dilemma: their desire for novel ideas and creative workers is at odds with their need to provide practical order. The result of this dilemma, in many cases, is that an aversion to novelty rules the day.

Management scholar Jennifer Mueller of the University of San Diego has studied the failures of creative assessment and found hidden cognitive factors at their core. “There are situational variables that are very subtle and transitory that can shift your ability to determine what’s creative,” Mueller tells Co.Design. These seemingly random factors–such as a manager’s mindset during an idea pitch–can bias people against creativity without them knowing it.

In one study, published in Psychological Science last year, Mueller and collaborators asked test participants to rate a creative product: a running shoe equipped with nanotechnology that improved its fit and reduced blistering. Some of the participants were put in the mindset of someone open to uncertainty (by being told there were many potential answers to a problem). Others were put in a frame of mind that favored certainty (told that a problem needed a single, certain resolution).

These slight mental nudges had an outsized effect on assessments of creativity. Participants who’d been predisposed toward certainty rated the shoe as significantly less creative than those predisposed to tolerate uncertainty. They also responded more favorably to concepts of practicality on an implicit word association test. The researchers concluded that idea evaluators can harbor a “negative bias against creativity” they don’t even realize exists.

In more recent work, set for publication in the Journal of Experimental Social Psychology, Mueller and some different collaborators expanded the idea assessment scenario to include four ideas. Two were independently rated as creative, and two were not. The researchers wanted to see whether an evaluator’s mindset influenced every idea heard, or only the ideas that were truly creative.

Before test participants rated the ideas, some were put in a “why” frame of mind, while others were put in a “how” frame of mind. The “why” mindset was supposed to establish the sort of broad, abstract thinking one might want during creative evaluation (known in psychological terms as a “high-level construal”). The “how” mindset was meant to evoke a narrow mentality locked onto practical details and logistics (a “low-level construal”).

All the test participants felt the same way about the two non-creative ideas. These were seen as uninspiring regardless of a person’s frame of mind. But ratings of the creative ideas varied significantly based on which construal had been established earlier. Participants in the “why” mindset considered the ideas much more creative than those in the “how” mindset. It was as if these hidden cognitive factors formed a secondary layer of assessment, once an initial creativity threshold was passed.

Mueller suspects that an abstract or “why” mindset may be a better psychological framework to consider novelty than, say, a narrow “how” mentality. “So the ‘how’ mindset focuses on the one Achilles heel of all creative ideas, which is the more novel the more uncertainty–the less you know about how feasible it is,” she says. “That’s what we think is driving down these assessments of creative ideas.”

Recognizing which mindsets stifle idea assessment is the first step toward resolving the creativity dilemma. Managers prone to practicality can begin pitch meetings with a quick intervention that promotes a more abstract frame of mind. In Mueller’s latest study, the “why” mindset was achieved simply by asking test participants to consider why people do a series of common activities: back up a computer, for instance, or drive a car. You might also ask why people use a laser printer, while you’re at it.

A Spoon for People With Parkinson’s Reduces Shaking by 70%

Posted on by .


I have very clear memories from when I was a girl of my grandpa sitting at the table, hands fluttering wildly, struggling to get a spoonful from his plate to his mouth without making a mess. His Parkinson’s disease made what should have been a relaxing, comforting meal with family into a stressful battle with his own body.

A new spoon from Lift Labs may help others who suffer from hand tremors that make eating difficult. The spoon (which down the road will also have knife and fork attachments) counteracts the movements of a wavering grip, reducing the shaking by 70%.

Sara Hendren, an artist and researcher who runs Abler, a site devoted to adaptive technologies and prosthetics, likened the Lift Ware spoon to an “edit” of more familiar flatware. “This kind of ‘edit’ extends self-feeding for its user,” she wrote to The Atlantic over email, “and maintaining that kind of autonomy can be very significant to one’s own self-perception and the perceptions of others. After all, the experience of change in a person’s ability is registered as much in these qualitative ways as it is in hearing the results of lab tests.”

In a sense, all technologies are “assistive” technologies — eating hot soup with one’s bare hands does not sound particularly effective, nor pleasant. We have invented spoons to extend our limited abilities.

The Lift Ware spoon extends that reasoning to be yet more inclusive. And that sort of thinking can be fruitful for designers and inventors. “Constraints are always generative for designers,” Hendren said, “and this set of constraints—digging deeply into all kinds of atypicalities, less visible conditions, psycho-social challenges, and the many varied experiences of aging—is a still largely untapped area of design research and development. This example of the spoon might yield further consideration of cutlery altogether. Cultures already use different implements for eating. Why not re-examine cutlery entirely?”


Gain More Insight Into Your Customers. Business Intelligence is the Icing on the Cake for a Local Bakery.

Posted on by .


Most businesses collect information on their customers. But many may be unaware of how valuable that information can be. Using the data effectively can be the key to identifying your best customers and figuring out how to serve them better. It can also be the key to growing your business by increasing sales or cutting unnecessary costs.

Here’s what happened in the case of Butter Lane, a specialty bakery focused on cupcakes, with locations in Manhattan’s East Village and in Park Slope, Brooklyn.

Co-owner Pam Nelson says the company has always collected information on its customers and has built up a fairly large customer database. In the past, though, that database was used primarily to send out email blasts. No practical option existed for singling out Butter Lane’s best customers for special attention or more customized marketing.

The company has also participated in three Small Business Saturdays. But Nelson says it has had no way of telling how many repeat customers were gained as a result, and thus no way to determine whether the extra investment in participating was getting a good return. Until now, that is.

Gain More Insight Into Your Customers

Over the last year, Butter Lane has made a change. The company has been using a business intelligence application from Swipely to gather data on its customers, and it will use this data in a very different way. Swipely integrates with the point-of-sale technology used by established retailers, restaurants, bars, salons and similar businesses.

The company says it tracks credit card numbers to sort first-time from repeat customers. “And that’s a big deal for us, just in general,” said Nelson in a recent phone interview with Small Business Trends.

The new tracking allows Butter Lane to figure out what special events, offers, or other marketing efforts result in greater earnings and in increased numbers of return customers.
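The article doesn’t describe how Swipely implements this, but the basic technique it describes can be sketched in a few lines of Python. Everything below is a hypothetical illustration (the function name, data format, and use of hashing are my own assumptions, not Swipely’s): hash each card number so the raw number never needs to be stored, then check the resulting token against the set of tokens already seen.

```python
from hashlib import sha256

def classify_purchases(purchases):
    """Label each purchase "first-time" or "repeat".

    purchases: list of (card_number, amount) tuples in chronological
    order. Card numbers are hashed so the raw numbers are never kept.
    """
    seen = set()
    labelled = []
    for card, amount in purchases:
        token = sha256(card.encode()).hexdigest()
        labelled.append(("repeat" if token in seen else "first-time", amount))
        seen.add(token)
    return labelled

# The same card appearing a second time is flagged as a repeat visit.
sales = [("4111-A", 12.50), ("4111-B", 8.00), ("4111-A", 15.00)]
print(classify_purchases(sales))
# → [('first-time', 12.5), ('first-time', 8.0), ('repeat', 15.0)]
```

A real system would of course use salted tokens from the payment processor rather than hashing card numbers directly, but the counting logic is the same.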

Today, Small Business Saturday, Butter Lane is launching a new loyalty program in an effort to use customer data to gain even more insight and grow its business at the same time. Customers entering either of the bakery’s locations today will be prompted to sign up for the program by text or on an in-store iPad. Butter Lane will then be able to track customers by name every time they make a credit card purchase.


Customers will get cash-back rewards the more they spend. Nelson says customers may also receive those rewards in the form of free products like cupcakes or other goodies.

So Butter Lane has found an easy way to reward its best customers automatically and perhaps encourage them to keep coming back.

What Customer Data Tells You

But that’s not the only thing Butter Lane can do with the enhanced data the company will be collecting. Matthew Oley, Vice President of Sales and Marketing for Swipely, says data collected on customers can be used to learn many things about a business.

For example, by collecting data on your customers through loyalty programs, you can discover what products or services they prefer and which customers tend to spend above a certain threshold. You can use this information to segment your customer list and target specific customers with offers most suitable to them.
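Segmenting by a spend threshold, as described above, is a simple aggregation. As a rough sketch (the names and the single “VIP” threshold are my own illustration, not anything from Swipely):

```python
from collections import defaultdict

def segment_by_spend(purchases, vip_threshold):
    """Split customers into "vip" and "regular" segments by total spend.

    purchases: dict mapping a customer id to a list of purchase amounts.
    """
    segments = defaultdict(list)
    for customer, amounts in purchases.items():
        bucket = "vip" if sum(amounts) >= vip_threshold else "regular"
        segments[bucket].append(customer)
    return dict(segments)

history = {"ann": [20.0, 35.0], "bob": [4.50], "cem": [60.0]}
print(segment_by_spend(history, vip_threshold=50.0))
# → {'vip': ['ann', 'cem'], 'regular': ['bob']}
```

The “vip” list is then the natural audience for the loyalty program’s targeted offers.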

By importing outside data you can determine whether other factors like weather conditions or even social media campaigns or mentions have an impact on your business. You can even use that social media data to track your average Yelp score or keep up with the latest social media mentions about your company.

It’s also possible to import social media data for some of your major competitors to compare with your own. Track the amount of business during a slow time of day to figure out whether it makes sense to stay open that extra hour. Or experiment to see whether expanding your hours boosts revenue or number of return customers — or both.
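The slow-hour question likewise reduces to totaling sales per hour of day. A minimal sketch, assuming transactions are available as (hour, amount) records (a format I’ve invented for illustration):

```python
from collections import Counter

def revenue_by_hour(transactions):
    """Total sales per hour of day from (hour, amount) records."""
    totals = Counter()
    for hour, amount in transactions:
        totals[hour] += amount
    return totals

day = [(9, 12.0), (9, 6.50), (17, 3.0), (20, 1.50), (20, 2.0)]
hourly = revenue_by_hour(day)
print(hourly[20])  # revenue in the 8 p.m. hour
# → 3.5
```

Comparing the 8 p.m. total against the cost of staffing that hour answers the stay-open-late question directly.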

Bottom line: The more you know about your customers the better. So figure out how to collect and utilize that data in the way most effective for you.