Predicting the Future: Five Technologies That Are Already (Kind Of) Here

At the end of every year, a select group of researchers and computer scientists at IBM’s research lab makes predictions about where technology might be going in the next five years. We’ve already seen quite a few interesting innovations this year: Ultrabook™ touch and sensors integrated into all kinds of applications, perceptual computing, touch-based interfaces applied to entire operating systems, and more. So it’s safe to say that the next five years will bring even greater leaps in the world of computing. IBM’s “Five in Five” for 2012 focuses on the human senses: touch, sight, hearing, taste, and smell, with a few extra crystal-ball predictions thrown into the mix for good measure. In this article, we’ll look at what some very smart people believe might be coming our way in the next few years. While some of these predictions seem truly space age, the groundwork for most of them has already been laid.


Touch

Computers and mobile devices are predicted to gain a sense of touch unlike anything we’ve seen before. Remember the scene in “Willy Wonka and the Chocolate Factory”, when Mr. Wonka is able to reach into the screen and actually touch the chocolate bar inside? That’s what we have to look forward to: a real touch experience that lets us not only see what we’re looking at, but feel it. Perhaps you’re thinking about buying a gorgeous wool coat; instead of just hoping for the best, you might be able to touch your phone, tablet, or PC screen and, through some extremely complicated computing processes, feel the texture of the fabric right there. This will be accomplished using infrared, haptic, and vibration technologies, so you’ll be able to tell burlap from silk, or heavy from light.


Sight

Computers will be able to cognitively detect patterns and figure out, over time, which of those patterns matter. For example, pictures and videos from an emergency such as an earthquake or a tornado could be used to guide and direct emergency personnel. A bank of visual data could be mined for experiences others could benefit from; in medicine, for example, a set of MRI images could be examined over time for patterns that serve as early indicators of possible problems. Simply because of the amount of data available and the way it is processed, these systems could understand visual content in ways that potentially go beyond human capacity.


Hearing

Our computers will be better able to filter sounds contextually, deducing not only that a sound is happening right now, but also the underlying meaning behind it. Maybe we’ll finally understand what our babies are trying to tell us when they’re screaming at 3 AM, or what our cat wants when she won’t stop meowing over and over. Another possible use for sound-sensor technology: passive “guardians” placed at strategic, vulnerable points, such as a riverbank with a history of mudslides or the base of a mountain prone to earthquakes. Drawing on indexes of historical data, these sensors could “listen” for signals that humans aren’t necessarily able to detect, not only predicting possible events but also giving us a first alert when something appears about to happen.
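
At its simplest, that “guardian” idea is anomaly detection on a stream of sensor readings. Here’s a minimal sketch of the concept, flagging any reading that deviates sharply from a rolling baseline; the sample values, window size, and threshold are invented for illustration, and a real early-warning network would use far richer models:

```python
from statistics import mean, stdev

def alert_indices(readings, window=5, sigma=3.0):
    """Flag readings that deviate sharply from the recent baseline.

    A toy stand-in for the kind of models a real early-warning
    sensor network would use.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and abs(readings[i] - mu) > sigma * sd:
            alerts.append(i)
    return alerts

# Steady ground vibration with one sudden spike at index 8.
samples = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 9.0, 1.0]
print(alert_indices(samples))  # → [8]
```

The interesting engineering is in what counts as a baseline: the prediction is essentially that these systems will learn it from years of accumulated data rather than a hand-picked threshold.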


Taste

“Digital taste buds” might sound a bit off to you, but hear me out. Computers that could work out the chemical structure of food, and why people enjoy it, could actually be quite useful. Food is nothing more than a series of chemical reactions, after all, and if a machine could map the connection between those chemical reactions and our neural pathways, perhaps it could make Brussels sprouts taste like cheesecake, meeting nutrition standards while catering to our very human need for something that actually tastes good. It’s a thought, anyway.


Smell

What if you could call your doctor and be diagnosed merely by breathing into the phone? This is something IBM’s research lab predicts is coming within the next five years. Some health issues have very distinctive odors, and there is also a myriad of information carried in our basic respiration that could be of incredible value when analyzed. It’s conceivable that this advance would actually reduce health care costs, something I’m sure we’re all looking forward to in the future.

More predictions: Energy

Imagine if you could power your house with energy you create yourself. Anything that moves has the potential to generate energy; in fact, you’re burning up quite a bit just reading this article. Within the next five years, renewable energy technology is predicted to power nearly everything we use: just by driving down the street, you could be recharging batteries.


More predictions: The end of junk mail

Out of all the predictions that came out today, this is the one whose point I didn’t quite get. Junk mail, and irrelevant information in general, will become obsolete. If you feel like you’ve heard this before, you have; nearly every technology think tank comes up with it at some point in their lifespan. The prediction is that spam emails will morph into personal notes that are completely relevant to you, without your asking for them. For example, say your favorite band is coming to town: your PC, tablet, or mobile device automatically buys you tickets based on your previous interactions with that band across your Web media streams.

In other words, you won’t get ads for Viagra anymore, unless you really want them. The current, somewhat untargeted nature of unsolicited ads will be a thing of the past; we’ll only get sales pitches we’re genuinely interested in, built from data drawn from all aspects of our lives, like social networks and online preferences. Does that sound like less spam, or more spam? Time will tell.

Personally, if I choose to hear more about a service or a product, it’s my choice that makes the buying decision, not ads randomly sent my way. If I do decide to make the leap from passive observer to potentially interested customer, I want a buffer in between that asks for my permission (otherwise known as opt-in marketing). Anything other than that is still spam in my book, even if I’ve given it a plus on Google+, liked it on Facebook, or followed it on Twitter. Doing those things doesn’t mean I’m giving that entity permission to send me more stuff; it means I want that information right where I’ve decided to access it, not leaking over into my email, which I consider a completely different entity.



More predictions: Passwords

Instead of using a password to log in everywhere, you’ll need you, yourself, and you. Your biological signature, including biometric data like retinal scans and voiceprints, will form a password unique to you. You’ll be able to opt in or out of this sort of system and provide only the information you feel comfortable with. Your own DNA and biological signature will be what actually guards your data; you become your own security, so instead of creating and keeping track of multiple passwords, you’ll just keep track of yourself.


More predictions: The digital divide

In five years, the “digital divide” will be less of a roadblock than at any other time in history. More people have access to information right now than ever before, and that number will only grow over the next five years, especially with the proliferation of mobile devices. There are roughly 7 billion people in the world today, and pundits predict that 5.6 billion mobile devices will be purchased over the next five years. That means that in five years (or less), more than 80% of the world’s population could have access to at least one mobile device, and in turn, to information.
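
A quick back-of-the-envelope check of that 80% figure; note it’s a naive upper bound, since it assumes each device reaches a distinct person, while in reality many people own several:

```python
# Back-of-the-envelope check of the mobile-access claim.
world_population = 7.0e9   # roughly 7 billion people today
devices_forecast = 5.6e9   # predicted mobile device purchases in five years

# Upper bound on coverage: one person per device.
coverage = devices_forecast / world_population
print(f"{coverage:.0%}")  # → 80%
```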

Are we looking at artificial intelligence?

Reading over this list, you might be wondering if this is bordering on machines right out of “Minority Report”. Some of these amazing predictions are already happening out in the real world right now:

“For example, scientists at the University of Berkeley are able to use images of brain activity to roughly reproduce the picture or video that a person was watching when the activity occurred. Similarly, the EPOC Neuroheadset from electronics company Emotiv uses sensors mounted on the scalp to allow people with neurological disorders, such as locked-in syndrome, to use their minds to move objects on a computer screen.” - Source

We might be able to make a phone call just by thinking about it (you might want to turn this feature off between the hours of 3 and 6 AM on a Saturday night), or call up an application simply by saying its name. Other applications of this technology could include crowdsourcing data to make giant leaps of insight. For example, say you’ve got the idea for a fantastic novel, but actually getting it down on paper is really tricky. Now, a computer isn’t going to write an entire book for you, but perhaps it could help you get the idea down in outline form. Here’s more from Bernie Meyerson, IBM’s VP of Innovation:

"This is really an assistive technology," he explains. "It can't go off on its own. It's not designed to do that. What it's designed to do, in fact, is respond to a human in an assistive manner. But by providing a human-style of input, it's freed us from the task of programming and moved to the task of training. It simply has -- not more intelligence -- but more bandwidth, and there's a huge difference between the two."  - Source

What do you think about these predictions: crazy, doable, or a little bit from Column A and a little bit from Column B? Tell us your thoughts in the comments, along with your own predictions for the next five years in technology.