What's next? World 2.0, a programmable smart world, that's what.


This imaginative "what might be" post requires a disclaimer. To be clear to my readers, these thoughts are my own and do not come from Intel. This is purely my speculation, my phrasing, and my imagination of what may be in front of us. I do not work in the micro-controller or cloud computing spaces, nor do I have knowledge of future products or technology in these areas.

Working in tech and with developers I often get asked, "So, Bob, what's the next big thing?" Being an imaginative guy I think about it often, and I do have personal thoughts. My term for the next big thing is "World 2.0". World 2.0 is my own term, borrowed from the phrase "Web 2.0", coined when the web shifted from a publishing medium to an interactive medium. World 2.0 is also related to what Intel calls the "Internet of Things", but I use the term World 2.0 because, like Web 2.0, it signifies a fundamental shift. For me it's a shift to a physical world that is programmable and interactive just as virtual worlds are, and vice versa. In a virtual world you can program every aspect of the world and how it behaves and interacts with virtual characters. In video games, for example, you rarely see a character pull a key out of his or her pocket to open a door, or hunt for a light switch to turn on a light. The reason is that these things can be programmed to happen as the developer wishes, based on rules and a story line controlled in the code. We are soon moving to a real physical world that can be programmed in the same way. And it goes far beyond smart lights and doors, but these are simple examples of things that will soon pervasively change.

World 2.0 Chapter 1: DIY Internet of Things 
Many things have happened recently to make this possible. Home automation and control systems have been around forever. We could always program lights and doors, but at a huge cost, and they were never terribly intelligent. However, a shift has happened, not unlike the one that transformed the mobile space 6-7 years ago. This shift involves a pervasive Internet, mobile computing, cloud computing, inexpensive sensors, micro-controllers and 3D printers. What we are seeing now is that virtually any kind of smart device can be imagined, rapidly prototyped with intelligent sensors, interconnected via the cloud, and brought to market by almost anyone. Go to any hobby store and look at what you can buy and connect together to build something that can see you, connect to the Internet and be controlled by your smart phone. You have access to light sensors, infrared sensors, video cameras, NFC sensors, and USB connectors that can drive servos and motors, all controlled by an inexpensive micro-controller. You can model a physical device using free software over the web, then print it out and iterate on that device time and again.
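The hobby-store ingredients above take surprisingly little code to glue together. Here is a toy Python sketch of the kind of control loop such a DIY device runs; the sensor read is stubbed with random numbers and the threshold is invented, but on real hardware it would sample an ADC pin and toggle a GPIO driving a relay:

```python
import random  # stands in for real sensor reads in this sketch

# Hypothetical threshold -- a real device would be calibrated.
LIGHT_THRESHOLD = 300  # raw reading below which we call it "dark"

def read_light_sensor():
    """Stub for a light-sensor read; on a real board this would
    sample an analog pin instead of returning a random value."""
    return random.randint(0, 1023)

def decide_lamp_state(light_level, threshold=LIGHT_THRESHOLD):
    """Pure decision logic: the lamp should be on when it's dark."""
    return light_level < threshold

def control_step(light_level, lamp_is_on):
    """One iteration of the control loop: returns the new lamp state."""
    want_on = decide_lamp_state(light_level)
    if want_on != lamp_is_on:
        # On hardware this is where you'd toggle the output pin.
        lamp_is_on = want_on
    return lamp_is_on
```

Keeping the decision logic as a pure function makes it trivial to test on a laptop before ever flashing a board.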

What this means is that we are about to step into a new era. We've spent the last 20 years in a world where anyone with a PC and an inclination to do something cool could program the web or an app on a smartphone. We can now start programming the physical world just about as easily as any DIY home or technical project. It is as accessible and extensible as the web has been, yet the monetization possibilities may equal or far exceed those of the web or app space. We've all started to hear of smart wearable tech, from smart bracelets to watches and glasses. As the things we wear get smarter, there's a lot of opportunity for inventive developers. But that is thinking small. Let's look at even less sexy things that hold huge opportunity. For example, imagine creating a better way to enter your house or apartment because your phone or watch is paired with your front door. How many front doors are there that would need to be changed and upgraded? There is great potential in that one small area of the Internet of Things.
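To make the paired-front-door idea concrete, here is a minimal sketch of one way a phone and a lock might authenticate: a standard HMAC challenge-response over a key shared at pairing time. The names and flow here are my own assumptions for illustration, not any real product's protocol:

```python
import hashlib
import hmac
import os

# Hypothetical shared secret, provisioned when the phone is
# first paired with the lock.
SHARED_KEY = b"provisioned-during-pairing"

def lock_make_challenge():
    """The lock sends a fresh random nonce to the approaching phone,
    so a recorded response can't be replayed later."""
    return os.urandom(16)

def phone_respond(challenge, key=SHARED_KEY):
    """The phone proves it holds the shared key by HMAC-ing the nonce."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def lock_verify(challenge, response, key=SHARED_KEY):
    """The lock recomputes the HMAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The random nonce is what keeps this safe against replay: an eavesdropper who records one unlock exchange can't reuse it, because the next challenge will be different.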

To further illustrate, let me take a bad idea (because I don't want to give the awesome ideas away): the smart toothbrush. You could put a micro-controller with WiFi in the base of a toothbrush, then wire motion and infrared sensors into the head of the brush. The brush could gather data and send it to an app or web server. With some smart software you could analyze how well you brush your teeth: what areas you miss, what good or bad habits you have, with alerts to your phone should you miss a brushing. That data could give you, your dentist or your insurance company information to improve your dental care. This is entirely possible and could be invented by just about anybody with a bit of training, and prototyped using a 3D printer.
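The analysis side of the toothbrush idea is plain data crunching. A minimal sketch, assuming the head's motion sensor reports (quadrant, seconds) pairs per brushing session; the four quadrants and the 30-second target are invented for illustration, not dental guidance:

```python
from collections import Counter

# The mouth split into four quadrants -- purely illustrative.
QUADRANTS = ("upper-left", "upper-right", "lower-left", "lower-right")
MIN_SECONDS_PER_QUADRANT = 30  # made-up target, not dental advice

def analyze_session(events):
    """events: list of (quadrant, seconds) pairs as the brush head's
    motion sensor might report them. Returns per-quadrant totals and
    the quadrants that fell short of the target."""
    totals = Counter()
    for quadrant, seconds in events:
        totals[quadrant] += seconds
    missed = [q for q in QUADRANTS
              if totals[q] < MIN_SECONDS_PER_QUADRANT]
    return dict(totals), missed
```

The server-side app would run something like this per session and push a notification listing the missed quadrants.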

A smart toothbrush is silly, but there are countless opportunities to create newer, smarter things to interact with. Many could be patented, licensed and monetized with a bit of personal ingenuity. With standards for WiFi, NFC, USB, REST services, and micro-controllers, we have an infrastructure to create and connect so many things, in ways never possible before. The possibility of having so many connected smart things brings us to the next era of World 2.0.

World 2.0 Chapter 2: Ubiquitous Sensing
The next phase beyond DIY smart things is making the Internet of Things more aware of us, so we can control the world without having to do anything more than wave a hand or nod our head. At Intel we call this Perceptual Computing. However, if you look at today's technologies (Kinect, Leap Motion or Intel's Perceptual Computing), they all require a computer, camera or sensor looking at you from a fairly fixed point. A fixed camera is great for what we need now; however, like the PDA was for the smartphone, it's a necessary bridge to something much more awesome.

But let's backtrack a bit. If we understand Perceptual Computing (PerC for short), we can understand how the DIY Internet of Things lets us fully realize it. PerC is about making computing systems more aware of us, of where we are in space and what we are doing, so that they can interpret our motions, our speech and our behaviors to comprehend what we need from the system, just as other humans do. In other words, the world around us needs to sense us. The world, or at least the spaces where we do our computing, needs a nervous system to see, hear and process us. If you've ever seen "Star Trek: The Next Generation", you know the ship's computer is always able to sense the crew, allowing them to activate and access things by voice or proximity. The sensing is ubiquitous, allowing an interactive computer to be omnipresent.

Intel is starting to solve the fixed-sensor PerC problem with new integrated cameras in Ultrabooks and 2-in-1 computers in 2014. However, true ubiquity requires that your body, hands, head and voice be sensed as you move about. Recall from the DIY phase that we will have a world where doors, wearables, light bulbs, cars, appliances, table tops, TVs and potentially toothbrushes will have sensors and some interconnectivity. We've already seen how the things around us gain the ability to sense us independently. Now, if you aggregate the data from the wearables and smart things around you and process it in real time, you can very accurately sense a person in space, at almost any position and in the context of what they are doing, in just about any place they compute.
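The aggregation step can be sketched very simply. Assuming each nearby smart thing reports a rough (x, y) estimate of where you are plus a confidence score, the crudest possible fusion is a confidence-weighted average; a real system would use something far smarter (a Kalman filter, say), but this shows the idea of many weak sensors adding up to one strong fix:

```python
def fuse_positions(estimates):
    """estimates: list of ((x, y), confidence) tuples, one per sensor
    that currently sees the user. Returns the confidence-weighted
    average position. Coordinates and confidences are illustrative."""
    total_weight = sum(w for _, w in estimates)
    if total_weight == 0:
        raise ValueError("no sensor currently sees the user")
    x = sum(p[0] * w for p, w in estimates) / total_weight
    y = sum(p[1] * w for p, w in estimates) / total_weight
    return (x, y)
```

Two equally confident sensors split the difference; a more confident one (your watch, perhaps) pulls the estimate toward its reading.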

This is ubiquitous sensing. Security concerns aside, it brings a new era of things you've only seen in sci-fi movies. When Tony Stark has a problem to solve, he interacts with a computer hologram. His hands, his body, his head and his speech are being sensed and processed so he can simply talk, wave his arms and grab virtual objects in space. That freedom to interact with a virtual system as you would with real physical items is what becomes possible. While the movies focus on manipulating virtual objects, once we've connected our Internet of Things, all of those things can be controlled in concert. Potentially the color of your walls, the pattern in your carpet, and the light coming through your windows are as easily changed or set by just talking or waving your arms. At this point the lines between virtual and real break down and blur. This is World 2.0, where you can control virtual objects as if they were real, and program and control physical objects as if they were virtual.

Develop & Invent our Future
It's a big world of "things" out there to program, make smart and connect. The opportunity is immense, and just as we could never have predicted the capabilities and experiences brought to the web or to our smartphones and tablets, we can't easily predict what "things" will be created by the ingenious, inventive and splendid minds of developers. My advice for developers wondering "what's next": learn about micro-controllers, sensors, cloud processing, encrypting and protecting data, and 3D printing, then start inventing the future. It's going to be awesome.
