Archived - LocalSense: Mixing Intel® RealSense™ Technology and IoT in the Enterprise

Published: 08/05/2016, Last Updated: 08/05/2016

The Intel® RealSense™ SDK has been discontinued. No ongoing support or updates will be available.

The Internet of Things (IoT) is mind-boggling.

Smaller devices, faster processing power, and cheaper materials are enabling us to track more and more aspects of our lives. In the consumer space, not a single day passes by without a connected device making a grandiose appearance on the market—everything from fitness trackers to smart spoons and even pet-triggered door openers.

However, in the enterprise world, where interest for the IoT is clear and present, the pace of adoption is much slower:

  • Connected hardware and data analysis are not new to the enterprise world. Many companies already have systems in place, such as RFID, NFC, and temperature sensors. As a result, the adoption curve is shallower because companies may not want to take on additional hardware expenditures.
  • Enterprises put a large premium on security, privacy, and reliability. Do an Internet search for ways to hack IoT consumer devices and you’ll realize why these breaches are bad news in the enterprise world where data is part of the IP and must be protected. Considering reliability issues, a temperature sensor that fails in a home may be a minor inconvenience, whereas a temperature sensor that fails in the operating room can lead to serious consequences.
  • The consumer space deals with scaling issues differently than the enterprise and commercial worlds do. For example, while four sensors may be enough to monitor a home’s temperature, a hospital needs 5,000 of them.

Don’t get us wrong: there are tons of fun things you can do at the consumer level, including playing pranks on your loved ones, but our initial focus is on implementing a system that could be used at the enterprise level. That is why, for the LocalSense project, we decided to focus on the commercial space; that’s where the interesting questions and challenges are.

In doing so, we are going to show you some of the limitations and problems we encounter, as well as some potential long-term solutions.

What is the LocalSense project?

LocalSense was born during the Dublin Innovators' Summit when different people in our group realized they had recently met with customers facing a real problem that could be solved using different tools available in our respective repertoires.

At the core of LocalSense is the drive toward using IoT to keep people safer in specific environments where administrators need to know the whereabouts of people and devices in order to better inform, prevent, and react.

For example, during the Dublin summit we focused on two specific scenarios: a hospital setting and a pharmaceutical or chemical factory.

In a hospital setting, LocalSense can be used to:

  • Locate key staff in order to better respond to an emergency
  • Locate equipment in order to provide better utilization, care, and management
  • Localize news and messages
  • Help make sure that patients stay within the hospital compounds
  • Use facial recognition to let a doctor enter a room rather than having to use a badge (this capability also helps keep the doctor’s hands cleaner)

In a pharmaceutical or chemical factory, localization can be used to:

  • Warn employees that the environment in which they work can be dangerous (for example, in a laboratory with hazardous substances)
  • Monitor the location of employees in order to promptly respond to an emergency situation
  • Implement facial recognition to allow access to laboratories or rooms without using a badge

What is the technology behind LocalSense?


LocalSense is at the proof-of-concept (PoC) level, so we are using readily available hardware. Because we are taking a “component-first” approach, it would be simple to switch the hardware components around, if, for example, a given device is using too much power in certain environments.

  • We use Intel® RealSense™ technology for face recognition and voice synthesis. In a production setting, we use the Intel® RealSense™ camera SR300 because of its advanced recognition capabilities. However, in demos a lower-resolution camera may work (we also use the one on our MacBook* Pro).
  • IoT devices were connected to an Intel® Edison board using an Arduino* shield and the Grove* starter kit from Seeed. Other boards should work as well. We’ve also run a few tests on the Intel® Galileo board.

  • For beacons, we selected Estimote* (for more information about beacons in our project, head to the beacons section below).
  • Intel® NUCs act as local servers where we store some data that we do not want broadcast to an external network (for example, a person’s face data).
  • Connectivity is an issue in certain environments, and we will soon have some connectivity options through Sigfox.

On the software side, we use:

  • Intel® XDK IoT Edition, where we build the code deployed on the Intel® Edison board. We also use the Intel® XDK to emulate and test our companion app.
  • Node.js* on the Intel® Edison board. Again, you could use something else as long as you can make REST/HTTP calls.

Communication and device management

From the start, we knew that communication routing would be one of the challenges of the project. We are using SmartNotify* patent-pending technology to route communications and events between humans and machines.

A major goal of LocalSense is to provide seamless methods for authentication and access that avoid physical contact where possible.

The LocalSense Biometric Checkpoint builds upon Machine Vision components of the Intel RealSense technology to create a touchless interface for building access control.

Intel RealSense cameras are powerful sensors and face recognition is among their wide range of capabilities. The Intel® RealSense™ SDK provides comprehensive algorithms and techniques for detecting a user’s face, pose, and facial landmarks, which serve as a strong foundation for creating a robust biometric user identification system.

Biometric checkpoints are a reactive component of LocalSense. They maintain real-time communication with the central hub in order to authenticate the users already registered in the system. They also provide visual feedback to people with insufficient clearance levels and notify security in case of unauthorized access.
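The checkpoint behavior described above can be sketched as a small decision function: compare a recognized user’s clearance with the level an area requires, give feedback, and flag unauthorized access. This is a hypothetical JavaScript illustration; the field names and clearance levels are assumptions, not the actual LocalSense implementation.

```javascript
// Hypothetical sketch of the checkpoint decision logic: field names
// (registered, clearanceLevel) and levels are illustrative assumptions.
function evaluateAccess(user, requiredClearance) {
  if (!user || !user.registered) {
    // Unknown face: deny and notify security
    return { allow: false, feedback: 'Unregistered person', notifySecurity: true };
  }
  if (user.clearanceLevel >= requiredClearance) {
    // Recognized user with sufficient clearance
    return { allow: true, feedback: 'Welcome, ' + user.name, notifySecurity: false };
  }
  // Registered but under-cleared: visual feedback only
  return { allow: false, feedback: 'Insufficient clearance', notifySecurity: false };
}
```

In the real system this decision would be made in real time against the central hub’s user records rather than a local object.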

A user can be added to the system through secured registration terminals, which use Intel RealSense technology to learn the user’s facial features and then add this data to the user’s digital profile.

The following code sample reveals key elements for creating a Face Recognition module using the Intel RealSense SDK and C#:

private void WorkerThread()
{
    // Loop that acquires and releases RealSense data frames
    while (senseManager.AcquireFrame(true) >= pxcmStatus.PXCM_STATUS_NO_ERROR)
    {
        // Acquire the RGB image data
        PXCMCapture.Sample captureSample = senseManager.QuerySample();
        Bitmap frameBitmapRGB;
        PXCMImage.ImageData colorData;
        captureSample.color.AcquireAccess(PXCMImage.Access.ACCESS_READ, PXCMImage.PixelFormat.PIXEL_FORMAT_RGB24, out colorData);
        frameBitmapRGB = colorData.ToBitmap(0, captureSample.color.info.width, captureSample.color.info.height);

        // Get face data
        if (faceData != null)
        {
            totalDetectedFaces = faceData.QueryNumberOfDetectedFaces();

            if (totalDetectedFaces > 0)
            {
                // Get the first face detected (index 0)
                PXCMFaceData.Face face = faceData.QueryFaceByIndex(0);

                // Process face recognition data
                if (face != null)
                {
                    // Retrieve the recognition data instance
                    recognitionData = face.QueryRecognition();

                    // Check whether the user is registered in the local database
                    if (recognitionData.IsRegistered())
                    {
                        userId = Convert.ToString(recognitionData.QueryUserID());

                        // Store user data
                        if (doStoreUser)
                        {
                            // Check whether the ID already exists in the User Data Store
                            bool newID = true;
                            foreach (var userData in userDataStore)
                            {
                                if (userData.ID == userId)
                                {
                                    newID = false;
                                }
                            }

                            if (newID)
                            {
                                // Store a user snapshot
                                string snapshot = "snapshot" + userId + ".jpg";
                                frameBitmapRGB.Save(snapshot, System.Drawing.Imaging.ImageFormat.Jpeg);

                                // Add the new user to the User Data Store on the UI thread
                                this.Dispatcher.Invoke((Action)(() =>
                                {
                                    userDataStore.Add(new UserData() { ID = userId, UserName = tbUserName.Text, ClearanceLevel = tbClearanceLevel.Text, Snapshot = snapshot });
                                }));
                            }

                            doStoreUser = false;
                            doRegisterLocal = false;
                        }
                    }
                    else if (doRegisterLocal)
                    {
                        // Add the unregistered user to the local recognition database
                        recognitionData.RegisterUser();
                        doRegisterLocal = false;
                    }
                }
            }
        }

        // Release the frame so the next one can be acquired
        captureSample.color.ReleaseAccess(colorData);
        senseManager.ReleaseFrame();
    }
}

If you’re interested in code snippets to see how we handle face recognition and the database setup, check out our GitHub* demo code.

Using face recognition technology, we can quickly assess whether someone can enter certain areas. We can also enhance safety by being able to push information that is contextualized and relevant to the person we just authenticated.

For example, imagine that you're entering an operating room. Right after you are scanned, the checkpoint can start giving you verbal information about the patient as you prepare for surgery.

We believe this feature can be extremely handy when managing groups or people who are unable to read because of either age or education.


Voice synthesis

Voice is a wonderful thing, yet it is still not standard in computing. We’d love to change that!

Once we had our face-recognition system working, we envisioned voice as a tremendous add-on from two angles:

  • Giving information to the person doing the security checks.
  • Giving information to the person who was just validated. Why? Well, think of how much more efficient this process can be: you pass the checkpoint and then hear specific news and information pertaining to the area you just entered (SmartNotify can route content based on location, so we use that capability here).

Voice capabilities also provide a way to cater to the elderly or people with disabilities much more effectively.
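To make the routing idea concrete, here is a toy sketch of selecting messages by the area a person has just entered, ready to be handed to a text-to-speech engine. SmartNotify’s actual routing is proprietary and patent-pending; the area IDs and messages below are purely illustrative.

```javascript
// Toy sketch of location-based message routing. SmartNotify's real
// routing is proprietary; this only illustrates picking messages by
// the area a validated person has just entered. All IDs are invented.
const messagesByArea = {
  'operating-room-2': ['Patient briefing available', 'Sterile protocol in effect'],
  'lab-3': ['Hazardous substances in use: wear protective gear']
};

// Return the messages to speak for a user entering a given area
function messagesFor(areaId) {
  return messagesByArea[areaId] || [];
}
```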


Beacons

Beacons are the new buzzword in the tech industry. In the context of LocalSense, beacons can use indoor awareness to provide localized information and help people better protect themselves and their equipment. For example, read our blog about SmartNotify’s Safe Travels application.

In our scenario, we are using beacons to better locate people indoors and convey relevant information. Within the PoC we use a simple Intel® Edison board as the capture point and the Estimote beacons as the roaming devices.

We are currently using Estimote beacons, though there are many other manufacturers on the market that you could consider. You may want to check out this security article to make sure your requirements are met.

Many companies are entering the beacon business, offering iBeacon* devices for iOS* and their Android* counterparts. As long as you check the beacons’ technical specifications, you can use just about anything. Technically, you can even turn an Intel® Edison board into a beacon.

The only drawback we found with using Estimote beacons is the way the supplier handles device registration. Currently the devices are automatically assigned to the email address that makes the purchase, and then need to be reassigned to the proper users. This works fine in a consumer environment, though it could be greatly improved for an enterprise environment, where the people making the purchases are often miles removed from operations.

Since Estimote is releasing some very cool tags, we are hoping they change their process a bit and simply send a box that includes an authorization code.

Setting up beacons to track an indoor location taught us several lessons; you can follow along with some actual code in our GitHub* repository, which implements a Kalman filter to track Estimote beacons.

  • You can estimate how far you are from a beacon by using two data points: (1) the device’s known output power at 1 m, and (2) the received signal strength indicator (RSSI). Think of RSSI as signal strength.
  • RSSI readings are extremely noisy; given that a beacon broadcasts roughly every second, the readings will differ even if the device is not moving at all.
  • A beacon’s apparent signal strength varies greatly based on obstacles between the device and the receiver, or even on the way you are facing (the body acts as a shield).
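The distance estimate in the first bullet is typically computed with the log-distance path-loss model. The sketch below is the textbook formula, not necessarily the exact code in our repository; the path-loss exponent n is an assumption you would tune per environment.

```javascript
// Estimate distance (in meters) from a beacon using the log-distance
// path-loss model. txPower is the calibrated RSSI at 1 m; n is the
// environment-dependent path-loss exponent (~2 in free space, higher
// indoors). Textbook formula, not necessarily the LocalSense code.
function distanceFromRssi(txPower, rssi, n) {
  n = n || 2;
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// At the calibrated power the estimate is 1 m; a reading 20 dB weaker
// with n = 2 comes out to roughly 10 m.
```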

In our case we had to resort to several tricks to “quiet” the data. If you check out our GitHub repository you will see that:

  • We used something from our GIS bag of tricks: the Kalman filter. It is used in GIS applications to estimate a point’s true position from noisy measurements that update quite often.
  • We average these figures and send data back every five seconds.
  • In a production environment, the data analysis and filtering would be more refined than this open-source approach.
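A minimal one-dimensional Kalman filter for smoothing RSSI might look like the following. This is a simplified sketch of the technique, not the code from our repository, and the noise parameters q and r are illustrative values that would need tuning.

```javascript
// Minimal 1-D Kalman filter for smoothing noisy RSSI readings.
// Simplified sketch of the technique; q and r are illustrative.
function KalmanFilter(q, r) {
  this.q = q;       // process noise: how much the true value can drift
  this.r = r;       // measurement noise: how noisy each reading is
  this.x = null;    // current estimate
  this.p = 1;       // estimate covariance
}

KalmanFilter.prototype.filter = function (measurement) {
  if (this.x === null) {
    this.x = measurement;   // initialize with the first reading
    return this.x;
  }
  // Predict: the beacon is assumed quasi-static, so only covariance grows
  this.p = this.p + this.q;
  // Update: blend prediction and measurement by the Kalman gain
  const k = this.p / (this.p + this.r);
  this.x = this.x + k * (measurement - this.x);
  this.p = (1 - k) * this.p;
  return this.x;
};

// Usage: const kf = new KalmanFilter(0.01, 4);
// readings.forEach(function (rssi) { smoothed = kf.filter(rssi); });
```

Because each update blends the previous estimate with the new reading, the output stays within the range of the measurements seen so far and damps the second-to-second jitter described above.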

Obviously, in a production environment you should have the data transit through a more powerful data store where you can do faster calculations and run a sturdier AI algorithm.

Lessons learned and next steps:

  • Remember the Spider-Man movie where Uncle Ben tells the hero, “With great power comes great responsibility.” The minute you start tinkering with facial recognition and sensors, you need to be aware of the potential privacy and security pitfalls.
  • With these devices and technologies, the amount of data you are harvesting about a person’s location is staggering.
  • We believe in not storing the data. We use it to process the information and return results to the device, but we do not store all the location points or facial data.
  • We can solve a USD 12B problem: During the PoC, we started to participate in different hackathons and also started to showcase our project to different people. It turns out that LocalSense can be used to solve a USD 12B problem. Interested in learning more?

And, we are soon going to launch a crowdfunding campaign and would love for you to take part in it!

About the team:

The team members are Gregory Menvielle, Silviú Tudor-Serban, Agnès Duverger, and Alex Niquille, with contributions from Massimo Bonanni, Thomas Fickert, and the entire team of the Intel® Software Innovator program.

