The power of simulation and why developers should consider it mandatory—A conversation with IoT expert Sangeeta Ghangam


Getting your system and software architecture right is very important to the success of a product. It is particularly important when the system you are building has a long expected lifetime. Internet-of-Things (IoT) edge analytics is such a system: once you deploy your smart analytics gateway, you must live with the constraints of the hardware and software for a decade or so.

The earlier you can evaluate and analyze the performance of your design, the easier it is to change. If you wait until all the code is done and the hardware design is set, there is precious little flexibility left in the system. There is a better way: architect the software before writing code, using simulation. In this conversation with Intel IoT expert Sangeeta Ghangam, we show how to use simulation for IoT edge code, getting the system design right before the hardware is selected and the code is written.


TWEET THIS: IoT devs – analyze IoT performance before actual code


In the past, such architecture and analysis work was typically done using back-of-the-napkin numbers or in Microsoft Excel*. But with the increasing complexity of modern systems, these old-school tools are no longer effective or efficient.

Intel® CoFluent™ Studio offers a way to work before the code exists, but more concretely than adding numbers to a spreadsheet. By building a simulation model of the system's behavior and performance, and running simulations with varying input parameters, you can explore a much larger design space.

A conversation with Sangeeta

I recently met with Sangeeta Ghangam, a software engineer and IoT expert in Intel’s Internet of Things Group (IOTG) who—along with her team—used Intel CoFluent Studio to model and simulate an Intel Edge Analytics system to analyze and improve the system architecture. Below is the conversation. It’s been edited for brevity. Enjoy!

Jakob Engblom (JE): Sangeeta, I’d like to begin with a brief introduction of who you are and what you do at Intel.

Sangeeta Ghangam

Sangeeta Ghangam (SG): Sure. I’m a software engineer and currently work in IOTG as a Product Solution Lead. I started in IOTG in 2014 with a focus on analytics, and now focus on the next-generation Edge Application Platform (EAP). I have been at Intel for over five years; before joining IOTG, I worked on storage device drivers in the Platform Engineering Group.

So as part of the IOTG Product Development Team, my focus is on EAP development and driving synergies between the Moon Island Platform and the Edge Compute software.

JE: So Moon Island is essentially Intel hardware plus Wind River* software … in particular the Wind River Intelligent Device Platform (IDP). When I was at Wind River, I helped get IDP to run on a Wind River Simics* model of one type of Moon Island hardware. Funny to see how things fit together.

To get more concrete, let’s introduce the IoT system you worked on when you did the model using Intel CoFluent Studio.

SG: The system was a gateway running real-time edge analytics and decision-making code. The gateway would use a handful of sensor nodes to gather information and issue control commands to a fairly large industrial machine. My team was working with the software running on the gateway, which was the primary driver of customer value in this project. In the end, we had to provide the customer with a recommendation for which hardware to buy and deploy in order to run the Intel edge analytics software.

The system looked something like this (the number of sensor nodes and gateways would vary):

IoT System

JE: What were the issues you encountered in designing and architecting this system?

SG: We needed to understand how to size the system, given a certain set of edge analytics modules. There are many variables here: the number of sensors to attach to a single gateway, the nature of the connection between the sensors and the gateway, the compute power in the gateway, and the actual set of software modules running on it.


TWEET THIS: IoT devs – efficiently evaluate the effect of many system parameters, without hardware


We needed to find a solution that would let us run the workloads we needed today, but also allow room for growth. Once the gateway and the sensors were deployed, a hardware upgrade would be five to ten years out, but the software would be upgraded many times over the lifetime of the system. Thus, we needed to make sure we had some headroom, but without wasting power and cost.

JE: That is indeed quite a few variables to deal with … and nothing you can just do off-the-cuff.

SG: No, off-the-cuff does not cut it. We also needed to do this before the software was actually coded for the target, and in a systematic way. For this reason, we decided to use Intel CoFluent Studio to build a model of the system and its software.

Intel CoFluent Studio works at a higher level of abstraction than code, so we could start experimenting before the hardware and software were settled. By adding parameters to the model, we could simulate the effect of different types of hardware on the overall performance. And since there was no real code involved, it was also much easier to change the architecture: there was no code to rewrite to communicate in a different way or to compute using a different algorithm.
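To give a flavor of what "parameters instead of code" means in practice (the real model is built graphically in Intel CoFluent Studio, so this is only a rough analogy): software is described as abstract tasks with nominal costs, and the hardware is reduced to a handful of numbers, so trying a different gateway means changing a parameter rather than porting code. A minimal Python sketch of the idea, with entirely made-up task names and costs:

```python
# Hypothetical sketch of a parameterized, code-free performance model.
# Tasks carry an abstract cost; the platform is just a set of parameters.

PIPELINE = [
    ("ingest",    0.2e6),   # abstract cost per sensor sample, in CPU cycles
    ("filter",    0.5e6),
    ("analytics", 4.0e6),
    ("decide",    0.1e6),
]

def per_sample_latency_ms(cpu_hz, overhead_factor=1.1):
    """End-to-end processing latency for one sensor sample."""
    cycles = sum(cost for _, cost in PIPELINE) * overhead_factor
    return cycles / cpu_hz * 1e3

def gateway_utilization(cpu_hz, n_sensors, sample_rate_hz):
    """Fraction of CPU time consumed by the pipeline across all sensors."""
    cycles_per_second = sum(cost for _, cost in PIPELINE) * n_sensors * sample_rate_hz
    return cycles_per_second / cpu_hz

if __name__ == "__main__":
    for cpu_hz in (600e6, 1.3e9, 1.9e9):   # candidate gateway clock rates
        lat = per_sample_latency_ms(cpu_hz)
        util = gateway_utilization(cpu_hz, n_sensors=8, sample_rate_hz=10)
        print(f"{cpu_hz / 1e9:.1f} GHz: {lat:.1f} ms/sample, {util:.0%} CPU load")
```

Changing the sensor count, sample rate, or clock rate is a one-line change here, which is exactly the kind of sweep a simulation model makes cheap.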

JE: So what would such a model look like?

SG: Here is a small section of the model that shows the sensor connection and the data-processing pipeline:

CoFluent Model Application

It is a graphical modeling system in which we have included the main internal and external components of an edge system. For example, we have the sensors, the gateway platform, and a programmable logic controller (PLC) as far as the external components go … and then the various processing modules internal to each of these elements.

JE: How do you model software in this kind of setup?  Do you actually compute results or just keep it to abstract tasks that consume time and generate tokens to put into queues?

SG: The software shown above was modeled using precise system measurements from a representative edge analytics workload. Colleagues of ours in the Intel Software and Services Group (SSG) characterized the system by measuring the cycles per instruction (CPI), instruction count, and CPU and memory usage. These numbers were used by the Intel CoFluent technology modeling team to abstract the transactions. Since we have baseline operational statistics, we can easily extrapolate the effect of adding or removing internal processing modules to expand on this model.

JE: So to be clear, software is modeled as consuming a certain amount of resources on the processor, along with enough details to predict how long it will take to run a particular computational task?

SG: Yes. I worked with the Client Systems Optimization team in SSG to characterize the workload using internally developed tools, as well as Intel® VTune™ Amplifier. We collected metrics for CPI, instruction count, and detailed CPU/memory usage to model the workload accurately.
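For readers who have not seen this style of modeling before: the core arithmetic is simply that a task's execution time is roughly instruction count × CPI / clock frequency, and that single number is what the abstract task consumes in the simulation. A back-of-the-envelope sketch with invented numbers:

```python
def task_time_s(instructions, cpi, clock_hz):
    """Predicted execution time of a workload phase from profiling data."""
    return instructions * cpi / clock_hz

# Invented numbers in the style of such a characterization:
# 2 billion instructions at a CPI of 1.4 on a 1.3 GHz gateway core.
print(f"{task_time_s(2e9, 1.4, 1.3e9):.2f} s")   # about 2.15 s for this phase
```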

JE: Did you introduce the actual analytics algorithms into the model?

SG: Yes, we modeled the edge analytics as a starting point, and going forward we can use the Matlab plugins in Intel CoFluent Studio to integrate new analytics faster than the current turnaround of a year or more. We had a bit of luck here: the algorithm designers were working in Matlab, and Intel CoFluent Studio has a Matlab integration. Thus, we could run the Matlab algorithms as-is in the model generated by Intel CoFluent Studio, with no need to convert them to actual code. In this way, we could work with software functionality a year ahead of having actual running code on an actual platform, which is obviously handy.


TWEET THIS: Develop smarter – model SW functionality 1 year B4 having actual code on an actual platform.


JE: What were the results of your simulation, and how many different configurations did you actually run?

SG: For a given set of input variables and sensor data throughput, the model estimated the platform resource usage, giving us a clear idea of which gateway would best suit the workload under consideration. It also provided architectural input in areas where we had to devise a better way to manage the data processing in cases where the gateway was the fixed variable.
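In spirit, the sizing question then becomes a sweep: for each candidate gateway and sensor load, estimate utilization and keep only the configurations that preserve the headroom Sangeeta mentioned earlier. A hypothetical sketch of that selection loop (the candidate platforms, costs, and headroom budget are invented for illustration):

```python
# Hypothetical gateway candidates: (name, usable cycles per second).
GATEWAYS = [
    ("small",  600e6),
    ("medium", 1.3e9),
    ("large",  1.9e9),
]

WORK_PER_SAMPLE = 4.8e6   # abstract cycles per sensor sample (invented)
HEADROOM = 0.5            # keep utilization below 50% to leave room for SW growth

def pick_gateway(n_sensors, sample_rate_hz):
    """Smallest candidate whose estimated utilization stays within budget."""
    demand = WORK_PER_SAMPLE * n_sensors * sample_rate_hz
    for name, capacity in GATEWAYS:        # ordered from smallest to largest
        utilization = demand / capacity
        if utilization <= HEADROOM:
            return name, utilization
    return None, None

for sensors in (4, 8, 16, 32):
    name, util = pick_gateway(sensors, sample_rate_hz=10)
    if name:
        print(f"{sensors:2d} sensors -> {name} gateway ({util:.0%} utilization)")
    else:
        print(f"{sensors:2d} sensors -> no candidate leaves enough headroom")
```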

JE: How did you calibrate the model to gain faith in the results?

SG: To perform the initial calibration, we used real-time sensor data and processing timelines from actual hardware, which we added to the model as variables. This meant that we could compare the results from the model (for a certain set of tasks) with results from the hardware, increasing our faith in the model.
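The essence of such a calibration is comparing modeled task times against measured ones and flagging where they diverge. A small illustrative sketch, with invented measurements:

```python
# Hypothetical calibration check: modeled vs. measured task times, in seconds.
measured  = {"ingest": 0.0021, "filter": 0.0054, "analytics": 0.0410}
simulated = {"ingest": 0.0019, "filter": 0.0050, "analytics": 0.0445}

for task, real in measured.items():
    sim = simulated[task]
    error = (sim - real) / real
    verdict = "OK" if abs(error) < 0.10 else "re-calibrate"
    print(f"{task:10s} measured {real * 1e3:6.2f} ms  "
          f"simulated {sim * 1e3:6.2f} ms  error {error:+.1%}  {verdict}")
```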

JE:  Is there any chance you could have done this using hardware?

SG: Not really. In our lab, we have a few gateways and a few sensors. Evaluating the architecture on the lab bench would have limited us to just the configurations we could build using that set of hardware. We would also have had to wait for the final software to do any kind of performance analysis and estimation. We could have tried different sets of analytics modules, but by then it would have been too late to change the capacity of the gateway hardware running them, so it would have been a matter of packing whatever we had into a given box. Not very architectural. Or shift-left.

In contrast, with Intel CoFluent Studio, we could study the problem before we had hardware, and without being limited by the hardware configurations available to us in the lab. Eventually, we did run code on the real machine, but by that point we had a good idea of what would work and what would not.

JE: Are you still using the model today?

SG: Yes and no. The project has ended, so we no longer use the model we’ve talked about here. But we’re taking key learnings from the project and applying them to our next project. In this new project, we are way ahead of hardware availability. The Intel CoFluent Studio model and simulation lets us do architecture work before we have anything to run code on, and way before we have the actual code in hand.


TWEET THIS: With Intel CoFluent Studio, you can model performance before you have code in hand


We have extended and improved the model to make it easier to vary the set of edge analytics modules used, allowing for faster experiments that explore a larger architectural space.

JE: That’s really good to hear! Nothing proves the value of a tool like users who keep using it once the first test project is over. I have sold and marketed development tools all my career, and there is nothing better than a user who decides that your tool is part of the standard tool chest from now on.

Thanks a lot for your time and insights, Sangeeta. This was a nice example of how simulation can be used to build better systems faster, and why simulation should be considered a mandatory tool for system, hardware, and software developers everywhere.

SG: Thank you for the opportunity to talk more about the modeling project. I think the lessons learned here can be applied to several other areas, so that we actually have performance data supporting a particular product vision.

Follow me @__jengblom for news on my latest blog posts.

*Other names and brands may be claimed as the property of others.

For more complete information about compiler optimizations, see our Optimization Notice.