I recently joined the Intel Software and Services Group as a product manager on the team working on the Wind River Simics* virtual platform system. Before I moved to Intel, I had heard that there was a lot of simulation and virtual platform work going on inside the company, and the company did not disappoint! Here, I have found a ton of cool simulation technology (and plenty of fantastic products and smart people). Simulation is a key part of software and hardware design, and Intel is making very good use of it. In this post, I will discuss some of the simulation types we are using, and what we use them for!
I have been working with Simics for almost 15 years now, and even before I joined the team I had written some simulators for my PhD. Simics is one particular type of simulator (the fast functional instruction-set based virtual platform), but there are many other abstraction levels, choices of detail, and approaches to modeling and simulation out there. In this post, I will go through some of the simulation technologies that I have come across so far, both for hardware development and software development. In the future, I plan to dig into details on particular use cases to showcase simulation technology and simulation tools.
It is well known that hardware architects use microarchitecture-level simulators to evaluate and explore design variants. They also use higher-level models that only deal with latencies and bandwidth across interconnects to sketch out system architectures. There is no single tool or framework that can cover all use cases, but rather a set of tools used for different purposes. Behind each product that Intel ships, there was a lot of modeling and simulation done to design and validate it, and to enable software. This is how hardware design is done, and has been for a long time going back at least to the 1960s!
To validate system-level changes that have big effects on software, you need a fast functional model that runs code and models the new functionality, so that you can evaluate how software will make use of new hardware features. This role can be very nicely filled by Simics, as discussed in an earlier blog post I wrote for Wind River. Similarly, simulators can be used to evaluate and prototype new instruction set extensions, to make sure that they work and are useful in real software.
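To make the idea concrete, here is a minimal sketch (in Python, and emphatically not Simics or any real Intel ISA) of what "fast functional" means: a toy instruction-set interpreter that only cares about what each instruction does, not how long it takes, extended with a hypothetical fused multiply-add instruction so software can exercise the new feature before any hardware exists. All instruction names and register names here are made up for illustration.

```python
# Toy functional instruction-set interpreter (illustration only, not Simics).
# It models instruction *behavior* with no timing, which is what makes
# functional simulators fast enough to run real software.
def run(program, regs):
    for op, *args in program:
        if op == "mov":        # mov dst, imm  -> load an immediate
            dst, imm = args
            regs[dst] = imm
        elif op == "add":      # add dst, src  -> dst += src
            dst, src = args
            regs[dst] += regs[src]
        elif op == "fma":      # hypothetical new instruction: dst += a * b
            dst, a, b = args
            regs[dst] += regs[a] * regs[b]
        else:
            raise ValueError(f"unknown instruction: {op}")
    return regs

# Software written against the new instruction runs today, on the model:
regs = run(
    [("mov", "r0", 10), ("mov", "r1", 3), ("mov", "r2", 4),
     ("fma", "r0", "r1", "r2")],   # r0 = 10 + 3 * 4
    {"r0": 0, "r1": 0, "r2": 0},
)
print(regs["r0"])  # 22
```

Adding a candidate instruction to a model like this is a few lines of code, which is exactly why functional simulation is such a cheap way to find out whether an extension is actually useful to software before committing it to silicon.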
When signal processing algorithms go into a system, simulation is used both to validate that algorithms do what they should, and to decide which pieces to put into hardware and which into software. When done right, algorithm code can even be used to generate the hardware directly!
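As a sketch of what "validate that algorithms do what they should" can look like in practice, here is a hypothetical example (mine, not from any Intel tool): a moving-average filter implemented the way constrained hardware would compute it, with fixed-point arithmetic, checked against a floating-point reference model. The filter, bit widths, and error bound are all assumptions for illustration.

```python
# Toy algorithm-validation sketch: compare a fixed-point "hardware-style"
# moving-average filter against a floating-point reference model.
def reference_avg(samples, n=4):
    # Floating-point reference: the "golden" model of the algorithm.
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def fixed_point_avg(samples, n=4, frac_bits=8):
    # Hardware-style version: quantize inputs to 8 fractional bits and
    # use integer arithmetic only, as a fixed-point datapath would.
    scale = 1 << frac_bits
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - n + 1): i + 1]
        acc = sum(int(s * scale) for s in window)
        out.append((acc // len(window)) / scale)
    return out

signal = [0.1, 0.5, 0.9, 0.4, 0.2]
ref = reference_avg(signal)
hw = fixed_point_avg(signal)
worst = max(abs(a - b) for a, b in zip(ref, hw))
# Error should stay within a couple of quantization steps.
assert worst < 2 / (1 << 8)
```

Running the two models side by side over many inputs answers the key partitioning question cheaply: is the cheap fixed-point datapath accurate enough, or does this piece of the algorithm need to stay in software?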
Software developers also need models of next-generation platforms to develop firmware, UEFI system firmware, drivers, and bring up operating systems (which I blogged about a few years ago). This shortens the time to market and improves the quality of hardware and system solutions. In the end, such simulation models are used to enable the whole ecosystem around Intel hardware and software, to speed up system development and customer deployment of new solutions. The better this works, the faster solutions get into the hands of consumers and IT professionals.
Looping back to silicon and chips, in order to validate hardware designs, you have to run simulations of the actual RTL, on both software simulators and dedicated hardware devices. That is a huge field in and of itself, and one where I am definitely not an expert. Suffice to say that without RTL simulation, silicon would most likely never work the first time.
Power and power management is a very important aspect of modern computer design, and I am fairly close to the Docea team that Intel acquired last year. The Docea tool suite is all about developing power models and running simulation scenarios to validate and optimize power design. Very cool (or should I say hot?) stuff, where you can even get a virtual floorplan of a chip to light up showing how it gets warm as it runs.
Another tool in my organizational vicinity is the Intel® CoFluent™ Studio modeling toolset, which lets you use high-level models to estimate system performance and dimension and architect systems. Intel CoFluent Studio models use data flows to model the behavior of algorithms, software, hardware, and communications channels. It is a very general form of modeling that can be used to design the microarchitecture of a hardware accelerator, to determine the optimal topology for your thousand-node IoT sensor deployment, or to abstractly explore software architecture for a data processing node.
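To give a flavor of this style of high-level modeling, here is a deliberately tiny sketch (not the Intel CoFluent Studio API) of data-flow performance estimation: items flow through pipeline stages with assumed per-item service times, and simple arithmetic over the flow yields latency and throughput estimates long before any implementation exists. The stage names and times are hypothetical.

```python
# Toy data-flow performance estimate (illustration only, not CoFluent).
# Hypothetical per-item service times for three pipeline stages, in
# microseconds:
stages = {
    "sensor":  2.0,
    "dsp":     5.0,   # the bottleneck stage
    "network": 3.0,
}

def pipeline_latency(stages):
    # One item through an empty pipeline: sum of all stage times.
    return sum(stages.values())

def pipeline_throughput(stages):
    # In steady state, the slowest stage limits how fast items flow.
    return 1.0 / max(stages.values())

print(pipeline_latency(stages))     # 10.0 microseconds per item
print(pipeline_throughput(stages))  # 0.2 items per microsecond
```

Even a back-of-the-envelope model like this answers real architecture questions, such as which stage to accelerate in hardware, because speeding up anything other than the bottleneck stage changes latency but not throughput.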
It really is like swimming in a big sea of simulators and learning things all the time!
What gets really interesting is when you start to mix and integrate different types of simulators. When you combine simulators that model different phenomena and operate at different levels of abstraction, you can build some truly awesome combined systems. Some of these integrations have been made public, such as combining a Simics model of an Intel server platform with detailed models of hardware accelerators in order to test hardware designs with realistic inputs from a complete real driver stack. There are several cases where we have used Simics and its platform models as a way to combine disparate simulations for subsystems or particular phenomena into combined models.
I look forward to sharing more stories about simulation going forward!