Integration. A word to scare children with? Maybe not, but it is one of the hardest parts of system engineering and development. When different pieces of hardware, firmware, and software are combined to build a complete system, all kinds of issues can arise. The classic way to build systems often had a “big bang integration” towards the end, where somehow all the pieces would come together and work. That was a bad idea, and hence the move towards continuous integration enabled by simulation and modeling tools such as the Simics® Virtual Platform.
A Tangled Problem
For computer chip and system-on-chip (SoC) design, integration has to be done pre-silicon in order to find integration issues early, so that designs can be updated without expensive silicon re-spins. Such integration involves a lot of pieces and many cross-connections. Even if we only include a couple of IP blocks, the picture becomes rather tangled:
Figure 1. A tangle of IP
There is typically firmware to integrate for each IP block. The combined hardware-firmware IP block will have an interface to its software driver, which in turn integrates with the operating system (OS). In some cases, accesses from the driver and OS all go to the firmware, while in other cases, there might be direct hardware accesses in addition to firmware-mediated access. Each IP block will communicate with other blocks via direct lines, networks-on-chip (NoCs), or buses. Power management hardware and firmware will control the activity of all blocks in the chip. The Unified Extensible Firmware Interface (UEFI) or other BIOS and boot code that brings up the system will need to access hardware to take inventory, bring it up, and enable some of it. The software drivers for IP blocks sometimes load the firmware onto the IP blocks, and are thus responsible for booting them.
In short, there are many scenarios to consider and test, across quite a disparate set of components and types of software, firmware, and hardware.
Simulating the Tangle
To do integration in pre-silicon, we need to build virtual platforms that provide a complete system setup – from the “obvious” main cores running the main software stacks, to the obscure processor cores inside the IP blocks that run firmware.
Such models are built as part of the overall virtual platform development task, but just as often, there are existing models from IP block teams and IP vendors that can be used to quickly get a model in place. Such models come from many different sources and are written using a wide variety of frameworks. A particularly common case is function acceleration for graphics, media, and networking. In such cases, hardware designers tend to use simulators. We also often need to include simulators for physics and mechanics to build a truly complete system. The overall picture looks like this:
We have many different groups, each contributing their models built using their own favored modeling technologies. We need to pull it all together into a single platform that can run real software and that looks like the real thing to the software.
One way to do this is to build ad-hoc point integrations between different simulators in order to tackle particular problems. This has the potential to create quite a few separate integrations, such as:
Figure 3. Ad-hoc point integrations between different simulators tackle particular problems
This works well for a few models, but when the number of models starts to increase, the number of combinations explodes, and it quickly becomes an exercise in futility. Counting every group of two or more models as a potential integration, each model you add more than doubles the number of possible combinations: at four models there are 11 combinations, and at five models, 26.
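The growth described above can be checked with a quick calculation. Treating every subset of two or more models as a potential ad-hoc integration gives 2^n − n − 1 combinations for n models, while the common-base approach of the next section needs only one adapter per model. A small sketch:

```python
def point_integrations(n):
    # Every subset of two or more models is a potential ad-hoc integration:
    # 2**n subsets in total, minus the n single-model subsets and the empty set.
    return 2**n - n - 1

def adapters_to_common_base(n):
    # With a common base simulator, each model needs exactly one adapter.
    return n

for n in range(2, 7):
    print(n, point_integrations(n), adapters_to_common_base(n))
# point_integrations: 1, 4, 11, 26, 57 - matching the numbers in the text
```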
Building Integrations with Simics as the Base
A more practical solution is to build adapters or integrations from each model into a common base such as the Simics Virtual Platform:
Figure 4. Build integrations from each model into a common base
In this case, we only need to build a single integration for each model, and then any combination of models can be produced by fitting different models into the common base simulator, Simics. This approach makes it feasible to deploy arbitrary combinations of models, facilitating integration testing across blocks.
The Simics framework has proven to be good at this over time, thanks to a few core technology choices that date back to the earliest days of Simics:
- Simics was designed to support multiple languages for modeling. C, C++, Python*, SystemC are all supported as standard languages, but we have also seen users integrate code in Matlab*, CUDA, and Java* into Simics modules.
- Simics uses the host platform C-level ABI for linking models and for the interfaces between models. This ABI is the same regardless of the compiler used, avoiding the need to specify a compiler version for model builds. It is normal for system models to be built with models compiled using a range of compilers and compiler versions.
- Simics packages models into binary modules. This separates the model build from the user’s system configuration. A user of a model does not need access to the source code, nor do they need to care about how a model is built.
- Communication between models is performed using a set of Simics interfaces. This provides interoperability via a common standard, and it ensures that models can connect to Simics platforms and Simics features like PCIe and Ethernet simulation. It does not mean that the interfaces are static over time; they have evolved as needed to support new use cases.
- The Simics API is deep and open to all users, making it possible to write powerful integrations without relying on the Simics base product to add specific support.
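The interface-based interoperability described in these points can be illustrated with a minimal sketch. This is not the actual Simics API (the real interfaces are C structs of function pointers passed across the host ABI); all names here are made up, but the pattern is the same: a model depends only on a named interface, never on another model's internals, so any implementation can be swapped in.

```python
# Hypothetical sketch of interface-based model interoperability.
# "TransactionInterface" and the model classes are illustrative names only.
class TransactionInterface:
    """A minimal made-up interface: read/write bytes at an address."""
    def read(self, addr, size): raise NotImplementedError
    def write(self, addr, data): raise NotImplementedError

class RamModel(TransactionInterface):
    """One possible implementation of the interface."""
    def __init__(self, size):
        self.mem = bytearray(size)
    def read(self, addr, size):
        return bytes(self.mem[addr:addr + size])
    def write(self, addr, data):
        self.mem[addr:addr + len(data)] = data

class CpuModel:
    """The CPU model only knows the interface, not the RAM model."""
    def __init__(self, target: TransactionInterface):
        self.target = target
    def store_word(self, addr, value):
        self.target.write(addr, value.to_bytes(4, "little"))
    def load_word(self, addr):
        return int.from_bytes(self.target.read(addr, 4), "little")

cpu = CpuModel(RamModel(1024))
cpu.store_word(0x10, 0xDEADBEEF)
print(hex(cpu.load_word(0x10)))  # -> 0xdeadbeef
```

Because the CPU model is written against the interface, replacing `RamModel` with a more detailed memory model (or a bridge to another simulator) requires no change to the CPU model at all.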
Heterogeneous Virtual Platforms Built with Simics
The result is that Simics platforms are made up of a heterogeneous set of components. Normally, there is a base platform modeled using Simics directly, providing the main processor cores and chips of an Intel® platform, for example.
The Simics platform is fast, and it can boot and run operating systems and complete software stacks. Additional components are then added to the base, or components from the base platform are replaced by more detailed models. In many cases, both a base Simics model of a device or subsystem and a more detailed integrated simulator are used. For example, an audio subsystem can be simulated as a fast functional model at its interface to the rest of the system, or as a full "white box" model that includes the processor cores and devices found inside the subsystem and runs its firmware, enabling integration testing between drivers and firmware.
Another example shown in the picture above is replacing parts of the platform with actual Register-Transfer Level (RTL) running on emulators, simulators, or FPGA prototypes. In this case, transactors are used to connect RTL to the transaction-level simulator, usually running the RTL on some form of external hardware box in order to make it fast enough to be useful.
On the left in the above picture, we also see the example of Simics being integrated with environment and world models from the physical domain. I have discussed this particular case at greater length in a separate post.
What about Performance?
Building a virtual platform by integrating many disparate parts can have a performance impact. However, that is usually not caused by the integration, per se. From experience, the effect of translations between interfaces has a very small impact on overall simulation performance. Three different effects tend to cause performance issues:
- When you put more simulators together, the work needed to run the simulator goes up, and the combined system runs slower, even if all parts are optimized. There is an unavoidable difference between simulating one processor and simulating a hundred. The way to mitigate this issue is to make sure the setups used are appropriate for each test – to not always run with the most complete and complex possible platform.
- Some models work well on their own but do not work ideally when integrated in a bigger context. For example, driving a model forward with many simulator events tends to work when running a model on its own, but it really hurts when integrating a model with many other models. This is usually pretty easy to fix once the problem has been identified.
- The integrated models might be slow by design, which will slow down the overall integrated platform. A cycle-level model that faithfully models the microarchitecture of a subsystem will run orders of magnitude slower than a transaction-level model, and that is going to impact the overall speed of the platform. The solution to this is to pair a detailed model with a fast model, minimizing the amount of time spent using the detailed model when its details are not really needed.
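The last mitigation, pairing a detailed model with a fast one, can be sketched in a few lines. This is a toy illustration with made-up cost numbers, not Simics functionality: the simulation runs the fast model until it reaches the region of interest, hands the architectural state to the detailed model, and hands it back afterwards.

```python
# Hypothetical sketch of pairing a fast functional model with a detailed
# model of the same subsystem. Costs are made-up relative simulation costs.
class FastModel:
    cost_per_op = 1
    def __init__(self):
        self.state = 0
    def step(self):
        self.state += 1

class DetailedModel:
    cost_per_op = 1000   # orders of magnitude slower than the fast model
    def __init__(self, state=0):
        self.state = state
    def step(self):
        self.state += 1

def run(total_ops, detail_window):
    """Run mostly fast; use the detailed model only inside the window."""
    cost = 0
    model = FastModel()
    for op in range(total_ops):
        if op == detail_window[0]:            # reached region of interest:
            model = DetailedModel(model.state)  # carry state into detail
        elif op == detail_window[1]:          # done with the detail:
            fast = FastModel()
            fast.state = model.state            # carry state back to fast
            model = fast
        model.step()
        cost += model.cost_per_op
    return model.state, cost

state, cost = run(10_000, (5_000, 5_100))
# All 10,000 operations execute, but only 100 pay the detailed-model cost:
# 9,900 * 1 + 100 * 1,000 = 109,900, versus 10,000,000 if run fully detailed.
```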
So overall, integrating many different simulators won’t in itself necessarily hurt performance compared to building a homogeneous platform from scratch.
Integrating the Virtual Platform into Other Systems
Another aspect of integration that might not be immediately obvious is integrating the virtual platform itself into higher-level flows. In most cases (at least in terms of simulated hours and number of simulation runs), virtual platform models are run from an automatic test system or launcher system, rather than as interactive runs on a user’s desktop.
For such cases, it is helpful to have a common simulator platform that encapsulates all other models. The higher-level systems can be written to make use of a single tool, regardless of the internal make-up of the model that is being used. By providing a consistent automation and encapsulation interface, Simics makes it possible to build reusable test infrastructure that can work across different targets and different configurations of the targets. There does not need to be any native Simics model in the target virtual platform at all – it is still beneficial to integrate with Simics just to fit into the infrastructure built around it.
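The encapsulation idea above can be sketched as a facade: the automation layer sees one launcher interface, regardless of what models make up the target underneath. All class and function names here are hypothetical, chosen only to illustrate the design.

```python
# Hypothetical sketch: one launcher facade for all target configurations,
# so test infrastructure never depends on a target's internal make-up.
class SimulatorLauncher:
    def __init__(self, target_config):
        self.target_config = target_config
        self.log = []
    def run_until(self, checkpoint):
        # A real launcher would drive the simulator here; we just record it.
        self.log.append(f"{self.target_config}: ran until {checkpoint}")
    def results(self):
        return list(self.log)

def nightly_regression(targets, checkpoint="boot-complete"):
    """One test loop works unchanged for every target configuration."""
    results = {}
    for target in targets:
        sim = SimulatorLauncher(target)
        sim.run_until(checkpoint)
        results[target] = sim.results()
    return results

out = nightly_regression(["base-platform", "platform-with-rtl-audio"])
```

Adding a new target, even one with no native Simics models inside, only means adding a configuration name to the list; the regression loop itself never changes.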
Such integrations also last all the way from pre-silicon to post-silicon, deployment, and maintenance. Getting automation in place is a key piece of modern software development, and virtual platforms can help a lot with that, as discussed previously.
Virtual Platforms are Necessary for Early System Integration
To do system integration early in the cycle, virtual platforms such as Simics are necessary. To build complete platforms that can run all required software loads (in particular, firmware), virtual platforms are often built as integrations of various pre-existing models and parts. Such integrations provide a way to quickly get virtual platforms in place with sufficient detail to run all the software, while still providing uniform packaging towards other systems.
- Ecosystem Partners Shift Left with Intel for Faster Time-to-Market: Intel’s Pre-Silicon Customer Acceleration (PCA) program scales innovation across all operating environments using the Simics® Virtual Platform as a primary technology.
- Shifting Left—Building Systems & Software before Hardware Lands: Our shift-left began with efforts to coordinate the co-development of platform hardware and software—one effect was moving software from the end of product development to front and center.
- Using Clear Linux* for Teaching Virtual Platforms: For Simics training and demo purposes, we often use Linux* running on the virtual platforms. Linux is free, open-source, and easy to get.
- Simics Software Automates “Cyber Grand Challenge” Validation: DARPA used Simics to help run a “Cyber Grand Challenge” where automated cyber-attack and cyber-defense systems were pitted against each other to drive progress in autonomous cyber-security.
- Containerizing Wind River Simics® Virtual Platforms (Part 1): How developers can use containers together with Wind River Simics virtual platforms—the technology of containers and how to use them with Simics.
- Using Wind River Simics® with Containers (Part 2): Advantages over using hardware for debugging, variation, scaling, fault injection, automation, pre-silicon software readiness, and more.
Dr. Jakob Engblom is a product management engineer for the Simics Virtual Platform tool, and an Intel® Software Evangelist. He got his first computer in 1983 and has been programming ever since. Professionally, his main focus has been simulation and programming tools for the past two decades. He looks at how simulation in all forms can be used to improve software and system development, from the smallest IoT nodes to the biggest servers, across the hardware-software stack from firmware up to application programs, and across the product life cycle from architecture and pre-silicon to the maintenance of shipping legacy systems. His professional interests include simulation technology, debugging, multicore and parallel systems, cybersecurity, domain-specific modeling, programming tools, computer architecture, and software testing. Jakob has more than 100 published articles and papers and is a regular speaker at industry and academic conferences. He holds a PhD in Computer Systems from Uppsala University, Sweden. Read all of Jakob Engblom's posts.