by Sebastian Schoenberg and Ken Strandberg
Virtualization in a Tera-Scale Environment
In the previous article, we talked about the synchronization challenges developers face in an environment with a large number of cores, and two possible ways to approach the challenges. In this article, we’ll talk more about the additional benefits expected from virtualization in future Tera-scale environments.
With increased computing capacity and scalability, Tera-scale systems of the near future will enable much greater functionality in virtual environments. This foundation will make it possible for new, scalable applications and virtual appliances to enter the market. And yet, with this evolution of new virtualized systems, the requirements of legacy applications that businesses continue to depend on won’t need to be sacrificed. Virtualization also brings a considerable benefit to the developer: because the software must be tested and validated against fewer environments, companies face fewer challenges bringing it to market. These factors – scalability, virtualization, and fewer platform challenges – will create a new, rich development environment around Tera-scale virtualized systems.
Tera-scale and Virtual Machines
Open architectures, like Intel® Architecture, standardized PC platform components and systems, and Windows* and Linux* operating systems, have provided the foundation for innovative products. They’ve also catalyzed the growth of entirely new markets. This openness, however, creates havoc for developers who must validate their products against a vast array of platforms and system configurations. The enormous costs of testing in such a varied environment can often exceed the actual development costs of the software, making it prohibitive in some cases for innovation to flourish.
Virtualization considerably reduces configuration variability for applications running within a virtual machine. Out of the enormous range of possible devices and peripherals, companies providing virtual machine monitors (VMMs) support only a handful in their VMMs. Since the VMM abstracts the capabilities of the devices in the system and presents them to the virtual machines (VMs), a virtual environment significantly reduces the number of possible configurations. This in turn reduces the development effort and testing required to launch a new virtual application.
Developers will be able to test on most – if not all – of the possible virtualized platform configurations and complete more thorough test suites before deployment. Since their applications do not directly interact with the underlying physical hardware, they can leave compatibility testing with the real hardware to the VMM vendors, eliminating a whole layer of testing currently required.
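The test-matrix reduction can be illustrated with simple combinatorics. The device counts below are hypothetical and chosen only to show the shape of the argument, not to reflect real market figures:

```python
# Hypothetical illustration of how device abstraction shrinks the test matrix.
# The counts are made up; real numbers vary widely by device class.

physical = {"nic": 40, "storage_controller": 25, "gpu": 30}   # native hardware variants
virtual  = {"nic": 2,  "storage_controller": 2,  "gpu": 1}    # devices a typical VMM exposes

def configurations(devices):
    """Total combinations, assuming one choice per device class."""
    total = 1
    for count in devices.values():
        total *= count
    return total

print(configurations(physical))  # 30000 configurations on bare hardware
print(configurations(virtual))   # 4 configurations inside a VM
```

Even with made-up numbers, the multiplicative effect is clear: validating against the handful of virtual devices a VMM presents is a fraction of the effort of validating against native hardware.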
Tera-scale and Virtual Appliances
The market for virtual appliances is rapidly growing, and we expect Tera-scale will only enable richer possibilities for virtual appliances, whether they draw upon a large number of cores or just a few. Virtual appliances package an entire software product with application and OS -- such as a streamlined Linux, Windows PE (Preinstallation Environment), or other OS -- into a single, easily deployable VM. The benefits of reduced variability in the system carry over to appliances. With just a few VMM vendors in the market, who often agree on providing the same or very similar virtual devices, and a limited set of supported devices, appliances can be tested more thoroughly against the existing VMMs and any additional features those VMMs provide.
The compute-intensive appliances that we will see in the future will demand that the VMM make many more cores available. Since the trend in VMMs is toward smaller kernels, specially optimized for such large-scale platforms, the VMM will be highly scalable and capable of managing the many cores. That enables a platform ready for a scalable runtime environment. And, with fewer devices to support, it creates a rich environment for the developer, but with fewer challenges. Just as Java* removed many of the platform challenges for software deployment, developers can leave the virtual platform details – device virtualization, load balancing, support for live migration, varying functionality, etc. – to the VMM and focus on richer applications, deeper test suites, and more stable code. Virtualization reduces many of the challenges of many-core development of virtual appliances.
Tera-scale and Virtual Machine Monitors
Just as we are only seeing a glimpse of possible future Tera-scale usages and workloads and what they will evolve into, the same is true for virtual environment possibilities on Tera-scale platforms. With reduced variability, we expect that future VMMs will continue to support emerging usages, such as transparent migration, zero-downtime, seamless hardware updates, and portable infrastructures, in addition to the server consolidation and high-availability applications seen today.
Common platform configurations across an infrastructure are key to enabling transparent migration from one platform to another. Transparent migration allows IT departments to take advantage of unused resources, to relieve overloaded resources, or to create highly stable infrastructures. For example, if a scheduled workload requires many more cores than are available on the machine it is scheduled to run on, and there’s capacity on another many-core server, it can easily be migrated to the other machine without reconfiguration. Or, if a non-compute-intensive workload needs only a couple of cores, it can be migrated to a much smaller platform -- possibly even a mobile internet device (MID) to take on the road, providing 24/7 access to an integrated service infrastructure. These types of migrations make the infrastructure very portable, enabling highly flexible service-oriented architectures based on scalable virtual environments. Of course, these capabilities are not new, but with Tera-scale based infrastructures they become far more scalable thanks to many-core-targeted VMMs.
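The placement decision behind both migration examples above can be sketched as a best-fit search over hosts. The host names and core counts below are hypothetical; a real VMM scheduler would weigh memory, I/O, and policy as well as cores:

```python
# Sketch of the placement logic behind transparent migration.
# Hypothetical hosts and free-core counts; real schedulers consider far more.

def pick_host(workload_cores, hosts):
    """Best fit: the smallest host that still has enough free cores, or None."""
    candidates = [(free, name) for name, free in hosts.items()
                  if free >= workload_cores]
    return min(candidates)[1] if candidates else None

hosts = {"terascale-a": 8, "terascale-b": 96, "mid-device": 2}

print(pick_host(64, hosts))  # terascale-b: the compute-intensive job moves to the big server
print(pick_host(2, hosts))   # mid-device: a light workload fits on the mobile platform
```

The same policy covers both directions of the text's example: a many-core workload lands on the large server, while a two-core workload can be carried off on a MID.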
High scalability doesn’t mean leaving behind the need to continue supporting legacy applications. Tera-scale environments will enable both compute-intensive and legacy applications to run side by side. As we mentioned in our last article, containing the legacy OS and application in a VM maintains their availability. And wrapping I/O devices, such as network cards, in a virtual container makes them available to other VMs and simplifies device management: communication with such a device simply gets pushed off to the VM in which its driver runs.
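The driver-VM pattern described above can be sketched as message passing: one VM owns the physical device, and other VMs reach it through a queue the VMM forwards. All names here are hypothetical, and real VMMs use shared-memory rings and event channels rather than Python queues:

```python
# Sketch of the "driver VM" pattern: one VM wraps the physical NIC and
# services I/O requests submitted by other VMs. Names are hypothetical.

import queue

class DriverVM:
    """VM that owns the physical device and services requests from other VMs."""
    def __init__(self):
        self.requests = queue.Queue()

    def submit(self, guest, payload):
        # In a real VMM the hypervisor would forward this over a shared ring.
        self.requests.put((guest, payload))

    def service_one(self):
        guest, payload = self.requests.get()
        # Here the real driver would program the hardware; we just acknowledge.
        return (guest, f"sent {len(payload)} bytes")

nic_vm = DriverVM()
nic_vm.submit("legacy-guest", b"\x00" * 1500)   # legacy OS running in its own VM
print(nic_vm.service_one())                     # ('legacy-guest', 'sent 1500 bytes')
```

The point of the pattern is isolation: guests never touch the hardware directly, so the device driver's quirks stay contained in the one VM that runs it.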
The beauty of having such open software and hardware architectures, like Windows, Linux, and Intel Architecture, is the richness of products hardware and software developers can offer. The challenge for emerging software products is to create applications that are compatible with a massive range of platform configurations. Virtualized Tera-scale environments, with reduced hardware variability and highly scalable VMMs, will enable a rich new development environment for scalable virtual appliances and applications. Tera-scale and virtualization will also continue to support legacy systems. Indeed, legacy-based containers will simplify combining those legacy devices with more scalable VMs that demand many more cores.
About the Authors
Sebastian Schoenberg is a Staff Researcher in Intel's Corporate Technology Group where he works on bringing virtualization together with other core technologies such as many-core or security. His expertise is in the areas of virtualization, operating systems, high-performance micro-kernels, real-time, and security. Sebastian holds 10+ patents and has served on program committees of internationally recognized conferences. He was a guest researcher at the University of Cambridge, UK and received his PhD in computer science from University of Technology, Dresden in Germany.
Ken Strandberg writes technical articles, white papers, seminars, web-based training, marketing collateral, and interactive content for emerging technology companies, Fortune 100 enterprises, and multi-national corporations. Mr. Strandberg’s technology areas include Software, High-performance Computing and Clusters, Industrial Technologies, Design Automation, Networking, Medical Technologies, Semiconductor, and Telecom. Mr. Strandberg can be reached at firstname.lastname@example.org.