Dedicated core for real time control

I want one core 100% dedicated to an I/O process on a COTS PC motherboard (not an embedded system). That is, I want one core that is never disturbed by OS task switching or any other demands. Assume this core just sits in a single tight loop servicing a custom PCIe I/O card, outside of Windows. The other 3 cores run Win 7 normally. The Windows code on those 3 cores needs to communicate with the dedicated I/O core in real time, with low latency, by means of shared RAM.

Windows 7 must be able to run normally, including support for DX11 and GPU cards, USB, SATA, Ethernet, etc.

I assume the only way to do this is with some sort of hypervisor layer.

Where do I start?

Does Intel have, or know of, a "thin" hypervisor we could start with? And how can we get help and support when we hit brick walls?


Many hypervisors already place all I/O processing on core 0. If you install one and then look at core utilization graphs, you'll see core 0 showing more utilization than the other cores for exactly this reason. You might call this a "weak" solution to the problem you pose -- weak because the hypervisor has dedicated a single core to the task, but there is no hard guarantee that core 0 will not also be used for application tasks if the scheduler decides it needs it.

Alternatively, you can use virtualization to create a VM with Win 7, pin it to core 0, and run nothing on it except the I/O handling software you have in mind. For other application tasks, you can create a second VM and give it cores 1-3. Configuring VM core affinity is a standard feature on most hypervisors (e.g., VMware and Xen) -- part of resource allocation management.
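As a sketch of what that pinning looks like in practice, here is roughly how it reads in Xen's plain-text guest config (xl.cfg) -- guest names and memory sizes are made up for illustration:

```
# I/O guest: one vCPU, hard-pinned to physical CPU 0
name   = "io-guest"
vcpus  = 1
cpus   = "0"
memory = 512

# Win 7 guest (separate config file): keep it off core 0
# name  = "win7-guest"
# vcpus = 3
# cpus  = "1-3"
```

VMware exposes the same idea through per-VM scheduling-affinity settings in its UI rather than a text file; consult the vendor documentation for the exact knobs.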

David Ott

David - the primary requirement is 100% of the CPU cycles (not 99%) on the dedicated core, i.e., no interruption by a scheduler. "My" dedicated core needs to be completely independent of Windows and running 100% "my" code -- yet still able to communicate with a .NET application running under Win 7 on the remaining cores by means of shared RAM.

So how about this -

Pin DOS 6.22 to core 0 so core 0 is only running DOS -- no task switching. Then I run "myioapp.exe" from the DOS prompt, and core 0 is running 100% "my" code.

----
More general hypervisor question - We have never tried to use any hypervisor for anything yet. So we do not know anything about them.

I start with an i7 with 8 GB RAM and Win 7 already installed. I just took it out of the box, got it running, registered Windows, etc.
Now I want to install VMware or Xen, etc. Do I install the hypervisor and then reinstall the OS, or can I install the hypervisor "on top of" Windows?
How long does it take to do all this?
What is the learning-curve time required?
How many developers are using hypervisors in this way?
We are not interested in hypervisor for running servers.
We would like to use hypervisor to allow software development on one box to be used with windows, linux, bsd, and some sort of simple non task switching OS.

About your dedicated DOS-based I/O application, I believe what you have said is correct. Run DOS in its own VM and affinitize it to core 0. Run Win 7 in another VM and affinitize it to whatever other cores are available (but not core 0). Note, however, that you cannot share memory between VMs. VMMs are designed to maintain isolation between the memory of different VMs. Your choices are to use some kind of IPC between the two VMs (much as if they were on different hosts), or to use some kind of storage-based solution (e.g., write to an SSD that can be mounted by both VMs).

About your general questions, the idea is usually to install the VMM (hypervisor) first and then install VMs (with whatever OS you would like) on top of it. You should refer to the installation documentation for whichever VMM you are interested in. Since some VMMs make use of drivers and other kernel code, the relationship between the VMM and an associated host OS can be somewhat complex (e.g., Xen and Linux, Hyper-V and Windows). Explanations are beyond the scope of this forum -- please look to the VMM vendor's documentation for their product.

Using a single VMM to host VMs with different operating systems (Windows, Linux, BSD, etc.) is a supported use case. In fact, it is one of the use cases that motivated the development of virtualization from the early days, and the technology is widely used this way. (You'll love it!)

David Ott

