Virtualization Software Development

Performance Counter Uncore issues

I am trying to implement performance counting in my bare-metal hypervisor. In particular, I am interested in L3 cache misses on an Intel Core i7 (06_1EH) processor. There are two methods that should give me the same result: the non-architectural MEM_LOAD_RETIRED.L3_MISS performance event and the uncore UNC_L3_MISS.ANY performance event. I assign either of these events to IA32_PERFEVTSEL0 (setting the USR, OS, and EN bits) or to MSR_UNCORE_PERFEVTSEL0 (setting the EN bit), and then set the corresponding enable bits in the respective global performance control MSRs.

Cloud Computing Via Virtualization

Dear Friends,
Nowadays cloud computing leads the IT industry, and we all broadly know how the cloud works: first we set up one server that acts as the main node of the cluster (the master), and the others are supporting nodes. I have some experience with Infrastructure as a Service (IaaS) on Ubuntu Server.

In IaaS we have actual processors and chunks of hardware, but through virtualization we can create a node. If you have any experience with clusters, nodes, or virtual servers, please post here.

I am currently developing a cluster monitoring system.


MSR-Bitmaps

Having problems with the MSR bitmaps. I read the IA32_VMX_PROCBASED_CTLS MSR, set bit 28 ("use MSR bitmaps"), and stored the result in the VMCS's primary processor-based VM-execution controls field.
I then have the following structure in the VMCS header file:

struct MSR_BITMAP
{
	u64 MSR_READ_LO[128];   /* reads of MSRs  0x00000000 - 0x00001FFF */
	u64 MSR_READ_HI[128];   /* reads of MSRs  0xC0000000 - 0xC0001FFF */
	u64 MSR_WRITE_LO[128];  /* writes of MSRs 0x00000000 - 0x00001FFF */
	u64 MSR_WRITE_HI[128];  /* writes of MSRs 0xC0000000 - 0xC0001FFF */
} __attribute__ (( aligned (4096) ));

Using SYSCALL/SYSRET in a 64-bit guest OS (running as a VM) causes #UD

Hi

I have a 64-bit RTOS that uses SYSCALL/SYSRET for system calls. They work fine when the OS runs natively on an Intel machine.

However, when I use my VMM (64-bit, under development) and start the OS as a 64-bit guest VM, SYSCALL causes an invalid-opcode exception (#UD). EFER.SCE is set and no LOCK prefix is used. I have also tried SYSENTER/SYSEXIT in this OS, and they work just fine.

I'm not sure whether I have set up the VMCS correctly. Are there any hints or special settings in the VMCS regarding SYSCALL/SYSRET when running virtualized?

bug check 0x7f

Hi! I'm from Russia, and my English is not good. I have a problem with Windows 7: after VMLAUNCH I get bug check 0x7F (first parameter D). But if I put an INT 3 in my non-root code (after VMLAUNCH), then the guest OS works fine. This bug occurs only on Windows 7; on Windows XP the driver works fine without the INT 3. Thanks in advance.

Handling VM exits caused by exceptions

Hi all, I have a question: must I handle every VM exit caused by an exception in the same way (injecting the event back into the guest if the exception was caused by guest software, or resuming unconditionally if it was caused by the VMM)? Or could someone give me an example of different handling for different exception-caused exits (or some kind of explanation)?

Thanks in advance,
irp
