
Conversation

@russell-islam
Contributor

@russell-islam russell-islam commented May 23, 2020

This is the initial implementation of the proposal we made earlier. The purpose of this crate is to abstract the hypervisor support so that Microsoft Hyper-V can be included. We will have a hypervisor trait that is implemented separately by KVM and Hyper-V. Based on the hypervisor, we will create the VmFd and VcpuFd objects, which are also generic traits. These two traits will be implemented by KVM and Hyper-V: for KVM there will be two implementations, KvmVmFd and KvmVcpuFd, and the same for Hyper-V.

Sample code:

    hypervisor = KvmHyperVisor::new()
    vm = hypervisor.create_vm()
    vcpu = vm.create_vcpu(id)

    vm.get/set()
    vcpu.get/set()
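The proposed structure could be sketched roughly like this. This is a minimal illustration with hypothetical names and stub implementations; the PR's real traits wrap kvm-ioctls file descriptors and return Results:

```rust
use std::sync::Arc;

// Hypothetical sketch of the trait hierarchy described above, not the
// PR's actual definitions.
pub trait Vcpu {
    fn id(&self) -> u8;
}

pub trait Vm {
    fn create_vcpu(&self, id: u8) -> Arc<dyn Vcpu>;
}

pub trait Hypervisor {
    fn create_vm(&self) -> Arc<dyn Vm>;
}

// KVM-flavored stand-ins; a Hyper-V port would add its own structs
// implementing the same traits.
struct KvmVcpu {
    id: u8,
}
impl Vcpu for KvmVcpu {
    fn id(&self) -> u8 {
        self.id
    }
}

struct KvmVm;
impl Vm for KvmVm {
    fn create_vcpu(&self, id: u8) -> Arc<dyn Vcpu> {
        Arc::new(KvmVcpu { id })
    }
}

struct KvmHypervisor;
impl Hypervisor for KvmHypervisor {
    fn create_vm(&self) -> Arc<dyn Vm> {
        Arc::new(KvmVm)
    }
}

fn main() {
    // The VMM only ever sees trait objects, never KVM types directly.
    let hypervisor: Arc<dyn Hypervisor> = Arc::new(KvmHypervisor);
    let vm = hypervisor.create_vm();
    let vcpu = vm.create_vcpu(0);
    assert_eq!(vcpu.id(), 0);
    println!("created vcpu {} through the abstraction", vcpu.id());
}
```

The point of the trait-object indirection is that the vmm crate can be compiled against either backend without naming KVM or Hyper-V types.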

For the payload, we have a binding inside the hypervisor crate that abstracts between Hyper-V and KVM.

The plan is to move most of the KVM references into the hypervisor crate. The hypervisor will handle the payload based on KVM or Hyper-V. This is the initial proposal. For the payload I have followed the previous commit by CrowdStrike.

@russell-islam
Contributor Author

russell-islam commented May 23, 2020

Looks like the build on aarch64 fails. Is there any way to build an aarch64 binary on an x86-64 machine?

@michael2012z
Member

@russell-islam , you can cross-build for aarch64 on x86_64; that's what we do with the two "Cross Build" checks now.

To try cross-building locally, you need to install the target aarch64-unknown-linux-gnu or aarch64-unknown-linux-musl, and a toolchain (to link).
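Putting those steps together, a local cross-build might look like this (an illustrative sketch assuming a Debian/Ubuntu host; the project's CI workflow is the authoritative reference):

```shell
# Add the cross-compilation target to the Rust toolchain
rustup target add aarch64-unknown-linux-gnu

# Install a cross toolchain so cargo has a linker for aarch64 binaries
sudo apt-get install -y gcc-aarch64-linux-gnu

# Tell cargo which linker to use for this target
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc

cargo build --target aarch64-unknown-linux-gnu
```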

@michael2012z
Member

michael2012z commented May 25, 2020

About the build error on aarch64: it was caused by an issue in cloud-hypervisor/kvm-bindings on aarch64; see this PR for the complete story: cloud-hypervisor/kvm-bindings#1.

As a workaround, you can modify the Cargo.toml (which uses cloud-hypervisor/kvm-bindings with the with-serde feature) when it comes to the aarch64 build; here is an example: https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/.github/workflows/cross-build.yaml#L26

Changing it to run: sed -i 's/"with-serde",\ //g' vmm/Cargo.toml && sed -i 's/"with-serde",\ //g' hypervisor/Cargo.toml should fix the error.

@yisun-git
Contributor

Hi,

We proposed the Hypervisor crate in cloud-hypervisor and rust-vmm before:
rust-vmm/community#50
https://github.intel.com/cloud-hypervisor/community/issues/2

Draft code was provided at:
https://github.com/yisun-git/Hypervisor_on_FC/tree/dbg_yisun1_hypervisor

But the community had differing opinions on this, so we stopped the work.

I would like to understand the purpose of this PR. Have you addressed the comments in the above issues?

Thanks,
Sun Yi

@russell-islam
Contributor Author

> @russell-islam , you can cross-build for aarch64 on x86_64; that's what we do with the two "Cross Build" checks now.
>
> To try cross-building locally, you need to install the target aarch64-unknown-linux-gnu or aarch64-unknown-linux-musl, and a toolchain (to link).

Is this documented somewhere with detailed steps?


@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch from 0893ea7 to eeab12b Compare May 27, 2020 01:23
@russell-islam
Contributor Author

russell-islam commented May 27, 2020

> Hi,
>
> We proposed the Hypervisor crate in cloud-hypervisor and rust-vmm before:
> rust-vmm/community#50
> https://github.intel.com/cloud-hypervisor/community/issues/2
>
> Draft code was provided at:
> https://github.com/yisun-git/Hypervisor_on_FC/tree/dbg_yisun1_hypervisor
>
> But the community had differing opinions on this, so we stopped the work.
>
> I would like to understand the purpose of this PR. Have you addressed the comments in the above issues?
>
> Thanks,
> Sun Yi

@yisun-git Thanks for the information. The purpose of this PR is to get initial comments on the Hypervisor crate that will eventually support Microsoft Hyper-V. I would love to seek your contribution too.

@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch from eeab12b to e3129de Compare May 27, 2020 01:25
@russell-islam
Contributor Author

The aarch64 build is fine now, but I am getting the following error. Looks like this is a known issue even with the latest master:
error[E0277]: the trait bound kvm_bindings::x86::bindings_v4_20_0::kvm_sregs: serde::Serialize is not satisfied

@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch from e3129de to f9be7c5 Compare May 27, 2020 03:32
@michael2012z
Member

@russell-islam , see michael2012z#28; pay attention to the last commit.

@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch from f9be7c5 to d0626cb Compare May 27, 2020 03:58
@russell-islam
Contributor Author

> @russell-islam , see michael2012z#28; pay attention to the last commit.

Thank you. I misunderstood the fix; now I get it.

@russell-islam russell-islam changed the title [RFC] Initial proposal of Hypervisor crate (abstraction between KVM and HyperV) [RFC] Initial proposal of Hypervisor crate (abstraction between KVM and Microsoft HyperV) May 27, 2020
@russell-islam russell-islam changed the title [RFC] Initial proposal of Hypervisor crate (abstraction between KVM and Microsoft HyperV) [RFC] Initial proposal of Hypervisor crate (abstraction between KVM and Microsoft Hyper-V) May 27, 2020
Member

@sameo sameo left a comment

Thanks @russell-islam for the PR. Even though the overall design looks good, it needs a bit more work, and I added some comments accordingly.

On a more general note, we need to rework the commit structure as well. The current commit history shows how you iteratively improved the code into something that builds and works. What we need instead is a commit history that shows the structure of the work and how each piece of the puzzle fits together. For example:

  • Commit #1: Add the Vcpu trait
  • Commit #2: Add the Hypervisor trait
  • Commit #3: Add the KVM bindings and implementation for both traits
  • Commit #4: Port the current code to use both traits and build under the kvm feature.

This will make it clearer and easier to review as well. Finally, for each of those commits:

  1. The commit subject should be formatted according to what's documented in https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/CONTRIBUTING.md
  2. Those are fundamental changes, so we expect the commit messages to describe the rationale behind the code changes and how they are implemented.

@russell-islam
Contributor Author

Thanks a lot @sameo . I will address your comments.

@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch 2 times, most recently from a3c1f56 to d7229b5 Compare June 4, 2020 00:39
Member

@sboeuf sboeuf left a comment

This is a first pass on this PR; I'll dig into more details tomorrow or next week.
A few nits about your commit messages:

  • Try to keep the commit message titles short (referring to only one component before the ":" helps)
  • Some commit message bodies have indentation; make sure you remove it
  • Don't hesitate to be verbose in the commit message body :)

clap = { version = "2.33.1", features=["wrap_help"] }
epoll = ">=4.0.1"
futures = { version = "0.3.5", features = ["thread-pool"] }
hypervisor = { path = "hypervisor" }
Member

Could you please add an extra commit introducing the new hypervisor crate without anything inside?
This commit should be the first of the entire patchset, and it will be the opportunity to describe the goal of this new crate in the commit message.

Member

This commit should be the first of the entire patchset, and it will be the opportunity to describe the goal of this new crate in the commit message.

@russell-islam initially introduced the hypervisor trait as the first commit, but since it depends on the cpu and vm traits, it had a few placeholders that added some noise when actually introducing the cpu and vm traits further down the PR. I suggested introducing the traits in dependency order, i.e. CPU -> VM -> Hypervisor.

Contributor Author

@sameo I guess I can add one more commit just to add the skeleton of the hypervisor crate; vcpu would be the next commit.

/// exit_reason => panic!("unexpected exit reason: {:?}", exit_reason),
/// }
/// }
/// }
Member

I'm wondering if all these examples/documentation are needed since they've been ported from the kvm-ioctls crate. That feels like a lot of duplicates.

Contributor Author

Right. But the example has some changes reflecting the current hypervisor implementations.

Member

Right, but could we keep it shorter since the core content is already part of kvm-ioctls?

Contributor Author

Will do, thanks.

@russell-islam
Contributor Author

I will make the changes once I get some reviews, to avoid back-and-forth updates.


// Copyright © 2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
Member

There should not be 2 SPDX ids. Because the CrowdStrike code was dual-licensed, we can choose to keep this crate dual-licensed or make it Apache-2.0 only. The former would allow reusing that code with less permissive licenses, so that would be my preference.

Contributor Author

@russell-islam russell-islam Jun 5, 2020

// Copyright © 2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
// Copyright © 2020, Microsoft Corporation
//
// Copyright 2018-2019 CrowdStrike, Inc.
//

Member

@russell-islam Are you OK with dual licensing (Apache or MIT) or do you want to keep only the Apache license?

Contributor Author

Guru is in talks within Microsoft. I will update you once I hear something.
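For reference, SPDX supports expressing a dual license as a single expression, so a dual-licensed header (illustrative wording, not necessarily what was finally merged) could look like:

```rust
// Copyright © 2020, Microsoft Corporation
// Copyright 2018-2019 CrowdStrike, Inc.
//
// SPDX-License-Identifier: Apache-2.0 OR MIT
```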

pub use {
    kvm_bindings::kvm_lapic_state as LapicState,
    kvm_bindings::kvm_xcrs as ExtendedControlRegisters, kvm_bindings::kvm_xsave as Xsave,
    kvm_bindings::CpuId, kvm_bindings::Msrs as MsrEntries, kvm_ioctls::Cap, kvm_ioctls::Kvm,
Member

Just a nit, but I find it more readable to have one line per export:

pub use {
    kvm_bindings::kvm_lapic_state as LapicState,
    kvm_bindings::kvm_xcrs as ExtendedControlRegisters,
    kvm_bindings::kvm_xsave as Xsave,
    kvm_bindings::CpuId,
    kvm_bindings::Msrs as MsrEntries,
    kvm_ioctls::Cap,
    kvm_ioctls::Kvm,
};

Contributor Author

@russell-islam russell-islam Jun 5, 2020

I need to remove some VS Code auto-formatting; this file was auto-saved. :(

Contributor Author

This is not accepted by the cargo fmt check, so I had to revert it. :(

pub use {
    kvm_bindings::kvm_lapic_state as LapicState,
    kvm_bindings::kvm_xcrs as ExtendedControlRegisters, kvm_bindings::kvm_xsave as Xsave,
    kvm_bindings::CpuId, kvm_bindings::Msrs as MsrEntries, kvm_ioctls::Cap, kvm_ioctls::Kvm,
Member

I think Cap and Kvm are not x86 specific. You may want to export them from the top level lib.rs file.

/// Registers an event to be signaled whenever a certain address is written to.
fn register_ioevent(
    &self,
    fd: &EventFd,
Member

Just a note here, and I'm not requesting any change: this is Linux-specific, and the Hyper-V implementation will have to change at least this method to something more generic.
My point is: this PR is a first step, and we need to acknowledge that these traits may change as we add support for other hypervisors later on.

Contributor Author

Yes, I totally agree. We will have to adjust the traits once hyperv-ioctls is ready.

use crate::RegionType;
use kvm_bindings::CpuId;
use kvm_ioctls::*;

Member

Unneeded change.

use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt};

-use kvm_bindings::kvm_lapic_state;
+use hypervisor::kvm_lapic_state;
Member

This should be use hypervisor::x86_64::LapicState and then replace all kvm_lapic_state instances with LapicState
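The re-export pattern being suggested can be sketched as follows. This is an illustrative toy, with a stub module standing in for the real kvm-bindings crate:

```rust
// Illustrative sketch of the re-export pattern: the hypervisor crate
// exposes a hypervisor-agnostic name (LapicState) that aliases the
// underlying KVM type, so callers stop naming kvm_bindings directly.
// `kvm_stub` stands in for the real kvm-bindings crate.
#[allow(non_camel_case_types)]
mod kvm_stub {
    #[derive(Default)]
    pub struct kvm_lapic_state {
        pub regs: [i8; 4],
    }
}

pub mod x86_64 {
    // Consumers write `hypervisor::x86_64::LapicState`.
    pub use super::kvm_stub::kvm_lapic_state as LapicState;
}

fn main() {
    let lapic = x86_64::LapicState::default();
    println!("LapicState has {} registers", lapic.regs.len());
}
```

Swapping in a Hyper-V backend then only means re-pointing the alias, not touching every call site.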

use crate::InitramfsConfig;
use crate::RegionType;
use kvm_bindings::CpuId;
use kvm_ioctls::*;
Member

use hypervisor::CpuId would remove the need for some of the changes below.


// For GDT details see arch/x86/include/asm/segment.h
-use kvm_bindings::kvm_segment;
+use hypervisor::kvm_segment;
Member

Same here, you want to use hypervisor::x86_64::SegmentRegister

#[cfg(target_arch = "x86_64")]
pub use kvm_bindings::{kvm_lapic_state, kvm_segment, kvm_userspace_memory_region};
pub use kvm_ioctls;
pub use kvm_ioctls::VcpuExit;
Member

This should be exported through the kvm module only.

@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch from d7229b5 to 361fa67 Compare June 8, 2020 04:31
@russell-islam russell-islam changed the title [RFC] Initial proposal of Hypervisor crate (abstraction between KVM and Microsoft Hyper-V) Implement initial version of Hypervisor crate (abstraction between KVM and Microsoft Hyper-V) Jun 8, 2020
@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch 2 times, most recently from fc82da7 to ab81277 Compare June 8, 2020 19:31
@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch 2 times, most recently from 224300d to c5fe9da Compare June 18, 2020 20:18
@likebreath
Member

@russell-islam I played with your PR using GDB, taking the working master branch as a reference. It looks like the VMM (cloud-hypervisor) is not hanging: the vmm/http-server/vcpu/virtio threads are working fine. What is hanging seems to be the guest, which is stuck at an early stage of booting. I also see the following errors in the clh debug log, which are MMIO access errors. This aligns with our hypothesis that the problem is tied to MMIO. I will take a closer look tomorrow. If you find anything interesting, please share/ask.

  cloud-hypervisor: 6.049099ms: INFO:vmm/src/memory_manager.rs:732 -- Created userspace mapping: 0 -> 7fffa8000000 40000000
  cloud-hypervisor: 683.317154ms: WARN:vm-virtio/src/transport/mmio.rs:382 -- unknown virtio mmio register write: 0x50
  cloud-hypervisor: 683.379447ms: WARN:vm-virtio/src/transport/mmio.rs:382 -- unknown virtio mmio register write: 0x50
  cloud-hypervisor: 683.409303ms: WARN:vm-virtio/src/transport/mmio.rs:382 -- unknown virtio mmio register write: 0x50
...

@russell-islam
Contributor Author

> unknown virtio mmio register write

Thanks @likebreath. Looks like that offset is actually the guest notifying a queue index. From the virtio spec:

QueueNotify (0x050, W): Queue notifier. Writing a queue index to this register notifies the device that there are new buffers to process in the queue.
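A minimal sketch of what that dispatch looks like in a virtio-mmio write handler (offsets per the virtio spec; this is illustrative, not the cloud-hypervisor transport code):

```rust
// QueueNotify offset in the virtio-mmio register layout.
const QUEUE_NOTIFY: u64 = 0x50;

// Toy handler: a write at QUEUE_NOTIFY carries the queue index the
// guest wants serviced; anything unrecognized is logged as unknown.
fn mmio_write(offset: u64, data: &[u8]) -> String {
    match offset {
        QUEUE_NOTIFY => {
            let queue_index = u32::from_le_bytes([data[0], data[1], data[2], data[3]]);
            format!("notify queue {}", queue_index)
        }
        _ => format!("unknown virtio mmio register write: {:#x}", offset),
    }
}

fn main() {
    // A write of queue index 0 at offset 0x50 is a queue notification,
    // not an unknown register.
    assert_eq!(mmio_write(0x50, &[0, 0, 0, 0]), "notify queue 0");
    println!("{}", mmio_write(0x50, &[1, 0, 0, 0]));
}
```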

@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch 2 times, most recently from 4316afa to c5fe9da Compare June 18, 2020 23:20
@sameo
Member

sameo commented Jun 18, 2020

@russell-islam You were registering the wrong io events and not getting guest notifications. That:
sameo@c07ff47

should fix it.

@likebreath
Member

@russell-islam I tried the patch from @sameo, and now clh with the MMIO feature boots correctly. :)

I think Sam's branch is here https://github.com/sameo/cloud-hypervisor-1/commits/topic/hypervisor and the updated patch I tried is here: sameo@cad17ff.

@sameo
Member

sameo commented Jun 19, 2020

> @russell-islam I tried the patch from @sameo, and now clh with the MMIO feature boots correctly. :)
>
> I think Sam's branch is here https://github.com/sameo/cloud-hypervisor-1/commits/topic/hypervisor and the updated patch I tried is here: sameo@cad17ff.

@likebreath is correct: sameo@cad17ff is a correct implementation, although there may be a more idiomatic way to do it.
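The gist of the fix, as described above, is registering one ioevent per queue with a datamatch on the queue index the guest writes at QueueNotify. A toy model of that routing (names are illustrative and do not mirror the kvm-ioctls API):

```rust
use std::collections::HashMap;

// Toy model of ioeventfd registration with a datamatch value: a guest
// write signals only the event registered for that exact (addr, data)
// pair, so each queue gets its own wakeup.
#[derive(Default)]
struct IoEventTable {
    // (guest address, datamatch) -> label of the eventfd to signal
    events: HashMap<(u64, u32), String>,
}

impl IoEventTable {
    fn register(&mut self, addr: u64, datamatch: u32, label: &str) {
        self.events.insert((addr, datamatch), label.to_string());
    }

    // A guest write only matches an event whose addr AND datamatch agree.
    fn guest_write(&self, addr: u64, data: u32) -> Option<&str> {
        self.events.get(&(addr, data)).map(String::as_str)
    }
}

fn main() {
    let mut table = IoEventTable::default();
    let queue_notify = 0xd000_0050u64; // MMIO base + QueueNotify (0x50)
    table.register(queue_notify, 0, "queue-0 evt");
    table.register(queue_notify, 1, "queue-1 evt");

    // Writing queue index 1 wakes queue 1's eventfd, and nothing else.
    assert_eq!(table.guest_write(queue_notify, 1), Some("queue-1 evt"));
    assert_eq!(table.guest_write(queue_notify, 7), None);
    println!("queue notifications routed correctly");
}
```

Registering events at the wrong address or without the per-queue datamatch is exactly the failure mode that left the guest hung: notifications never reached the device threads.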

The purpose of this trait is to add support for other hypervisors than
KVM, like e.g. Microsoft Hyper-V.

Further commits will define additional hypervisor-related traits like
Vcpu and Vm. Each of the supported hypervisors will need to implement
all the traits defined in the hypervisor crate.

Signed-off-by: Muminul Islam <muislam@microsoft.com>
@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch from c5fe9da to 8823fcb Compare June 19, 2020 19:14
@likebreath
Member

Great, the CI is now passing. Only the aarch64 w/ musl build is still failing.

This Vcpu trait should be implemented by each underlying hypervisor.

The previously created hypervisor object should create the VM based on
the already selected hypervisor, and the Vm object should create this
vcpu object based on the same hypervisor. Each of these objects should
be referenced as a trait object, i.e. <dyn Vcpu>.

Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This VM trait should be implemented by each underlying hypervisor.

The previously created hypervisor object should create the VM based on
the already selected hypervisor. This is just the trait definition; for
each supported hypervisor we need to implement the trait. Later we will
implement this trait for KVM and then Microsoft Hyper-V.

Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
As the only hypervisor that Cloud-Hypervisor supports is KVM, the
Hypervisor trait accommodates the upcoming KVM implementation.

This trait will be instantiated at build time through hypervisor
specific features, i.e. it is not aiming at run-time selection of
hypervisors for Cloud-Hypervisor.

Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
For each of the traits we define KVM-related structures and add the
trait implementations to the structs. For more information please see
the kvm-ioctls and kvm-bindings crates.

This is a standalone implementation that does not include the switch of
the Cloud-Hypervisor vmm and arch crates to it.

Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Implement the vCPU state getter and setter separately from the initial
KVM Hypervisor trait implementation, mostly for readability purposes.

Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Start moving the vmm, arch and pci crates to being hypervisor agnostic
by using the hypervisor trait and abstractions. This is not a complete
switch and there are still some remaining KVM dependencies.

Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
@russell-islam russell-islam force-pushed the muislam/untangle-part1 branch from 8823fcb to a8b4a15 Compare June 19, 2020 21:41
@russell-islam
Contributor Author

> Great, the CI is now passing. Only the aarch64 w/ musl build is still failing.

@likebreath thanks for keeping an eye on it. The integration tests passed. Yay!!

@sameo
Member

sameo commented Jun 20, 2020

@sboeuf Are you happy with the changes? Please have a look. As a first step towards a complete hypervisor abstraction, this PR is ready to be reviewed imho.

Member

@sboeuf sboeuf left a comment

The PR looks good!
I have two remaining comments (they can be addressed in a follow-up PR if needed).
Both the list of expected caps and the list of MSRs are specific to Cloud-Hypervisor, which is why I think they should live in the vmm crate instead of the hypervisor one (which is supposed to be pushed to rust-vmm eventually).

Comment on lines +29 to +39
pub fn check_required_kvm_extensions(kvm: &Kvm) -> KvmResult<()> {
    if !kvm.check_extension(Cap::SignalMsi) {
        return Err(KvmError::CapabilityMissing(Cap::SignalMsi));
    }
    if !kvm.check_extension(Cap::TscDeadlineTimer) {
        return Err(KvmError::CapabilityMissing(Cap::TscDeadlineTimer));
    }
    if !kvm.check_extension(Cap::SplitIrqchip) {
        return Err(KvmError::CapabilityMissing(Cap::SplitIrqchip));
    }
    Ok(())
Member

This is very Cloud-Hypervisor specific. The requirement for having these KVM caps supported is not something all VMMs might need, so it'd be more appropriate to let the VMM check them.

Contributor Author

In that case we need a way to return this kvm object reference back to the vmm.

Comment on lines +55 to +69
MsrEntries::from_entries(&[
    msr!(msr_index::MSR_IA32_SYSENTER_CS),
    msr!(msr_index::MSR_IA32_SYSENTER_ESP),
    msr!(msr_index::MSR_IA32_SYSENTER_EIP),
    msr!(msr_index::MSR_STAR),
    msr!(msr_index::MSR_CSTAR),
    msr!(msr_index::MSR_LSTAR),
    msr!(msr_index::MSR_KERNEL_GS_BASE),
    msr!(msr_index::MSR_SYSCALL_MASK),
    msr!(msr_index::MSR_IA32_TSC),
    msr_data!(
        msr_index::MSR_IA32_MISC_ENABLE,
        msr_index::MSR_IA32_MISC_ENABLE_FAST_STRING as u64
    ),
    msr_data!(msr_index::MSR_MTRRdefType, MTRR_ENABLE | MTRR_MEM_TYPE_WB),
Member

Same idea as with the caps here: I'd like to mention that this list of MSRs is specific to Cloud-Hypervisor.

@sameo sameo merged commit e4dee57 into cloud-hypervisor:master Jun 22, 2020
MrXinWang added a commit to MrXinWang/cloud-hypervisor that referenced this pull request Jun 28, 2020
cloud-hypervisor#1225
introduces a hypervisor abstraction crate, which breaks some of
the unit test cases on AArch64. This commit fixes related test
cases.

Signed-off-by: Henry Wang <henry.wang@arm.com>
rbradford pushed a commit that referenced this pull request Jun 29, 2020
#1225
introduces a hypervisor abstraction crate, which breaks some of
the unit test cases on AArch64. This commit fixes related test
cases.

Signed-off-by: Henry Wang <henry.wang@arm.com>
@bonzini
Contributor

bonzini commented Sep 22, 2020

Out of curiosity, will the Hypervisor implementation for Hyper-V support Linux as well, or will it be only for Windows (similar to the WHPX support in QEMU)?

@sameo
Member

sameo commented Sep 22, 2020

> Out of curiosity, will the Hypervisor implementation for Hyper-V support Linux as well, or will it be only for Windows (similar to the WHPX support in QEMU)?

It will support KVM and Hyper-V on Linux.

@vsrivatsa1

Hello, what's the status of this work being merged into rust-vmm?
