
Analyse performance gaps with QEMU's virtio-net implementation #369

@rbradford

Description

Non-vhost, non-multiqueue QEMU:

qemu-system-x86_64 -machine q35,accel=kvm,kernel_irqchip -cpu host -m 512 -bios ~/Downloads/OVMF.fd -device virtio-blk-pci,drive=root -drive if=none,id=root,file=/home/rob/workloads/clear-30800-kvm.img -nodefaults -device virtio-net-pci,netdev=mynet0 -netdev tap,id=mynet0,ifname=tap0,script=no,downscript=no,vhost=off -serial stdio

iperf3 result:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  26.3 GBytes  22.6 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  26.3 GBytes  22.6 Gbits/sec                  receiver

vs CH with virtio-net:

target/debug/cloud-hypervisor  --kernel ~/src/rust-hypervisor-firmware/target/target/release/hypervisor-fw --serial tty --console off --disk path=~/workloads/clear-30800-kvm.img --net tap=tap0,mac=12:34:56:78:90:ab --memory size=512M
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  9.25 GBytes  7.94 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  9.25 GBytes  7.94 Gbits/sec                  receiver

i.e. Cloud Hypervisor's performance is ~35% of QEMU's.
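As a quick sanity check, the ratio follows directly from the iperf3 bitrates quoted above (numbers copied from the two runs; nothing else is assumed):

```python
# Throughput figures from the iperf3 runs above.
qemu_gbits = 22.6   # QEMU virtio-net, vhost=off, single queue
ch_gbits = 7.94     # Cloud Hypervisor virtio-net

ratio = ch_gbits / qemu_gbits
print(f"Cloud Hypervisor reaches {ratio:.0%} of QEMU's throughput")
```

This prints a ratio of 35%, matching the figure above.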
