* qemu process consumes 100% host CPU after reverting snapshot
@ 2024-03-27 23:32 Chun Feng Wu
From: Chun Feng Wu @ 2024-03-27 23:32 UTC (permalink / raw)
  To: qemu-devel


Hi,

I am testing a throttle filter chain (multiple throttle groups on one
disk) with the following steps:
1. start the guest VM (chained throttle filters applied to the disk per
https://github.com/qemu/qemu/blob/master/docs/throttle.txt)
2. take a snapshot
3. revert the snapshot
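
(For reference, the chained setup from that doc boils down to stacking
throttle filter nodes on top of the format node. A minimal sketch with one
disk and two of the four groups; node names, sizes, and the disk path here
are illustrative, the full command line is below:)

  qemu-system-x86_64 -accel kvm -m 1024 -display none -monitor stdio \
    -object throttle-group,id=limit0,x-iops-total=200 \
    -object throttle-group,id=limit012,x-iops-total=400 \
    -blockdev driver=file,filename=disk.qcow2,node-name=file0 \
    -blockdev driver=qcow2,file=file0,node-name=fmt0 \
    -blockdev driver=throttle,throttle-group=limit0,file=fmt0,node-name=thr0 \
    -blockdev driver=throttle,throttle-group=limit012,file=thr0,node-name=thr1 \
    -device virtio-blk-pci,drive=thr1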

After step 3, I noticed that the QEMU process on the host consumes 100%
CPU, and once I log in to the guest VM, it responds to my commands very
slowly or not at all (it worked well before reverting).

    PID USER      PR  NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
  65455 root      20   0 9659924 891328 20132 R 100.3  5.4  29:39.93 qemu-system-x86

Does anybody know why this issue happens? Is it a bug, or am I
misunderstanding something?

my command line:

qemu-system-x86_64 \
  -name ubuntu-20.04-vm,debug-threads=on \
  -machine pc-i440fx-jammy,usb=off,dump-guest-core=off \
  -accel kvm \
  -cpu Broadwell-IBRS,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,md-clear=on,stibp=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on,tsx-ctrl=off,hle=off,rtm=off \
  -m 8192 \
  -overcommit mem-lock=off \
  -smp 2,sockets=1,dies=1,cores=1,threads=2 \
  -numa node,nodeid=0,cpus=0-1,memdev=ram \
  -object memory-backend-ram,id=ram,size=8192M \
  -uuid d2d68f5d-bff0-4167-bbc3-643e3566b8fb \
  -display none \
  -nodefaults \
  -monitor stdio \
  -rtc base=utc,driftfix=slew \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object '{"qom-type":"throttle-group","id":"limit0","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":200,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":200,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' \
  -object '{"qom-type":"throttle-group","id":"limit1","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":250,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":250,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' \
  -object '{"qom-type":"throttle-group","id":"limit2","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":300,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":300,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' \
  -object '{"qom-type":"throttle-group","id":"limit012","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":400,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":400,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' \
  -blockdev '{"driver":"file","filename":"/virt/images/focal-server-cloudimg-amd64.img","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-4-format","read-only":false,"driver":"qcow2","file":"libvirt-4-storage","backing":null}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-4-format,id=virtio-disk0,bootindex=1 \
  -blockdev '{"driver":"file","filename":"/virt/disks/vm1_disk_1.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"driver":"qcow2","file":"libvirt-3-storage","backing":null}' \
  -blockdev '{"driver":"throttle","node-name":"libvirt-5-filter","throttle-group":"limit0","file":"libvirt-3-format"}' \
  -blockdev '{"driver":"throttle","node-name":"libvirt-6-filter","throttle-group":"limit012","file":"libvirt-5-filter"}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-6-filter,id=virtio-disk1 \
  -blockdev '{"driver":"file","filename":"/virt/disks/vm1_disk_2.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}' \
  -blockdev '{"driver":"throttle","node-name":"libvirt-3-filter","throttle-group":"limit1","file":"libvirt-2-format"}' \
  -blockdev '{"driver":"throttle","node-name":"libvirt-4-filter","throttle-group":"limit012","file":"libvirt-3-filter"}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x6,drive=libvirt-4-filter,id=virtio-disk2 \
  -blockdev '{"driver":"file","filename":"/virt/disks/vm1_disk_3.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
  -blockdev '{"driver":"throttle","node-name":"libvirt-1-filter","throttle-group":"limit2","file":"libvirt-1-format"}' \
  -blockdev '{"driver":"throttle","node-name":"libvirt-2-filter","throttle-group":"limit012","file":"libvirt-1-filter"}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x7,drive=libvirt-2-filter,id=virtio-disk3 \
  -netdev user,id=user0,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=user0,id=net0,mac=52:54:00:12:34:56,bus=pci.0,addr=0x3 \
  -chardev pty,id=charserial0 \
  -device isa-serial,chardev=charserial0,id=serial0,index=0 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on


taking and reverting the snapshot (QEMU monitor):

(qemu) info status
VM status: running
(qemu) savevm snapshot1
(qemu) loadvm snapshot1
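
(The HMP commands above map onto the QMP snapshot jobs added in QEMU 6.0;
a sketch for completeness — the job IDs are arbitrary, and the vmstate and
devices entries are the format node names from the command line above:)

  {"execute": "snapshot-save",
   "arguments": {"job-id": "save0", "tag": "snapshot1",
                 "vmstate": "libvirt-4-format",
                 "devices": ["libvirt-4-format", "libvirt-3-format",
                             "libvirt-2-format", "libvirt-1-format"]}}
  {"execute": "snapshot-load",
   "arguments": {"job-id": "load0", "tag": "snapshot1",
                 "vmstate": "libvirt-4-format",
                 "devices": ["libvirt-4-format", "libvirt-3-format",
                             "libvirt-2-format", "libvirt-1-format"]}}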


my env:
:~# qemu-system-x86_64 --version
QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.17)
Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers

:~# lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:    22.04
Codename:    jammy

-- 
Thanks and Regards,

Wu



* Re: qemu process consumes 100% host CPU after reverting snapshot
From: Chun Feng Wu @ 2024-03-28 10:50 UTC (permalink / raw)
  To: qemu-devel


BTW, if I download QEMU 8.0 (https://download.qemu.org/qemu-8.0.0.tar.xz),
compile and install it on my Ubuntu 22.04, and launch the VM with the same
command (just updating -machine to "pc"), the host CPU usage is not high,
so it seems to be a bug in QEMU 6.
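
(The usual tarball build steps, as a sketch — the configure flags here are
a minimal, illustrative set:)

  tar xf qemu-8.0.0.tar.xz && cd qemu-8.0.0
  ./configure --target-list=x86_64-softmmu --enable-kvm
  make -j"$(nproc)"
  sudo make install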

Also, I have another question about disk iotune / throttle groups: after
reverting a snapshot, if I log in to the VM and run fio, I/O performance
drops a lot in both QEMU 6 and QEMU 8. Does anyone know the reason? Any
explanation would be appreciated!
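
(The kind of fio run I mean, as a sketch — the device path and parameters
are illustrative:)

  fio --name=randread --filename=/dev/vdb --direct=1 --rw=randread \
      --bs=4k --iodepth=8 --ioengine=libaio --runtime=30 --time_based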



On 2024/3/28 07:32, Chun Feng Wu wrote:
>
> Hi,
>
> I am testing a throttle filter chain (multiple throttle groups on one
> disk) with the following steps:
> 1. start the guest VM (chained throttle filters applied to the disk per
> https://github.com/qemu/qemu/blob/master/docs/throttle.txt)
> 2. take a snapshot
> 3. revert the snapshot
>
> After step 3, I noticed that the QEMU process on the host consumes 100%
> CPU, and once I log in to the guest VM, it responds to my commands very
> slowly or not at all (it worked well before reverting).
>
> [...]

-- 
Thanks and Regards,

Wu



* Re: qemu process consumes 100% host CPU after reverting snapshot
From: Dongli Zhang @ 2024-04-01  5:05 UTC (permalink / raw)
  To: Chun Feng Wu, qemu-devel

To share my test result: I do NOT reproduce the issue with the command
line below and QEMU 8.2.

However, I can reproduce it with QEMU 6.2, where the main loop consumes
100% CPU.
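
(One way to see which thread is spinning — a sketch, assuming perf is
installed on the host and only one QEMU process is running:)

  # list QEMU's threads and spot the busy one (the main loop in this case)
  top -H -p "$(pgrep -f qemu-system-x86_64)"
  # then sample where that thread spins
  perf top -p "$(pgrep -f qemu-system-x86_64)"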

qemu-system-x86_64 \
  --enable-kvm -cpu host -smp 2 -m 8G \
  -object throttle-group,id=limit0,x-iops-total=200,x-iops-total-max=200,x-bps-total-max-length=1,x-bps-read-max-length=1,x-bps-write-max-length=1,x-iops-total-max-length=1,x-iops-read-max-length=1,x-iops-write-max-length=1 \
  -object throttle-group,id=limit1,x-iops-total=250,x-iops-total-max=250,x-bps-total-max-length=1,x-bps-read-max-length=1,x-bps-write-max-length=1,x-iops-total-max-length=1,x-iops-read-max-length=1,x-iops-write-max-length=1 \
  -object throttle-group,id=limit2,x-iops-total=300,x-iops-total-max=300,x-bps-total-max-length=1,x-bps-read-max-length=1,x-bps-write-max-length=1,x-iops-total-max-length=1,x-iops-read-max-length=1,x-iops-write-max-length=1 \
  -object throttle-group,id=limit012,x-iops-total=400,x-iops-total-max=400,x-bps-total-max-length=1,x-bps-read-max-length=1,x-bps-write-max-length=1,x-iops-total-max-length=1,x-iops-read-max-length=1,x-iops-write-max-length=1 \
  -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-4-format,id=virtio-disk0,bootindex=1 \
  -blockdev driver=file,filename=image.qcow2,node-name=libvirt-4-storage,discard=unmap \
  -blockdev node-name=libvirt-4-format,driver=qcow2,file=libvirt-4-storage \
  -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-6-filter,id=virtio-disk1 \
  -blockdev driver=file,filename=test01.qcow2,node-name=libvirt-3-storage,discard=unmap \
  -blockdev node-name=libvirt-3-format,driver=qcow2,file=libvirt-3-storage \
  -blockdev driver=throttle,node-name=libvirt-5-filter,throttle-group=limit0,file=libvirt-3-format \
  -blockdev driver=throttle,node-name=libvirt-6-filter,throttle-group=limit012,file=libvirt-5-filter \
  -device virtio-blk-pci,bus=pci.0,addr=0x6,drive=libvirt-4-filter,id=virtio-disk2 \
  -blockdev driver=file,filename=test02.qcow2,node-name=libvirt-2-storage,discard=unmap \
  -blockdev node-name=libvirt-2-format,driver=qcow2,file=libvirt-2-storage \
  -blockdev driver=throttle,node-name=libvirt-3-filter,throttle-group=limit1,file=libvirt-2-format \
  -blockdev driver=throttle,node-name=libvirt-4-filter,throttle-group=limit012,file=libvirt-3-filter \
  -device virtio-blk-pci,bus=pci.0,addr=0x7,drive=libvirt-2-filter,id=virtio-disk3 \
  -blockdev driver=file,filename=test03.qcow2,node-name=libvirt-1-storage,discard=unmap \
  -blockdev node-name=libvirt-1-format,driver=qcow2,file=libvirt-1-storage \
  -blockdev driver=throttle,node-name=libvirt-1-filter,throttle-group=limit2,file=libvirt-1-format \
  -blockdev driver=throttle,node-name=libvirt-2-filter,throttle-group=limit012,file=libvirt-1-filter \
  -netdev user,id=user0,hostfwd=tcp::5028-:22 \
  -device virtio-net-pci,netdev=user0,id=net0,mac=52:54:00:12:34:56,bus=pci.0,addr=0x3 \
  -monitor stdio

(qemu) info status
(qemu) savevm snapshot1
(qemu) loadvm snapshot1

Dongli Zhang

On 3/28/24 03:50, Chun Feng Wu wrote:
> BTW, if I download QEMU 8.0 (https://download.qemu.org/qemu-8.0.0.tar.xz),
> compile and install it on my Ubuntu 22.04, and launch the VM with the same
> command (just updating -machine to "pc"), the host CPU usage is not high,
> so it seems to be a bug in QEMU 6.
>
> Also, I have another question about disk iotune / throttle groups: after
> reverting a snapshot, if I log in to the VM and run fio, I/O performance
> drops a lot in both QEMU 6 and QEMU 8. Does anyone know the reason? Any
> explanation would be appreciated!
>
> [...]


