* [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
@ 2017-10-26  3:09 Sam
  2017-10-26  3:54 ` Sam
  2017-10-26 14:02 ` Daniel P. Berrange
  0 siblings, 2 replies; 8+ messages in thread
From: Sam @ 2017-10-26  3:09 UTC (permalink / raw)
  To: QEMU Developers

For qemu-2.6.0, in a huge page (1G) environment, after killing the QEMU process,
the memory allocated for the VM is not released. Details are below.
Or should I be using some specific command to stop the VM? I would like to know
whether anyone else has seen this problem.

The start up command is:

CMD1="$QEMU_CMD -D qemu.log -trace events=qemu-events-all -enable-kvm \
  -cpu qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,+pat,+ss,+pclmulqdq,+xsave,level=13 \
  -machine pc,accel=kvm \
  -chardev socket,id=hmqmondev,port=55908,host=127.0.0.1,nodelay,server,nowait \
  -mon chardev=hmqmondev,id=hmqmon,mode=readline \
  -rtc base=utc,clock=host,driftfix=none \
  -usb -device usb-tablet -daemonize -nodefaults -nodefconfig \
  -no-kvm-pit-reinjection -global kvm-pit.lost_tick_policy=discard \
  -vga std -k en-us -smp 8 -name gangyewei-qemutime-1 -m 40960 \
  -boot order=cdn -vnc :8,password \
  -drive file=$DISK_0,if=none,id=drive_0,format=qcow2,cache=none,aio=native \
  -device virtio-blk-pci,id=dev_drive_0,drive=drive_0,bus=pci.0,addr=0x5 \
  -drive file=$DISK_1,if=none,id=drive_1,format=qcow2,cache=none,aio=native \
  -device virtio-blk-pci,id=dev_drive_1,drive=drive_1,bus=pci.0,addr=0x6 \
  -drive file=$DISK_2,if=none,id=drive_2,format=qcow2,cache=none,aio=native \
  -device virtio-blk-pci,id=dev_drive_2,drive=drive_2,bus=pci.0,addr=0x7 \
  -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1 \
  -drive id=ide0-cd0,media=cdrom,if=none \
  -chardev socket,id=char-n-52b49b80,path=/usr/local/var/run/openvswitch/n-52b49b80,server \
  -netdev type=vhost-user,id=n-52b49b80,chardev=char-n-52b49b80,vhostforce=on \
  -device virtio-net-pci,netdev=n-52b49b80,mac=00:22:52:b4:9b:80,id=netdev-n-52b49b80,addr=0xf$(nic_speed 10000) \
  -object memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -pidfile $PID_FILE \
  -chardev socket,path=/opt/cloud/workspace/servers/4511f52a-f450-40d3-9417-a1e0a27ed507/qga.sock,server,nowait,id=qga0 \
  -device virtio-serial -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0"

The stop script simply kills this process.

The output of `cat /proc/meminfo` shows the memory is still in use.
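A quick way to confirm this from the host is to watch the hugepage counters rather than the overall totals; a minimal check (fields as exposed by a stock Linux kernel) might look like:

```shell
# Snapshot the hugepage accounting. After the QEMU process is killed,
# HugePages_Free failing to climb back to HugePages_Total is the
# symptom described above.
grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
```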

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
  2017-10-26  3:09 [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak? Sam
@ 2017-10-26  3:54 ` Sam
  2017-10-26  9:22   ` Sam
  2017-10-26 14:02 ` Daniel P. Berrange
  1 sibling, 1 reply; 8+ messages in thread
From: Sam @ 2017-10-26  3:54 UTC (permalink / raw)
  To: QEMU Developers

BTW, I start the VM with this command and then stop it, and the memory leak happens.

For comparison, I start another VM, wait about 10 minutes, and then kill it the
same way as above; then no memory leak happens.

So is this a bug in KVM or QEMU?

2017-10-26 11:09 GMT+08:00 Sam <batmanustc@gmail.com>:

> For qemu-2.6.0, in huge page (1G) environment, after kill the qemu
> process, memory which is alloc for the vm could not be released. Detail is
> bellow.
> Or should I use some specific command to stop vm? I want to know if there
> someone has the same problem?
>
> The start up command is:
>
> CMD1="$QEMU_CMD -D qemu.log -trace events=qemu-events-all -enable-kvm -cpu
> qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,+
> pat,+ss,+pclmulqdq,+xsave,level=13 -machine pc,accel=kvm -chardev
> socket,id=hmqmondev,port=55908,host=127.0.0.1,nodelay,server,nowait -mon
> chardev=hmqmondev,id=hmqmon,mode=readline -rtc
> base=utc,clock=host,driftfix=none -usb -device usb-tablet -daemonize
> -nodefaults -nodefconfig -no-kvm-pit-reinjection -global
> kvm-pit.lost_tick_policy=discard -vga std -k en-us -smp 8 -name
> gangyewei-qemutime-1 -m 40960 -boot order=cdn -vnc :8,password -drive
> file=$DISK_0,if=none,id=drive_0,format=qcow2,cache=none,aio=native
> -device virtio-blk-pci,id=dev_drive_0,drive=drive_0,bus=pci.0,addr=0x5
> -drive file=$DISK_1,if=none,id=drive_1,format=qcow2,cache=none,aio=native
> -device virtio-blk-pci,id=dev_drive_1,drive=drive_1,bus=pci.0,addr=0x6
> -drive file=$DISK_2,if=none,id=drive_2,format=qcow2,cache=none,aio=native
> -device virtio-blk-pci,id=dev_drive_2,drive=drive_2,bus=pci.0,addr=0x7
> -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1 -drive
> id=ide0-cd0,media=cdrom,if=none -chardev socket,id=char-n-52b49b80,path
> =/usr/local/var/run/openvswitch/n-52b49b80,server -netdev
> type=vhost-user,id=n-52b49b80,chardev=char-n-52b49b80,vhostforce=on
> -device virtio-net-pci,netdev=n-52b49b80,mac=00:22:52:b4:9b:80,id=
> netdev-n-52b49b80,addr=0xf$(nic_speed 10000) -object
> memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on -numa
> node,memdev=mem -pidfile $PID_FILE -chardev socket,path=/opt/cloud/workspa
> ce/servers/4511f52a-f450-40d3-9417-a1e0a27ed507/qga.sock,server,nowait,id=qga0
> -device virtio-serial -device virtserialport,chardev=qga0,na
> me=org.qemu.guest_agent.0"
>
> The stop script is just kill this process.
>
> the result of `cat /proc/meminfo` show memory is still there.
>


* Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
  2017-10-26  3:54 ` Sam
@ 2017-10-26  9:22   ` Sam
  2017-10-26 13:29     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 8+ messages in thread
From: Sam @ 2017-10-26  9:22 UTC (permalink / raw)
  To: QEMU Developers

I don't know whether the sequence is:
1. QEMU calls the kernel API to allocate 40G of memory.
2. The QEMU process is killed by hand.
3. The kernel has allocated the 40G, but since the QEMU process is gone, the 40G
is left behind in the kernel.
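The lifetime rule at play in step 3 can be sketched with tmpfs, which behaves like hugetlbfs in this respect: memory backing a shared file belongs to the file, not to the process that wrote it (the file name below is made up for the demonstration).

```shell
# Create an 8M file in tmpfs; its pages stay allocated even after the
# writing process has exited...
dd if=/dev/zero of=/dev/shm/lifetime-demo bs=1M count=8 status=none
grep 'Shmem:' /proc/meminfo   # tmpfs usage now includes the 8M file

# ...and are only released once the last reference is gone, i.e. the
# file is unlinked and no process still maps it.
rm /dev/shm/lifetime-demo
```

With `share=on` and `mem-path=/mnt/huge`, QEMU's guest RAM is exactly such a file, so the pages surviving the process is expected as long as something (a leftover file or another process's mapping) still references them.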

2017-10-26 11:54 GMT+08:00 Sam <batmanustc@gmail.com>:

> BTW, I start VM by this command, and then stop it, then memory leak
> happens.
>
> To compare, I start another VM, and wait for about 10 minutes, and then
> kill it as I does above, then no memory leak happens.
>
> So is this a bug of KVM or qemu ?
>
> 2017-10-26 11:09 GMT+08:00 Sam <batmanustc@gmail.com>:
>
>> For qemu-2.6.0, in huge page (1G) environment, after kill the qemu
>> process, memory which is alloc for the vm could not be released. Detail is
>> bellow.
>> Or should I use some specific command to stop vm? I want to know if there
>> someone has the same problem?
>>
>> The start up command is:
>>
>> CMD1="$QEMU_CMD -D qemu.log -trace events=qemu-events-all -enable-kvm
>> -cpu qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,+p
>> at,+ss,+pclmulqdq,+xsave,level=13 -machine pc,accel=kvm -chardev
>> socket,id=hmqmondev,port=55908,host=127.0.0.1,nodelay,server,nowait -mon
>> chardev=hmqmondev,id=hmqmon,mode=readline -rtc
>> base=utc,clock=host,driftfix=none -usb -device usb-tablet -daemonize
>> -nodefaults -nodefconfig -no-kvm-pit-reinjection -global
>> kvm-pit.lost_tick_policy=discard -vga std -k en-us -smp 8 -name
>> gangyewei-qemutime-1 -m 40960 -boot order=cdn -vnc :8,password -drive
>> file=$DISK_0,if=none,id=drive_0,format=qcow2,cache=none,aio=native
>> -device virtio-blk-pci,id=dev_drive_0,drive=drive_0,bus=pci.0,addr=0x5
>> -drive file=$DISK_1,if=none,id=drive_1,format=qcow2,cache=none,aio=native
>> -device virtio-blk-pci,id=dev_drive_1,drive=drive_1,bus=pci.0,addr=0x6
>> -drive file=$DISK_2,if=none,id=drive_2,format=qcow2,cache=none,aio=native
>> -device virtio-blk-pci,id=dev_drive_2,drive=drive_2,bus=pci.0,addr=0x7
>> -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1 -drive
>> id=ide0-cd0,media=cdrom,if=none -chardev socket,id=char-n-52b49b80,path
>> =/usr/local/var/run/openvswitch/n-52b49b80,server -netdev
>> type=vhost-user,id=n-52b49b80,chardev=char-n-52b49b80,vhostforce=on
>> -device virtio-net-pci,netdev=n-52b49b80,mac=00:22:52:b4:9b:80,id=ne
>> tdev-n-52b49b80,addr=0xf$(nic_speed 10000) -object
>> memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on -numa
>> node,memdev=mem -pidfile $PID_FILE -chardev socket,path=/opt/cloud/
>> workspace/servers/4511f52a-f450-40d3-9417-a1e0a27ed507/qga.sock,server,nowait,id=qga0
>> -device virtio-serial -device virtserialport,chardev=qga0,na
>> me=org.qemu.guest_agent.0"
>>
>> The stop script is just kill this process.
>>
>> the result of `cat /proc/meminfo` show memory is still there.
>>
>
>


* Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
  2017-10-26  9:22   ` Sam
@ 2017-10-26 13:29     ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 8+ messages in thread
From: Dr. David Alan Gilbert @ 2017-10-26 13:29 UTC (permalink / raw)
  To: Sam; +Cc: QEMU Developers

* Sam (batmanustc@gmail.com) wrote:
> I don't know if this process is:
> 1. qemu call kernel API to alloc 40G memory.
> 2. qemu process is killed by hand.
> 3. kernel alloc 40G memory, but qemu process is down, then this 40G is left
> in kernel.

It might be interesting to check the files in:
/sys/kernel/mm/hugepages/hugepages-2048kB

what do free_hugepages and nr_hugepages say?

Is there a file left in /mnt/huge ?
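For a 1G-page setup like the one in the original command, the counters live under hugepages-1048576kB rather than hugepages-2048kB; a defensive sweep over whatever pools exist (paths as on a stock Linux host) could be:

```shell
# Print nr_hugepages / free_hugepages for every configured pool size;
# free staying below nr after the guest is gone means pages are still held.
for d in /sys/kernel/mm/hugepages/hugepages-*kB; do
    [ -d "$d" ] || continue
    echo "$d: nr=$(cat "$d/nr_hugepages") free=$(cat "$d/free_hugepages")"
done
# A leftover backing file would also pin the pages:
ls -l /mnt/huge 2>/dev/null || echo "/mnt/huge not mounted here"
```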

Dave

> 
> 2017-10-26 11:54 GMT+08:00 Sam <batmanustc@gmail.com>:
> 
> > BTW, I start VM by this command, and then stop it, then memory leak
> > happens.
> >
> > To compare, I start another VM, and wait for about 10 minutes, and then
> > kill it as I does above, then no memory leak happens.
> >
> > So is this a bug of KVM or qemu ?
> >
> > 2017-10-26 11:09 GMT+08:00 Sam <batmanustc@gmail.com>:
> >
> >> For qemu-2.6.0, in huge page (1G) environment, after kill the qemu
> >> process, memory which is alloc for the vm could not be released. Detail is
> >> bellow.
> >> Or should I use some specific command to stop vm? I want to know if there
> >> someone has the same problem?
> >>
> >> The start up command is:
> >>
> >> CMD1="$QEMU_CMD -D qemu.log -trace events=qemu-events-all -enable-kvm
> >> -cpu qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,+p
> >> at,+ss,+pclmulqdq,+xsave,level=13 -machine pc,accel=kvm -chardev
> >> socket,id=hmqmondev,port=55908,host=127.0.0.1,nodelay,server,nowait -mon
> >> chardev=hmqmondev,id=hmqmon,mode=readline -rtc
> >> base=utc,clock=host,driftfix=none -usb -device usb-tablet -daemonize
> >> -nodefaults -nodefconfig -no-kvm-pit-reinjection -global
> >> kvm-pit.lost_tick_policy=discard -vga std -k en-us -smp 8 -name
> >> gangyewei-qemutime-1 -m 40960 -boot order=cdn -vnc :8,password -drive
> >> file=$DISK_0,if=none,id=drive_0,format=qcow2,cache=none,aio=native
> >> -device virtio-blk-pci,id=dev_drive_0,drive=drive_0,bus=pci.0,addr=0x5
> >> -drive file=$DISK_1,if=none,id=drive_1,format=qcow2,cache=none,aio=native
> >> -device virtio-blk-pci,id=dev_drive_1,drive=drive_1,bus=pci.0,addr=0x6
> >> -drive file=$DISK_2,if=none,id=drive_2,format=qcow2,cache=none,aio=native
> >> -device virtio-blk-pci,id=dev_drive_2,drive=drive_2,bus=pci.0,addr=0x7
> >> -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1 -drive
> >> id=ide0-cd0,media=cdrom,if=none -chardev socket,id=char-n-52b49b80,path
> >> =/usr/local/var/run/openvswitch/n-52b49b80,server -netdev
> >> type=vhost-user,id=n-52b49b80,chardev=char-n-52b49b80,vhostforce=on
> >> -device virtio-net-pci,netdev=n-52b49b80,mac=00:22:52:b4:9b:80,id=ne
> >> tdev-n-52b49b80,addr=0xf$(nic_speed 10000) -object
> >> memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on -numa
> >> node,memdev=mem -pidfile $PID_FILE -chardev socket,path=/opt/cloud/
> >> workspace/servers/4511f52a-f450-40d3-9417-a1e0a27ed507/qga.sock,server,nowait,id=qga0
> >> -device virtio-serial -device virtserialport,chardev=qga0,na
> >> me=org.qemu.guest_agent.0"
> >>
> >> The stop script is just kill this process.
> >>
> >> the result of `cat /proc/meminfo` show memory is still there.
> >>
> >
> >
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
  2017-10-26  3:09 [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak? Sam
  2017-10-26  3:54 ` Sam
@ 2017-10-26 14:02 ` Daniel P. Berrange
  2017-10-27  3:28   ` Sam
  1 sibling, 1 reply; 8+ messages in thread
From: Daniel P. Berrange @ 2017-10-26 14:02 UTC (permalink / raw)
  To: Sam; +Cc: QEMU Developers

On Thu, Oct 26, 2017 at 11:09:49AM +0800, Sam wrote:
> For qemu-2.6.0, in huge page (1G) environment, after kill the qemu process,
> memory which is alloc for the vm could not be released. Detail is bellow.
> Or should I use some specific command to stop vm? I want to know if there
> someone has the same problem?
> 
> The start up command is:
> 
> CMD1="$QEMU_CMD -D qemu.log -trace events=qemu-events-all -enable-kvm -cpu
> qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,
> +pat,+ss,+pclmulqdq,+xsave,level=13 -machine pc,accel=kvm -chardev
> socket,id=hmqmondev,port=55908,host=127.0.0.1,nodelay,server,nowait -mon
> chardev=hmqmondev,id=hmqmon,mode=readline -rtc
> base=utc,clock=host,driftfix=none
> -usb -device usb-tablet -daemonize -nodefaults -nodefconfig
> -no-kvm-pit-reinjection -global kvm-pit.lost_tick_policy=discard -vga std
> -k en-us -smp 8 -name gangyewei-qemutime-1 -m 40960 -boot order=cdn -vnc
> :8,password -drive file=$DISK_0,if=none,id=drive_
> 0,format=qcow2,cache=none,aio=native -device virtio-blk-pci,id=dev_drive_0,
> drive=drive_0,bus=pci.0,addr=0x5 -drive file=$DISK_1,if=none,id=drive_
> 1,format=qcow2,cache=none,aio=native -device virtio-blk-pci,id=dev_drive_1,
> drive=drive_1,bus=pci.0,addr=0x6 -drive file=$DISK_2,if=none,id=drive_
> 2,format=qcow2,cache=none,aio=native -device virtio-blk-pci,id=dev_drive_2,
> drive=drive_2,bus=pci.0,addr=0x7 -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1
> -drive id=ide0-cd0,media=cdrom,if=none -chardev socket,id=char-n-52b49b80,
> path=/usr/local/var/run/openvswitch/n-52b49b80,server -netdev
> type=vhost-user,id=n-52b49b80,chardev=char-n-52b49b80,vhostforce=on -device


Ok, here you have a vhost-user network device associated with a UNIX
socket

> virtio-net-pci,netdev=n-52b49b80,mac=00:22:52:b4:9b:
> 80,id=netdev-n-52b49b80,addr=0xf$(nic_speed 10000) -object
> memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on -numa

and here the QEMU RAM is marked shared.

> node,memdev=mem -pidfile $PID_FILE -chardev socket,path=/opt/cloud/
> workspace/servers/4511f52a-f450-40d3-9417-a1e0a27ed507/qga.sock,server,nowait,id=qga0
> -device virtio-serial -device virtserialport,chardev=qga0,
> name=org.qemu.guest_agent.0"
> 
> The stop script is just kill this process.
> 
> the result of `cat /proc/meminfo` show memory is still there.

I expect what has happened is that QEMU has connected to openvswitch via the
vhost-user netdev you have, and shared its guest RAM with openvswitch. Now
the openvswitch process has the 40G of RAM mapped.

When you kill QEMU, QEMU exits and the kernel releases all of its RAM mappings,
but the 40G guest RAM mapping is still held by openvswitch.

IOW, I suspect that openvswitch is not releasing the RAM mapping when QEMU
exits, and so it stays resident.

Take a look at the openvswitch processes to see if any of them have the
40GB RAM mapping still shown.
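One way to do that check (the /mnt/huge path is taken from the command above; the exact openvswitch process name varies by deployment, so this scans every process) is to look for mappings of the hugetlbfs files in /proc/*/maps:

```shell
# List every process that still maps a file under /mnt/huge; after the
# guest is killed, any hit here is a process keeping the 40G alive.
for m in /proc/[0-9]*/maps; do
    pid=${m#/proc/}; pid=${pid%/maps}
    if grep -q '/mnt/huge' "$m" 2>/dev/null; then
        echo "pid $pid ($(cat /proc/$pid/comm 2>/dev/null)) still maps guest RAM"
    fi
done
```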


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


* Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
  2017-10-26 14:02 ` Daniel P. Berrange
@ 2017-10-27  3:28   ` Sam
  2017-10-27  7:08     ` Daniel P. Berrange
  0 siblings, 1 reply; 8+ messages in thread
From: Sam @ 2017-10-27  3:28 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: QEMU Developers

After restarting ovs-dpdk (which is openvswitch with the DPDK library), the
memory is released.

But the problem is that in the production environment I cannot restart
ovs-dpdk......

So I think I'd better wait about 10 seconds to avoid this bug, or start the VM
with '-mem-prealloc'.

The reason I want to remove '-mem-prealloc' is to reduce the start time of the
VM, but now it seems I have to do more testing.
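For reference, the trade-off being weighed is a single flag; everything else below is a trimmed placeholder, not the full command from earlier in the thread:

```shell
# With -mem-prealloc, the guest RAM is faulted in up front: startup is
# slower, but the hugepage pool state is explicit from the start.
# Without it, pages are faulted lazily, which is what interacts badly
# with the vhost-user sharing described in this thread.
qemu-system-x86_64 -enable-kvm -m 40960 \
    -mem-prealloc \
    -object memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem \
    ...
```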

2017-10-26 22:02 GMT+08:00 Daniel P. Berrange <berrange@redhat.com>:

> On Thu, Oct 26, 2017 at 11:09:49AM +0800, Sam wrote:
> > For qemu-2.6.0, in huge page (1G) environment, after kill the qemu
> process,
> > memory which is alloc for the vm could not be released. Detail is bellow.
> > Or should I use some specific command to stop vm? I want to know if there
> > someone has the same problem?
> >
> > The start up command is:
> >
> > CMD1="$QEMU_CMD -D qemu.log -trace events=qemu-events-all -enable-kvm
> -cpu
> > qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,
> > +pat,+ss,+pclmulqdq,+xsave,level=13 -machine pc,accel=kvm -chardev
> > socket,id=hmqmondev,port=55908,host=127.0.0.1,nodelay,server,nowait -mon
> > chardev=hmqmondev,id=hmqmon,mode=readline -rtc
> > base=utc,clock=host,driftfix=none
> > -usb -device usb-tablet -daemonize -nodefaults -nodefconfig
> > -no-kvm-pit-reinjection -global kvm-pit.lost_tick_policy=discard -vga
> std
> > -k en-us -smp 8 -name gangyewei-qemutime-1 -m 40960 -boot order=cdn -vnc
> > :8,password -drive file=$DISK_0,if=none,id=drive_
> > 0,format=qcow2,cache=none,aio=native -device
> virtio-blk-pci,id=dev_drive_0,
> > drive=drive_0,bus=pci.0,addr=0x5 -drive file=$DISK_1,if=none,id=drive_
> > 1,format=qcow2,cache=none,aio=native -device
> virtio-blk-pci,id=dev_drive_1,
> > drive=drive_1,bus=pci.0,addr=0x6 -drive file=$DISK_2,if=none,id=drive_
> > 2,format=qcow2,cache=none,aio=native -device
> virtio-blk-pci,id=dev_drive_2,
> > drive=drive_2,bus=pci.0,addr=0x7 -device ide-cd,drive=ide0-cd0,bus=ide.
> 1,unit=1
> > -drive id=ide0-cd0,media=cdrom,if=none -chardev
> socket,id=char-n-52b49b80,
> > path=/usr/local/var/run/openvswitch/n-52b49b80,server -netdev
> > type=vhost-user,id=n-52b49b80,chardev=char-n-52b49b80,vhostforce=on
> -device
>
>
> Ok, here you have a vhost-user network device associated with a UNIX
> socket
>
> > virtio-net-pci,netdev=n-52b49b80,mac=00:22:52:b4:9b:
> > 80,id=netdev-n-52b49b80,addr=0xf$(nic_speed 10000) -object
> > memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on -numa
>
> and here the QEMU RAM is marked shared.
>
> > node,memdev=mem -pidfile $PID_FILE -chardev socket,path=/opt/cloud/
> > workspace/servers/4511f52a-f450-40d3-9417-a1e0a27ed507/
> qga.sock,server,nowait,id=qga0
> > -device virtio-serial -device virtserialport,chardev=qga0,
> > name=org.qemu.guest_agent.0"
> >
> > The stop script is just kill this process.
> >
> > the result of `cat /proc/meminfo` show memory is still there.
>
> I expect what has happened is that QEMU has connected to openvsiwtch via
> the
> vhost-user netdev you have, and shared its guest RAM with openvswitch. Now
> the openvswitch process has the 40G RAM page mapped.
>
> Now you kill QEMU and QEMU exits and the kernel releases all its RAM
> mappings,
> but the 40G guest RAM mapping is still used by openvswitch.
>
> IOW, I suspect that openvswitch is not releasing the RAM mapping when QEMU
> exits, and so it stays resident.
>
> Take a look at the openvswitch processes to see if any of them have the
> 40GB RAM mapping still shown.
>
>
> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/
> dberrange :|
> |: https://libvirt.org         -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/
> dberrange :|
>


* Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
  2017-10-27  3:28   ` Sam
@ 2017-10-27  7:08     ` Daniel P. Berrange
  2017-10-31  5:21       ` Sam
  0 siblings, 1 reply; 8+ messages in thread
From: Daniel P. Berrange @ 2017-10-27  7:08 UTC (permalink / raw)
  To: Sam; +Cc: QEMU Developers

On Fri, Oct 27, 2017 at 11:28:33AM +0800, Sam wrote:
> After restart ovs-dpdk(which is openvswitch with dpdk lib), memory is
> released.
> 
> But problem is in product environment, I could not restart ovs-dpdk......

This is a clear bug in ovs-dpdk then - it is failing to release memory
when QEMU exits.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


* Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
  2017-10-27  7:08     ` Daniel P. Berrange
@ 2017-10-31  5:21       ` Sam
  0 siblings, 0 replies; 8+ messages in thread
From: Sam @ 2017-10-31  5:21 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: QEMU Developers

Why can I not find this in the ovs-dpdk mailing list? Could someone give me the
bug number?

2017-10-27 15:08 GMT+08:00 Daniel P. Berrange <berrange@redhat.com>:

> On Fri, Oct 27, 2017 at 11:28:33AM +0800, Sam wrote:
> > After restart ovs-dpdk(which is openvswitch with dpdk lib), memory is
> > released.
> >
> > But problem is in product environment, I could not restart ovs-dpdk......
>
> This is a clear bug in ovs-dpdk then - it is failing to release memory
> when QEMU exits.
>
>
> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/
> dberrange :|
> |: https://libvirt.org         -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/
> dberrange :|
>


end of thread, other threads:[~2017-10-31  5:21 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-26  3:09 [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak? Sam
2017-10-26  3:54 ` Sam
2017-10-26  9:22   ` Sam
2017-10-26 13:29     ` Dr. David Alan Gilbert
2017-10-26 14:02 ` Daniel P. Berrange
2017-10-27  3:28   ` Sam
2017-10-27  7:08     ` Daniel P. Berrange
2017-10-31  5:21       ` Sam
