From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Thu, 26 Oct 2017 14:29:12 +0100
To: Sam <batmanustc@gmail.com>
Cc: QEMU Developers <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] Qemu start VM in huge page without '-mem-prealloc' will get memory leak?
Message-ID: <20171026132911.GC2585@work-vm>

* Sam (batmanustc@gmail.com) wrote:
> I don't know whether the process is:
> 1. qemu calls a kernel API to allocate 40G of memory.
> 2. the qemu process is killed by hand.
> 3. the kernel has allocated the 40G, but the qemu process is gone, so that
> 40G is left behind in the kernel.

It might be interesting to check the files in:
  /sys/kernel/mm/hugepages/hugepages-2048kB

What do free_hugepages and nr_hugepages say?
Is there a file left in /mnt/huge ?
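For example, something like this (a rough sketch: it assumes /mnt/huge is
the hugetlbfs mount from your command line, and that a 1G pool lives under
hugepages-1048576kB) would show whether the pages are still reserved and
whether a backing file is still around:

  # per-size hugepage pool counters
  grep . /sys/kernel/mm/hugepages/hugepages-*/nr_hugepages \
         /sys/kernel/mm/hugepages/hugepages-*/free_hugepages

  # is the memory-backend-file backing file still present?
  ls -lh /mnt/huge

  # the same accounting as reported by the kernel
  grep -i huge /proc/meminfo

If a backing file is still sitting in /mnt/huge after the qemu process is
gone, the pages stay reserved until that file is removed, which might
explain what you're seeing.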
Dave

> 2017-10-26 11:54 GMT+08:00 Sam <batmanustc@gmail.com>:
>
> > BTW, I start the VM with the command below, then stop it, and the
> > memory leak happens.
> >
> > To compare, I started another VM, waited about 10 minutes, and then
> > killed it in the same way as above, and no memory leak happened.
> >
> > So is this a bug in KVM or in qemu?
> >
> > 2017-10-26 11:09 GMT+08:00 Sam <batmanustc@gmail.com>:
> >
> >> For qemu-2.6.0, in a huge page (1G) environment, after killing the
> >> qemu process the memory allocated for the VM is not released.
> >> Details are below.
> >> Or should I use some specific command to stop the VM? I want to know
> >> whether anyone else has the same problem.
> >>
> >> The start-up command is:
> >>
> >> CMD1="$QEMU_CMD -D qemu.log -trace events=qemu-events-all -enable-kvm
> >> -cpu qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,+pat,+ss,+pclmulqdq,+xsave,level=13
> >> -machine pc,accel=kvm
> >> -chardev socket,id=hmqmondev,port=55908,host=127.0.0.1,nodelay,server,nowait
> >> -mon chardev=hmqmondev,id=hmqmon,mode=readline
> >> -rtc base=utc,clock=host,driftfix=none
> >> -usb -device usb-tablet -daemonize -nodefaults -nodefconfig
> >> -no-kvm-pit-reinjection -global kvm-pit.lost_tick_policy=discard
> >> -vga std -k en-us -smp 8 -name gangyewei-qemutime-1 -m 40960
> >> -boot order=cdn -vnc :8,password
> >> -drive file=$DISK_0,if=none,id=drive_0,format=qcow2,cache=none,aio=native
> >> -device virtio-blk-pci,id=dev_drive_0,drive=drive_0,bus=pci.0,addr=0x5
> >> -drive file=$DISK_1,if=none,id=drive_1,format=qcow2,cache=none,aio=native
> >> -device virtio-blk-pci,id=dev_drive_1,drive=drive_1,bus=pci.0,addr=0x6
> >> -drive file=$DISK_2,if=none,id=drive_2,format=qcow2,cache=none,aio=native
> >> -device virtio-blk-pci,id=dev_drive_2,drive=drive_2,bus=pci.0,addr=0x7
> >> -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1
> >> -drive id=ide0-cd0,media=cdrom,if=none
> >> -chardev socket,id=char-n-52b49b80,path=/usr/local/var/run/openvswitch/n-52b49b80,server
> >> -netdev type=vhost-user,id=n-52b49b80,chardev=char-n-52b49b80,vhostforce=on
> >> -device virtio-net-pci,netdev=n-52b49b80,mac=00:22:52:b4:9b:80,id=netdev-n-52b49b80,addr=0xf$(nic_speed 10000)
> >> -object memory-backend-file,id=mem,size=40960M,mem-path=/mnt/huge,share=on
> >> -numa node,memdev=mem -pidfile $PID_FILE
> >> -chardev socket,path=/opt/cloud/workspace/servers/4511f52a-f450-40d3-9417-a1e0a27ed507/qga.sock,server,nowait,id=qga0
> >> -device virtio-serial
> >> -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0"
> >>
> >> The stop script just kills this process.
> >>
> >> The output of `cat /proc/meminfo` shows the memory is still there.

-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK