From: "Alex Bennée" <alex.bennee@gmail.com>
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] [Bug 1836558] Re: Qemu-ppc Memory leak creating threads
Date: Mon, 15 Jul 2019 15:52:42 -0000	[thread overview]
Message-ID: <156320596219.28429.12314000798052307695.malone@wampee.canonical.com> (raw)
In-Reply-To: 156318593102.28533.3075291509963886255.malonedeb@chaenomeles.canonical.com

By running:

  valgrind --leak-check=yes ./qemu-ppc tests/testthread

I can replicate a leak that does not show up with qemu-arm running the same test....

==25789==    at 0x483577F: malloc (vg_replace_malloc.c:299)
==25789==    by 0x4D7F8D0: g_malloc (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5800.3)
==25789==    by 0x1FC65D: create_new_table (translate_init.inc.c:9252)
==25789==    by 0x1FC65D: register_ind_in_table (translate_init.inc.c:9291)
==25789==    by 0x1FC971: register_ind_insn (translate_init.inc.c:9325)
==25789==    by 0x1FC971: register_insn (translate_init.inc.c:9390)
==25789==    by 0x1FC971: create_ppc_opcodes (translate_init.inc.c:9450)
==25789==    by 0x1FC971: ppc_cpu_realize (translate_init.inc.c:9819)
==25789==    by 0x277263: device_set_realized (qdev.c:834)
==25789==    by 0x27BBC6: property_set_bool (object.c:2076)
==25789==    by 0x28019E: object_property_set_qobject (qom-qobject.c:26)
==25789==    by 0x27DAF4: object_property_set_bool (object.c:1334)
==25789==    by 0x27AE4B: cpu_create (cpu.c:62)
==25789==    by 0x1C89B8: cpu_copy (main.c:188)
==25789==    by 0x1CA44F: do_fork (syscall.c:5604)
==25789==    by 0x1D665A: do_syscall1.isra.43 (syscall.c:9160)
==25789==
==25789== 6,656 bytes in 26 blocks are possibly lost in loss record 216 of 238
==25789==    at 0x483577F: malloc (vg_replace_malloc.c:299)
==25789==    by 0x4D7F8D0: g_malloc (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5800.3)
==25789==    by 0x1FC65D: create_new_table (translate_init.inc.c:9252)
==25789==    by 0x1FC65D: register_ind_in_table (translate_init.inc.c:9291)
==25789==    by 0x1FC9BA: register_dblind_insn (translate_init.inc.c:9337)
==25789==    by 0x1FC9BA: register_insn (translate_init.inc.c:9384)
==25789==    by 0x1FC9BA: create_ppc_opcodes (translate_init.inc.c:9450)
==25789==    by 0x1FC9BA: ppc_cpu_realize (translate_init.inc.c:9819)
==25789==    by 0x277263: device_set_realized (qdev.c:834)
==25789==    by 0x27BBC6: property_set_bool (object.c:2076)
==25789==    by 0x28019E: object_property_set_qobject (qom-qobject.c:26)
==25789==    by 0x27DAF4: object_property_set_bool (object.c:1334)
==25789==    by 0x27AE4B: cpu_create (cpu.c:62)
==25789==    by 0x17304D: main (main.c:681)
==25789==
==25789== 10,752 (1,024 direct, 9,728 indirect) bytes in 4 blocks are definitely lost in loss record 223 of 238
==25789==    at 0x483577F: malloc (vg_replace_malloc.c:299)
==25789==    by 0x4D7F8D0: g_malloc (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5800.3)
==25789==    by 0x1FC65D: create_new_table (translate_init.inc.c:9252)
==25789==    by 0x1FC65D: register_ind_in_table (translate_init.inc.c:9291)
==25789==    by 0x1FC998: register_dblind_insn (translate_init.inc.c:9332)
==25789==    by 0x1FC998: register_insn (translate_init.inc.c:9384)
==25789==    by 0x1FC998: create_ppc_opcodes (translate_init.inc.c:9450)
==25789==    by 0x1FC998: ppc_cpu_realize (translate_init.inc.c:9819)
==25789==    by 0x277263: device_set_realized (qdev.c:834)
==25789==    by 0x27BBC6: property_set_bool (object.c:2076)
==25789==    by 0x28019E: object_property_set_qobject (qom-qobject.c:26)
==25789==    by 0x27DAF4: object_property_set_bool (object.c:1334)
==25789==    by 0x27AE4B: cpu_create (cpu.c:62)
==25789==    by 0x1C89B8: cpu_copy (main.c:188)
==25789==    by 0x1CA44F: do_fork (syscall.c:5604)
==25789==    by 0x1D665A: do_syscall1.isra.43 (syscall.c:9160)

So something funky happens to the PPC translator for each new thread....
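
Reading the backtrace, the per-thread allocations come from linux-user's
do_fork() calling cpu_copy(), which realizes a brand-new CPU object;
ppc_cpu_realize() then rebuilds the whole opcode decode table tree with
g_malloc(), and nothing frees those tables when the thread's CPU goes
away. The sketch below only illustrates that allocation pattern and is
not the actual QEMU code: the function names follow the backtrace, but
the bodies, table shape and sizes are stand-ins.

  /* Illustrative sketch only -- NOT the real QEMU code.  Names mirror
   * the valgrind backtrace; bodies and sizes are stand-ins. */
  #include <stdio.h>
  #include <stdlib.h>

  #define TABLE_LEN 0x40                    /* stand-in table size */

  typedef struct OpcodeTable {
      struct OpcodeTable *sub[TABLE_LEN];
  } OpcodeTable;

  /* create_new_table(): one g_malloc() per (sub-)table in the real code */
  static OpcodeTable *create_new_table(void)
  {
      return calloc(1, sizeof(OpcodeTable));
  }

  /* create_ppc_opcodes(): builds a tree of tables for the indirect and
   * double-indirect opcodes; nothing on the teardown path frees it */
  static OpcodeTable *create_ppc_opcodes(void)
  {
      OpcodeTable *top = create_new_table();
      for (int i = 0; i < 8; i++) {
          top->sub[i] = create_new_table();
      }
      return top;
  }

  /* linux-user: do_fork() -> cpu_copy() -> cpu_create() realizes a fresh
   * CPU for every guest thread, so the tables are built again per thread
   * and are still allocated when the thread exits. */
  int main(void)
  {
      for (int thread = 0; thread < 100; thread++) {
          OpcodeTable *tables = create_ppc_opcodes();
          (void)tables;                     /* never freed: the leak */
      }
      printf("built opcode tables for 100 simulated threads, freed none\n");
      return 0;
  }

The RFC patch later in this thread ("target/ppc: move opcode decode
tables to PowerPCCPU") targets exactly these tables.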

** Changed in: qemu
       Status: New => Confirmed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1836558

Title:
  Qemu-ppc Memory leak creating threads

Status in QEMU:
  Confirmed

Bug description:
  When creating C++ threads (with C++ std::thread), the resulting binary
  leaks memory when run with qemu-ppc.

  E.g. the following C++ program, when compiled with gcc, consumes more
  and more memory while running under qemu-ppc. (It does not leak when
  compiled for Intel, and there are also no leaks when the same binary
  runs on real PowerPC CPU hardware.)

  (Note: I used the function getCurrentRSS to show the resident set size, see
  https://stackoverflow.com/questions/669438/how-to-get-memory-usage-at-runtime-using-c;
  the calls are commented out here.)

  Compiler: powerpc-linux-gnu-g++ (Debian 8.3.0-2) 8.3.0 (but same problem with older g++ compilers, even 4.9)
  OS: Debian 10.0 (Buster) (but same problem seen on Debian 9/Stretch)
  qemu: qemu-ppc version 3.1.50


  ---

  #include <iostream>
  #include <thread>
  #include <chrono>

  
  using namespace std::chrono_literals;

  // Create/run and join 100 threads.
  void Fun100()
  {
  //    auto b4 = getCurrentRSS();
  //    std::cout << getCurrentRSS() << std::endl;
      for(int n = 0; n < 100; n++)
      {
          std::thread t([]
          {
              std::this_thread::sleep_for( 10ms );
          });
  //        std::cout << n << ' ' << getCurrentRSS() << std::endl;
          t.join();
      }
      std::this_thread::sleep_for( 500ms ); // to give OS some time to wipe memory...
  //    auto after = getCurrentRSS();
  //    std::cout << b4 << ' ' << after << std::endl;
  }

  
  int main(int, char **)
  {
      Fun100();
      Fun100();  // memory used keeps increasing
  }
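
  To rebuild and run the reproducer under qemu-ppc, a command line along
  these lines should work (the source file name and the sysroot path are
  assumptions; the cross compiler is the one listed above):

    powerpc-linux-gnu-g++ -pthread -O2 -o testthread repro.cpp
    qemu-ppc -L /usr/powerpc-linux-gnu ./testthread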

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1836558/+subscriptions


Thread overview: 23+ messages
2019-07-15 10:18 [Qemu-devel] [Bug 1836558] [NEW] Qemu-ppc Memory leak creating threads Daan Scherft
2019-07-15 13:34 ` [Qemu-devel] [Bug 1836558] " Daan Scherft
2019-07-15 14:15 ` Alex Bennée
2019-07-15 15:10 ` Daan Scherft
2019-07-15 15:52 ` Alex Bennée [this message]
2019-07-15 16:21 ` Alex Bennée
2019-07-15 16:50 ` Alex Bennée
2019-07-16 12:13 ` [Qemu-devel] [RFC PATCH for 4.1?] target/ppc: move opcode decode tables to PowerPCCPU Alex Bennée
2019-07-16 12:13   ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-16 14:50   ` [Qemu-devel] " Richard Henderson
2019-07-17  1:33   ` David Gibson
2019-07-17  9:41     ` Alex Bennée
2019-07-17  9:41       ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-17 12:13   ` [Qemu-devel] " no-reply
2019-07-16 14:01 ` [Qemu-devel] [RFC PATCH for 4.1] linux-user: unparent CPU object before unref Alex Bennée
2019-07-16 14:01   ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-16 14:43   ` [Qemu-devel] " Peter Maydell
2019-07-16 14:43     ` [Qemu-devel] [Bug 1836558] " Peter Maydell
2019-07-16 15:02     ` [Qemu-devel] " Alex Bennée
2019-07-16 15:02       ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-16 15:17   ` [Qemu-devel] " Philippe Mathieu-Daudé
2020-01-09 13:22 ` [Bug 1836558] Re: Qemu-ppc Memory leak creating threads Thomas Huth
2020-03-10  8:48 ` Laurent Vivier
