From: "Alex Bennée" <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] [Bug 1836558] Re: [RFC PATCH for 4.1?] target/ppc: move opcode decode tables to PowerPCCPU
Date: Wed, 17 Jul 2019 09:41:16 -0000	[thread overview]
Message-ID: <878ssxuhfn.fsf@zen.linaroharston>
In-Reply-To: 156318593102.28533.3075291509963886255.malonedeb@chaenomeles.canonical.com

David Gibson <david@gibson.dropbear.id.au> writes:

> On Tue, Jul 16, 2019 at 01:13:52PM +0100, Alex Bennée wrote:
>> The opcode decode tables aren't really part of CPUPPCState but an
>> internal implementation detail of the translator. This can cause
>> problems with the memcpy in cpu_copy, as any table created during
>> ppc_cpu_realize gets overwritten, causing a memory leak. To avoid
>> this, move the tables into PowerPCCPU, which is better suited to
>> hold internal implementation details.
>>
>> Attempts to fix: https://bugs.launchpad.net/qemu/+bug/1836558
>> Cc: 1836558@bugs.launchpad.net
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>
> I've applied this now to ppc-for-4.2.  If there's an argument for
> including it in 4.1 during hard freeze, you'll need to spell it out
> for me.

Well without:

  Subject: [RFC PATCH for 4.1] linux-user: unparent CPU object before unref
  Date: Tue, 16 Jul 2019 15:01:33 +0100
  Message-Id: <20190716140133.8578-1-alex.bennee@linaro.org>

it doesn't matter, as we never attempt to free the memory once a thread
is destroyed. This means any linux-user guest that creates and destroys
threads quickly will slowly leak memory. However, due to the dynamic
opcode tables, ppc/ppc64-linux-user guests leak a lot faster than most,
on the order of ~50k each time a thread is created and destroyed.
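
To illustrate the mechanism (a deliberately simplified sketch, not QEMU's
actual code; the Env struct, table sizes and calloc usage are invented for
illustration): realize() allocates per-CPU decode tables into the state
struct, and a wholesale memcpy of the parent's state during the
cpu_copy-style clone then clobbers the freshly allocated pointers, so
those tables are never freed. Moving the tables out of the memcpy'd
struct avoids the clobbering altogether.

  #include <cstring>
  #include <cstdlib>

  // Simplified stand-ins for CPUPPCState / ppc_cpu_realize / cpu_copy.
  struct Env {
      void *opc_table[64];   // filled in at realize time
      long  regs[32];        // ...plus ordinary register state
  };

  static void realize(Env *env)
  {
      for (auto &slot : env->opc_table) {
          slot = std::calloc(256, sizeof(void *));   // per-CPU decode table
      }
  }

  static Env *cpu_copy(const Env *parent)
  {
      Env *child = new Env{};
      realize(child);                              // allocates fresh tables...
      std::memcpy(child, parent, sizeof(*child));  // ...then overwrites the
                                                   // pointers: the fresh tables leak
      return child;
  }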

However I'm happy to defer to you as the maintainer :-)

--
Alex Bennée

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1836558

Title:
  Qemu-ppc Memory leak creating threads

Status in QEMU:
  Confirmed

Bug description:
  When creating C++ threads (with C++ std::thread), the resulting binary
  leaks memory when run under qemu-ppc.

  E.g. the following C++ program, when compiled with gcc, consumes more
  and more memory while running under qemu-ppc. (There are no leaks when
  the program is compiled for Intel, nor when the same binary is run on
  real PowerPC hardware.)

  (Note: I used the function getCurrentRSS to report memory usage, see
  https://stackoverflow.com/questions/669438/how-to-get-memory-usage-at-runtime-using-c;
  the calls are commented out here.)

  Compiler: powerpc-linux-gnu-g++ (Debian 8.3.0-2) 8.3.0 (the same problem occurs with older g++ compilers, even 4.9)
  OS: Debian 10.0 (Buster) (the same problem is seen on Debian 9/stretch)
  qemu: qemu-ppc version 3.1.50


  ---

  #include <iostream>
  #include <thread>
  #include <chrono>

  
  using namespace std::chrono_literals;

  // Create, run, and join 100 threads.
  void Fun100()
  {
  //    auto b4 = getCurrentRSS();
  //    std::cout << getCurrentRSS() << std::endl;
      for(int n = 0; n < 100; n++)
      {
          std::thread t([]
          {
              std::this_thread::sleep_for( 10ms );
          });
  //        std::cout << n << ' ' << getCurrentRSS() << std::endl;
          t.join();
      }
      std::this_thread::sleep_for( 500ms ); // give the OS some time to reclaim memory...
  //    auto after = getCurrentRSS();
  //    std::cout << b4 << ' ' << after << std::endl;
  }

  
  int main(int, char **)
  {
      Fun100();
      Fun100();  // memory used keeps increasing
  }
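
  (The program above does not define getCurrentRSS(), so the commented-out
  calls cannot simply be re-enabled as posted. A hypothetical, Linux-only
  stand-in is sketched below (the Stack Overflow answer linked above is
  more portable); it returns the resident set size in bytes.)

  #include <cstddef>
  #include <fstream>
  #include <unistd.h>

  // Hypothetical minimal getCurrentRSS(): reads the resident page count
  // from /proc/self/statm (second field) and converts it to bytes.
  static std::size_t getCurrentRSS()
  {
      long pages = 0, resident = 0;
      std::ifstream statm("/proc/self/statm");
      if (!(statm >> pages >> resident)) {
          return 0;   // could not read; report zero
      }
      return static_cast<std::size_t>(resident) * sysconf(_SC_PAGESIZE);
  }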

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1836558/+subscriptions



Thread overview: 23+ messages
2019-07-15 10:18 [Qemu-devel] [Bug 1836558] [NEW] Qemu-ppc Memory leak creating threads Daan Scherft
2019-07-15 13:34 ` [Qemu-devel] [Bug 1836558] " Daan Scherft
2019-07-15 14:15 ` Alex Bennée
2019-07-15 15:10 ` Daan Scherft
2019-07-15 15:52 ` Alex Bennée
2019-07-15 16:21 ` Alex Bennée
2019-07-15 16:50 ` Alex Bennée
2019-07-16 12:13 ` [Qemu-devel] [RFC PATCH for 4.1?] target/ppc: move opcode decode tables to PowerPCCPU Alex Bennée
2019-07-16 12:13   ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-16 14:50   ` [Qemu-devel] " Richard Henderson
2019-07-17  1:33   ` David Gibson
2019-07-17  9:41     ` Alex Bennée [this message]
2019-07-17  9:41       ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-17 12:13   ` [Qemu-devel] " no-reply
2019-07-16 14:01 ` [Qemu-devel] [RFC PATCH for 4.1] linux-user: unparent CPU object before unref Alex Bennée
2019-07-16 14:01   ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-16 14:43   ` [Qemu-devel] " Peter Maydell
2019-07-16 14:43     ` [Qemu-devel] [Bug 1836558] " Peter Maydell
2019-07-16 15:02     ` [Qemu-devel] " Alex Bennée
2019-07-16 15:02       ` [Qemu-devel] [Bug 1836558] " Alex Bennée
2019-07-16 15:17   ` [Qemu-devel] " Philippe Mathieu-Daudé
2020-01-09 13:22 ` [Bug 1836558] Re: Qemu-ppc Memory leak creating threads Thomas Huth
2020-03-10  8:48 ` Laurent Vivier
