qemu-devel.nongnu.org archive mirror
From: Paul Zimmerman <pauldzim@gmail.com>
To: Thomas <74cmonty@gmail.com>
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
	"QEMU Developers" <qemu-devel@nongnu.org>,
	"Andrew Baumann" <Andrew.Baumann@microsoft.com>,
	"Philippe Mathieu-Daudé" <f4bug@amsat.org>,
	qemu-arm <qemu-arm@nongnu.org>,
	"Alex Bennée" <alex.bennee@linaro.org>
Subject: Re: Emulate Rpi with QEMU fails
Date: Thu, 8 Oct 2020 19:21:50 -0700
Message-ID: <CADBGO7_taH6z3x-Ab3rtxUJ_FrFL3ULexO=CJsMoynbvCGazaw@mail.gmail.com>
In-Reply-To: <CADBGO79XkF7hAxDmrPm9Za16rXPHbB2L6xD2zr8puDLQp+z0Fw@mail.gmail.com>

Running 'top -H' (to show individual threads), I see qemu using 6 to 7
threads while running, with each thread taking roughly 15% to 70% of
CPU time. So qemu is probably not able to spread the work across the
threads evenly enough to use all of the available CPU time.
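
For anyone reproducing this, a minimal sketch of how to get that
per-thread view (matching on the binary name via pgrep is an
assumption, adjust as needed):

  # live per-thread CPU usage of the emulator
  top -H -p "$(pgrep -f qemu-system-arm | head -1)"

  # one-shot per-thread sample over 5 seconds (sysstat's pidstat)
  pidstat -t -p "$(pgrep -f qemu-system-arm | head -1)" 5 1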

Might be an interesting area of investigation for someone motivated
enough :)

- Paul

On Thu, Oct 8, 2020 at 2:07 PM Paul Zimmerman <pauldzim@gmail.com> wrote:
>
> Hi Thomas,
>
> What does 'top' say while the emulation is running? I have an 8 cpu-thread
> system, yet 'top' never shows more than about 300% cpu. I would have
> thought it would get closer to 800% cpu. And running qemu as root with
> nice -20 doesn't seem to make much difference.
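>
> (A sketch of what that invocation looks like, plus a CPU-pinning variant
> I haven't actually tried; $ARGS stands in for the usual machine options:
>
>   sudo nice -n -20 qemu-system-arm $ARGS     # raise scheduling priority
>   sudo taskset -c 0-7 qemu-system-arm $ARGS  # pin to fixed host cores
> )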
>
> - Paul
>
> On Thu, Oct 8, 2020 at 12:00 AM Thomas <74cmonty@gmail.com> wrote:
> >
> > Interestingly enough, the top figures reported by perf look like yours:
> >
> > Samples: 6M of event 'cycles:u', Event count (approx.): 1936571734942
> > Overhead  Command          Shared Object       Symbol
> >    7,95%  qemu-system-arm  qemu-system-arm     [.] helper_lookup_tb_ptr
> >    4,16%  qemu-system-arm  qemu-system-arm     [.] cpu_get_tb_cpu_state
> >    2,52%  qemu-system-arm  libpthread-2.32.so  [.] __pthread_mutex_lock
> >    2,06%  qemu-system-arm  qemu-system-arm     [.] qht_lookup_custom
> >    1,66%  qemu-system-arm  qemu-system-arm     [.] tlb_set_page_with_attrs
> >    1,61%  qemu-system-arm  libpthread-2.32.so  [.] __pthread_mutex_unlock_usercnt
> >    1,59%  qemu-system-arm  qemu-system-arm     [.] get_phys_addr
> >    1,27%  qemu-system-arm  qemu-system-arm     [.] cpu_exec
> >    1,23%  qemu-system-arm  qemu-system-arm     [.] object_class_dynamic_cast_assert
> >    0,98%  qemu-system-arm  libc-2.32.so        [.] _int_malloc
> >    0,95%  qemu-system-arm  qemu-system-arm     [.] object_dynamic_cast_assert
> >    0,95%  qemu-system-arm  qemu-system-arm     [.] tb_htable_lookup
> >    0,92%  qemu-system-arm  qemu-system-arm     [.] tcg_gen_code
> >    0,91%  qemu-system-arm  qemu-system-arm     [.] qemu_mutex_lock_impl
> >    0,75%  qemu-system-arm  qemu-system-arm     [.] get_page_addr_code_hostp
> >    0,73%  qemu-system-arm  qemu-system-arm     [.] tcg_optimize
> >    0,71%  qemu-system-arm  qemu-system-arm     [.] qemu_mutex_unlock_impl
> >    0,69%  qemu-system-arm  libc-2.32.so        [.] _int_free
> >    0,64%  qemu-system-arm  qemu-system-arm     [.] tb_flush_jmp_cache
> >    0,63%  qemu-system-arm  qemu-system-arm     [.] address_space_ldl_le
> >    0,62%  qemu-system-arm  libpthread-2.32.so  [.] __lll_lock_wait
> >    0,58%  qemu-system-arm  libpthread-2.32.so  [.] pthread_cond_wait@@GLIBC_2.3.2
> >    0,52%  qemu-system-arm  qemu-system-arm     [.] cpu_reset_interrupt
> >    0,52%  qemu-system-arm  libc-2.32.so        [.] cfree@GLIBC_2.2.5
> >    0,52%  qemu-system-arm  qemu-system-arm     [.] qemu_set_irq
> >
> > However, the absolute boot time is still 3-4 minutes,
> > and I don't know where I should start optimizing now.
> >
> >
> >
> > On 07.10.20 at 14:02, Alex Bennée wrote:
> > > Thomas Schneider <74cmonty@gmail.com> writes:
> > >
> > >> Are you referring to this tool?
> > >> https://github.com/stefano-garzarella/qemu-boot-time
> > > No - just plain perf:
> > >
> > >   perf record $QEMU $ARGS
> > >
> > > Then run "perf report", which will show you the hotspots, for example:
> > >
> > >    8.92%  qemu-system-aar  qemu-system-aarch64      [.] helper_lookup_tb_ptr
> > >    4.76%  qemu-system-aar  qemu-system-aarch64      [.] liveness_pass_1
> > >    3.69%  qemu-system-aar  qemu-system-aarch64      [.] tcg_gen_code
> > >    2.95%  qemu-system-aar  qemu-system-aarch64      [.] qht_lookup_custom
> > >    2.93%  qemu-system-aar  qemu-system-aarch64      [.] tcg_optimize
> > >    1.28%  qemu-system-aar  qemu-system-aarch64      [.] tcg_out_opc.isra.15
> > >    1.09%  qemu-system-aar  qemu-system-aarch64      [.] get_phys_addr_lpae
> > >    1.09%  qemu-system-aar  [kernel.kallsyms]        [k] isolate_freepages_block
> > >    1.05%  qemu-system-aar  qemu-system-aarch64      [.] cpu_get_tb_cpu_state
> > >    0.98%  qemu-system-aar  [kernel.kallsyms]        [k] do_syscall_64
> > >    0.94%  qemu-system-aar  qemu-system-aarch64      [.] tb_lookup_cmp
> > >    0.78%  qemu-system-aar  qemu-system-aarch64      [.] init_ts_info
> > >    0.73%  qemu-system-aar  qemu-system-aarch64      [.] tb_htable_lookup
> > >    0.73%  qemu-system-aar  qemu-system-aarch64      [.] tb_gen_code
> > >    0.73%  qemu-system-aar  qemu-system-aarch64      [.] tlb_set_page_with_attrs
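> > >
> > > To drill into any one of those hotspots, something like
> > >
> > >   perf annotate helper_lookup_tb_ptr
> > >
> > > shows the per-instruction cycle breakdown for that symbol (assuming
> > > perf.data was recorded with symbols available).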
> > >
> > >>
> > >> On 07.10.2020 at 13:00, Alex Bennée wrote:
> > >>> perf to record your boot
> > >
> >



Thread overview: 19+ messages
2020-10-03 11:45 Emulate Rpi with QEMU fails Thomas
2020-10-04 17:44 ` Alex Bennée
2020-10-04 18:40   ` Peter Maydell
2020-10-05  9:40     ` Alex Bennée
2020-10-05 10:51       ` Thomas Schneider
2020-10-05 22:08         ` Paul Zimmerman
2020-10-06  6:58           ` Thomas Schneider
2020-10-06  7:42             ` Paul Zimmerman
2020-10-06  9:58             ` Alex Bennée
2020-10-07  6:28               ` Thomas
2020-10-07  6:50                 ` Paul Zimmerman
2020-10-07  7:27                   ` Thomas Schneider
2020-10-07 11:00                     ` Alex Bennée
2020-10-07 11:36                       ` Thomas Schneider
2020-10-07 12:02                         ` Alex Bennée
2020-10-08  7:00                           ` Thomas
2020-10-08 21:07                             ` Paul Zimmerman
2020-10-09  2:21                               ` Paul Zimmerman [this message]
2020-10-09  6:20                             ` Alex Bennée
