From: Stefan Hajnoczi <stefanha@redhat.com>
To: Daniele Buono <dbuono@linux.vnet.ibm.com>
Cc: "Kevin Wolf" <kwolf@redhat.com>,
	atheurer@redhat.com, jhopper@redhat.com,
	"Tingting Mao" <timao@redhat.com>,
	qemu-devel@nongnu.org, "Honghao Wang" <wanghonghao@bytedance.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>,
	"Serge Guelton" <sguelton@redhat.com>
Subject: Re: [PATCH] coroutine: resize pool periodically instead of limiting size
Date: Thu, 9 Sep 2021 16:53:03 +0100	[thread overview]
Message-ID: <YTot306rqLhNO4wA@stefanha-x1.localdomain> (raw)
In-Reply-To: <677df3db-fb9d-f64d-62f5-98857db5e6e6@linux.vnet.ibm.com>

On Wed, Sep 08, 2021 at 10:30:09PM -0400, Daniele Buono wrote:
> Stefan, the patch looks great.
> Thank you for debugging the performance issue that was happening with
> SafeStack.
> 
> On 9/2/2021 4:10 AM, Stefan Hajnoczi wrote:
> > On Wed, Sep 01, 2021 at 05:09:23PM +0100, Stefan Hajnoczi wrote:
> > > It was reported that enabling SafeStack reduces IOPS significantly
> > > (>25%) with the following fio benchmark on virtio-blk using an NVMe host
> > > block device:
> > > 
> > >    # fio --rw=randrw --bs=4k --iodepth=64 --runtime=1m --direct=1 \
> > >          --filename=/dev/vdb --name=job1 --ioengine=libaio --thread \
> > >          --group_reporting --numjobs=16 --time_based \
> > >          --output=/tmp/fio_result
> > > 
> > > Serge Guelton and I found that SafeStack is not really at fault, it just
> > > increases the cost of coroutine creation. This fio workload exhausts the
> > > coroutine pool and coroutine creation becomes a bottleneck. Previous
> > > work by Honghao Wang also pointed to excessive coroutine creation.
> > > 
> > > Creating new coroutines is expensive due to allocating new stacks with
> > > mmap(2) and mprotect(2). Currently there are thread-local and global
> > > pools that recycle old Coroutine objects and their stacks but the
> > > hardcoded size limit of 64 for thread-local pools and 128 for the global
> > > pool is insufficient for the fio benchmark shown above.
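
(As an aside on why creation is so costly: below is a generic sketch of the
guarded-stack allocation pattern, not QEMU's actual stack allocation code.)

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Generic sketch only: each new coroutine needs a fresh anonymous mapping
 * for its stack plus a PROT_NONE guard page, i.e. two syscalls per stack. */
static void *alloc_guarded_stack(size_t size)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    void *base = mmap(NULL, size + page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) {
        return NULL;
    }
    /* Guard page at the low end catches overflow of the downward-growing stack. */
    if (mprotect(base, page, PROT_NONE) != 0) {
        munmap(base, size + page);
        return NULL;
    }
    return (char *)base + page;   /* usable stack begins above the guard */
}

The pool exists precisely to amortize those two syscalls across requests.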
> > > 
> > > This patch changes the coroutine pool algorithm to a simple thread-local
> > > pool without a size limit. Threads periodically shrink the pool down to
> > > a size sufficient for the maximum observed number of coroutines.
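
In other words, the shrink step amounts to roughly the following. The names
and types here are purely illustrative, not the identifiers used in the
actual patch:

#include <stdlib.h>

typedef struct FreeCoroutine {
    struct FreeCoroutine *next;
    /* ... stack pointer, saved context, etc. ... */
} FreeCoroutine;

static __thread FreeCoroutine *pool;    /* thread-local free list */
static __thread size_t pool_size;       /* coroutines currently pooled */
static __thread size_t max_in_use;      /* peak coroutines in use since last tick */

/* Stand-in for unmapping a coroutine's stack and freeing its state. */
static void destroy_coroutine(FreeCoroutine *co)
{
    free(co);
}

/* Periodic per-thread timer callback: trim the pool to recent peak demand. */
static void shrink_pool(void)
{
    while (pool_size > max_in_use) {
        FreeCoroutine *co = pool;
        pool = co->next;
        pool_size--;
        destroy_coroutine(co);
    }
    max_in_use = 0;   /* start measuring the next interval from scratch */
}

Because the target tracks recent peak demand, a sustained fio-style workload
keeps its coroutines while an idle thread eventually drops them.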
> > > 
> > > This is a very simple algorithm. Fancier things could be done like
> > > keeping a minimum number of coroutines around to avoid latency when a
> > > new coroutine is created after a long period of inactivity. Another
> > > thought is to stop the timer when the pool size is zero for power saving
> > > on threads that aren't using coroutines. However, I'd rather not add
> > > bells and whistles unless they are really necessary.
> 
> I agree we should aim at something that is as simple as possible.
> 
> However, keeping a minimum number of coroutines could be useful to avoid
> performance regressions relative to the previous version.
> 
> I wouldn't say restore the global pool - that's way too much trouble -
> but keeping the old "pool size" configuration parameter as the minimum
> pool size could be a good idea, to maintain the previous performance in
> cases where the dynamic pool shrinks too much.

Good point. We're taking a risk by freeing all coroutines when the timer
expires. It's safer to keep the old pool size limit as a minimum, to avoid
regressions.
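
Roughly, I'd expect that to mean clamping the shrink target against the old
limit. Reusing the illustrative names from the sketch above (POOL_MIN_SIZE
stands in for the old hardcoded thread-local value; none of this is the
final patch code):

#define POOL_MIN_SIZE 64   /* old hardcoded thread-local pool limit, now a floor */

static void shrink_pool_with_floor(void)
{
    size_t target = max_in_use > POOL_MIN_SIZE ? max_in_use : POOL_MIN_SIZE;

    while (pool_size > target) {
        FreeCoroutine *co = pool;
        pool = co->next;
        pool_size--;
        destroy_coroutine(co);
    }
    max_in_use = 0;
}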

I'll send another revision.

Stefan

Thread overview: 9+ messages
2021-09-01 16:09 [PATCH] coroutine: resize pool periodically instead of limiting size Stefan Hajnoczi
2021-09-02  8:10 ` Stefan Hajnoczi
2021-09-09  2:30   ` Daniele Buono
2021-09-09 15:53     ` Stefan Hajnoczi [this message]
2021-09-02 10:24 ` Philippe Mathieu-Daudé
2021-09-09  8:37 ` Daniel P. Berrangé
2021-09-09 15:51   ` Stefan Hajnoczi
2021-09-09 16:08     ` Daniel P. Berrangé
2021-09-13 12:40       ` Stefan Hajnoczi
