From: Kevin Wolf <kwolf@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-block@nongnu.org, mreitz@redhat.com, eblake@redhat.com,
	stefanha@redhat.com, berrange@redhat.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 07/12] nbd: Increase bs->in_flight during AioContext switch
Date: Tue, 19 Feb 2019 12:11:16 +0100	[thread overview]
Message-ID: <20190219111116.GH4727@localhost.localdomain> (raw)
In-Reply-To: <9fe39150-f308-8f2e-0c9d-6d0a31d9329d@redhat.com>

On 18.02.2019 at 18:22, Paolo Bonzini wrote:
> On 18/02/19 17:18, Kevin Wolf wrote:
> > +            /* aio_ctx_switch is only supposed to be set if we're sitting in
> > +             * the qio_channel_yield() below. */
> > +            assert(!*aio_ctx_switch);
> >              bdrv_dec_in_flight(bs);
> >              qio_channel_yield(ioc, G_IO_IN);
> > -            bdrv_inc_in_flight(bs);
> > +            if (*aio_ctx_switch) {
> > +                /* nbd_client_attach_aio_context() already increased in_flight
> > +                 * when scheduling this coroutine for reentry */
> > +                *aio_ctx_switch = false;
> > +            } else {
> > +                bdrv_inc_in_flight(bs);
> > +            }
> 
> Hmm, my first thought would have been to do the bdrv_inc_in_flight(bs);
> unconditionally here, and in nbd_connection_entry do the opposite, like
> 
> 	if (s->aio_ctx_switch) {
> 	    s->aio_ctx_switch = false;
> 	    bdrv_dec_in_flight(bs);
> 	}
> 
> but I guess the problem is that then bdrv_drain could hang.

Yes, the important part is that in_flight can drop to 0 while we're in
qio_channel_yield().
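
To spell it out, draining essentially boils down to a poll loop like the
following (a heavily simplified sketch; the real code goes through
BDRV_POLL_WHILE()/AIO_WAIT_WHILE(), this is only the shape of the condition):

    /* Rough sketch of what bdrv_drained_begin() ends up waiting for;
     * not the literal QEMU code. */
    while (atomic_read(&bs->in_flight) > 0) {
        aio_poll(bdrv_get_aio_context(bs), true);
    }

If the idle connection coroutine kept its in_flight reference while parked
in qio_channel_yield(), this condition could never become false and the
drain would spin forever.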

> So my question is:
> 
> 1) is there a testcase that shows the problem with this "obvious"
> refactoring;

I haven't actually tried it out because it's "obviously" wrong, but in
any test case where no requests are running, you'd never leave this
loop, so it should trivially trigger the problem.

In other words, I think qemu-iotests 094 covers this.

> 2) maybe instead of aio_co_schedule()-ing client->connection_co and having
> the s->aio_ctx_switch flag, you could go through a bottom half that does
> the bdrv_inc_in_flight and then enters client->connection_co?

That would be too easy. :-)

But I agree, that might indeed be the better solution.
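
Just to make it concrete, this is roughly how I'd read your suggestion in the
context of block/nbd-client.c, assuming the loop above goes back to doing
bdrv_inc_in_flight() unconditionally after qio_channel_yield(). This is only a
sketch, not a finished patch: the BH name is made up, the helpers used
(nbd_get_client_session(), bdrv_inc/dec_in_flight(), qemu_aio_coroutine_enter(),
aio_bh_schedule_oneshot()) are the existing ones, and whether the increment
already needs to happen when the BH is scheduled rather than when it runs is
left open here:

    /* Hypothetical bottom half run in the new AioContext after attach */
    static void nbd_attach_aio_context_bh(void *opaque)
    {
        BlockDriverState *bs = opaque;
        NBDClientSession *client = nbd_get_client_session(bs);

        /* Hold an in_flight reference across the reentry so a concurrent
         * drain can't see the count drop to 0 while the coroutine moves. */
        bdrv_inc_in_flight(bs);
        /* Wake the coroutine in its qio_channel_yield(); the receive loop
         * then takes its own reference again, so no aio_ctx_switch flag is
         * needed. */
        qemu_aio_coroutine_enter(bdrv_get_aio_context(bs),
                                 client->connection_co);
        /* Control is back here once the coroutine has yielded again; drop
         * the temporary reference taken above. */
        bdrv_dec_in_flight(bs);
    }

    /* ...and in nbd_client_attach_aio_context(), instead of aio_co_schedule(): */
    aio_bh_schedule_oneshot(new_context, nbd_attach_aio_context_bh, bs);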

I think I'd keep patch 6 anyway so that we know the exact yield that
we'll interrupt, even if it's not strictly necessary as long as we know
that nbd_receive_reply() can only yield in places that are safe to be
interrupted. While intuitively I think it's true, I don't feel like
actually auditing the code, and at some point we'd probably fail to
check that new code won't violate this invariant.

Kevin


Thread overview: 30+ messages
2019-02-18 16:18 [Qemu-devel] [PATCH 00/12] block: bdrv_set_aio_context() related fixes Kevin Wolf
2019-02-18 16:18 ` [Qemu-devel] [PATCH 01/12] block-backend: Make blk_inc/dec_in_flight public Kevin Wolf
2019-02-18 16:18 ` [Qemu-devel] [PATCH 02/12] virtio-blk: Increase in_flight for request restart BH Kevin Wolf
2019-02-18 16:18 ` [Qemu-devel] [PATCH 03/12] nbd: Restrict connection_co reentrance Kevin Wolf
2019-02-18 20:30   ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 04/12] io: Make qio_channel_yield() interruptible Kevin Wolf
2019-02-18 17:11   ` Paolo Bonzini
2019-02-19 10:24     ` Kevin Wolf
2019-02-18 20:33   ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 05/12] nbd: Move nbd_read_eof() to nbd/client.c Kevin Wolf
2019-02-18 20:42   ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 06/12] nbd: Use low-level QIOChannel API in nbd_read_eof() Kevin Wolf
2019-02-18 20:48   ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 07/12] nbd: Increase bs->in_flight during AioContext switch Kevin Wolf
2019-02-18 17:22   ` Paolo Bonzini
2019-02-19 11:11     ` Kevin Wolf [this message]
2019-02-19 14:12       ` Paolo Bonzini
2019-02-20 16:33     ` Kevin Wolf
2019-02-18 16:18 ` [Qemu-devel] [PATCH 08/12] block: Don't poll in bdrv_set_aio_context() Kevin Wolf
2019-02-18 20:52   ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 09/12] block: Fix AioContext switch for drained node Kevin Wolf
2019-02-18 20:53   ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 10/12] test-bdrv-drain: AioContext switch in drained section Kevin Wolf
2019-02-18 20:55   ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 11/12] block: Use normal drain for bdrv_set_aio_context() Kevin Wolf
2019-02-18 20:57   ` Eric Blake
2019-02-19 11:23     ` Kevin Wolf
2019-02-19 16:04       ` Eric Blake
2019-02-18 16:18 ` [Qemu-devel] [PATCH 12/12] aio-posix: Assert that aio_poll() is always called in home thread Kevin Wolf
2019-02-18 20:58   ` Eric Blake
