* [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Stefan Hajnoczi @ 2017-08-22 12:51 UTC
To: qemu-devel
Cc: Dr. David Alan Gilbert, qemu-block, Eric Blake, Paolo Bonzini,
Stefan Hajnoczi
The following scenario leads to an assertion failure in
qio_channel_yield():
1. Request coroutine calls qio_channel_yield() successfully when sending
would block on the socket. It is now yielded.
2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
nbd_receive_reply() failed.
3. Request coroutine is entered and returns from qio_channel_yield().
Note that the socket fd handler has not fired yet so
ioc->write_coroutine is still set.
4. Request coroutine attempts to send the request body with nbd_rwv()
but the socket would still block. qio_channel_yield() is called
again and assert(!ioc->write_coroutine) is hit.
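To make the failure mode concrete, here is a self-contained toy model of
that double yield (heavily simplified, with made-up names; the real
qio_channel_yield() lives in io/channel.c):

    #include <assert.h>
    #include <stdbool.h>

    /* Toy stand-in for ioc->write_coroutine: set while a coroutine is
     * parked waiting for the socket to become writable. */
    static bool write_waiter;

    static void toy_channel_yield(void)
    {
        /* Yielding again before the fd handler has cleared the previous
         * waiter is exactly step 4 above. */
        assert(!write_waiter);
        write_waiter = true;
    }

    static void toy_fd_handler(void)
    {
        write_waiter = false;
    }

    int main(void)
    {
        toy_channel_yield();   /* step 1: request coroutine parks */
        /* steps 2-3: the coroutine is woken spuriously, but
         * toy_fd_handler() has not run, so the waiter slot is still
         * occupied */
        toy_channel_yield();   /* step 4: assert(!write_waiter) fires */
        toy_fd_handler();
        return 0;
    }

Compiled and run, the second toy_channel_yield() call aborts, mirroring
the crash above.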
The problem is that nbd_read_reply_entry() does not distinguish between
request coroutines that are waiting to receive a reply and those that
are not.
This patch adds a per-request bool receiving flag so
nbd_read_reply_entry() can avoid spurious aio_wake() calls.
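For illustration, a minimal sketch of the resulting wake-up filter
(stand-in types only; the real ones are NBDClientRequest in
block/nbd-client.h and nbd_recv_coroutines_enter_all() in the patch
below):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define MAX_REQS 16

    typedef struct {
        void *coroutine;  /* non-NULL while the request slot is in use */
        bool receiving;   /* true only while parked waiting for a reply */
    } ToyRequest;

    static ToyRequest requests[MAX_REQS];

    /* Wake only coroutines that are actually waiting for a reply; a
     * coroutine blocked in socket I/O (receiving == false) is left
     * alone, so it cannot be re-entered spuriously. */
    static void toy_enter_all(void (*wake)(void *co))
    {
        for (size_t i = 0; i < MAX_REQS; i++) {
            if (requests[i].coroutine && requests[i].receiving) {
                wake(requests[i].coroutine);
            }
        }
    }

    static void print_wake(void *co)
    {
        printf("woke %s\n", (const char *)co);
    }

    int main(void)
    {
        requests[0] = (ToyRequest){ .coroutine = "reply waiter",
                                    .receiving = true };
        requests[1] = (ToyRequest){ .coroutine = "socket writer",
                                    .receiving = false };
        toy_enter_all(print_wake);  /* prints only "woke reply waiter" */
        return 0;
    }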
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
This should fix the issue that Dave is seeing but I'm concerned that
there are more problems in nbd-client.c. We don't have good
abstractions for writing coroutine socket I/O code. Something like Go's
channels would avoid manual low-level coroutine calls. There is
currently no way to cancel qio_channel_yield() so requests doing I/O may
remain in-flight indefinitely and nbd-client.c doesn't join them...
block/nbd-client.h | 7 ++++++-
block/nbd-client.c | 35 ++++++++++++++++++++++-------------
2 files changed, 28 insertions(+), 14 deletions(-)
diff --git a/block/nbd-client.h b/block/nbd-client.h
index 1935ffbcaa..b435754b82 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -17,6 +17,11 @@
#define MAX_NBD_REQUESTS 16
+typedef struct {
+ Coroutine *coroutine;
+ bool receiving; /* waiting for read_reply_co? */
+} NBDClientRequest;
+
typedef struct NBDClientSession {
QIOChannelSocket *sioc; /* The master data channel */
QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
@@ -27,7 +32,7 @@ typedef struct NBDClientSession {
Coroutine *read_reply_co;
int in_flight;
- Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
+ NBDClientRequest requests[MAX_NBD_REQUESTS];
NBDReply reply;
bool quit;
} NBDClientSession;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 422ecb4307..c2834f6b47 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
int i;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
- if (s->recv_coroutine[i]) {
- aio_co_wake(s->recv_coroutine[i]);
+ NBDClientRequest *req = &s->requests[i];
+
+ if (req->coroutine && req->receiving) {
+ aio_co_wake(req->coroutine);
}
}
}
@@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
* one coroutine is called until the reply finishes.
*/
i = HANDLE_TO_INDEX(s, s->reply.handle);
- if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
+ if (i >= MAX_NBD_REQUESTS ||
+ !s->requests[i].coroutine ||
+ !s->requests[i].receiving) {
break;
}
- /* We're woken up by the recv_coroutine itself. Note that there
+ /* We're woken up again by the request itself. Note that there
* is no race between yielding and reentering read_reply_co. This
* is because:
*
- * - if recv_coroutine[i] runs on the same AioContext, it is only
+ * - if the request runs on the same AioContext, it is only
* entered after we yield
*
- * - if recv_coroutine[i] runs on a different AioContext, reentering
+ * - if the request runs on a different AioContext, reentering
* read_reply_co happens through a bottom half, which can only
* run after we yield.
*/
- aio_co_wake(s->recv_coroutine[i]);
+ aio_co_wake(s->requests[i].coroutine);
qemu_coroutine_yield();
}
- if (ret < 0) {
- s->quit = true;
- }
+ s->quit = true;
nbd_recv_coroutines_enter_all(s);
s->read_reply_co = NULL;
}
@@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
- if (s->recv_coroutine[i] == NULL) {
- s->recv_coroutine[i] = qemu_coroutine_self();
+ if (s->requests[i].coroutine == NULL) {
break;
}
}
g_assert(qemu_in_coroutine());
assert(i < MAX_NBD_REQUESTS);
+
+ s->requests[i].coroutine = qemu_coroutine_self();
+ s->requests[i].receiving = false;
+
request->handle = INDEX_TO_HANDLE(s, i);
if (s->quit) {
@@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
NBDReply *reply,
QEMUIOVector *qiov)
{
+ int i = HANDLE_TO_INDEX(s, request->handle);
int ret;
/* Wait until we're woken up by nbd_read_reply_entry. */
+ s->requests[i].receiving = true;
qemu_coroutine_yield();
+ s->requests[i].receiving = false;
*reply = s->reply;
if (reply->handle != request->handle || !s->ioc || s->quit) {
reply->error = EIO;
@@ -186,6 +194,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
NULL);
if (ret != request->len) {
reply->error = EIO;
+ s->quit = true;
}
}
@@ -200,7 +209,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
NBDClientSession *s = nbd_get_client_session(bs);
int i = HANDLE_TO_INDEX(s, request->handle);
- s->recv_coroutine[i] = NULL;
+ s->requests[i].coroutine = NULL;
/* Kick the read_reply_co to get the next reply. */
if (s->read_reply_co) {
--
2.13.5
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Paolo Bonzini @ 2017-08-22 13:23 UTC
To: Stefan Hajnoczi, qemu-devel
Cc: Dr. David Alan Gilbert, qemu-block, Eric Blake
On 22/08/2017 14:51, Stefan Hajnoczi wrote:
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c. We don't have good
> abstractions for writing coroutine socket I/O code. Something like Go's
> channels would avoid manual low-level coroutine calls. There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
The idea was that shutdown(2) would force them to reenter...
Paolo
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Dr. David Alan Gilbert @ 2017-08-22 16:18 UTC
To: Stefan Hajnoczi; +Cc: qemu-devel, qemu-block, Eric Blake, Paolo Bonzini
* Stefan Hajnoczi (stefanha@redhat.com) wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
> would block on the socket. It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
> nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
> Note that the socket fd handler has not fired yet so
> ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
> but the socket would still block. qio_channel_yield() is called
> again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
With that patch, the assert does seem to go away, leaving just the
other failure we're seeing.
Dave
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c. We don't have good
> abstractions for writing coroutine socket I/O code. Something like Go's
> channels would avoid manual low-level coroutine calls. There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> block/nbd-client.h | 7 ++++++-
> block/nbd-client.c | 35 ++++++++++++++++++++++-------------
> 2 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 1935ffbcaa..b435754b82 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -17,6 +17,11 @@
>
> #define MAX_NBD_REQUESTS 16
>
> +typedef struct {
> + Coroutine *coroutine;
> + bool receiving; /* waiting for read_reply_co? */
> +} NBDClientRequest;
> +
> typedef struct NBDClientSession {
> QIOChannelSocket *sioc; /* The master data channel */
> QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
> @@ -27,7 +32,7 @@ typedef struct NBDClientSession {
> Coroutine *read_reply_co;
> int in_flight;
>
> - Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
> + NBDClientRequest requests[MAX_NBD_REQUESTS];
> NBDReply reply;
> bool quit;
> } NBDClientSession;
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 422ecb4307..c2834f6b47 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
> int i;
>
> for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> - if (s->recv_coroutine[i]) {
> - aio_co_wake(s->recv_coroutine[i]);
> + NBDClientRequest *req = &s->requests[i];
> +
> + if (req->coroutine && req->receiving) {
> + aio_co_wake(req->coroutine);
> }
> }
> }
> @@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
> * one coroutine is called until the reply finishes.
> */
> i = HANDLE_TO_INDEX(s, s->reply.handle);
> - if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
> + if (i >= MAX_NBD_REQUESTS ||
> + !s->requests[i].coroutine ||
> + !s->requests[i].receiving) {
> break;
> }
>
> - /* We're woken up by the recv_coroutine itself. Note that there
> + /* We're woken up again by the request itself. Note that there
> * is no race between yielding and reentering read_reply_co. This
> * is because:
> *
> - * - if recv_coroutine[i] runs on the same AioContext, it is only
> + * - if the request runs on the same AioContext, it is only
> * entered after we yield
> *
> - * - if recv_coroutine[i] runs on a different AioContext, reentering
> + * - if the request runs on a different AioContext, reentering
> * read_reply_co happens through a bottom half, which can only
> * run after we yield.
> */
> - aio_co_wake(s->recv_coroutine[i]);
> + aio_co_wake(s->requests[i].coroutine);
> qemu_coroutine_yield();
> }
>
> - if (ret < 0) {
> - s->quit = true;
> - }
> + s->quit = true;
> nbd_recv_coroutines_enter_all(s);
> s->read_reply_co = NULL;
> }
> @@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
> s->in_flight++;
>
> for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> - if (s->recv_coroutine[i] == NULL) {
> - s->recv_coroutine[i] = qemu_coroutine_self();
> + if (s->requests[i].coroutine == NULL) {
> break;
> }
> }
>
> g_assert(qemu_in_coroutine());
> assert(i < MAX_NBD_REQUESTS);
> +
> + s->requests[i].coroutine = qemu_coroutine_self();
> + s->requests[i].receiving = false;
> +
> request->handle = INDEX_TO_HANDLE(s, i);
>
> if (s->quit) {
> @@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
> NBDReply *reply,
> QEMUIOVector *qiov)
> {
> + int i = HANDLE_TO_INDEX(s, request->handle);
> int ret;
>
> /* Wait until we're woken up by nbd_read_reply_entry. */
> + s->requests[i].receiving = true;
> qemu_coroutine_yield();
> + s->requests[i].receiving = false;
> *reply = s->reply;
> if (reply->handle != request->handle || !s->ioc || s->quit) {
> reply->error = EIO;
> @@ -186,6 +194,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
> NULL);
> if (ret != request->len) {
> reply->error = EIO;
> + s->quit = true;
> }
> }
>
> @@ -200,7 +209,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
> NBDClientSession *s = nbd_get_client_session(bs);
> int i = HANDLE_TO_INDEX(s, request->handle);
>
> - s->recv_coroutine[i] = NULL;
> + s->requests[i].coroutine = NULL;
>
> /* Kick the read_reply_co to get the next reply. */
> if (s->read_reply_co) {
> --
> 2.13.5
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Eric Blake @ 2017-08-23 14:20 UTC
To: Stefan Hajnoczi, qemu-devel
Cc: Dr. David Alan Gilbert, qemu-block, Paolo Bonzini
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
> would block on the socket. It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
> nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
> Note that the socket fd handler has not fired yet so
> ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
> but the socket would still block. qio_channel_yield() is called
> again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c. We don't have good
> abstractions for writing coroutine socket I/O code. Something like Go's
> channels would avoid manual low-level coroutine calls. There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
Is this patch needed for 2.10-rc4, or does Fam's series cover the issue?
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
* Re: [Qemu-devel] [Qemu-block] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Stefan Hajnoczi @ 2017-08-23 14:26 UTC
To: Eric Blake
Cc: Stefan Hajnoczi, qemu-devel, Paolo Bonzini,
Dr. David Alan Gilbert, qemu-block
On Wed, Aug 23, 2017 at 3:20 PM, Eric Blake <eblake@redhat.com> wrote:
> On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
>> The following scenario leads to an assertion failure in
>> qio_channel_yield():
>>
>> 1. Request coroutine calls qio_channel_yield() successfully when sending
>> would block on the socket. It is now yielded.
>> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>> nbd_receive_reply() failed.
>> 3. Request coroutine is entered and returns from qio_channel_yield().
>> Note that the socket fd handler has not fired yet so
>> ioc->write_coroutine is still set.
>> 4. Request coroutine attempts to send the request body with nbd_rwv()
>> but the socket would still block. qio_channel_yield() is called
>> again and assert(!ioc->write_coroutine) is hit.
>>
>> The problem is that nbd_read_reply_entry() does not distinguish between
>> request coroutines that are waiting to receive a reply and those that
>> are not.
>>
>> This patch adds a per-request bool receiving flag so
>> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>>
>> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>> This should fix the issue that Dave is seeing but I'm concerned that
>> there are more problems in nbd-client.c. We don't have good
>> abstractions for writing coroutine socket I/O code. Something like Go's
>> channels would avoid manual low-level coroutine calls. There is
>> currently no way to cancel qio_channel_yield() so requests doing I/O may
>> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> Is this patch needed for 2.10-rc4, or does Fam's series cover the issue?
Fam's series fixes non-shared storage migration.
This patch addresses the failure case when the server closes the
connection prematurely.
Stefan
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Stefan Hajnoczi @ 2017-08-23 14:45 UTC
To: Paolo Bonzini; +Cc: qemu-devel, Dr. David Alan Gilbert, qemu-block, Eric Blake
On Tue, Aug 22, 2017 at 03:23:32PM +0200, Paolo Bonzini wrote:
> On 22/08/2017 14:51, Stefan Hajnoczi wrote:
> > This should fix the issue that Dave is seeing but I'm concerned that
> > there are more problems in nbd-client.c. We don't have good
> > abstractions for writing coroutine socket I/O code. Something like Go's
> > channels would avoid manual low-level coroutine calls. There is
> > currently no way to cancel qio_channel_yield() so requests doing I/O may
> > remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> The idea was that shutdown(2) would force them to reenter...
That depends on the BDRV_POLL_WHILE() allowing all request coroutines to
terminate before we call nbd_client_detach_aio_context():
qio_channel_shutdown(client->ioc,
QIO_CHANNEL_SHUTDOWN_BOTH,
NULL);
BDRV_POLL_WHILE(bs, client->read_reply_co);
nbd_client_detach_aio_context(bs);
I'm not sure we have any guarantee that request coroutines will have
terminated.
Once nbd_client_detach_aio_context() is called
ioc->read_coroutine/write_coroutine are set to NULL. At that point any
remaining coroutine doing I/O on ioc will be in trouble.
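As a toy illustration of that hazard (every name here is made up, not
the real QEMU API):

    #include <stdio.h>
    #include <stddef.h>

    /* A waiter whose registration is cleared before it is woken loses
     * its only wakeup path. */
    typedef struct {
        void (*wake)(void);   /* stands in for ioc->write_coroutine */
    } ToyChannel;

    static void request_wakeup(void)
    {
        printf("request coroutine re-entered\n");
    }

    static void toy_detach_aio_context(ToyChannel *ch)
    {
        ch->wake = NULL;      /* analogue of clearing ioc->*_coroutine */
    }

    int main(void)
    {
        ToyChannel ch = { .wake = request_wakeup };
        toy_detach_aio_context(&ch);   /* detach runs before the wakeup */
        if (ch.wake) {
            ch.wake();
        } else {
            printf("waiter stranded: nothing will re-enter it\n");
        }
        return 0;
    }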
Stefan
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Eric Blake @ 2017-08-23 14:51 UTC
To: Stefan Hajnoczi, qemu-devel
Cc: Dr. David Alan Gilbert, qemu-block, Paolo Bonzini,
Vladimir Sementsov-Ogievskiy
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
> would block on the socket. It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
> nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
> Note that the socket fd handler has not fired yet so
> ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
> but the socket would still block. qio_channel_yield() is called
> again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c. We don't have good
> abstractions for writing coroutine socket I/O code. Something like Go's
> channels would avoid manual low-level coroutine calls. There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
Vladimir has some cleanups that rewrite the NBD coroutines to be more
legible, but it is invasive enough to be 2.11 material. I think that
for a stop-gap of getting 2.10 out the door, we may be better off
including this patch - but I would still like some positive review from
more than just me. There's not much time left before I need to send the
-rc4 NBD pull request, though.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Vladimir Sementsov-Ogievskiy @ 2017-08-23 15:31 UTC
To: qemu-devel
22.08.2017 15:51, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
> would block on the socket. It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
> nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
> Note that the socket fd handler has not fired yet so
> ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
> but the socket would still block. qio_channel_yield() is called
> again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> This should fix the issue that Dave is seeing but I'm concerned that
> there are more problems in nbd-client.c. We don't have good
> abstractions for writing coroutine socket I/O code. Something like Go's
> channels would avoid manual low-level coroutine calls. There is
> currently no way to cancel qio_channel_yield() so requests doing I/O may
> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> block/nbd-client.h | 7 ++++++-
> block/nbd-client.c | 35 ++++++++++++++++++++++-------------
> 2 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 1935ffbcaa..b435754b82 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -17,6 +17,11 @@
>
> #define MAX_NBD_REQUESTS 16
>
> +typedef struct {
> + Coroutine *coroutine;
> + bool receiving; /* waiting for read_reply_co? */
> +} NBDClientRequest;
> +
> typedef struct NBDClientSession {
> QIOChannelSocket *sioc; /* The master data channel */
> QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
> @@ -27,7 +32,7 @@ typedef struct NBDClientSession {
> Coroutine *read_reply_co;
> int in_flight;
>
> - Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
> + NBDClientRequest requests[MAX_NBD_REQUESTS];
> NBDReply reply;
> bool quit;
> } NBDClientSession;
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 422ecb4307..c2834f6b47 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -39,8 +39,10 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
> int i;
>
> for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> - if (s->recv_coroutine[i]) {
> - aio_co_wake(s->recv_coroutine[i]);
> + NBDClientRequest *req = &s->requests[i];
> +
> + if (req->coroutine && req->receiving) {
> + aio_co_wake(req->coroutine);
> }
> }
> }
> @@ -88,28 +90,28 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
> * one coroutine is called until the reply finishes.
> */
> i = HANDLE_TO_INDEX(s, s->reply.handle);
> - if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
> + if (i >= MAX_NBD_REQUESTS ||
> + !s->requests[i].coroutine ||
> + !s->requests[i].receiving) {
> break;
> }
>
> - /* We're woken up by the recv_coroutine itself. Note that there
> + /* We're woken up again by the request itself. Note that there
> * is no race between yielding and reentering read_reply_co. This
> * is because:
> *
> - * - if recv_coroutine[i] runs on the same AioContext, it is only
> + * - if the request runs on the same AioContext, it is only
> * entered after we yield
> *
> - * - if recv_coroutine[i] runs on a different AioContext, reentering
> + * - if the request runs on a different AioContext, reentering
> * read_reply_co happens through a bottom half, which can only
> * run after we yield.
> */
> - aio_co_wake(s->recv_coroutine[i]);
> + aio_co_wake(s->requests[i].coroutine);
> qemu_coroutine_yield();
> }
>
> - if (ret < 0) {
> - s->quit = true;
> - }
> + s->quit = true;
Good: this fixes the case when the "if (i >= MAX...)" check gets here
without ret having been set.
> nbd_recv_coroutines_enter_all(s);
> s->read_reply_co = NULL;
> }
> @@ -128,14 +130,17 @@ static int nbd_co_send_request(BlockDriverState *bs,
> s->in_flight++;
>
> for (i = 0; i < MAX_NBD_REQUESTS; i++) {
> - if (s->recv_coroutine[i] == NULL) {
> - s->recv_coroutine[i] = qemu_coroutine_self();
> + if (s->requests[i].coroutine == NULL) {
> break;
> }
> }
>
> g_assert(qemu_in_coroutine());
> assert(i < MAX_NBD_REQUESTS);
> +
> + s->requests[i].coroutine = qemu_coroutine_self();
> + s->requests[i].receiving = false;
> +
> request->handle = INDEX_TO_HANDLE(s, i);
>
> if (s->quit) {
> @@ -173,10 +178,13 @@ static void nbd_co_receive_reply(NBDClientSession *s,
> NBDReply *reply,
> QEMUIOVector *qiov)
> {
> + int i = HANDLE_TO_INDEX(s, request->handle);
> int ret;
>
> /* Wait until we're woken up by nbd_read_reply_entry. */
> + s->requests[i].receiving = true;
> qemu_coroutine_yield();
> + s->requests[i].receiving = false;
> *reply = s->reply;
> if (reply->handle != request->handle || !s->ioc || s->quit) {
> reply->error = EIO;
> @@ -186,6 +194,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
> NULL);
> if (ret != request->len) {
> reply->error = EIO;
> + s->quit = true;
As I understand it, some fixes around s->quit are merged into this patch
that are actually unrelated to the described problem.
Anyway, setting quit here should do no harm (I set the corresponding
eio_to_all variable in my series on each error and check it after each
possible yield).
> }
> }
>
> @@ -200,7 +209,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
> NBDClientSession *s = nbd_get_client_session(bs);
> int i = HANDLE_TO_INDEX(s, request->handle);
>
> - s->recv_coroutine[i] = NULL;
> + s->requests[i].coroutine = NULL;
>
> /* Kick the read_reply_co to get the next reply. */
> if (s->read_reply_co) {
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
--
Best regards,
Vladimir
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Eric Blake @ 2017-08-23 15:45 UTC
To: Stefan Hajnoczi, qemu-devel
Cc: Dr. David Alan Gilbert, qemu-block, Paolo Bonzini
On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
> The following scenario leads to an assertion failure in
> qio_channel_yield():
>
> 1. Request coroutine calls qio_channel_yield() successfully when sending
> would block on the socket. It is now yielded.
> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
> nbd_receive_reply() failed.
> 3. Request coroutine is entered and returns from qio_channel_yield().
> Note that the socket fd handler has not fired yet so
> ioc->write_coroutine is still set.
> 4. Request coroutine attempts to send the request body with nbd_rwv()
> but the socket would still block. qio_channel_yield() is called
> again and assert(!ioc->write_coroutine) is hit.
>
> The problem is that nbd_read_reply_entry() does not distinguish between
> request coroutines that are waiting to receive a reply and those that
> are not.
>
> This patch adds a per-request bool receiving flag so
> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>
> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Using the steps in
https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg03853.html,
I've verified that this avoids the hang that is otherwise present, so
I'm adding:
Tested-by: Eric Blake <eblake@redhat.com>
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Paolo Bonzini @ 2017-08-23 16:15 UTC
To: Stefan Hajnoczi
Cc: qemu-devel, Dr. David Alan Gilbert, qemu-block, Eric Blake
On 23/08/2017 16:45, Stefan Hajnoczi wrote:
> That depends on the BDRV_POLL_WHILE() allowing all request coroutines to
> terminate before we call nbd_client_detach_aio_context():
>
> qio_channel_shutdown(client->ioc,
> QIO_CHANNEL_SHUTDOWN_BOTH,
> NULL);
> BDRV_POLL_WHILE(bs, client->read_reply_co);
>
> nbd_client_detach_aio_context(bs);
>
> I'm not sure we have any guarantee that request coroutines will have
> terminated.
OK, I see my confusion: it's only because of the "receiving" flag, which
actually means "waiting for a reply". Your patch is okay.
Paolo
> Once nbd_client_detach_aio_context() is called
> ioc->read_coroutine/write_coroutine are set to NULL. At that point any
> remaining coroutine doing I/O on ioc will be in trouble.
* Re: [Qemu-devel] [PATCH] nbd-client: avoid spurious qio_channel_yield() re-entry
From: Paolo Bonzini @ 2017-08-23 16:17 UTC
To: Eric Blake, Stefan Hajnoczi, qemu-devel
Cc: Dr. David Alan Gilbert, qemu-block, Vladimir Sementsov-Ogievskiy
On 23/08/2017 16:51, Eric Blake wrote:
> On 08/22/2017 07:51 AM, Stefan Hajnoczi wrote:
>> The following scenario leads to an assertion failure in
>> qio_channel_yield():
>>
>> 1. Request coroutine calls qio_channel_yield() successfully when sending
>> would block on the socket. It is now yielded.
>> 2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
>> nbd_receive_reply() failed.
>> 3. Request coroutine is entered and returns from qio_channel_yield().
>> Note that the socket fd handler has not fired yet so
>> ioc->write_coroutine is still set.
>> 4. Request coroutine attempts to send the request body with nbd_rwv()
>> but the socket would still block. qio_channel_yield() is called
>> again and assert(!ioc->write_coroutine) is hit.
>>
>> The problem is that nbd_read_reply_entry() does not distinguish between
>> request coroutines that are waiting to receive a reply and those that
>> are not.
>>
>> This patch adds a per-request bool receiving flag so
>> nbd_read_reply_entry() can avoid spurious aio_wake() calls.
>>
>> Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>> This should fix the issue that Dave is seeing but I'm concerned that
>> there are more problems in nbd-client.c. We don't have good
>> abstractions for writing coroutine socket I/O code. Something like Go's
>> channels would avoid manual low-level coroutine calls. There is
>> currently no way to cancel qio_channel_yield() so requests doing I/O may
>> remain in-flight indefinitely and nbd-client.c doesn't join them...
>
> Vladimir has some cleanups that rewrite the NBD coroutines to be more
> legible, but it is invasive enough to be 2.11 material. I think that
> for a stop-gap of getting 2.10 out the door, we may be better off
> including this patch - but I would still like some positive review from
> more than just me. There's not much time left before I need to send the
> -rc4 NBD pull request, though.
>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>