* [Qemu-devel] [RFC PATCH 0/3] aio-context start to eliminate io_flush
@ 2013-03-28 21:52 Anthony Liguori
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy" Anthony Liguori
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Anthony Liguori @ 2013-03-28 21:52 UTC
To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Mike Roth
Mike and I spent some time talking about how to move forward with
AioContext vs. glib. I think the best proposal so far is to add
a common API on top of the *aio* interfaces and the existing
qemu_set_fd_handler main loop functions.
That gets us code sharing while still allowing multiple main loop
implementations until we can converge on a single one, since the users
will at least be consuming the same interface.
The only thing standing in the way is the inability to map the semantics
of io_flush. I think an easy way to solve this is to keep track of the
events registered through the *aio* interfaces and let aio_flush() block
as long as any event is registered.
That's precisely what this series starts to do. It's untested and
extremely incomplete but I wanted to get some feedback first before we
put any serious effort into this and make sure it's an agreeable
approach.
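The bookkeeping described above can be modeled with a small self-contained sketch. All names here (`MiniAioContext`, `mini_aio_set_fd_handler`, and so on) are illustrative stand-ins, not the actual QEMU API; the point is only the counting scheme:

```c
#include <stdbool.h>
#include <stddef.h>

#define MINI_MAX_FDS 8

/* Toy model of the cover letter's idea: every fd handler registered
 * through the aio interface counts as a pending event, and flushing
 * stays "busy" while any remain registered. */
typedef struct {
    void (*handler[MINI_MAX_FDS])(void *opaque);
    int registered;
} MiniAioContext;

/* Mirrors the shape of qemu_aio_set_fd_handler(): a non-NULL handler
 * registers the fd, NULL deregisters it. */
void mini_aio_set_fd_handler(MiniAioContext *ctx, int fd,
                             void (*handler)(void *opaque))
{
    if (handler && !ctx->handler[fd]) {
        ctx->registered++;
    } else if (!handler && ctx->handler[fd]) {
        ctx->registered--;
    }
    ctx->handler[fd] = handler;
}

/* aio_flush() under this scheme would simply loop while this is true. */
bool mini_aio_busy(const MiniAioContext *ctx)
{
    return ctx->registered > 0;
}

/* No-op handler so the model can be exercised. */
void mini_noop_handler(void *opaque)
{
    (void)opaque;
}
```

The interesting property is that drivers no longer need an io_flush predicate at all: registering and deregistering the handler is itself the busy signal.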
^ permalink raw reply [flat|nested] 13+ messages in thread
* [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy"
2013-03-28 21:52 [Qemu-devel] [RFC PATCH 0/3] aio-context start to eliminate io_flush Anthony Liguori
@ 2013-03-28 21:52 ` Anthony Liguori
2013-03-28 23:37 ` Paolo Bonzini
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush Anthony Liguori
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 3/3] rbd: remove aio handler when no requests are pending Anthony Liguori
2 siblings, 1 reply; 13+ messages in thread
From: Anthony Liguori @ 2013-03-28 21:52 UTC
To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Mike Roth
Today, all callers of qemu_aio_set_fd_handler() pass a valid io_flush
function. However, the function also allows io_flush to be omitted,
and the resulting behavior is a bit strange.
It will still add the file descriptor to the GSource but it will
not consider the source to be "busy". This could lead to aio_flush()
returning prematurely.
Since nothing relies on this behavior today, it doesn't matter, but
the next patch will start treating an absent io_flush function as
meaning the handler is always busy.
Cc: Paolo Bonzini <bonzini@redhat.com>
Cc: Mike Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
---
aio-posix.c | 4 ++--
aio-win32.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..a2349f6 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -208,8 +208,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
- if (!node->deleted && node->io_flush) {
- if (node->io_flush(node->opaque) == 0) {
+ if (!node->deleted) {
+ if (node->io_flush && node->io_flush(node->opaque) == 0) {
continue;
}
busy = true;
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..b02fd40 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -154,8 +154,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
- if (!node->deleted && node->io_flush) {
- if (node->io_flush(node->e) == 0) {
+ if (!node->deleted) {
+ if (node->io_flush && node->io_flush(node->e) == 0) {
continue;
}
busy = true;
--
1.8.0
* [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush
2013-03-28 21:52 [Qemu-devel] [RFC PATCH 0/3] aio-context start to eliminate io_flush Anthony Liguori
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy" Anthony Liguori
@ 2013-03-28 21:52 ` Anthony Liguori
2013-03-28 23:38 ` Paolo Bonzini
2013-04-02 8:37 ` Kevin Wolf
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 3/3] rbd: remove aio handler when no requests are pending Anthony Liguori
2 siblings, 2 replies; 13+ messages in thread
From: Anthony Liguori @ 2013-03-28 21:52 UTC
To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Mike Roth
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
---
block/sheepdog.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/block/sheepdog.c b/block/sheepdog.c
index bb67c4c..2bccd9b 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -503,13 +503,6 @@ static void restart_co_req(void *opaque)
qemu_coroutine_enter(co, NULL);
}
-static int have_co_req(void *opaque)
-{
- /* this handler is set only when there is a pending request, so
- * always returns 1. */
- return 1;
-}
-
typedef struct SheepdogReqCo {
int sockfd;
SheepdogReq *hdr;
@@ -532,14 +525,14 @@ static coroutine_fn void do_co_req(void *opaque)
unsigned int *rlen = srco->rlen;
co = qemu_coroutine_self();
- qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
+ qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
ret = send_co_req(sockfd, hdr, data, wlen);
if (ret < 0) {
goto out;
}
- qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);
+ qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
if (ret < sizeof(*hdr)) {
--
1.8.0
* [Qemu-devel] [RFC PATCH 3/3] rbd: remove aio handler when no requests are pending
2013-03-28 21:52 [Qemu-devel] [RFC PATCH 0/3] aio-context start to eliminate io_flush Anthony Liguori
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy" Anthony Liguori
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush Anthony Liguori
@ 2013-03-28 21:52 ` Anthony Liguori
2013-03-28 23:39 ` Paolo Bonzini
2 siblings, 1 reply; 13+ messages in thread
From: Anthony Liguori @ 2013-03-28 21:52 UTC
To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Mike Roth
This allows us to pass NULL for io_flush by removing the aio
handler when no requests are pending.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
---
block/rbd.c | 25 ++++++++++++++++---------
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 1a8ea6d..a7c3645 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -407,6 +407,18 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
g_free(rcb);
}
+static void qemu_rbd_aio_event_reader(void *opaque);
+
+static void qemu_rbd_update_aio_handler(BDRVRBDState *s)
+{
+ if (s->qemu_aio_count > 0) {
+ qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
+ NULL, NULL, s);
+ } else {
+ qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
+ }
+}
+
/*
* aio fd read handler. It runs in the qemu context and calls the
* completion handling of completed rados aio operations.
@@ -429,18 +441,12 @@ static void qemu_rbd_aio_event_reader(void *opaque)
s->event_reader_pos = 0;
qemu_rbd_complete_aio(s->event_rcb);
s->qemu_aio_count--;
+ qemu_rbd_update_aio_handler(s);
}
}
} while (ret < 0 && errno == EINTR);
}
-static int qemu_rbd_aio_flush_cb(void *opaque)
-{
- BDRVRBDState *s = opaque;
-
- return (s->qemu_aio_count > 0);
-}
-
static int qemu_rbd_open(BlockDriverState *bs, const char *filename,
QDict *options, int flags)
{
@@ -525,9 +531,8 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename,
}
fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
- qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
- NULL, qemu_rbd_aio_flush_cb, s);
+ qemu_rbd_update_aio_handler(s);
return 0;
@@ -701,6 +706,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
size = nb_sectors * BDRV_SECTOR_SIZE;
s->qemu_aio_count++; /* All the RADOSCB */
+ qemu_rbd_update_aio_handler(s);
rcb = g_malloc(sizeof(RADOSCB));
rcb->done = 0;
@@ -736,6 +742,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
failed:
g_free(rcb);
s->qemu_aio_count--;
+ qemu_rbd_update_aio_handler(s);
qemu_aio_release(acb);
return NULL;
}
--
1.8.0
* Re: [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy"
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy" Anthony Liguori
@ 2013-03-28 23:37 ` Paolo Bonzini
2013-04-02 8:34 ` Kevin Wolf
2013-04-08 15:28 ` Stefan Hajnoczi
0 siblings, 2 replies; 13+ messages in thread
From: Paolo Bonzini @ 2013-03-28 23:37 UTC
To: Anthony Liguori; +Cc: Kevin Wolf, Paolo Bonzini, qemu-devel, Mike Roth
On 28/03/2013 22:52, Anthony Liguori wrote:
> Today, all callers of qemu_aio_set_fd_handler() pass a valid io_flush
> function.
Except one:
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_test_and_clear, NULL);
This is the EventNotifier that is used by qemu_notify_event.
It's quite surprising that this patch works and passes the tests. /me
reads cover letter... ah, it is untested. :)
But if you can eliminate the sole use of aio_wait()'s return value (in
bdrv_drain_all()), everything becomes much simpler. There is a
relatively convenient
assert(QLIST_EMPTY(&bs->tracked_requests));
that you can use as the exit condition instead. Perhaps it's not
trivial to do it efficiently, but it's not a fast path.
Paolo
> However, the function allows the handler to be omitted
> and the behavior is a bit strange.
>
> It will still add the file descriptor to the GSource but it will
> not consider the source to be "busy". This could lead to aio_flush()
> returning prematurely.
>
> Since we never rely on this behavior today, it doesn't matter but
> the next patch will start relying on an absent io_flush function
> to assume the handler is always busy.
* Re: [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush Anthony Liguori
@ 2013-03-28 23:38 ` Paolo Bonzini
2013-04-02 8:37 ` Kevin Wolf
1 sibling, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2013-03-28 23:38 UTC
To: Anthony Liguori; +Cc: Kevin Wolf, qemu-devel, Mike Roth
On 28/03/2013 22:52, Anthony Liguori wrote:
> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> ---
> block/sheepdog.c | 11 ++---------
> 1 file changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/block/sheepdog.c b/block/sheepdog.c
> index bb67c4c..2bccd9b 100644
> --- a/block/sheepdog.c
> +++ b/block/sheepdog.c
> @@ -503,13 +503,6 @@ static void restart_co_req(void *opaque)
> qemu_coroutine_enter(co, NULL);
> }
>
> -static int have_co_req(void *opaque)
> -{
> - /* this handler is set only when there is a pending request, so
> - * always returns 1. */
> - return 1;
> -}
> -
> typedef struct SheepdogReqCo {
> int sockfd;
> SheepdogReq *hdr;
> @@ -532,14 +525,14 @@ static coroutine_fn void do_co_req(void *opaque)
> unsigned int *rlen = srco->rlen;
>
> co = qemu_coroutine_self();
> - qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
> + qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
>
> ret = send_co_req(sockfd, hdr, data, wlen);
> if (ret < 0) {
> goto out;
> }
>
> - qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);
> + qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
>
> ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
> if (ret < sizeof(*hdr)) {
>
There is another one in hw/dataplane/virtio-blk.c, btw.
Paolo
* Re: [Qemu-devel] [RFC PATCH 3/3] rbd: remove aio handler when no requests are pending
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 3/3] rbd: remove aio handler when no requests are pending Anthony Liguori
@ 2013-03-28 23:39 ` Paolo Bonzini
0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2013-03-28 23:39 UTC
To: Anthony Liguori; +Cc: Kevin Wolf, qemu-devel, Mike Roth
On 28/03/2013 22:52, Anthony Liguori wrote:
> This allows us to pass NULL for io_flush by removing the aio
> event when no requests are pending.
>
> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> ---
> block/rbd.c | 25 ++++++++++++++++---------
> 1 file changed, 16 insertions(+), 9 deletions(-)
I'm not sure why this is an improvement?
io_flush is basically the same as the io_can_read handler.
Paolo
* Re: [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy"
2013-03-28 23:37 ` Paolo Bonzini
@ 2013-04-02 8:34 ` Kevin Wolf
2013-04-08 15:28 ` Stefan Hajnoczi
1 sibling, 0 replies; 13+ messages in thread
From: Kevin Wolf @ 2013-04-02 8:34 UTC
To: Paolo Bonzini; +Cc: Anthony Liguori, qemu-devel, Mike Roth
On 29.03.2013 at 00:37, Paolo Bonzini wrote:
> On 28/03/2013 22:52, Anthony Liguori wrote:
> > Today, all callers of qemu_aio_set_fd_handler() pass a valid io_flush
> > function.
>
> Except one:
>
> aio_set_event_notifier(ctx, &ctx->notifier,
> (EventNotifierHandler *)
> event_notifier_test_and_clear, NULL);
>
> This is the EventNotifier that is used by qemu_notify_event.
>
> It's quite surprising that this patch works and passes the tests. /me
> reads cover letter... ah, it is untested. :)
>
> But if you can eliminate the sole usage of aio_wait()'s return value (in
> bdrv_drain_all()), everything would be much simpler. There is a
> relatively convenient
>
> assert(QLIST_EMPTY(&bs->tracked_requests));
>
> that you can use as the exit condition instead. Perhaps it's not
> trivial to do it efficiently, but it's not a fast path.
We just need to move to .bdrv_drain() for all block drivers that
register an AioHandler. I'm pretty sure each one has its own
data structures for managing in-flight requests (basically, today's
aio_flush handler would become the .bdrv_drain callback).
Then bdrv_drain_all() can directly use the bdrv_drain() return value and
doesn't need to have it passed through aio_wait() any more.
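The shape being proposed might look roughly like this. It is purely illustrative: no .bdrv_drain callback existed at this point, and every name below (`MiniBlockDriver`, `mini_drain_all`, the example driver) is made up to show the control flow:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: each driver exposes a drain callback reporting whether it
 * still has requests in flight (the knowledge its io_flush handler has
 * today), and the drain-all loop consults the callbacks directly
 * instead of aio_wait()'s return value. */
typedef struct {
    bool (*bdrv_drain)(void *opaque);  /* true while requests in flight */
} MiniBlockDriver;

typedef struct {
    const MiniBlockDriver *drv;
    void *opaque;
} MiniBDS;

/* Keep running the event loop while any driver reports activity;
 * returns how many loop iterations were needed. */
int mini_drain_all(MiniBDS **all, int n, void (*poll_once)(void))
{
    int iterations = 0;
    bool busy = true;

    while (busy) {
        busy = false;
        for (int i = 0; i < n; i++) {
            if (all[i]->drv->bdrv_drain &&
                all[i]->drv->bdrv_drain(all[i]->opaque)) {
                busy = true;
            }
        }
        if (busy) {
            poll_once();   /* one event-loop turn, e.g. aio_poll() */
            iterations++;
        }
    }
    return iterations;
}

/* Example driver: pretend one request completes per poll. */
int mini_inflight;
bool mini_drv_drain(void *opaque) { (void)opaque; return mini_inflight > 0; }
void mini_poll_once(void) { if (mini_inflight > 0) { mini_inflight--; } }
```

A driver with no drain callback (or with nothing in flight) contributes nothing to `busy`, so the loop terminates as soon as every driver is idle.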
Kevin
* Re: [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush
2013-03-28 21:52 ` [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush Anthony Liguori
2013-03-28 23:38 ` Paolo Bonzini
@ 2013-04-02 8:37 ` Kevin Wolf
2013-04-02 10:15 ` Paolo Bonzini
2013-04-08 15:31 ` Stefan Hajnoczi
1 sibling, 2 replies; 13+ messages in thread
From: Kevin Wolf @ 2013-04-02 8:37 UTC
To: Anthony Liguori; +Cc: Paolo Bonzini, qemu-devel, Mike Roth
On 28.03.2013 at 22:52, Anthony Liguori wrote:
> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> ---
> block/sheepdog.c | 11 ++---------
> 1 file changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/block/sheepdog.c b/block/sheepdog.c
> index bb67c4c..2bccd9b 100644
> --- a/block/sheepdog.c
> +++ b/block/sheepdog.c
> @@ -503,13 +503,6 @@ static void restart_co_req(void *opaque)
> qemu_coroutine_enter(co, NULL);
> }
>
> -static int have_co_req(void *opaque)
> -{
> - /* this handler is set only when there is a pending request, so
> - * always returns 1. */
Now you return 1 even when no request is pending (the case where,
before this patch, no io_flush handler would have been set). Why is this correct?
(This is actually a question about PATCH 1/3, I just noticed it here.
Are there more cases like this?)
Kevin
* Re: [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush
2013-04-02 8:37 ` Kevin Wolf
@ 2013-04-02 10:15 ` Paolo Bonzini
2013-04-08 15:31 ` Stefan Hajnoczi
1 sibling, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2013-04-02 10:15 UTC
To: Kevin Wolf; +Cc: Anthony Liguori, qemu-devel, Mike Roth
> > diff --git a/block/sheepdog.c b/block/sheepdog.c
> > index bb67c4c..2bccd9b 100644
> > --- a/block/sheepdog.c
> > +++ b/block/sheepdog.c
> > @@ -503,13 +503,6 @@ static void restart_co_req(void *opaque)
> > qemu_coroutine_enter(co, NULL);
> > }
> >
> > -static int have_co_req(void *opaque)
> > -{
> > - /* this handler is set only when there is a pending request, so
> > - * always returns 1. */
>
> Now you return 1 even when no request is pending (which is the case in
> which no io_flush handler would be set before). Why is this correct?
> (This is actually a question about PATCH 1/3, I just noticed it here.
> Are there more cases like this?)
In the dataplane code, the ioeventfd uses an io_flush callback that returns
true.
Paolo
* Re: [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy"
2013-03-28 23:37 ` Paolo Bonzini
2013-04-02 8:34 ` Kevin Wolf
@ 2013-04-08 15:28 ` Stefan Hajnoczi
1 sibling, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2013-04-08 15:28 UTC
To: Paolo Bonzini; +Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Mike Roth
On Fri, Mar 29, 2013 at 12:37:18AM +0100, Paolo Bonzini wrote:
> On 28/03/2013 22:52, Anthony Liguori wrote:
> > Today, all callers of qemu_aio_set_fd_handler() pass a valid io_flush
> > function.
>
> Except one:
>
> aio_set_event_notifier(ctx, &ctx->notifier,
> (EventNotifierHandler *)
> event_notifier_test_and_clear, NULL);
>
> This is the EventNotifier that is used by qemu_notify_event.
>
> It's quite surprising that this patch works and passes the tests. /me
> reads cover letter... ah, it is untested. :)
This one instance is easy enough to fix with an io_flush handler that returns 0.
Stefan
* Re: [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush
2013-04-02 8:37 ` Kevin Wolf
2013-04-02 10:15 ` Paolo Bonzini
@ 2013-04-08 15:31 ` Stefan Hajnoczi
2013-04-08 15:38 ` Kevin Wolf
1 sibling, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2013-04-08 15:31 UTC
To: Kevin Wolf; +Cc: Paolo Bonzini, Anthony Liguori, qemu-devel, Mike Roth
On Tue, Apr 02, 2013 at 10:37:24AM +0200, Kevin Wolf wrote:
> On 28.03.2013 at 22:52, Anthony Liguori wrote:
> > Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> > ---
> > block/sheepdog.c | 11 ++---------
> > 1 file changed, 2 insertions(+), 9 deletions(-)
> >
> > diff --git a/block/sheepdog.c b/block/sheepdog.c
> > index bb67c4c..2bccd9b 100644
> > --- a/block/sheepdog.c
> > +++ b/block/sheepdog.c
> > @@ -503,13 +503,6 @@ static void restart_co_req(void *opaque)
> > qemu_coroutine_enter(co, NULL);
> > }
> >
> > -static int have_co_req(void *opaque)
> > -{
> > - /* this handler is set only when there is a pending request, so
> > - * always returns 1. */
>
> Now you return 1 even when no request is pending (which is the case in
> which no io_flush handler would be set before). Why is this correct?
> (This is actually a question about PATCH 1/3, I just noticed it here.
> Are there more cases like this?)
The trick with return-1 handlers is that they are deleted when there are
no more requests. The pattern is:
...begin processing request...
qemu_set_fd_handler2(fd, ..., have_co_req);
qemu_coroutine_yield()
qemu_set_fd_handler2(fd, NULL, NULL, NULL, NULL); /* delete handler */
...finish processing request...
Stefan
* Re: [Qemu-devel] [RFC PATCH 2/3] sheepdog: pass NULL for io_flush
2013-04-08 15:31 ` Stefan Hajnoczi
@ 2013-04-08 15:38 ` Kevin Wolf
0 siblings, 0 replies; 13+ messages in thread
From: Kevin Wolf @ 2013-04-08 15:38 UTC
To: Stefan Hajnoczi; +Cc: Paolo Bonzini, Anthony Liguori, qemu-devel, Mike Roth
On 08.04.2013 at 17:31, Stefan Hajnoczi wrote:
> On Tue, Apr 02, 2013 at 10:37:24AM +0200, Kevin Wolf wrote:
> > On 28.03.2013 at 22:52, Anthony Liguori wrote:
> > > Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> > > ---
> > > block/sheepdog.c | 11 ++---------
> > > 1 file changed, 2 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/block/sheepdog.c b/block/sheepdog.c
> > > index bb67c4c..2bccd9b 100644
> > > --- a/block/sheepdog.c
> > > +++ b/block/sheepdog.c
> > > @@ -503,13 +503,6 @@ static void restart_co_req(void *opaque)
> > > qemu_coroutine_enter(co, NULL);
> > > }
> > >
> > > -static int have_co_req(void *opaque)
> > > -{
> > > - /* this handler is set only when there is a pending request, so
> > > - * always returns 1. */
> >
> > Now you return 1 even when no request is pending (which is the case in
> > which no io_flush handler would be set before). Why is this correct?
> > (This is actually a question about PATCH 1/3, I just noticed it here.
> > Are there more cases like this?)
>
> The trick with return 1 handlers is that they are deleted when there are
> no more requests. The pattern is:
>
> ...begin processing request...
> qemu_set_fd_handler2(fd, ..., have_co_req);
> qemu_coroutine_yield()
> qemu_set_fd_handler2(fd, NULL, NULL, NULL, NULL); /* delete handler */
> ...finish processing request...
Indeed. I thought I had seen a case where only the flush handler is
reset, but it seems I didn't look closely enough.
Kevin