* [Qemu-devel] [PATCH v2 0/4] exec: Make bounce buffer thread safe
@ 2015-03-13  1:38 Fam Zheng
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer Fam Zheng
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  1:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

The global bounce buffer used for non-direct memory access is not thread-safe:

 1) Access to "bounce" is not atomic.

 2) Access to "map_client_list" is not atomic.

 3) In dma_blk_cb, there is a race condition between:

        mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir);
        /* ... */
        cpu_register_map_client(dbs, continue_after_map_failure);

    The bounce buffer may become available after dma_memory_map fails but
    before cpu_register_map_client is called (see the sketch after this list).

 4) The callback registered by cpu_register_map_client is not called in the
    right AioContext.

This series fixes these issues respectively.
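
As a rough sketch of the window in item 3 (thread roles assumed for
illustration):

     caller of dma_blk_cb                    bounce buffer owner
     -------------------------------------------------------------------
     dma_memory_map() -> NULL
                                             address_space_unmap()
                                             cpu_notify_map_clients()
                                             (map_client_list still empty)
     cpu_register_map_client()
     (nobody will notify again; the callback may never fire)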


Fam Zheng (4):
  exec: Atomic access to bounce buffer
  exec: Atomic access to map_client_list
  exec: Notify cpu_register_map_client caller if the bounce buffer is
    available
  dma-helpers: Move reschedule_dma BH to blk's AioContext

 dma-helpers.c |  4 +++-
 exec.c        | 35 +++++++++++++++++++----------------
 2 files changed, 22 insertions(+), 17 deletions(-)

-- 
1.9.3


* [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer
  2015-03-13  1:38 [Qemu-devel] [PATCH v2 0/4] exec: Make bounce buffer thread safe Fam Zheng
@ 2015-03-13  1:38 ` Fam Zheng
  2015-03-13  8:09   ` Paolo Bonzini
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 2/4] exec: Atomic access to map_client_list Fam Zheng
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  1:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

There could be a race condition when two threads call
address_space_map concurrently and both want to use the bounce buffer.

Add an in_use flag in BounceBuffer to sync it.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 exec.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index 60b9752..8d4e134 100644
--- a/exec.c
+++ b/exec.c
@@ -2481,6 +2481,7 @@ typedef struct {
     void *buffer;
     hwaddr addr;
     hwaddr len;
+    bool in_use;
 } BounceBuffer;
 
 static BounceBuffer bounce;
@@ -2569,9 +2570,10 @@ void *address_space_map(AddressSpace *as,
     l = len;
     mr = address_space_translate(as, addr, &xlat, &l, is_write);
     if (!memory_access_is_direct(mr, is_write)) {
-        if (bounce.buffer) {
+        if (atomic_cmpxchg(&bounce.in_use, false, true)) {
             return NULL;
         }
+        smp_mb();
         /* Avoid unbounded allocations */
         l = MIN(l, TARGET_PAGE_SIZE);
         bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
@@ -2639,6 +2641,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
     qemu_vfree(bounce.buffer);
     bounce.buffer = NULL;
     memory_region_unref(bounce.mr);
+    atomic_mb_set(&bounce.in_use, false);
     cpu_notify_map_clients();
 }
 
-- 
1.9.3


* [Qemu-devel] [PATCH v2 2/4] exec: Atomic access to map_client_list
  2015-03-13  1:38 [Qemu-devel] [PATCH v2 0/4] exec: Make bounce buffer thread safe Fam Zheng
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer Fam Zheng
@ 2015-03-13  1:38 ` Fam Zheng
  2015-03-13  8:11   ` Paolo Bonzini
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 3/4] exec: Notify cpu_register_map_client caller if the bounce buffer is available Fam Zheng
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext Fam Zheng
  3 siblings, 1 reply; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  1:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

Change map_client_list to QSLIST which supports atomic operations.

There are two access points to map_client_list. One is
cpu_register_map_client, the other is cpu_notify_map_clients called
after releasing the global bounce buffer in address_space_unmap. Each
is now converted to a single atomic operation.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 exec.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/exec.c b/exec.c
index 8d4e134..93ccd5a 100644
--- a/exec.c
+++ b/exec.c
@@ -2489,11 +2489,11 @@ static BounceBuffer bounce;
 typedef struct MapClient {
     void *opaque;
     void (*callback)(void *opaque);
-    QLIST_ENTRY(MapClient) link;
+    QSLIST_ENTRY(MapClient) link;
 } MapClient;
 
-static QLIST_HEAD(map_client_list, MapClient) map_client_list
-    = QLIST_HEAD_INITIALIZER(map_client_list);
+static QSLIST_HEAD(map_client_list, MapClient) map_client_list
+    = QSLIST_HEAD_INITIALIZER(map_client_list);
 
 void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque))
 {
@@ -2501,26 +2501,23 @@ void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque))
 
     client->opaque = opaque;
     client->callback = callback;
-    QLIST_INSERT_HEAD(&map_client_list, client, link);
+    QSLIST_INSERT_HEAD_ATOMIC(&map_client_list, client, link);
     return client;
 }
 
-static void cpu_unregister_map_client(void *_client)
-{
-    MapClient *client = (MapClient *)_client;
-
-    QLIST_REMOVE(client, link);
-    g_free(client);
-}
-
 static void cpu_notify_map_clients(void)
 {
     MapClient *client;
+    QSLIST_HEAD(, MapClient) mclist;
 
-    while (!QLIST_EMPTY(&map_client_list)) {
-        client = QLIST_FIRST(&map_client_list);
+    /* Only the bounce buffer owner thread could possibly call this, but we
+     * should also coordinate with cpu_register_map_client. */
+    QSLIST_MOVE_ATOMIC(&mclist, &map_client_list);
+    while (!QSLIST_EMPTY(&mclist)) {
+        client = QSLIST_FIRST(&mclist);
         client->callback(client->opaque);
-        cpu_unregister_map_client(client);
+        QSLIST_REMOVE_HEAD(&mclist, link);
+        g_free(client);
     }
 }
 
-- 
1.9.3


* [Qemu-devel] [PATCH v2 3/4] exec: Notify cpu_register_map_client caller if the bounce buffer is available
  2015-03-13  1:38 [Qemu-devel] [PATCH v2 0/4] exec: Make bounce buffer thread safe Fam Zheng
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer Fam Zheng
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 2/4] exec: Atomic access to map_client_list Fam Zheng
@ 2015-03-13  1:38 ` Fam Zheng
  2015-03-13  8:12   ` Paolo Bonzini
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext Fam Zheng
  3 siblings, 1 reply; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  1:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

The caller's workflow is like

    if (!address_space_map()) {
        ...
        cpu_register_map_client();
    }

If the bounce buffer becomes available after address_space_map() fails but
before cpu_register_map_client() is called, the caller could miss it and has
to wait for the next bounce buffer user to release it, which may never happen
in the worst case.

Just notify the caller with the passed callback in cpu_register_map_client().

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 exec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/exec.c b/exec.c
index 93ccd5a..82781e4 100644
--- a/exec.c
+++ b/exec.c
@@ -2502,6 +2502,9 @@ void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque))
     client->opaque = opaque;
     client->callback = callback;
     QSLIST_INSERT_HEAD_ATOMIC(&map_client_list, client, link);
+    if (!atomic_read(&bounce.in_use)) {
+        callback(opaque);
+    }
     return client;
 }
 
-- 
1.9.3


* [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext
  2015-03-13  1:38 [Qemu-devel] [PATCH v2 0/4] exec: Make bounce buffer thread safe Fam Zheng
                   ` (2 preceding siblings ...)
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 3/4] exec: Notify cpu_register_map_client caller if the bounce buffer is available Fam Zheng
@ 2015-03-13  1:38 ` Fam Zheng
  2015-03-13  8:13   ` Paolo Bonzini
  3 siblings, 1 reply; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  1:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

If the dbs' owner is an iothread, DMA should be resumed on the right thread,
which in this case is the AioContext of the block device.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 dma-helpers.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 6918572..84f61a7 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -95,8 +95,10 @@ static void reschedule_dma(void *opaque)
 static void continue_after_map_failure(void *opaque)
 {
     DMAAIOCB *dbs = (DMAAIOCB *)opaque;
+    AioContext *ctx;
 
-    dbs->bh = qemu_bh_new(reschedule_dma, dbs);
+    ctx = blk_get_aio_context(dbs->blk);
+    dbs->bh = aio_bh_new(ctx, reschedule_dma, dbs);
     qemu_bh_schedule(dbs->bh);
 }
 
-- 
1.9.3


* Re: [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer Fam Zheng
@ 2015-03-13  8:09   ` Paolo Bonzini
  2015-03-13  8:16     ` Fam Zheng
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13  8:09 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel



On 13/03/2015 02:38, Fam Zheng wrote:
> There could be a race condition when two threads call
> address_space_map concurrently and both want to use the bounce buffer.
> 
> Add an in_use flag in BounceBuffer to sync it.
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  exec.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/exec.c b/exec.c
> index 60b9752..8d4e134 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2481,6 +2481,7 @@ typedef struct {
>      void *buffer;
>      hwaddr addr;
>      hwaddr len;
> +    bool in_use;
>  } BounceBuffer;
>  
>  static BounceBuffer bounce;
> @@ -2569,9 +2570,10 @@ void *address_space_map(AddressSpace *as,
>      l = len;
>      mr = address_space_translate(as, addr, &xlat, &l, is_write);
>      if (!memory_access_is_direct(mr, is_write)) {
> -        if (bounce.buffer) {
> +        if (atomic_cmpxchg(&bounce.in_use, false, true)) {

atomic_or is enough...

>              return NULL;
>          }
> +        smp_mb();

... and it already includes a memory barrier.

Paolo

>          /* Avoid unbounded allocations */
>          l = MIN(l, TARGET_PAGE_SIZE);
>          bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
> @@ -2639,6 +2641,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
>      qemu_vfree(bounce.buffer);
>      bounce.buffer = NULL;
>      memory_region_unref(bounce.mr);
> +    atomic_mb_set(&bounce.in_use, false);
>      cpu_notify_map_clients();
>  }
>  
> 


* Re: [Qemu-devel] [PATCH v2 2/4] exec: Atomic access to map_client_list
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 2/4] exec: Atomic access to map_client_list Fam Zheng
@ 2015-03-13  8:11   ` Paolo Bonzini
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13  8:11 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel



On 13/03/2015 02:38, Fam Zheng wrote:
> Change map_client_list to QSLIST which supports atomic operations.
> 
> There are two access points to map_client_list. One is
> cpu_register_map_client, the other is cpu_notify_map_clients called
> after releasing the global bounce buffer in address_space_unmap. Each
> is now converted to a single atomic operation.
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  exec.c | 27 ++++++++++++---------------
>  1 file changed, 12 insertions(+), 15 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index 8d4e134..93ccd5a 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2489,11 +2489,11 @@ static BounceBuffer bounce;
>  typedef struct MapClient {
>      void *opaque;
>      void (*callback)(void *opaque);
> -    QLIST_ENTRY(MapClient) link;
> +    QSLIST_ENTRY(MapClient) link;
>  } MapClient;
>  
> -static QLIST_HEAD(map_client_list, MapClient) map_client_list
> -    = QLIST_HEAD_INITIALIZER(map_client_list);
> +static QSLIST_HEAD(map_client_list, MapClient) map_client_list
> +    = QSLIST_HEAD_INITIALIZER(map_client_list);
>  
>  void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque))
>  {
> @@ -2501,26 +2501,23 @@ void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque))
>  
>      client->opaque = opaque;
>      client->callback = callback;
> -    QLIST_INSERT_HEAD(&map_client_list, client, link);
> +    QSLIST_INSERT_HEAD_ATOMIC(&map_client_list, client, link);
>      return client;
>  }
>  
> -static void cpu_unregister_map_client(void *_client)
> -{
> -    MapClient *client = (MapClient *)_client;
> -
> -    QLIST_REMOVE(client, link);
> -    g_free(client);
> -}
> -
>  static void cpu_notify_map_clients(void)
>  {
>      MapClient *client;
> +    QSLIST_HEAD(, MapClient) mclist;
>  
> -    while (!QLIST_EMPTY(&map_client_list)) {
> -        client = QLIST_FIRST(&map_client_list);
> +    /* Only the bounce buffer owner thread could possibly call this, but we
> +     * should also coordinate with cpu_register_map_client. */
> +    QSLIST_MOVE_ATOMIC(&mclist, &map_client_list);
> +    while (!QSLIST_EMPTY(&mclist)) {
> +        client = QSLIST_FIRST(&mclist);
>          client->callback(client->opaque);
> -        cpu_unregister_map_client(client);
> +        QSLIST_REMOVE_HEAD(&mclist, link);
> +        g_free(client);
>      }
>  }
>  
> 

Looks good.

Paolo


* Re: [Qemu-devel] [PATCH v2 3/4] exec: Notify cpu_register_map_client caller if the bounce buffer is available
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 3/4] exec: Notify cpu_register_map_client caller if the bounce buffer is available Fam Zheng
@ 2015-03-13  8:12   ` Paolo Bonzini
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13  8:12 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel



On 13/03/2015 02:38, Fam Zheng wrote:
> The caller's workflow is like
> 
>     if (!address_space_map()) {
>         ...
>         cpu_register_map_client();
>     }
> 
> If the bounce buffer becomes available after address_space_map() fails but
> before cpu_register_map_client() is called, the caller could miss it and has
> to wait for the next bounce buffer user to release it, which may never happen
> in the worst case.
> 
> Just notify the caller with the passed callback in cpu_register_map_client().
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  exec.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/exec.c b/exec.c
> index 93ccd5a..82781e4 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2502,6 +2502,9 @@ void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque))
>      client->opaque = opaque;
>      client->callback = callback;
>      QSLIST_INSERT_HEAD_ATOMIC(&map_client_list, client, link);
> +    if (!atomic_read(&bounce.in_use)) {
> +        callback(opaque);

This should call cpu_notify_map_clients instead, otherwise you get two
calls.
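
For illustration, a minimal sketch of that change (not the posted code;
cpu_notify_map_clients may need a forward declaration or to be moved up
in exec.c):

    QSLIST_INSERT_HEAD_ATOMIC(&map_client_list, client, link);
    if (!atomic_read(&bounce.in_use)) {
        cpu_notify_map_clients();
    }
    return client;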

Paolo

> +    }
>      return client;
>  }
>  
> 


* Re: [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext
  2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext Fam Zheng
@ 2015-03-13  8:13   ` Paolo Bonzini
  2015-03-13  8:58     ` Fam Zheng
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13  8:13 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel



On 13/03/2015 02:38, Fam Zheng wrote:
> If the dbs' owner is an iothread, DMA should be resumed on the right thread,
> which in this case is the AioContext of the block device.
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  dma-helpers.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/dma-helpers.c b/dma-helpers.c
> index 6918572..84f61a7 100644
> --- a/dma-helpers.c
> +++ b/dma-helpers.c
> @@ -95,8 +95,10 @@ static void reschedule_dma(void *opaque)
>  static void continue_after_map_failure(void *opaque)
>  {
>      DMAAIOCB *dbs = (DMAAIOCB *)opaque;
> +    AioContext *ctx;
>  
> -    dbs->bh = qemu_bh_new(reschedule_dma, dbs);
> +    ctx = blk_get_aio_context(dbs->blk);
> +    dbs->bh = aio_bh_new(ctx, reschedule_dma, dbs);
>      qemu_bh_schedule(dbs->bh);
>  }
>  
> 

This looks good.  However, I wonder if dma_aio_cancel should also call
cpu_unregister_map_client.  In this case, it's much better to just use a
lock for the list (though you can still use atomics for the in-use flag).

Paolo


* Re: [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer
  2015-03-13  8:09   ` Paolo Bonzini
@ 2015-03-13  8:16     ` Fam Zheng
  2015-03-13  8:32       ` Paolo Bonzini
  0 siblings, 1 reply; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  8:16 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Fri, 03/13 09:09, Paolo Bonzini wrote:
> 
> 
> On 13/03/2015 02:38, Fam Zheng wrote:
> > There could be a race condition when two threads call
> > address_space_map concurrently and both want to use the bounce buffer.
> > 
> > Add an in_use flag in BounceBuffer to sync it.
> > 
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> >  exec.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/exec.c b/exec.c
> > index 60b9752..8d4e134 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -2481,6 +2481,7 @@ typedef struct {
> >      void *buffer;
> >      hwaddr addr;
> >      hwaddr len;
> > +    bool in_use;
> >  } BounceBuffer;
> >  
> >  static BounceBuffer bounce;
> > @@ -2569,9 +2570,10 @@ void *address_space_map(AddressSpace *as,
> >      l = len;
> >      mr = address_space_translate(as, addr, &xlat, &l, is_write);
> >      if (!memory_access_is_direct(mr, is_write)) {
> > -        if (bounce.buffer) {
> > +        if (atomic_cmpxchg(&bounce.in_use, false, true)) {
> 
> atomic_or is enough...

atomic_cmpxchg is here to take the ownership of bounce iff it is not in use, so
I think it is necessary.

Fam

> 
> >              return NULL;
> >          }
> > +        smp_mb();
> 
> ... and it already includes a memory barrier.
> 
> Paolo
> 
> >          /* Avoid unbounded allocations */
> >          l = MIN(l, TARGET_PAGE_SIZE);
> >          bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
> > @@ -2639,6 +2641,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
> >      qemu_vfree(bounce.buffer);
> >      bounce.buffer = NULL;
> >      memory_region_unref(bounce.mr);
> > +    atomic_mb_set(&bounce.in_use, false);
> >      cpu_notify_map_clients();
> >  }
> >  
> > 


* Re: [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer
  2015-03-13  8:16     ` Fam Zheng
@ 2015-03-13  8:32       ` Paolo Bonzini
  2015-03-13  8:38         ` Fam Zheng
  2015-03-13  8:41         ` Paolo Bonzini
  0 siblings, 2 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13  8:32 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel



On 13/03/2015 09:16, Fam Zheng wrote:
>>> > > +        if (atomic_cmpxchg(&bounce.in_use, false, true)) {
>> > 
>> > atomic_or is enough...
> atomic_cmpxchg is here to take the ownership of bounce iff it is not in use, so
> I think it is necessary.

It's changing false to true and true to true, so you can do

    if (atomic_or(&bounce.in_use, 1)) {
        // was true already
    }

Paolo


* Re: [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer
  2015-03-13  8:32       ` Paolo Bonzini
@ 2015-03-13  8:38         ` Fam Zheng
  2015-03-13  8:41         ` Paolo Bonzini
  1 sibling, 0 replies; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  8:38 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Fri, 03/13 09:32, Paolo Bonzini wrote:
> 
> 
> On 13/03/2015 09:16, Fam Zheng wrote:
> >>> > > +        if (atomic_cmpxchg(&bounce.in_use, false, true)) {
> >> > 
> >> > atomic_or is enough...
> > atomic_cmpxchg is here to take the ownership of bounce iff it is not in use, so
> > I think it is necessary.
> 
> It's changing false to true and true to true, so you can do
> 
>     if (atomic_or(&bounce.in_use, 1)) {
>         // was true already
>     }

I see, we have the old value! Thanks!

Fam


* Re: [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer
  2015-03-13  8:32       ` Paolo Bonzini
  2015-03-13  8:38         ` Fam Zheng
@ 2015-03-13  8:41         ` Paolo Bonzini
  1 sibling, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13  8:41 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel



On 13/03/2015 09:32, Paolo Bonzini wrote:
> 
> 
> On 13/03/2015 09:16, Fam Zheng wrote:
>>>>>> +        if (atomic_cmpxchg(&bounce.in_use, false, true)) {
>>>>
>>>> atomic_or is enough...
>> atomic_cmpxchg is here to take the ownership of bounce iff it is not in use, so
>> I think it is necessary.
> 
> It's changing false to true and true to true, so you can do
> 
>     if (atomic_or(&bounce.in_use, 1)) {
>         // was true already
>     }

... and actually, atomic_xchg is even better (on x86, atomic_or is
compiled into a cmpxchg loop, but atomic_xchg is a single instruction).
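
For completeness, a minimal sketch of the xchg-based check, using the same
in_use flag as in the patch:

    if (atomic_xchg(&bounce.in_use, true)) {
        /* the bounce buffer is already owned by another caller */
        return NULL;
    }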

Paolo


* Re: [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext
  2015-03-13  8:13   ` Paolo Bonzini
@ 2015-03-13  8:58     ` Fam Zheng
  2015-03-13 10:48       ` Paolo Bonzini
  0 siblings, 1 reply; 16+ messages in thread
From: Fam Zheng @ 2015-03-13  8:58 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Fri, 03/13 09:13, Paolo Bonzini wrote:
> 
> 
> On 13/03/2015 02:38, Fam Zheng wrote:
> > If the dbs' owner is an iothread, DMA should be resumed on the right thread,
> > which in this case is the AioContext of the block device.
> > 
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> >  dma-helpers.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/dma-helpers.c b/dma-helpers.c
> > index 6918572..84f61a7 100644
> > --- a/dma-helpers.c
> > +++ b/dma-helpers.c
> > @@ -95,8 +95,10 @@ static void reschedule_dma(void *opaque)
> >  static void continue_after_map_failure(void *opaque)
> >  {
> >      DMAAIOCB *dbs = (DMAAIOCB *)opaque;
> > +    AioContext *ctx;
> >  
> > -    dbs->bh = qemu_bh_new(reschedule_dma, dbs);
> > +    ctx = blk_get_aio_context(dbs->blk);
> > +    dbs->bh = aio_bh_new(ctx, reschedule_dma, dbs);
> >      qemu_bh_schedule(dbs->bh);
> >  }
> >  
> > 
> 
> This looks good.  However, I wonder if dma_aio_cancel should also call
> cpu_unregister_map_client.  In this case, it's much better to just use a
> lock for the list (though you can still use atomics for the in-use flag).

The other possibility is to grab a reference for the cpu_register_map_client
call and release it in reschedule_dma. This way the atomics can stay, but
we'll need a "finished" flag in DMAAIOCB to avoid double completion.

Fam


* Re: [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext
  2015-03-13  8:58     ` Fam Zheng
@ 2015-03-13 10:48       ` Paolo Bonzini
  2015-03-13 12:33         ` Paolo Bonzini
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13 10:48 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel



On 13/03/2015 09:58, Fam Zheng wrote:
>> > 
>> > This looks good.  However, I wonder if dma_aio_cancel should also call
>> > cpu_unregister_map_client.  In this case, it's much better to just use a
>> > lock for the list (though you can still use atomics for the in-use flag).
> The other possibility is to grab a reference for the cpu_register_map_client
> call and release it in reschedule_dma. This way the atomics can stay, but
> we'll need a "finished" flag in DMAAIOCB to avoid double completion.

Considering this is a slow path, a lock seems preferable.
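
A rough sketch of what the lock-based registration could look like (the
mutex name here is an assumption, this is not the posted code):

    static QemuMutex map_client_list_lock;

    void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque))
    {
        MapClient *client = g_malloc(sizeof(*client));

        client->opaque = opaque;
        client->callback = callback;
        qemu_mutex_lock(&map_client_list_lock);
        QSLIST_INSERT_HEAD(&map_client_list, client, link);
        qemu_mutex_unlock(&map_client_list_lock);
        return client;
    }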

It's not that your patch was bad; it's that a pre-existing bug got in your
way and broke the assumptions you made.

Paolo


* Re: [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext
  2015-03-13 10:48       ` Paolo Bonzini
@ 2015-03-13 12:33         ` Paolo Bonzini
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-03-13 12:33 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-devel



On 13/03/2015 11:48, Paolo Bonzini wrote:
> > The other possibility is to grab a reference for the cpu_register_map_client
> > call and release it in reschedule_dma. This way the atomics can stay, but
> > we'll need a "finished" flag in DMAAIOCB to avoid double completion.
> Considering this is a slow path, a lock seems preferrable.

And another problem...

You need to be careful about dma_aio_cancel running together with
continue_after_map_failure, because continue_after_map_failure can be
called by another thread.  You could have:

     continue_after_map_failure               dma_aio_cancel
     ------------------------------------------------------------------
     aio_bh_new
                                              qemu_bh_delete
     qemu_bh_schedule (use after free)

To fix this, my suggestion is to pass a BH directly to
cpu_register_map_client (possibly to cpu_unregister_map_client as well?
seems to have pros and cons).  Then cpu_notify_map_clients can run entirely
with the lock taken, and not race against cpu_unregister_map_client.
dma_aio_cancel can just do cpu_unregister_map_client followed by
qemu_bh_delete.
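
A rough sketch of the shape this suggests (the signatures and the cancel
path are assumptions, not taken from the posted patches):

    void cpu_register_map_client(QEMUBH *bh);
    void cpu_unregister_map_client(QEMUBH *bh);

    static void dma_aio_cancel(BlockAIOCB *acb)
    {
        DMAAIOCB *dbs = container_of(acb, DMAAIOCB, common);

        if (dbs->bh) {
            /* drop the pending notification before deleting the BH */
            cpu_unregister_map_client(dbs->bh);
            qemu_bh_delete(dbs->bh);
            dbs->bh = NULL;
        }
        /* ... the rest of the existing cancel path ... */
    }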

Paolo



Thread overview: 16+ messages
2015-03-13  1:38 [Qemu-devel] [PATCH v2 0/4] exec: Make bounce buffer thread safe Fam Zheng
2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 1/4] exec: Atomic access to bounce buffer Fam Zheng
2015-03-13  8:09   ` Paolo Bonzini
2015-03-13  8:16     ` Fam Zheng
2015-03-13  8:32       ` Paolo Bonzini
2015-03-13  8:38         ` Fam Zheng
2015-03-13  8:41         ` Paolo Bonzini
2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 2/4] exec: Atomic access to map_client_list Fam Zheng
2015-03-13  8:11   ` Paolo Bonzini
2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 3/4] exec: Notify cpu_register_map_client caller if the bounce buffer is available Fam Zheng
2015-03-13  8:12   ` Paolo Bonzini
2015-03-13  1:38 ` [Qemu-devel] [PATCH v2 4/4] dma-helpers: Move reschedule_dma BH to blk's AioContext Fam Zheng
2015-03-13  8:13   ` Paolo Bonzini
2015-03-13  8:58     ` Fam Zheng
2015-03-13 10:48       ` Paolo Bonzini
2015-03-13 12:33         ` Paolo Bonzini
