* [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock
@ 2019-09-13 10:25 Dr. David Alan Gilbert (git)
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants Dr. David Alan Gilbert (git)
                   ` (6 more replies)
  0 siblings, 7 replies; 19+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-09-13 10:25 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

This series uses glib's g_auto mechanism to automatically release
the RCU read lock at the end of a block.  Given that humans have a
habit of forgetting to unlock on some error path, it's best to leave
it to the compiler.
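
As a minimal sketch of the pattern being replaced (do_work_manual and
do_work_guarded are made-up helpers for illustration, not code from
this series):

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"

    /* Manual locking: every exit path must remember to unlock */
    static int do_work_manual(bool fail)
    {
        rcu_read_lock();
        if (fail) {
            rcu_read_unlock();   /* easy to forget on error paths */
            return -1;
        }
        rcu_read_unlock();
        return 0;
    }

    /* Guarded: the cleanup attribute releases the lock on every exit
     * from the enclosing scope, including the early return.
     */
    static int do_work_guarded(bool fail)
    {
        RCU_READ_LOCK_GUARD();
        if (fail) {
            return -1;
        }
        return 0;
    }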

v3
  Add WITH_ (block-wrapping) variant of the macro
  Rename
  Add docs
  Convert more cases using the WITH_ variant

Dr. David Alan Gilbert (5):
  rcu: Add automatically released rcu_read_lock variants
  migration: Fix missing rcu_read_unlock
  migration: Use automatic rcu_read unlock in ram.c
  migration: Use automatic rcu_read unlock in rdma.c
  rcu: Use automatic rcu_read unlock in core memory/exec code

 docs/devel/rcu.txt      |  16 +++
 exec.c                  | 120 +++++++---------
 include/exec/ram_addr.h | 138 +++++++++----------
 include/qemu/rcu.h      |  25 ++++
 memory.c                |  15 +-
 migration/ram.c         | 295 +++++++++++++++++++---------------------
 migration/rdma.c        |  57 ++------
 7 files changed, 310 insertions(+), 356 deletions(-)

-- 
2.21.0



^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants
  2019-09-13 10:25 [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert (git)
@ 2019-09-13 10:25 ` Dr. David Alan Gilbert (git)
  2019-09-13 12:01   ` Paolo Bonzini
  2019-09-19 17:31   ` Daniel P. Berrangé
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 2/5] migration: Fix missing rcu_read_unlock Dr. David Alan Gilbert (git)
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 19+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-09-13 10:25 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

RCU_READ_LOCK_GUARD() takes the rcu_read_lock and then uses glib's
g_auto infrastructure (and thus whatever the compiler's hooks are) to
release it on all exits of the block.

WITH_RCU_READ_LOCK_GUARD() is similar but is used as a wrapper for the
lock, i.e.:

   WITH_RCU_READ_LOCK_GUARD() {
       stuff under lock
   }

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 docs/devel/rcu.txt | 16 ++++++++++++++++
 include/qemu/rcu.h | 25 +++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/docs/devel/rcu.txt b/docs/devel/rcu.txt
index c84e7f42b2..d83fed2f79 100644
--- a/docs/devel/rcu.txt
+++ b/docs/devel/rcu.txt
@@ -187,6 +187,22 @@ The following APIs must be used before RCU is used in a thread:
 Note that these APIs are relatively heavyweight, and should _not_ be
 nested.
 
+Convenience macros
+==================
+
+Two macros are provided that automatically release the read lock at the
+end of the scope.
+
+      RCU_READ_LOCK_GUARD()
+
+         Takes the lock and will release it at the end of the block it's
+         used in.
+
+      WITH_RCU_READ_LOCK_GUARD()  { code }
+
+         Is used at the head of a block to protect the code within the block.
+
+Note that 'goto'ing out of the guarded block will also drop the lock.
 
 DIFFERENCES WITH LINUX
 ======================
diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
index 22876d1428..3a8d4cf28b 100644
--- a/include/qemu/rcu.h
+++ b/include/qemu/rcu.h
@@ -154,6 +154,31 @@ extern void call_rcu1(struct rcu_head *head, RCUCBFunc *func);
       }),                                                                \
       (RCUCBFunc *)g_free);
 
+typedef void RCUReadAuto;
+static inline RCUReadAuto *rcu_read_auto_lock(void)
+{
+    rcu_read_lock();
+    /* Anything non-NULL causes the cleanup function to be called */
+    return (void *)(uintptr_t)0x1;
+}
+
+static inline void rcu_read_auto_unlock(RCUReadAuto *r)
+{
+    rcu_read_unlock();
+}
+
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(RCUReadAuto, rcu_read_auto_unlock)
+
+#define WITH_RCU_READ_LOCK_GUARD() \
+    WITH_RCU_READ_LOCK_GUARD_(_rcu_read_auto##__COUNTER__)
+
+#define WITH_RCU_READ_LOCK_GUARD_(var) \
+    for (g_autoptr(RCUReadAuto) var = rcu_read_auto_lock(); \
+        (var); rcu_read_auto_unlock(var), (var) = NULL)
+
+#define RCU_READ_LOCK_GUARD() \
+    g_autoptr(RCUReadAuto) _rcu_read_auto = rcu_read_auto_lock()
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.21.0
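
For reference, the wrapper form expands (modulo the __COUNTER__-derived
variable name) to a single-pass for loop whose variable carries the
g_autoptr cleanup:

    /* Approximate expansion of WITH_RCU_READ_LOCK_GUARD() { ... } */
    for (g_autoptr(RCUReadAuto) _rcu = rcu_read_auto_lock();
         _rcu;
         rcu_read_auto_unlock(_rcu), _rcu = NULL) {
        /* guarded code */
    }

On normal completion the third for-clause unlocks and clears the
variable, so the later autoptr cleanup (which only runs for a non-NULL
pointer) does nothing; on a return or goto out of the block the
cleanup function performs the single unlock instead.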



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [Qemu-devel] [PATCH v3 2/5] migration: Fix missing rcu_read_unlock
  2019-09-13 10:25 [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert (git)
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants Dr. David Alan Gilbert (git)
@ 2019-09-13 10:25 ` Dr. David Alan Gilbert (git)
  2019-09-19 17:32   ` Daniel P. Berrangé
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 3/5] migration: Use automatic rcu_read unlock in ram.c Dr. David Alan Gilbert (git)
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-09-13 10:25 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Use the automatic rcu_read unlocker to fix a missing unlock.
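
The missing unlock is the early return in ram_save_setup()'s RAMBlock
loop (visible in the hunk below), which was reached with the RCU read
lock still held:

    if (!block->idstr[0]) {
        error_report("%s: RAMBlock with empty name", __func__);
        return -1;   /* previously returned without rcu_read_unlock() */
    }

Wrapping the region in WITH_RCU_READ_LOCK_GUARD() makes this exit (and
any future one) release the lock automatically.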

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index b2bd618a89..cff35477ec 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3445,28 +3445,27 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     }
     (*rsp)->f = f;
 
-    rcu_read_lock();
-
-    qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE);
+    WITH_RCU_READ_LOCK_GUARD() {
+        qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE);
 
-    RAMBLOCK_FOREACH_MIGRATABLE(block) {
-        if (!block->idstr[0]) {
-            error_report("%s: RAMBlock with empty name", __func__);
-            return -1;
-        }
-        qemu_put_byte(f, strlen(block->idstr));
-        qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
-        qemu_put_be64(f, block->used_length);
-        if (migrate_postcopy_ram() && block->page_size != qemu_host_page_size) {
-            qemu_put_be64(f, block->page_size);
-        }
-        if (migrate_ignore_shared()) {
-            qemu_put_be64(f, block->mr->addr);
+        RAMBLOCK_FOREACH_MIGRATABLE(block) {
+            if (!block->idstr[0]) {
+                error_report("%s: RAMBlock with empty name", __func__);
+                return -1;
+            }
+            qemu_put_byte(f, strlen(block->idstr));
+            qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
+            qemu_put_be64(f, block->used_length);
+            if (migrate_postcopy_ram() && block->page_size !=
+                                          qemu_host_page_size) {
+                qemu_put_be64(f, block->page_size);
+            }
+            if (migrate_ignore_shared()) {
+                qemu_put_be64(f, block->mr->addr);
+            }
         }
     }
 
-    rcu_read_unlock();
-
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [Qemu-devel] [PATCH v3 3/5] migration: Use automatic rcu_read unlock in ram.c
  2019-09-13 10:25 [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert (git)
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants Dr. David Alan Gilbert (git)
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 2/5] migration: Fix missing rcu_read_unlock Dr. David Alan Gilbert (git)
@ 2019-09-13 10:25 ` Dr. David Alan Gilbert (git)
  2019-09-19 17:33   ` Daniel P. Berrangé
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 4/5] migration: Use automatic rcu_read unlock in rdma.c Dr. David Alan Gilbert (git)
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-09-13 10:25 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Use the automatic read unlocker in migration/ram.c

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 260 ++++++++++++++++++++++--------------------------
 1 file changed, 121 insertions(+), 139 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index cff35477ec..6c5f0199fd 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -181,14 +181,14 @@ int foreach_not_ignored_block(RAMBlockIterFunc func, void *opaque)
     RAMBlock *block;
     int ret = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
+
     RAMBLOCK_FOREACH_NOT_IGNORED(block) {
         ret = func(block, opaque);
         if (ret) {
             break;
         }
     }
-    rcu_read_unlock();
     return ret;
 }
 
@@ -1849,12 +1849,12 @@ static void migration_bitmap_sync(RAMState *rs)
     memory_global_dirty_log_sync();
 
     qemu_mutex_lock(&rs->bitmap_mutex);
-    rcu_read_lock();
-    RAMBLOCK_FOREACH_NOT_IGNORED(block) {
-        ramblock_sync_dirty_bitmap(rs, block);
+    WITH_RCU_READ_LOCK_GUARD() {
+        RAMBLOCK_FOREACH_NOT_IGNORED(block) {
+            ramblock_sync_dirty_bitmap(rs, block);
+        }
+        ram_counters.remaining = ram_bytes_remaining();
     }
-    ram_counters.remaining = ram_bytes_remaining();
-    rcu_read_unlock();
     qemu_mutex_unlock(&rs->bitmap_mutex);
 
     memory_global_after_dirty_log_sync();
@@ -2398,13 +2398,12 @@ static void migration_page_queue_free(RAMState *rs)
     /* This queue generally should be empty - but in the case of a failed
      * migration might have some droppings in.
      */
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
         memory_region_unref(mspr->rb->mr);
         QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
         g_free(mspr);
     }
-    rcu_read_unlock();
 }
 
 /**
@@ -2425,7 +2424,8 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
     RAMState *rs = ram_state;
 
     ram_counters.postcopy_requests++;
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
+
     if (!rbname) {
         /* Reuse last RAMBlock */
         ramblock = rs->last_req_rb;
@@ -2467,12 +2467,10 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
     QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
     migration_make_urgent_request();
     qemu_mutex_unlock(&rs->src_page_req_mutex);
-    rcu_read_unlock();
 
     return 0;
 
 err:
-    rcu_read_unlock();
     return -1;
 }
 
@@ -2712,7 +2710,8 @@ static uint64_t ram_bytes_total_common(bool count_ignored)
     RAMBlock *block;
     uint64_t total = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
+
     if (count_ignored) {
         RAMBLOCK_FOREACH_MIGRATABLE(block) {
             total += block->used_length;
@@ -2722,7 +2721,6 @@ static uint64_t ram_bytes_total_common(bool count_ignored)
             total += block->used_length;
         }
     }
-    rcu_read_unlock();
     return total;
 }
 
@@ -3086,7 +3084,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     RAMBlock *block;
     int ret;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
 
     /* This should be our last sync, the src is now paused */
     migration_bitmap_sync(rs);
@@ -3107,13 +3105,11 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
              * point.
              */
             error_report("migration ram resized during precopy phase");
-            rcu_read_unlock();
             return -EINVAL;
         }
         /* Deal with TPS != HPS and huge pages */
         ret = postcopy_chunk_hostpages(ms, block);
         if (ret) {
-            rcu_read_unlock();
             return ret;
         }
 
@@ -3128,7 +3124,6 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     trace_ram_postcopy_send_discard_bitmap();
 
     ret = postcopy_each_ram_send_discard(ms);
-    rcu_read_unlock();
 
     return ret;
 }
@@ -3149,7 +3144,7 @@ int ram_discard_range(const char *rbname, uint64_t start, size_t length)
 
     trace_ram_discard_range(rbname, start, length);
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     RAMBlock *rb = qemu_ram_block_by_name(rbname);
 
     if (!rb) {
@@ -3169,8 +3164,6 @@ int ram_discard_range(const char *rbname, uint64_t start, size_t length)
     ret = ram_block_discard_range(rb, start, length);
 
 err:
-    rcu_read_unlock();
-
     return ret;
 }
 
@@ -3303,13 +3296,12 @@ static void ram_init_bitmaps(RAMState *rs)
     /* For memory_global_dirty_log_start below.  */
     qemu_mutex_lock_iothread();
     qemu_mutex_lock_ramlist();
-    rcu_read_lock();
 
-    ram_list_init_bitmaps();
-    memory_global_dirty_log_start();
-    migration_bitmap_sync_precopy(rs);
-
-    rcu_read_unlock();
+    WITH_RCU_READ_LOCK_GUARD() {
+        ram_list_init_bitmaps();
+        memory_global_dirty_log_start();
+        migration_bitmap_sync_precopy(rs);
+    }
     qemu_mutex_unlock_ramlist();
     qemu_mutex_unlock_iothread();
 }
@@ -3500,55 +3492,57 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         goto out;
     }
 
-    rcu_read_lock();
-    if (ram_list.version != rs->last_version) {
-        ram_state_reset(rs);
-    }
-
-    /* Read version before ram_list.blocks */
-    smp_rmb();
+    WITH_RCU_READ_LOCK_GUARD() {
+        if (ram_list.version != rs->last_version) {
+            ram_state_reset(rs);
+        }
 
-    ram_control_before_iterate(f, RAM_CONTROL_ROUND);
+        /* Read version before ram_list.blocks */
+        smp_rmb();
 
-    t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
-    i = 0;
-    while ((ret = qemu_file_rate_limit(f)) == 0 ||
-            !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
-        int pages;
+        ram_control_before_iterate(f, RAM_CONTROL_ROUND);
 
-        if (qemu_file_get_error(f)) {
-            break;
-        }
+        t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+        i = 0;
+        while ((ret = qemu_file_rate_limit(f)) == 0 ||
+                !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
+            int pages;
 
-        pages = ram_find_and_save_block(rs, false);
-        /* no more pages to sent */
-        if (pages == 0) {
-            done = 1;
-            break;
-        }
+            if (qemu_file_get_error(f)) {
+                break;
+            }
 
-        if (pages < 0) {
-            qemu_file_set_error(f, pages);
-            break;
-        }
+            pages = ram_find_and_save_block(rs, false);
+            /* no more pages to sent */
+            if (pages == 0) {
+                done = 1;
+                break;
+            }
 
-        rs->target_page_count += pages;
-
-        /* we want to check in the 1st loop, just in case it was the 1st time
-           and we had to sync the dirty bitmap.
-           qemu_clock_get_ns() is a bit expensive, so we only check each some
-           iterations
-        */
-        if ((i & 63) == 0) {
-            uint64_t t1 = (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - t0) / 1000000;
-            if (t1 > MAX_WAIT) {
-                trace_ram_save_iterate_big_wait(t1, i);
+            if (pages < 0) {
+                qemu_file_set_error(f, pages);
                 break;
             }
+
+            rs->target_page_count += pages;
+
+            /*
+             * we want to check in the 1st loop, just in case it was the 1st
+             * time and we had to sync the dirty bitmap.
+             * qemu_clock_get_ns() is a bit expensive, so we only check each
+             * some iterations
+             */
+            if ((i & 63) == 0) {
+                uint64_t t1 = (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - t0) /
+                              1000000;
+                if (t1 > MAX_WAIT) {
+                    trace_ram_save_iterate_big_wait(t1, i);
+                    break;
+                }
+            }
+            i++;
         }
-        i++;
     }
-    rcu_read_unlock();
 
     /*
      * Must occur before EOS (or any QEMUFile operation)
@@ -3586,35 +3580,33 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     RAMState *rs = *temp;
     int ret = 0;
 
-    rcu_read_lock();
-
-    if (!migration_in_postcopy()) {
-        migration_bitmap_sync_precopy(rs);
-    }
+    WITH_RCU_READ_LOCK_GUARD() {
+        if (!migration_in_postcopy()) {
+            migration_bitmap_sync_precopy(rs);
+        }
 
-    ram_control_before_iterate(f, RAM_CONTROL_FINISH);
+        ram_control_before_iterate(f, RAM_CONTROL_FINISH);
 
-    /* try transferring iterative blocks of memory */
+        /* try transferring iterative blocks of memory */
 
-    /* flush all remaining blocks regardless of rate limiting */
-    while (true) {
-        int pages;
+        /* flush all remaining blocks regardless of rate limiting */
+        while (true) {
+            int pages;
 
-        pages = ram_find_and_save_block(rs, !migration_in_colo_state());
-        /* no more blocks to sent */
-        if (pages == 0) {
-            break;
-        }
-        if (pages < 0) {
-            ret = pages;
-            break;
+            pages = ram_find_and_save_block(rs, !migration_in_colo_state());
+            /* no more blocks to sent */
+            if (pages == 0) {
+                break;
+            }
+            if (pages < 0) {
+                ret = pages;
+                break;
+            }
         }
-    }
-
-    flush_compressed_data(rs);
-    ram_control_after_iterate(f, RAM_CONTROL_FINISH);
 
-    rcu_read_unlock();
+        flush_compressed_data(rs);
+        ram_control_after_iterate(f, RAM_CONTROL_FINISH);
+    }
 
     multifd_send_sync_main(rs);
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -3637,9 +3629,9 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
     if (!migration_in_postcopy() &&
         remaining_size < max_size) {
         qemu_mutex_lock_iothread();
-        rcu_read_lock();
-        migration_bitmap_sync_precopy(rs);
-        rcu_read_unlock();
+        WITH_RCU_READ_LOCK_GUARD() {
+            migration_bitmap_sync_precopy(rs);
+        }
         qemu_mutex_unlock_iothread();
         remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
     }
@@ -3983,7 +3975,13 @@ int colo_init_ram_cache(void)
             error_report("%s: Can't alloc memory for COLO cache of block %s,"
                          "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
                          block->used_length);
-            goto out_locked;
+            RAMBLOCK_FOREACH_NOT_IGNORED(block) {
+                if (block->colo_cache) {
+                    qemu_anon_ram_free(block->colo_cache, block->used_length);
+                    block->colo_cache = NULL;
+                }
+            }
+            return -errno;
         }
         memcpy(block->colo_cache, block->host, block->used_length);
     }
@@ -4009,18 +4007,6 @@ int colo_init_ram_cache(void)
     memory_global_dirty_log_start();
 
     return 0;
-
-out_locked:
-
-    RAMBLOCK_FOREACH_NOT_IGNORED(block) {
-        if (block->colo_cache) {
-            qemu_anon_ram_free(block->colo_cache, block->used_length);
-            block->colo_cache = NULL;
-        }
-    }
-
-    rcu_read_unlock();
-    return -errno;
 }
 
 /* It is need to hold the global lock to call this helper */
@@ -4034,16 +4020,14 @@ void colo_release_ram_cache(void)
         block->bmap = NULL;
     }
 
-    rcu_read_lock();
-
-    RAMBLOCK_FOREACH_NOT_IGNORED(block) {
-        if (block->colo_cache) {
-            qemu_anon_ram_free(block->colo_cache, block->used_length);
-            block->colo_cache = NULL;
+    WITH_RCU_READ_LOCK_GUARD() {
+        RAMBLOCK_FOREACH_NOT_IGNORED(block) {
+            if (block->colo_cache) {
+                qemu_anon_ram_free(block->colo_cache, block->used_length);
+                block->colo_cache = NULL;
+            }
         }
     }
-
-    rcu_read_unlock();
     qemu_mutex_destroy(&ram_state->bitmap_mutex);
     g_free(ram_state);
     ram_state = NULL;
@@ -4281,31 +4265,30 @@ static void colo_flush_ram_cache(void)
     unsigned long offset = 0;
 
     memory_global_dirty_log_sync();
-    rcu_read_lock();
-    RAMBLOCK_FOREACH_NOT_IGNORED(block) {
-        ramblock_sync_dirty_bitmap(ram_state, block);
+    WITH_RCU_READ_LOCK_GUARD() {
+        RAMBLOCK_FOREACH_NOT_IGNORED(block) {
+            ramblock_sync_dirty_bitmap(ram_state, block);
+        }
     }
-    rcu_read_unlock();
 
     trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
-    rcu_read_lock();
-    block = QLIST_FIRST_RCU(&ram_list.blocks);
+    WITH_RCU_READ_LOCK_GUARD() {
+        block = QLIST_FIRST_RCU(&ram_list.blocks);
 
-    while (block) {
-        offset = migration_bitmap_find_dirty(ram_state, block, offset);
+        while (block) {
+            offset = migration_bitmap_find_dirty(ram_state, block, offset);
 
-        if (offset << TARGET_PAGE_BITS >= block->used_length) {
-            offset = 0;
-            block = QLIST_NEXT_RCU(block, next);
-        } else {
-            migration_bitmap_clear_dirty(ram_state, block, offset);
-            dst_host = block->host + (offset << TARGET_PAGE_BITS);
-            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
-            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+            if (offset << TARGET_PAGE_BITS >= block->used_length) {
+                offset = 0;
+                block = QLIST_NEXT_RCU(block, next);
+            } else {
+                migration_bitmap_clear_dirty(ram_state, block, offset);
+                dst_host = block->host + (offset << TARGET_PAGE_BITS);
+                src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
+                memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+            }
         }
     }
-
-    rcu_read_unlock();
     trace_colo_flush_ram_cache_end();
 }
 
@@ -4504,16 +4487,15 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
      * it will be necessary to reduce the granularity of this
      * critical section.
      */
-    rcu_read_lock();
+    WITH_RCU_READ_LOCK_GUARD() {
+        if (postcopy_running) {
+            ret = ram_load_postcopy(f);
+        } else {
+            ret = ram_load_precopy(f);
+        }
 
-    if (postcopy_running) {
-        ret = ram_load_postcopy(f);
-    } else {
-        ret = ram_load_precopy(f);
+        ret |= wait_for_decompress_done();
     }
-
-    ret |= wait_for_decompress_done();
-    rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
 
     if (!ret  && migration_incoming_in_colo_state()) {
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [Qemu-devel] [PATCH v3 4/5] migration: Use automatic rcu_read unlock in rdma.c
  2019-09-13 10:25 [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert (git)
                   ` (2 preceding siblings ...)
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 3/5] migration: Use automatic rcu_read unlock in ram.c Dr. David Alan Gilbert (git)
@ 2019-09-13 10:25 ` Dr. David Alan Gilbert (git)
  2019-09-19 17:34   ` Daniel P. Berrangé
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 5/5] rcu: Use automatic rcu_read unlock in core memory/exec code Dr. David Alan Gilbert (git)
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-09-13 10:25 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Use the automatic read unlocker in migration/rdma.c.
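
Most of the converted functions end up with the same shape; the early
returns, including the one hidden inside CHECK_ERROR_STATE() (whose
explicit rcu_read_unlock() is dropped in the first hunk), now rely on
the guard's cleanup:

    RCU_READ_LOCK_GUARD();
    rdma = atomic_rcu_read(&rioc->rdmaout);
    if (!rdma) {
        return -EIO;         /* guard releases the RCU read lock */
    }

    CHECK_ERROR_STATE();     /* its early return is now safe too */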

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/rdma.c | 57 ++++++++++--------------------------------------
 1 file changed, 11 insertions(+), 46 deletions(-)

diff --git a/migration/rdma.c b/migration/rdma.c
index 78e6b72bac..5c9054721d 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -88,7 +88,6 @@ static uint32_t known_capabilities = RDMA_CAPABILITY_PIN_ALL;
                                 " to abort!"); \
                 rdma->error_reported = 1; \
             } \
-            rcu_read_unlock(); \
             return rdma->error_state; \
         } \
     } while (0)
@@ -2678,11 +2677,10 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
     size_t i;
     size_t len = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     rdma = atomic_rcu_read(&rioc->rdmaout);
 
     if (!rdma) {
-        rcu_read_unlock();
         return -EIO;
     }
 
@@ -2695,7 +2693,6 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
     ret = qemu_rdma_write_flush(f, rdma);
     if (ret < 0) {
         rdma->error_state = ret;
-        rcu_read_unlock();
         return ret;
     }
 
@@ -2715,7 +2712,6 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
 
             if (ret < 0) {
                 rdma->error_state = ret;
-                rcu_read_unlock();
                 return ret;
             }
 
@@ -2724,7 +2720,6 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
         }
     }
 
-    rcu_read_unlock();
     return done;
 }
 
@@ -2764,11 +2759,10 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc,
     ssize_t i;
     size_t done = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     rdma = atomic_rcu_read(&rioc->rdmain);
 
     if (!rdma) {
-        rcu_read_unlock();
         return -EIO;
     }
 
@@ -2805,7 +2799,6 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc,
 
         if (ret < 0) {
             rdma->error_state = ret;
-            rcu_read_unlock();
             return ret;
         }
 
@@ -2819,14 +2812,12 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc,
         /* Still didn't get enough, so lets just return */
         if (want) {
             if (done == 0) {
-                rcu_read_unlock();
                 return QIO_CHANNEL_ERR_BLOCK;
             } else {
                 break;
             }
         }
     }
-    rcu_read_unlock();
     return done;
 }
 
@@ -2882,7 +2873,7 @@ qio_channel_rdma_source_prepare(GSource *source,
     GIOCondition cond = 0;
     *timeout = -1;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     if (rsource->condition == G_IO_IN) {
         rdma = atomic_rcu_read(&rsource->rioc->rdmain);
     } else {
@@ -2891,7 +2882,6 @@ qio_channel_rdma_source_prepare(GSource *source,
 
     if (!rdma) {
         error_report("RDMAContext is NULL when prepare Gsource");
-        rcu_read_unlock();
         return FALSE;
     }
 
@@ -2900,7 +2890,6 @@ qio_channel_rdma_source_prepare(GSource *source,
     }
     cond |= G_IO_OUT;
 
-    rcu_read_unlock();
     return cond & rsource->condition;
 }
 
@@ -2911,7 +2900,7 @@ qio_channel_rdma_source_check(GSource *source)
     RDMAContext *rdma;
     GIOCondition cond = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     if (rsource->condition == G_IO_IN) {
         rdma = atomic_rcu_read(&rsource->rioc->rdmain);
     } else {
@@ -2920,7 +2909,6 @@ qio_channel_rdma_source_check(GSource *source)
 
     if (!rdma) {
         error_report("RDMAContext is NULL when check Gsource");
-        rcu_read_unlock();
         return FALSE;
     }
 
@@ -2929,7 +2917,6 @@ qio_channel_rdma_source_check(GSource *source)
     }
     cond |= G_IO_OUT;
 
-    rcu_read_unlock();
     return cond & rsource->condition;
 }
 
@@ -2943,7 +2930,7 @@ qio_channel_rdma_source_dispatch(GSource *source,
     RDMAContext *rdma;
     GIOCondition cond = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     if (rsource->condition == G_IO_IN) {
         rdma = atomic_rcu_read(&rsource->rioc->rdmain);
     } else {
@@ -2952,7 +2939,6 @@ qio_channel_rdma_source_dispatch(GSource *source,
 
     if (!rdma) {
         error_report("RDMAContext is NULL when dispatch Gsource");
-        rcu_read_unlock();
         return FALSE;
     }
 
@@ -2961,7 +2947,6 @@ qio_channel_rdma_source_dispatch(GSource *source,
     }
     cond |= G_IO_OUT;
 
-    rcu_read_unlock();
     return (*func)(QIO_CHANNEL(rsource->rioc),
                    (cond & rsource->condition),
                    user_data);
@@ -3058,7 +3043,7 @@ qio_channel_rdma_shutdown(QIOChannel *ioc,
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
     RDMAContext *rdmain, *rdmaout;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
 
     rdmain = atomic_rcu_read(&rioc->rdmain);
     rdmaout = atomic_rcu_read(&rioc->rdmain);
@@ -3085,7 +3070,6 @@ qio_channel_rdma_shutdown(QIOChannel *ioc,
         break;
     }
 
-    rcu_read_unlock();
     return 0;
 }
 
@@ -3131,18 +3115,16 @@ static size_t qemu_rdma_save_page(QEMUFile *f, void *opaque,
     RDMAContext *rdma;
     int ret;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     rdma = atomic_rcu_read(&rioc->rdmaout);
 
     if (!rdma) {
-        rcu_read_unlock();
         return -EIO;
     }
 
     CHECK_ERROR_STATE();
 
     if (migration_in_postcopy()) {
-        rcu_read_unlock();
         return RAM_SAVE_CONTROL_NOT_SUPP;
     }
 
@@ -3227,11 +3209,9 @@ static size_t qemu_rdma_save_page(QEMUFile *f, void *opaque,
         }
     }
 
-    rcu_read_unlock();
     return RAM_SAVE_CONTROL_DELAYED;
 err:
     rdma->error_state = ret;
-    rcu_read_unlock();
     return ret;
 }
 
@@ -3451,11 +3431,10 @@ static int qemu_rdma_registration_handle(QEMUFile *f, void *opaque)
     int count = 0;
     int i = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     rdma = atomic_rcu_read(&rioc->rdmain);
 
     if (!rdma) {
-        rcu_read_unlock();
         return -EIO;
     }
 
@@ -3698,7 +3677,6 @@ out:
     if (ret < 0) {
         rdma->error_state = ret;
     }
-    rcu_read_unlock();
     return ret;
 }
 
@@ -3716,11 +3694,10 @@ rdma_block_notification_handle(QIOChannelRDMA *rioc, const char *name)
     int curr;
     int found = -1;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     rdma = atomic_rcu_read(&rioc->rdmain);
 
     if (!rdma) {
-        rcu_read_unlock();
         return -EIO;
     }
 
@@ -3734,7 +3711,6 @@ rdma_block_notification_handle(QIOChannelRDMA *rioc, const char *name)
 
     if (found == -1) {
         error_report("RAMBlock '%s' not found on destination", name);
-        rcu_read_unlock();
         return -ENOENT;
     }
 
@@ -3742,7 +3718,6 @@ rdma_block_notification_handle(QIOChannelRDMA *rioc, const char *name)
     trace_rdma_block_notification_handle(name, rdma->next_src_index);
     rdma->next_src_index++;
 
-    rcu_read_unlock();
     return 0;
 }
 
@@ -3767,17 +3742,15 @@ static int qemu_rdma_registration_start(QEMUFile *f, void *opaque,
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(opaque);
     RDMAContext *rdma;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     rdma = atomic_rcu_read(&rioc->rdmaout);
     if (!rdma) {
-        rcu_read_unlock();
         return -EIO;
     }
 
     CHECK_ERROR_STATE();
 
     if (migration_in_postcopy()) {
-        rcu_read_unlock();
         return 0;
     }
 
@@ -3785,7 +3758,6 @@ static int qemu_rdma_registration_start(QEMUFile *f, void *opaque,
     qemu_put_be64(f, RAM_SAVE_FLAG_HOOK);
     qemu_fflush(f);
 
-    rcu_read_unlock();
     return 0;
 }
 
@@ -3802,17 +3774,15 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
     RDMAControlHeader head = { .len = 0, .repeat = 1 };
     int ret = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     rdma = atomic_rcu_read(&rioc->rdmaout);
     if (!rdma) {
-        rcu_read_unlock();
         return -EIO;
     }
 
     CHECK_ERROR_STATE();
 
     if (migration_in_postcopy()) {
-        rcu_read_unlock();
         return 0;
     }
 
@@ -3844,7 +3814,6 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
                     qemu_rdma_reg_whole_ram_blocks : NULL);
         if (ret < 0) {
             ERROR(errp, "receiving remote info!");
-            rcu_read_unlock();
             return ret;
         }
 
@@ -3868,7 +3837,6 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
                         "not identical on both the source and destination.",
                         local->nb_blocks, nb_dest_blocks);
             rdma->error_state = -EINVAL;
-            rcu_read_unlock();
             return -EINVAL;
         }
 
@@ -3885,7 +3853,6 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
                             local->block[i].length,
                             rdma->dest_blocks[i].length);
                 rdma->error_state = -EINVAL;
-                rcu_read_unlock();
                 return -EINVAL;
             }
             local->block[i].remote_host_addr =
@@ -3903,11 +3870,9 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
         goto err;
     }
 
-    rcu_read_unlock();
     return 0;
 err:
     rdma->error_state = ret;
-    rcu_read_unlock();
     return ret;
 }
 
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [Qemu-devel] [PATCH v3 5/5] rcu: Use automatic rcu_read unlock in core memory/exec code
  2019-09-13 10:25 [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert (git)
                   ` (3 preceding siblings ...)
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 4/5] migration: Use automatic rcu_read unlock in rdma.c Dr. David Alan Gilbert (git)
@ 2019-09-13 10:25 ` Dr. David Alan Gilbert (git)
  2019-09-19 17:35   ` Daniel P. Berrangé
  2019-09-25 10:32 ` [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert
  2019-09-25 13:13 ` Dr. David Alan Gilbert
  6 siblings, 1 reply; 19+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-09-13 10:25 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 exec.c                  | 120 +++++++++++++++-------------------
 include/exec/ram_addr.h | 138 +++++++++++++++++++---------------------
 memory.c                |  15 ++---
 3 files changed, 120 insertions(+), 153 deletions(-)

diff --git a/exec.c b/exec.c
index 235d6bc883..e75be06819 100644
--- a/exec.c
+++ b/exec.c
@@ -1034,16 +1034,14 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
         return;
     }
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     mr = address_space_translate(as, addr, &addr, &l, false, attrs);
     if (!(memory_region_is_ram(mr)
           || memory_region_is_romd(mr))) {
-        rcu_read_unlock();
         return;
     }
     ram_addr = memory_region_get_ram_addr(mr) + addr;
     tb_invalidate_phys_page_range(ram_addr, ram_addr + 1, 0);
-    rcu_read_unlock();
 }
 
 static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
@@ -1329,14 +1327,13 @@ static void tlb_reset_dirty_range_all(ram_addr_t start, ram_addr_t length)
     end = TARGET_PAGE_ALIGN(start + length);
     start &= TARGET_PAGE_MASK;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     block = qemu_get_ram_block(start);
     assert(block == qemu_get_ram_block(end - 1));
     start1 = (uintptr_t)ramblock_ptr(block, start - block->offset);
     CPU_FOREACH(cpu) {
         tlb_reset_dirty(cpu, start1, length);
     }
-    rcu_read_unlock();
 }
 
 /* Note: start and end must be within the same ram block.  */
@@ -1357,30 +1354,29 @@ bool cpu_physical_memory_test_and_clear_dirty(ram_addr_t start,
     end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
     page = start >> TARGET_PAGE_BITS;
 
-    rcu_read_lock();
-
-    blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
-    ramblock = qemu_get_ram_block(start);
-    /* Range sanity check on the ramblock */
-    assert(start >= ramblock->offset &&
-           start + length <= ramblock->offset + ramblock->used_length);
-
-    while (page < end) {
-        unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
-        unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
-        unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
+    WITH_RCU_READ_LOCK_GUARD() {
+        blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+        ramblock = qemu_get_ram_block(start);
+        /* Range sanity check on the ramblock */
+        assert(start >= ramblock->offset &&
+               start + length <= ramblock->offset + ramblock->used_length);
+
+        while (page < end) {
+            unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
+            unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
+            unsigned long num = MIN(end - page,
+                                    DIRTY_MEMORY_BLOCK_SIZE - offset);
+
+            dirty |= bitmap_test_and_clear_atomic(blocks->blocks[idx],
+                                                  offset, num);
+            page += num;
+        }
 
-        dirty |= bitmap_test_and_clear_atomic(blocks->blocks[idx],
-                                              offset, num);
-        page += num;
+        mr_offset = (ram_addr_t)(page << TARGET_PAGE_BITS) - ramblock->offset;
+        mr_size = (end - page) << TARGET_PAGE_BITS;
+        memory_region_clear_dirty_bitmap(ramblock->mr, mr_offset, mr_size);
     }
 
-    mr_offset = (ram_addr_t)(page << TARGET_PAGE_BITS) - ramblock->offset;
-    mr_size = (end - page) << TARGET_PAGE_BITS;
-    memory_region_clear_dirty_bitmap(ramblock->mr, mr_offset, mr_size);
-
-    rcu_read_unlock();
-
     if (dirty && tcg_enabled()) {
         tlb_reset_dirty_range_all(start, length);
     }
@@ -1408,28 +1404,27 @@ DirtyBitmapSnapshot *cpu_physical_memory_snapshot_and_clear_dirty
     end  = last  >> TARGET_PAGE_BITS;
     dest = 0;
 
-    rcu_read_lock();
-
-    blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+    WITH_RCU_READ_LOCK_GUARD() {
+        blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
 
-    while (page < end) {
-        unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
-        unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
-        unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
+        while (page < end) {
+            unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
+            unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
+            unsigned long num = MIN(end - page,
+                                    DIRTY_MEMORY_BLOCK_SIZE - offset);
 
-        assert(QEMU_IS_ALIGNED(offset, (1 << BITS_PER_LEVEL)));
-        assert(QEMU_IS_ALIGNED(num,    (1 << BITS_PER_LEVEL)));
-        offset >>= BITS_PER_LEVEL;
+            assert(QEMU_IS_ALIGNED(offset, (1 << BITS_PER_LEVEL)));
+            assert(QEMU_IS_ALIGNED(num,    (1 << BITS_PER_LEVEL)));
+            offset >>= BITS_PER_LEVEL;
 
-        bitmap_copy_and_clear_atomic(snap->dirty + dest,
-                                     blocks->blocks[idx] + offset,
-                                     num);
-        page += num;
-        dest += num >> BITS_PER_LEVEL;
+            bitmap_copy_and_clear_atomic(snap->dirty + dest,
+                                         blocks->blocks[idx] + offset,
+                                         num);
+            page += num;
+            dest += num >> BITS_PER_LEVEL;
+        }
     }
 
-    rcu_read_unlock();
-
     if (tcg_enabled()) {
         tlb_reset_dirty_range_all(start, length);
     }
@@ -1661,7 +1656,7 @@ void ram_block_dump(Monitor *mon)
     RAMBlock *block;
     char *psize;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     monitor_printf(mon, "%24s %8s  %18s %18s %18s\n",
                    "Block Name", "PSize", "Offset", "Used", "Total");
     RAMBLOCK_FOREACH(block) {
@@ -1673,7 +1668,6 @@ void ram_block_dump(Monitor *mon)
                        (uint64_t)block->max_length);
         g_free(psize);
     }
-    rcu_read_unlock();
 }
 
 #ifdef __linux__
@@ -1995,11 +1989,10 @@ static unsigned long last_ram_page(void)
     RAMBlock *block;
     ram_addr_t last = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     RAMBLOCK_FOREACH(block) {
         last = MAX(last, block->offset + block->max_length);
     }
-    rcu_read_unlock();
     return last >> TARGET_PAGE_BITS;
 }
 
@@ -2086,7 +2079,7 @@ void qemu_ram_set_idstr(RAMBlock *new_block, const char *name, DeviceState *dev)
     }
     pstrcat(new_block->idstr, sizeof(new_block->idstr), name);
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     RAMBLOCK_FOREACH(block) {
         if (block != new_block &&
             !strcmp(block->idstr, new_block->idstr)) {
@@ -2095,7 +2088,6 @@ void qemu_ram_set_idstr(RAMBlock *new_block, const char *name, DeviceState *dev)
             abort();
         }
     }
-    rcu_read_unlock();
 }
 
 /* Called with iothread lock held.  */
@@ -2637,17 +2629,16 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
 
     if (xen_enabled()) {
         ram_addr_t ram_addr;
-        rcu_read_lock();
+        RCU_READ_LOCK_GUARD();
         ram_addr = xen_ram_addr_from_mapcache(ptr);
         block = qemu_get_ram_block(ram_addr);
         if (block) {
             *offset = ram_addr - block->offset;
         }
-        rcu_read_unlock();
         return block;
     }
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     block = atomic_rcu_read(&ram_list.mru_block);
     if (block && block->host && host - block->host < block->max_length) {
         goto found;
@@ -2663,7 +2654,6 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
         }
     }
 
-    rcu_read_unlock();
     return NULL;
 
 found:
@@ -2671,7 +2661,6 @@ found:
     if (round_offset) {
         *offset &= TARGET_PAGE_MASK;
     }
-    rcu_read_unlock();
     return block;
 }
 
@@ -3380,10 +3369,9 @@ MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
     FlatView *fv;
 
     if (len > 0) {
-        rcu_read_lock();
+        RCU_READ_LOCK_GUARD();
         fv = address_space_to_flatview(as);
         result = flatview_read(fv, addr, attrs, buf, len);
-        rcu_read_unlock();
     }
 
     return result;
@@ -3397,10 +3385,9 @@ MemTxResult address_space_write(AddressSpace *as, hwaddr addr,
     FlatView *fv;
 
     if (len > 0) {
-        rcu_read_lock();
+        RCU_READ_LOCK_GUARD();
         fv = address_space_to_flatview(as);
         result = flatview_write(fv, addr, attrs, buf, len);
-        rcu_read_unlock();
     }
 
     return result;
@@ -3440,7 +3427,7 @@ static inline MemTxResult address_space_write_rom_internal(AddressSpace *as,
     hwaddr addr1;
     MemoryRegion *mr;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     while (len > 0) {
         l = len;
         mr = address_space_translate(as, addr, &addr1, &l, true, attrs);
@@ -3465,7 +3452,6 @@ static inline MemTxResult address_space_write_rom_internal(AddressSpace *as,
         buf += l;
         addr += l;
     }
-    rcu_read_unlock();
     return MEMTX_OK;
 }
 
@@ -3610,10 +3596,9 @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
     FlatView *fv;
     bool result;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     fv = address_space_to_flatview(as);
     result = flatview_access_valid(fv, addr, len, is_write, attrs);
-    rcu_read_unlock();
     return result;
 }
 
@@ -3668,13 +3653,12 @@ void *address_space_map(AddressSpace *as,
     }
 
     l = len;
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     fv = address_space_to_flatview(as);
     mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
 
     if (!memory_access_is_direct(mr, is_write)) {
         if (atomic_xchg(&bounce.in_use, true)) {
-            rcu_read_unlock();
             return NULL;
         }
         /* Avoid unbounded allocations */
@@ -3690,7 +3674,6 @@ void *address_space_map(AddressSpace *as,
                                bounce.buffer, l);
         }
 
-        rcu_read_unlock();
         *plen = l;
         return bounce.buffer;
     }
@@ -3700,7 +3683,6 @@ void *address_space_map(AddressSpace *as,
     *plen = flatview_extend_translation(fv, addr, len, mr, xlat,
                                         l, is_write, attrs);
     ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
-    rcu_read_unlock();
 
     return ptr;
 }
@@ -3968,13 +3950,12 @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)
     hwaddr l = 1;
     bool res;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     mr = address_space_translate(&address_space_memory,
                                  phys_addr, &phys_addr, &l, false,
                                  MEMTXATTRS_UNSPECIFIED);
 
     res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
-    rcu_read_unlock();
     return res;
 }
 
@@ -3983,14 +3964,13 @@ int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
     RAMBlock *block;
     int ret = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     RAMBLOCK_FOREACH(block) {
         ret = func(block, opaque);
         if (ret) {
             break;
         }
     }
-    rcu_read_unlock();
     return ret;
 }
 
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index a327a80cfe..be76b0bfba 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -199,30 +199,29 @@ static inline bool cpu_physical_memory_get_dirty(ram_addr_t start,
     end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
     page = start >> TARGET_PAGE_BITS;
 
-    rcu_read_lock();
-
-    blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+    WITH_RCU_READ_LOCK_GUARD() {
+        blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
+
+        idx = page / DIRTY_MEMORY_BLOCK_SIZE;
+        offset = page % DIRTY_MEMORY_BLOCK_SIZE;
+        base = page - offset;
+        while (page < end) {
+            unsigned long next = MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE);
+            unsigned long num = next - base;
+            unsigned long found = find_next_bit(blocks->blocks[idx],
+                                                num, offset);
+            if (found < num) {
+                dirty = true;
+                break;
+            }
 
-    idx = page / DIRTY_MEMORY_BLOCK_SIZE;
-    offset = page % DIRTY_MEMORY_BLOCK_SIZE;
-    base = page - offset;
-    while (page < end) {
-        unsigned long next = MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE);
-        unsigned long num = next - base;
-        unsigned long found = find_next_bit(blocks->blocks[idx], num, offset);
-        if (found < num) {
-            dirty = true;
-            break;
+            page = next;
+            idx++;
+            offset = 0;
+            base += DIRTY_MEMORY_BLOCK_SIZE;
         }
-
-        page = next;
-        idx++;
-        offset = 0;
-        base += DIRTY_MEMORY_BLOCK_SIZE;
     }
 
-    rcu_read_unlock();
-
     return dirty;
 }
 
@@ -240,7 +239,7 @@ static inline bool cpu_physical_memory_all_dirty(ram_addr_t start,
     end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
     page = start >> TARGET_PAGE_BITS;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
 
     blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
 
@@ -262,8 +261,6 @@ static inline bool cpu_physical_memory_all_dirty(ram_addr_t start,
         base += DIRTY_MEMORY_BLOCK_SIZE;
     }
 
-    rcu_read_unlock();
-
     return dirty;
 }
 
@@ -315,13 +312,11 @@ static inline void cpu_physical_memory_set_dirty_flag(ram_addr_t addr,
     idx = page / DIRTY_MEMORY_BLOCK_SIZE;
     offset = page % DIRTY_MEMORY_BLOCK_SIZE;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
 
     blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
 
     set_bit_atomic(offset, blocks->blocks[idx]);
-
-    rcu_read_unlock();
 }
 
 static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
@@ -340,39 +335,37 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
     end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
     page = start >> TARGET_PAGE_BITS;
 
-    rcu_read_lock();
+    WITH_RCU_READ_LOCK_GUARD() {
+        for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
+            blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i]);
+        }
 
-    for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
-        blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i]);
-    }
+        idx = page / DIRTY_MEMORY_BLOCK_SIZE;
+        offset = page % DIRTY_MEMORY_BLOCK_SIZE;
+        base = page - offset;
+        while (page < end) {
+            unsigned long next = MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE);
 
-    idx = page / DIRTY_MEMORY_BLOCK_SIZE;
-    offset = page % DIRTY_MEMORY_BLOCK_SIZE;
-    base = page - offset;
-    while (page < end) {
-        unsigned long next = MIN(end, base + DIRTY_MEMORY_BLOCK_SIZE);
+            if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
+                bitmap_set_atomic(blocks[DIRTY_MEMORY_MIGRATION]->blocks[idx],
+                                  offset, next - page);
+            }
+            if (unlikely(mask & (1 << DIRTY_MEMORY_VGA))) {
+                bitmap_set_atomic(blocks[DIRTY_MEMORY_VGA]->blocks[idx],
+                                  offset, next - page);
+            }
+            if (unlikely(mask & (1 << DIRTY_MEMORY_CODE))) {
+                bitmap_set_atomic(blocks[DIRTY_MEMORY_CODE]->blocks[idx],
+                                  offset, next - page);
+            }
 
-        if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
-            bitmap_set_atomic(blocks[DIRTY_MEMORY_MIGRATION]->blocks[idx],
-                              offset, next - page);
+            page = next;
+            idx++;
+            offset = 0;
+            base += DIRTY_MEMORY_BLOCK_SIZE;
         }
-        if (unlikely(mask & (1 << DIRTY_MEMORY_VGA))) {
-            bitmap_set_atomic(blocks[DIRTY_MEMORY_VGA]->blocks[idx],
-                              offset, next - page);
-        }
-        if (unlikely(mask & (1 << DIRTY_MEMORY_CODE))) {
-            bitmap_set_atomic(blocks[DIRTY_MEMORY_CODE]->blocks[idx],
-                              offset, next - page);
-        }
-
-        page = next;
-        idx++;
-        offset = 0;
-        base += DIRTY_MEMORY_BLOCK_SIZE;
     }
 
-    rcu_read_unlock();
-
     xen_hvm_modified_memory(start, length);
 }
 
@@ -402,36 +395,35 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
         offset = BIT_WORD((start >> TARGET_PAGE_BITS) %
                           DIRTY_MEMORY_BLOCK_SIZE);
 
-        rcu_read_lock();
+        WITH_RCU_READ_LOCK_GUARD() {
+            for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
+                blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i])->blocks;
+            }
 
-        for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
-            blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i])->blocks;
-        }
+            for (k = 0; k < nr; k++) {
+                if (bitmap[k]) {
+                    unsigned long temp = leul_to_cpu(bitmap[k]);
 
-        for (k = 0; k < nr; k++) {
-            if (bitmap[k]) {
-                unsigned long temp = leul_to_cpu(bitmap[k]);
+                    atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
 
-                atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
+                    if (global_dirty_log) {
+                        atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset],
+                                  temp);
+                    }
 
-                if (global_dirty_log) {
-                    atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset],
-                              temp);
+                    if (tcg_enabled()) {
+                        atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset],
+                                  temp);
+                    }
                 }
 
-                if (tcg_enabled()) {
-                    atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset], temp);
+                if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
+                    offset = 0;
+                    idx++;
                 }
             }
-
-            if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
-                offset = 0;
-                idx++;
-            }
         }
 
-        rcu_read_unlock();
-
         xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
     } else {
         uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
diff --git a/memory.c b/memory.c
index 61a254c3f9..e867a1f2b3 100644
--- a/memory.c
+++ b/memory.c
@@ -799,14 +799,13 @@ FlatView *address_space_get_flatview(AddressSpace *as)
 {
     FlatView *view;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     do {
         view = address_space_to_flatview(as);
         /* If somebody has replaced as->current_map concurrently,
          * flatview_ref returns false.
          */
     } while (!flatview_ref(view));
-    rcu_read_unlock();
     return view;
 }
 
@@ -2177,12 +2176,11 @@ int memory_region_get_fd(MemoryRegion *mr)
 {
     int fd;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     while (mr->alias) {
         mr = mr->alias;
     }
     fd = mr->ram_block->fd;
-    rcu_read_unlock();
 
     return fd;
 }
@@ -2192,14 +2190,13 @@ void *memory_region_get_ram_ptr(MemoryRegion *mr)
     void *ptr;
     uint64_t offset = 0;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     while (mr->alias) {
         offset += mr->alias_offset;
         mr = mr->alias;
     }
     assert(mr->ram_block);
     ptr = qemu_map_ram_ptr(mr->ram_block, offset);
-    rcu_read_unlock();
 
     return ptr;
 }
@@ -2589,12 +2586,11 @@ MemoryRegionSection memory_region_find(MemoryRegion *mr,
                                        hwaddr addr, uint64_t size)
 {
     MemoryRegionSection ret;
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     ret = memory_region_find_rcu(mr, addr, size);
     if (ret.mr) {
         memory_region_ref(ret.mr);
     }
-    rcu_read_unlock();
     return ret;
 }
 
@@ -2602,9 +2598,8 @@ bool memory_region_present(MemoryRegion *container, hwaddr addr)
 {
     MemoryRegion *mr;
 
-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
     mr = memory_region_find_rcu(container, addr, 1).mr;
-    rcu_read_unlock();
     return mr && mr != container;
 }
 
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants Dr. David Alan Gilbert (git)
@ 2019-09-13 12:01   ` Paolo Bonzini
  2019-09-13 18:45     ` Dr. David Alan Gilbert
  2019-09-19 17:31   ` Daniel P. Berrangé
  1 sibling, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2019-09-13 12:01 UTC (permalink / raw)
  To: Dr. David Alan Gilbert (git), qemu-devel, ehabkost, berrange, quintela

On 13/09/19 12:25, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> RCU_READ_LOCK_GUARD() takes the rcu_read_lock and then uses glib's
> g_auto infrastructure (and thus whatever the compiler's hooks are) to
> release it on all exits of the block.
> 
> WITH_RCU_READ_LOCK_GUARD() is similar but is used as a wrapper for the
> lock, i.e.:
> 
>    WITH_RCU_READ_LOCK_GUARD() {
>        stuff under lock
>    }
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  docs/devel/rcu.txt | 16 ++++++++++++++++
>  include/qemu/rcu.h | 25 +++++++++++++++++++++++++
>  2 files changed, 41 insertions(+)
> 
> diff --git a/docs/devel/rcu.txt b/docs/devel/rcu.txt
> index c84e7f42b2..d83fed2f79 100644
> --- a/docs/devel/rcu.txt
> +++ b/docs/devel/rcu.txt
> @@ -187,6 +187,22 @@ The following APIs must be used before RCU is used in a thread:
>  Note that these APIs are relatively heavyweight, and should _not_ be
>  nested.
>  
> +Convenience macros
> +==================
> +
> +Two macros are provided that automatically release the read lock at the
> +end of the scope.
> +
> +      RCU_READ_LOCK_GUARD()
> +
> +         Takes the lock and will release it at the end of the block it's
> +         used in.
> +
> +      WITH_RCU_READ_LOCK_GUARD()  { code }
> +
> +         Is used at the head of a block to protect the code within the block.
> +
> +Note that 'goto'ing out of the guarded block will also drop the lock.
>  
>  DIFFERENCES WITH LINUX
>  ======================
> diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
> index 22876d1428..3a8d4cf28b 100644
> --- a/include/qemu/rcu.h
> +++ b/include/qemu/rcu.h
> @@ -154,6 +154,31 @@ extern void call_rcu1(struct rcu_head *head, RCUCBFunc *func);
>        }),                                                                \
>        (RCUCBFunc *)g_free);
>  
> +typedef void RCUReadAuto;
> +static inline RCUReadAuto *rcu_read_auto_lock(void)
> +{
> +    rcu_read_lock();
> +    /* Anything non-NULL causes the cleanup function to be called */
> +    return (void *)(uintptr_t)0x1;
> +}
> +
> +static inline void rcu_read_auto_unlock(RCUReadAuto *r)
> +{
> +    rcu_read_unlock();
> +}
> +
> +G_DEFINE_AUTOPTR_CLEANUP_FUNC(RCUReadAuto, rcu_read_auto_unlock)
> +
> +#define WITH_RCU_READ_LOCK_GUARD() \
> +    WITH_RCU_READ_LOCK_GUARD_(_rcu_read_auto##__COUNTER__)
> +
> +#define WITH_RCU_READ_LOCK_GUARD_(var) \
> +    for (g_autoptr(RCUReadAuto) var = rcu_read_auto_lock(); \
> +        (var); rcu_read_auto_unlock(var), (var) = NULL)
> +
> +#define RCU_READ_LOCK_GUARD() \
> +    g_autoptr(RCUReadAuto) _rcu_read_auto = rcu_read_auto_lock()
> +
>  #ifdef __cplusplus
>  }
>  #endif
> 

Acked-by: Paolo Bonzini <pbonzini@redhat.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants
  2019-09-13 12:01   ` Paolo Bonzini
@ 2019-09-13 18:45     ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 19+ messages in thread
From: Dr. David Alan Gilbert @ 2019-09-13 18:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: quintela, berrange, qemu-devel, ehabkost

* Paolo Bonzini (pbonzini@redhat.com) wrote:
> On 13/09/19 12:25, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> > 
> > RCU_READ_LOCK_GUARD() takes the rcu_read_lock and then uses glib's
> > g_auto infrastructure (and thus whatever the compiler's hooks are) to
> > release it on all exits of the block.
> > 
> > WITH_RCU_READ_LOCK_GUARD() is similar but is used as a wrapper for the
> > lock, i.e.:
> > 
> >    WITH_RCU_READ_LOCK_GUARD() {
> >        stuff under lock
> >    }
> > 
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> >  docs/devel/rcu.txt | 16 ++++++++++++++++
> >  include/qemu/rcu.h | 25 +++++++++++++++++++++++++
> >  2 files changed, 41 insertions(+)
> > 
> > diff --git a/docs/devel/rcu.txt b/docs/devel/rcu.txt
> > index c84e7f42b2..d83fed2f79 100644
> > --- a/docs/devel/rcu.txt
> > +++ b/docs/devel/rcu.txt
> > @@ -187,6 +187,22 @@ The following APIs must be used before RCU is used in a thread:
> >  Note that these APIs are relatively heavyweight, and should _not_ be
> >  nested.
> >  
> > +Convenience macros
> > +==================
> > +
> > +Two macros are provided that automatically release the read lock at the
> > +end of the scope.
> > +
> > +      RCU_READ_LOCK_GUARD()
> > +
> > +         Takes the lock and will release it at the end of the block it's
> > +         used in.
> > +
> > +      WITH_RCU_READ_LOCK_GUARD()  { code }
> > +
> > +         Is used at the head of a block to protect the code within the block.
> > +
> > +Note that 'goto'ing out of the guarded block will also drop the lock.
> >  
> >  DIFFERENCES WITH LINUX
> >  ======================
> > diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
> > index 22876d1428..3a8d4cf28b 100644
> > --- a/include/qemu/rcu.h
> > +++ b/include/qemu/rcu.h
> > @@ -154,6 +154,31 @@ extern void call_rcu1(struct rcu_head *head, RCUCBFunc *func);
> >        }),                                                                \
> >        (RCUCBFunc *)g_free);
> >  
> > +typedef void RCUReadAuto;
> > +static inline RCUReadAuto *rcu_read_auto_lock(void)
> > +{
> > +    rcu_read_lock();
> > +    /* Anything non-NULL causes the cleanup function to be called */
> > +    return (void *)(uintptr_t)0x1;
> > +}
> > +
> > +static inline void rcu_read_auto_unlock(RCUReadAuto *r)
> > +{
> > +    rcu_read_unlock();
> > +}
> > +
> > +G_DEFINE_AUTOPTR_CLEANUP_FUNC(RCUReadAuto, rcu_read_auto_unlock)
> > +
> > +#define WITH_RCU_READ_LOCK_GUARD() \
> > +    WITH_RCU_READ_LOCK_GUARD_(_rcu_read_auto##__COUNTER__)
> > +
> > +#define WITH_RCU_READ_LOCK_GUARD_(var) \
> > +    for (g_autoptr(RCUReadAuto) var = rcu_read_auto_lock(); \
> > +        (var); rcu_read_auto_unlock(var), (var) = NULL)
> > +
> > +#define RCU_READ_LOCK_GUARD() \
> > +    g_autoptr(RCUReadAuto) _rcu_read_auto = rcu_read_auto_lock()
> > +
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > 
> 
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Can you look at the other bits of this series and tell me if they're OK?
If they are, do you want to take them (since it's RCU-based)?

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 1/5] rcu: Add automatically released rcu_read_lock variants Dr. David Alan Gilbert (git)
  2019-09-13 12:01   ` Paolo Bonzini
@ 2019-09-19 17:31   ` Daniel P. Berrangé
  1 sibling, 0 replies; 19+ messages in thread
From: Daniel P. Berrangé @ 2019-09-19 17:31 UTC (permalink / raw)
  To: Dr. David Alan Gilbert (git); +Cc: pbonzini, quintela, qemu-devel, ehabkost

On Fri, Sep 13, 2019 at 11:25:34AM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> RCU_READ_LOCK_GUARD() takes the rcu_read_lock and then uses glib's
> g_auto infrastructure (and thus whatever the compiler's hooks are) to
> release it on all exits of the block.
> 
> WITH_RCU_READ_LOCK_GUARD() is similar but is used as a wrapper for the
> lock, i.e.:
> 
>    WITH_RCU_READ_LOCK_GUARD() {
>        stuff under lock
>    }
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  docs/devel/rcu.txt | 16 ++++++++++++++++
>  include/qemu/rcu.h | 25 +++++++++++++++++++++++++
>  2 files changed, 41 insertions(+)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/5] migration: Fix missing rcu_read_unlock
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 2/5] migration: Fix missing rcu_read_unlock Dr. David Alan Gilbert (git)
@ 2019-09-19 17:32   ` Daniel P. Berrangé
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel P. Berrangé @ 2019-09-19 17:32 UTC (permalink / raw)
  To: Dr. David Alan Gilbert (git); +Cc: pbonzini, quintela, qemu-devel, ehabkost

On Fri, Sep 13, 2019 at 11:25:35AM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> Use the automatic rcu_read unlocker to fix a missing unlock.
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/ram.c | 35 +++++++++++++++++------------------
>  1 file changed, 17 insertions(+), 18 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/5] migration: Use automatic rcu_read unlock in ram.c
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 3/5] migration: Use automatic rcu_read unlock in ram.c Dr. David Alan Gilbert (git)
@ 2019-09-19 17:33   ` Daniel P. Berrangé
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel P. Berrangé @ 2019-09-19 17:33 UTC (permalink / raw)
  To: Dr. David Alan Gilbert (git); +Cc: pbonzini, quintela, qemu-devel, ehabkost

On Fri, Sep 13, 2019 at 11:25:36AM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> Use the automatic read unlocker in migration/ram.c
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/ram.c | 260 ++++++++++++++++++++++--------------------------
>  1 file changed, 121 insertions(+), 139 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 4/5] migration: Use automatic rcu_read unlock in rdma.c
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 4/5] migration: Use automatic rcu_read unlock in rdma.c Dr. David Alan Gilbert (git)
@ 2019-09-19 17:34   ` Daniel P. Berrangé
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel P. Berrangé @ 2019-09-19 17:34 UTC (permalink / raw)
  To: Dr. David Alan Gilbert (git); +Cc: pbonzini, quintela, qemu-devel, ehabkost

On Fri, Sep 13, 2019 at 11:25:37AM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> Use the automatic read unlocker in migration/rdma.c.
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/rdma.c | 57 ++++++++++--------------------------------------
>  1 file changed, 11 insertions(+), 46 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 5/5] rcu: Use automatic rc_read unlock in core memory/exec code
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 5/5] rcu: Use automatic rc_read unlock in core memory/exec code Dr. David Alan Gilbert (git)
@ 2019-09-19 17:35   ` Daniel P. Berrangé
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel P. Berrangé @ 2019-09-19 17:35 UTC (permalink / raw)
  To: Dr. David Alan Gilbert (git); +Cc: pbonzini, quintela, qemu-devel, ehabkost

On Fri, Sep 13, 2019 at 11:25:38AM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  exec.c                  | 120 +++++++++++++++-------------------
>  include/exec/ram_addr.h | 138 +++++++++++++++++++---------------------
>  memory.c                |  15 ++---
>  3 files changed, 120 insertions(+), 153 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock
  2019-09-13 10:25 [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert (git)
                   ` (4 preceding siblings ...)
  2019-09-13 10:25 ` [Qemu-devel] [PATCH v3 5/5] rcu: Use automatic rc_read unlock in core memory/exec code Dr. David Alan Gilbert (git)
@ 2019-09-25 10:32 ` Dr. David Alan Gilbert
  2019-09-25 13:13 ` Dr. David Alan Gilbert
  6 siblings, 0 replies; 19+ messages in thread
From: Dr. David Alan Gilbert @ 2019-09-25 10:32 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

* Dr. David Alan Gilbert (git) (dgilbert@redhat.com) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> This patch uses glib's g_auto mechanism to automatically free
> rcu_read_lock's at the end of the block.  Given that humans
> have a habit of forgetting an error path somewhere it's
> best to leave it to the compiler.

Queued

> v3
>   Add block-head version of macro
>   Rename
>   Add docs
>   Convert more cases using the block-head version
> 
> Dr. David Alan Gilbert (5):
>   rcu: Add automatically released rcu_read_lock variants
>   migration: Fix missing rcu_read_unlock
>   migration: Use automatic rcu_read unlock in ram.c
>   migration: Use automatic rcu_read unlock in rdma.c
>   rcu: Use automatic rc_read unlock in core memory/exec code
> 
>  docs/devel/rcu.txt      |  16 +++
>  exec.c                  | 120 +++++++---------
>  include/exec/ram_addr.h | 138 +++++++++----------
>  include/qemu/rcu.h      |  25 ++++
>  memory.c                |  15 +-
>  migration/ram.c         | 295 +++++++++++++++++++---------------------
>  migration/rdma.c        |  57 ++------
>  7 files changed, 310 insertions(+), 356 deletions(-)
> 
> -- 
> 2.21.0
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock
  2019-09-13 10:25 [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert (git)
                   ` (5 preceding siblings ...)
  2019-09-25 10:32 ` [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock Dr. David Alan Gilbert
@ 2019-09-25 13:13 ` Dr. David Alan Gilbert
  2019-09-25 13:21   ` Paolo Bonzini
  6 siblings, 1 reply; 19+ messages in thread
From: Dr. David Alan Gilbert @ 2019-09-25 13:13 UTC (permalink / raw)
  To: qemu-devel, pbonzini, ehabkost, berrange, quintela

* Dr. David Alan Gilbert (git) (dgilbert@redhat.com) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> This patch uses glib's g_auto mechanism to automatically free
> rcu_read_lock's at the end of the block.  Given that humans
> have a habit of forgetting an error path somewhere it's
> best to leave it to the compiler.

I've had to unqueue this - clang doesn't like the apparently unused
auto variable; we need to find a way to make that happy.
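
For context, this is roughly what RCU_READ_LOCK_GUARD() expands to once
g_autoptr is resolved (expansion simplified here):

    __attribute__((cleanup(glib_autoptr_cleanup_RCUReadAuto)))
        RCUReadAuto *_rcu_read_auto = rcu_read_auto_lock();
    /* the rest of the function never mentions _rcu_read_auto, so clang
     * reports it as unused even though the cleanup hook still runs */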

Dave

> v3
>   Add block-head version of macro
>   Rename
>   Add docs
>   Convert more cases using the block-head version
> 
> Dr. David Alan Gilbert (5):
>   rcu: Add automatically released rcu_read_lock variants
>   migration: Fix missing rcu_read_unlock
>   migration: Use automatic rcu_read unlock in ram.c
>   migration: Use automatic rcu_read unlock in rdma.c
>   rcu: Use automatic rc_read unlock in core memory/exec code
> 
>  docs/devel/rcu.txt      |  16 +++
>  exec.c                  | 120 +++++++---------
>  include/exec/ram_addr.h | 138 +++++++++----------
>  include/qemu/rcu.h      |  25 ++++
>  memory.c                |  15 +-
>  migration/ram.c         | 295 +++++++++++++++++++---------------------
>  migration/rdma.c        |  57 ++------
>  7 files changed, 310 insertions(+), 356 deletions(-)
> 
> -- 
> 2.21.0
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock
  2019-09-25 13:13 ` Dr. David Alan Gilbert
@ 2019-09-25 13:21   ` Paolo Bonzini
  2019-09-25 15:28     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2019-09-25 13:21 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, qemu-devel, ehabkost, berrange, quintela

On 25/09/19 15:13, Dr. David Alan Gilbert wrote:
> * Dr. David Alan Gilbert (git) (dgilbert@redhat.com) wrote:
>> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
>>
>> This patch uses glib's g_auto mechanism to automatically free
>> rcu_read_lock's at the end of the block.  Given that humans
>> have a habit of forgetting an error path somewhere it's
>> best to leave it to the compiler.
> 
> I've had to unqueue this - clang doesn't like the apparently unused
> auto variable; we need to find a way to make that happy.

__attribute__((unused))?
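
i.e. roughly this, on top of the macro from patch 1 (untested sketch):

    #define RCU_READ_LOCK_GUARD() \
        g_autoptr(RCUReadAuto) _rcu_read_auto __attribute__((unused)) = \
            rcu_read_auto_lock()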

Paolo

> Dave
> 
>> v3
>>   Add block-head version of macro
>>   Rename
>>   Add docs
>>   Convert more cases using the block-head version
>>
>> Dr. David Alan Gilbert (5):
>>   rcu: Add automatically released rcu_read_lock variants
>>   migration: Fix missing rcu_read_unlock
>>   migration: Use automatic rcu_read unlock in ram.c
>>   migration: Use automatic rcu_read unlock in rdma.c
>>   rcu: Use automatic rc_read unlock in core memory/exec code
>>
>>  docs/devel/rcu.txt      |  16 +++
>>  exec.c                  | 120 +++++++---------
>>  include/exec/ram_addr.h | 138 +++++++++----------
>>  include/qemu/rcu.h      |  25 ++++
>>  memory.c                |  15 +-
>>  migration/ram.c         | 295 +++++++++++++++++++---------------------
>>  migration/rdma.c        |  57 ++------
>>  7 files changed, 310 insertions(+), 356 deletions(-)
>>
>> -- 
>> 2.21.0
>>
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock
  2019-09-25 13:21   ` Paolo Bonzini
@ 2019-09-25 15:28     ` Dr. David Alan Gilbert
  2019-09-25 15:49       ` Daniel P. Berrangé
  2019-09-25 16:47       ` Paolo Bonzini
  0 siblings, 2 replies; 19+ messages in thread
From: Dr. David Alan Gilbert @ 2019-09-25 15:28 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: quintela, berrange, qemu-devel, ehabkost

* Paolo Bonzini (pbonzini@redhat.com) wrote:
> On 25/09/19 15:13, Dr. David Alan Gilbert wrote:
> > * Dr. David Alan Gilbert (git) (dgilbert@redhat.com) wrote:
> >> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> >>
> >> This patch uses glib's g_auto mechanism to automatically free
> >> rcu_read_lock's at the end of the block.  Given that humans
> >> have a habit of forgetting an error path somewhere it's
> >> best to leave it to the compiler.
> > 
> > I've had to unqueue this - clang doesn't like the apparently unused
> > auto variable; we need to find a way to make that happy.
> 
> __attribute__((unused))?

I worry that if I do that, then it'll optimise it out.

Dave

> Paolo
> 
> > Dave
> > 
> >> v3
> >>   Add block-head version of macro
> >>   Rename
> >>   Add docs
> >>   Convert more cases using the block-head version
> >>
> >> Dr. David Alan Gilbert (5):
> >>   rcu: Add automatically released rcu_read_lock variants
> >>   migration: Fix missing rcu_read_unlock
> >>   migration: Use automatic rcu_read unlock in ram.c
> >>   migration: Use automatic rcu_read unlock in rdma.c
> >>   rcu: Use automatic rc_read unlock in core memory/exec code
> >>
> >>  docs/devel/rcu.txt      |  16 +++
> >>  exec.c                  | 120 +++++++---------
> >>  include/exec/ram_addr.h | 138 +++++++++----------
> >>  include/qemu/rcu.h      |  25 ++++
> >>  memory.c                |  15 +-
> >>  migration/ram.c         | 295 +++++++++++++++++++---------------------
> >>  migration/rdma.c        |  57 ++------
> >>  7 files changed, 310 insertions(+), 356 deletions(-)
> >>
> >> -- 
> >> 2.21.0
> >>
> >>
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock
  2019-09-25 15:28     ` Dr. David Alan Gilbert
@ 2019-09-25 15:49       ` Daniel P. Berrangé
  2019-09-25 16:47       ` Paolo Bonzini
  1 sibling, 0 replies; 19+ messages in thread
From: Daniel P. Berrangé @ 2019-09-25 15:49 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Paolo Bonzini, qemu-devel, ehabkost, quintela

On Wed, Sep 25, 2019 at 04:28:58PM +0100, Dr. David Alan Gilbert wrote:
> * Paolo Bonzini (pbonzini@redhat.com) wrote:
> > On 25/09/19 15:13, Dr. David Alan Gilbert wrote:
> > > * Dr. David Alan Gilbert (git) (dgilbert@redhat.com) wrote:
> > >> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> > >>
> > >> This patch uses glib's g_auto mechanism to automatically free
> > >> rcu_read_lock's at the end of the block.  Given that humans
> > >> have a habit of forgetting an error path somewhere it's
> > >> best to leave it to the compiler.
> > > 
> > > I've had to unqueue this - clang doesn't like the apparently unused
> > > auto variable; we need to find a way to make that happy.
> > 
> > __attribute__((unused))?
> 
> I worry that if I do that, then it'll optimise it out.

Can you just insert a dummy use of the variable? E.g. libvirt does
this by taking the address of the unused variable and casting
the result to void.
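
Something along these lines on top of the guard macro, say (sketch only,
untested):

    #define RCU_READ_LOCK_GUARD()                                       \
        g_autoptr(RCUReadAuto) _rcu_read_auto = rcu_read_auto_lock();   \
        (void)&_rcu_read_auto   /* dummy use: address taken, cast to void */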


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] [PATCH v3 0/5] Automatic RCU read unlock
  2019-09-25 15:28     ` Dr. David Alan Gilbert
  2019-09-25 15:49       ` Daniel P. Berrangé
@ 2019-09-25 16:47       ` Paolo Bonzini
  1 sibling, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2019-09-25 16:47 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: quintela, berrange, qemu-devel, ehabkost

On 25/09/19 17:28, Dr. David Alan Gilbert wrote:
> * Paolo Bonzini (pbonzini@redhat.com) wrote:
>> On 25/09/19 15:13, Dr. David Alan Gilbert wrote:
>>> * Dr. David Alan Gilbert (git) (dgilbert@redhat.com) wrote:
>>>> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
>>>>
>>>> This patch uses glib's g_auto mechanism to automatically free
>>>> rcu_read_lock's at the end of the block.  Given that humans
>>>> have a habit of forgetting an error path somewhere it's
>>>> best to leave it to the compiler.
>>>
>>> I've had to unqueue this - clang doesn't like the apparently unused
>>> auto variable; we need to find a way to make that happy.
>>
>> __attribute__((unused))?
> 
> I worry that if I do that, then it'll optimise it out.

It cannot, since the function passed to the cleanup attribute can have
side effects.
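
A standalone illustration (not QEMU code): marking a cleanup-attributed
variable unused only silences the diagnostic, while the cleanup call is
still emitted because the compiler cannot assume it is side-effect free:

    #include <stdio.h>

    static void guard_cleanup(int *p)
    {
        printf("cleanup runs with value %d\n", *p);
    }

    int main(void)
    {
        __attribute__((cleanup(guard_cleanup), unused)) int guard = 42;
        return 0;   /* prints "cleanup runs with value 42" at scope exit */
    }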

Paolo

> 
> Dave
> 
>> Paolo
>>
>>> Dave
>>>
>>>> v3
>>>>   Add block-head version of macro
>>>>   Rename
>>>>   Add docs
>>>>   Convert more cases using the block-head version
>>>>
>>>> Dr. David Alan Gilbert (5):
>>>>   rcu: Add automatically released rcu_read_lock variants
>>>>   migration: Fix missing rcu_read_unlock
>>>>   migration: Use automatic rcu_read unlock in ram.c
>>>>   migration: Use automatic rcu_read unlock in rdma.c
>>>>   rcu: Use automatic rc_read unlock in core memory/exec code
>>>>
>>>>  docs/devel/rcu.txt      |  16 +++
>>>>  exec.c                  | 120 +++++++---------
>>>>  include/exec/ram_addr.h | 138 +++++++++----------
>>>>  include/qemu/rcu.h      |  25 ++++
>>>>  memory.c                |  15 +-
>>>>  migration/ram.c         | 295 +++++++++++++++++++---------------------
>>>>  migration/rdma.c        |  57 ++------
>>>>  7 files changed, 310 insertions(+), 356 deletions(-)
>>>>
>>>> -- 
>>>> 2.21.0
>>>>
>>>>
>>> --
>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>>
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 



^ permalink raw reply	[flat|nested] 19+ messages in thread
