* [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros
@ 2021-03-11  3:15 Mahmoud Mandour
  2021-03-11  3:15 ` [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD Mahmoud Mandour
                   ` (8 more replies)
  0 siblings, 9 replies; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour

Hello, 

This is my first contribution to the QEMU project. It replaces some
qemu_mutex_lock() calls and their respective qemu_mutex_unlock() calls
with QEMU_LOCK_GUARD macros.

Since it is partly a matter of judgment whether a given
qemu_mutex_lock/unlock pair should become a lock guard, I tried to
change only the places where doing so benefits readability and simplicity.

Specifically, I changed call sites where the conversion eliminates
goto paths, or where the locked region is long enough that it is not
immediately obvious which calls to qemu_mutex_lock/unlock form a
matching pair.
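For readers unfamiliar with the macros: QEMU's lock guards are built on the
compiler's cleanup variable attribute, which runs a function when a variable
goes out of scope. Below is a minimal sketch of the idea, using pthreads
rather than QemuMutex; LOCK_GUARD and protected_add are simplified
stand-ins invented for this illustration, not QEMU's real definitions.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Runs automatically when the guard variable leaves scope. */
static void guard_unlock(pthread_mutex_t **guard)
{
    if (*guard) {
        pthread_mutex_unlock(*guard);
    }
}

/* Lock now; unlock on every exit from the enclosing scope. */
#define LOCK_GUARD(m) \
    __attribute__((cleanup(guard_unlock))) pthread_mutex_t *guard_ = (m); \
    pthread_mutex_lock(guard_)

/* Every return path releases the lock; no goto/unlock pairs needed. */
int protected_add(int *counter, int delta)
{
    LOCK_GUARD(&demo_lock);
    if (delta == 0) {
        return -1;              /* early return still unlocks */
    }
    *counter += delta;
    return 0;
}
```

A default (non-recursive) pthread mutex would deadlock on the next call if
the early return leaked the lock, which is exactly the class of bug the
guard removes.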

Mahmoud Mandour (9):
  tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD
  block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  char: Replaced a qemu_mutex_lock with QEMU_LOCK_GUARD
  util: Replaced qemu_mutex_lock with QEMU_LOCK_GUARDs
  monitor: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  migration: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  virtio-iommu: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  hw/hyperv/vmbus: replaced qemu_mutex_lock with QEMU_LOCK_GUARD

 backends/tpm/tpm_emulator.c |   8 +-
 block/curl.c                |  13 +--
 block/nbd.c                 | 188 +++++++++++++++++-------------------
 chardev/char.c              |   3 +-
 hw/9pfs/9p-synth.c          |  12 +--
 hw/hyperv/vmbus.c           |  13 +--
 hw/virtio/virtio-iommu.c    |  78 +++++++--------
 migration/migration.c       |   6 +-
 migration/ram.c             |   6 +-
 monitor/monitor.c           |   8 +-
 monitor/qmp.c               |  51 +++++-----
 util/filemonitor-inotify.c  |  24 ++---
 util/vfio-helpers.c         |  23 ++---
 13 files changed, 192 insertions(+), 241 deletions(-)

-- 
2.25.1



^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-11 10:04   ` Marc-André Lureau
  2021-03-23  2:58   ` Stefan Berger
  2021-03-11  3:15 ` [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour, Stefan Berger

Removed a qemu_mutex_lock() and its respective qemu_mutex_unlock()
and used QEMU_LOCK_GUARD instead. This simplifies the code by
eliminating the gotos and the explicit qemu_mutex_unlock() call.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 backends/tpm/tpm_emulator.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/backends/tpm/tpm_emulator.c b/backends/tpm/tpm_emulator.c
index a012adc193..a3c041e402 100644
--- a/backends/tpm/tpm_emulator.c
+++ b/backends/tpm/tpm_emulator.c
@@ -126,7 +126,7 @@ static int tpm_emulator_ctrlcmd(TPMEmulator *tpm, unsigned long cmd, void *msg,
     uint8_t *buf = NULL;
     int ret = -1;
 
-    qemu_mutex_lock(&tpm->mutex);
+    QEMU_LOCK_GUARD(&tpm->mutex);
 
     buf = g_alloca(n);
     memcpy(buf, &cmd_no, sizeof(cmd_no));
@@ -134,20 +134,18 @@ static int tpm_emulator_ctrlcmd(TPMEmulator *tpm, unsigned long cmd, void *msg,
 
     n = qemu_chr_fe_write_all(dev, buf, n);
     if (n <= 0) {
-        goto end;
+        return ret;
     }
 
     if (msg_len_out != 0) {
         n = qemu_chr_fe_read_all(dev, msg, msg_len_out);
         if (n <= 0) {
-            goto end;
+            return ret;
         }
     }
 
     ret = 0;
 
-end:
-    qemu_mutex_unlock(&tpm->mutex);
     return ret;
 }
 
-- 
2.25.1




* [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
  2021-03-11  3:15 ` [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-12 10:23   ` Vladimir Sementsov-Ogievskiy
  2021-03-11  3:15 ` [PATCH 3/9] char: Replaced a qemu_mutex_lock " Mahmoud Mandour
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Mahmoud Mandour, open list:CURL, Max Reitz

Replaced various qemu_mutex_lock/qemu_mutex_unlock calls with the
lock guard macros QEMU_LOCK_GUARD() and WITH_QEMU_LOCK_GUARD().
This slightly simplifies the code by eliminating the explicit
qemu_mutex_unlock calls and the goto paths.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 block/curl.c |  13 ++--
 block/nbd.c  | 188 ++++++++++++++++++++++++---------------------------
 2 files changed, 95 insertions(+), 106 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index 4ff895df8f..56a217951a 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -832,12 +832,12 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
     uint64_t start = acb->offset;
     uint64_t end;
 
-    qemu_mutex_lock(&s->mutex);
+    QEMU_LOCK_GUARD(&s->mutex);
 
     // In case we have the requested data already (e.g. read-ahead),
     // we can just call the callback and be done.
     if (curl_find_buf(s, start, acb->bytes, acb)) {
-        goto out;
+        return;
     }
 
     // No cache found, so let's start a new request
@@ -852,7 +852,7 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
     if (curl_init_state(s, state) < 0) {
         curl_clean_state(state);
         acb->ret = -EIO;
-        goto out;
+        return;
     }
 
     acb->start = 0;
@@ -867,7 +867,7 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
     if (state->buf_len && state->orig_buf == NULL) {
         curl_clean_state(state);
         acb->ret = -ENOMEM;
-        goto out;
+        return;
     }
     state->acb[0] = acb;
 
@@ -880,14 +880,11 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
         acb->ret = -EIO;
 
         curl_clean_state(state);
-        goto out;
+        return;
     }
 
     /* Tell curl it needs to kick things off */
     curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
-
-out:
-    qemu_mutex_unlock(&s->mutex);
 }
 
 static int coroutine_fn curl_co_preadv(BlockDriverState *bs,
diff --git a/block/nbd.c b/block/nbd.c
index c26dc5a54f..28ba7aad61 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -407,27 +407,25 @@ static void *connect_thread_func(void *opaque)
         thr->sioc = NULL;
     }
 
-    qemu_mutex_lock(&thr->mutex);
-
-    switch (thr->state) {
-    case CONNECT_THREAD_RUNNING:
-        thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
-        if (thr->bh_ctx) {
-            aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);
-
-            /* play safe, don't reuse bh_ctx on further connection attempts */
-            thr->bh_ctx = NULL;
+    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
+        switch (thr->state) {
+        case CONNECT_THREAD_RUNNING:
+            thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
+            if (thr->bh_ctx) {
+                aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);
+
+                /* play safe, don't reuse bh_ctx on further connection attempts */
+                thr->bh_ctx = NULL;
+            }
+            break;
+        case CONNECT_THREAD_RUNNING_DETACHED:
+            do_free = true;
+            break;
+        default:
+            abort();
         }
-        break;
-    case CONNECT_THREAD_RUNNING_DETACHED:
-        do_free = true;
-        break;
-    default:
-        abort();
     }
 
-    qemu_mutex_unlock(&thr->mutex);
-
     if (do_free) {
         nbd_free_connect_thread(thr);
     }
@@ -443,36 +441,33 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
     BDRVNBDState *s = bs->opaque;
     NBDConnectThread *thr = s->connect_thread;
 
-    qemu_mutex_lock(&thr->mutex);
-
-    switch (thr->state) {
-    case CONNECT_THREAD_FAIL:
-    case CONNECT_THREAD_NONE:
-        error_free(thr->err);
-        thr->err = NULL;
-        thr->state = CONNECT_THREAD_RUNNING;
-        qemu_thread_create(&thread, "nbd-connect",
-                           connect_thread_func, thr, QEMU_THREAD_DETACHED);
-        break;
-    case CONNECT_THREAD_SUCCESS:
-        /* Previous attempt finally succeeded in background */
-        thr->state = CONNECT_THREAD_NONE;
-        s->sioc = thr->sioc;
-        thr->sioc = NULL;
-        yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
-                               nbd_yank, bs);
-        qemu_mutex_unlock(&thr->mutex);
-        return 0;
-    case CONNECT_THREAD_RUNNING:
-        /* Already running, will wait */
-        break;
-    default:
-        abort();
-    }
-
-    thr->bh_ctx = qemu_get_current_aio_context();
+    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
+        switch (thr->state) {
+        case CONNECT_THREAD_FAIL:
+        case CONNECT_THREAD_NONE:
+            error_free(thr->err);
+            thr->err = NULL;
+            thr->state = CONNECT_THREAD_RUNNING;
+            qemu_thread_create(&thread, "nbd-connect",
+                               connect_thread_func, thr, QEMU_THREAD_DETACHED);
+            break;
+        case CONNECT_THREAD_SUCCESS:
+            /* Previous attempt finally succeeded in background */
+            thr->state = CONNECT_THREAD_NONE;
+            s->sioc = thr->sioc;
+            thr->sioc = NULL;
+            yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
+                                   nbd_yank, bs);
+            return 0;
+        case CONNECT_THREAD_RUNNING:
+            /* Already running, will wait */
+            break;
+        default:
+            abort();
+        }
 
-    qemu_mutex_unlock(&thr->mutex);
+        thr->bh_ctx = qemu_get_current_aio_context();
+    }
 
 
     /*
@@ -486,46 +481,45 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
     s->wait_connect = true;
     qemu_coroutine_yield();
 
-    qemu_mutex_lock(&thr->mutex);
 
-    switch (thr->state) {
-    case CONNECT_THREAD_SUCCESS:
-    case CONNECT_THREAD_FAIL:
-        thr->state = CONNECT_THREAD_NONE;
-        error_propagate(errp, thr->err);
-        thr->err = NULL;
-        s->sioc = thr->sioc;
-        thr->sioc = NULL;
-        if (s->sioc) {
-            yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
-                                   nbd_yank, bs);
-        }
-        ret = (s->sioc ? 0 : -1);
-        break;
-    case CONNECT_THREAD_RUNNING:
-    case CONNECT_THREAD_RUNNING_DETACHED:
-        /*
-         * Obviously, drained section wants to start. Report the attempt as
-         * failed. Still connect thread is executing in background, and its
-         * result may be used for next connection attempt.
-         */
-        ret = -1;
-        error_setg(errp, "Connection attempt cancelled by other operation");
-        break;
+    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
+        switch (thr->state) {
+        case CONNECT_THREAD_SUCCESS:
+        case CONNECT_THREAD_FAIL:
+            thr->state = CONNECT_THREAD_NONE;
+            error_propagate(errp, thr->err);
+            thr->err = NULL;
+            s->sioc = thr->sioc;
+            thr->sioc = NULL;
+            if (s->sioc) {
+                yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
+                                       nbd_yank, bs);
+            }
+            ret = (s->sioc ? 0 : -1);
+            break;
+        case CONNECT_THREAD_RUNNING:
+        case CONNECT_THREAD_RUNNING_DETACHED:
+            /*
+             * Obviously, drained section wants to start. Report the attempt as
+             * failed. Still connect thread is executing in background, and its
+             * result may be used for next connection attempt.
+             */
+            ret = -1;
+            error_setg(errp, "Connection attempt cancelled by other operation");
+            break;
 
-    case CONNECT_THREAD_NONE:
-        /*
-         * Impossible. We've seen this thread running. So it should be
-         * running or at least give some results.
-         */
-        abort();
+        case CONNECT_THREAD_NONE:
+            /*
+             * Impossible. We've seen this thread running. So it should be
+             * running or at least give some results.
+             */
+            abort();
 
-    default:
-        abort();
+        default:
+            abort();
+        }
     }
 
-    qemu_mutex_unlock(&thr->mutex);
-
     return ret;
 }
 
@@ -546,25 +540,23 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
     bool wake = false;
     bool do_free = false;
 
-    qemu_mutex_lock(&thr->mutex);
-
-    if (thr->state == CONNECT_THREAD_RUNNING) {
-        /* We can cancel only in running state, when bh is not yet scheduled */
-        thr->bh_ctx = NULL;
-        if (s->wait_connect) {
-            s->wait_connect = false;
-            wake = true;
-        }
-        if (detach) {
-            thr->state = CONNECT_THREAD_RUNNING_DETACHED;
-            s->connect_thread = NULL;
+    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
+        if (thr->state == CONNECT_THREAD_RUNNING) {
+            /* We can cancel only in running state, when bh is not yet scheduled */
+            thr->bh_ctx = NULL;
+            if (s->wait_connect) {
+                s->wait_connect = false;
+                wake = true;
+            }
+            if (detach) {
+                thr->state = CONNECT_THREAD_RUNNING_DETACHED;
+                s->connect_thread = NULL;
+            }
+        } else if (detach) {
+            do_free = true;
         }
-    } else if (detach) {
-        do_free = true;
     }
 
-    qemu_mutex_unlock(&thr->mutex);
-
     if (do_free) {
         nbd_free_connect_thread(thr);
         s->connect_thread = NULL;
-- 
2.25.1
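The WITH_QEMU_LOCK_GUARD conversions above rely on the same cleanup
mechanism, wrapped in a one-iteration for loop so that a braced block is
locked on entry and unlocked both on normal completion and on early return:
the `return 0;` in the CONNECT_THREAD_SUCCESS case, which previously
needed its own qemu_mutex_unlock(), now unlocks automatically. A
simplified, self-contained version of the pattern follows; pthreads stand
in for QemuMutex, and WITH_LOCK_GUARD and try_fetch are hypothetical names
that only approximate the real macro.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

static void guard_unlock(pthread_mutex_t **guard)
{
    if (*guard) {
        pthread_mutex_unlock(*guard);
    }
}

/* One-iteration loop: lock on entry; the increment clause unlocks on
 * normal completion, the cleanup handler unlocks on early return. */
#define WITH_LOCK_GUARD(m)                                            \
    for (__attribute__((cleanup(guard_unlock))) pthread_mutex_t *g_ = \
             (pthread_mutex_lock(m), (m));                            \
         g_ != NULL;                                                  \
         pthread_mutex_unlock(g_), g_ = NULL)

/* Mirrors the shape of nbd_co_establish_connection: the fast path
 * returns from inside the guarded block, the slow path falls through
 * and continues with the lock released. */
int try_fetch(int *state, int *out)
{
    WITH_LOCK_GUARD(&state_lock) {
        if (*state == 1) {
            *out = 42;
            return 0;       /* unlocks via the cleanup handler */
        }
        *state = 1;
    }
    /* lock already dropped here */
    *out = -1;
    return 1;
}
```

A `break` out of the block would skip the loop's increment clause, but the
cleanup handler still fires when the guard variable leaves scope, so the
mutex is released on that path too.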




* [PATCH 3/9] char: Replaced a qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
  2021-03-11  3:15 ` [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD Mahmoud Mandour
  2021-03-11  3:15 ` [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-11 10:05   ` Marc-André Lureau
  2021-03-11  3:15 ` [PATCH 4/9] util: Replaced qemu_mutex_lock with QEMU_LOCK_GUARDs Mahmoud Mandour
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Marc-André Lureau, Mahmoud Mandour, Paolo Bonzini

Removed a qemu_mutex_lock/qemu_mutex_unlock pair and used a
QEMU_LOCK_GUARD instead. This improves readability by removing the
explicit call to qemu_mutex_unlock.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 chardev/char.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/chardev/char.c b/chardev/char.c
index 97cafd6849..2b0bc1325c 100644
--- a/chardev/char.c
+++ b/chardev/char.c
@@ -115,7 +115,7 @@ static int qemu_chr_write_buffer(Chardev *s,
     int res = 0;
     *offset = 0;
 
-    qemu_mutex_lock(&s->chr_write_lock);
+    QEMU_LOCK_GUARD(&s->chr_write_lock);
     while (*offset < len) {
     retry:
         res = cc->chr_write(s, buf + *offset, len - *offset);
@@ -153,7 +153,6 @@ static int qemu_chr_write_buffer(Chardev *s,
          */
         qemu_chr_write_log(s, buf, len);
     }
-    qemu_mutex_unlock(&s->chr_write_lock);
 
     return res;
 }
-- 
2.25.1




* [PATCH 4/9] util: Replaced qemu_mutex_lock with QEMU_LOCK_GUARDs
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
                   ` (2 preceding siblings ...)
  2021-03-11  3:15 ` [PATCH 3/9] char: Replaced a qemu_mutex_lock " Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-11  3:15 ` [PATCH 5/9] monitor: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour, Daniel P. Berrangé

Removed various qemu_mutex_lock calls and their respective
qemu_mutex_unlock calls and used QEMU_LOCK_GUARD instead. This
simplifies the code by eliminating several goto paths and the
explicit qemu_mutex_unlock calls.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 util/filemonitor-inotify.c | 24 ++++++++----------------
 util/vfio-helpers.c        | 23 ++++++++++-------------
 2 files changed, 18 insertions(+), 29 deletions(-)

diff --git a/util/filemonitor-inotify.c b/util/filemonitor-inotify.c
index 2c45f7f176..0e1a196088 100644
--- a/util/filemonitor-inotify.c
+++ b/util/filemonitor-inotify.c
@@ -59,10 +59,9 @@ static void qemu_file_monitor_watch(void *arg)
     int used = 0;
     int len;
 
-    qemu_mutex_lock(&mon->lock);
+    QEMU_LOCK_GUARD(&mon->lock);
 
     if (mon->fd == -1) {
-        qemu_mutex_unlock(&mon->lock);
         return;
     }
 
@@ -72,11 +71,11 @@ static void qemu_file_monitor_watch(void *arg)
         if (errno != EAGAIN) {
             error_report("Failure monitoring inotify FD '%s',"
                          "disabling events", strerror(errno));
-            goto cleanup;
+            return;
         }
 
         /* no more events right now */
-        goto cleanup;
+        return;
     }
 
     /* Loop over all events in the buffer */
@@ -142,9 +141,6 @@ static void qemu_file_monitor_watch(void *arg)
             }
         }
     }
-
- cleanup:
-    qemu_mutex_unlock(&mon->lock);
 }
 
 
@@ -250,7 +246,8 @@ qemu_file_monitor_add_watch(QFileMonitor *mon,
     QFileMonitorWatch watch;
     int64_t ret = -1;
 
-    qemu_mutex_lock(&mon->lock);
+    QEMU_LOCK_GUARD(&mon->lock);
+
     dir = g_hash_table_lookup(mon->dirs, dirpath);
     if (!dir) {
         int rv = inotify_add_watch(mon->fd, dirpath,
@@ -259,7 +256,7 @@ qemu_file_monitor_add_watch(QFileMonitor *mon,
 
         if (rv < 0) {
             error_setg_errno(errp, errno, "Unable to watch '%s'", dirpath);
-            goto cleanup;
+            return ret;
         }
 
         trace_qemu_file_monitor_enable_watch(mon, dirpath, rv);
@@ -290,8 +287,6 @@ qemu_file_monitor_add_watch(QFileMonitor *mon,
 
     ret = watch.id;
 
- cleanup:
-    qemu_mutex_unlock(&mon->lock);
     return ret;
 }
 
@@ -303,13 +298,13 @@ void qemu_file_monitor_remove_watch(QFileMonitor *mon,
     QFileMonitorDir *dir;
     gsize i;
 
-    qemu_mutex_lock(&mon->lock);
+    QEMU_LOCK_GUARD(&mon->lock);
 
     trace_qemu_file_monitor_remove_watch(mon, dirpath, id);
 
     dir = g_hash_table_lookup(mon->dirs, dirpath);
     if (!dir) {
-        goto cleanup;
+        return;
     }
 
     for (i = 0; i < dir->watches->len; i++) {
@@ -333,7 +328,4 @@ void qemu_file_monitor_remove_watch(QFileMonitor *mon,
             qemu_set_fd_handler(mon->fd, NULL, NULL, NULL);
         }
     }
-
- cleanup:
-    qemu_mutex_unlock(&mon->lock);
 }
diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
index 97dfa3fd57..dc05755ef1 100644
--- a/util/vfio-helpers.c
+++ b/util/vfio-helpers.c
@@ -748,41 +748,41 @@ int qemu_vfio_dma_map(QEMUVFIOState *s, void *host, size_t size,
     assert(QEMU_PTR_IS_ALIGNED(host, qemu_real_host_page_size));
     assert(QEMU_IS_ALIGNED(size, qemu_real_host_page_size));
     trace_qemu_vfio_dma_map(s, host, size, temporary, iova);
-    qemu_mutex_lock(&s->lock);
+    QEMU_LOCK_GUARD(&s->lock);
     mapping = qemu_vfio_find_mapping(s, host, &index);
     if (mapping) {
         iova0 = mapping->iova + ((uint8_t *)host - (uint8_t *)mapping->host);
     } else {
         if (s->high_water_mark - s->low_water_mark + 1 < size) {
             ret = -ENOMEM;
-            goto out;
+            return ret;
         }
         if (!temporary) {
             if (qemu_vfio_find_fixed_iova(s, size, &iova0)) {
                 ret = -ENOMEM;
-                goto out;
+                return ret;
             }
 
             mapping = qemu_vfio_add_mapping(s, host, size, index + 1, iova0);
             if (!mapping) {
                 ret = -ENOMEM;
-                goto out;
+                return ret;
             }
             assert(qemu_vfio_verify_mappings(s));
             ret = qemu_vfio_do_mapping(s, host, size, iova0);
             if (ret) {
                 qemu_vfio_undo_mapping(s, mapping, NULL);
-                goto out;
+                return ret;
             }
             qemu_vfio_dump_mappings(s);
         } else {
             if (qemu_vfio_find_temp_iova(s, size, &iova0)) {
                 ret = -ENOMEM;
-                goto out;
+                return ret;
             }
             ret = qemu_vfio_do_mapping(s, host, size, iova0);
             if (ret) {
-                goto out;
+                return ret;
             }
         }
     }
@@ -790,8 +790,7 @@ int qemu_vfio_dma_map(QEMUVFIOState *s, void *host, size_t size,
     if (iova) {
         *iova = iova0;
     }
-out:
-    qemu_mutex_unlock(&s->lock);
+
     return ret;
 }
 
@@ -826,14 +825,12 @@ void qemu_vfio_dma_unmap(QEMUVFIOState *s, void *host)
     }
 
     trace_qemu_vfio_dma_unmap(s, host);
-    qemu_mutex_lock(&s->lock);
+    QEMU_LOCK_GUARD(&s->lock);
     m = qemu_vfio_find_mapping(s, host, &index);
     if (!m) {
-        goto out;
+        return;
     }
     qemu_vfio_undo_mapping(s, m, NULL);
-out:
-    qemu_mutex_unlock(&s->lock);
 }
 
 static void qemu_vfio_reset(QEMUVFIOState *s)
-- 
2.25.1
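The qemu_vfio_dma_map() conversion keeps the original goto ladder's
`ret = -ENOMEM; ... return ret;` shape; with a guard in place, each failure
site could equally return its errno constant directly, since no shared
cleanup label remains. A hedged sketch of that style: map_insert is a
hypothetical function invented for this illustration, with pthreads again
standing in for QemuMutex and LOCK_GUARD for QEMU_LOCK_GUARD.

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

static void guard_unlock(pthread_mutex_t **guard)
{
    if (*guard) {
        pthread_mutex_unlock(*guard);
    }
}

#define LOCK_GUARD(m) \
    __attribute__((cleanup(guard_unlock))) pthread_mutex_t *guard_ = (m); \
    pthread_mutex_lock(guard_)

/* Hypothetical capacity-checked insert with several failure paths.
 * Each error site returns its errno directly; no shared `ret`
 * variable or `goto out` label is needed under the guard. */
int map_insert(int *used, int capacity, int size)
{
    LOCK_GUARD(&map_lock);
    if (size <= 0) {
        return -EINVAL;
    }
    if (*used + size > capacity) {
        return -ENOMEM;
    }
    *used += size;
    return 0;
}
```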




* [PATCH 5/9] monitor: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
                   ` (3 preceding siblings ...)
  2021-03-11  3:15 ` [PATCH 4/9] util: Replaced qemu_mutex_lock with QEMU_LOCK_GUARDs Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-11  9:50   ` Dr. David Alan Gilbert
  2021-03-11  3:15 ` [PATCH 6/9] migration: " Mahmoud Mandour
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour, Markus Armbruster, Dr. David Alan Gilbert

Removed various qemu_mutex_lock calls and their respective
qemu_mutex_unlock calls and used the lock guard macros
(QEMU_LOCK_GUARD and WITH_QEMU_LOCK_GUARD) instead. This simplifies
the code by eliminating the explicit qemu_mutex_unlock calls.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 monitor/monitor.c |  8 ++------
 monitor/qmp.c     | 51 ++++++++++++++++++++++-------------------------
 2 files changed, 26 insertions(+), 33 deletions(-)

diff --git a/monitor/monitor.c b/monitor/monitor.c
index e94f532cf5..640496e562 100644
--- a/monitor/monitor.c
+++ b/monitor/monitor.c
@@ -349,7 +349,7 @@ monitor_qapi_event_queue_no_reenter(QAPIEvent event, QDict *qdict)
     evconf = &monitor_qapi_event_conf[event];
     trace_monitor_protocol_event_queue(event, qdict, evconf->rate);
 
-    qemu_mutex_lock(&monitor_lock);
+    QEMU_LOCK_GUARD(&monitor_lock);
 
     if (!evconf->rate) {
         /* Unthrottled event */
@@ -391,8 +391,6 @@ monitor_qapi_event_queue_no_reenter(QAPIEvent event, QDict *qdict)
             timer_mod_ns(evstate->timer, now + evconf->rate);
         }
     }
-
-    qemu_mutex_unlock(&monitor_lock);
 }
 
 void qapi_event_emit(QAPIEvent event, QDict *qdict)
@@ -447,7 +445,7 @@ static void monitor_qapi_event_handler(void *opaque)
     MonitorQAPIEventConf *evconf = &monitor_qapi_event_conf[evstate->event];
 
     trace_monitor_protocol_event_handler(evstate->event, evstate->qdict);
-    qemu_mutex_lock(&monitor_lock);
+    QEMU_LOCK_GUARD(&monitor_lock);
 
     if (evstate->qdict) {
         int64_t now = qemu_clock_get_ns(monitor_get_event_clock());
@@ -462,8 +460,6 @@ static void monitor_qapi_event_handler(void *opaque)
         timer_free(evstate->timer);
         g_free(evstate);
     }
-
-    qemu_mutex_unlock(&monitor_lock);
 }
 
 static unsigned int qapi_event_throttle_hash(const void *key)
diff --git a/monitor/qmp.c b/monitor/qmp.c
index 2326bd7f9b..2b0308f933 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -76,7 +76,7 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
 
 static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
 {
-    qemu_mutex_lock(&mon->qmp_queue_lock);
+    QEMU_LOCK_GUARD(&mon->qmp_queue_lock);
 
     /*
      * Same condition as in monitor_qmp_dispatcher_co(), but before
@@ -103,7 +103,6 @@ static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
         monitor_resume(&mon->common);
     }
 
-    qemu_mutex_unlock(&mon->qmp_queue_lock);
 }
 
 void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
@@ -179,7 +178,7 @@ static QMPRequest *monitor_qmp_requests_pop_any_with_lock(void)
     Monitor *mon;
     MonitorQMP *qmp_mon;
 
-    qemu_mutex_lock(&monitor_lock);
+    QEMU_LOCK_GUARD(&monitor_lock);
 
     QTAILQ_FOREACH(mon, &mon_list, entry) {
         if (!monitor_is_qmp(mon)) {
@@ -205,8 +204,6 @@ static QMPRequest *monitor_qmp_requests_pop_any_with_lock(void)
         QTAILQ_INSERT_TAIL(&mon_list, mon, entry);
     }
 
-    qemu_mutex_unlock(&monitor_lock);
-
     return req_obj;
 }
 
@@ -376,30 +373,30 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
     req_obj->err = err;
 
     /* Protect qmp_requests and fetching its length. */
-    qemu_mutex_lock(&mon->qmp_queue_lock);
+    WITH_QEMU_LOCK_GUARD(&mon->qmp_queue_lock) {
 
-    /*
-     * Suspend the monitor when we can't queue more requests after
-     * this one.  Dequeuing in monitor_qmp_dispatcher_co() or
-     * monitor_qmp_cleanup_queue_and_resume() will resume it.
-     * Note that when OOB is disabled, we queue at most one command,
-     * for backward compatibility.
-     */
-    if (!qmp_oob_enabled(mon) ||
-        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
-        monitor_suspend(&mon->common);
-    }
+        /*
+         * Suspend the monitor when we can't queue more requests after
+         * this one.  Dequeuing in monitor_qmp_dispatcher_co() or
+         * monitor_qmp_cleanup_queue_and_resume() will resume it.
+         * Note that when OOB is disabled, we queue at most one command,
+         * for backward compatibility.
+         */
+        if (!qmp_oob_enabled(mon) ||
+            mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
+            monitor_suspend(&mon->common);
+        }
 
-    /*
-     * Put the request to the end of queue so that requests will be
-     * handled in time order.  Ownership for req_obj, req,
-     * etc. will be delivered to the handler side.
-     */
-    trace_monitor_qmp_in_band_enqueue(req_obj, mon,
-                                      mon->qmp_requests->length);
-    assert(mon->qmp_requests->length < QMP_REQ_QUEUE_LEN_MAX);
-    g_queue_push_tail(mon->qmp_requests, req_obj);
-    qemu_mutex_unlock(&mon->qmp_queue_lock);
+        /*
+         * Put the request to the end of queue so that requests will be
+         * handled in time order.  Ownership for req_obj, req,
+         * etc. will be delivered to the handler side.
+         */
+        trace_monitor_qmp_in_band_enqueue(req_obj, mon,
+                                          mon->qmp_requests->length);
+        assert(mon->qmp_requests->length < QMP_REQ_QUEUE_LEN_MAX);
+        g_queue_push_tail(mon->qmp_requests, req_obj);
+    }
 
     /* Kick the dispatcher routine */
     if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {
-- 
2.25.1




* [PATCH 6/9] migration: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
                   ` (4 preceding siblings ...)
  2021-03-11  3:15 ` [PATCH 5/9] monitor: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-11  9:44   ` Dr. David Alan Gilbert
  2021-03-11  3:15 ` [PATCH 7/9] virtio-iommu: " Mahmoud Mandour
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour, Dr. David Alan Gilbert, Juan Quintela

Replaced various qemu_mutex_lock calls and their respective
qemu_mutex_unlock calls with the QEMU_LOCK_GUARD macro. This
simplifies the code by eliminating the explicit qemu_mutex_unlock calls.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 migration/migration.c | 6 ++----
 migration/ram.c       | 6 ++----
 2 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index a5ddf43559..36768391b6 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -323,7 +323,7 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
     int ret = 0;
 
     trace_migrate_send_rp_message((int)message_type, len);
-    qemu_mutex_lock(&mis->rp_mutex);
+    QEMU_LOCK_GUARD(&mis->rp_mutex);
 
     /*
      * It's possible that the file handle got lost due to network
@@ -331,7 +331,7 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
      */
     if (!mis->to_src_file) {
         ret = -EIO;
-        goto error;
+        return ret;
     }
 
     qemu_put_be16(mis->to_src_file, (unsigned int)message_type);
@@ -342,8 +342,6 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
     /* It's possible that qemu file got error during sending */
     ret = qemu_file_get_error(mis->to_src_file);
 
-error:
-    qemu_mutex_unlock(&mis->rp_mutex);
     return ret;
 }
 
diff --git a/migration/ram.c b/migration/ram.c
index 72143da0ac..52537f14ac 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -819,7 +819,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
 {
     bool ret;
 
-    qemu_mutex_lock(&rs->bitmap_mutex);
+    QEMU_LOCK_GUARD(&rs->bitmap_mutex);
 
     /*
      * Clear dirty bitmap if needed.  This _must_ be called before we
@@ -852,7 +852,6 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
     if (ret) {
         rs->migration_dirty_pages--;
     }
-    qemu_mutex_unlock(&rs->bitmap_mutex);
 
     return ret;
 }
@@ -3275,7 +3274,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
     int idx, thread_count;
 
     thread_count = migrate_decompress_threads();
-    qemu_mutex_lock(&decomp_done_lock);
+    QEMU_LOCK_GUARD(&decomp_done_lock);
     while (true) {
         for (idx = 0; idx < thread_count; idx++) {
             if (decomp_param[idx].done) {
@@ -3295,7 +3294,6 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
             qemu_cond_wait(&decomp_done_cond, &decomp_done_lock);
         }
     }
-    qemu_mutex_unlock(&decomp_done_lock);
 }
 
  /*
-- 
2.25.1
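One subtlety in the ram.c hunk: decompress_data_with_multi_threads() calls
qemu_cond_wait() on decomp_done_lock, the very mutex that QEMU_LOCK_GUARD
now holds. The two mechanisms compose, because a condition-variable wait
atomically releases and reacquires the mutex itself, while the guard only
guarantees the final unlock at scope exit. A minimal pthread analogue of
the combination (worker, wait_for_done, and LOCK_GUARD are illustrative
names, not QEMU code):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;
bool done;

static void guard_unlock(pthread_mutex_t **guard)
{
    if (*guard) {
        pthread_mutex_unlock(*guard);
    }
}

#define LOCK_GUARD(m) \
    __attribute__((cleanup(guard_unlock))) pthread_mutex_t *guard_ = (m); \
    pthread_mutex_lock(guard_)

/* Signalling side: sets the flag under the lock, then the guard
 * releases done_lock when the function returns. */
void *worker(void *arg)
{
    (void)arg;
    LOCK_GUARD(&done_lock);
    done = true;
    pthread_cond_signal(&done_cond);
    return NULL;
}

/* Waiting side: cond-wait drops and retakes the same mutex the guard
 * holds; the guard's cleanup still performs the last unlock. */
void wait_for_done(void)
{
    LOCK_GUARD(&done_lock);
    while (!done) {
        pthread_cond_wait(&done_cond, &done_lock);
    }
}
```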




* [PATCH 7/9] virtio-iommu: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
                   ` (5 preceding siblings ...)
  2021-03-11  3:15 ` [PATCH 6/9] migration: " Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-11  3:15 ` [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock " Mahmoud Mandour
  2021-03-11  3:15 ` [PATCH 9/9] hw/hyperv/vmbus: replaced " Mahmoud Mandour
  8 siblings, 0 replies; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Eric Auger, Mahmoud Mandour, Michael S. Tsirkin

In various places, qemu_mutex_lock calls and their respective
calls to qemu_mutex_unlock are replaced with QEMU_LOCK_GUARD macros.
This simplifies the code by removing the calls to
qemu_mutex_unlock and eliminating the goto paths.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 hw/virtio/virtio-iommu.c | 78 +++++++++++++++++++---------------------
 1 file changed, 36 insertions(+), 42 deletions(-)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index c2883a2f6c..46a4cc0801 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -604,37 +604,36 @@ static void virtio_iommu_handle_command(VirtIODevice *vdev, VirtQueue *vq)
             tail.status = VIRTIO_IOMMU_S_DEVERR;
             goto out;
         }
-        qemu_mutex_lock(&s->mutex);
-        switch (head.type) {
-        case VIRTIO_IOMMU_T_ATTACH:
-            tail.status = virtio_iommu_handle_attach(s, iov, iov_cnt);
-            break;
-        case VIRTIO_IOMMU_T_DETACH:
-            tail.status = virtio_iommu_handle_detach(s, iov, iov_cnt);
-            break;
-        case VIRTIO_IOMMU_T_MAP:
-            tail.status = virtio_iommu_handle_map(s, iov, iov_cnt);
-            break;
-        case VIRTIO_IOMMU_T_UNMAP:
-            tail.status = virtio_iommu_handle_unmap(s, iov, iov_cnt);
-            break;
-        case VIRTIO_IOMMU_T_PROBE:
-        {
-            struct virtio_iommu_req_tail *ptail;
+        WITH_QEMU_LOCK_GUARD(&s->mutex) {
+            switch (head.type) {
+            case VIRTIO_IOMMU_T_ATTACH:
+                tail.status = virtio_iommu_handle_attach(s, iov, iov_cnt);
+                break;
+            case VIRTIO_IOMMU_T_DETACH:
+                tail.status = virtio_iommu_handle_detach(s, iov, iov_cnt);
+                break;
+            case VIRTIO_IOMMU_T_MAP:
+                tail.status = virtio_iommu_handle_map(s, iov, iov_cnt);
+                break;
+            case VIRTIO_IOMMU_T_UNMAP:
+                tail.status = virtio_iommu_handle_unmap(s, iov, iov_cnt);
+                break;
+            case VIRTIO_IOMMU_T_PROBE:
+            {
+                struct virtio_iommu_req_tail *ptail;
 
-            output_size = s->config.probe_size + sizeof(tail);
-            buf = g_malloc0(output_size);
+                output_size = s->config.probe_size + sizeof(tail);
+                buf = g_malloc0(output_size);
 
-            ptail = (struct virtio_iommu_req_tail *)
-                        (buf + s->config.probe_size);
-            ptail->status = virtio_iommu_handle_probe(s, iov, iov_cnt, buf);
-            break;
-        }
-        default:
-            tail.status = VIRTIO_IOMMU_S_UNSUPP;
+                ptail = (struct virtio_iommu_req_tail *)
+                            (buf + s->config.probe_size);
+                ptail->status = virtio_iommu_handle_probe(s, iov, iov_cnt, buf);
+                break;
+            }
+            default:
+                tail.status = VIRTIO_IOMMU_S_UNSUPP;
+            }
         }
-        qemu_mutex_unlock(&s->mutex);
-
 out:
         sz = iov_from_buf(elem->in_sg, elem->in_num, 0,
                           buf ? buf : &tail, output_size);
@@ -721,7 +720,7 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
     sid = virtio_iommu_get_bdf(sdev);
 
     trace_virtio_iommu_translate(mr->parent_obj.name, sid, addr, flag);
-    qemu_mutex_lock(&s->mutex);
+    QEMU_LOCK_GUARD(&s->mutex);
 
     ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(sid));
     if (!ep) {
@@ -733,7 +732,7 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
         } else {
             entry.perm = flag;
         }
-        goto unlock;
+        return entry;
     }
 
     for (i = 0; i < s->nb_reserved_regions; i++) {
@@ -751,7 +750,7 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
                                           sid, addr);
                 break;
             }
-            goto unlock;
+            return entry;
         }
     }
 
@@ -766,7 +765,7 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
         } else {
             entry.perm = flag;
         }
-        goto unlock;
+        return entry;
     }
 
     found = g_tree_lookup_extended(ep->domain->mappings, (gpointer)(&interval),
@@ -778,7 +777,7 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
         virtio_iommu_report_fault(s, VIRTIO_IOMMU_FAULT_R_MAPPING,
                                   VIRTIO_IOMMU_FAULT_F_ADDRESS,
                                   sid, addr);
-        goto unlock;
+        return entry;
     }
 
     read_fault = (flag & IOMMU_RO) &&
@@ -795,14 +794,12 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
         virtio_iommu_report_fault(s, VIRTIO_IOMMU_FAULT_R_MAPPING,
                                   flags | VIRTIO_IOMMU_FAULT_F_ADDRESS,
                                   sid, addr);
-        goto unlock;
+        return entry;
     }
     entry.translated_addr = addr - mapping_key->low + mapping_value->phys_addr;
     entry.perm = flag;
     trace_virtio_iommu_translate_out(addr, entry.translated_addr, sid);
 
-unlock:
-    qemu_mutex_unlock(&s->mutex);
     return entry;
 }
 
@@ -871,21 +868,18 @@ static void virtio_iommu_replay(IOMMUMemoryRegion *mr, IOMMUNotifier *n)
 
     sid = virtio_iommu_get_bdf(sdev);
 
-    qemu_mutex_lock(&s->mutex);
+    QEMU_LOCK_GUARD(&s->mutex);
 
     if (!s->endpoints) {
-        goto unlock;
+        return;
     }
 
     ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(sid));
     if (!ep || !ep->domain) {
-        goto unlock;
+        return;
     }
 
     g_tree_foreach(ep->domain->mappings, virtio_iommu_remap, mr);
-
-unlock:
-    qemu_mutex_unlock(&s->mutex);
 }
 
 static int virtio_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu_mr,
-- 
2.25.1




* [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
                   ` (6 preceding siblings ...)
  2021-03-11  3:15 ` [PATCH 7/9] virtio-iommu: " Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  2021-03-11  7:43   ` Greg Kurz
  2021-03-11 10:49   ` Christian Schoenebeck
  2021-03-11  3:15 ` [PATCH 9/9] hw/hyperv/vmbus: replaced " Mahmoud Mandour
  8 siblings, 2 replies; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour, Christian Schoenebeck, Greg Kurz

Replaced a call to qemu_mutex_lock and its respective call to
qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
This simplifies the code by removing the call required to unlock
and also eliminates goto paths.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 hw/9pfs/9p-synth.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
index 7eb210ffa8..473ef914b0 100644
--- a/hw/9pfs/9p-synth.c
+++ b/hw/9pfs/9p-synth.c
@@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
     if (!parent) {
         parent = &synth_root;
     }
-    qemu_mutex_lock(&synth_mutex);
+    QEMU_LOCK_GUARD(&synth_mutex);
     QLIST_FOREACH(tmp, &parent->child, sibling) {
         if (!strcmp(tmp->name, name)) {
             ret = EEXIST;
-            goto err_out;
+            return ret;
         }
     }
     /* Add the name */
@@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
                       node->attr, node->attr->inode);
     *result = node;
     ret = 0;
-err_out:
-    qemu_mutex_unlock(&synth_mutex);
     return ret;
 }
 
@@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
         parent = &synth_root;
     }
 
-    qemu_mutex_lock(&synth_mutex);
+    QEMU_LOCK_GUARD(&synth_mutex);
     QLIST_FOREACH(tmp, &parent->child, sibling) {
         if (!strcmp(tmp->name, name)) {
             ret = EEXIST;
-            goto err_out;
+            return ret;
         }
     }
     /* Add file type and remove write bits */
@@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
     pstrcpy(node->name, sizeof(node->name), name);
     QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
     ret = 0;
-err_out:
-    qemu_mutex_unlock(&synth_mutex);
     return ret;
 }
 
-- 
2.25.1




* [PATCH 9/9] hw/hyperv/vmbus: replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
                   ` (7 preceding siblings ...)
  2021-03-11  3:15 ` [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock " Mahmoud Mandour
@ 2021-03-11  3:15 ` Mahmoud Mandour
  8 siblings, 0 replies; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-11  3:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour

Replaced some calls to qemu_mutex_lock and their respective
qemu_mutex_unlock calls with the QEMU_LOCK_GUARD macro. This
simplifies the code by removing the calls to qemu_mutex_unlock
and eliminating the goto paths.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 hw/hyperv/vmbus.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/hw/hyperv/vmbus.c b/hw/hyperv/vmbus.c
index 984caf898d..7c966ae399 100644
--- a/hw/hyperv/vmbus.c
+++ b/hw/hyperv/vmbus.c
@@ -1616,11 +1616,11 @@ static int enqueue_incoming_message(VMBus *vmbus,
     int ret = 0;
     uint8_t idx, prev_size;
 
-    qemu_mutex_lock(&vmbus->rx_queue_lock);
+    QEMU_LOCK_GUARD(&vmbus->rx_queue_lock);
 
     if (vmbus->rx_queue_size == HV_MSG_QUEUE_LEN) {
         ret = -ENOBUFS;
-        goto out;
+        return ret;
     }
 
     prev_size = vmbus->rx_queue_size;
@@ -1632,8 +1632,7 @@ static int enqueue_incoming_message(VMBus *vmbus,
     if (!prev_size) {
         vmbus_resched(vmbus);
     }
-out:
-    qemu_mutex_unlock(&vmbus->rx_queue_lock);
+
     return ret;
 }
 
@@ -2189,10 +2188,10 @@ static void process_message(VMBus *vmbus)
     void *msgdata;
     uint32_t msglen;
 
-    qemu_mutex_lock(&vmbus->rx_queue_lock);
+    QEMU_LOCK_GUARD(&vmbus->rx_queue_lock);
 
     if (!vmbus->rx_queue_size) {
-        goto unlock;
+        return;
     }
 
     hv_msg = &vmbus->rx_queue[vmbus->rx_queue_head];
@@ -2241,8 +2240,6 @@ out:
     vmbus->rx_queue_head %= HV_MSG_QUEUE_LEN;
 
     vmbus_resched(vmbus);
-unlock:
-    qemu_mutex_unlock(&vmbus->rx_queue_lock);
 }
 
 static const struct {
-- 
2.25.1




* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock " Mahmoud Mandour
@ 2021-03-11  7:43   ` Greg Kurz
  2021-03-11 10:49   ` Christian Schoenebeck
  1 sibling, 0 replies; 28+ messages in thread
From: Greg Kurz @ 2021-03-11  7:43 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: Christian Schoenebeck, qemu-devel

On Thu, 11 Mar 2021 05:15:37 +0200
Mahmoud Mandour <ma.mandourr@gmail.com> wrote:

> Replaced a call to qemu_mutex_lock and its respective call to
> qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> This simplifies the code by removing the call required to unlock
> and also eliminates goto paths.
> 
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---

Nice cleanup. Thanks !

Acked-by: Greg Kurz <groug@kaod.org>

>  hw/9pfs/9p-synth.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> index 7eb210ffa8..473ef914b0 100644
> --- a/hw/9pfs/9p-synth.c
> +++ b/hw/9pfs/9p-synth.c
> @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
>      if (!parent) {
>          parent = &synth_root;
>      }
> -    qemu_mutex_lock(&synth_mutex);
> +    QEMU_LOCK_GUARD(&synth_mutex);
>      QLIST_FOREACH(tmp, &parent->child, sibling) {
>          if (!strcmp(tmp->name, name)) {
>              ret = EEXIST;
> -            goto err_out;
> +            return ret;
>          }
>      }
>      /* Add the name */
> @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
>                        node->attr, node->attr->inode);
>      *result = node;
>      ret = 0;
> -err_out:
> -    qemu_mutex_unlock(&synth_mutex);
>      return ret;
>  }
>  
> @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
>          parent = &synth_root;
>      }
>  
> -    qemu_mutex_lock(&synth_mutex);
> +    QEMU_LOCK_GUARD(&synth_mutex);
>      QLIST_FOREACH(tmp, &parent->child, sibling) {
>          if (!strcmp(tmp->name, name)) {
>              ret = EEXIST;
> -            goto err_out;
> +            return ret;
>          }
>      }
>      /* Add file type and remove write bits */
> @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
>      pstrcpy(node->name, sizeof(node->name), name);
>      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
>      ret = 0;
> -err_out:
> -    qemu_mutex_unlock(&synth_mutex);
>      return ret;
>  }
>  




* Re: [PATCH 6/9] migration: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 6/9] migration: " Mahmoud Mandour
@ 2021-03-11  9:44   ` Dr. David Alan Gilbert
  2021-03-15 18:01     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 28+ messages in thread
From: Dr. David Alan Gilbert @ 2021-03-11  9:44 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: qemu-devel, Juan Quintela

* Mahmoud Mandour (ma.mandourr@gmail.com) wrote:
> Replaced various qemu_mutex_lock calls and their respective
> qemu_mutex_unlock calls with QEMU_LOCK_GUARD macro. This simplifies
> the code by eliminating the respective qemu_mutex_unlock calls.
> 
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.c | 6 ++----
>  migration/ram.c       | 6 ++----
>  2 files changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index a5ddf43559..36768391b6 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -323,7 +323,7 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
>      int ret = 0;
>  
>      trace_migrate_send_rp_message((int)message_type, len);
> -    qemu_mutex_lock(&mis->rp_mutex);
> +    QEMU_LOCK_GUARD(&mis->rp_mutex);
>  
>      /*
>       * It's possible that the file handle got lost due to network
> @@ -331,7 +331,7 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
>       */
>      if (!mis->to_src_file) {
>          ret = -EIO;
> -        goto error;
> +        return ret;
>      }
>  
>      qemu_put_be16(mis->to_src_file, (unsigned int)message_type);
> @@ -342,8 +342,6 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
>      /* It's possible that qemu file got error during sending */
>      ret = qemu_file_get_error(mis->to_src_file);
>  
> -error:
> -    qemu_mutex_unlock(&mis->rp_mutex);
>      return ret;
>  }
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index 72143da0ac..52537f14ac 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -819,7 +819,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>  {
>      bool ret;
>  
> -    qemu_mutex_lock(&rs->bitmap_mutex);
> +    QEMU_LOCK_GUARD(&rs->bitmap_mutex);
>  
>      /*
>       * Clear dirty bitmap if needed.  This _must_ be called before we
> @@ -852,7 +852,6 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>      if (ret) {
>          rs->migration_dirty_pages--;
>      }
> -    qemu_mutex_unlock(&rs->bitmap_mutex);
>  
>      return ret;
>  }
> @@ -3275,7 +3274,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
>      int idx, thread_count;
>  
>      thread_count = migrate_decompress_threads();
> -    qemu_mutex_lock(&decomp_done_lock);
> +    QEMU_LOCK_GUARD(&decomp_done_lock);
>      while (true) {
>          for (idx = 0; idx < thread_count; idx++) {
>              if (decomp_param[idx].done) {
> @@ -3295,7 +3294,6 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
>              qemu_cond_wait(&decomp_done_cond, &decomp_done_lock);
>          }
>      }
> -    qemu_mutex_unlock(&decomp_done_lock);
>  }
>  
>   /*
> -- 
> 2.25.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 5/9] monitor: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 5/9] monitor: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
@ 2021-03-11  9:50   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 28+ messages in thread
From: Dr. David Alan Gilbert @ 2021-03-11  9:50 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: qemu-devel, Markus Armbruster

* Mahmoud Mandour (ma.mandourr@gmail.com) wrote:
> Removed various qemu_mutex_lock and their respective qemu_mutex_unlock
> calls and used lock guard macros (QEMU_LOCK_GUARD and
> WITH_QEMU_LOCK_GUARD). This simplifies the code by
> eliminating qemu_mutex_unlock calls.
> 
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---
>  monitor/monitor.c |  8 ++------
>  monitor/qmp.c     | 51 ++++++++++++++++++++++-------------------------
>  2 files changed, 26 insertions(+), 33 deletions(-)
> 
> diff --git a/monitor/monitor.c b/monitor/monitor.c
> index e94f532cf5..640496e562 100644
> --- a/monitor/monitor.c
> +++ b/monitor/monitor.c
> @@ -349,7 +349,7 @@ monitor_qapi_event_queue_no_reenter(QAPIEvent event, QDict *qdict)
>      evconf = &monitor_qapi_event_conf[event];
>      trace_monitor_protocol_event_queue(event, qdict, evconf->rate);
>  
> -    qemu_mutex_lock(&monitor_lock);
> +    QEMU_LOCK_GUARD(&monitor_lock);
>  
>      if (!evconf->rate) {
>          /* Unthrottled event */
> @@ -391,8 +391,6 @@ monitor_qapi_event_queue_no_reenter(QAPIEvent event, QDict *qdict)
>              timer_mod_ns(evstate->timer, now + evconf->rate);
>          }
>      }
> -
> -    qemu_mutex_unlock(&monitor_lock);
>  }
>  
>  void qapi_event_emit(QAPIEvent event, QDict *qdict)
> @@ -447,7 +445,7 @@ static void monitor_qapi_event_handler(void *opaque)
>      MonitorQAPIEventConf *evconf = &monitor_qapi_event_conf[evstate->event];
>  
>      trace_monitor_protocol_event_handler(evstate->event, evstate->qdict);
> -    qemu_mutex_lock(&monitor_lock);
> +    QEMU_LOCK_GUARD(&monitor_lock);
>  
>      if (evstate->qdict) {
>          int64_t now = qemu_clock_get_ns(monitor_get_event_clock());
> @@ -462,8 +460,6 @@ static void monitor_qapi_event_handler(void *opaque)
>          timer_free(evstate->timer);
>          g_free(evstate);
>      }
> -
> -    qemu_mutex_unlock(&monitor_lock);
>  }
>  
>  static unsigned int qapi_event_throttle_hash(const void *key)
> diff --git a/monitor/qmp.c b/monitor/qmp.c
> index 2326bd7f9b..2b0308f933 100644
> --- a/monitor/qmp.c
> +++ b/monitor/qmp.c
> @@ -76,7 +76,7 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
>  
>  static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>  {
> -    qemu_mutex_lock(&mon->qmp_queue_lock);
> +    QEMU_LOCK_GUARD(&mon->qmp_queue_lock);
>  
>      /*
>       * Same condition as in monitor_qmp_dispatcher_co(), but before
> @@ -103,7 +103,6 @@ static void monitor_qmp_cleanup_queue_and_resume(MonitorQMP *mon)
>          monitor_resume(&mon->common);
>      }
>  
> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
>  }
>  
>  void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
> @@ -179,7 +178,7 @@ static QMPRequest *monitor_qmp_requests_pop_any_with_lock(void)
>      Monitor *mon;
>      MonitorQMP *qmp_mon;
>  
> -    qemu_mutex_lock(&monitor_lock);
> +    QEMU_LOCK_GUARD(&monitor_lock);
>  
>      QTAILQ_FOREACH(mon, &mon_list, entry) {
>          if (!monitor_is_qmp(mon)) {
> @@ -205,8 +204,6 @@ static QMPRequest *monitor_qmp_requests_pop_any_with_lock(void)
>          QTAILQ_INSERT_TAIL(&mon_list, mon, entry);
>      }
>  
> -    qemu_mutex_unlock(&monitor_lock);
> -
>      return req_obj;

You could have used it for the qmp_queue_lock in this function as well;
that can go in a later patch sometime.

>  }
>  
> @@ -376,30 +373,30 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
>      req_obj->err = err;
>  
>      /* Protect qmp_requests and fetching its length. */
> -    qemu_mutex_lock(&mon->qmp_queue_lock);
> +    WITH_QEMU_LOCK_GUARD(&mon->qmp_queue_lock) {
>  
> -    /*
> -     * Suspend the monitor when we can't queue more requests after
> -     * this one.  Dequeuing in monitor_qmp_dispatcher_co() or
> -     * monitor_qmp_cleanup_queue_and_resume() will resume it.
> -     * Note that when OOB is disabled, we queue at most one command,
> -     * for backward compatibility.
> -     */
> -    if (!qmp_oob_enabled(mon) ||
> -        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
> -        monitor_suspend(&mon->common);
> -    }
> +        /*
> +         * Suspend the monitor when we can't queue more requests after
> +         * this one.  Dequeuing in monitor_qmp_dispatcher_co() or
> +         * monitor_qmp_cleanup_queue_and_resume() will resume it.
> +         * Note that when OOB is disabled, we queue at most one command,
> +         * for backward compatibility.
> +         */
> +        if (!qmp_oob_enabled(mon) ||
> +            mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
> +            monitor_suspend(&mon->common);
> +        }
>  
> -    /*
> -     * Put the request to the end of queue so that requests will be
> -     * handled in time order.  Ownership for req_obj, req,
> -     * etc. will be delivered to the handler side.
> -     */
> -    trace_monitor_qmp_in_band_enqueue(req_obj, mon,
> -                                      mon->qmp_requests->length);
> -    assert(mon->qmp_requests->length < QMP_REQ_QUEUE_LEN_MAX);
> -    g_queue_push_tail(mon->qmp_requests, req_obj);
> -    qemu_mutex_unlock(&mon->qmp_queue_lock);
> +        /*
> +         * Put the request to the end of queue so that requests will be
> +         * handled in time order.  Ownership for req_obj, req,
> +         * etc. will be delivered to the handler side.
> +         */
> +        trace_monitor_qmp_in_band_enqueue(req_obj, mon,
> +                                          mon->qmp_requests->length);
> +        assert(mon->qmp_requests->length < QMP_REQ_QUEUE_LEN_MAX);
> +        g_queue_push_tail(mon->qmp_requests, req_obj);
> +    }
>  
>      /* Kick the dispatcher routine */
>      if (!qatomic_xchg(&qmp_dispatcher_co_busy, true)) {

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> -- 
> 2.25.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD Mahmoud Mandour
@ 2021-03-11 10:04   ` Marc-André Lureau
  2021-03-23  2:58   ` Stefan Berger
  1 sibling, 0 replies; 28+ messages in thread
From: Marc-André Lureau @ 2021-03-11 10:04 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: QEMU, Stefan Berger


On Thu, Mar 11, 2021 at 1:26 PM Mahmoud Mandour <ma.mandourr@gmail.com>
wrote:

> Removed a qemu_mutex_lock() and its respective qemu_mutex_unlock()
> and used QEMU_LOCK_GUARD instead. This simplifies the code by
> eliminating gotos and removing the qemu_mutex_unlock() calls.
>
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
>

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>

---
>  backends/tpm/tpm_emulator.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/backends/tpm/tpm_emulator.c b/backends/tpm/tpm_emulator.c
> index a012adc193..a3c041e402 100644
> --- a/backends/tpm/tpm_emulator.c
> +++ b/backends/tpm/tpm_emulator.c
> @@ -126,7 +126,7 @@ static int tpm_emulator_ctrlcmd(TPMEmulator *tpm,
> unsigned long cmd, void *msg,
>      uint8_t *buf = NULL;
>      int ret = -1;
>
> -    qemu_mutex_lock(&tpm->mutex);
> +    QEMU_LOCK_GUARD(&tpm->mutex);
>
>      buf = g_alloca(n);
>      memcpy(buf, &cmd_no, sizeof(cmd_no));
> @@ -134,20 +134,18 @@ static int tpm_emulator_ctrlcmd(TPMEmulator *tpm,
> unsigned long cmd, void *msg,
>
>      n = qemu_chr_fe_write_all(dev, buf, n);
>      if (n <= 0) {
> -        goto end;
> +        return ret;
>      }
>
>      if (msg_len_out != 0) {
>          n = qemu_chr_fe_read_all(dev, msg, msg_len_out);
>          if (n <= 0) {
> -            goto end;
> +            return ret;
>          }
>      }
>
>      ret = 0;
>
> -end:
> -    qemu_mutex_unlock(&tpm->mutex);
>      return ret;
>  }
>
> --
> 2.25.1
>
>
>

-- 
Marc-André Lureau



* Re: [PATCH 3/9] char: Replaced a qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 3/9] char: Replaced a qemu_mutex_lock " Mahmoud Mandour
@ 2021-03-11 10:05   ` Marc-André Lureau
  0 siblings, 0 replies; 28+ messages in thread
From: Marc-André Lureau @ 2021-03-11 10:05 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: Paolo Bonzini, QEMU


On Thu, Mar 11, 2021 at 1:28 PM Mahmoud Mandour <ma.mandourr@gmail.com>
wrote:

> Removed a pair of qemu_mutex_lock and its respective qemu_mutex_unlock
> and used a QEMU_LOCK_GUARD instead. This improves readability by
> removing the call to qemu_mutex_unlock.
>
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
>

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>

---
>  chardev/char.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/chardev/char.c b/chardev/char.c
> index 97cafd6849..2b0bc1325c 100644
> --- a/chardev/char.c
> +++ b/chardev/char.c
> @@ -115,7 +115,7 @@ static int qemu_chr_write_buffer(Chardev *s,
>      int res = 0;
>      *offset = 0;
>
> -    qemu_mutex_lock(&s->chr_write_lock);
> +    QEMU_LOCK_GUARD(&s->chr_write_lock);
>      while (*offset < len) {
>      retry:
>          res = cc->chr_write(s, buf + *offset, len - *offset);
> @@ -153,7 +153,6 @@ static int qemu_chr_write_buffer(Chardev *s,
>           */
>          qemu_chr_write_log(s, buf, len);
>      }
> -    qemu_mutex_unlock(&s->chr_write_lock);
>
>      return res;
>  }
> --
> 2.25.1
>
>
>

-- 
Marc-André Lureau



* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock " Mahmoud Mandour
  2021-03-11  7:43   ` Greg Kurz
@ 2021-03-11 10:49   ` Christian Schoenebeck
  2021-03-11 11:52     ` Greg Kurz
  1 sibling, 1 reply; 28+ messages in thread
From: Christian Schoenebeck @ 2021-03-11 10:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mahmoud Mandour, Greg Kurz

On Donnerstag, 11. März 2021 04:15:37 CET Mahmoud Mandour wrote:
> Replaced a call to qemu_mutex_lock and its respective call to
> qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> This simplifies the code by removing the call required to unlock
> and also eliminates goto paths.
> 
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---
>  hw/9pfs/9p-synth.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> index 7eb210ffa8..473ef914b0 100644
> --- a/hw/9pfs/9p-synth.c
> +++ b/hw/9pfs/9p-synth.c
> @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int
> mode, if (!parent) {
>          parent = &synth_root;
>      }
> -    qemu_mutex_lock(&synth_mutex);
> +    QEMU_LOCK_GUARD(&synth_mutex);
>      QLIST_FOREACH(tmp, &parent->child, sibling) {
>          if (!strcmp(tmp->name, name)) {
>              ret = EEXIST;
> -            goto err_out;
> +            return ret;
>          }
>      }
>      /* Add the name */
> @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
> node->attr, node->attr->inode);
>      *result = node;
>      ret = 0;
> -err_out:
> -    qemu_mutex_unlock(&synth_mutex);
>      return ret;
>  }
> 
> @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent,
> int mode, parent = &synth_root;
>      }
> 
> -    qemu_mutex_lock(&synth_mutex);
> +    QEMU_LOCK_GUARD(&synth_mutex);
>      QLIST_FOREACH(tmp, &parent->child, sibling) {
>          if (!strcmp(tmp->name, name)) {
>              ret = EEXIST;
> -            goto err_out;
> +            return ret;
>          }
>      }
>      /* Add file type and remove write bits */
> @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int
> mode, pstrcpy(node->name, sizeof(node->name), name);
>      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
>      ret = 0;
> -err_out:
> -    qemu_mutex_unlock(&synth_mutex);
>      return ret;
>  }

Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>

Greg, I suggest I'll push this onto my queue as you seem to be busy.

Best regards,
Christian Schoenebeck





* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11 10:49   ` Christian Schoenebeck
@ 2021-03-11 11:52     ` Greg Kurz
  2021-03-11 11:59       ` Christian Schoenebeck
  0 siblings, 1 reply; 28+ messages in thread
From: Greg Kurz @ 2021-03-11 11:52 UTC (permalink / raw)
  To: Christian Schoenebeck; +Cc: Mahmoud Mandour, qemu-devel

On Thu, 11 Mar 2021 11:49:06 +0100
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:

> On Donnerstag, 11. März 2021 04:15:37 CET Mahmoud Mandour wrote:
> > Replaced a call to qemu_mutex_lock and its respective call to
> > qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> > This simplifies the code by removing the call required to unlock
> > and also eliminates goto paths.
> > 
> > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> > ---
> >  hw/9pfs/9p-synth.c | 12 ++++--------
> >  1 file changed, 4 insertions(+), 8 deletions(-)
> > 
> > diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> > index 7eb210ffa8..473ef914b0 100644
> > --- a/hw/9pfs/9p-synth.c
> > +++ b/hw/9pfs/9p-synth.c
> > @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int
> > mode, if (!parent) {
> >          parent = &synth_root;
> >      }
> > -    qemu_mutex_lock(&synth_mutex);
> > +    QEMU_LOCK_GUARD(&synth_mutex);
> >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> >          if (!strcmp(tmp->name, name)) {
> >              ret = EEXIST;
> > -            goto err_out;
> > +            return ret;
> >          }
> >      }
> >      /* Add the name */
> > @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
> > node->attr, node->attr->inode);
> >      *result = node;
> >      ret = 0;
> > -err_out:
> > -    qemu_mutex_unlock(&synth_mutex);
> >      return ret;
> >  }
> > 
> > @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent,
> > int mode, parent = &synth_root;
> >      }
> > 
> > -    qemu_mutex_lock(&synth_mutex);
> > +    QEMU_LOCK_GUARD(&synth_mutex);
> >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> >          if (!strcmp(tmp->name, name)) {
> >              ret = EEXIST;
> > -            goto err_out;
> > +            return ret;
> >          }
> >      }
> >      /* Add file type and remove write bits */
> > @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int
> > mode, pstrcpy(node->name, sizeof(node->name), name);
> >      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
> >      ret = 0;
> > -err_out:
> > -    qemu_mutex_unlock(&synth_mutex);
> >      return ret;
> >  }
> 
> Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> 
> Greg, I suggest I'll push this onto my queue as you seem to be busy.
> 

This cleanup spans multiple subsystems, but I think it makes more
sense to keep all these patches together. Let's wait for everyone to
ack/review and then we'll decide how to merge the patches.

> Best regards,
> Christian Schoenebeck
> 
> 




* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11 11:52     ` Greg Kurz
@ 2021-03-11 11:59       ` Christian Schoenebeck
  2021-03-13  5:43         ` Mahmoud Mandour
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Schoenebeck @ 2021-03-11 11:59 UTC (permalink / raw)
  To: qemu-devel; +Cc: Greg Kurz, Mahmoud Mandour

On Donnerstag, 11. März 2021 12:52:45 CET Greg Kurz wrote:
> On Thu, 11 Mar 2021 11:49:06 +0100
> 
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > On Donnerstag, 11. März 2021 04:15:37 CET Mahmoud Mandour wrote:
> > > Replaced a call to qemu_mutex_lock and its respective call to
> > > qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> > > This simplifies the code by removing the call required to unlock
> > > and also eliminates goto paths.
> > > 
> > > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> > > ---
> > > 
> > >  hw/9pfs/9p-synth.c | 12 ++++--------
> > >  1 file changed, 4 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> > > index 7eb210ffa8..473ef914b0 100644
> > > --- a/hw/9pfs/9p-synth.c
> > > +++ b/hw/9pfs/9p-synth.c
> > > @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int
> > > mode, if (!parent) {
> > > 
> > >          parent = &synth_root;
> > >      
> > >      }
> > > 
> > > -    qemu_mutex_lock(&synth_mutex);
> > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > 
> > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > >      
> > >          if (!strcmp(tmp->name, name)) {
> > >          
> > >              ret = EEXIST;
> > > 
> > > -            goto err_out;
> > > +            return ret;
> > > 
> > >          }
> > >      
> > >      }
> > >      /* Add the name */
> > > 
> > > @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int
> > > mode, node->attr, node->attr->inode);
> > > 
> > >      *result = node;
> > >      ret = 0;
> > > 
> > > -err_out:
> > > -    qemu_mutex_unlock(&synth_mutex);
> > > 
> > >      return ret;
> > >  
> > >  }
> > > 
> > > @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode
> > > *parent,
> > > int mode, parent = &synth_root;
> > > 
> > >      }
> > > 
> > > -    qemu_mutex_lock(&synth_mutex);
> > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > 
> > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > >      
> > >          if (!strcmp(tmp->name, name)) {
> > >          
> > >              ret = EEXIST;
> > > 
> > > -            goto err_out;
> > > +            return ret;
> > > 
> > >          }
> > >      
> > >      }
> > >      /* Add file type and remove write bits */
> > > 
> > > @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent,
> > > int
> > > mode, pstrcpy(node->name, sizeof(node->name), name);
> > > 
> > >      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
> > >      ret = 0;
> > > 
> > > -err_out:
> > > -    qemu_mutex_unlock(&synth_mutex);
> > > 
> > >      return ret;
> > >  
> > >  }
> > 
> > Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > 
> > Greg, I suggest I'll push this onto my queue as you seem to be busy.
> 
> This cleanup spans over multiple subsystems but I think it makes more
> sense to keep all these patches together. Let's wait for everyone to
> ack/review and then we'll decide how to merge the patches.

Sure, makes sense.






* Re: [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
@ 2021-03-12 10:23   ` Vladimir Sementsov-Ogievskiy
  2021-03-13  5:51     ` Mahmoud Mandour
  2021-03-16 13:29     ` Eric Blake
  0 siblings, 2 replies; 28+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-12 10:23 UTC (permalink / raw)
  To: Mahmoud Mandour, qemu-devel; +Cc: Kevin Wolf, open list:CURL, Max Reitz

11.03.2021 06:15, Mahmoud Mandour wrote:
> Replaced various qemu_mutex_lock/qemu_mutex_unlock calls with
> lock guard macros (QEMU_LOCK_GUARD() and WITH_QEMU_LOCK_GUARD).
> This slightly simplifies the code by eliminating calls to
> qemu_mutex_unlock and eliminates goto paths.
> 
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---
>   block/curl.c |  13 ++--
>   block/nbd.c  | 188 ++++++++++++++++++++++++---------------------------

Better would be to make two separate patches, I think.

>   2 files changed, 95 insertions(+), 106 deletions(-)
> 
> diff --git a/block/curl.c b/block/curl.c
> index 4ff895df8f..56a217951a 100644
> --- a/block/curl.c
> +++ b/block/curl.c
> @@ -832,12 +832,12 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>       uint64_t start = acb->offset;
>       uint64_t end;
>   
> -    qemu_mutex_lock(&s->mutex);
> +    QEMU_LOCK_GUARD(&s->mutex);
>   
>       // In case we have the requested data already (e.g. read-ahead),
>       // we can just call the callback and be done.
>       if (curl_find_buf(s, start, acb->bytes, acb)) {
> -        goto out;
> +        return;
>       }
>   
>       // No cache found, so let's start a new request
> @@ -852,7 +852,7 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>       if (curl_init_state(s, state) < 0) {
>           curl_clean_state(state);
>           acb->ret = -EIO;
> -        goto out;
> +        return;
>       }
>   
>       acb->start = 0;
> @@ -867,7 +867,7 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>       if (state->buf_len && state->orig_buf == NULL) {
>           curl_clean_state(state);
>           acb->ret = -ENOMEM;
> -        goto out;
> +        return;
>       }
>       state->acb[0] = acb;
>   
> @@ -880,14 +880,11 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>           acb->ret = -EIO;
>   
>           curl_clean_state(state);
> -        goto out;
> +        return;
>       }
>   
>       /* Tell curl it needs to kick things off */
>       curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
> -
> -out:
> -    qemu_mutex_unlock(&s->mutex);
>   }

This change is obvious and good.

>   
>   static int coroutine_fn curl_co_preadv(BlockDriverState *bs,
> diff --git a/block/nbd.c b/block/nbd.c
> index c26dc5a54f..28ba7aad61 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -407,27 +407,25 @@ static void *connect_thread_func(void *opaque)
>           thr->sioc = NULL;
>       }
>   
> -    qemu_mutex_lock(&thr->mutex);
> -
> -    switch (thr->state) {
> -    case CONNECT_THREAD_RUNNING:
> -        thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
> -        if (thr->bh_ctx) {
> -            aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);
> -
> -            /* play safe, don't reuse bh_ctx on further connection attempts */
> -            thr->bh_ctx = NULL;
> +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> +        switch (thr->state) {
> +        case CONNECT_THREAD_RUNNING:
> +            thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
> +            if (thr->bh_ctx) {
> +                aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);

over-80 line

> +
> +                /* play safe, don't reuse bh_ctx on further connection attempts */

and here

> +                thr->bh_ctx = NULL;
> +            }
> +            break;
> +        case CONNECT_THREAD_RUNNING_DETACHED:
> +            do_free = true;
> +            break;
> +        default:
> +            abort();
>           }
> -        break;
> -    case CONNECT_THREAD_RUNNING_DETACHED:
> -        do_free = true;
> -        break;
> -    default:
> -        abort();
>       }
>   
> -    qemu_mutex_unlock(&thr->mutex);
> -
>       if (do_free) {
>           nbd_free_connect_thread(thr);
>       }
> @@ -443,36 +441,33 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>       BDRVNBDState *s = bs->opaque;
>       NBDConnectThread *thr = s->connect_thread;
>   
> -    qemu_mutex_lock(&thr->mutex);
> -
> -    switch (thr->state) {
> -    case CONNECT_THREAD_FAIL:
> -    case CONNECT_THREAD_NONE:
> -        error_free(thr->err);
> -        thr->err = NULL;
> -        thr->state = CONNECT_THREAD_RUNNING;
> -        qemu_thread_create(&thread, "nbd-connect",
> -                           connect_thread_func, thr, QEMU_THREAD_DETACHED);
> -        break;
> -    case CONNECT_THREAD_SUCCESS:
> -        /* Previous attempt finally succeeded in background */
> -        thr->state = CONNECT_THREAD_NONE;
> -        s->sioc = thr->sioc;
> -        thr->sioc = NULL;
> -        yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> -                               nbd_yank, bs);
> -        qemu_mutex_unlock(&thr->mutex);
> -        return 0;
> -    case CONNECT_THREAD_RUNNING:
> -        /* Already running, will wait */
> -        break;
> -    default:
> -        abort();
> -    }
> -
> -    thr->bh_ctx = qemu_get_current_aio_context();
> +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> +        switch (thr->state) {
> +        case CONNECT_THREAD_FAIL:
> +        case CONNECT_THREAD_NONE:
> +            error_free(thr->err);
> +            thr->err = NULL;
> +            thr->state = CONNECT_THREAD_RUNNING;
> +            qemu_thread_create(&thread, "nbd-connect",
> +                               connect_thread_func, thr, QEMU_THREAD_DETACHED);
> +            break;
> +        case CONNECT_THREAD_SUCCESS:
> +            /* Previous attempt finally succeeded in background */
> +            thr->state = CONNECT_THREAD_NONE;
> +            s->sioc = thr->sioc;
> +            thr->sioc = NULL;
> +            yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> +                                   nbd_yank, bs);
> +            return 0;
> +        case CONNECT_THREAD_RUNNING:
> +            /* Already running, will wait */
> +            break;
> +        default:
> +            abort();
> +        }
>   
> -    qemu_mutex_unlock(&thr->mutex);
> +        thr->bh_ctx = qemu_get_current_aio_context();
> +    }
>   
>   
>       /*
> @@ -486,46 +481,45 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>       s->wait_connect = true;
>       qemu_coroutine_yield();
>   
> -    qemu_mutex_lock(&thr->mutex);
>   
> -    switch (thr->state) {
> -    case CONNECT_THREAD_SUCCESS:
> -    case CONNECT_THREAD_FAIL:
> -        thr->state = CONNECT_THREAD_NONE;
> -        error_propagate(errp, thr->err);
> -        thr->err = NULL;
> -        s->sioc = thr->sioc;
> -        thr->sioc = NULL;
> -        if (s->sioc) {
> -            yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> -                                   nbd_yank, bs);
> -        }
> -        ret = (s->sioc ? 0 : -1);
> -        break;
> -    case CONNECT_THREAD_RUNNING:
> -    case CONNECT_THREAD_RUNNING_DETACHED:
> -        /*
> -         * Obviously, drained section wants to start. Report the attempt as
> -         * failed. Still connect thread is executing in background, and its
> -         * result may be used for next connection attempt.
> -         */
> -        ret = -1;
> -        error_setg(errp, "Connection attempt cancelled by other operation");
> -        break;
> +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> +        switch (thr->state) {
> +        case CONNECT_THREAD_SUCCESS:
> +        case CONNECT_THREAD_FAIL:
> +            thr->state = CONNECT_THREAD_NONE;
> +            error_propagate(errp, thr->err);
> +            thr->err = NULL;
> +            s->sioc = thr->sioc;
> +            thr->sioc = NULL;
> +            if (s->sioc) {
> +                yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> +                                       nbd_yank, bs);
> +            }
> +            ret = (s->sioc ? 0 : -1);
> +            break;
> +        case CONNECT_THREAD_RUNNING:
> +        case CONNECT_THREAD_RUNNING_DETACHED:
> +            /*
> +             * Obviously, drained section wants to start. Report the attempt as
> +             * failed. Still connect thread is executing in background, and its
> +             * result may be used for next connection attempt.
> +             */
> +            ret = -1;
> +            error_setg(errp, "Connection attempt cancelled by other operation");
> +            break;
>   
> -    case CONNECT_THREAD_NONE:
> -        /*
> -         * Impossible. We've seen this thread running. So it should be
> -         * running or at least give some results.
> -         */
> -        abort();
> +        case CONNECT_THREAD_NONE:
> +            /*
> +             * Impossible. We've seen this thread running. So it should be
> +             * running or at least give some results.
> +             */
> +            abort();
>   
> -    default:
> -        abort();
> +        default:
> +            abort();
> +        }
>       }
>   
> -    qemu_mutex_unlock(&thr->mutex);
> -
>       return ret;
>   }
>   
> @@ -546,25 +540,23 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
>       bool wake = false;
>       bool do_free = false;
>   
> -    qemu_mutex_lock(&thr->mutex);
> -
> -    if (thr->state == CONNECT_THREAD_RUNNING) {
> -        /* We can cancel only in running state, when bh is not yet scheduled */
> -        thr->bh_ctx = NULL;
> -        if (s->wait_connect) {
> -            s->wait_connect = false;
> -            wake = true;
> -        }
> -        if (detach) {
> -            thr->state = CONNECT_THREAD_RUNNING_DETACHED;
> -            s->connect_thread = NULL;
> +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> +        if (thr->state == CONNECT_THREAD_RUNNING) {
> +            /* We can cancel only in running state, when bh is not yet scheduled */

over-80 line

> +            thr->bh_ctx = NULL;
> +            if (s->wait_connect) {
> +                s->wait_connect = false;
> +                wake = true;
> +            }
> +            if (detach) {
> +                thr->state = CONNECT_THREAD_RUNNING_DETACHED;
> +                s->connect_thread = NULL;
> +            }
> +        } else if (detach) {
> +            do_free = true;
>           }
> -    } else if (detach) {
> -        do_free = true;
>       }
>   
> -    qemu_mutex_unlock(&thr->mutex);
> -
>       if (do_free) {
>           nbd_free_connect_thread(thr);
>           s->connect_thread = NULL;
> 


For nbd.c we mostly change simple critical sections

qemu_mutex_lock()
...
qemu_mutex_unlock()

into

WITH_QEMU_LOCK_GUARD() {
...
}

And we don't eliminate any gotos. Hmm, at first sight the increased
nesting doesn't make the code more beautiful. But I understand that
the context-manager concept is safer than calling unlock() by hand,
and with the nested block the critical section becomes more obvious.
So, with the over-80 lines fixed:

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

-- 
Best regards,
Vladimir



* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-11 11:59       ` Christian Schoenebeck
@ 2021-03-13  5:43         ` Mahmoud Mandour
  2021-03-13  7:51           ` Greg Kurz
  0 siblings, 1 reply; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-13  5:43 UTC (permalink / raw)
  To: Christian Schoenebeck; +Cc: qemu-devel, Greg Kurz


Thanks for the fast review. I asked on the QEMU IRC channel,
before committing, whether to put all the changes into one patch
or split them, and was advised that it was better to split them up.
But in any case I'm open to both ways, and you can decide
on the best way to go.

On Thu, Mar 11, 2021 at 1:59 PM Christian Schoenebeck <
qemu_oss@crudebyte.com> wrote:

> On Donnerstag, 11. März 2021 12:52:45 CET Greg Kurz wrote:
> > On Thu, 11 Mar 2021 11:49:06 +0100
> >
> > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > > On Donnerstag, 11. März 2021 04:15:37 CET Mahmoud Mandour wrote:
> > > > Replaced a call to qemu_mutex_lock and its respective call to
> > > > qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> > > > This simplifies the code by removing the call required to unlock
> > > > and also eliminates goto paths.
> > > >
> > > > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> > > > ---
> > > >
> > > >  hw/9pfs/9p-synth.c | 12 ++++--------
> > > >  1 file changed, 4 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> > > > index 7eb210ffa8..473ef914b0 100644
> > > > --- a/hw/9pfs/9p-synth.c
> > > > +++ b/hw/9pfs/9p-synth.c
> > > > @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent,
> int
> > > > mode, if (!parent) {
> > > >
> > > >          parent = &synth_root;
> > > >
> > > >      }
> > > >
> > > > -    qemu_mutex_lock(&synth_mutex);
> > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > >
> > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > >
> > > >          if (!strcmp(tmp->name, name)) {
> > > >
> > > >              ret = EEXIST;
> > > >
> > > > -            goto err_out;
> > > > +            return ret;
> > > >
> > > >          }
> > > >
> > > >      }
> > > >      /* Add the name */
> > > >
> > > > @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent,
> int
> > > > mode, node->attr, node->attr->inode);
> > > >
> > > >      *result = node;
> > > >      ret = 0;
> > > >
> > > > -err_out:
> > > > -    qemu_mutex_unlock(&synth_mutex);
> > > >
> > > >      return ret;
> > > >
> > > >  }
> > > >
> > > > @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode
> > > > *parent,
> > > > int mode, parent = &synth_root;
> > > >
> > > >      }
> > > >
> > > > -    qemu_mutex_lock(&synth_mutex);
> > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > >
> > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > >
> > > >          if (!strcmp(tmp->name, name)) {
> > > >
> > > >              ret = EEXIST;
> > > >
> > > > -            goto err_out;
> > > > +            return ret;
> > > >
> > > >          }
> > > >
> > > >      }
> > > >      /* Add file type and remove write bits */
> > > >
> > > > @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode
> *parent,
> > > > int
> > > > mode, pstrcpy(node->name, sizeof(node->name), name);
> > > >
> > > >      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
> > > >      ret = 0;
> > > >
> > > > -err_out:
> > > > -    qemu_mutex_unlock(&synth_mutex);
> > > >
> > > >      return ret;
> > > >
> > > >  }
> > >
> > > Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > >
> > > Greg, I suggest I'll push this onto my queue as you seem to be busy.
> >
> > This cleanup spans over multiple subsystems but I think it makes more
> > sense to keep all these patches together. Let's wait for everyone to
> > ack/review and then we'll decide how to merge the patches.
>
> Sure, makes sense.
>
>
>
>



* Re: [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-12 10:23   ` Vladimir Sementsov-Ogievskiy
@ 2021-03-13  5:51     ` Mahmoud Mandour
  2021-03-15 14:08       ` Vladimir Sementsov-Ogievskiy
  2021-03-16 13:29     ` Eric Blake
  1 sibling, 1 reply; 28+ messages in thread
From: Mahmoud Mandour @ 2021-03-13  5:51 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: Kevin Wolf, qemu-devel, open list:CURL, Max Reitz


Thank you for the fast review, and I'm sorry for the silly and obvious style
errors. Unfortunately, I did not notice the section on using the checkpatch
script on the Contributing page of the wiki before committing. But I assure
you that such errors will not occur again.

On Fri, Mar 12, 2021 at 12:23 PM Vladimir Sementsov-Ogievskiy <
vsementsov@virtuozzo.com> wrote:

> 11.03.2021 06:15, Mahmoud Mandour wrote:
> > Replaced various qemu_mutex_lock/qemu_mutex_unlock calls with
> > lock guard macros (QEMU_LOCK_GUARD() and WITH_QEMU_LOCK_GUARD).
> > This slightly simplifies the code by eliminating calls to
> > qemu_mutex_unlock and eliminates goto paths.
> >
> > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> > ---
> >   block/curl.c |  13 ++--
> >   block/nbd.c  | 188 ++++++++++++++++++++++++---------------------------
>
> Better would be make two separate patches I think.
>
> >   2 files changed, 95 insertions(+), 106 deletions(-)
> >
> > diff --git a/block/curl.c b/block/curl.c
> > index 4ff895df8f..56a217951a 100644
> > --- a/block/curl.c
> > +++ b/block/curl.c
> > @@ -832,12 +832,12 @@ static void curl_setup_preadv(BlockDriverState
> *bs, CURLAIOCB *acb)
> >       uint64_t start = acb->offset;
> >       uint64_t end;
> >
> > -    qemu_mutex_lock(&s->mutex);
> > +    QEMU_LOCK_GUARD(&s->mutex);
> >
> >       // In case we have the requested data already (e.g. read-ahead),
> >       // we can just call the callback and be done.
> >       if (curl_find_buf(s, start, acb->bytes, acb)) {
> > -        goto out;
> > +        return;
> >       }
> >
> >       // No cache found, so let's start a new request
> > @@ -852,7 +852,7 @@ static void curl_setup_preadv(BlockDriverState *bs,
> CURLAIOCB *acb)
> >       if (curl_init_state(s, state) < 0) {
> >           curl_clean_state(state);
> >           acb->ret = -EIO;
> > -        goto out;
> > +        return;
> >       }
> >
> >       acb->start = 0;
> > @@ -867,7 +867,7 @@ static void curl_setup_preadv(BlockDriverState *bs,
> CURLAIOCB *acb)
> >       if (state->buf_len && state->orig_buf == NULL) {
> >           curl_clean_state(state);
> >           acb->ret = -ENOMEM;
> > -        goto out;
> > +        return;
> >       }
> >       state->acb[0] = acb;
> >
> > @@ -880,14 +880,11 @@ static void curl_setup_preadv(BlockDriverState
> *bs, CURLAIOCB *acb)
> >           acb->ret = -EIO;
> >
> >           curl_clean_state(state);
> > -        goto out;
> > +        return;
> >       }
> >
> >       /* Tell curl it needs to kick things off */
> >       curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0,
> &running);
> > -
> > -out:
> > -    qemu_mutex_unlock(&s->mutex);
> >   }
>
> This change is obvious and good.
>
> >
> >   static int coroutine_fn curl_co_preadv(BlockDriverState *bs,
> > diff --git a/block/nbd.c b/block/nbd.c
> > index c26dc5a54f..28ba7aad61 100644
> > --- a/block/nbd.c
> > +++ b/block/nbd.c
> > @@ -407,27 +407,25 @@ static void *connect_thread_func(void *opaque)
> >           thr->sioc = NULL;
> >       }
> >
> > -    qemu_mutex_lock(&thr->mutex);
> > -
> > -    switch (thr->state) {
> > -    case CONNECT_THREAD_RUNNING:
> > -        thr->state = ret < 0 ? CONNECT_THREAD_FAIL :
> CONNECT_THREAD_SUCCESS;
> > -        if (thr->bh_ctx) {
> > -            aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func,
> thr->bh_opaque);
> > -
> > -            /* play safe, don't reuse bh_ctx on further connection
> attempts */
> > -            thr->bh_ctx = NULL;
> > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> > +        switch (thr->state) {
> > +        case CONNECT_THREAD_RUNNING:
> > +            thr->state = ret < 0 ? CONNECT_THREAD_FAIL :
> CONNECT_THREAD_SUCCESS;
> > +            if (thr->bh_ctx) {
> > +                aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func,
> thr->bh_opaque);
>
> over-80 line
>
> > +
> > +                /* play safe, don't reuse bh_ctx on further connection
> attempts */
>
> and here
>
> > +                thr->bh_ctx = NULL;
> > +            }
> > +            break;
> > +        case CONNECT_THREAD_RUNNING_DETACHED:
> > +            do_free = true;
> > +            break;
> > +        default:
> > +            abort();
> >           }
> > -        break;
> > -    case CONNECT_THREAD_RUNNING_DETACHED:
> > -        do_free = true;
> > -        break;
> > -    default:
> > -        abort();
> >       }
> >
> > -    qemu_mutex_unlock(&thr->mutex);
> > -
> >       if (do_free) {
> >           nbd_free_connect_thread(thr);
> >       }
> > @@ -443,36 +441,33 @@ nbd_co_establish_connection(BlockDriverState *bs,
> Error **errp)
> >       BDRVNBDState *s = bs->opaque;
> >       NBDConnectThread *thr = s->connect_thread;
> >
> > -    qemu_mutex_lock(&thr->mutex);
> > -
> > -    switch (thr->state) {
> > -    case CONNECT_THREAD_FAIL:
> > -    case CONNECT_THREAD_NONE:
> > -        error_free(thr->err);
> > -        thr->err = NULL;
> > -        thr->state = CONNECT_THREAD_RUNNING;
> > -        qemu_thread_create(&thread, "nbd-connect",
> > -                           connect_thread_func, thr,
> QEMU_THREAD_DETACHED);
> > -        break;
> > -    case CONNECT_THREAD_SUCCESS:
> > -        /* Previous attempt finally succeeded in background */
> > -        thr->state = CONNECT_THREAD_NONE;
> > -        s->sioc = thr->sioc;
> > -        thr->sioc = NULL;
> > -        yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> > -                               nbd_yank, bs);
> > -        qemu_mutex_unlock(&thr->mutex);
> > -        return 0;
> > -    case CONNECT_THREAD_RUNNING:
> > -        /* Already running, will wait */
> > -        break;
> > -    default:
> > -        abort();
> > -    }
> > -
> > -    thr->bh_ctx = qemu_get_current_aio_context();
> > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> > +        switch (thr->state) {
> > +        case CONNECT_THREAD_FAIL:
> > +        case CONNECT_THREAD_NONE:
> > +            error_free(thr->err);
> > +            thr->err = NULL;
> > +            thr->state = CONNECT_THREAD_RUNNING;
> > +            qemu_thread_create(&thread, "nbd-connect",
> > +                               connect_thread_func, thr,
> QEMU_THREAD_DETACHED);
> > +            break;
> > +        case CONNECT_THREAD_SUCCESS:
> > +            /* Previous attempt finally succeeded in background */
> > +            thr->state = CONNECT_THREAD_NONE;
> > +            s->sioc = thr->sioc;
> > +            thr->sioc = NULL;
> > +
> yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> > +                                   nbd_yank, bs);
> > +            return 0;
> > +        case CONNECT_THREAD_RUNNING:
> > +            /* Already running, will wait */
> > +            break;
> > +        default:
> > +            abort();
> > +        }
> >
> > -    qemu_mutex_unlock(&thr->mutex);
> > +        thr->bh_ctx = qemu_get_current_aio_context();
> > +    }
> >
> >
> >       /*
> > @@ -486,46 +481,45 @@ nbd_co_establish_connection(BlockDriverState *bs,
> Error **errp)
> >       s->wait_connect = true;
> >       qemu_coroutine_yield();
> >
> > -    qemu_mutex_lock(&thr->mutex);
> >
> > -    switch (thr->state) {
> > -    case CONNECT_THREAD_SUCCESS:
> > -    case CONNECT_THREAD_FAIL:
> > -        thr->state = CONNECT_THREAD_NONE;
> > -        error_propagate(errp, thr->err);
> > -        thr->err = NULL;
> > -        s->sioc = thr->sioc;
> > -        thr->sioc = NULL;
> > -        if (s->sioc) {
> > -
> yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> > -                                   nbd_yank, bs);
> > -        }
> > -        ret = (s->sioc ? 0 : -1);
> > -        break;
> > -    case CONNECT_THREAD_RUNNING:
> > -    case CONNECT_THREAD_RUNNING_DETACHED:
> > -        /*
> > -         * Obviously, drained section wants to start. Report the
> attempt as
> > -         * failed. Still connect thread is executing in background, and
> its
> > -         * result may be used for next connection attempt.
> > -         */
> > -        ret = -1;
> > -        error_setg(errp, "Connection attempt cancelled by other
> operation");
> > -        break;
> > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> > +        switch (thr->state) {
> > +        case CONNECT_THREAD_SUCCESS:
> > +        case CONNECT_THREAD_FAIL:
> > +            thr->state = CONNECT_THREAD_NONE;
> > +            error_propagate(errp, thr->err);
> > +            thr->err = NULL;
> > +            s->sioc = thr->sioc;
> > +            thr->sioc = NULL;
> > +            if (s->sioc) {
> > +
> yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> > +                                       nbd_yank, bs);
> > +            }
> > +            ret = (s->sioc ? 0 : -1);
> > +            break;
> > +        case CONNECT_THREAD_RUNNING:
> > +        case CONNECT_THREAD_RUNNING_DETACHED:
> > +            /*
> > +             * Obviously, drained section wants to start. Report the
> attempt as
> > +             * failed. Still connect thread is executing in background,
> and its
> > +             * result may be used for next connection attempt.
> > +             */
> > +            ret = -1;
> > +            error_setg(errp, "Connection attempt cancelled by other
> operation");
> > +            break;
> >
> > -    case CONNECT_THREAD_NONE:
> > -        /*
> > -         * Impossible. We've seen this thread running. So it should be
> > -         * running or at least give some results.
> > -         */
> > -        abort();
> > +        case CONNECT_THREAD_NONE:
> > +            /*
> > +             * Impossible. We've seen this thread running. So it should
> be
> > +             * running or at least give some results.
> > +             */
> > +            abort();
> >
> > -    default:
> > -        abort();
> > +        default:
> > +            abort();
> > +        }
> >       }
> >
> > -    qemu_mutex_unlock(&thr->mutex);
> > -
> >       return ret;
> >   }
> >
> > @@ -546,25 +540,23 @@ static void
> nbd_co_establish_connection_cancel(BlockDriverState *bs,
> >       bool wake = false;
> >       bool do_free = false;
> >
> > -    qemu_mutex_lock(&thr->mutex);
> > -
> > -    if (thr->state == CONNECT_THREAD_RUNNING) {
> > -        /* We can cancel only in running state, when bh is not yet
> scheduled */
> > -        thr->bh_ctx = NULL;
> > -        if (s->wait_connect) {
> > -            s->wait_connect = false;
> > -            wake = true;
> > -        }
> > -        if (detach) {
> > -            thr->state = CONNECT_THREAD_RUNNING_DETACHED;
> > -            s->connect_thread = NULL;
> > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
> > +        if (thr->state == CONNECT_THREAD_RUNNING) {
> > +            /* We can cancel only in running state, when bh is not yet
> scheduled */
>
> over-80 line
>
> > +            thr->bh_ctx = NULL;
> > +            if (s->wait_connect) {
> > +                s->wait_connect = false;
> > +                wake = true;
> > +            }
> > +            if (detach) {
> > +                thr->state = CONNECT_THREAD_RUNNING_DETACHED;
> > +                s->connect_thread = NULL;
> > +            }
> > +        } else if (detach) {
> > +            do_free = true;
> >           }
> > -    } else if (detach) {
> > -        do_free = true;
> >       }
> >
> > -    qemu_mutex_unlock(&thr->mutex);
> > -
> >       if (do_free) {
> >           nbd_free_connect_thread(thr);
> >           s->connect_thread = NULL;
> >
>
>
> For nbd.c we mostly change simple critical sections
>
> qemu_mutex_lock()
> ...
> qemu_mutex_unlock()
>
> into
>
> WITH_QEMU_LOCK_GUARD() {
> ...
> }
>
> And don't eliminate any gotos... Hmm. At first sight, increasing the nesting
> of the code doesn't make it more beautiful.
> But I understand that the context-manager concept is safer than calling
> unlock() by hand, and with the nested block the critical section becomes more
> obvious. So, with the over-80 lines fixed:
>
> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>
> --
> Best regards,
> Vladimir
>

[-- Attachment #2: Type: text/html, Size: 16100 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-13  5:43         ` Mahmoud Mandour
@ 2021-03-13  7:51           ` Greg Kurz
  2021-03-15 16:07             ` Christian Schoenebeck
  0 siblings, 1 reply; 28+ messages in thread
From: Greg Kurz @ 2021-03-13  7:51 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: Christian Schoenebeck, qemu-devel

On Sat, 13 Mar 2021 07:43:38 +0200
Mahmoud Mandour <ma.mandourr@gmail.com> wrote:

> Thanks for the fast review. I asked on the QEMU IRC channel
> before committing whether to put all the changes into one patch
> or split them and was instructed that it was better to split them up.
> But in any case I was open to both ways and you can decide
> on the best way to go.
> 

People only do inline replies here. Please don't top-post for the
sake of clarity.

So, the instruction to split the patches is obviously the way to go. The
question here is rather: will each subsystem maintainer pick up patches
from this series, or will only one maintainer pick up all the patches after
they have been acked by the other maintainers?

> On Thu, Mar 11, 2021 at 1:59 PM Christian Schoenebeck <
> qemu_oss@crudebyte.com> wrote:
> 
> > On Donnerstag, 11. März 2021 12:52:45 CET Greg Kurz wrote:
> > > On Thu, 11 Mar 2021 11:49:06 +0100
> > >
> > > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > > > On Donnerstag, 11. März 2021 04:15:37 CET Mahmoud Mandour wrote:
> > > > > Replaced a call to qemu_mutex_lock and its respective call to
> > > > > qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> > > > > This simplifies the code by removing the call required to unlock
> > > > > and also eliminates goto paths.
> > > > >
> > > > > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> > > > > ---
> > > > >
> > > > >  hw/9pfs/9p-synth.c | 12 ++++--------
> > > > >  1 file changed, 4 insertions(+), 8 deletions(-)
> > > > >
> > > > > diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> > > > > index 7eb210ffa8..473ef914b0 100644
> > > > > --- a/hw/9pfs/9p-synth.c
> > > > > +++ b/hw/9pfs/9p-synth.c
> > > > > @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent,
> > int
> > > > > mode, if (!parent) {
> > > > >
> > > > >          parent = &synth_root;
> > > > >
> > > > >      }
> > > > >
> > > > > -    qemu_mutex_lock(&synth_mutex);
> > > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > > >
> > > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > > >
> > > > >          if (!strcmp(tmp->name, name)) {
> > > > >
> > > > >              ret = EEXIST;
> > > > >
> > > > > -            goto err_out;
> > > > > +            return ret;
> > > > >
> > > > >          }
> > > > >
> > > > >      }
> > > > >      /* Add the name */
> > > > >
> > > > > @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent,
> > int
> > > > > mode, node->attr, node->attr->inode);
> > > > >
> > > > >      *result = node;
> > > > >      ret = 0;
> > > > >
> > > > > -err_out:
> > > > > -    qemu_mutex_unlock(&synth_mutex);
> > > > >
> > > > >      return ret;
> > > > >
> > > > >  }
> > > > >
> > > > > @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode
> > > > > *parent,
> > > > > int mode, parent = &synth_root;
> > > > >
> > > > >      }
> > > > >
> > > > > -    qemu_mutex_lock(&synth_mutex);
> > > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > > >
> > > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > > >
> > > > >          if (!strcmp(tmp->name, name)) {
> > > > >
> > > > >              ret = EEXIST;
> > > > >
> > > > > -            goto err_out;
> > > > > +            return ret;
> > > > >
> > > > >          }
> > > > >
> > > > >      }
> > > > >      /* Add file type and remove write bits */
> > > > >
> > > > > @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode
> > *parent,
> > > > > int
> > > > > mode, pstrcpy(node->name, sizeof(node->name), name);
> > > > >
> > > > >      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
> > > > >      ret = 0;
> > > > >
> > > > > -err_out:
> > > > > -    qemu_mutex_unlock(&synth_mutex);
> > > > >
> > > > >      return ret;
> > > > >
> > > > >  }
> > > >
> > > > Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > > >
> > > > Greg, I suggest I'll push this onto my queue as you seem to be busy.
> > >
> > > This cleanup spans over multiple subsystems but I think it makes more
> > > sense to keep all these patches together. Let's wait for everyone to
> > > ack/review and then we'll decide how to merge the patches.
> >
> > Sure, makes sense.
> >
> >
> >
> >



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-13  5:51     ` Mahmoud Mandour
@ 2021-03-15 14:08       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 28+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-15 14:08 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: qemu-devel, Kevin Wolf, open list:CURL, Max Reitz

13.03.2021 08:51, Mahmoud Mandour wrote:
> Thank you for the fast review and I'm sorry for the silly and obvious style
> errors. Unfortunately I did not notice the section on using the checkpatch
> script in the Contributing page on the wiki before committing. But I assure
> you that such errors will not occur again.

Don't worry :)

> 
> On Fri, Mar 12, 2021 at 12:23 PM Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
> 
>     11.03.2021 06:15, Mahmoud Mandour wrote:
>      > Replaced various qemu_mutex_lock/qemu_mutex_unlock calls with
>      > lock guard macros (QEMU_LOCK_GUARD() and WITH_QEMU_LOCK_GUARD).
>      > This slightly simplifies the code by eliminating calls to
>      > qemu_mutex_unlock and eliminates goto paths.
>      >
>      > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
>      > ---
>      >   block/curl.c |  13 ++--
>      >   block/nbd.c  | 188 ++++++++++++++++++++++++---------------------------
> 
>     Better would be make two separate patches I think.
> 
>      >   2 files changed, 95 insertions(+), 106 deletions(-)
>      >
>      > diff --git a/block/curl.c b/block/curl.c
>      > index 4ff895df8f..56a217951a 100644
>      > --- a/block/curl.c
>      > +++ b/block/curl.c
>      > @@ -832,12 +832,12 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>      >       uint64_t start = acb->offset;
>      >       uint64_t end;
>      >
>      > -    qemu_mutex_lock(&s->mutex);
>      > +    QEMU_LOCK_GUARD(&s->mutex);
>      >
>      >       // In case we have the requested data already (e.g. read-ahead),
>      >       // we can just call the callback and be done.
>      >       if (curl_find_buf(s, start, acb->bytes, acb)) {
>      > -        goto out;
>      > +        return;
>      >       }
>      >
>      >       // No cache found, so let's start a new request
>      > @@ -852,7 +852,7 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>      >       if (curl_init_state(s, state) < 0) {
>      >           curl_clean_state(state);
>      >           acb->ret = -EIO;
>      > -        goto out;
>      > +        return;
>      >       }
>      >
>      >       acb->start = 0;
>      > @@ -867,7 +867,7 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>      >       if (state->buf_len && state->orig_buf == NULL) {
>      >           curl_clean_state(state);
>      >           acb->ret = -ENOMEM;
>      > -        goto out;
>      > +        return;
>      >       }
>      >       state->acb[0] = acb;
>      >
>      > @@ -880,14 +880,11 @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)
>      >           acb->ret = -EIO;
>      >
>      >           curl_clean_state(state);
>      > -        goto out;
>      > +        return;
>      >       }
>      >
>      >       /* Tell curl it needs to kick things off */
>      >       curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
>      > -
>      > -out:
>      > -    qemu_mutex_unlock(&s->mutex);
>      >   }
> 
>     This change is obvious and good.
> 
>      >
>      >   static int coroutine_fn curl_co_preadv(BlockDriverState *bs,
>      > diff --git a/block/nbd.c b/block/nbd.c
>      > index c26dc5a54f..28ba7aad61 100644
>      > --- a/block/nbd.c
>      > +++ b/block/nbd.c
>      > @@ -407,27 +407,25 @@ static void *connect_thread_func(void *opaque)
>      >           thr->sioc = NULL;
>      >       }
>      >
>      > -    qemu_mutex_lock(&thr->mutex);
>      > -
>      > -    switch (thr->state) {
>      > -    case CONNECT_THREAD_RUNNING:
>      > -        thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
>      > -        if (thr->bh_ctx) {
>      > -            aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);
>      > -
>      > -            /* play safe, don't reuse bh_ctx on further connection attempts */
>      > -            thr->bh_ctx = NULL;
>      > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
>      > +        switch (thr->state) {
>      > +        case CONNECT_THREAD_RUNNING:
>      > +            thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
>      > +            if (thr->bh_ctx) {
>      > +                aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);
> 
>     over-80 line
> 
>      > +
>      > +                /* play safe, don't reuse bh_ctx on further connection attempts */
> 
>     and here
> 
>      > +                thr->bh_ctx = NULL;
>      > +            }
>      > +            break;
>      > +        case CONNECT_THREAD_RUNNING_DETACHED:
>      > +            do_free = true;
>      > +            break;
>      > +        default:
>      > +            abort();
>      >           }
>      > -        break;
>      > -    case CONNECT_THREAD_RUNNING_DETACHED:
>      > -        do_free = true;
>      > -        break;
>      > -    default:
>      > -        abort();
>      >       }
>      >
>      > -    qemu_mutex_unlock(&thr->mutex);
>      > -
>      >       if (do_free) {
>      >           nbd_free_connect_thread(thr);
>      >       }
>      > @@ -443,36 +441,33 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>      >       BDRVNBDState *s = bs->opaque;
>      >       NBDConnectThread *thr = s->connect_thread;
>      >
>      > -    qemu_mutex_lock(&thr->mutex);
>      > -
>      > -    switch (thr->state) {
>      > -    case CONNECT_THREAD_FAIL:
>      > -    case CONNECT_THREAD_NONE:
>      > -        error_free(thr->err);
>      > -        thr->err = NULL;
>      > -        thr->state = CONNECT_THREAD_RUNNING;
>      > -        qemu_thread_create(&thread, "nbd-connect",
>      > -                           connect_thread_func, thr, QEMU_THREAD_DETACHED);
>      > -        break;
>      > -    case CONNECT_THREAD_SUCCESS:
>      > -        /* Previous attempt finally succeeded in background */
>      > -        thr->state = CONNECT_THREAD_NONE;
>      > -        s->sioc = thr->sioc;
>      > -        thr->sioc = NULL;
>      > -        yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
>      > -                               nbd_yank, bs);
>      > -        qemu_mutex_unlock(&thr->mutex);
>      > -        return 0;
>      > -    case CONNECT_THREAD_RUNNING:
>      > -        /* Already running, will wait */
>      > -        break;
>      > -    default:
>      > -        abort();
>      > -    }
>      > -
>      > -    thr->bh_ctx = qemu_get_current_aio_context();
>      > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
>      > +        switch (thr->state) {
>      > +        case CONNECT_THREAD_FAIL:
>      > +        case CONNECT_THREAD_NONE:
>      > +            error_free(thr->err);
>      > +            thr->err = NULL;
>      > +            thr->state = CONNECT_THREAD_RUNNING;
>      > +            qemu_thread_create(&thread, "nbd-connect",
>      > +                               connect_thread_func, thr, QEMU_THREAD_DETACHED);
>      > +            break;
>      > +        case CONNECT_THREAD_SUCCESS:
>      > +            /* Previous attempt finally succeeded in background */
>      > +            thr->state = CONNECT_THREAD_NONE;
>      > +            s->sioc = thr->sioc;
>      > +            thr->sioc = NULL;
>      > +            yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
>      > +                                   nbd_yank, bs);
>      > +            return 0;
>      > +        case CONNECT_THREAD_RUNNING:
>      > +            /* Already running, will wait */
>      > +            break;
>      > +        default:
>      > +            abort();
>      > +        }
>      >
>      > -    qemu_mutex_unlock(&thr->mutex);
>      > +        thr->bh_ctx = qemu_get_current_aio_context();
>      > +    }
>      >
>      >
>      >       /*
>      > @@ -486,46 +481,45 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>      >       s->wait_connect = true;
>      >       qemu_coroutine_yield();
>      >
>      > -    qemu_mutex_lock(&thr->mutex);
>      > -
>      > -    switch (thr->state) {
>      > -    case CONNECT_THREAD_SUCCESS:
>      > -    case CONNECT_THREAD_FAIL:
>      > -        thr->state = CONNECT_THREAD_NONE;
>      > -        error_propagate(errp, thr->err);
>      > -        thr->err = NULL;
>      > -        s->sioc = thr->sioc;
>      > -        thr->sioc = NULL;
>      > -        if (s->sioc) {
>      > -            yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
>      > -                                   nbd_yank, bs);
>      > -        }
>      > -        ret = (s->sioc ? 0 : -1);
>      > -        break;
>      > -    case CONNECT_THREAD_RUNNING:
>      > -    case CONNECT_THREAD_RUNNING_DETACHED:
>      > -        /*
>      > -         * Obviously, drained section wants to start. Report the attempt as
>      > -         * failed. Still connect thread is executing in background, and its
>      > -         * result may be used for next connection attempt.
>      > -         */
>      > -        ret = -1;
>      > -        error_setg(errp, "Connection attempt cancelled by other operation");
>      > -        break;
>      > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
>      > +        switch (thr->state) {
>      > +        case CONNECT_THREAD_SUCCESS:
>      > +        case CONNECT_THREAD_FAIL:
>      > +            thr->state = CONNECT_THREAD_NONE;
>      > +            error_propagate(errp, thr->err);
>      > +            thr->err = NULL;
>      > +            s->sioc = thr->sioc;
>      > +            thr->sioc = NULL;
>      > +            if (s->sioc) {
>      > +                yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
>      > +                                       nbd_yank, bs);
>      > +            }
>      > +            ret = (s->sioc ? 0 : -1);
>      > +            break;
>      > +        case CONNECT_THREAD_RUNNING:
>      > +        case CONNECT_THREAD_RUNNING_DETACHED:
>      > +            /*
>      > +             * Obviously, drained section wants to start. Report the attempt as
>      > +             * failed. Still connect thread is executing in background, and its
>      > +             * result may be used for next connection attempt.
>      > +             */
>      > +            ret = -1;
>      > +            error_setg(errp, "Connection attempt cancelled by other operation");
>      > +            break;
>      >
>      > -    case CONNECT_THREAD_NONE:
>      > -        /*
>      > -         * Impossible. We've seen this thread running. So it should be
>      > -         * running or at least give some results.
>      > -         */
>      > -        abort();
>      > +        case CONNECT_THREAD_NONE:
>      > +            /*
>      > +             * Impossible. We've seen this thread running. So it should be
>      > +             * running or at least give some results.
>      > +             */
>      > +            abort();
>      >
>      > -    default:
>      > -        abort();
>      > +        default:
>      > +            abort();
>      > +        }
>      >       }
>      >
>      > -    qemu_mutex_unlock(&thr->mutex);
>      > -
>      >       return ret;
>      >   }
>      >
>      > @@ -546,25 +540,23 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
>      >       bool wake = false;
>      >       bool do_free = false;
>      >
>      > -    qemu_mutex_lock(&thr->mutex);
>      > -
>      > -    if (thr->state == CONNECT_THREAD_RUNNING) {
>      > -        /* We can cancel only in running state, when bh is not yet scheduled */
>      > -        thr->bh_ctx = NULL;
>      > -        if (s->wait_connect) {
>      > -            s->wait_connect = false;
>      > -            wake = true;
>      > -        }
>      > -        if (detach) {
>      > -            thr->state = CONNECT_THREAD_RUNNING_DETACHED;
>      > -            s->connect_thread = NULL;
>      > +    WITH_QEMU_LOCK_GUARD(&thr->mutex) {
>      > +        if (thr->state == CONNECT_THREAD_RUNNING) {
>      > +            /* We can cancel only in running state, when bh is not yet scheduled */
> 
>     over-80 line
> 
>      > +            thr->bh_ctx = NULL;
>      > +            if (s->wait_connect) {
>      > +                s->wait_connect = false;
>      > +                wake = true;
>      > +            }
>      > +            if (detach) {
>      > +                thr->state = CONNECT_THREAD_RUNNING_DETACHED;
>      > +                s->connect_thread = NULL;
>      > +            }
>      > +        } else if (detach) {
>      > +            do_free = true;
>      >           }
>      > -    } else if (detach) {
>      > -        do_free = true;
>      >       }
>      >
>      > -    qemu_mutex_unlock(&thr->mutex);
>      > -
>      >       if (do_free) {
>      >           nbd_free_connect_thread(thr);
>      >           s->connect_thread = NULL;
>      >
> 
> 
>     For nbd.c we mostly change simple critical sections
> 
>     qemu_mutex_lock()
>     ...
>     qemu_mutex_unlock()
> 
>     into
> 
>     WITH_QEMU_LOCK_GUARD() {
>     ...
>     }
> 
>     And don't eliminate any gotos... Hmm. At first sight, increasing the nesting of the code doesn't make it more beautiful.
>     But I understand that the context-manager concept is safer than calling unlock() by hand, and with the nested block the critical section becomes more obvious. So, with the over-80 lines fixed:
> 
>     Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 
>     -- 
>     Best regards,
>     Vladimir
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-13  7:51           ` Greg Kurz
@ 2021-03-15 16:07             ` Christian Schoenebeck
  2021-03-15 20:31               ` Greg Kurz
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Schoenebeck @ 2021-03-15 16:07 UTC (permalink / raw)
  To: qemu-devel
  Cc: Greg Kurz, Mahmoud Mandour, Dr. David Alan Gilbert,
	Stefan Berger, Kevin Wolf, Max Reitz, Marc-André Lureau,
	Daniel P. Berrange, Markus Armbruster, Juan Quintela, Eric Auger,
	Michael S. Tsirkin

On Samstag, 13. März 2021 08:51:21 CET Greg Kurz wrote:
> On Sat, 13 Mar 2021 07:43:38 +0200
> 
> Mahmoud Mandour <ma.mandourr@gmail.com> wrote:
> > Thanks for the fast review. I asked on the QEMU IRC channel
> > before committing whether to put all the changes into one patch
> > or split them and was instructed that it was better to split them up.
> > But in any case I was open to both ways and you can decide
> > on the best way to go.
> 
> People only do inline replies here. Please don't top-post for the
> sake of clarity.
> 
> So, the instruction to split the patches is obviously the way to go. The
> question here is rather: will each subsystem maintainer pick up patches
> from this series, or will only one maintainer pick up all the patches after
> they have been acked by the other maintainers?

We need a call here. :)

Soft freeze is tomorrow, so will one submaintainer handle this series all 
together or should each submaintainer handle only their specific patches?

> > On Thu, Mar 11, 2021 at 1:59 PM Christian Schoenebeck <
> > 
> > qemu_oss@crudebyte.com> wrote:
> > > On Donnerstag, 11. März 2021 12:52:45 CET Greg Kurz wrote:
> > > > On Thu, 11 Mar 2021 11:49:06 +0100
> > > > 
> > > > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > > > > On Donnerstag, 11. März 2021 04:15:37 CET Mahmoud Mandour wrote:
> > > > > > Replaced a call to qemu_mutex_lock and its respective call to
> > > > > > qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> > > > > > This simplifies the code by removing the call required to unlock
> > > > > > and also eliminates goto paths.
> > > > > > 
> > > > > > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> > > > > > ---
> > > > > > 
> > > > > >  hw/9pfs/9p-synth.c | 12 ++++--------
> > > > > >  1 file changed, 4 insertions(+), 8 deletions(-)
> > > > > > 
> > > > > > diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> > > > > > index 7eb210ffa8..473ef914b0 100644
> > > > > > --- a/hw/9pfs/9p-synth.c
> > > > > > +++ b/hw/9pfs/9p-synth.c
> > > > > > @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode
> > > > > > *parent,
> > > 
> > > int
> > > 
> > > > > > mode, if (!parent) {
> > > > > > 
> > > > > >          parent = &synth_root;
> > > > > >      
> > > > > >      }
> > > > > > 
> > > > > > -    qemu_mutex_lock(&synth_mutex);
> > > > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > > > > 
> > > > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > > > >      
> > > > > >          if (!strcmp(tmp->name, name)) {
> > > > > >          
> > > > > >              ret = EEXIST;
> > > > > > 
> > > > > > -            goto err_out;
> > > > > > +            return ret;
> > > > > > 
> > > > > >          }
> > > > > >      
> > > > > >      }
> > > > > >      /* Add the name */
> > > > > > 
> > > > > > @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent,
> > > 
> > > int
> > > 
> > > > > > mode, node->attr, node->attr->inode);
> > > > > > 
> > > > > >      *result = node;
> > > > > >      ret = 0;
> > > > > > 
> > > > > > -err_out:
> > > > > > -    qemu_mutex_unlock(&synth_mutex);
> > > > > > 
> > > > > >      return ret;
> > > > > >  
> > > > > >  }
> > > > > > 
> > > > > > @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode
> > > > > > *parent,
> > > > > > int mode, parent = &synth_root;
> > > > > > 
> > > > > >      }
> > > > > > 
> > > > > > -    qemu_mutex_lock(&synth_mutex);
> > > > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > > > > 
> > > > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > > > >      
> > > > > >          if (!strcmp(tmp->name, name)) {
> > > > > >          
> > > > > >              ret = EEXIST;
> > > > > > 
> > > > > > -            goto err_out;
> > > > > > +            return ret;
> > > > > > 
> > > > > >          }
> > > > > >      
> > > > > >      }
> > > > > >      /* Add file type and remove write bits */
> > > > > > 
> > > > > > @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode
> > > 
> > > *parent,
> > > 
> > > > > > int
> > > > > > mode, pstrcpy(node->name, sizeof(node->name), name);
> > > > > > 
> > > > > >      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
> > > > > >      ret = 0;
> > > > > > 
> > > > > > -err_out:
> > > > > > -    qemu_mutex_unlock(&synth_mutex);
> > > > > > 
> > > > > >      return ret;
> > > > > >  
> > > > > >  }
> > > > > 
> > > > > Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > > > > 
> > > > > Greg, I suggest I'll push this onto my queue as you seem to be busy.
> > > > 
> > > > This cleanup spans over multiple subsystems but I think it makes more
> > > > sense to keep all these patches together. Let's wait for everyone to
> > > > ack/review and then we'll decide how to merge the patches.
> > > 
> > > Sure, makes sense.





^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 6/9] migration: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-11  9:44   ` Dr. David Alan Gilbert
@ 2021-03-15 18:01     ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 28+ messages in thread
From: Dr. David Alan Gilbert @ 2021-03-15 18:01 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: qemu-devel, Juan Quintela

* Dr. David Alan Gilbert (dgilbert@redhat.com) wrote:
> * Mahmoud Mandour (ma.mandourr@gmail.com) wrote:
> > Replaced various qemu_mutex_lock calls and their respective
> > qemu_mutex_unlock calls with QEMU_LOCK_GUARD macro. This simplifies
> > the code by eliminating the respective qemu_mutex_unlock calls.
> > 
> > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

I've queued 5 and 6 via my queue.

Dave

> > ---
> >  migration/migration.c | 6 ++----
> >  migration/ram.c       | 6 ++----
> >  2 files changed, 4 insertions(+), 8 deletions(-)
> > 
> > diff --git a/migration/migration.c b/migration/migration.c
> > index a5ddf43559..36768391b6 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -323,7 +323,7 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
> >      int ret = 0;
> >  
> >      trace_migrate_send_rp_message((int)message_type, len);
> > -    qemu_mutex_lock(&mis->rp_mutex);
> > +    QEMU_LOCK_GUARD(&mis->rp_mutex);
> >  
> >      /*
> >       * It's possible that the file handle got lost due to network
> > @@ -331,7 +331,7 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
> >       */
> >      if (!mis->to_src_file) {
> >          ret = -EIO;
> > -        goto error;
> > +        return ret;
> >      }
> >  
> >      qemu_put_be16(mis->to_src_file, (unsigned int)message_type);
> > @@ -342,8 +342,6 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
> >      /* It's possible that qemu file got error during sending */
> >      ret = qemu_file_get_error(mis->to_src_file);
> >  
> > -error:
> > -    qemu_mutex_unlock(&mis->rp_mutex);
> >      return ret;
> >  }
> >  
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 72143da0ac..52537f14ac 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -819,7 +819,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> >  {
> >      bool ret;
> >  
> > -    qemu_mutex_lock(&rs->bitmap_mutex);
> > +    QEMU_LOCK_GUARD(&rs->bitmap_mutex);
> >  
> >      /*
> >       * Clear dirty bitmap if needed.  This _must_ be called before we
> > @@ -852,7 +852,6 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> >      if (ret) {
> >          rs->migration_dirty_pages--;
> >      }
> > -    qemu_mutex_unlock(&rs->bitmap_mutex);
> >  
> >      return ret;
> >  }
> > @@ -3275,7 +3274,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
> >      int idx, thread_count;
> >  
> >      thread_count = migrate_decompress_threads();
> > -    qemu_mutex_lock(&decomp_done_lock);
> > +    QEMU_LOCK_GUARD(&decomp_done_lock);
> >      while (true) {
> >          for (idx = 0; idx < thread_count; idx++) {
> >              if (decomp_param[idx].done) {
> > @@ -3295,7 +3294,6 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
> >              qemu_cond_wait(&decomp_done_cond, &decomp_done_lock);
> >          }
> >      }
> > -    qemu_mutex_unlock(&decomp_done_lock);
> >  }
> >  
> >   /*
> > -- 
> > 2.25.1
> > 
> -- 
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock with QEMU_LOCK_GUARD
  2021-03-15 16:07             ` Christian Schoenebeck
@ 2021-03-15 20:31               ` Greg Kurz
  0 siblings, 0 replies; 28+ messages in thread
From: Greg Kurz @ 2021-03-15 20:31 UTC (permalink / raw)
  To: Christian Schoenebeck
  Cc: Kevin Wolf, Daniel P. Berrange, Juan Quintela, Markus Armbruster,
	Michael S. Tsirkin, qemu-devel, Max Reitz, Eric Auger,
	Marc-André Lureau, Mahmoud Mandour, Dr. David Alan Gilbert,
	Stefan Berger

On Mon, 15 Mar 2021 17:07:50 +0100
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:

> On Samstag, 13. März 2021 08:51:21 CET Greg Kurz wrote:
> > On Sat, 13 Mar 2021 07:43:38 +0200
> > 
> > Mahmoud Mandour <ma.mandourr@gmail.com> wrote:
> > > Thanks for the fast review. I asked on the QEMU IRC channel
> > > before committing whether to put all the changes into one patch
> > > or split them and was instructed that it was better to split them up.
> > > But in any case I was open to both ways and you can decide
> > > on the best way to go.
> > 
> > People only do inline replies here. Please don't top-post for the
> > sake of clarity.
> > 
> > So, the instruction to split the patches is obviously the way to go. The
> > question here is rather: will each subsystem maintainer pick up patches
> > from this series, or will only one maintainer pick up all the patches after
> > they have been acked by the other maintainers?
> 
> We need a call here. :)
> 
> Soft freeze is tomorrow, so will one submaintainer handle this series all 
> together or should each submaintainer handle only their specific patches?
> 

I see that some of Mahmoud's patches are in Dave Gilbert's latest PR, so I
guess you can go ahead and merge this one in the 9p tree.

> > > On Thu, Mar 11, 2021 at 1:59 PM Christian Schoenebeck <
> > > 
> > > qemu_oss@crudebyte.com> wrote:
> > > > On Donnerstag, 11. März 2021 12:52:45 CET Greg Kurz wrote:
> > > > > On Thu, 11 Mar 2021 11:49:06 +0100
> > > > > 
> > > > > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > > > > > On Donnerstag, 11. März 2021 04:15:37 CET Mahmoud Mandour wrote:
> > > > > > > Replaced a call to qemu_mutex_lock and its respective call to
> > > > > > > qemu_mutex_unlock and used QEMU_LOCK_GUARD macro in their place.
> > > > > > > This simplifies the code by removing the call required to unlock
> > > > > > > and also eliminates goto paths.
> > > > > > > 
> > > > > > > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> > > > > > > ---
> > > > > > > 
> > > > > > >  hw/9pfs/9p-synth.c | 12 ++++--------
> > > > > > >  1 file changed, 4 insertions(+), 8 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> > > > > > > index 7eb210ffa8..473ef914b0 100644
> > > > > > > --- a/hw/9pfs/9p-synth.c
> > > > > > > +++ b/hw/9pfs/9p-synth.c
> > > > > > > @@ -79,11 +79,11 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
> > > > > > >      if (!parent) {
> > > > > > >          parent = &synth_root;
> > > > > > >      }
> > > > > > > -    qemu_mutex_lock(&synth_mutex);
> > > > > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > > > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > > > > >          if (!strcmp(tmp->name, name)) {
> > > > > > >              ret = EEXIST;
> > > > > > > -            goto err_out;
> > > > > > > +            return ret;
> > > > > > >          }
> > > > > > >      }
> > > > > > >      /* Add the name */
> > > > > > > @@ -94,8 +94,6 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
> > > > > > >                        node->attr, node->attr->inode);
> > > > > > >      *result = node;
> > > > > > >      ret = 0;
> > > > > > > -err_out:
> > > > > > > -    qemu_mutex_unlock(&synth_mutex);
> > > > > > >      return ret;
> > > > > > >  }
> > > > > > > @@ -116,11 +114,11 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
> > > > > > >          parent = &synth_root;
> > > > > > >      }
> > > > > > > -    qemu_mutex_lock(&synth_mutex);
> > > > > > > +    QEMU_LOCK_GUARD(&synth_mutex);
> > > > > > >      QLIST_FOREACH(tmp, &parent->child, sibling) {
> > > > > > >          if (!strcmp(tmp->name, name)) {
> > > > > > >              ret = EEXIST;
> > > > > > > -            goto err_out;
> > > > > > > +            return ret;
> > > > > > >          }
> > > > > > >      }
> > > > > > >      /* Add file type and remove write bits */
> > > > > > > @@ -136,8 +134,6 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
> > > > > > >      pstrcpy(node->name, sizeof(node->name), name);
> > > > > > >      QLIST_INSERT_HEAD_RCU(&parent->child, node, sibling);
> > > > > > >      ret = 0;
> > > > > > > -err_out:
> > > > > > > -    qemu_mutex_unlock(&synth_mutex);
> > > > > > >      return ret;
> > > > > > >  }
> > > > > > 
> > > > > > Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > > > > > 
> > > > > > Greg, I suggest I'll push this onto my queue as you seem to be busy.
> > > > > 
> > > > > This cleanup spans over multiple subsystems but I think it makes more
> > > > > sense to keep all these patches together. Let's wait for everyone to
> > > > > ack/review and then we'll decide how to merge the patches.
> > > > 
> > > > Sure, makes sense.
^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD
  2021-03-12 10:23   ` Vladimir Sementsov-Ogievskiy
  2021-03-13  5:51     ` Mahmoud Mandour
@ 2021-03-16 13:29     ` Eric Blake
  1 sibling, 0 replies; 28+ messages in thread
From: Eric Blake @ 2021-03-16 13:29 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, Mahmoud Mandour, qemu-devel
  Cc: Kevin Wolf, open list:CURL, Max Reitz

On 3/12/21 4:23 AM, Vladimir Sementsov-Ogievskiy wrote:
> 11.03.2021 06:15, Mahmoud Mandour wrote:
>> Replaced various qemu_mutex_lock/qemu_mutex_unlock calls with
>> lock guard macros (QEMU_LOCK_GUARD() and WITH_QEMU_LOCK_GUARD).
>> This slightly simplifies the code by eliminating calls to
>> qemu_mutex_unlock and eliminates goto paths.
>>
>> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
>> ---
>>   block/curl.c |  13 ++--
>>   block/nbd.c  | 188 ++++++++++++++++++++++++---------------------------
> 
> Better would be make two separate patches I think.

Given the imminent approach of soft freeze, and the fact that this does
not add a feature, I'm debating whether this one patch warrants an
immediate pull request through my NBD tree, or if it should be split
first, or if we just defer it to 6.1 (it shouldn't affect correctness,
just a maintenance cleanup).

In case some other maintainer wants to push this series through for 6.0
(rather than waiting for each maintainer to pick up one patch at a
time), feel free to add:

Acked-by: Eric Blake <eblake@redhat.com>


> For nbd.c we mostly change simple critical sections
> 
> qemu_mutex_lock()
> ...
> qemu_mutex_unlock()
> 
> into
> 
> WITH_QEMU_LOCK_GUARD() {
> ...
> }
> 
> And they don't eliminate any gotos. Hmm. At first sight, increasing the
> nesting of the code doesn't make it more beautiful.
> But I understand that the context-manager concept is safer than calling
> unlock() by hand, and with a nested block the critical section becomes
> more obvious. So, with the over-80-character lines fixed:
> 
> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD
  2021-03-11  3:15 ` [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD Mahmoud Mandour
  2021-03-11 10:04   ` Marc-André Lureau
@ 2021-03-23  2:58   ` Stefan Berger
  1 sibling, 0 replies; 28+ messages in thread
From: Stefan Berger @ 2021-03-23  2:58 UTC (permalink / raw)
  To: Mahmoud Mandour, qemu-devel; +Cc: Stefan Berger


On 3/10/21 10:15 PM, Mahmoud Mandour wrote:
> Removed a qemu_mutex_lock() and its respective qemu_mutex_unlock()
> and used QEMU_LOCK_GUARD instead. This simplifies the code by
> eliminating gotos and removing the qemu_mutex_unlock() calls.
>
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---
>   backends/tpm/tpm_emulator.c | 8 +++-----
>   1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/backends/tpm/tpm_emulator.c b/backends/tpm/tpm_emulator.c
> index a012adc193..a3c041e402 100644
> --- a/backends/tpm/tpm_emulator.c
> +++ b/backends/tpm/tpm_emulator.c
> @@ -126,7 +126,7 @@ static int tpm_emulator_ctrlcmd(TPMEmulator *tpm, unsigned long cmd, void *msg,
>       uint8_t *buf = NULL;
>       int ret = -1;


ret is not needed anymore


>
> -    qemu_mutex_lock(&tpm->mutex);
> +    QEMU_LOCK_GUARD(&tpm->mutex);
>
>       buf = g_alloca(n);
>       memcpy(buf, &cmd_no, sizeof(cmd_no));
> @@ -134,20 +134,18 @@ static int tpm_emulator_ctrlcmd(TPMEmulator *tpm, unsigned long cmd, void *msg,
>
>       n = qemu_chr_fe_write_all(dev, buf, n);
>       if (n <= 0) {
> -        goto end;
> +        return ret;
>       }
>
>       if (msg_len_out != 0) {
>           n = qemu_chr_fe_read_all(dev, msg, msg_len_out);
>           if (n <= 0) {
> -            goto end;
> +            return ret;
>           }
>       }
>
>       ret = 0;
>
> -end:
> -    qemu_mutex_unlock(&tpm->mutex);
>       return ret;


return 0;



>   }
>


^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2021-03-23  3:00 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-11  3:15 [PATCH 0/9] Changing qemu_mutex_locks to lock guard macros Mahmoud Mandour
2021-03-11  3:15 ` [PATCH 1/9] tpm: Changed a qemu_mutex_lock to QEMU_LOCK_GUARD Mahmoud Mandour
2021-03-11 10:04   ` Marc-André Lureau
2021-03-23  2:58   ` Stefan Berger
2021-03-11  3:15 ` [PATCH 2/9] block: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
2021-03-12 10:23   ` Vladimir Sementsov-Ogievskiy
2021-03-13  5:51     ` Mahmoud Mandour
2021-03-15 14:08       ` Vladimir Sementsov-Ogievskiy
2021-03-16 13:29     ` Eric Blake
2021-03-11  3:15 ` [PATCH 3/9] char: Replaced a qemu_mutex_lock " Mahmoud Mandour
2021-03-11 10:05   ` Marc-André Lureau
2021-03-11  3:15 ` [PATCH 4/9] util: Replaced qemu_mutex_lock with QEMU_LOCK_GUARDs Mahmoud Mandour
2021-03-11  3:15 ` [PATCH 5/9] monitor: Replaced qemu_mutex_lock calls with QEMU_LOCK_GUARD Mahmoud Mandour
2021-03-11  9:50   ` Dr. David Alan Gilbert
2021-03-11  3:15 ` [PATCH 6/9] migration: " Mahmoud Mandour
2021-03-11  9:44   ` Dr. David Alan Gilbert
2021-03-15 18:01     ` Dr. David Alan Gilbert
2021-03-11  3:15 ` [PATCH 7/9] virtio-iommu: " Mahmoud Mandour
2021-03-11  3:15 ` [PATCH 8/9] hw/9pfs/9p-synth: Replaced qemu_mutex_lock " Mahmoud Mandour
2021-03-11  7:43   ` Greg Kurz
2021-03-11 10:49   ` Christian Schoenebeck
2021-03-11 11:52     ` Greg Kurz
2021-03-11 11:59       ` Christian Schoenebeck
2021-03-13  5:43         ` Mahmoud Mandour
2021-03-13  7:51           ` Greg Kurz
2021-03-15 16:07             ` Christian Schoenebeck
2021-03-15 20:31               ` Greg Kurz
2021-03-11  3:15 ` [PATCH 9/9] hw/hyperv/vmbus: replaced " Mahmoud Mandour

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.