* [PATCH v3 0/2] Replaced locks with lock guard macros
@ 2020-03-20 12:04 dnbrdsky
2020-03-20 12:04 ` [PATCH v3 1/2] lockable: fix __COUNTER__ macro to be referenced properly dnbrdsky
From: dnbrdsky @ 2020-03-20 12:04 UTC (permalink / raw)
To: dnbrdsky; +Cc: qemu-devel
From: Daniel Brodsky <dnbrdsky@gmail.com>
This patch set adds:
- a fix for the lock guard macros so they can be used multiple times
in the same function (the failure mode is sketched below)
- replacement of locks with lock guards where appropriate
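For illustration, a minimal sketch of the failure mode the fix
addresses (the function and mutex names here are hypothetical):

void f(void)
{
    /* Before the fix, each guard expanded to a variable literally
     * named qemu_lockable_auto__COUNTER__ ... */
    QEMU_LOCK_GUARD(&mutex_a);
    /* ... so a second guard in the same function redeclared that
     * variable and the code failed to compile. */
    QEMU_LOCK_GUARD(&mutex_b);
}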
v2 -> v3:
- added __COUNTER__ fix for additional lock guard macro
- added missing include header in platform.c
v1 -> v2:
- fixed whitespace churn
- added cover letter so the patch set is referenced correctly
Daniel Brodsky (2):
lockable: fix __COUNTER__ macro to be referenced properly
lockable: replaced locks with lock guard macros where appropriate
block/iscsi.c | 11 +++------
block/nfs.c | 51 +++++++++++++++++++----------------------
cpus-common.c | 13 ++++-------
hw/display/qxl.c | 43 ++++++++++++++++------------------
hw/vfio/platform.c | 5 ++--
include/qemu/lockable.h | 4 ++--
include/qemu/rcu.h | 2 +-
migration/migration.c | 3 +--
migration/multifd.c | 8 +++----
migration/ram.c | 3 +--
monitor/misc.c | 4 +---
ui/spice-display.c | 14 +++++------
util/log.c | 4 ++--
util/qemu-timer.c | 17 +++++++-------
util/rcu.c | 8 +++----
util/thread-pool.c | 3 +--
util/vfio-helpers.c | 4 ++--
17 files changed, 87 insertions(+), 110 deletions(-)
--
2.25.1
* [PATCH v3 1/2] lockable: fix __COUNTER__ macro to be referenced properly
2020-03-20 12:04 [PATCH v3 0/2] Replaced locks with lock guard macros dnbrdsky
@ 2020-03-20 12:04 ` dnbrdsky
2020-03-20 12:04 ` [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate dnbrdsky
2020-03-20 14:02 ` [PATCH v3 0/2] Replaced locks with lock guard macros no-reply
From: dnbrdsky @ 2020-03-20 12:04 UTC (permalink / raw)
To: dnbrdsky; +Cc: qemu-devel
From: Daniel Brodsky <dnbrdsky@gmail.com>
- __COUNTER__ doesn't work with ## concatenation: the operands of ##
are pasted without macro expansion, as sketched below
- replaced ## with the glue() macro so __COUNTER__ is expanded first
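A sketch of the mechanism (PASTE_DIRECT is an illustrative name;
glue()/xglue() follow QEMU's two-level paste idiom):

/* Operands of ## are pasted without being macro-expanded first: */
#define PASTE_DIRECT(a, b)  a ## b
/* PASTE_DIRECT(qemu_lockable_auto, __COUNTER__)
 *   -> qemu_lockable_auto__COUNTER__     (same token every time) */

/* The extra level of indirection lets __COUNTER__ expand first: */
#define xglue(a, b)  a ## b
#define glue(a, b)   xglue(a, b)
/* glue(qemu_lockable_auto, __COUNTER__)
 *   -> qemu_lockable_auto0, qemu_lockable_auto1, ...  (fresh name) */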
Fixes: 3284c3ddc4
Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
---
include/qemu/lockable.h | 4 ++--
include/qemu/rcu.h | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/qemu/lockable.h b/include/qemu/lockable.h
index 1aeb2cb1a6..71cda060e4 100644
--- a/include/qemu/lockable.h
+++ b/include/qemu/lockable.h
@@ -152,7 +152,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(QemuLockable, qemu_lockable_auto_unlock)
* }
*/
#define WITH_QEMU_LOCK_GUARD(x) \
- WITH_QEMU_LOCK_GUARD_((x), qemu_lockable_auto##__COUNTER__)
+ WITH_QEMU_LOCK_GUARD_((x), glue(qemu_lockable_auto, __COUNTER__))
/**
* QEMU_LOCK_GUARD - Lock an object until the end of the scope
@@ -170,7 +170,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(QemuLockable, qemu_lockable_auto_unlock)
* }
*/
#define QEMU_LOCK_GUARD(x) \
- g_autoptr(QemuLockable) qemu_lockable_auto##__COUNTER__ = \
+ g_autoptr(QemuLockable) glue(qemu_lockable_auto, __COUNTER__) = \
qemu_lockable_auto_lock(QEMU_MAKE_LOCKABLE((x)))
#endif
diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
index 9c82683e37..570aa603eb 100644
--- a/include/qemu/rcu.h
+++ b/include/qemu/rcu.h
@@ -170,7 +170,7 @@ static inline void rcu_read_auto_unlock(RCUReadAuto *r)
G_DEFINE_AUTOPTR_CLEANUP_FUNC(RCUReadAuto, rcu_read_auto_unlock)
#define WITH_RCU_READ_LOCK_GUARD() \
- WITH_RCU_READ_LOCK_GUARD_(_rcu_read_auto##__COUNTER__)
+ WITH_RCU_READ_LOCK_GUARD_(glue(_rcu_read_auto, __COUNTER__))
#define WITH_RCU_READ_LOCK_GUARD_(var) \
for (g_autoptr(RCUReadAuto) var = rcu_read_auto_lock(); \
--
2.25.1
* [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate
2020-03-20 12:04 [PATCH v3 0/2] Replaced locks with lock guard macros dnbrdsky
2020-03-20 12:04 ` [PATCH v3 1/2] lockable: fix __COUNTER__ macro to be referenced properly dnbrdsky
@ 2020-03-20 12:04 ` dnbrdsky
2020-03-20 12:33 ` Dr. David Alan Gilbert
2020-03-20 14:02 ` [PATCH v3 0/2] Replaced locks with lock guard macros no-reply
From: dnbrdsky @ 2020-03-20 12:04 UTC (permalink / raw)
To: dnbrdsky
Cc: Kevin Wolf, open list:iSCSI, Juan Quintela, Markus Armbruster,
Peter Lieven, qemu-devel, Max Reitz, Alex Williamson,
Gerd Hoffmann, Ronnie Sahlberg, Paolo Bonzini,
Dr. David Alan Gilbert
From: Daniel Brodsky <dnbrdsky@gmail.com>
- ran the regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
- replaced the result with QEMU_LOCK_GUARD if all unlocks are at the
function end
- replaced the result with WITH_QEMU_LOCK_GUARD if an unlock is not at
the end (the two forms are sketched below)
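Sketched on hypothetical code (State, bad_condition() and the helper
functions are placeholders, not QEMU APIs):

/* QEMU_LOCK_GUARD: the lock is held until the function returns */
static int f(State *s)
{
    QEMU_LOCK_GUARD(&s->mutex);
    if (bad_condition(s)) {
        return -EINVAL;         /* unlocked automatically */
    }
    do_work(s);
    return 0;                   /* unlocked here too */
}

/* WITH_QEMU_LOCK_GUARD: the lock is held for the block only */
static void g(State *s)
{
    WITH_QEMU_LOCK_GUARD(&s->mutex) {
        update_shared(s);
    }                           /* unlocked at the closing brace */
    notify(s);                  /* runs outside the critical section */
}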
Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
---
block/iscsi.c | 11 +++-------
block/nfs.c | 51 ++++++++++++++++++++-----------------------
cpus-common.c | 13 +++++------
hw/display/qxl.c | 43 +++++++++++++++++-------------------
hw/vfio/platform.c | 5 ++---
migration/migration.c | 3 +--
migration/multifd.c | 8 +++----
migration/ram.c | 3 +--
monitor/misc.c | 4 +---
ui/spice-display.c | 14 ++++++------
util/log.c | 4 ++--
util/qemu-timer.c | 17 +++++++--------
util/rcu.c | 8 +++----
util/thread-pool.c | 3 +--
util/vfio-helpers.c | 4 ++--
15 files changed, 84 insertions(+), 107 deletions(-)
diff --git a/block/iscsi.c b/block/iscsi.c
index 682abd8e09..89f8a656a4 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -1086,7 +1086,7 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
acb->task->expxferlen = acb->ioh->dxfer_len;
data.size = 0;
- qemu_mutex_lock(&iscsilun->mutex);
+ QEMU_LOCK_GUARD(&iscsilun->mutex);
if (acb->task->xfer_dir == SCSI_XFER_WRITE) {
if (acb->ioh->iovec_count == 0) {
data.data = acb->ioh->dxferp;
@@ -1102,7 +1102,6 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
iscsi_aio_ioctl_cb,
(data.size > 0) ? &data : NULL,
acb) != 0) {
- qemu_mutex_unlock(&iscsilun->mutex);
scsi_free_scsi_task(acb->task);
qemu_aio_unref(acb);
return NULL;
@@ -1122,7 +1121,6 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
}
iscsi_set_events(iscsilun);
- qemu_mutex_unlock(&iscsilun->mutex);
return &acb->common;
}
@@ -1395,20 +1393,17 @@ static void iscsi_nop_timed_event(void *opaque)
{
IscsiLun *iscsilun = opaque;
- qemu_mutex_lock(&iscsilun->mutex);
+ QEMU_LOCK_GUARD(&iscsilun->mutex);
if (iscsi_get_nops_in_flight(iscsilun->iscsi) >= MAX_NOP_FAILURES) {
error_report("iSCSI: NOP timeout. Reconnecting...");
iscsilun->request_timed_out = true;
} else if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) {
error_report("iSCSI: failed to sent NOP-Out. Disabling NOP messages.");
- goto out;
+ return;
}
timer_mod(iscsilun->nop_timer, qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + NOP_INTERVAL);
iscsi_set_events(iscsilun);
-
-out:
- qemu_mutex_unlock(&iscsilun->mutex);
}
static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
diff --git a/block/nfs.c b/block/nfs.c
index 9a6311e270..09a78aede8 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -273,15 +273,14 @@ static int coroutine_fn nfs_co_preadv(BlockDriverState *bs, uint64_t offset,
nfs_co_init_task(bs, &task);
task.iov = iov;
- qemu_mutex_lock(&client->mutex);
- if (nfs_pread_async(client->context, client->fh,
- offset, bytes, nfs_co_generic_cb, &task) != 0) {
- qemu_mutex_unlock(&client->mutex);
- return -ENOMEM;
- }
+ WITH_QEMU_LOCK_GUARD(&client->mutex) {
+ if (nfs_pread_async(client->context, client->fh,
+ offset, bytes, nfs_co_generic_cb, &task) != 0) {
+ return -ENOMEM;
+ }
- nfs_set_events(client);
- qemu_mutex_unlock(&client->mutex);
+ nfs_set_events(client);
+ }
while (!task.complete) {
qemu_coroutine_yield();
}
@@ -320,19 +319,18 @@ static int coroutine_fn nfs_co_pwritev(BlockDriverState *bs, uint64_t offset,
buf = iov->iov[0].iov_base;
}
- qemu_mutex_lock(&client->mutex);
- if (nfs_pwrite_async(client->context, client->fh,
- offset, bytes, buf,
- nfs_co_generic_cb, &task) != 0) {
- qemu_mutex_unlock(&client->mutex);
- if (my_buffer) {
- g_free(buf);
+ WITH_QEMU_LOCK_GUARD(&client->mutex) {
+ if (nfs_pwrite_async(client->context, client->fh,
+ offset, bytes, buf,
+ nfs_co_generic_cb, &task) != 0) {
+ if (my_buffer) {
+ g_free(buf);
+ }
+ return -ENOMEM;
}
- return -ENOMEM;
- }
- nfs_set_events(client);
- qemu_mutex_unlock(&client->mutex);
+ nfs_set_events(client);
+ }
while (!task.complete) {
qemu_coroutine_yield();
}
@@ -355,15 +353,14 @@ static int coroutine_fn nfs_co_flush(BlockDriverState *bs)
nfs_co_init_task(bs, &task);
- qemu_mutex_lock(&client->mutex);
- if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
- &task) != 0) {
- qemu_mutex_unlock(&client->mutex);
- return -ENOMEM;
- }
+ WITH_QEMU_LOCK_GUARD(&client->mutex) {
+ if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
+ &task) != 0) {
+ return -ENOMEM;
+ }
- nfs_set_events(client);
- qemu_mutex_unlock(&client->mutex);
+ nfs_set_events(client);
+ }
while (!task.complete) {
qemu_coroutine_yield();
}
diff --git a/cpus-common.c b/cpus-common.c
index eaf590cb38..a058f3e44c 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -22,6 +22,7 @@
#include "exec/cpu-common.h"
#include "hw/core/cpu.h"
#include "sysemu/cpus.h"
+#include "qemu/lockable.h"
static QemuMutex qemu_cpu_list_lock;
static QemuCond exclusive_cond;
@@ -71,7 +72,7 @@ static int cpu_get_free_index(void)
void cpu_list_add(CPUState *cpu)
{
- qemu_mutex_lock(&qemu_cpu_list_lock);
+ QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (cpu->cpu_index == UNASSIGNED_CPU_INDEX) {
cpu->cpu_index = cpu_get_free_index();
assert(cpu->cpu_index != UNASSIGNED_CPU_INDEX);
@@ -79,12 +80,11 @@ void cpu_list_add(CPUState *cpu)
assert(!cpu_index_auto_assigned);
}
QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
- qemu_mutex_unlock(&qemu_cpu_list_lock);
}
void cpu_list_remove(CPUState *cpu)
{
- qemu_mutex_lock(&qemu_cpu_list_lock);
+ QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (!QTAILQ_IN_USE(cpu, node)) {
/* there is nothing to undo since cpu_exec_init() hasn't been called */
qemu_mutex_unlock(&qemu_cpu_list_lock);
@@ -95,7 +95,6 @@ void cpu_list_remove(CPUState *cpu)
QTAILQ_REMOVE_RCU(&cpus, cpu, node);
cpu->cpu_index = UNASSIGNED_CPU_INDEX;
- qemu_mutex_unlock(&qemu_cpu_list_lock);
}
struct qemu_work_item {
@@ -237,7 +236,7 @@ void cpu_exec_start(CPUState *cpu)
* see cpu->running == true, and it will kick the CPU.
*/
if (unlikely(atomic_read(&pending_cpus))) {
- qemu_mutex_lock(&qemu_cpu_list_lock);
+ QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (!cpu->has_waiter) {
/* Not counted in pending_cpus, let the exclusive item
* run. Since we have the lock, just set cpu->running to true
@@ -252,7 +251,6 @@ void cpu_exec_start(CPUState *cpu)
* waiter at cpu_exec_end.
*/
}
- qemu_mutex_unlock(&qemu_cpu_list_lock);
}
}
@@ -280,7 +278,7 @@ void cpu_exec_end(CPUState *cpu)
* next cpu_exec_start.
*/
if (unlikely(atomic_read(&pending_cpus))) {
- qemu_mutex_lock(&qemu_cpu_list_lock);
+ QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (cpu->has_waiter) {
cpu->has_waiter = false;
atomic_set(&pending_cpus, pending_cpus - 1);
@@ -288,7 +286,6 @@ void cpu_exec_end(CPUState *cpu)
qemu_cond_signal(&exclusive_cond);
}
}
- qemu_mutex_unlock(&qemu_cpu_list_lock);
}
}
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index 227da69a50..d5627119ec 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -478,18 +478,19 @@ static int qxl_track_command(PCIQXLDevice *qxl, struct QXLCommandExt *ext)
cmd->u.surface_create.stride);
return 1;
}
- qemu_mutex_lock(&qxl->track_lock);
- if (cmd->type == QXL_SURFACE_CMD_CREATE) {
- qxl->guest_surfaces.cmds[id] = ext->cmd.data;
- qxl->guest_surfaces.count++;
- if (qxl->guest_surfaces.max < qxl->guest_surfaces.count)
- qxl->guest_surfaces.max = qxl->guest_surfaces.count;
+ WITH_QEMU_LOCK_GUARD(&qxl->track_lock) {
+ if (cmd->type == QXL_SURFACE_CMD_CREATE) {
+ qxl->guest_surfaces.cmds[id] = ext->cmd.data;
+ qxl->guest_surfaces.count++;
+ if (qxl->guest_surfaces.max < qxl->guest_surfaces.count) {
+ qxl->guest_surfaces.max = qxl->guest_surfaces.count;
+ }
+ }
+ if (cmd->type == QXL_SURFACE_CMD_DESTROY) {
+ qxl->guest_surfaces.cmds[id] = 0;
+ qxl->guest_surfaces.count--;
+ }
}
- if (cmd->type == QXL_SURFACE_CMD_DESTROY) {
- qxl->guest_surfaces.cmds[id] = 0;
- qxl->guest_surfaces.count--;
- }
- qemu_mutex_unlock(&qxl->track_lock);
break;
}
case QXL_CMD_CURSOR:
@@ -958,10 +959,9 @@ static void interface_update_area_complete(QXLInstance *sin,
int i;
int qxl_i;
- qemu_mutex_lock(&qxl->ssd.lock);
+ QEMU_LOCK_GUARD(&qxl->ssd.lock);
if (surface_id != 0 || !num_updated_rects ||
!qxl->render_update_cookie_num) {
- qemu_mutex_unlock(&qxl->ssd.lock);
return;
}
trace_qxl_interface_update_area_complete(qxl->id, surface_id, dirty->left,
@@ -980,7 +980,6 @@ static void interface_update_area_complete(QXLInstance *sin,
* Don't bother copying or scheduling the bh since we will flip
* the whole area anyway on completion of the update_area async call
*/
- qemu_mutex_unlock(&qxl->ssd.lock);
return;
}
qxl_i = qxl->num_dirty_rects;
@@ -991,7 +990,6 @@ static void interface_update_area_complete(QXLInstance *sin,
trace_qxl_interface_update_area_complete_schedule_bh(qxl->id,
qxl->num_dirty_rects);
qemu_bh_schedule(qxl->update_area_bh);
- qemu_mutex_unlock(&qxl->ssd.lock);
}
/* called from spice server thread context only */
@@ -1694,15 +1692,14 @@ static void ioport_write(void *opaque, hwaddr addr,
case QXL_IO_MONITORS_CONFIG_ASYNC:
async_common:
async = QXL_ASYNC;
- qemu_mutex_lock(&d->async_lock);
- if (d->current_async != QXL_UNDEFINED_IO) {
- qxl_set_guest_bug(d, "%d async started before last (%d) complete",
- io_port, d->current_async);
- qemu_mutex_unlock(&d->async_lock);
- return;
+ WITH_QEMU_LOCK_GUARD(&d->async_lock) {
+ if (d->current_async != QXL_UNDEFINED_IO) {
+ qxl_set_guest_bug(d, "%d async started before last (%d) complete",
+ io_port, d->current_async);
+ return;
+ }
+ d->current_async = orig_io_port;
}
- d->current_async = orig_io_port;
- qemu_mutex_unlock(&d->async_lock);
break;
default:
break;
diff --git a/hw/vfio/platform.c b/hw/vfio/platform.c
index 6b2952c034..ac2cefc9b1 100644
--- a/hw/vfio/platform.c
+++ b/hw/vfio/platform.c
@@ -22,6 +22,7 @@
#include "hw/vfio/vfio-platform.h"
#include "migration/vmstate.h"
#include "qemu/error-report.h"
+#include "qemu/lockable.h"
#include "qemu/main-loop.h"
#include "qemu/module.h"
#include "qemu/range.h"
@@ -216,7 +217,7 @@ static void vfio_intp_interrupt(VFIOINTp *intp)
VFIOPlatformDevice *vdev = intp->vdev;
bool delay_handling = false;
- qemu_mutex_lock(&vdev->intp_mutex);
+ QEMU_LOCK_GUARD(&vdev->intp_mutex);
if (intp->state == VFIO_IRQ_INACTIVE) {
QLIST_FOREACH(tmp, &vdev->intp_list, next) {
if (tmp->state == VFIO_IRQ_ACTIVE ||
@@ -236,7 +237,6 @@ static void vfio_intp_interrupt(VFIOINTp *intp)
QSIMPLEQ_INSERT_TAIL(&vdev->pending_intp_queue,
intp, pqnext);
ret = event_notifier_test_and_clear(intp->interrupt);
- qemu_mutex_unlock(&vdev->intp_mutex);
return;
}
@@ -266,7 +266,6 @@ static void vfio_intp_interrupt(VFIOINTp *intp)
qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
vdev->mmap_timeout);
}
- qemu_mutex_unlock(&vdev->intp_mutex);
}
/**
diff --git a/migration/migration.c b/migration/migration.c
index c1d88ace7f..2f0bd6d8b4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1652,11 +1652,10 @@ static void migrate_fd_cleanup_bh(void *opaque)
void migrate_set_error(MigrationState *s, const Error *error)
{
- qemu_mutex_lock(&s->error_mutex);
+ QEMU_LOCK_GUARD(&s->error_mutex);
if (!s->error) {
s->error = error_copy(error);
}
- qemu_mutex_unlock(&s->error_mutex);
}
void migrate_fd_error(MigrationState *s, const Error *error)
diff --git a/migration/multifd.c b/migration/multifd.c
index cb6a4a3ab8..9123c111a3 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -894,11 +894,11 @@ void multifd_recv_sync_main(void)
for (i = 0; i < migrate_multifd_channels(); i++) {
MultiFDRecvParams *p = &multifd_recv_state->params[i];
- qemu_mutex_lock(&p->mutex);
- if (multifd_recv_state->packet_num < p->packet_num) {
- multifd_recv_state->packet_num = p->packet_num;
+ WITH_QEMU_LOCK_GUARD(&p->mutex) {
+ if (multifd_recv_state->packet_num < p->packet_num) {
+ multifd_recv_state->packet_num = p->packet_num;
+ }
}
- qemu_mutex_unlock(&p->mutex);
trace_multifd_recv_sync_main_signal(p->id);
qemu_sem_post(&p->sem_sync);
}
diff --git a/migration/ram.c b/migration/ram.c
index c12cfdbe26..87a670cfbf 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1368,7 +1368,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
return NULL;
}
- qemu_mutex_lock(&rs->src_page_req_mutex);
+ QEMU_LOCK_GUARD(&rs->src_page_req_mutex);
if (!QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
struct RAMSrcPageRequest *entry =
QSIMPLEQ_FIRST(&rs->src_page_requests);
@@ -1385,7 +1385,6 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
migration_consume_urgent_request();
}
}
- qemu_mutex_unlock(&rs->src_page_req_mutex);
return block;
}
diff --git a/monitor/misc.c b/monitor/misc.c
index 6c45fa490f..9723b466cd 100644
--- a/monitor/misc.c
+++ b/monitor/misc.c
@@ -1473,7 +1473,7 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
MonFdsetFd *mon_fdset_fd;
AddfdInfo *fdinfo;
- qemu_mutex_lock(&mon_fdsets_lock);
+ QEMU_LOCK_GUARD(&mon_fdsets_lock);
if (has_fdset_id) {
QLIST_FOREACH(mon_fdset, &mon_fdsets, next) {
/* Break if match found or match impossible due to ordering by ID */
@@ -1494,7 +1494,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
if (fdset_id < 0) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "fdset-id",
"a non-negative value");
- qemu_mutex_unlock(&mon_fdsets_lock);
return NULL;
}
/* Use specified fdset ID */
@@ -1545,7 +1544,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
fdinfo->fdset_id = mon_fdset->id;
fdinfo->fd = mon_fdset_fd->fd;
- qemu_mutex_unlock(&mon_fdsets_lock);
return fdinfo;
}
diff --git a/ui/spice-display.c b/ui/spice-display.c
index 6babe24909..19632fdf6c 100644
--- a/ui/spice-display.c
+++ b/ui/spice-display.c
@@ -18,6 +18,7 @@
#include "qemu/osdep.h"
#include "ui/qemu-spice.h"
#include "qemu/timer.h"
+#include "qemu/lockable.h"
#include "qemu/main-loop.h"
#include "qemu/option.h"
#include "qemu/queue.h"
@@ -483,12 +484,12 @@ void qemu_spice_display_refresh(SimpleSpiceDisplay *ssd)
{
graphic_hw_update(ssd->dcl.con);
- qemu_mutex_lock(&ssd->lock);
- if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
- qemu_spice_create_update(ssd);
- ssd->notify++;
+ WITH_QEMU_LOCK_GUARD(&ssd->lock) {
+ if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
+ qemu_spice_create_update(ssd);
+ ssd->notify++;
+ }
}
- qemu_mutex_unlock(&ssd->lock);
trace_qemu_spice_display_refresh(ssd->qxl.id, ssd->notify);
if (ssd->notify) {
@@ -580,7 +581,7 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
SimpleSpiceDisplay *ssd = container_of(sin, SimpleSpiceDisplay, qxl);
int ret;
- qemu_mutex_lock(&ssd->lock);
+ QEMU_LOCK_GUARD(&ssd->lock);
if (ssd->ptr_define) {
*ext = ssd->ptr_define->ext;
ssd->ptr_define = NULL;
@@ -592,7 +593,6 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
} else {
ret = false;
}
- qemu_mutex_unlock(&ssd->lock);
return ret;
}
diff --git a/util/log.c b/util/log.c
index 2da6cb31dc..bdb3d712e8 100644
--- a/util/log.c
+++ b/util/log.c
@@ -25,6 +25,7 @@
#include "qemu/cutils.h"
#include "trace/control.h"
#include "qemu/thread.h"
+#include "qemu/lockable.h"
static char *logfilename;
static QemuMutex qemu_logfile_mutex;
@@ -94,7 +95,7 @@ void qemu_set_log(int log_flags)
if (qemu_loglevel && (!is_daemonized() || logfilename)) {
need_to_open_file = true;
}
- qemu_mutex_lock(&qemu_logfile_mutex);
+ QEMU_LOCK_GUARD(&qemu_logfile_mutex);
if (qemu_logfile && !need_to_open_file) {
logfile = qemu_logfile;
atomic_rcu_set(&qemu_logfile, NULL);
@@ -136,7 +137,6 @@ void qemu_set_log(int log_flags)
}
atomic_rcu_set(&qemu_logfile, logfile);
}
- qemu_mutex_unlock(&qemu_logfile_mutex);
}
void qemu_log_needs_buffers(void)
diff --git a/util/qemu-timer.c b/util/qemu-timer.c
index d548d3c1ad..b6575a2cd5 100644
--- a/util/qemu-timer.c
+++ b/util/qemu-timer.c
@@ -459,17 +459,16 @@ void timer_mod_anticipate_ns(QEMUTimer *ts, int64_t expire_time)
QEMUTimerList *timer_list = ts->timer_list;
bool rearm;
- qemu_mutex_lock(&timer_list->active_timers_lock);
- if (ts->expire_time == -1 || ts->expire_time > expire_time) {
- if (ts->expire_time != -1) {
- timer_del_locked(timer_list, ts);
+ WITH_QEMU_LOCK_GUARD(&timer_list->active_timers_lock) {
+ if (ts->expire_time == -1 || ts->expire_time > expire_time) {
+ if (ts->expire_time != -1) {
+ timer_del_locked(timer_list, ts);
+ }
+ rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
+ } else {
+ rearm = false;
}
- rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
- } else {
- rearm = false;
}
- qemu_mutex_unlock(&timer_list->active_timers_lock);
-
if (rearm) {
timerlist_rearm(timer_list);
}
diff --git a/util/rcu.c b/util/rcu.c
index 177a675619..60a37f72c3 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -31,6 +31,7 @@
#include "qemu/atomic.h"
#include "qemu/thread.h"
#include "qemu/main-loop.h"
+#include "qemu/lockable.h"
#if defined(CONFIG_MALLOC_TRIM)
#include <malloc.h>
#endif
@@ -141,14 +142,14 @@ static void wait_for_readers(void)
void synchronize_rcu(void)
{
- qemu_mutex_lock(&rcu_sync_lock);
+ QEMU_LOCK_GUARD(&rcu_sync_lock);
/* Write RCU-protected pointers before reading p_rcu_reader->ctr.
* Pairs with smp_mb_placeholder() in rcu_read_lock().
*/
smp_mb_global();
- qemu_mutex_lock(&rcu_registry_lock);
+ QEMU_LOCK_GUARD(&rcu_registry_lock);
if (!QLIST_EMPTY(&registry)) {
/* In either case, the atomic_mb_set below blocks stores that free
* old RCU-protected pointers.
@@ -169,9 +170,6 @@ void synchronize_rcu(void)
wait_for_readers();
}
-
- qemu_mutex_unlock(&rcu_registry_lock);
- qemu_mutex_unlock(&rcu_sync_lock);
}
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 4ed9b89ab2..d763cea505 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -210,7 +210,7 @@ static void thread_pool_cancel(BlockAIOCB *acb)
trace_thread_pool_cancel(elem, elem->common.opaque);
- qemu_mutex_lock(&pool->lock);
+ QEMU_LOCK_GUARD(&pool->lock);
if (elem->state == THREAD_QUEUED &&
/* No thread has yet started working on elem. we can try to "steal"
* the item from the worker if we can get a signal from the
@@ -225,7 +225,6 @@ static void thread_pool_cancel(BlockAIOCB *acb)
elem->ret = -ECANCELED;
}
- qemu_mutex_unlock(&pool->lock);
}
static AioContext *thread_pool_get_aio_context(BlockAIOCB *acb)
diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
index ddd9a96e76..b310b23003 100644
--- a/util/vfio-helpers.c
+++ b/util/vfio-helpers.c
@@ -21,6 +21,7 @@
#include "standard-headers/linux/pci_regs.h"
#include "qemu/event_notifier.h"
#include "qemu/vfio-helpers.h"
+#include "qemu/lockable.h"
#include "trace.h"
#define QEMU_VFIO_DEBUG 0
@@ -667,14 +668,13 @@ int qemu_vfio_dma_reset_temporary(QEMUVFIOState *s)
.size = QEMU_VFIO_IOVA_MAX - s->high_water_mark,
};
trace_qemu_vfio_dma_reset_temporary(s);
- qemu_mutex_lock(&s->lock);
+ QEMU_LOCK_GUARD(&s->lock);
if (ioctl(s->container, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
error_report("VFIO_UNMAP_DMA failed: %s", strerror(errno));
qemu_mutex_unlock(&s->lock);
return -errno;
}
s->high_water_mark = QEMU_VFIO_IOVA_MAX;
- qemu_mutex_unlock(&s->lock);
return 0;
}
--
2.25.1
* Re: [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate
2020-03-20 12:04 ` [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate dnbrdsky
@ 2020-03-20 12:33 ` Dr. David Alan Gilbert
2020-03-20 12:46 ` Daniel Brodsky
From: Dr. David Alan Gilbert @ 2020-03-20 12:33 UTC (permalink / raw)
To: dnbrdsky
Cc: Kevin Wolf, open list:iSCSI, Juan Quintela, Markus Armbruster,
Peter Lieven, qemu-devel, Max Reitz, Alex Williamson,
Gerd Hoffmann, Ronnie Sahlberg, Paolo Bonzini
* dnbrdsky@gmail.com (dnbrdsky@gmail.com) wrote:
> From: Daniel Brodsky <dnbrdsky@gmail.com>
>
> - ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
> - replaced result with QEMU_LOCK_GUARD if all unlocks at function end
> - replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end
>
> Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
> ---
> block/iscsi.c | 11 +++-------
> block/nfs.c | 51 ++++++++++++++++++++-----------------------
> cpus-common.c | 13 +++++------
> hw/display/qxl.c | 43 +++++++++++++++++-------------------
> hw/vfio/platform.c | 5 ++---
> migration/migration.c | 3 +--
> migration/multifd.c | 8 +++----
> migration/ram.c | 3 +--
> monitor/misc.c | 4 +---
> ui/spice-display.c | 14 ++++++------
> util/log.c | 4 ++--
> util/qemu-timer.c | 17 +++++++--------
> util/rcu.c | 8 +++----
> util/thread-pool.c | 3 +--
> util/vfio-helpers.c | 4 ++--
> 15 files changed, 84 insertions(+), 107 deletions(-)
>
<snip>
> diff --git a/migration/migration.c b/migration/migration.c
> index c1d88ace7f..2f0bd6d8b4 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1652,11 +1652,10 @@ static void migrate_fd_cleanup_bh(void *opaque)
>
> void migrate_set_error(MigrationState *s, const Error *error)
> {
> - qemu_mutex_lock(&s->error_mutex);
> + QEMU_LOCK_GUARD(&s->error_mutex);
> if (!s->error) {
> s->error = error_copy(error);
> }
> - qemu_mutex_unlock(&s->error_mutex);
> }
There are some other places in migration.c that would really benefit;
for example, migrate_send_rp_message: if you use a LOCK_GUARD
there, then you can remove the 'int ret' and the goto error.
In postcopy_pause, the locks on qemu_file_lock would work well using the
WITH_QEMU_LOCK_GUARD.
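As a generic before/after of that pattern (hypothetical names; the
real migrate_send_rp_message differs in detail):

int op_locked(State *s)
{
    int ret = 0;
    qemu_mutex_lock(&s->mutex);
    if (bad_condition(s)) {
        ret = -EIO;
        goto error;
    }
    do_work(s);
error:
    qemu_mutex_unlock(&s->mutex);
    return ret;
}

/* with a guard, 'ret' and the error label disappear */
int op_guarded(State *s)
{
    QEMU_LOCK_GUARD(&s->mutex);
    if (bad_condition(s)) {
        return -EIO;
    }
    do_work(s);
    return 0;
}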
> void migrate_fd_error(MigrationState *s, const Error *error)
> diff --git a/migration/multifd.c b/migration/multifd.c
> index cb6a4a3ab8..9123c111a3 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -894,11 +894,11 @@ void multifd_recv_sync_main(void)
> for (i = 0; i < migrate_multifd_channels(); i++) {
> MultiFDRecvParams *p = &multifd_recv_state->params[i];
>
> - qemu_mutex_lock(&p->mutex);
> - if (multifd_recv_state->packet_num < p->packet_num) {
> - multifd_recv_state->packet_num = p->packet_num;
> + WITH_QEMU_LOCK_GUARD(&p->mutex) {
> + if (multifd_recv_state->packet_num < p->packet_num) {
> + multifd_recv_state->packet_num = p->packet_num;
> + }
> }
> - qemu_mutex_unlock(&p->mutex);
> trace_multifd_recv_sync_main_signal(p->id);
> qemu_sem_post(&p->sem_sync);
> }
> diff --git a/migration/ram.c b/migration/ram.c
> index c12cfdbe26..87a670cfbf 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1368,7 +1368,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
> return NULL;
> }
>
> - qemu_mutex_lock(&rs->src_page_req_mutex);
> + QEMU_LOCK_GUARD(&rs->src_page_req_mutex);
> if (!QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
> struct RAMSrcPageRequest *entry =
> QSIMPLEQ_FIRST(&rs->src_page_requests);
> @@ -1385,7 +1385,6 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
> migration_consume_urgent_request();
> }
> }
> - qemu_mutex_unlock(&rs->src_page_req_mutex);
>
> return block;
> }
Is there a reason you didn't do the matching pair at the bottom of
ram_save_queue_pages ?
Dave
> diff --git a/monitor/misc.c b/monitor/misc.c
> index 6c45fa490f..9723b466cd 100644
> --- a/monitor/misc.c
> +++ b/monitor/misc.c
> @@ -1473,7 +1473,7 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> MonFdsetFd *mon_fdset_fd;
> AddfdInfo *fdinfo;
>
> - qemu_mutex_lock(&mon_fdsets_lock);
> + QEMU_LOCK_GUARD(&mon_fdsets_lock);
> if (has_fdset_id) {
> QLIST_FOREACH(mon_fdset, &mon_fdsets, next) {
> /* Break if match found or match impossible due to ordering by ID */
> @@ -1494,7 +1494,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> if (fdset_id < 0) {
> error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "fdset-id",
> "a non-negative value");
> - qemu_mutex_unlock(&mon_fdsets_lock);
> return NULL;
> }
> /* Use specified fdset ID */
> @@ -1545,7 +1544,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> fdinfo->fdset_id = mon_fdset->id;
> fdinfo->fd = mon_fdset_fd->fd;
>
> - qemu_mutex_unlock(&mon_fdsets_lock);
> return fdinfo;
> }
>
> diff --git a/ui/spice-display.c b/ui/spice-display.c
> index 6babe24909..19632fdf6c 100644
> --- a/ui/spice-display.c
> +++ b/ui/spice-display.c
> @@ -18,6 +18,7 @@
> #include "qemu/osdep.h"
> #include "ui/qemu-spice.h"
> #include "qemu/timer.h"
> +#include "qemu/lockable.h"
> #include "qemu/main-loop.h"
> #include "qemu/option.h"
> #include "qemu/queue.h"
> @@ -483,12 +484,12 @@ void qemu_spice_display_refresh(SimpleSpiceDisplay *ssd)
> {
> graphic_hw_update(ssd->dcl.con);
>
> - qemu_mutex_lock(&ssd->lock);
> - if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
> - qemu_spice_create_update(ssd);
> - ssd->notify++;
> + WITH_QEMU_LOCK_GUARD(&ssd->lock) {
> + if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
> + qemu_spice_create_update(ssd);
> + ssd->notify++;
> + }
> }
> - qemu_mutex_unlock(&ssd->lock);
>
> trace_qemu_spice_display_refresh(ssd->qxl.id, ssd->notify);
> if (ssd->notify) {
> @@ -580,7 +581,7 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
> SimpleSpiceDisplay *ssd = container_of(sin, SimpleSpiceDisplay, qxl);
> int ret;
>
> - qemu_mutex_lock(&ssd->lock);
> + QEMU_LOCK_GUARD(&ssd->lock);
> if (ssd->ptr_define) {
> *ext = ssd->ptr_define->ext;
> ssd->ptr_define = NULL;
> @@ -592,7 +593,6 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
> } else {
> ret = false;
> }
> - qemu_mutex_unlock(&ssd->lock);
> return ret;
> }
>
> diff --git a/util/log.c b/util/log.c
> index 2da6cb31dc..bdb3d712e8 100644
> --- a/util/log.c
> +++ b/util/log.c
> @@ -25,6 +25,7 @@
> #include "qemu/cutils.h"
> #include "trace/control.h"
> #include "qemu/thread.h"
> +#include "qemu/lockable.h"
>
> static char *logfilename;
> static QemuMutex qemu_logfile_mutex;
> @@ -94,7 +95,7 @@ void qemu_set_log(int log_flags)
> if (qemu_loglevel && (!is_daemonized() || logfilename)) {
> need_to_open_file = true;
> }
> - qemu_mutex_lock(&qemu_logfile_mutex);
> + QEMU_LOCK_GUARD(&qemu_logfile_mutex);
> if (qemu_logfile && !need_to_open_file) {
> logfile = qemu_logfile;
> atomic_rcu_set(&qemu_logfile, NULL);
> @@ -136,7 +137,6 @@ void qemu_set_log(int log_flags)
> }
> atomic_rcu_set(&qemu_logfile, logfile);
> }
> - qemu_mutex_unlock(&qemu_logfile_mutex);
> }
>
> void qemu_log_needs_buffers(void)
> diff --git a/util/qemu-timer.c b/util/qemu-timer.c
> index d548d3c1ad..b6575a2cd5 100644
> --- a/util/qemu-timer.c
> +++ b/util/qemu-timer.c
> @@ -459,17 +459,16 @@ void timer_mod_anticipate_ns(QEMUTimer *ts, int64_t expire_time)
> QEMUTimerList *timer_list = ts->timer_list;
> bool rearm;
>
> - qemu_mutex_lock(&timer_list->active_timers_lock);
> - if (ts->expire_time == -1 || ts->expire_time > expire_time) {
> - if (ts->expire_time != -1) {
> - timer_del_locked(timer_list, ts);
> + WITH_QEMU_LOCK_GUARD(&timer_list->active_timers_lock) {
> + if (ts->expire_time == -1 || ts->expire_time > expire_time) {
> + if (ts->expire_time != -1) {
> + timer_del_locked(timer_list, ts);
> + }
> + rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
> + } else {
> + rearm = false;
> }
> - rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
> - } else {
> - rearm = false;
> }
> - qemu_mutex_unlock(&timer_list->active_timers_lock);
> -
> if (rearm) {
> timerlist_rearm(timer_list);
> }
> diff --git a/util/rcu.c b/util/rcu.c
> index 177a675619..60a37f72c3 100644
> --- a/util/rcu.c
> +++ b/util/rcu.c
> @@ -31,6 +31,7 @@
> #include "qemu/atomic.h"
> #include "qemu/thread.h"
> #include "qemu/main-loop.h"
> +#include "qemu/lockable.h"
> #if defined(CONFIG_MALLOC_TRIM)
> #include <malloc.h>
> #endif
> @@ -141,14 +142,14 @@ static void wait_for_readers(void)
>
> void synchronize_rcu(void)
> {
> - qemu_mutex_lock(&rcu_sync_lock);
> + QEMU_LOCK_GUARD(&rcu_sync_lock);
>
> /* Write RCU-protected pointers before reading p_rcu_reader->ctr.
> * Pairs with smp_mb_placeholder() in rcu_read_lock().
> */
> smp_mb_global();
>
> - qemu_mutex_lock(&rcu_registry_lock);
> + QEMU_LOCK_GUARD(&rcu_registry_lock);
> if (!QLIST_EMPTY(&registry)) {
> /* In either case, the atomic_mb_set below blocks stores that free
> * old RCU-protected pointers.
> @@ -169,9 +170,6 @@ void synchronize_rcu(void)
>
> wait_for_readers();
> }
> -
> - qemu_mutex_unlock(&rcu_registry_lock);
> - qemu_mutex_unlock(&rcu_sync_lock);
> }
>
>
> diff --git a/util/thread-pool.c b/util/thread-pool.c
> index 4ed9b89ab2..d763cea505 100644
> --- a/util/thread-pool.c
> +++ b/util/thread-pool.c
> @@ -210,7 +210,7 @@ static void thread_pool_cancel(BlockAIOCB *acb)
>
> trace_thread_pool_cancel(elem, elem->common.opaque);
>
> - qemu_mutex_lock(&pool->lock);
> + QEMU_LOCK_GUARD(&pool->lock);
> if (elem->state == THREAD_QUEUED &&
> /* No thread has yet started working on elem. we can try to "steal"
> * the item from the worker if we can get a signal from the
> @@ -225,7 +225,6 @@ static void thread_pool_cancel(BlockAIOCB *acb)
> elem->ret = -ECANCELED;
> }
>
> - qemu_mutex_unlock(&pool->lock);
> }
>
> static AioContext *thread_pool_get_aio_context(BlockAIOCB *acb)
> diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
> index ddd9a96e76..b310b23003 100644
> --- a/util/vfio-helpers.c
> +++ b/util/vfio-helpers.c
> @@ -21,6 +21,7 @@
> #include "standard-headers/linux/pci_regs.h"
> #include "qemu/event_notifier.h"
> #include "qemu/vfio-helpers.h"
> +#include "qemu/lockable.h"
> #include "trace.h"
>
> #define QEMU_VFIO_DEBUG 0
> @@ -667,14 +668,13 @@ int qemu_vfio_dma_reset_temporary(QEMUVFIOState *s)
> .size = QEMU_VFIO_IOVA_MAX - s->high_water_mark,
> };
> trace_qemu_vfio_dma_reset_temporary(s);
> - qemu_mutex_lock(&s->lock);
> + QEMU_LOCK_GUARD(&s->lock);
> if (ioctl(s->container, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
> error_report("VFIO_UNMAP_DMA failed: %s", strerror(errno));
> qemu_mutex_unlock(&s->lock);
> return -errno;
> }
> s->high_water_mark = QEMU_VFIO_IOVA_MAX;
> - qemu_mutex_unlock(&s->lock);
> return 0;
> }
>
> --
> 2.25.1
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate
2020-03-20 12:33 ` Dr. David Alan Gilbert
@ 2020-03-20 12:46 ` Daniel Brodsky
2020-03-20 12:56 ` Dr. David Alan Gilbert
From: Daniel Brodsky @ 2020-03-20 12:46 UTC (permalink / raw)
To: Dr. David Alan Gilbert
Cc: Kevin Wolf, open list:iSCSI, Juan Quintela, Markus Armbruster,
Peter Lieven, open list:All patches CC here, Max Reitz,
Alex Williamson, Gerd Hoffmann, Ronnie Sahlberg, Paolo Bonzini
On Fri, Mar 20, 2020 at 5:34 AM Dr. David Alan Gilbert
<dgilbert@redhat.com> wrote:
>
> * dnbrdsky@gmail.com (dnbrdsky@gmail.com) wrote:
> > From: Daniel Brodsky <dnbrdsky@gmail.com>
> >
> > - ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
> > - replaced result with QEMU_LOCK_GUARD if all unlocks at function end
> > - replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end
> >
> > Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
> > ---
> > block/iscsi.c | 11 +++-------
> > block/nfs.c | 51 ++++++++++++++++++++-----------------------
> > cpus-common.c | 13 +++++------
> > hw/display/qxl.c | 43 +++++++++++++++++-------------------
> > hw/vfio/platform.c | 5 ++---
> > migration/migration.c | 3 +--
> > migration/multifd.c | 8 +++----
> > migration/ram.c | 3 +--
> > monitor/misc.c | 4 +---
> > ui/spice-display.c | 14 ++++++------
> > util/log.c | 4 ++--
> > util/qemu-timer.c | 17 +++++++--------
> > util/rcu.c | 8 +++----
> > util/thread-pool.c | 3 +--
> > util/vfio-helpers.c | 4 ++--
> > 15 files changed, 84 insertions(+), 107 deletions(-)
> >
>
> <snip>
>
> > diff --git a/migration/migration.c b/migration/migration.c
> > index c1d88ace7f..2f0bd6d8b4 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -1652,11 +1652,10 @@ static void migrate_fd_cleanup_bh(void *opaque)
> >
> > void migrate_set_error(MigrationState *s, const Error *error)
> > {
> > - qemu_mutex_lock(&s->error_mutex);
> > + QEMU_LOCK_GUARD(&s->error_mutex);
> > if (!s->error) {
> > s->error = error_copy(error);
> > }
> > - qemu_mutex_unlock(&s->error_mutex);
> > }
>
> There are some other places in migration.c that would really benefit;
> for example, migrate_send_rp_message: if you use a LOCK_GUARD
> there, then you can remove the 'int ret' and the goto error.
> In postcopy_pause, the locks on qemu_file_lock would work well using the
> WITH_QEMU_LOCK_GUARD.
I did a basic pass through for targets and that one didn't come up. I can add
more replacements later, but there are ~300 mutex locks that might be worth
replacing, and going through all of them manually in one go is too tedious.
> > void migrate_fd_error(MigrationState *s, const Error *error)
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index cb6a4a3ab8..9123c111a3 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -894,11 +894,11 @@ void multifd_recv_sync_main(void)
> > for (i = 0; i < migrate_multifd_channels(); i++) {
> > MultiFDRecvParams *p = &multifd_recv_state->params[i];
> >
> > - qemu_mutex_lock(&p->mutex);
> > - if (multifd_recv_state->packet_num < p->packet_num) {
> > - multifd_recv_state->packet_num = p->packet_num;
> > + WITH_QEMU_LOCK_GUARD(&p->mutex) {
> > + if (multifd_recv_state->packet_num < p->packet_num) {
> > + multifd_recv_state->packet_num = p->packet_num;
> > + }
> > }
> > - qemu_mutex_unlock(&p->mutex);
> > trace_multifd_recv_sync_main_signal(p->id);
> > qemu_sem_post(&p->sem_sync);
> > }
>
> > diff --git a/migration/ram.c b/migration/ram.c
> > index c12cfdbe26..87a670cfbf 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -1368,7 +1368,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
> > return NULL;
> > }
> >
> > - qemu_mutex_lock(&rs->src_page_req_mutex);
> > + QEMU_LOCK_GUARD(&rs->src_page_req_mutex);
> > if (!QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
> > struct RAMSrcPageRequest *entry =
> > QSIMPLEQ_FIRST(&rs->src_page_requests);
> > @@ -1385,7 +1385,6 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
> > migration_consume_urgent_request();
> > }
> > }
> > - qemu_mutex_unlock(&rs->src_page_req_mutex);
> >
> > return block;
> > }
>
> Is there a reason you didn't do the matching pair at the bottom of
> ram_save_queue_pages ?
>
> Dave
>
According to https://wiki.qemu.org/ToDo/LockGuards, trivial cases (no
conditional logic between lock and unlock) shouldn't be replaced.
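A trivial case in that sense looks like this sketch, where a guard
would not remove any early-return or goto logic:

qemu_mutex_lock(&s->mutex);
s->counter++;
qemu_mutex_unlock(&s->mutex);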
> > diff --git a/monitor/misc.c b/monitor/misc.c
> > index 6c45fa490f..9723b466cd 100644
> > --- a/monitor/misc.c
> > +++ b/monitor/misc.c
> > @@ -1473,7 +1473,7 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> > MonFdsetFd *mon_fdset_fd;
> > AddfdInfo *fdinfo;
> >
> > - qemu_mutex_lock(&mon_fdsets_lock);
> > + QEMU_LOCK_GUARD(&mon_fdsets_lock);
> > if (has_fdset_id) {
> > QLIST_FOREACH(mon_fdset, &mon_fdsets, next) {
> > /* Break if match found or match impossible due to ordering by ID */
> > @@ -1494,7 +1494,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> > if (fdset_id < 0) {
> > error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "fdset-id",
> > "a non-negative value");
> > - qemu_mutex_unlock(&mon_fdsets_lock);
> > return NULL;
> > }
> > /* Use specified fdset ID */
> > @@ -1545,7 +1544,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> > fdinfo->fdset_id = mon_fdset->id;
> > fdinfo->fd = mon_fdset_fd->fd;
> >
> > - qemu_mutex_unlock(&mon_fdsets_lock);
> > return fdinfo;
> > }
> >
> > diff --git a/ui/spice-display.c b/ui/spice-display.c
> > index 6babe24909..19632fdf6c 100644
> > --- a/ui/spice-display.c
> > +++ b/ui/spice-display.c
> > @@ -18,6 +18,7 @@
> > #include "qemu/osdep.h"
> > #include "ui/qemu-spice.h"
> > #include "qemu/timer.h"
> > +#include "qemu/lockable.h"
> > #include "qemu/main-loop.h"
> > #include "qemu/option.h"
> > #include "qemu/queue.h"
> > @@ -483,12 +484,12 @@ void qemu_spice_display_refresh(SimpleSpiceDisplay *ssd)
> > {
> > graphic_hw_update(ssd->dcl.con);
> >
> > - qemu_mutex_lock(&ssd->lock);
> > - if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
> > - qemu_spice_create_update(ssd);
> > - ssd->notify++;
> > + WITH_QEMU_LOCK_GUARD(&ssd->lock) {
> > + if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
> > + qemu_spice_create_update(ssd);
> > + ssd->notify++;
> > + }
> > }
> > - qemu_mutex_unlock(&ssd->lock);
> >
> > trace_qemu_spice_display_refresh(ssd->qxl.id, ssd->notify);
> > if (ssd->notify) {
> > @@ -580,7 +581,7 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
> > SimpleSpiceDisplay *ssd = container_of(sin, SimpleSpiceDisplay, qxl);
> > int ret;
> >
> > - qemu_mutex_lock(&ssd->lock);
> > + QEMU_LOCK_GUARD(&ssd->lock);
> > if (ssd->ptr_define) {
> > *ext = ssd->ptr_define->ext;
> > ssd->ptr_define = NULL;
> > @@ -592,7 +593,6 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
> > } else {
> > ret = false;
> > }
> > - qemu_mutex_unlock(&ssd->lock);
> > return ret;
> > }
> >
> > diff --git a/util/log.c b/util/log.c
> > index 2da6cb31dc..bdb3d712e8 100644
> > --- a/util/log.c
> > +++ b/util/log.c
> > @@ -25,6 +25,7 @@
> > #include "qemu/cutils.h"
> > #include "trace/control.h"
> > #include "qemu/thread.h"
> > +#include "qemu/lockable.h"
> >
> > static char *logfilename;
> > static QemuMutex qemu_logfile_mutex;
> > @@ -94,7 +95,7 @@ void qemu_set_log(int log_flags)
> > if (qemu_loglevel && (!is_daemonized() || logfilename)) {
> > need_to_open_file = true;
> > }
> > - qemu_mutex_lock(&qemu_logfile_mutex);
> > + QEMU_LOCK_GUARD(&qemu_logfile_mutex);
> > if (qemu_logfile && !need_to_open_file) {
> > logfile = qemu_logfile;
> > atomic_rcu_set(&qemu_logfile, NULL);
> > @@ -136,7 +137,6 @@ void qemu_set_log(int log_flags)
> > }
> > atomic_rcu_set(&qemu_logfile, logfile);
> > }
> > - qemu_mutex_unlock(&qemu_logfile_mutex);
> > }
> >
> > void qemu_log_needs_buffers(void)
> > diff --git a/util/qemu-timer.c b/util/qemu-timer.c
> > index d548d3c1ad..b6575a2cd5 100644
> > --- a/util/qemu-timer.c
> > +++ b/util/qemu-timer.c
> > @@ -459,17 +459,16 @@ void timer_mod_anticipate_ns(QEMUTimer *ts, int64_t expire_time)
> > QEMUTimerList *timer_list = ts->timer_list;
> > bool rearm;
> >
> > - qemu_mutex_lock(&timer_list->active_timers_lock);
> > - if (ts->expire_time == -1 || ts->expire_time > expire_time) {
> > - if (ts->expire_time != -1) {
> > - timer_del_locked(timer_list, ts);
> > + WITH_QEMU_LOCK_GUARD(&timer_list->active_timers_lock) {
> > + if (ts->expire_time == -1 || ts->expire_time > expire_time) {
> > + if (ts->expire_time != -1) {
> > + timer_del_locked(timer_list, ts);
> > + }
> > + rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
> > + } else {
> > + rearm = false;
> > }
> > - rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
> > - } else {
> > - rearm = false;
> > }
> > - qemu_mutex_unlock(&timer_list->active_timers_lock);
> > -
> > if (rearm) {
> > timerlist_rearm(timer_list);
> > }
> > diff --git a/util/rcu.c b/util/rcu.c
> > index 177a675619..60a37f72c3 100644
> > --- a/util/rcu.c
> > +++ b/util/rcu.c
> > @@ -31,6 +31,7 @@
> > #include "qemu/atomic.h"
> > #include "qemu/thread.h"
> > #include "qemu/main-loop.h"
> > +#include "qemu/lockable.h"
> > #if defined(CONFIG_MALLOC_TRIM)
> > #include <malloc.h>
> > #endif
> > @@ -141,14 +142,14 @@ static void wait_for_readers(void)
> >
> > void synchronize_rcu(void)
> > {
> > - qemu_mutex_lock(&rcu_sync_lock);
> > + QEMU_LOCK_GUARD(&rcu_sync_lock);
> >
> > /* Write RCU-protected pointers before reading p_rcu_reader->ctr.
> > * Pairs with smp_mb_placeholder() in rcu_read_lock().
> > */
> > smp_mb_global();
> >
> > - qemu_mutex_lock(&rcu_registry_lock);
> > + QEMU_LOCK_GUARD(&rcu_registry_lock);
> > if (!QLIST_EMPTY(&registry)) {
> > /* In either case, the atomic_mb_set below blocks stores that free
> > * old RCU-protected pointers.
> > @@ -169,9 +170,6 @@ void synchronize_rcu(void)
> >
> > wait_for_readers();
> > }
> > -
> > - qemu_mutex_unlock(&rcu_registry_lock);
> > - qemu_mutex_unlock(&rcu_sync_lock);
> > }
> >
> >
> > diff --git a/util/thread-pool.c b/util/thread-pool.c
> > index 4ed9b89ab2..d763cea505 100644
> > --- a/util/thread-pool.c
> > +++ b/util/thread-pool.c
> > @@ -210,7 +210,7 @@ static void thread_pool_cancel(BlockAIOCB *acb)
> >
> > trace_thread_pool_cancel(elem, elem->common.opaque);
> >
> > - qemu_mutex_lock(&pool->lock);
> > + QEMU_LOCK_GUARD(&pool->lock);
> > if (elem->state == THREAD_QUEUED &&
> > /* No thread has yet started working on elem. we can try to "steal"
> > * the item from the worker if we can get a signal from the
> > @@ -225,7 +225,6 @@ static void thread_pool_cancel(BlockAIOCB *acb)
> > elem->ret = -ECANCELED;
> > }
> >
> > - qemu_mutex_unlock(&pool->lock);
> > }
> >
> > static AioContext *thread_pool_get_aio_context(BlockAIOCB *acb)
> > diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
> > index ddd9a96e76..b310b23003 100644
> > --- a/util/vfio-helpers.c
> > +++ b/util/vfio-helpers.c
> > @@ -21,6 +21,7 @@
> > #include "standard-headers/linux/pci_regs.h"
> > #include "qemu/event_notifier.h"
> > #include "qemu/vfio-helpers.h"
> > +#include "qemu/lockable.h"
> > #include "trace.h"
> >
> > #define QEMU_VFIO_DEBUG 0
> > @@ -667,14 +668,13 @@ int qemu_vfio_dma_reset_temporary(QEMUVFIOState *s)
> > .size = QEMU_VFIO_IOVA_MAX - s->high_water_mark,
> > };
> > trace_qemu_vfio_dma_reset_temporary(s);
> > - qemu_mutex_lock(&s->lock);
> > + QEMU_LOCK_GUARD(&s->lock);
> > if (ioctl(s->container, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
> > error_report("VFIO_UNMAP_DMA failed: %s", strerror(errno));
> > qemu_mutex_unlock(&s->lock);
> > return -errno;
> > }
> > s->high_water_mark = QEMU_VFIO_IOVA_MAX;
> > - qemu_mutex_unlock(&s->lock);
> > return 0;
> > }
> >
> > --
> > 2.25.1
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
Daniel
* Re: [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate
2020-03-20 12:46 ` Daniel Brodsky
@ 2020-03-20 12:56 ` Dr. David Alan Gilbert
2020-03-20 13:51 ` Paolo Bonzini
From: Dr. David Alan Gilbert @ 2020-03-20 12:56 UTC (permalink / raw)
To: Daniel Brodsky
Cc: Kevin Wolf, open list:iSCSI, Juan Quintela, Markus Armbruster,
Peter Lieven, open list:All patches CC here, Max Reitz,
Alex Williamson, Gerd Hoffmann, Ronnie Sahlberg, Paolo Bonzini
* Daniel Brodsky (dnbrdsky@gmail.com) wrote:
> On Fri, Mar 20, 2020 at 5:34 AM Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
> >
> > * dnbrdsky@gmail.com (dnbrdsky@gmail.com) wrote:
> > > From: Daniel Brodsky <dnbrdsky@gmail.com>
> > >
> > > - ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
> > > - replaced result with QEMU_LOCK_GUARD if all unlocks at function end
> > > - replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end
> > >
> > > Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
> > > ---
> > > block/iscsi.c | 11 +++-------
> > > block/nfs.c | 51 ++++++++++++++++++++-----------------------
> > > cpus-common.c | 13 +++++------
> > > hw/display/qxl.c | 43 +++++++++++++++++-------------------
> > > hw/vfio/platform.c | 5 ++---
> > > migration/migration.c | 3 +--
> > > migration/multifd.c | 8 +++----
> > > migration/ram.c | 3 +--
> > > monitor/misc.c | 4 +---
> > > ui/spice-display.c | 14 ++++++------
> > > util/log.c | 4 ++--
> > > util/qemu-timer.c | 17 +++++++--------
> > > util/rcu.c | 8 +++----
> > > util/thread-pool.c | 3 +--
> > > util/vfio-helpers.c | 4 ++--
> > > 15 files changed, 84 insertions(+), 107 deletions(-)
> > >
> >
> > <snip>
> >
> > > diff --git a/migration/migration.c b/migration/migration.c
> > > index c1d88ace7f..2f0bd6d8b4 100644
> > > --- a/migration/migration.c
> > > +++ b/migration/migration.c
> > > @@ -1652,11 +1652,10 @@ static void migrate_fd_cleanup_bh(void *opaque)
> > >
> > > void migrate_set_error(MigrationState *s, const Error *error)
> > > {
> > > - qemu_mutex_lock(&s->error_mutex);
> > > + QEMU_LOCK_GUARD(&s->error_mutex);
> > > if (!s->error) {
> > > s->error = error_copy(error);
> > > }
> > > - qemu_mutex_unlock(&s->error_mutex);
> > > }
> >
> > There are some other places in migration.c that would really benefit;
> > for example, migrate_send_rp_message: if you use a LOCK_GUARD
> > there, then you can remove the 'int ret' and the goto error.
> > In postcopy_pause, the locks on qemu_file_lock would work well using the
> > WITH_QEMU_LOCK_GUARD.
>
> I did a basic pass through for targets and that one didn't come up. I can add
> more replacements later, but there are ~300 mutex locks that might be worth
> replacing, and going through all of them manually in one go is too tedious.
Sure; the send_rp_message case is quite a nice example of where the
guard makes the code simpler.
> > > void migrate_fd_error(MigrationState *s, const Error *error)
> > > diff --git a/migration/multifd.c b/migration/multifd.c
> > > index cb6a4a3ab8..9123c111a3 100644
> > > --- a/migration/multifd.c
> > > +++ b/migration/multifd.c
> > > @@ -894,11 +894,11 @@ void multifd_recv_sync_main(void)
> > > for (i = 0; i < migrate_multifd_channels(); i++) {
> > > MultiFDRecvParams *p = &multifd_recv_state->params[i];
> > >
> > > - qemu_mutex_lock(&p->mutex);
> > > - if (multifd_recv_state->packet_num < p->packet_num) {
> > > - multifd_recv_state->packet_num = p->packet_num;
> > > + WITH_QEMU_LOCK_GUARD(&p->mutex) {
> > > + if (multifd_recv_state->packet_num < p->packet_num) {
> > > + multifd_recv_state->packet_num = p->packet_num;
> > > + }
> > > }
> > > - qemu_mutex_unlock(&p->mutex);
> > > trace_multifd_recv_sync_main_signal(p->id);
> > > qemu_sem_post(&p->sem_sync);
> > > }
> >
> > > diff --git a/migration/ram.c b/migration/ram.c
> > > index c12cfdbe26..87a670cfbf 100644
> > > --- a/migration/ram.c
> > > +++ b/migration/ram.c
> > > @@ -1368,7 +1368,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
> > > return NULL;
> > > }
> > >
> > > - qemu_mutex_lock(&rs->src_page_req_mutex);
> > > + QEMU_LOCK_GUARD(&rs->src_page_req_mutex);
> > > if (!QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
> > > struct RAMSrcPageRequest *entry =
> > > QSIMPLEQ_FIRST(&rs->src_page_requests);
> > > @@ -1385,7 +1385,6 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
> > > migration_consume_urgent_request();
> > > }
> > > }
> > > - qemu_mutex_unlock(&rs->src_page_req_mutex);
> > >
> > > return block;
> > > }
> >
> > Is there a reason you didn't do the matching pair at the bottom of
> > ram_save_queue_pages ?
> >
> > Dave
> >
>
> According to https://wiki.qemu.org/ToDo/LockGuards, trivial cases (no
> conditional logic between lock and unlock) shouldn't be replaced.
OK
So for what you've already got there,
For migration:
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > > diff --git a/monitor/misc.c b/monitor/misc.c
> > > index 6c45fa490f..9723b466cd 100644
> > > --- a/monitor/misc.c
> > > +++ b/monitor/misc.c
> > > @@ -1473,7 +1473,7 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> > > MonFdsetFd *mon_fdset_fd;
> > > AddfdInfo *fdinfo;
> > >
> > > - qemu_mutex_lock(&mon_fdsets_lock);
> > > + QEMU_LOCK_GUARD(&mon_fdsets_lock);
> > > if (has_fdset_id) {
> > > QLIST_FOREACH(mon_fdset, &mon_fdsets, next) {
> > > /* Break if match found or match impossible due to ordering by ID */
> > > @@ -1494,7 +1494,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> > > if (fdset_id < 0) {
> > > error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "fdset-id",
> > > "a non-negative value");
> > > - qemu_mutex_unlock(&mon_fdsets_lock);
> > > return NULL;
> > > }
> > > /* Use specified fdset ID */
> > > @@ -1545,7 +1544,6 @@ AddfdInfo *monitor_fdset_add_fd(int fd, bool has_fdset_id, int64_t fdset_id,
> > > fdinfo->fdset_id = mon_fdset->id;
> > > fdinfo->fd = mon_fdset_fd->fd;
> > >
> > > - qemu_mutex_unlock(&mon_fdsets_lock);
> > > return fdinfo;
> > > }
> > >
> > > diff --git a/ui/spice-display.c b/ui/spice-display.c
> > > index 6babe24909..19632fdf6c 100644
> > > --- a/ui/spice-display.c
> > > +++ b/ui/spice-display.c
> > > @@ -18,6 +18,7 @@
> > > #include "qemu/osdep.h"
> > > #include "ui/qemu-spice.h"
> > > #include "qemu/timer.h"
> > > +#include "qemu/lockable.h"
> > > #include "qemu/main-loop.h"
> > > #include "qemu/option.h"
> > > #include "qemu/queue.h"
> > > @@ -483,12 +484,12 @@ void qemu_spice_display_refresh(SimpleSpiceDisplay *ssd)
> > > {
> > > graphic_hw_update(ssd->dcl.con);
> > >
> > > - qemu_mutex_lock(&ssd->lock);
> > > - if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
> > > - qemu_spice_create_update(ssd);
> > > - ssd->notify++;
> > > + WITH_QEMU_LOCK_GUARD(&ssd->lock) {
> > > + if (QTAILQ_EMPTY(&ssd->updates) && ssd->ds) {
> > > + qemu_spice_create_update(ssd);
> > > + ssd->notify++;
> > > + }
> > > }
> > > - qemu_mutex_unlock(&ssd->lock);
> > >
> > > trace_qemu_spice_display_refresh(ssd->qxl.id, ssd->notify);
> > > if (ssd->notify) {
> > > @@ -580,7 +581,7 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
> > > SimpleSpiceDisplay *ssd = container_of(sin, SimpleSpiceDisplay, qxl);
> > > int ret;
> > >
> > > - qemu_mutex_lock(&ssd->lock);
> > > + QEMU_LOCK_GUARD(&ssd->lock);
> > > if (ssd->ptr_define) {
> > > *ext = ssd->ptr_define->ext;
> > > ssd->ptr_define = NULL;
> > > @@ -592,7 +593,6 @@ static int interface_get_cursor_command(QXLInstance *sin, QXLCommandExt *ext)
> > > } else {
> > > ret = false;
> > > }
> > > - qemu_mutex_unlock(&ssd->lock);
> > > return ret;
> > > }
> > >
> > > diff --git a/util/log.c b/util/log.c
> > > index 2da6cb31dc..bdb3d712e8 100644
> > > --- a/util/log.c
> > > +++ b/util/log.c
> > > @@ -25,6 +25,7 @@
> > > #include "qemu/cutils.h"
> > > #include "trace/control.h"
> > > #include "qemu/thread.h"
> > > +#include "qemu/lockable.h"
> > >
> > > static char *logfilename;
> > > static QemuMutex qemu_logfile_mutex;
> > > @@ -94,7 +95,7 @@ void qemu_set_log(int log_flags)
> > > if (qemu_loglevel && (!is_daemonized() || logfilename)) {
> > > need_to_open_file = true;
> > > }
> > > - qemu_mutex_lock(&qemu_logfile_mutex);
> > > + QEMU_LOCK_GUARD(&qemu_logfile_mutex);
> > > if (qemu_logfile && !need_to_open_file) {
> > > logfile = qemu_logfile;
> > > atomic_rcu_set(&qemu_logfile, NULL);
> > > @@ -136,7 +137,6 @@ void qemu_set_log(int log_flags)
> > > }
> > > atomic_rcu_set(&qemu_logfile, logfile);
> > > }
> > > - qemu_mutex_unlock(&qemu_logfile_mutex);
> > > }
> > >
> > > void qemu_log_needs_buffers(void)
> > > diff --git a/util/qemu-timer.c b/util/qemu-timer.c
> > > index d548d3c1ad..b6575a2cd5 100644
> > > --- a/util/qemu-timer.c
> > > +++ b/util/qemu-timer.c
> > > @@ -459,17 +459,16 @@ void timer_mod_anticipate_ns(QEMUTimer *ts, int64_t expire_time)
> > > QEMUTimerList *timer_list = ts->timer_list;
> > > bool rearm;
> > >
> > > - qemu_mutex_lock(&timer_list->active_timers_lock);
> > > - if (ts->expire_time == -1 || ts->expire_time > expire_time) {
> > > - if (ts->expire_time != -1) {
> > > - timer_del_locked(timer_list, ts);
> > > + WITH_QEMU_LOCK_GUARD(&timer_list->active_timers_lock) {
> > > + if (ts->expire_time == -1 || ts->expire_time > expire_time) {
> > > + if (ts->expire_time != -1) {
> > > + timer_del_locked(timer_list, ts);
> > > + }
> > > + rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
> > > + } else {
> > > + rearm = false;
> > > }
> > > - rearm = timer_mod_ns_locked(timer_list, ts, expire_time);
> > > - } else {
> > > - rearm = false;
> > > }
> > > - qemu_mutex_unlock(&timer_list->active_timers_lock);
> > > -
> > > if (rearm) {
> > > timerlist_rearm(timer_list);
> > > }
> > > diff --git a/util/rcu.c b/util/rcu.c
> > > index 177a675619..60a37f72c3 100644
> > > --- a/util/rcu.c
> > > +++ b/util/rcu.c
> > > @@ -31,6 +31,7 @@
> > > #include "qemu/atomic.h"
> > > #include "qemu/thread.h"
> > > #include "qemu/main-loop.h"
> > > +#include "qemu/lockable.h"
> > > #if defined(CONFIG_MALLOC_TRIM)
> > > #include <malloc.h>
> > > #endif
> > > @@ -141,14 +142,14 @@ static void wait_for_readers(void)
> > >
> > > void synchronize_rcu(void)
> > > {
> > > - qemu_mutex_lock(&rcu_sync_lock);
> > > + QEMU_LOCK_GUARD(&rcu_sync_lock);
> > >
> > > /* Write RCU-protected pointers before reading p_rcu_reader->ctr.
> > > * Pairs with smp_mb_placeholder() in rcu_read_lock().
> > > */
> > > smp_mb_global();
> > >
> > > - qemu_mutex_lock(&rcu_registry_lock);
> > > + QEMU_LOCK_GUARD(&rcu_registry_lock);
> > > if (!QLIST_EMPTY(&registry)) {
> > > /* In either case, the atomic_mb_set below blocks stores that free
> > > * old RCU-protected pointers.
> > > @@ -169,9 +170,6 @@ void synchronize_rcu(void)
> > >
> > > wait_for_readers();
> > > }
> > > -
> > > - qemu_mutex_unlock(&rcu_registry_lock);
> > > - qemu_mutex_unlock(&rcu_sync_lock);
> > > }
> > >
> > >
> > > diff --git a/util/thread-pool.c b/util/thread-pool.c
> > > index 4ed9b89ab2..d763cea505 100644
> > > --- a/util/thread-pool.c
> > > +++ b/util/thread-pool.c
> > > @@ -210,7 +210,7 @@ static void thread_pool_cancel(BlockAIOCB *acb)
> > >
> > > trace_thread_pool_cancel(elem, elem->common.opaque);
> > >
> > > - qemu_mutex_lock(&pool->lock);
> > > + QEMU_LOCK_GUARD(&pool->lock);
> > > if (elem->state == THREAD_QUEUED &&
> > > /* No thread has yet started working on elem. we can try to "steal"
> > > * the item from the worker if we can get a signal from the
> > > @@ -225,7 +225,6 @@ static void thread_pool_cancel(BlockAIOCB *acb)
> > > elem->ret = -ECANCELED;
> > > }
> > >
> > > - qemu_mutex_unlock(&pool->lock);
> > > }
> > >
> > > static AioContext *thread_pool_get_aio_context(BlockAIOCB *acb)
> > > diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
> > > index ddd9a96e76..b310b23003 100644
> > > --- a/util/vfio-helpers.c
> > > +++ b/util/vfio-helpers.c
> > > @@ -21,6 +21,7 @@
> > > #include "standard-headers/linux/pci_regs.h"
> > > #include "qemu/event_notifier.h"
> > > #include "qemu/vfio-helpers.h"
> > > +#include "qemu/lockable.h"
> > > #include "trace.h"
> > >
> > > #define QEMU_VFIO_DEBUG 0
> > > @@ -667,14 +668,12 @@ int qemu_vfio_dma_reset_temporary(QEMUVFIOState *s)
> > > .size = QEMU_VFIO_IOVA_MAX - s->high_water_mark,
> > > };
> > > trace_qemu_vfio_dma_reset_temporary(s);
> > > - qemu_mutex_lock(&s->lock);
> > > + QEMU_LOCK_GUARD(&s->lock);
> > > if (ioctl(s->container, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
> > > error_report("VFIO_UNMAP_DMA failed: %s", strerror(errno));
> > > - qemu_mutex_unlock(&s->lock);
> > > return -errno;
> > > }
> > > s->high_water_mark = QEMU_VFIO_IOVA_MAX;
> > > - qemu_mutex_unlock(&s->lock);
> > > return 0;
> > > }
> > >
> > > --
> > > 2.25.1
> > >
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
>
> Daniel
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate
2020-03-20 12:56 ` Dr. David Alan Gilbert
@ 2020-03-20 13:51 ` Paolo Bonzini
2020-03-20 13:55 ` Dr. David Alan Gilbert
0 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2020-03-20 13:51 UTC (permalink / raw)
To: Dr. David Alan Gilbert, Daniel Brodsky
Cc: Kevin Wolf, open list:iSCSI, Juan Quintela, Markus Armbruster,
Peter Lieven, open list:All patches CC here, Max Reitz,
Alex Williamson, Gerd Hoffmann, Ronnie Sahlberg
On 20/03/20 13:56, Dr. David Alan Gilbert wrote:
>> According to https://wiki.qemu.org/ToDo/LockGuards cases that are trivial (no
>> conditional logic) shouldn't be replaced.
> OK
I don't think that has to be either-or. Trivial lock/unlock sequences
are not the first ones that should be converted, but there's an
advantage in having a single patch that converts all possible uses of a
lock. Trivial sequences certainly do not belong in a bigger patch like
this, as they would make the patch even bigger.
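To make the distinction concrete, a minimal sketch (the State struct,
its fields, and the two helpers are invented for illustration, not
taken from the series; assumes "qemu/thread.h", "qemu/lockable.h" and
<errno.h>):

    typedef struct {
        QemuMutex lock;
        bool ready;
        int counter;
    } State;

    /* Trivial: nothing between lock and unlock can fail or return
     * early, so converting this buys little. */
    static void counter_inc(State *s)
    {
        qemu_mutex_lock(&s->lock);
        s->counter++;
        qemu_mutex_unlock(&s->lock);
    }

    /* Conditional: the guard removes the easy-to-forget unlock on
     * the early-return path, which is where the macros pay off. */
    static int counter_inc_checked(State *s)
    {
        QEMU_LOCK_GUARD(&s->lock);
        if (!s->ready) {
            return -EBUSY; /* unlocked automatically on return */
        }
        s->counter++;
        return 0;
    }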
> So for what you've already got there,
>
> For migration:
> Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
Can you just extract that and queue it yourself (for 5.1 probably)?
Paolo
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate
2020-03-20 13:51 ` Paolo Bonzini
@ 2020-03-20 13:55 ` Dr. David Alan Gilbert
0 siblings, 0 replies; 9+ messages in thread
From: Dr. David Alan Gilbert @ 2020-03-20 13:55 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Kevin Wolf, open list:iSCSI, Juan Quintela, Markus Armbruster,
Peter Lieven, open list:All patches CC here, Max Reitz,
Daniel Brodsky, Alex Williamson, Gerd Hoffmann, Ronnie Sahlberg
* Paolo Bonzini (pbonzini@redhat.com) wrote:
> On 20/03/20 13:56, Dr. David Alan Gilbert wrote:
> >> According to https://wiki.qemu.org/ToDo/LockGuards cases that are trivial (no
> >> conditional logic) shouldn't be replaced.
> > OK
>
> I don't think that has to be either-or. Trivial lock/unlock sequences
> are not the first ones that should be converted, but there's an
> advantage in having a single patch that converts all possible uses of a
> lock. Trivial sequences certainly do not belong in a bigger patch like
> this, as they would make the patch even bigger.
>
> > So for what you've already got there,
> >
> > For migration:
> > Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >
>
> Can you just extract that and queue it yourself (for 5.1 probably)?
I can, although it would be easier if Daniel did that; there's no rush
given it's for 5.1.
> Paolo
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v3 0/2] Replaced locks with lock guard macros
2020-03-20 12:04 [PATCH v3 0/2] Replaced locks with lock guard macros dnbrdsky
2020-03-20 12:04 ` [PATCH v3 1/2] lockable: fix __COUNTER__ macro to be referenced properly dnbrdsky
2020-03-20 12:04 ` [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate dnbrdsky
@ 2020-03-20 14:02 ` no-reply
2 siblings, 0 replies; 9+ messages in thread
From: no-reply @ 2020-03-20 14:02 UTC (permalink / raw)
To: dnbrdsky; +Cc: dnbrdsky, qemu-devel
Patchew URL: https://patchew.org/QEMU/20200320120456.1931482-1-dnbrdsky@gmail.com/
Hi,
This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN ===
#!/bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===
CC util/mmap-alloc.o
CC util/oslib-posix.o
CC util/qemu-openpty.o
/tmp/qemu-test/src/util/thread-pool.c:213:5: error: unused variable 'qemu_lockable_auto1' [-Werror,-Wunused-variable]
QEMU_LOCK_GUARD(&pool->lock);
^
/tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
---
qemu_lockable_auto1
^
1 error generated.
make: *** [/tmp/qemu-test/src/rules.mak:69: util/thread-pool.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
File "./tests/docker/docker.py", line 664, in <module>
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=14648c8c8e3346158dd2900da22cf682', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=x86_64-softmmu', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-khpgiqsl/src/docker-src.2020-03-20-09.58.50.21558:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-debug']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=14648c8c8e3346158dd2900da22cf682
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-khpgiqsl/src'
make: *** [docker-run-test-debug@fedora] Error 2
real 3m43.310s
user 0m8.552s
The full log is available at
http://patchew.org/logs/20200320120456.1931482-1-dnbrdsky@gmail.com/testing.asan/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
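A note on the failure itself: the variable generated by
QEMU_LOCK_GUARD is never read, so clang's -Wunused-variable fires
whenever the function body does not otherwise reference it. A minimal
sketch of one way out, assuming glib's G_GNUC_UNUSED attribute macro
is available (an illustration, not the patch as posted):

    /* include/qemu/lockable.h -- annotate the generated guard so an
     * otherwise-unreferenced variable no longer trips the warning */
    #define QEMU_LOCK_GUARD(x) \
        g_autoptr(QemuLockable) \
        glue(qemu_lockable_auto, __COUNTER__) G_GNUC_UNUSED = \
            qemu_lockable_auto_lock(QEMU_MAKE_LOCKABLE((x)))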
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2020-03-20 14:04 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-20 12:04 [PATCH v3 0/2] Replaced locks with lock guard macros dnbrdsky
2020-03-20 12:04 ` [PATCH v3 1/2] lockable: fix __COUNTER__ macro to be referenced properly dnbrdsky
2020-03-20 12:04 ` [PATCH v3 2/2] lockable: replaced locks with lock guard macros where appropriate dnbrdsky
2020-03-20 12:33 ` Dr. David Alan Gilbert
2020-03-20 12:46 ` Daniel Brodsky
2020-03-20 12:56 ` Dr. David Alan Gilbert
2020-03-20 13:51 ` Paolo Bonzini
2020-03-20 13:55 ` Dr. David Alan Gilbert
2020-03-20 14:02 ` [PATCH v3 0/2] Replaced locks with lock guard macros no-reply