* [PATCH v3 0/2] accel/tcg: Fix monitor deadlock
@ 2021-11-08 11:33 Greg Kurz
2021-11-08 11:33 ` [PATCH v3 1/2] rcu: Introduce force_rcu notifier Greg Kurz
2021-11-08 11:33 ` [PATCH v3 2/2] accel/tcg: Register a " Greg Kurz
0 siblings, 2 replies; 8+ messages in thread
From: Greg Kurz @ 2021-11-08 11:33 UTC (permalink / raw)
To: qemu-devel
Cc: Eduardo Habkost, Richard Henderson, Greg Kurz, qemu-stable,
Paolo Bonzini
Commit 7bed89958bfb ("device_core: use drain_call_rcu in in qmp_device_add")
introduced a regression in QEMU 6.0: passing 'device_add' without arguments
hangs the monitor. This was reported against qemu-system-mips64 with TCG,
but I could consistently reproduce it with other targets (x86 and ppc64).
See https://gitlab.com/qemu-project/qemu/-/issues/650 for details.
The problem is that an emulated busy-looping vCPU can stay forever in
its RCU read-side critical section and prevent drain_call_rcu() from returning.
This series fixes the issue by letting RCU kick vCPU threads out of the
read-side critical section when drain_call_rcu() is in progress. This is
achieved through notifiers, as suggested by Paolo Bonzini.
I've pushed this series to:
https://gitlab.com/gkurz/qemu/-/commits/fix-drain-call-rcu
v3:
- new separate implementations of force RCU notifiers for MTTCG and RR
v2:
- moved notifier list to RCU reader data
- separate API for notifier registration
- CPUState passed as an opaque pointer
Greg Kurz (2):
rcu: Introduce force_rcu notifier
accel/tcg: Register a force_rcu notifier
accel/tcg/tcg-accel-ops-mttcg.c | 26 ++++++++++++++++++++++++++
accel/tcg/tcg-accel-ops-rr.c | 18 ++++++++++++++++++
include/qemu/rcu.h | 15 +++++++++++++++
util/rcu.c | 19 +++++++++++++++++++
4 files changed, 78 insertions(+)
--
2.31.1
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v3 1/2] rcu: Introduce force_rcu notifier
2021-11-08 11:33 [PATCH v3 0/2] accel/tcg: Fix monitor deadlock Greg Kurz
@ 2021-11-08 11:33 ` Greg Kurz
2021-11-08 11:33 ` [PATCH v3 2/2] accel/tcg: Register a " Greg Kurz
1 sibling, 0 replies; 8+ messages in thread
From: Greg Kurz @ 2021-11-08 11:33 UTC (permalink / raw)
To: qemu-devel
Cc: Eduardo Habkost, Richard Henderson, Greg Kurz, qemu-stable,
Paolo Bonzini
The drain_call_rcu() function can be blocked as long as an RCU reader
stays in a read-side critical section. This is typically what happens
when a TCG vCPU is executing a busy loop. It can deadlock the QEMU
monitor as reported in https://gitlab.com/qemu-project/qemu/-/issues/650 .
This can be avoided by allowing drain_call_rcu() to enforce an RCU grace
period. Since each reader might need to do specific actions to end a
read-side critical section, do it with notifiers.
Prepare ground for this by adding a notifier list to the RCU reader
struct and use it in wait_for_readers() if drain_call_rcu() is in
progress. An API is added for readers to register their notifiers.
This is largely based on a draft from Paolo Bonzini.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
---
include/qemu/rcu.h | 15 +++++++++++++++
util/rcu.c | 19 +++++++++++++++++++
2 files changed, 34 insertions(+)
diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
index 515d327cf11c..e69efbd47f70 100644
--- a/include/qemu/rcu.h
+++ b/include/qemu/rcu.h
@@ -27,6 +27,7 @@
#include "qemu/thread.h"
#include "qemu/queue.h"
#include "qemu/atomic.h"
+#include "qemu/notify.h"
#include "qemu/sys_membarrier.h"
#ifdef __cplusplus
@@ -66,6 +67,13 @@ struct rcu_reader_data {
/* Data used for registry, protected by rcu_registry_lock */
QLIST_ENTRY(rcu_reader_data) node;
+
+ /*
+ * NotifierList used to force an RCU grace period. Accessed under
+ * rcu_registry_lock. Note that the notifier is called _outside_
+ * the thread!
+ */
+ NotifierList force_rcu;
};
extern __thread struct rcu_reader_data rcu_reader;
@@ -180,6 +188,13 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(RCUReadAuto, rcu_read_auto_unlock)
#define RCU_READ_LOCK_GUARD() \
g_autoptr(RCUReadAuto) _rcu_read_auto __attribute__((unused)) = rcu_read_auto_lock()
+/*
+ * Force-RCU notifiers tell readers that they should exit their
+ * read-side critical section.
+ */
+void rcu_add_force_rcu_notifier(Notifier *n);
+void rcu_remove_force_rcu_notifier(Notifier *n);
+
#ifdef __cplusplus
}
#endif
diff --git a/util/rcu.c b/util/rcu.c
index 13ac0f75cb2a..c91da9f137c8 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -46,6 +46,7 @@
unsigned long rcu_gp_ctr = RCU_GP_LOCKED;
QemuEvent rcu_gp_event;
+static int in_drain_call_rcu;
static QemuMutex rcu_registry_lock;
static QemuMutex rcu_sync_lock;
@@ -107,6 +108,8 @@ static void wait_for_readers(void)
* get some extra futex wakeups.
*/
qatomic_set(&index->waiting, false);
+ } else if (qatomic_read(&in_drain_call_rcu)) {
+ notifier_list_notify(&index->force_rcu, NULL);
}
}
@@ -339,8 +342,10 @@ void drain_call_rcu(void)
* assumed.
*/
+ qatomic_inc(&in_drain_call_rcu);
call_rcu1(&rcu_drain.rcu, drain_rcu_callback);
qemu_event_wait(&rcu_drain.drain_complete_event);
+ qatomic_dec(&in_drain_call_rcu);
if (locked) {
qemu_mutex_lock_iothread();
@@ -363,6 +368,20 @@ void rcu_unregister_thread(void)
qemu_mutex_unlock(&rcu_registry_lock);
}
+void rcu_add_force_rcu_notifier(Notifier *n)
+{
+ qemu_mutex_lock(&rcu_registry_lock);
+ notifier_list_add(&rcu_reader.force_rcu, n);
+ qemu_mutex_unlock(&rcu_registry_lock);
+}
+
+void rcu_remove_force_rcu_notifier(Notifier *n)
+{
+ qemu_mutex_lock(&rcu_registry_lock);
+ notifier_remove(n);
+ qemu_mutex_unlock(&rcu_registry_lock);
+}
+
static void rcu_init_complete(void)
{
QemuThread thread;
--
2.31.1
* [PATCH v3 2/2] accel/tcg: Register a force_rcu notifier
2021-11-08 11:33 [PATCH v3 0/2] accel/tcg: Fix monitor deadlock Greg Kurz
2021-11-08 11:33 ` [PATCH v3 1/2] rcu: Introduce force_rcu notifier Greg Kurz
@ 2021-11-08 11:33 ` Greg Kurz
2021-11-09 7:54 ` Richard Henderson
1 sibling, 1 reply; 8+ messages in thread
From: Greg Kurz @ 2021-11-08 11:33 UTC (permalink / raw)
To: qemu-devel
Cc: Eduardo Habkost, Richard Henderson, Greg Kurz, qemu-stable,
Paolo Bonzini
A TCG vCPU doing a busy loop systematically hangs the QEMU monitor
if the user passes 'device_add' without arguments. This is because
drain_call_rcu(), which is called from qmp_device_add(), cannot return
if readers don't exit read-side critical sections. That is typically
what busy-looping TCG vCPUs do:
int cpu_exec(CPUState *cpu)
{
[...]
rcu_read_lock();
[...]
while (!cpu_handle_exception(cpu, &ret)) {
// Busy loop keeps vCPU here
}
[...]
rcu_read_unlock();
return ret;
}
Have all vCPU threads register a force_rcu notifier that will kick them
out of the loop using async_run_on_cpu(). The notifier is called with the
rcu_registry_lock mutex held; using async_run_on_cpu() ensures there are
no deadlocks.
Note that when running in round-robin mode, this means that we register
only one notifier, which corresponds to the first vCPU. This is okay
since calling async_run_on_cpu() on any vCPU is enough to kick any
other vCPU from execution.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Fixes: 7bed89958bfb ("device_core: use drain_call_rcu in in qmp_device_add")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/650
Signed-off-by: Greg Kurz <groug@kaod.org>
---
accel/tcg/tcg-accel-ops-mttcg.c | 26 ++++++++++++++++++++++++++
accel/tcg/tcg-accel-ops-rr.c | 18 ++++++++++++++++++
2 files changed, 44 insertions(+)
diff --git a/accel/tcg/tcg-accel-ops-mttcg.c b/accel/tcg/tcg-accel-ops-mttcg.c
index 847d2079d21f..29632bd4c0af 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.c
+++ b/accel/tcg/tcg-accel-ops-mttcg.c
@@ -28,6 +28,7 @@
#include "sysemu/tcg.h"
#include "sysemu/replay.h"
#include "qemu/main-loop.h"
+#include "qemu/notify.h"
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "hw/boards.h"
@@ -35,6 +36,26 @@
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h"
+typedef struct MttcgForceRcuNotifier {
+ Notifier notifier;
+ CPUState *cpu;
+} MttcgForceRcuNotifier;
+
+static void do_nothing(CPUState *cpu, run_on_cpu_data d)
+{
+}
+
+static void mttcg_force_rcu(Notifier *notify, void *data)
+{
+ CPUState *cpu = container_of(notify, MttcgForceRcuNotifier, notifier)->cpu;
+
+ /*
+ * Called with rcu_registry_lock held, using async_run_on_cpu() ensures
+ * that there are no deadlocks.
+ */
+ async_run_on_cpu(cpu, do_nothing, RUN_ON_CPU_NULL);
+}
+
/*
* In the multi-threaded case each vCPU has its own thread. The TLS
* variable current_cpu can be used deep in the code to find the
@@ -43,12 +64,16 @@
static void *mttcg_cpu_thread_fn(void *arg)
{
+ MttcgForceRcuNotifier force_rcu;
CPUState *cpu = arg;
assert(tcg_enabled());
g_assert(!icount_enabled());
rcu_register_thread();
+ force_rcu.notifier.notify = mttcg_force_rcu;
+ force_rcu.cpu = cpu;
+ rcu_add_force_rcu_notifier(&force_rcu.notifier);
tcg_register_thread();
qemu_mutex_lock_iothread();
@@ -100,6 +125,7 @@ static void *mttcg_cpu_thread_fn(void *arg)
tcg_cpus_destroy(cpu);
qemu_mutex_unlock_iothread();
+ rcu_remove_force_rcu_notifier(&force_rcu.notifier);
rcu_unregister_thread();
return NULL;
}
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index a5fd26190e20..934ac21d79b5 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -28,6 +28,7 @@
#include "sysemu/tcg.h"
#include "sysemu/replay.h"
#include "qemu/main-loop.h"
+#include "qemu/notify.h"
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
@@ -133,6 +134,19 @@ static void rr_deal_with_unplugged_cpus(void)
}
}
+static void do_nothing(CPUState *cpu, run_on_cpu_data d)
+{
+}
+
+static void rr_force_rcu(Notifier *notify, void *data)
+{
+ /*
+ * Called with rcu_registry_lock held, using async_run_on_cpu() ensures
+ * that there are no deadlocks.
+ */
+ async_run_on_cpu(first_cpu, do_nothing, RUN_ON_CPU_NULL);
+}
+
/*
* In the single-threaded case each vCPU is simulated in turn. If
* there is more than a single vCPU we create a simple timer to kick
@@ -143,10 +157,13 @@ static void rr_deal_with_unplugged_cpus(void)
static void *rr_cpu_thread_fn(void *arg)
{
+ Notifier force_rcu;
CPUState *cpu = arg;
assert(tcg_enabled());
rcu_register_thread();
+ force_rcu.notify = rr_force_rcu;
+ rcu_add_force_rcu_notifier(&force_rcu);
tcg_register_thread();
qemu_mutex_lock_iothread();
@@ -255,6 +272,7 @@ static void *rr_cpu_thread_fn(void *arg)
rr_deal_with_unplugged_cpus();
}
+ rcu_remove_force_rcu_notifier(&force_rcu);
rcu_unregister_thread();
return NULL;
}
--
2.31.1
* Re: [PATCH v3 2/2] accel/tcg: Register a force_rcu notifier
2021-11-08 11:33 ` [PATCH v3 2/2] accel/tcg: Register a " Greg Kurz
@ 2021-11-09 7:54 ` Richard Henderson
2021-11-09 8:21 ` Richard Henderson
0 siblings, 1 reply; 8+ messages in thread
From: Richard Henderson @ 2021-11-09 7:54 UTC (permalink / raw)
To: Greg Kurz, qemu-devel; +Cc: Paolo Bonzini, Eduardo Habkost, qemu-stable
On 11/8/21 12:33 PM, Greg Kurz wrote:
> +static void rr_force_rcu(Notifier *notify, void *data)
> +{
> + /*
> + * Called with rcu_registry_lock held, using async_run_on_cpu() ensures
> + * that there are no deadlocks.
> + */
> + async_run_on_cpu(first_cpu, do_nothing, RUN_ON_CPU_NULL);
> +}
Should first_cpu really be rr_current_cpu?
It's not clear to me that this will work for -smp 2 -cpu thread=single.
r~
* Re: [PATCH v3 2/2] accel/tcg: Register a force_rcu notifier
2021-11-09 7:54 ` Richard Henderson
@ 2021-11-09 8:21 ` Richard Henderson
2021-11-09 17:24 ` Greg Kurz
0 siblings, 1 reply; 8+ messages in thread
From: Richard Henderson @ 2021-11-09 8:21 UTC (permalink / raw)
To: Greg Kurz, qemu-devel; +Cc: Paolo Bonzini, Eduardo Habkost, qemu-stable
On 11/9/21 8:54 AM, Richard Henderson wrote:
> On 11/8/21 12:33 PM, Greg Kurz wrote:
>> +static void rr_force_rcu(Notifier *notify, void *data)
>> +{
>> + /*
>> + * Called with rcu_registry_lock held, using async_run_on_cpu() ensures
>> + * that there are no deadlocks.
>> + */
>> + async_run_on_cpu(first_cpu, do_nothing, RUN_ON_CPU_NULL);
>> +}
>
> Should first_cpu really be rr_current_cpu?
> It's not clear to me that this will work for -smp 2 -cpu thread=single.
Alternately, no async_run_on_cpu at all, just rr_kick_next_cpu().
r~
* Re: [PATCH v3 2/2] accel/tcg: Register a force_rcu notifier
2021-11-09 8:21 ` Richard Henderson
@ 2021-11-09 17:24 ` Greg Kurz
2021-11-09 18:03 ` Paolo Bonzini
0 siblings, 1 reply; 8+ messages in thread
From: Greg Kurz @ 2021-11-09 17:24 UTC (permalink / raw)
To: Richard Henderson; +Cc: Paolo Bonzini, qemu-stable, qemu-devel, Eduardo Habkost
On Tue, 9 Nov 2021 09:21:27 +0100
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 11/9/21 8:54 AM, Richard Henderson wrote:
> > On 11/8/21 12:33 PM, Greg Kurz wrote:
> >> +static void rr_force_rcu(Notifier *notify, void *data)
> >> +{
> >> + /*
> >> + * Called with rcu_registry_lock held, using async_run_on_cpu() ensures
> >> + * that there are no deadlocks.
> >> + */
> >> + async_run_on_cpu(first_cpu, do_nothing, RUN_ON_CPU_NULL);
> >> +}
> >
> > Should first_cpu really be rr_current_cpu?
> > It's not clear to me that this will work for -smp 2 -cpu thread=single.
>
Why wouldn't it work? IIUC we always have a first_cpu and
async_run_on_cpu() will kick any vCPU currently run by the
RR thread... or am I missing something?
Anyway, it seems more explicit to use rr_current_cpu.
> Alternately, no async_run_on_cpu at all, just rr_kick_next_cpu().
>
Heh, this looks even better! I'll try this right away.
Thanks Richard !
--
Greg
>
> r~
>
* Re: [PATCH v3 2/2] accel/tcg: Register a force_rcu notifier
2021-11-09 17:24 ` Greg Kurz
@ 2021-11-09 18:03 ` Paolo Bonzini
2021-11-09 18:29 ` Greg Kurz
0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2021-11-09 18:03 UTC (permalink / raw)
To: Greg Kurz, Richard Henderson; +Cc: qemu-stable, Eduardo Habkost, qemu-devel
On 11/9/21 18:24, Greg Kurz wrote:
> Anyway, it seems more explicit to use rr_current_cpu.
>
>> Alternately, no async_run_on_cpu at all, just rr_kick_next_cpu().
>>
>
> Heh, this looks even better ! I'll try this right away.
Once you've tested it I can queue the series with just a
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -141,10 +141,10 @@ static void do_nothing(CPUState *cpu, run_on_cpu_data d)
static void rr_force_rcu(Notifier *notify, void *data)
{
/*
- * Called with rcu_registry_lock held, using async_run_on_cpu() ensures
- * that there are no deadlocks.
+ * Called with rcu_registry_lock held. rr_kick_next_cpu() is
+ * asynchronous, so there cannot be deadlocks.
*/
- async_run_on_cpu(first_cpu, do_nothing, RUN_ON_CPU_NULL);
+ rr_kick_next_cpu();
}
/*
squashed in.
Paolo
* Re: [PATCH v3 2/2] accel/tcg: Register a force_rcu notifier
2021-11-09 18:03 ` Paolo Bonzini
@ 2021-11-09 18:29 ` Greg Kurz
0 siblings, 0 replies; 8+ messages in thread
From: Greg Kurz @ 2021-11-09 18:29 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Richard Henderson, qemu-stable, Eduardo Habkost, qemu-devel
On Tue, 9 Nov 2021 19:03:56 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 11/9/21 18:24, Greg Kurz wrote:
> > Anyway, it seems more explicit to use rr_current_cpu.
> >
> >> Alternately, no async_run_on_cpu at all, just rr_kick_next_cpu().
> >>
> >
> > Heh, this looks even better ! I'll try this right away.
>
> Once you've tested it I can queue the series with just a
>
> --- a/accel/tcg/tcg-accel-ops-rr.c
> +++ b/accel/tcg/tcg-accel-ops-rr.c
> @@ -141,10 +141,10 @@ static void do_nothing(CPUState *cpu, run_on_cpu_data d)
> static void rr_force_rcu(Notifier *notify, void *data)
> {
> /*
> - * Called with rcu_registry_lock held, using async_run_on_cpu() ensures
> - * that there are no deadlocks.
> + * Called with rcu_registry_lock held. rr_kick_next_cpu() is
> + * asynchronous, so there cannot be deadlocks.
> */
> - async_run_on_cpu(first_cpu, do_nothing, RUN_ON_CPU_NULL);
> + rr_kick_next_cpu();
> }
>
> /*
>
> squashed in.
>
I've tested and it works just fine. I need to send a v4 anyway so that
the commit message is in sync with the code changes.
Cheers,
--
Greg
> Paolo
>