* [Qemu-devel] [PATCH v6 00/73] per-CPU locks
@ 2019-01-30  0:46 Emilio G. Cota
  2019-01-30  0:46 ` [Qemu-devel] [PATCH v6 01/73] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
                   ` (73 more replies)
  0 siblings, 74 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Aleksandar Markovic,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Walle,
	Palmer Dabbelt, Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x,
	Sagar Karandikar, Stafford Horne

v5: https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg02979.html

For context, the goal of this series is to replace the BQL with
per-CPU locks in many places, notably the execution loop in cpus.c.
This improves scalability for MTTCG, since CPUs no longer have
to acquire a contended global lock (the BQL) every time they
stop executing code.

See the last commit for some performance numbers.

After this series, the remaining obstacles to achieving KVM-like
scalability in MTTCG are: (1) interrupt handling, which
in some targets requires the BQL, and (2) frequent execution of
"async safe" work.
That said, some targets already scale well under MTTCG even before
this series -- for instance, when running a parallel compilation job
in an x86_64 guest, scalability is comparable to what we get with
KVM.

This series is very long. If you only have time to look at a few patches,
I suggest the following, which do most of the heavy lifting and
have not yet been reviewed:

- Patch 7: cpu: make per-CPU locks an alias of the BQL in TCG rr mode
- Patch 70: cpu: protect CPU state with cpu->lock instead of the BQL

I've tested all patches with `make check-qtest -j' for all targets.
The series is checkpatch-clean (just some warnings about __COVERITY__).

You can fetch the series from:
  https://github.com/cota/qemu/tree/cpu-lock-v6

---
Changes since v5:

- Rebase on current master
  + Fixed a few conflicts, and converted the references to cpu->halted and
    cpu->interrupt_request that had been added since v5.

- Add R-b's and Ack's -- thanks everyone!

Thanks,

		Emilio
---
 accel/tcg/cpu-exec.c            |  40 ++--
 accel/tcg/cputlb.c              |  10 +-
 accel/tcg/tcg-all.c             |  12 +-
 accel/tcg/tcg-runtime.c         |   7 +
 accel/tcg/tcg-runtime.h         |   2 +
 accel/tcg/translate-all.c       |   2 +-
 cpus-common.c                   | 129 ++++++++----
 cpus.c                          | 421 ++++++++++++++++++++++++++++++++--------
 exec.c                          |   2 +-
 gdbstub.c                       |   4 +-
 hw/arm/omap1.c                  |   4 +-
 hw/arm/pxa2xx_gpio.c            |   2 +-
 hw/arm/pxa2xx_pic.c             |   2 +-
 hw/intc/s390_flic.c             |   4 +-
 hw/mips/cps.c                   |   2 +-
 hw/misc/mips_itu.c              |   4 +-
 hw/openrisc/cputimer.c          |   2 +-
 hw/ppc/e500.c                   |   4 +-
 hw/ppc/ppc.c                    |  12 +-
 hw/ppc/ppce500_spin.c           |   6 +-
 hw/ppc/spapr_cpu_core.c         |   4 +-
 hw/ppc/spapr_hcall.c            |   4 +-
 hw/ppc/spapr_rtas.c             |   6 +-
 hw/sparc/leon3.c                |   2 +-
 hw/sparc/sun4m.c                |   8 +-
 hw/sparc64/sparc64.c            |   8 +-
 include/qom/cpu.h               | 189 +++++++++++++++---
 qom/cpu.c                       |  27 ++-
 stubs/Makefile.objs             |   1 +
 stubs/cpu-lock.c                |  28 +++
 target/alpha/cpu.c              |   8 +-
 target/alpha/translate.c        |   6 +-
 target/arm/arm-powerctl.c       |   4 +-
 target/arm/cpu.c                |   8 +-
 target/arm/helper.c             |  16 +-
 target/arm/machine.c            |   2 +-
 target/arm/op_helper.c          |   2 +-
 target/cris/cpu.c               |   2 +-
 target/cris/helper.c            |   6 +-
 target/cris/translate.c         |   5 +-
 target/hppa/cpu.c               |   2 +-
 target/hppa/translate.c         |   3 +-
 target/i386/cpu.c               |   4 +-
 target/i386/cpu.h               |   2 +-
 target/i386/hax-all.c           |  36 ++--
 target/i386/helper.c            |   8 +-
 target/i386/hvf/hvf.c           |  16 +-
 target/i386/hvf/x86hvf.c        |  38 ++--
 target/i386/kvm.c               |  78 ++++----
 target/i386/misc_helper.c       |   2 +-
 target/i386/seg_helper.c        |  13 +-
 target/i386/svm_helper.c        |   6 +-
 target/i386/whpx-all.c          |  57 +++---
 target/lm32/cpu.c               |   2 +-
 target/lm32/op_helper.c         |   4 +-
 target/m68k/cpu.c               |   2 +-
 target/m68k/op_helper.c         |   2 +-
 target/m68k/translate.c         |   9 +-
 target/microblaze/cpu.c         |   2 +-
 target/microblaze/translate.c   |   4 +-
 target/mips/cpu.c               |  11 +-
 target/mips/kvm.c               |   4 +-
 target/mips/op_helper.c         |   8 +-
 target/mips/translate.c         |   4 +-
 target/moxie/cpu.c              |   2 +-
 target/nios2/cpu.c              |   2 +-
 target/openrisc/cpu.c           |   4 +-
 target/openrisc/sys_helper.c    |   4 +-
 target/ppc/excp_helper.c        |   8 +-
 target/ppc/helper_regs.h        |   2 +-
 target/ppc/kvm.c                |   8 +-
 target/ppc/translate.c          |   6 +-
 target/ppc/translate_init.inc.c |  36 ++--
 target/riscv/cpu.c              |   5 +-
 target/riscv/op_helper.c        |   2 +-
 target/s390x/cpu.c              |  28 ++-
 target/s390x/excp_helper.c      |   4 +-
 target/s390x/kvm.c              |   2 +-
 target/s390x/sigp.c             |   8 +-
 target/sh4/cpu.c                |   2 +-
 target/sh4/helper.c             |   2 +-
 target/sh4/op_helper.c          |   2 +-
 target/sparc/cpu.c              |   6 +-
 target/sparc/helper.c           |   2 +-
 target/unicore32/cpu.c          |   2 +-
 target/unicore32/softmmu.c      |   2 +-
 target/xtensa/cpu.c             |   6 +-
 target/xtensa/exc_helper.c      |   2 +-
 target/xtensa/helper.c          |   2 +-
 89 files changed, 1018 insertions(+), 455 deletions(-)


* [Qemu-devel] [PATCH v6 01/73] cpu: convert queued work to a QSIMPLEQ
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
@ 2019-01-30  0:46 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 02/73] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
                   ` (72 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Use a QSIMPLEQ instead of open-coding the singly-linked list.

While at it, make sure that all accesses to the list are
performed while holding the list's lock.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  6 +++---
 cpus-common.c     | 25 ++++++++-----------------
 cpus.c            | 14 ++++++++++++--
 qom/cpu.c         |  1 +
 4 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 4c2feb9c17..fc39f5d529 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -321,8 +321,8 @@ struct qemu_work_item;
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
- * @work_mutex: Lock to prevent multiple access to queued_work_*.
- * @queued_work_first: First asynchronous work pending.
+ * @work_mutex: Lock to prevent multiple access to @work_list.
+ * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
  * @trace_dstate: Dynamic tracing state of events for this vCPU (bitmask).
@@ -363,7 +363,7 @@ struct CPUState {
     sigjmp_buf jmp_env;
 
     QemuMutex work_mutex;
-    struct qemu_work_item *queued_work_first, *queued_work_last;
+    QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
diff --git a/cpus-common.c b/cpus-common.c
index 3ca58c64e8..a1bbf4139f 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -107,7 +107,7 @@ void cpu_list_remove(CPUState *cpu)
 }
 
 struct qemu_work_item {
-    struct qemu_work_item *next;
+    QSIMPLEQ_ENTRY(qemu_work_item) node;
     run_on_cpu_func func;
     run_on_cpu_data data;
     bool free, exclusive, done;
@@ -116,13 +116,7 @@ struct qemu_work_item {
 static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 {
     qemu_mutex_lock(&cpu->work_mutex);
-    if (cpu->queued_work_first == NULL) {
-        cpu->queued_work_first = wi;
-    } else {
-        cpu->queued_work_last->next = wi;
-    }
-    cpu->queued_work_last = wi;
-    wi->next = NULL;
+    QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
     qemu_mutex_unlock(&cpu->work_mutex);
 
@@ -314,17 +308,14 @@ void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
 
-    if (cpu->queued_work_first == NULL) {
+    qemu_mutex_lock(&cpu->work_mutex);
+    if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
+        qemu_mutex_unlock(&cpu->work_mutex);
         return;
     }
-
-    qemu_mutex_lock(&cpu->work_mutex);
-    while (cpu->queued_work_first != NULL) {
-        wi = cpu->queued_work_first;
-        cpu->queued_work_first = wi->next;
-        if (!cpu->queued_work_first) {
-            cpu->queued_work_last = NULL;
-        }
+    while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
+        wi = QSIMPLEQ_FIRST(&cpu->work_list);
+        QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
         qemu_mutex_unlock(&cpu->work_mutex);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
diff --git a/cpus.c b/cpus.c
index b09b702712..edde8268b9 100644
--- a/cpus.c
+++ b/cpus.c
@@ -88,9 +88,19 @@ bool cpu_is_stopped(CPUState *cpu)
     return cpu->stopped || !runstate_is_running();
 }
 
+static inline bool cpu_work_list_empty(CPUState *cpu)
+{
+    bool ret;
+
+    qemu_mutex_lock(&cpu->work_mutex);
+    ret = QSIMPLEQ_EMPTY(&cpu->work_list);
+    qemu_mutex_unlock(&cpu->work_mutex);
+    return ret;
+}
+
 static bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || cpu->queued_work_first) {
+    if (cpu->stop || !cpu_work_list_empty(cpu)) {
         return false;
     }
     if (cpu_is_stopped(cpu)) {
@@ -1513,7 +1523,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             cpu = first_cpu;
         }
 
-        while (cpu && !cpu->queued_work_first && !cpu->exit_request) {
+        while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
 
             atomic_mb_set(&tcg_current_rr_cpu, cpu);
             current_cpu = cpu;
diff --git a/qom/cpu.c b/qom/cpu.c
index f5579b1cd5..06d6b6044d 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -372,6 +372,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_threads = 1;
 
     qemu_mutex_init(&cpu->work_mutex);
+    QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
 
-- 
2.17.1


* [Qemu-devel] [PATCH v6 02/73] cpu: rename cpu->work_mutex to cpu->lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
  2019-01-30  0:46 ` [Qemu-devel] [PATCH v6 01/73] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 03/73] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
                   ` (71 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

This lock will soon protect more fields of the struct. Give
it a more appropriate name.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  5 +++--
 cpus-common.c     | 14 +++++++-------
 cpus.c            |  4 ++--
 qom/cpu.c         |  2 +-
 4 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index fc39f5d529..6224b83ada 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -321,7 +321,7 @@ struct qemu_work_item;
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
- * @work_mutex: Lock to prevent multiple access to @work_list.
+ * @lock: Lock to prevent multiple access to per-CPU fields.
  * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
@@ -362,7 +362,8 @@ struct CPUState {
     int64_t icount_extra;
     sigjmp_buf jmp_env;
 
-    QemuMutex work_mutex;
+    QemuMutex lock;
+    /* fields below protected by @lock */
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
diff --git a/cpus-common.c b/cpus-common.c
index a1bbf4139f..d0e619f149 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -115,10 +115,10 @@ struct qemu_work_item {
 
 static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 {
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
 
     qemu_cpu_kick(cpu);
 }
@@ -308,15 +308,15 @@ void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
 
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
-        qemu_mutex_unlock(&cpu->work_mutex);
+        qemu_mutex_unlock(&cpu->lock);
         return;
     }
     while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
         wi = QSIMPLEQ_FIRST(&cpu->work_list);
         QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
-        qemu_mutex_unlock(&cpu->work_mutex);
+        qemu_mutex_unlock(&cpu->lock);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
              * 1) start_exclusive() is called with the BQL taken while another
@@ -332,13 +332,13 @@ void process_queued_cpu_work(CPUState *cpu)
         } else {
             wi->func(cpu, wi->data);
         }
-        qemu_mutex_lock(&cpu->work_mutex);
+        qemu_mutex_lock(&cpu->lock);
         if (wi->free) {
             g_free(wi);
         } else {
             atomic_mb_set(&wi->done, true);
         }
     }
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
     qemu_cond_broadcast(&qemu_work_cond);
 }
diff --git a/cpus.c b/cpus.c
index edde8268b9..9a3a1d8a6a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -92,9 +92,9 @@ static inline bool cpu_work_list_empty(CPUState *cpu)
 {
     bool ret;
 
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
     return ret;
 }
 
diff --git a/qom/cpu.c b/qom/cpu.c
index 06d6b6044d..a2964e53c6 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -371,7 +371,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_cores = 1;
     cpu->nr_threads = 1;
 
-    qemu_mutex_init(&cpu->work_mutex);
+    qemu_mutex_init(&cpu->lock);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
-- 
2.17.1


* [Qemu-devel] [PATCH v6 03/73] cpu: introduce cpu_mutex_lock/unlock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
  2019-01-30  0:46 ` [Qemu-devel] [PATCH v6 01/73] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 02/73] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-06 17:21   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 04/73] cpu: make qemu_work_cond per-cpu Emilio G. Cota
                   ` (70 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

The few direct users of &cpu->lock will be converted soon.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h   | 33 +++++++++++++++++++++++++++++++
 cpus.c              | 48 +++++++++++++++++++++++++++++++++++++++++++--
 stubs/cpu-lock.c    | 28 ++++++++++++++++++++++++++
 stubs/Makefile.objs |  1 +
 4 files changed, 108 insertions(+), 2 deletions(-)
 create mode 100644 stubs/cpu-lock.c

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 6224b83ada..403c48695b 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -454,6 +454,39 @@ extern CPUTailQ cpus;
 
 extern __thread CPUState *current_cpu;
 
+/**
+ * cpu_mutex_lock - lock a CPU's mutex
+ * @cpu: the CPU whose mutex is to be locked
+ *
+ * To avoid deadlock, a CPU's mutex must be acquired after the BQL.
+ */
+#define cpu_mutex_lock(cpu)                             \
+    cpu_mutex_lock_impl(cpu, __FILE__, __LINE__)
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line);
+
+/**
+ * cpu_mutex_unlock - unlock a CPU's mutex
+ * @cpu: the CPU whose mutex is to be unlocked
+ */
+#define cpu_mutex_unlock(cpu)                           \
+    cpu_mutex_unlock_impl(cpu, __FILE__, __LINE__)
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line);
+
+/**
+ * cpu_mutex_locked - check whether a CPU's mutex is locked
+ * @cpu: the CPU of interest
+ *
+ * Returns true if the calling thread is currently holding the CPU's mutex.
+ */
+bool cpu_mutex_locked(const CPUState *cpu);
+
+/**
+ * no_cpu_mutex_locked - check whether any CPU mutex is held
+ *
+ * Returns true if the calling thread is not holding any CPU mutex.
+ */
+bool no_cpu_mutex_locked(void);
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
diff --git a/cpus.c b/cpus.c
index 9a3a1d8a6a..187aed2533 100644
--- a/cpus.c
+++ b/cpus.c
@@ -83,6 +83,47 @@ static unsigned int throttle_percentage;
 #define CPU_THROTTLE_PCT_MAX 99
 #define CPU_THROTTLE_TIMESLICE_NS 10000000
 
+/* XXX: is this really the max number of CPUs? */
+#define CPU_LOCK_BITMAP_SIZE 2048
+
+/*
+ * Note: we index the bitmap with cpu->cpu_index + 1 so that the logic
+ * also works during early CPU initialization, when cpu->cpu_index is set to
+ * UNASSIGNED_CPU_INDEX == -1.
+ */
+static __thread DECLARE_BITMAP(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
+
+bool no_cpu_mutex_locked(void)
+{
+    return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
+}
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
+/* coverity gets confused by the indirect function call */
+#ifdef __COVERITY__
+    qemu_mutex_lock_impl(&cpu->lock, file, line);
+#else
+    QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
+
+    g_assert(!cpu_mutex_locked(cpu));
+    set_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+    f(&cpu->lock, file, line);
+#endif
+}
+
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
+{
+    g_assert(cpu_mutex_locked(cpu));
+    qemu_mutex_unlock_impl(&cpu->lock, file, line);
+    clear_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+}
+
+bool cpu_mutex_locked(const CPUState *cpu)
+{
+    return test_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+}
+
 bool cpu_is_stopped(CPUState *cpu)
 {
     return cpu->stopped || !runstate_is_running();
@@ -92,9 +133,9 @@ static inline bool cpu_work_list_empty(CPUState *cpu)
 {
     bool ret;
 
-    qemu_mutex_lock(&cpu->lock);
+    cpu_mutex_lock(cpu);
     ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    qemu_mutex_unlock(&cpu->lock);
+    cpu_mutex_unlock(cpu);
     return ret;
 }
 
@@ -1855,6 +1896,9 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
 {
     QemuMutexLockFunc bql_lock = atomic_read(&qemu_bql_mutex_lock_func);
 
+    /* prevent deadlock with CPU mutex */
+    g_assert(no_cpu_mutex_locked());
+
     g_assert(!qemu_mutex_iothread_locked());
     bql_lock(&qemu_global_mutex, file, line);
     iothread_locked = true;
diff --git a/stubs/cpu-lock.c b/stubs/cpu-lock.c
new file mode 100644
index 0000000000..3f07d3a28b
--- /dev/null
+++ b/stubs/cpu-lock.c
@@ -0,0 +1,28 @@
+#include "qemu/osdep.h"
+#include "qom/cpu.h"
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
+/* coverity gets confused by the indirect function call */
+#ifdef __COVERITY__
+    qemu_mutex_lock_impl(&cpu->lock, file, line);
+#else
+    QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
+    f(&cpu->lock, file, line);
+#endif
+}
+
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
+{
+    qemu_mutex_unlock_impl(&cpu->lock, file, line);
+}
+
+bool cpu_mutex_locked(const CPUState *cpu)
+{
+    return true;
+}
+
+bool no_cpu_mutex_locked(void)
+{
+    return true;
+}
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index 5dd0aeeec6..49f83cf7ff 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -8,6 +8,7 @@ stub-obj-y += blockdev-close-all-bdrv-states.o
 stub-obj-y += clock-warp.o
 stub-obj-y += cpu-get-clock.o
 stub-obj-y += cpu-get-icount.o
+stub-obj-y += cpu-lock.o
 stub-obj-y += dump.o
 stub-obj-y += error-printf.o
 stub-obj-y += fdset.o
-- 
2.17.1


* [Qemu-devel] [PATCH v6 04/73] cpu: make qemu_work_cond per-cpu
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (2 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 03/73] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 05/73] cpu: move run_on_cpu to cpus-common Emilio G. Cota
                   ` (69 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

This eliminates the need to use the BQL to queue CPU work.

While at it, give the per-cpu field a generic name ("cond") since
it will soon be used for more than just queueing CPU work.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  6 ++--
 cpus-common.c     | 72 ++++++++++++++++++++++++++++++++++++++---------
 cpus.c            |  2 +-
 qom/cpu.c         |  1 +
 4 files changed, 63 insertions(+), 18 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 403c48695b..46e3c164aa 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -322,6 +322,7 @@ struct qemu_work_item;
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
  * @lock: Lock to prevent multiple access to per-CPU fields.
+ * @cond: Condition variable for per-CPU events.
  * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
@@ -364,6 +365,7 @@ struct CPUState {
 
     QemuMutex lock;
     /* fields below protected by @lock */
+    QemuCond cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
@@ -771,12 +773,10 @@ bool cpu_is_stopped(CPUState *cpu);
  * @cpu: The vCPU to run on.
  * @func: The function to be executed.
  * @data: Data to pass to the function.
- * @mutex: Mutex to release while waiting for @func to run.
  *
  * Used internally in the implementation of run_on_cpu.
  */
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
-                   QemuMutex *mutex);
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
 
 /**
  * run_on_cpu:
diff --git a/cpus-common.c b/cpus-common.c
index d0e619f149..daf1531868 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -26,7 +26,6 @@
 static QemuMutex qemu_cpu_list_lock;
 static QemuCond exclusive_cond;
 static QemuCond exclusive_resume;
-static QemuCond qemu_work_cond;
 
 /* >= 1 if a thread is inside start_exclusive/end_exclusive.  Written
  * under qemu_cpu_list_lock, read with atomic operations.
@@ -42,7 +41,6 @@ void qemu_init_cpu_list(void)
     qemu_mutex_init(&qemu_cpu_list_lock);
     qemu_cond_init(&exclusive_cond);
     qemu_cond_init(&exclusive_resume);
-    qemu_cond_init(&qemu_work_cond);
 }
 
 void cpu_list_lock(void)
@@ -113,23 +111,37 @@ struct qemu_work_item {
     bool free, exclusive, done;
 };
 
-static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+/* Called with the CPU's lock held */
+static void queue_work_on_cpu_locked(CPUState *cpu, struct qemu_work_item *wi)
 {
-    qemu_mutex_lock(&cpu->lock);
     QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
-    qemu_mutex_unlock(&cpu->lock);
 
     qemu_cpu_kick(cpu);
 }
 
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
-                   QemuMutex *mutex)
+static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+{
+    cpu_mutex_lock(cpu);
+    queue_work_on_cpu_locked(cpu, wi);
+    cpu_mutex_unlock(cpu);
+}
+
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
     struct qemu_work_item wi;
+    bool has_bql = qemu_mutex_iothread_locked();
+
+    g_assert(no_cpu_mutex_locked());
 
     if (qemu_cpu_is_self(cpu)) {
-        func(cpu, data);
+        if (has_bql) {
+            func(cpu, data);
+        } else {
+            qemu_mutex_lock_iothread();
+            func(cpu, data);
+            qemu_mutex_unlock_iothread();
+        }
         return;
     }
 
@@ -139,13 +151,34 @@ void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
     wi.free = false;
     wi.exclusive = false;
 
-    queue_work_on_cpu(cpu, &wi);
+    cpu_mutex_lock(cpu);
+    queue_work_on_cpu_locked(cpu, &wi);
+
+    /*
+     * We are going to sleep on the CPU lock, so release the BQL.
+     *
+     * During the transition to per-CPU locks, we release the BQL _after_
+     * having kicked the destination CPU (from queue_work_on_cpu_locked above).
+     * This makes sure that the enqueued work will be seen by the CPU
+     * after being woken up from the kick, since the CPU sleeps on the BQL.
+     * Once we complete the transition to per-CPU locks, we will release
+     * the BQL earlier in this function.
+     */
+    if (has_bql) {
+        qemu_mutex_unlock_iothread();
+    }
+
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
 
-        qemu_cond_wait(&qemu_work_cond, mutex);
+        qemu_cond_wait(&cpu->cond, &cpu->lock);
         current_cpu = self_cpu;
     }
+    cpu_mutex_unlock(cpu);
+
+    if (has_bql) {
+        qemu_mutex_lock_iothread();
+    }
 }
 
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
@@ -307,6 +340,7 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
 void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
+    bool has_bql = qemu_mutex_iothread_locked();
 
     qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
@@ -324,13 +358,23 @@ void process_queued_cpu_work(CPUState *cpu)
              * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
              * neither CPU can proceed.
              */
-            qemu_mutex_unlock_iothread();
+            if (has_bql) {
+                qemu_mutex_unlock_iothread();
+            }
             start_exclusive();
             wi->func(cpu, wi->data);
             end_exclusive();
-            qemu_mutex_lock_iothread();
+            if (has_bql) {
+                qemu_mutex_lock_iothread();
+            }
         } else {
-            wi->func(cpu, wi->data);
+            if (has_bql) {
+                wi->func(cpu, wi->data);
+            } else {
+                qemu_mutex_lock_iothread();
+                wi->func(cpu, wi->data);
+                qemu_mutex_unlock_iothread();
+            }
         }
         qemu_mutex_lock(&cpu->lock);
         if (wi->free) {
@@ -340,5 +384,5 @@ void process_queued_cpu_work(CPUState *cpu)
         }
     }
     qemu_mutex_unlock(&cpu->lock);
-    qemu_cond_broadcast(&qemu_work_cond);
+    qemu_cond_broadcast(&cpu->cond);
 }
diff --git a/cpus.c b/cpus.c
index 187aed2533..42ea8cfbb5 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1236,7 +1236,7 @@ void qemu_init_cpu_loop(void)
 
 void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
-    do_run_on_cpu(cpu, func, data, &qemu_global_mutex);
+    do_run_on_cpu(cpu, func, data);
 }
 
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
diff --git a/qom/cpu.c b/qom/cpu.c
index a2964e53c6..be8393e589 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -372,6 +372,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_threads = 1;
 
     qemu_mutex_init(&cpu->lock);
+    qemu_cond_init(&cpu->cond);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
-- 
2.17.1


* [Qemu-devel] [PATCH v6 05/73] cpu: move run_on_cpu to cpus-common
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (3 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 04/73] cpu: make qemu_work_cond per-cpu Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-06 17:22   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 06/73] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
                   ` (68 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Now that we no longer pass a pointer to qemu_global_mutex, the
do_run_on_cpu wrapper is unnecessary: implement run_on_cpu directly
in cpus-common.c.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 10 ----------
 cpus-common.c     |  2 +-
 cpus.c            |  5 -----
 3 files changed, 1 insertion(+), 16 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 46e3c164aa..fe389037c5 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -768,16 +768,6 @@ void qemu_cpu_kick(CPUState *cpu);
  */
 bool cpu_is_stopped(CPUState *cpu);
 
-/**
- * do_run_on_cpu:
- * @cpu: The vCPU to run on.
- * @func: The function to be executed.
- * @data: Data to pass to the function.
- *
- * Used internally in the implementation of run_on_cpu.
- */
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
-
 /**
  * run_on_cpu:
  * @cpu: The vCPU to run on.
diff --git a/cpus-common.c b/cpus-common.c
index daf1531868..85a61eb970 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -127,7 +127,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
     cpu_mutex_unlock(cpu);
 }
 
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
     struct qemu_work_item wi;
     bool has_bql = qemu_mutex_iothread_locked();
diff --git a/cpus.c b/cpus.c
index 42ea8cfbb5..755e4addab 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1234,11 +1234,6 @@ void qemu_init_cpu_loop(void)
     qemu_thread_get_self(&io_thread);
 }
 
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
-{
-    do_run_on_cpu(cpu, func, data);
-}
-
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 {
     if (kvm_destroy_vcpu(cpu) < 0) {
-- 
2.17.1

* [Qemu-devel] [PATCH v6 06/73] cpu: introduce process_queued_cpu_work_locked
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (4 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 05/73] cpu: move run_on_cpu to cpus-common Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode Emilio G. Cota
                   ` (67 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

This completes the conversion to cpu_mutex_lock/unlock in cpus-common.c.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 cpus-common.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/cpus-common.c b/cpus-common.c
index 85a61eb970..99662bfa87 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -337,20 +337,19 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     queue_work_on_cpu(cpu, wi);
 }
 
-void process_queued_cpu_work(CPUState *cpu)
+/* Called with the CPU's lock held */
+static void process_queued_cpu_work_locked(CPUState *cpu)
 {
     struct qemu_work_item *wi;
     bool has_bql = qemu_mutex_iothread_locked();
 
-    qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
-        qemu_mutex_unlock(&cpu->lock);
         return;
     }
     while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
         wi = QSIMPLEQ_FIRST(&cpu->work_list);
         QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
-        qemu_mutex_unlock(&cpu->lock);
+        cpu_mutex_unlock(cpu);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
              * 1) start_exclusive() is called with the BQL taken while another
@@ -376,13 +375,19 @@ void process_queued_cpu_work(CPUState *cpu)
                 qemu_mutex_unlock_iothread();
             }
         }
-        qemu_mutex_lock(&cpu->lock);
+        cpu_mutex_lock(cpu);
         if (wi->free) {
             g_free(wi);
         } else {
             atomic_mb_set(&wi->done, true);
         }
     }
-    qemu_mutex_unlock(&cpu->lock);
     qemu_cond_broadcast(&cpu->cond);
 }
+
+void process_queued_cpu_work(CPUState *cpu)
+{
+    cpu_mutex_lock(cpu);
+    process_queued_cpu_work_locked(cpu);
+    cpu_mutex_unlock(cpu);
+}
-- 
2.17.1

* [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (5 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 06/73] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-07 12:40   ` Alex Bennée
  2019-02-20 16:12   ` Richard Henderson
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 08/73] tcg-runtime: define helper_cpu_halted_set Emilio G. Cota
                   ` (66 subsequent siblings)
  73 siblings, 2 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Before we can switch from the BQL to per-CPU locks in
the CPU loop, we have to accommodate the fact that TCG
rr mode (i.e. !MTTCG) cannot work with separate per-vCPU
locks. That would lead to deadlock since we need a single
lock/condvar pair on which to wait for events that affect
any vCPU, e.g. in qemu_tcg_rr_wait_io_event.

At the same time, we are moving towards an interface where
the BQL and CPU locks are independent, and the only requirement
is that the locking order is respected, i.e. the BQL is
acquired first if both locks have to be held at the same time.

In this patch we make the BQL a recursive lock under the hood.
This allows us to (1) keep the BQL and CPU locks interfaces
separate, and (2) use a single lock for all vCPUs in TCG rr mode.

Note that the BQL's API (qemu_mutex_lock/unlock_iothread) remains
non-recursive.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  2 +-
 cpus-common.c     |  2 +-
 cpus.c            | 90 +++++++++++++++++++++++++++++++++++++++++------
 qom/cpu.c         |  3 +-
 stubs/cpu-lock.c  |  6 ++--
 5 files changed, 86 insertions(+), 17 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index fe389037c5..8b85a036cf 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -363,7 +363,7 @@ struct CPUState {
     int64_t icount_extra;
     sigjmp_buf jmp_env;
 
-    QemuMutex lock;
+    QemuMutex *lock;
     /* fields below protected by @lock */
     QemuCond cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
diff --git a/cpus-common.c b/cpus-common.c
index 99662bfa87..62e282bff1 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -171,7 +171,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
 
-        qemu_cond_wait(&cpu->cond, &cpu->lock);
+        qemu_cond_wait(&cpu->cond, cpu->lock);
         current_cpu = self_cpu;
     }
     cpu_mutex_unlock(cpu);
diff --git a/cpus.c b/cpus.c
index 755e4addab..c4fa3cc876 100644
--- a/cpus.c
+++ b/cpus.c
@@ -83,6 +83,12 @@ static unsigned int throttle_percentage;
 #define CPU_THROTTLE_PCT_MAX 99
 #define CPU_THROTTLE_TIMESLICE_NS 10000000
 
+static inline bool qemu_is_tcg_rr(void)
+{
+    /* in `make check-qtest', "use_icount && !tcg_enabled()" might be true */
+    return use_icount || (tcg_enabled() && !qemu_tcg_mttcg_enabled());
+}
+
 /* XXX: is this really the max number of CPUs? */
 #define CPU_LOCK_BITMAP_SIZE 2048
 
@@ -98,25 +104,76 @@ bool no_cpu_mutex_locked(void)
     return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
 }
 
-void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+static QemuMutex qemu_global_mutex;
+static __thread bool iothread_locked;
+/*
+ * In TCG rr mode, we make the BQL a recursive mutex, so that we can use it for
+ * all vCPUs while keeping the interface as if the locks were per-CPU.
+ *
+ * The fact that the BQL is implemented recursively is invisible to BQL users;
+ * the mutex API we export (qemu_mutex_lock_iothread() etc.) is non-recursive.
+ *
+ * Locking order: the BQL is always acquired before CPU locks.
+ */
+static __thread int iothread_lock_count;
+
+static void rr_cpu_mutex_lock(void)
+{
+    if (iothread_lock_count++ == 0) {
+        /*
+         * Circumvent qemu_mutex_lock_iothread()'s state keeping by
+         * acquiring the BQL directly.
+         */
+        qemu_mutex_lock(&qemu_global_mutex);
+    }
+}
+
+static void rr_cpu_mutex_unlock(void)
+{
+    g_assert(iothread_lock_count > 0);
+    if (--iothread_lock_count == 0) {
+        /*
+         * Circumvent qemu_mutex_unlock_iothread()'s state keeping by
+         * releasing the BQL directly.
+         */
+        qemu_mutex_unlock(&qemu_global_mutex);
+    }
+}
+
+static void do_cpu_mutex_lock(CPUState *cpu, const char *file, int line)
 {
-/* coverity gets confused by the indirect function call */
+    /* coverity gets confused by the indirect function call */
 #ifdef __COVERITY__
-    qemu_mutex_lock_impl(&cpu->lock, file, line);
+    qemu_mutex_lock_impl(cpu->lock, file, line);
 #else
     QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
 
+    f(cpu->lock, file, line);
+#endif
+}
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
     g_assert(!cpu_mutex_locked(cpu));
     set_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
-    f(&cpu->lock, file, line);
-#endif
+
+    if (qemu_is_tcg_rr()) {
+        rr_cpu_mutex_lock();
+    } else {
+        do_cpu_mutex_lock(cpu, file, line);
+    }
 }
 
 void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
 {
     g_assert(cpu_mutex_locked(cpu));
-    qemu_mutex_unlock_impl(&cpu->lock, file, line);
     clear_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+
+    if (qemu_is_tcg_rr()) {
+        rr_cpu_mutex_unlock();
+        return;
+    }
+    qemu_mutex_unlock_impl(cpu->lock, file, line);
 }
 
 bool cpu_mutex_locked(const CPUState *cpu)
@@ -1215,8 +1272,6 @@ static void qemu_init_sigbus(void)
 }
 #endif /* !CONFIG_LINUX */
 
-static QemuMutex qemu_global_mutex;
-
 static QemuThread io_thread;
 
 /* cpu creation */
@@ -1876,8 +1931,6 @@ bool qemu_in_vcpu_thread(void)
     return current_cpu && qemu_cpu_is_self(current_cpu);
 }
 
-static __thread bool iothread_locked = false;
-
 bool qemu_mutex_iothread_locked(void)
 {
     return iothread_locked;
@@ -1896,6 +1949,8 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
 
     g_assert(!qemu_mutex_iothread_locked());
     bql_lock(&qemu_global_mutex, file, line);
+    g_assert(iothread_lock_count == 0);
+    iothread_lock_count++;
     iothread_locked = true;
 }
 
@@ -1903,7 +1958,10 @@ void qemu_mutex_unlock_iothread(void)
 {
     g_assert(qemu_mutex_iothread_locked());
     iothread_locked = false;
-    qemu_mutex_unlock(&qemu_global_mutex);
+    g_assert(iothread_lock_count > 0);
+    if (--iothread_lock_count == 0) {
+        qemu_mutex_unlock(&qemu_global_mutex);
+    }
 }
 
 static bool all_vcpus_paused(void)
@@ -2127,6 +2185,16 @@ void qemu_init_vcpu(CPUState *cpu)
         cpu_address_space_init(cpu, 0, "cpu-memory", cpu->memory);
     }
 
+    /*
+     * In TCG RR, cpu->lock is the BQL under the hood. In all other modes,
+     * cpu->lock is a standalone per-CPU lock.
+     */
+    if (qemu_is_tcg_rr()) {
+        qemu_mutex_destroy(cpu->lock);
+        g_free(cpu->lock);
+        cpu->lock = &qemu_global_mutex;
+    }
+
     if (kvm_enabled()) {
         qemu_kvm_start_vcpu(cpu);
     } else if (hax_enabled()) {
diff --git a/qom/cpu.c b/qom/cpu.c
index be8393e589..2c05aa1bca 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -371,7 +371,8 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_cores = 1;
     cpu->nr_threads = 1;
 
-    qemu_mutex_init(&cpu->lock);
+    cpu->lock = g_new(QemuMutex, 1);
+    qemu_mutex_init(cpu->lock);
     qemu_cond_init(&cpu->cond);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
diff --git a/stubs/cpu-lock.c b/stubs/cpu-lock.c
index 3f07d3a28b..7406a66d97 100644
--- a/stubs/cpu-lock.c
+++ b/stubs/cpu-lock.c
@@ -5,16 +5,16 @@ void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
 {
 /* coverity gets confused by the indirect function call */
 #ifdef __COVERITY__
-    qemu_mutex_lock_impl(&cpu->lock, file, line);
+    qemu_mutex_lock_impl(cpu->lock, file, line);
 #else
     QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
-    f(&cpu->lock, file, line);
+    f(cpu->lock, file, line);
 #endif
 }
 
 void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
 {
-    qemu_mutex_unlock_impl(&cpu->lock, file, line);
+    qemu_mutex_unlock_impl(cpu->lock, file, line);
 }
 
 bool cpu_mutex_locked(const CPUState *cpu)
-- 
2.17.1

* [Qemu-devel] [PATCH v6 08/73] tcg-runtime: define helper_cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (6 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-07 12:40   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 09/73] ppc: convert to helper_cpu_halted_set Emilio G. Cota
                   ` (65 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/tcg-runtime.h | 2 ++
 accel/tcg/tcg-runtime.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h
index dfe325625c..46386bb564 100644
--- a/accel/tcg/tcg-runtime.h
+++ b/accel/tcg/tcg-runtime.h
@@ -28,6 +28,8 @@ DEF_HELPER_FLAGS_1(lookup_tb_ptr, TCG_CALL_NO_WG_SE, ptr, env)
 
 DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env)
 
+DEF_HELPER_FLAGS_2(cpu_halted_set, TCG_CALL_NO_RWG, void, env, i32)
+
 #ifdef CONFIG_SOFTMMU
 
 DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG,
diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
index d0d4484406..4aa038465f 100644
--- a/accel/tcg/tcg-runtime.c
+++ b/accel/tcg/tcg-runtime.c
@@ -167,3 +167,10 @@ void HELPER(exit_atomic)(CPUArchState *env)
 {
     cpu_loop_exit_atomic(ENV_GET_CPU(env), GETPC());
 }
+
+void HELPER(cpu_halted_set)(CPUArchState *env, uint32_t val)
+{
+    CPUState *cpu = ENV_GET_CPU(env);
+
+    cpu->halted = val;
+}
-- 
2.17.1

* [Qemu-devel] [PATCH v6 09/73] ppc: convert to helper_cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (7 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 08/73] tcg-runtime: define helper_cpu_halted_set Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 10/73] cris: " Emilio G. Cota
                   ` (64 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, David Gibson, qemu-ppc

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/translate.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index e169c43643..75aac45b54 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -1575,8 +1575,7 @@ GEN_LOGICAL2(nor, tcg_gen_nor_tl, 0x03, PPC_INTEGER);
 static void gen_pause(DisasContext *ctx)
 {
     TCGv_i32 t0 = tcg_const_i32(0);
-    tcg_gen_st_i32(t0, cpu_env,
-                   -offsetof(PowerPCCPU, env) + offsetof(CPUState, halted));
+    gen_helper_cpu_halted_set(cpu_env, t0);
     tcg_temp_free_i32(t0);
 
     /* Stop translation, this gives other CPUs a chance to run */
@@ -3550,8 +3549,7 @@ static void gen_sync(DisasContext *ctx)
 static void gen_wait(DisasContext *ctx)
 {
     TCGv_i32 t0 = tcg_const_i32(1);
-    tcg_gen_st_i32(t0, cpu_env,
-                   -offsetof(PowerPCCPU, env) + offsetof(CPUState, halted));
+    gen_helper_cpu_halted_set(cpu_env, t0);
     tcg_temp_free_i32(t0);
     /* Stop translation, as the CPU is supposed to sleep from now */
     gen_exception_nip(ctx, EXCP_HLT, ctx->base.pc_next);
-- 
2.17.1

* [Qemu-devel] [PATCH v6 10/73] cris: convert to helper_cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (8 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 09/73] ppc: convert to helper_cpu_halted_set Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 11/73] hppa: " Emilio G. Cota
                   ` (63 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Edgar E. Iglesias

And fix the temp leak along the way.

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/cris/translate.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/cris/translate.c b/target/cris/translate.c
index 11b2c11174..f059745ec0 100644
--- a/target/cris/translate.c
+++ b/target/cris/translate.c
@@ -2829,8 +2829,9 @@ static int dec_rfe_etc(CPUCRISState *env, DisasContext *dc)
     cris_cc_mask(dc, 0);
 
     if (dc->op2 == 15) {
-        tcg_gen_st_i32(tcg_const_i32(1), cpu_env,
-                       -offsetof(CRISCPU, env) + offsetof(CPUState, halted));
+        TCGv_i32 tmp = tcg_const_i32(1);
+        gen_helper_cpu_halted_set(cpu_env, tmp);
+        tcg_temp_free_i32(tmp);
         tcg_gen_movi_tl(env_pc, dc->pc + 2);
         t_gen_raise_exception(EXCP_HLT);
         return 2;
-- 
2.17.1

* [Qemu-devel] [PATCH v6 11/73] hppa: convert to helper_cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (9 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 10/73] cris: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 12/73] m68k: " Emilio G. Cota
                   ` (62 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/hppa/translate.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index ce05d5619d..df9179e70f 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -2845,8 +2845,7 @@ static DisasJumpType trans_pause(DisasContext *ctx, uint32_t insn,
 
     /* Tell the qemu main loop to halt until this cpu has work.  */
     tmp = tcg_const_i32(1);
-    tcg_gen_st_i32(tmp, cpu_env, -offsetof(HPPACPU, env) +
-                                 offsetof(CPUState, halted));
+    gen_helper_cpu_halted_set(cpu_env, tmp);
     tcg_temp_free_i32(tmp);
     gen_excp_1(EXCP_HALTED);
 
-- 
2.17.1

* [Qemu-devel] [PATCH v6 12/73] m68k: convert to helper_cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (10 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 11/73] hppa: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 13/73] alpha: " Emilio G. Cota
                   ` (61 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/translate.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 752e46ef63..5bd4220e06 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -43,7 +43,6 @@
 #undef DEFO32
 #undef DEFO64
 
-static TCGv_i32 cpu_halted;
 static TCGv_i32 cpu_exception_index;
 
 static char cpu_reg_names[2 * 8 * 3 + 5 * 4];
@@ -79,9 +78,6 @@ void m68k_tcg_init(void)
 #undef DEFO32
 #undef DEFO64
 
-    cpu_halted = tcg_global_mem_new_i32(cpu_env,
-                                        -offsetof(M68kCPU, env) +
-                                        offsetof(CPUState, halted), "HALTED");
     cpu_exception_index = tcg_global_mem_new_i32(cpu_env,
                                                  -offsetof(M68kCPU, env) +
                                                  offsetof(CPUState, exception_index),
@@ -4637,6 +4633,7 @@ DISAS_INSN(halt)
 DISAS_INSN(stop)
 {
     uint16_t ext;
+    TCGv_i32 tmp;
 
     if (IS_USER(s)) {
         gen_exception(s, s->base.pc_next, EXCP_PRIVILEGE);
@@ -4646,7 +4643,9 @@ DISAS_INSN(stop)
     ext = read_im16(env, s);
 
     gen_set_sr_im(s, ext, 0);
-    tcg_gen_movi_i32(cpu_halted, 1);
+    tmp = tcg_const_i32(1);
+    gen_helper_cpu_halted_set(cpu_env, tmp);
+    tcg_temp_free_i32(tmp);
     gen_exception(s, s->pc, EXCP_HLT);
 }
 
-- 
2.17.1

* [Qemu-devel] [PATCH v6 13/73] alpha: convert to helper_cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (11 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 12/73] m68k: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 14/73] microblaze: " Emilio G. Cota
                   ` (60 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/alpha/translate.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 9d8f9b3eea..a75413e9b5 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -1226,8 +1226,7 @@ static DisasJumpType gen_call_pal(DisasContext *ctx, int palcode)
             /* WTINT */
             {
                 TCGv_i32 tmp = tcg_const_i32(1);
-                tcg_gen_st_i32(tmp, cpu_env, -offsetof(AlphaCPU, env) +
-                                             offsetof(CPUState, halted));
+                gen_helper_cpu_halted_set(cpu_env, tmp);
                 tcg_temp_free_i32(tmp);
             }
             tcg_gen_movi_i64(ctx->ir[IR_V0], 0);
@@ -1382,8 +1381,7 @@ static DisasJumpType gen_mtpr(DisasContext *ctx, TCGv vb, int regno)
         /* WAIT */
         {
             TCGv_i32 tmp = tcg_const_i32(1);
-            tcg_gen_st_i32(tmp, cpu_env, -offsetof(AlphaCPU, env) +
-                                         offsetof(CPUState, halted));
+            gen_helper_cpu_halted_set(cpu_env, tmp);
             tcg_temp_free_i32(tmp);
         }
         return gen_excp(ctx, EXCP_HALTED, 0);
-- 
2.17.1

* [Qemu-devel] [PATCH v6 14/73] microblaze: convert to helper_cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (12 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 13/73] alpha: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 15/73] cpu: define cpu_halted helpers Emilio G. Cota
                   ` (59 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/microblaze/translate.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 78ca265b04..008b84d456 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -1233,9 +1233,7 @@ static void dec_br(DisasContext *dc)
             LOG_DIS("sleep\n");
 
             t_sync_flags(dc);
-            tcg_gen_st_i32(tmp_1, cpu_env,
-                           -offsetof(MicroBlazeCPU, env)
-                           +offsetof(CPUState, halted));
+            gen_helper_cpu_halted_set(cpu_env, tmp_1);
             tcg_gen_movi_i64(cpu_SR[SR_PC], dc->pc + 4);
             gen_helper_raise_exception(cpu_env, tmp_hlt);
             tcg_temp_free_i32(tmp_hlt);
-- 
2.17.1

* [Qemu-devel] [PATCH v6 15/73] cpu: define cpu_halted helpers
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (13 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 14/73] microblaze: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 16/73] tcg-runtime: convert to cpu_halted_set Emilio G. Cota
                   ` (58 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

cpu->halted will soon be protected by cpu->lock.
We will use these helpers to ease the transition,
since right now cpu->halted has many direct callers.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 8b85a036cf..5047047666 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -489,6 +489,30 @@ bool cpu_mutex_locked(const CPUState *cpu);
  */
 bool no_cpu_mutex_locked(void);
 
+static inline uint32_t cpu_halted(CPUState *cpu)
+{
+    uint32_t ret;
+
+    if (cpu_mutex_locked(cpu)) {
+        return cpu->halted;
+    }
+    cpu_mutex_lock(cpu);
+    ret = cpu->halted;
+    cpu_mutex_unlock(cpu);
+    return ret;
+}
+
+static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        cpu->halted = val;
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    cpu->halted = val;
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
-- 
2.17.1

* [Qemu-devel] [PATCH v6 16/73] tcg-runtime: convert to cpu_halted_set
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (14 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 15/73] cpu: define cpu_halted helpers Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 17/73] arm: convert to cpu_halted Emilio G. Cota
                   ` (57 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/tcg-runtime.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
index 4aa038465f..70e3c9de71 100644
--- a/accel/tcg/tcg-runtime.c
+++ b/accel/tcg/tcg-runtime.c
@@ -172,5 +172,5 @@ void HELPER(cpu_halted_set)(CPUArchState *env, uint32_t val)
 {
     CPUState *cpu = ENV_GET_CPU(env);
 
-    cpu->halted = val;
+    cpu_halted_set(cpu, val);
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v6 17/73] arm: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (15 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 16/73] tcg-runtime: convert to cpu_halted_set Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 18/73] ppc: " Emilio G. Cota
                   ` (56 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Andrzej Zaborowski,
	Peter Maydell, qemu-arm

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/arm/omap1.c            | 4 ++--
 hw/arm/pxa2xx_gpio.c      | 2 +-
 hw/arm/pxa2xx_pic.c       | 2 +-
 target/arm/arm-powerctl.c | 4 ++--
 target/arm/cpu.c          | 2 +-
 target/arm/op_helper.c    | 2 +-
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
index 539d29ef9c..55a7672976 100644
--- a/hw/arm/omap1.c
+++ b/hw/arm/omap1.c
@@ -1769,7 +1769,7 @@ static uint64_t omap_clkdsp_read(void *opaque, hwaddr addr,
     case 0x18:	/* DSP_SYSST */
         cpu = CPU(s->cpu);
         return (s->clkm.clocking_scheme << 11) | s->clkm.cold_start |
-                (cpu->halted << 6);      /* Quite useless... */
+                (cpu_halted(cpu) << 6);      /* Quite useless... */
     }
 
     OMAP_BAD_REG(addr);
@@ -3790,7 +3790,7 @@ void omap_mpu_wakeup(void *opaque, int irq, int req)
     struct omap_mpu_state_s *mpu = (struct omap_mpu_state_s *) opaque;
     CPUState *cpu = CPU(mpu->cpu);
 
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         cpu_interrupt(cpu, CPU_INTERRUPT_EXITTB);
     }
 }
diff --git a/hw/arm/pxa2xx_gpio.c b/hw/arm/pxa2xx_gpio.c
index e15070188e..5c3fea42e9 100644
--- a/hw/arm/pxa2xx_gpio.c
+++ b/hw/arm/pxa2xx_gpio.c
@@ -128,7 +128,7 @@ static void pxa2xx_gpio_set(void *opaque, int line, int level)
         pxa2xx_gpio_irq_update(s);
 
     /* Wake-up GPIOs */
-    if (cpu->halted && (mask & ~s->dir[bank] & pxa2xx_gpio_wake[bank])) {
+    if (cpu_halted(cpu) && (mask & ~s->dir[bank] & pxa2xx_gpio_wake[bank])) {
         cpu_interrupt(cpu, CPU_INTERRUPT_EXITTB);
     }
 }
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
index 61275fa040..46ab4c3fc2 100644
--- a/hw/arm/pxa2xx_pic.c
+++ b/hw/arm/pxa2xx_pic.c
@@ -58,7 +58,7 @@ static void pxa2xx_pic_update(void *opaque)
     PXA2xxPICState *s = (PXA2xxPICState *) opaque;
     CPUState *cpu = CPU(s->cpu);
 
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         mask[0] = s->int_pending[0] & (s->int_enabled[0] | s->int_idle);
         mask[1] = s->int_pending[1] & (s->int_enabled[1] | s->int_idle);
         if (mask[0] || mask[1]) {
diff --git a/target/arm/arm-powerctl.c b/target/arm/arm-powerctl.c
index 2b856930fb..003bf6c184 100644
--- a/target/arm/arm-powerctl.c
+++ b/target/arm/arm-powerctl.c
@@ -64,7 +64,7 @@ static void arm_set_cpu_on_async_work(CPUState *target_cpu_state,
 
     /* Initialize the cpu we are turning on */
     cpu_reset(target_cpu_state);
-    target_cpu_state->halted = 0;
+    cpu_halted_set(target_cpu_state, 0);
 
     if (info->target_aa64) {
         if ((info->target_el < 3) && arm_feature(&target_cpu->env,
@@ -238,7 +238,7 @@ static void arm_set_cpu_off_async_work(CPUState *target_cpu_state,
 
     assert(qemu_mutex_iothread_locked());
     target_cpu->power_state = PSCI_OFF;
-    target_cpu_state->halted = 1;
+    cpu_halted_set(target_cpu_state, 1);
     target_cpu_state->exception_index = EXCP_HLT;
 }
 
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index d6da3f4fed..8cf2f5466b 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -149,7 +149,7 @@ static void arm_cpu_reset(CPUState *s)
     env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
 
     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
-    s->halted = cpu->start_powered_off;
+    cpu_halted_set(s, cpu->start_powered_off);
 
     if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
         env->iwmmxt.cregs[ARM_IWMMXT_wCID] = 0x69051000 | 'Q';
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index c998eadfaa..f9ccdc9abf 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -479,7 +479,7 @@ void HELPER(wfi)(CPUARMState *env, uint32_t insn_len)
     }
 
     cs->exception_index = EXCP_HLT;
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_loop_exit(cs);
 }
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 18/73] ppc: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (16 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 17/73] arm: convert to cpu_halted Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 19/73] sh4: " Emilio G. Cota
                   ` (55 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, David Gibson, qemu-ppc

In ppce500_spin.c, acquire the lock just once to update
both cpu->halted and cpu->stopped.

In hw/ppc/spapr_hcall.c, acquire the lock just once to
update cpu->halted and call cpu_has_work, since later
in the series we'll acquire the BQL (if not already held)
from cpu_has_work.

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/helper_regs.h        |  2 +-
 hw/ppc/e500.c                   |  4 ++--
 hw/ppc/ppc.c                    | 10 +++++-----
 hw/ppc/ppce500_spin.c           |  6 ++++--
 hw/ppc/spapr_cpu_core.c         |  4 ++--
 hw/ppc/spapr_hcall.c            |  4 +++-
 hw/ppc/spapr_rtas.c             |  6 +++---
 target/ppc/excp_helper.c        |  4 ++--
 target/ppc/kvm.c                |  4 ++--
 target/ppc/translate_init.inc.c |  6 +++---
 10 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/target/ppc/helper_regs.h b/target/ppc/helper_regs.h
index 5efd18049e..9298052ac5 100644
--- a/target/ppc/helper_regs.h
+++ b/target/ppc/helper_regs.h
@@ -161,7 +161,7 @@ static inline int hreg_store_msr(CPUPPCState *env, target_ulong value,
 #if !defined(CONFIG_USER_ONLY)
     if (unlikely(msr_pow == 1)) {
         if (!env->pending_interrupts && (*env->check_pow)(env)) {
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             excp = EXCP_HALTED;
         }
     }
diff --git a/hw/ppc/e500.c b/hw/ppc/e500.c
index 0581e9e3d4..d685c36767 100644
--- a/hw/ppc/e500.c
+++ b/hw/ppc/e500.c
@@ -657,7 +657,7 @@ static void ppce500_cpu_reset_sec(void *opaque)
 
     /* Secondary CPU starts in halted state for now. Needs to change when
        implementing non-kernel boot. */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
 }
 
@@ -671,7 +671,7 @@ static void ppce500_cpu_reset(void *opaque)
     cpu_reset(cs);
 
     /* Set initial guest state. */
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     env->gpr[1] = (16 * MiB) - 8;
     env->gpr[3] = bi->dt_base;
     env->gpr[4] = 0;
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index ec4be25f49..d1a5a0b877 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -151,7 +151,7 @@ static void ppc6xx_set_irq(void *opaque, int pin, int level)
             /* XXX: Note that the only way to restart the CPU is to reset it */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             }
             break;
         case PPC6xx_INPUT_HRESET:
@@ -230,10 +230,10 @@ static void ppc970_set_irq(void *opaque, int pin, int level)
             /* XXX: TODO: relay the signal to CKSTP_OUT pin */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             } else {
                 LOG_IRQ("%s: restart the CPU\n", __func__);
-                cs->halted = 0;
+                cpu_halted_set(cs, 0);
                 qemu_cpu_kick(cs);
             }
             break;
@@ -361,10 +361,10 @@ static void ppc40x_set_irq(void *opaque, int pin, int level)
             /* Level sensitive - active low */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             } else {
                 LOG_IRQ("%s: restart the CPU\n", __func__);
-                cs->halted = 0;
+                cpu_halted_set(cs, 0);
                 qemu_cpu_kick(cs);
             }
             break;
diff --git a/hw/ppc/ppce500_spin.c b/hw/ppc/ppce500_spin.c
index c45fc858de..4b3532730f 100644
--- a/hw/ppc/ppce500_spin.c
+++ b/hw/ppc/ppce500_spin.c
@@ -107,9 +107,11 @@ static void spin_kick(CPUState *cs, run_on_cpu_data data)
     map_start = ldq_p(&curspin->addr) & ~(map_size - 1);
     mmubooke_create_initial_mapping(env, 0, map_start, map_size);
 
-    cs->halted = 0;
-    cs->exception_index = -1;
+    cpu_mutex_lock(cs);
+    cpu_halted_set(cs, 0);
     cs->stopped = false;
+    cpu_mutex_unlock(cs);
+    cs->exception_index = -1;
     qemu_cpu_kick(cs);
 }
 
diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index 0405306d1e..797fd5c1a8 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -36,7 +36,7 @@ static void spapr_cpu_reset(void *opaque)
     /* All CPUs start halted.  CPU0 is unhalted from the machine level
      * reset code and the rest are explicitly started up by the guest
      * using an RTAS call */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 
     /* Set compatibility mode to match the boot CPU, which was either set
      * by the machine reset code or by CAS. This should never fail.
@@ -90,7 +90,7 @@ void spapr_cpu_set_entry_state(PowerPCCPU *cpu, target_ulong nip, target_ulong r
     env->nip = nip;
     env->gpr[3] = r3;
     kvmppc_set_reg_ppc_online(cpu, 1);
-    CPU(cpu)->halted = 0;
+    cpu_halted_set(CPU(cpu), 0);
     /* Enable Power-saving mode Exit Cause exceptions */
     ppc_store_lpcr(cpu, env->spr[SPR_LPCR] | pcc->lpcr_pm);
 }
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 17bcaa3822..a889eef4f7 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1088,11 +1088,13 @@ static target_ulong h_cede(PowerPCCPU *cpu, sPAPRMachineState *spapr,
 
     env->msr |= (1ULL << MSR_EE);
     hreg_compute_hflags(env);
+    cpu_mutex_lock(cs);
     if (!cpu_has_work(cs)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
         cs->exit_request = 1;
     }
+    cpu_mutex_unlock(cs);
     return H_SUCCESS;
 }
 
diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
index d6a0952154..925f67123c 100644
--- a/hw/ppc/spapr_rtas.c
+++ b/hw/ppc/spapr_rtas.c
@@ -109,7 +109,7 @@ static void rtas_query_cpu_stopped_state(PowerPCCPU *cpu_,
     id = rtas_ld(args, 0);
     cpu = spapr_find_cpu(id);
     if (cpu != NULL) {
-        if (CPU(cpu)->halted) {
+        if (cpu_halted(CPU(cpu))) {
             rtas_st(rets, 1, 0);
         } else {
             rtas_st(rets, 1, 2);
@@ -153,7 +153,7 @@ static void rtas_start_cpu(PowerPCCPU *callcpu, sPAPRMachineState *spapr,
     env = &newcpu->env;
     pcc = POWERPC_CPU_GET_CLASS(newcpu);
 
-    if (!CPU(newcpu)->halted) {
+    if (!cpu_halted(CPU(newcpu))) {
         rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
         return;
     }
@@ -207,7 +207,7 @@ static void rtas_stop_self(PowerPCCPU *cpu, sPAPRMachineState *spapr,
      * This could deliver an interrupt on a dying CPU and crash the
      * guest */
     ppc_store_lpcr(cpu, env->spr[SPR_LPCR] & ~pcc->lpcr_pm);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     kvmppc_set_reg_ppc_online(cpu, 0);
     qemu_cpu_kick(cs);
 }
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 0ec7ae1ad4..5e1778584a 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -206,7 +206,7 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
                 qemu_log("Machine check while not allowed. "
                         "Entering checkstop state\n");
             }
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             cpu_interrupt_exittb(cs);
         }
         if (env->msr_mask & MSR_HVB) {
@@ -954,7 +954,7 @@ void helper_pminsn(CPUPPCState *env, powerpc_pm_insn_t insn)
     CPUState *cs;
 
     cs = CPU(ppc_env_get_cpu(env));
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     env->in_pm_state = true;
 
     /* The architecture specifies that HDEC interrupts are
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index ebbb48c42f..0efdb71532 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -1374,7 +1374,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int kvmppc_handle_halt(PowerPCCPU *cpu)
@@ -1383,7 +1383,7 @@ static int kvmppc_handle_halt(PowerPCCPU *cpu)
     CPUPPCState *env = &cpu->env;
 
     if (!(cs->interrupt_request & CPU_INTERRUPT_HARD) && (msr_ee)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
 
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 59e0b86762..a757e02f52 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8454,7 +8454,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
@@ -8608,7 +8608,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
@@ -8800,7 +8800,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
-- 
2.17.1

* [Qemu-devel] [PATCH v6 19/73] sh4: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (17 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 18/73] ppc: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 20/73] i386: " Emilio G. Cota
                   ` (54 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Aurelien Jarno

Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sh4/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/sh4/op_helper.c b/target/sh4/op_helper.c
index 4f825bae5a..57cc363ccc 100644
--- a/target/sh4/op_helper.c
+++ b/target/sh4/op_helper.c
@@ -105,7 +105,7 @@ void helper_sleep(CPUSH4State *env)
 {
     CPUState *cs = CPU(sh_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     env->in_sleep = 1;
     raise_exception(env, EXCP_HLT, 0);
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v6 20/73] i386: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (18 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 19/73] sh4: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 21/73] lm32: " Emilio G. Cota
                   ` (53 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Eduardo Habkost

Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/cpu.h         |  2 +-
 target/i386/cpu.c         |  2 +-
 target/i386/hax-all.c     |  4 ++--
 target/i386/helper.c      |  4 ++--
 target/i386/hvf/hvf.c     |  8 ++++----
 target/i386/hvf/x86hvf.c  |  4 ++--
 target/i386/kvm.c         | 10 +++++-----
 target/i386/misc_helper.c |  2 +-
 target/i386/whpx-all.c    |  6 +++---
 9 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 59656a70e6..6c65b2ee5d 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1618,7 +1618,7 @@ static inline void cpu_x86_load_seg_cache_sipi(X86CPU *cpu,
                            sipi_vector << 12,
                            env->segs[R_CS].limit,
                            env->segs[R_CS].flags);
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
 }
 
 int cpu_x86_get_descr_debug(CPUX86State *env, unsigned int selector,
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 2f5412592d..a37b984b61 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4723,7 +4723,7 @@ static void x86_cpu_reset(CPUState *s)
     /* We hard-wire the BSP to the first CPU. */
     apic_designate_bsp(cpu->apic_state, s->cpu_index == 0);
 
-    s->halted = !cpu_is_bsp(cpu);
+    cpu_halted_set(s, !cpu_is_bsp(cpu));
 
     if (kvm_enabled()) {
         kvm_arch_reset_vcpu(cpu);
diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index b978a9b821..22951017cf 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -471,7 +471,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
         return 0;
     }
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
         cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
@@ -548,7 +548,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
                 !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
                 /* hlt instruction with interrupt disabled is shutdown */
                 env->eflags |= IF_MASK;
-                cpu->halted = 1;
+                cpu_halted_set(cpu, 1);
                 cpu->exception_index = EXCP_HLT;
                 ret = 1;
             }
diff --git a/target/i386/helper.c b/target/i386/helper.c
index e695f8ba7a..a75278f954 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -454,7 +454,7 @@ void x86_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     (env->hflags >> HF_INHIBIT_IRQ_SHIFT) & 1,
                     (env->a20_mask >> 20) & 1,
                     (env->hflags >> HF_SMM_SHIFT) & 1,
-                    cs->halted);
+                    cpu_halted(cs));
     } else
 #endif
     {
@@ -481,7 +481,7 @@ void x86_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     (env->hflags >> HF_INHIBIT_IRQ_SHIFT) & 1,
                     (env->a20_mask >> 20) & 1,
                     (env->hflags >> HF_SMM_SHIFT) & 1,
-                    cs->halted);
+                    cpu_halted(cs));
     }
 
     for(i = 0; i < 6; i++) {
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index e193022c03..c1ff220985 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -499,7 +499,7 @@ void hvf_reset_vcpu(CPUState *cpu) {
     }
 
     hv_vm_sync_tsc(0);
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     hv_vcpu_invalidate_tlb(cpu->hvf_fd);
     hv_vcpu_flush(cpu->hvf_fd);
 }
@@ -659,7 +659,7 @@ int hvf_vcpu_exec(CPUState *cpu)
     int ret = 0;
     uint64_t rip = 0;
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
 
     if (hvf_process_events(cpu)) {
         return EXCP_HLT;
@@ -677,7 +677,7 @@ int hvf_vcpu_exec(CPUState *cpu)
         vmx_update_tpr(cpu);
 
         qemu_mutex_unlock_iothread();
-        if (!cpu_is_bsp(X86_CPU(cpu)) && cpu->halted) {
+        if (!cpu_is_bsp(X86_CPU(cpu)) && cpu_halted(cpu)) {
             qemu_mutex_lock_iothread();
             return EXCP_HLT;
         }
@@ -711,7 +711,7 @@ int hvf_vcpu_exec(CPUState *cpu)
                 (EFLAGS(env) & IF_MASK))
                 && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
                 !(idtvec_info & VMCS_IDT_VEC_VALID)) {
-                cpu->halted = 1;
+                cpu_halted_set(cpu, 1);
                 ret = EXCP_HLT;
             }
             ret = EXCP_INTERRUPT;
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index df8e946fbc..163bbed23f 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -446,7 +446,7 @@ int hvf_process_events(CPUState *cpu_state)
     if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK)) ||
         (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cpu_state->halted = 0;
+        cpu_halted_set(cpu_state, 0);
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
         hvf_cpu_synchronize_state(cpu_state);
@@ -458,5 +458,5 @@ int hvf_process_events(CPUState *cpu_state)
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
     }
-    return cpu_state->halted;
+    return cpu_halted(cpu_state);
 }
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 9af4542fb8..9006f04d92 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2836,7 +2836,7 @@ static int kvm_get_mp_state(X86CPU *cpu)
     }
     env->mp_state = mp_state.mp_state;
     if (kvm_irqchip_in_kernel()) {
-        cs->halted = (mp_state.mp_state == KVM_MP_STATE_HALTED);
+        cpu_halted_set(cs, mp_state.mp_state == KVM_MP_STATE_HALTED);
     }
     return 0;
 }
@@ -3320,7 +3320,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         env->exception_injected = EXCP12_MCHK;
         env->has_error_code = 0;
 
-        cs->halted = 0;
+        cpu_halted_set(cs, 0);
         if (kvm_irqchip_in_kernel() && env->mp_state == KVM_MP_STATE_HALTED) {
             env->mp_state = KVM_MP_STATE_RUNNABLE;
         }
@@ -3343,7 +3343,7 @@ int kvm_arch_process_async_events(CPUState *cs)
     if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
         (cs->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cs->halted = 0;
+        cpu_halted_set(cs, 0);
     }
     if (cs->interrupt_request & CPU_INTERRUPT_SIPI) {
         kvm_cpu_synchronize_state(cs);
@@ -3356,7 +3356,7 @@ int kvm_arch_process_async_events(CPUState *cs)
                                       env->tpr_access_type);
     }
 
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int kvm_handle_halt(X86CPU *cpu)
@@ -3367,7 +3367,7 @@ static int kvm_handle_halt(X86CPU *cpu)
     if (!((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
         !(cs->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         return EXCP_HLT;
     }
 
diff --git a/target/i386/misc_helper.c b/target/i386/misc_helper.c
index 78f2020ef2..fcd6d833e8 100644
--- a/target/i386/misc_helper.c
+++ b/target/i386/misc_helper.c
@@ -554,7 +554,7 @@ static void do_hlt(X86CPU *cpu)
     CPUX86State *env = &cpu->env;
 
     env->hflags &= ~HF_INHIBIT_IRQ_MASK; /* needed if sti is just before */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index 57e53e1f1f..b9c79ccd99 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -697,7 +697,7 @@ static int whpx_handle_halt(CPUState *cpu)
           (env->eflags & IF_MASK)) &&
         !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu->exception_index = EXCP_HLT;
-        cpu->halted = true;
+        cpu_halted_set(cpu, true);
         ret = 1;
     }
     qemu_mutex_unlock_iothread();
@@ -857,7 +857,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
         (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cpu->halted = false;
+        cpu_halted_set(cpu, false);
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
@@ -887,7 +887,7 @@ static int whpx_vcpu_run(CPUState *cpu)
     int ret;
 
     whpx_vcpu_process_async_events(cpu);
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         cpu->exception_index = EXCP_HLT;
         atomic_set(&cpu->exit_request, false);
         return 0;
-- 
2.17.1

* [Qemu-devel] [PATCH v6 21/73] lm32: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (19 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 20/73] i386: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 22/73] m68k: " Emilio G. Cota
                   ` (52 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Michael Walle

Cc: Michael Walle <michael@walle.cc>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/lm32/op_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/lm32/op_helper.c b/target/lm32/op_helper.c
index 234d55e056..392634441b 100644
--- a/target/lm32/op_helper.c
+++ b/target/lm32/op_helper.c
@@ -31,7 +31,7 @@ void HELPER(hlt)(CPULM32State *env)
 {
     CPUState *cs = CPU(lm32_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
@@ -44,7 +44,7 @@ void HELPER(ill)(CPULM32State *env)
             "Connect a debugger or switch to the monitor console "
             "to find out more.\n");
     vm_stop(RUN_STATE_PAUSED);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     raise_exception(env, EXCP_HALTED);
 #endif
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v6 22/73] m68k: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (20 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 21/73] lm32: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 23/73] mips: " Emilio G. Cota
                   ` (51 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/m68k/op_helper.c b/target/m68k/op_helper.c
index 8d09ed91c4..61ba1a6dec 100644
--- a/target/m68k/op_helper.c
+++ b/target/m68k/op_helper.c
@@ -237,7 +237,7 @@ static void cf_interrupt_all(CPUM68KState *env, int is_hw)
                 do_m68k_semihosting(env, env->dregs[0]);
                 return;
             }
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             cs->exception_index = EXCP_HLT;
             cpu_loop_exit(cs);
             return;
-- 
2.17.1

* [Qemu-devel] [PATCH v6 23/73] mips: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (21 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 22/73] m68k: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 24/73] riscv: " Emilio G. Cota
                   ` (50 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Aurelien Jarno,
	Aleksandar Markovic, James Hogan

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: James Hogan <jhogan@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/mips/cps.c           | 2 +-
 hw/misc/mips_itu.c      | 4 ++--
 target/mips/kvm.c       | 2 +-
 target/mips/op_helper.c | 8 ++++----
 target/mips/translate.c | 4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/hw/mips/cps.c b/hw/mips/cps.c
index fc97f59af4..57f3c9e5fc 100644
--- a/hw/mips/cps.c
+++ b/hw/mips/cps.c
@@ -49,7 +49,7 @@ static void main_cpu_reset(void *opaque)
     cpu_reset(cs);
 
     /* All VPs are halted on reset. Leave powering up to CPC. */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 }
 
 static bool cpu_mips_itu_supported(CPUMIPSState *env)
diff --git a/hw/misc/mips_itu.c b/hw/misc/mips_itu.c
index 1257d8fce6..1fab14d332 100644
--- a/hw/misc/mips_itu.c
+++ b/hw/misc/mips_itu.c
@@ -181,7 +181,7 @@ static void wake_blocked_threads(ITCStorageCell *c)
 {
     CPUState *cs;
     CPU_FOREACH(cs) {
-        if (cs->halted && (c->blocked_threads & (1ULL << cs->cpu_index))) {
+        if (cpu_halted(cs) && (c->blocked_threads & (1ULL << cs->cpu_index))) {
             cpu_interrupt(cs, CPU_INTERRUPT_WAKE);
         }
     }
@@ -191,7 +191,7 @@ static void wake_blocked_threads(ITCStorageCell *c)
 static void QEMU_NORETURN block_thread_and_exit(ITCStorageCell *c)
 {
     c->blocked_threads |= 1ULL << current_cpu->cpu_index;
-    current_cpu->halted = 1;
+    cpu_halted_set(current_cpu, 1);
     current_cpu->exception_index = EXCP_HLT;
     cpu_loop_exit_restore(current_cpu, current_cpu->mem_io_pc);
 }
diff --git a/target/mips/kvm.c b/target/mips/kvm.c
index 8e72850962..0b177a7577 100644
--- a/target/mips/kvm.c
+++ b/target/mips/kvm.c
@@ -156,7 +156,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index aebad24ed6..0b8104c27f 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -649,7 +649,7 @@ static bool mips_vpe_is_wfi(MIPSCPU *c)
 
     /* If the VPE is halted but otherwise active, it means it's waiting for
        an interrupt.  */
-    return cpu->halted && mips_vpe_active(env);
+    return cpu_halted(cpu) && mips_vpe_active(env);
 }
 
 static bool mips_vp_is_wfi(MIPSCPU *c)
@@ -657,7 +657,7 @@ static bool mips_vp_is_wfi(MIPSCPU *c)
     CPUState *cpu = CPU(c);
     CPUMIPSState *env = &c->env;
 
-    return cpu->halted && mips_vp_active(env);
+    return cpu_halted(cpu) && mips_vp_active(env);
 }
 
 static inline void mips_vpe_wake(MIPSCPU *c)
@@ -674,7 +674,7 @@ static inline void mips_vpe_sleep(MIPSCPU *cpu)
 
     /* The VPE was shut off, really go to bed.
        Reset any old _WAKE requests.  */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_reset_interrupt(cs, CPU_INTERRUPT_WAKE);
 }
 
@@ -2669,7 +2669,7 @@ void helper_wait(CPUMIPSState *env)
 {
     CPUState *cs = CPU(mips_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_reset_interrupt(cs, CPU_INTERRUPT_WAKE);
     /* Last instruction in the block, PC was updated before
        - no need to recover PC and icount */
diff --git a/target/mips/translate.c b/target/mips/translate.c
index e9b5d1d860..3e20ce9f1d 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -30044,7 +30044,7 @@ void cpu_state_reset(CPUMIPSState *env)
             env->tcs[i].CP0_TCHalt = 1;
         }
         env->active_tc.CP0_TCHalt = 1;
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
 
         if (cs->cpu_index == 0) {
             /* VPE0 starts up enabled.  */
@@ -30052,7 +30052,7 @@ void cpu_state_reset(CPUMIPSState *env)
             env->CP0_VPEConf0 |= (1 << CP0VPEC0_MVP) | (1 << CP0VPEC0_VPA);
 
             /* TC0 starts up unhalted.  */
-            cs->halted = 0;
+            cpu_halted_set(cs, 0);
             env->active_tc.CP0_TCHalt = 0;
             env->tcs[0].CP0_TCHalt = 0;
             /* With thread 0 active.  */
-- 
2.17.1

* [Qemu-devel] [PATCH v6 24/73] riscv: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (22 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 23/73] mips: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-06 23:50   ` Alistair Francis
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 25/73] s390x: " Emilio G. Cota
                   ` (49 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Palmer Dabbelt,
	Sagar Karandikar, Bastian Koppelmann, Alistair Francis

Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Alistair Francis <alistair23@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/riscv/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
index 81bd1a77ea..261d6fbfe9 100644
--- a/target/riscv/op_helper.c
+++ b/target/riscv/op_helper.c
@@ -125,7 +125,7 @@ void helper_wfi(CPURISCVState *env)
 {
     CPUState *cs = CPU(riscv_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v6 25/73] s390x: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (23 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 24/73] riscv: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30 10:30   ` Cornelia Huck
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 26/73] sparc: " Emilio G. Cota
                   ` (48 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Cornelia Huck,
	Christian Borntraeger, David Hildenbrand, qemu-s390x

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/intc/s390_flic.c        |  2 +-
 target/s390x/cpu.c         | 22 +++++++++++++++-------
 target/s390x/excp_helper.c |  2 +-
 target/s390x/kvm.c         |  2 +-
 target/s390x/sigp.c        |  8 ++++----
 5 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index 5f8168f0f0..bfb5cf1d07 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -198,7 +198,7 @@ static void qemu_s390_flic_notify(uint32_t type)
         }
 
         /* we always kick running CPUs for now, this is tricky */
-        if (cs->halted) {
+        if (cpu_halted(cs)) {
             /* don't check for subclasses, CPUs double check when waking up */
             if (type & FLIC_PENDING_SERVICE) {
                 if (!(cpu->env.psw.mask & PSW_MASK_EXT)) {
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 18ba7f85a5..4d70ba785c 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -288,7 +288,7 @@ static void s390_cpu_initfn(Object *obj)
     CPUS390XState *env = &cpu->env;
 
     cs->env_ptr = env;
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     object_property_add(obj, "crash-information", "GuestPanicInformation",
                         s390_cpu_get_crash_info_qom, NULL, NULL, NULL, NULL);
@@ -313,8 +313,8 @@ static void s390_cpu_finalize(Object *obj)
 #if !defined(CONFIG_USER_ONLY)
 static bool disabled_wait(CPUState *cpu)
 {
-    return cpu->halted && !(S390_CPU(cpu)->env.psw.mask &
-                            (PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK));
+    return cpu_halted(cpu) && !(S390_CPU(cpu)->env.psw.mask &
+                                (PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK));
 }
 
 static unsigned s390_count_running_cpus(void)
@@ -340,10 +340,16 @@ unsigned int s390_cpu_halt(S390CPU *cpu)
     CPUState *cs = CPU(cpu);
     trace_cpu_halt(cs->cpu_index);
 
-    if (!cs->halted) {
-        cs->halted = 1;
+    /*
+     * cpu_halted and cpu_halted_set acquire the cpu lock if it
+     * isn't already held, so acquire it first.
+     */
+    cpu_mutex_lock(cs);
+    if (!cpu_halted(cs)) {
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
+    cpu_mutex_unlock(cs);
 
     return s390_count_running_cpus();
 }
@@ -353,10 +359,12 @@ void s390_cpu_unhalt(S390CPU *cpu)
     CPUState *cs = CPU(cpu);
     trace_cpu_unhalt(cs->cpu_index);
 
-    if (cs->halted) {
-        cs->halted = 0;
+    cpu_mutex_lock(cs);
+    if (cpu_halted(cs)) {
+        cpu_halted_set(cs, 0);
         cs->exception_index = -1;
     }
+    cpu_mutex_unlock(cs);
 }
 
 unsigned int s390_cpu_set_state(uint8_t cpu_state, S390CPU *cpu)
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index 2a33222f7e..d22c5b3ce5 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -461,7 +461,7 @@ try_deliver:
     if ((env->psw.mask & PSW_MASK_WAIT) || stopped) {
         /* don't trigger a cpu_loop_exit(), use an interrupt instead */
         cpu_interrupt(CPU(cpu), CPU_INTERRUPT_HALT);
-    } else if (cs->halted) {
+    } else if (cpu_halted(cs)) {
         /* unhalt if we had a WAIT PSW somewhere in our injection chain */
         s390_cpu_unhalt(cpu);
     }
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index 2ebf26adfe..ffb52888c0 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -1005,7 +1005,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int s390_kvm_irq_to_interrupt(struct kvm_s390_irq *irq,
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
index c1f9245797..d410da797a 100644
--- a/target/s390x/sigp.c
+++ b/target/s390x/sigp.c
@@ -115,7 +115,7 @@ static void sigp_stop(CPUState *cs, run_on_cpu_data arg)
     }
 
     /* disabled wait - sleeping in user space */
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);
     } else {
         /* execute the stop function */
@@ -131,7 +131,7 @@ static void sigp_stop_and_store_status(CPUState *cs, run_on_cpu_data arg)
     SigpInfo *si = arg.host_ptr;
 
     /* disabled wait - sleeping in user space */
-    if (s390_cpu_get_state(cpu) == S390_CPU_STATE_OPERATING && cs->halted) {
+    if (s390_cpu_get_state(cpu) == S390_CPU_STATE_OPERATING && cpu_halted(cs)) {
         s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);
     }
 
@@ -313,7 +313,7 @@ static void sigp_cond_emergency(S390CPU *src_cpu, S390CPU *dst_cpu,
     }
 
     /* this looks racy, but these values are only used when STOPPED */
-    idle = CPU(dst_cpu)->halted;
+    idle = cpu_halted(CPU(dst_cpu));
     psw_addr = dst_cpu->env.psw.addr;
     psw_mask = dst_cpu->env.psw.mask;
     asn = si->param;
@@ -347,7 +347,7 @@ static void sigp_sense_running(S390CPU *dst_cpu, SigpInfo *si)
     }
 
     /* If halted (which includes also STOPPED), it is not running */
-    if (CPU(dst_cpu)->halted) {
+    if (cpu_halted(CPU(dst_cpu))) {
         si->cc = SIGP_CC_ORDER_CODE_ACCEPTED;
     } else {
         set_sigp_status(si, SIGP_STAT_NOT_RUNNING);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 26/73] sparc: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (24 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 25/73] s390x: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 27/73] xtensa: " Emilio G. Cota
                   ` (47 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Fabien Chouteau,
	Mark Cave-Ayland, Artyom Tarasenko

Cc: Fabien Chouteau <chouteau@adacore.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/sparc/leon3.c      | 2 +-
 hw/sparc/sun4m.c      | 8 ++++----
 hw/sparc64/sparc64.c  | 4 ++--
 target/sparc/helper.c | 2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/hw/sparc/leon3.c b/hw/sparc/leon3.c
index fa98ab8177..0746001f91 100644
--- a/hw/sparc/leon3.c
+++ b/hw/sparc/leon3.c
@@ -61,7 +61,7 @@ static void main_cpu_reset(void *opaque)
 
     cpu_reset(cpu);
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     env->pc     = s->entry;
     env->npc    = s->entry + 4;
     env->regbase[6] = s->sp;
diff --git a/hw/sparc/sun4m.c b/hw/sparc/sun4m.c
index 709ee37e08..ddcc8d8c4f 100644
--- a/hw/sparc/sun4m.c
+++ b/hw/sparc/sun4m.c
@@ -167,7 +167,7 @@ static void cpu_kick_irq(SPARCCPU *cpu)
     CPUSPARCState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
 
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     cpu_check_irqs(env);
     qemu_cpu_kick(cs);
 }
@@ -198,7 +198,7 @@ static void main_cpu_reset(void *opaque)
     CPUState *cs = CPU(cpu);
 
     cpu_reset(cs);
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
 }
 
 static void secondary_cpu_reset(void *opaque)
@@ -207,7 +207,7 @@ static void secondary_cpu_reset(void *opaque)
     CPUState *cs = CPU(cpu);
 
     cpu_reset(cs);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 }
 
 static void cpu_halt_signal(void *opaque, int irq, int level)
@@ -825,7 +825,7 @@ static void cpu_devinit(const char *cpu_type, unsigned int id,
     } else {
         qemu_register_reset(secondary_cpu_reset, cpu);
         cs = CPU(cpu);
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
     }
     *cpu_irqs = qemu_allocate_irqs(cpu_set_irq, cpu, MAX_PILS);
     env->prom_addr = prom_addr;
diff --git a/hw/sparc64/sparc64.c b/hw/sparc64/sparc64.c
index 408388945e..372bbd4f5b 100644
--- a/hw/sparc64/sparc64.c
+++ b/hw/sparc64/sparc64.c
@@ -100,7 +100,7 @@ static void cpu_kick_irq(SPARCCPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUSPARCState *env = &cpu->env;
 
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     cpu_check_irqs(env);
     qemu_cpu_kick(cs);
 }
@@ -115,7 +115,7 @@ void sparc64_cpu_set_ivec_irq(void *opaque, int irq, int level)
         if (!(env->ivec_status & 0x20)) {
             trace_sparc64_cpu_ivec_raise_irq(irq);
             cs = CPU(cpu);
-            cs->halted = 0;
+            cpu_halted_set(cs, 0);
             env->interrupt_index = TT_IVEC;
             env->ivec_status |= 0x20;
             env->ivec_data[0] = (0x1f << 6) | irq;
diff --git a/target/sparc/helper.c b/target/sparc/helper.c
index 46232788c8..dd00cf7cac 100644
--- a/target/sparc/helper.c
+++ b/target/sparc/helper.c
@@ -245,7 +245,7 @@ void helper_power_down(CPUSPARCState *env)
 {
     CPUState *cs = CPU(sparc_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     env->pc = env->npc;
     env->npc = env->pc + 4;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 27/73] xtensa: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (25 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 26/73] sparc: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 28/73] gdbstub: " Emilio G. Cota
                   ` (46 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Max Filippov

Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/xtensa/cpu.c        | 2 +-
 target/xtensa/exc_helper.c | 2 +-
 target/xtensa/helper.c     | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index a54dbe4260..d4ca35e6cc 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -86,7 +86,7 @@ static void xtensa_cpu_reset(CPUState *s)
 
 #ifndef CONFIG_USER_ONLY
     reset_mmu(env);
-    s->halted = env->runstall;
+    cpu_halted_set(s, env->runstall);
 #endif
 }
 
diff --git a/target/xtensa/exc_helper.c b/target/xtensa/exc_helper.c
index 371a32ba5a..65031e6b65 100644
--- a/target/xtensa/exc_helper.c
+++ b/target/xtensa/exc_helper.c
@@ -116,7 +116,7 @@ void HELPER(waiti)(CPUXtensaState *env, uint32_t pc, uint32_t intlevel)
     }
 
     cpu = CPU(xtensa_env_get_cpu(env));
-    cpu->halted = 1;
+    cpu_halted_set(cpu, 1);
     HELPER(exception)(env, EXCP_HLT);
 }
 
diff --git a/target/xtensa/helper.c b/target/xtensa/helper.c
index 323c47a7fb..2ebbfc29bf 100644
--- a/target/xtensa/helper.c
+++ b/target/xtensa/helper.c
@@ -248,7 +248,7 @@ void xtensa_runstall(CPUXtensaState *env, bool runstall)
     CPUState *cpu = CPU(xtensa_env_get_cpu(env));
 
     env->runstall = runstall;
-    cpu->halted = runstall;
+    cpu_halted_set(cpu, runstall);
     if (runstall) {
         cpu_interrupt(cpu, CPU_INTERRUPT_HALT);
     } else {
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 28/73] gdbstub: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (26 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 27/73] xtensa: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 29/73] openrisc: " Emilio G. Cota
                   ` (45 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 gdbstub.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 3129b5c284..91790112a2 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1659,13 +1659,13 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
                         object_get_canonical_path_component(OBJECT(cpu));
                     len = snprintf((char *)mem_buf, sizeof(buf) / 2,
                                    "%s %s [%s]", cpu_model, cpu_name,
-                                   cpu->halted ? "halted " : "running");
+                                   cpu_halted(cpu) ? "halted " : "running");
                     g_free(cpu_name);
                 } else {
                     /* memtohex() doubles the required space */
                     len = snprintf((char *)mem_buf, sizeof(buf) / 2,
                                    "CPU#%d [%s]", cpu->cpu_index,
-                                   cpu->halted ? "halted " : "running");
+                                   cpu_halted(cpu) ? "halted " : "running");
                 }
                 trace_gdbstub_op_extra_info((char *)mem_buf);
                 memtohex(buf, mem_buf, len);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 29/73] openrisc: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (27 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 28/73] gdbstub: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 30/73] cpu-exec: " Emilio G. Cota
                   ` (44 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Stafford Horne

Cc: Stafford Horne <shorne@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/openrisc/sys_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/openrisc/sys_helper.c b/target/openrisc/sys_helper.c
index b66a45c1e0..ab4d8fb520 100644
--- a/target/openrisc/sys_helper.c
+++ b/target/openrisc/sys_helper.c
@@ -137,7 +137,7 @@ void HELPER(mtspr)(CPUOpenRISCState *env, target_ulong spr, target_ulong rb)
         if (env->pmr & PMR_DME || env->pmr & PMR_SME) {
             cpu_restore_state(cs, GETPC(), true);
             env->pc += 4;
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             raise_exception(cpu, EXCP_HALTED);
         }
         break;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 30/73] cpu-exec: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (28 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 29/73] openrisc: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-07 12:44   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 31/73] cpu: " Emilio G. Cota
                   ` (43 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cpu-exec.c | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 6c4a33262f..e3d72897e8 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -425,14 +425,21 @@ static inline TranslationBlock *tb_find(CPUState *cpu,
     return tb;
 }
 
-static inline bool cpu_handle_halt(CPUState *cpu)
+static inline bool cpu_handle_halt_locked(CPUState *cpu)
 {
-    if (cpu->halted) {
+    g_assert(cpu_mutex_locked(cpu));
+
+    if (cpu_halted(cpu)) {
 #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
         if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
+
+            /* prevent deadlock; cpu_mutex must be acquired _after_ the BQL */
+            cpu_mutex_unlock(cpu);
             qemu_mutex_lock_iothread();
+            cpu_mutex_lock(cpu);
+
             apic_poll_irq(x86_cpu->apic_state);
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
             qemu_mutex_unlock_iothread();
@@ -442,12 +449,22 @@ static inline bool cpu_handle_halt(CPUState *cpu)
             return true;
         }
 
-        cpu->halted = 0;
+        cpu_halted_set(cpu, 0);
     }
 
     return false;
 }
 
+static inline bool cpu_handle_halt(CPUState *cpu)
+{
+    bool ret;
+
+    cpu_mutex_lock(cpu);
+    ret = cpu_handle_halt_locked(cpu);
+    cpu_mutex_unlock(cpu);
+    return ret;
+}
+
 static inline void cpu_handle_debug_exception(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
@@ -546,7 +563,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
         } else if (interrupt_request & CPU_INTERRUPT_HALT) {
             replay_interrupt();
             cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
-            cpu->halted = 1;
+            cpu_halted_set(cpu, 1);
             cpu->exception_index = EXCP_HLT;
             qemu_mutex_unlock_iothread();
             return true;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 31/73] cpu: convert to cpu_halted
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (29 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 30/73] cpu-exec: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-07 20:39   ` Alex Bennée
  2019-02-20 16:21   ` Richard Henderson
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 32/73] cpu: define cpu_interrupt_request helpers Emilio G. Cota
                   ` (42 subsequent siblings)
  73 siblings, 2 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

This finishes the conversion to cpu_halted.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 cpus.c    | 8 ++++----
 qom/cpu.c | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/cpus.c b/cpus.c
index c4fa3cc876..aee129c0b3 100644
--- a/cpus.c
+++ b/cpus.c
@@ -204,7 +204,7 @@ static bool cpu_thread_is_idle(CPUState *cpu)
     if (cpu_is_stopped(cpu)) {
         return true;
     }
-    if (!cpu->halted || cpu_has_work(cpu) ||
+    if (!cpu_halted(cpu) || cpu_has_work(cpu) ||
         kvm_halt_in_kernel()) {
         return false;
     }
@@ -1686,7 +1686,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
@@ -1845,7 +1845,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                  *
                  * cpu->halted should ensure we sleep in wait_io_event
                  */
-                g_assert(cpu->halted);
+                g_assert(cpu_halted(cpu));
                 break;
             case EXCP_ATOMIC:
                 qemu_mutex_unlock_iothread();
@@ -2342,7 +2342,7 @@ CpuInfoList *qmp_query_cpus(Error **errp)
         info->value = g_malloc0(sizeof(*info->value));
         info->value->CPU = cpu->cpu_index;
         info->value->current = (cpu == first_cpu);
-        info->value->halted = cpu->halted;
+        info->value->halted = cpu_halted(cpu);
         info->value->qom_path = object_get_canonical_path(OBJECT(cpu));
         info->value->thread_id = cpu->thread_id;
 #if defined(TARGET_I386)
diff --git a/qom/cpu.c b/qom/cpu.c
index 2c05aa1bca..c5106d5af8 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -261,7 +261,7 @@ static void cpu_common_reset(CPUState *cpu)
     }
 
     cpu->interrupt_request = 0;
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     cpu->mem_io_pc = 0;
     cpu->mem_io_vaddr = 0;
     cpu->icount_extra = 0;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 32/73] cpu: define cpu_interrupt_request helpers
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (30 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 31/73] cpu: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 33/73] ppc: use cpu_reset_interrupt Emilio G. Cota
                   ` (41 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Add a comment about how atomic_read works here. The comment refers to
a "BQL-less CPU loop", which will materialize toward the end
of this series.

Note that the modifications to cpu_reset_interrupt are there to
avoid deadlock during the CPU lock transition; once that is complete,
cpu_reset_interrupt will be simple again.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 37 +++++++++++++++++++++++++++++++++++++
 qom/cpu.c         | 27 +++++++++++++++++++++------
 2 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 5047047666..4a87c1fef7 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -513,6 +513,43 @@ static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
     cpu_mutex_unlock(cpu);
 }
 
+/*
+ * When sending an interrupt, setters OR the appropriate bit and kick the
+ * destination vCPU. The latter can then read interrupt_request without
+ * acquiring the CPU lock, because once the kick-induced exit completes, they'll
+ * an up-to-date interrupt_request.
+ * Setters always acquire the lock, which guarantees that (1) concurrent
+ * updates from different threads won't result in data races, and (2) the
+ * BQL-less CPU loop will always see an up-to-date interrupt_request, since the
+ * loop holds the CPU lock.
+ */
+static inline uint32_t cpu_interrupt_request(CPUState *cpu)
+{
+    return atomic_read(&cpu->interrupt_request);
+}
+
+static inline void cpu_interrupt_request_or(CPUState *cpu, uint32_t mask)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+    cpu_mutex_unlock(cpu);
+}
+
+static inline void cpu_interrupt_request_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, val);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, val);
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
diff --git a/qom/cpu.c b/qom/cpu.c
index c5106d5af8..00add81a7f 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -98,14 +98,29 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
  * BQL here if we need to.  cpu_interrupt assumes it is held.*/
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool need_lock = !qemu_mutex_iothread_locked();
+    bool has_bql = qemu_mutex_iothread_locked();
+    bool has_cpu_lock = cpu_mutex_locked(cpu);
 
-    if (need_lock) {
-        qemu_mutex_lock_iothread();
+    if (has_bql) {
+        if (has_cpu_lock) {
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+        } else {
+            cpu_mutex_lock(cpu);
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+            cpu_mutex_unlock(cpu);
+        }
+        return;
+    }
+
+    if (has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
-    cpu->interrupt_request &= ~mask;
-    if (need_lock) {
-        qemu_mutex_unlock_iothread();
+    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+    qemu_mutex_unlock_iothread();
+    if (!has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
 }
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 33/73] ppc: use cpu_reset_interrupt
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (31 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 32/73] cpu: define cpu_interrupt_request helpers Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 34/73] exec: " Emilio G. Cota
                   ` (40 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, David Gibson, qemu-ppc

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/excp_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 5e1778584a..737c9c72be 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -880,7 +880,7 @@ bool ppc_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     if (interrupt_request & CPU_INTERRUPT_HARD) {
         ppc_hw_interrupt(env);
         if (env->pending_interrupts == 0) {
-            cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
         }
         return true;
     }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 34/73] exec: use cpu_reset_interrupt
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (32 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 33/73] ppc: use cpu_reset_interrupt Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 35/73] i386: " Emilio G. Cota
                   ` (39 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index da3e635f91..21826f0816 100644
--- a/exec.c
+++ b/exec.c
@@ -777,7 +777,7 @@ static int cpu_common_post_load(void *opaque, int version_id)
 
     /* 0x01 was CPU_INTERRUPT_EXIT. This line can be removed when the
        version_id is increased. */
-    cpu->interrupt_request &= ~0x01;
+    cpu_reset_interrupt(cpu, 1);
     tlb_flush(cpu);
 
     /* loadvm has just updated the content of RAM, bypassing the
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 35/73] i386: use cpu_reset_interrupt
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (33 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 34/73] exec: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 36/73] s390x: " Emilio G. Cota
                   ` (38 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

From: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/hax-all.c    |  4 ++--
 target/i386/hvf/x86hvf.c |  8 ++++----
 target/i386/kvm.c        | 14 +++++++-------
 target/i386/seg_helper.c | 13 ++++++-------
 target/i386/svm_helper.c |  2 +-
 target/i386/whpx-all.c   | 10 +++++-----
 6 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index 22951017cf..518c6ff103 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -424,7 +424,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
         irq = cpu_get_pic_interrupt(env);
         if (irq >= 0) {
             hax_inject_interrupt(env, irq);
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
         }
     }
 
@@ -474,7 +474,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
     cpu_halted_set(cpu, 0);
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 163bbed23f..e8b13ed534 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -402,7 +402,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
-            cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
             wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
         } else {
@@ -414,7 +414,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
         int line = cpu_get_pic_interrupt(&x86cpu->env);
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);
         if (line >= 0) {
             wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
@@ -440,7 +440,7 @@ int hvf_process_events(CPUState *cpu_state)
     }
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
     if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
@@ -453,7 +453,7 @@ int hvf_process_events(CPUState *cpu_state)
         do_cpu_sipi(cpu);
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_TPR);
         hvf_cpu_synchronize_state(cpu_state);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 9006f04d92..ca2629f0fe 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2893,7 +2893,7 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
              */
             events.smi.pending = cs->interrupt_request & CPU_INTERRUPT_SMI;
             events.smi.latched_init = cs->interrupt_request & CPU_INTERRUPT_INIT;
-            cs->interrupt_request &= ~(CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
         } else {
             /* Keep these in cs->interrupt_request.  */
             events.smi.pending = 0;
@@ -3189,7 +3189,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
         if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
             qemu_mutex_lock_iothread();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             qemu_mutex_unlock_iothread();
             DPRINTF("injected NMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
@@ -3200,7 +3200,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
         }
         if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
             qemu_mutex_lock_iothread();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
             qemu_mutex_unlock_iothread();
             DPRINTF("injected SMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
@@ -3236,7 +3236,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
             (env->eflags & IF_MASK)) {
             int irq;
 
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
                 struct kvm_interrupt intr;
@@ -3307,7 +3307,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         /* We must not raise CPU_INTERRUPT_MCE if it's not supported. */
         assert(env->mcg_cap);
 
-        cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
 
         kvm_cpu_synchronize_state(cs);
 
@@ -3337,7 +3337,7 @@ int kvm_arch_process_async_events(CPUState *cs)
     }
 
     if (cs->interrupt_request & CPU_INTERRUPT_POLL) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
     if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
@@ -3350,7 +3350,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         do_cpu_sipi(cpu);
     }
     if (cs->interrupt_request & CPU_INTERRUPT_TPR) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
         kvm_cpu_synchronize_state(cs);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
diff --git a/target/i386/seg_helper.c b/target/i386/seg_helper.c
index 63e265cb38..b580403ef4 100644
--- a/target/i386/seg_helper.c
+++ b/target/i386/seg_helper.c
@@ -1332,7 +1332,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     switch (interrupt_request) {
 #if !defined(CONFIG_USER_ONLY)
     case CPU_INTERRUPT_POLL:
-        cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
         break;
 #endif
@@ -1341,23 +1341,22 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         break;
     case CPU_INTERRUPT_SMI:
         cpu_svm_check_intercept_param(env, SVM_EXIT_SMI, 0, 0);
-        cs->interrupt_request &= ~CPU_INTERRUPT_SMI;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_SMI);
         do_smm_enter(cpu);
         break;
     case CPU_INTERRUPT_NMI:
         cpu_svm_check_intercept_param(env, SVM_EXIT_NMI, 0, 0);
-        cs->interrupt_request &= ~CPU_INTERRUPT_NMI;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_NMI);
         env->hflags2 |= HF2_NMI_MASK;
         do_interrupt_x86_hardirq(env, EXCP02_NMI, 1);
         break;
     case CPU_INTERRUPT_MCE:
-        cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
         do_interrupt_x86_hardirq(env, EXCP12_MCHK, 0);
         break;
     case CPU_INTERRUPT_HARD:
         cpu_svm_check_intercept_param(env, SVM_EXIT_INTR, 0, 0);
-        cs->interrupt_request &= ~(CPU_INTERRUPT_HARD |
-                                   CPU_INTERRUPT_VIRQ);
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD | CPU_INTERRUPT_VIRQ);
         intno = cpu_get_pic_interrupt(env);
         qemu_log_mask(CPU_LOG_TB_IN_ASM,
                       "Servicing hardware INT=0x%02x\n", intno);
@@ -1372,7 +1371,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         qemu_log_mask(CPU_LOG_TB_IN_ASM,
                       "Servicing virtual hardware INT=0x%02x\n", intno);
         do_interrupt_x86_hardirq(env, intno, 1);
-        cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
         break;
 #endif
     }
diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index 9fd22a883b..a6d33e55d8 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -700,7 +700,7 @@ void do_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
     env->hflags &= ~HF_GUEST_MASK;
     env->intercept = 0;
     env->intercept_exceptions = 0;
-    cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+    cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
     env->tsc_offset = 0;
 
     env->gdt.base  = x86_ldq_phys(cs, env->vm_hsave + offsetof(struct vmcb,
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index b9c79ccd99..9673bdc219 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -728,14 +728,14 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     if (!vcpu->interruption_pending &&
         cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
         if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             vcpu->interruptable = false;
             new_int.InterruptionType = WHvX64PendingNmi;
             new_int.InterruptionPending = 1;
             new_int.InterruptionVector = 2;
         }
         if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
         }
     }
 
@@ -758,7 +758,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
         vcpu->interruptable && (env->eflags & IF_MASK)) {
         assert(!new_int.InterruptionPending);
         if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
                 new_int.InterruptionType = WHvX64PendingInterrupt;
@@ -850,7 +850,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
@@ -868,7 +868,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
         }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 36/73] s390x: use cpu_reset_interrupt
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (34 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 35/73] i386: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 37/73] openrisc: " Emilio G. Cota
                   ` (37 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Cornelia Huck,
	David Hildenbrand, qemu-s390x

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/s390x/excp_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index d22c5b3ce5..7ca50b3df6 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -454,7 +454,7 @@ try_deliver:
 
     /* we might still have pending interrupts, but not deliverable */
     if (!env->pending_int && !qemu_s390_flic_has_any(flic)) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
     }
 
     /* WAIT PSW during interrupt injection or STOP interrupt */
-- 
2.17.1


* [Qemu-devel] [PATCH v6 37/73] openrisc: use cpu_reset_interrupt
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (35 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 36/73] s390x: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 38/73] arm: convert to cpu_interrupt_request Emilio G. Cota
                   ` (36 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Stafford Horne

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/openrisc/sys_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/openrisc/sys_helper.c b/target/openrisc/sys_helper.c
index ab4d8fb520..c645cc896d 100644
--- a/target/openrisc/sys_helper.c
+++ b/target/openrisc/sys_helper.c
@@ -170,7 +170,7 @@ void HELPER(mtspr)(CPUOpenRISCState *env, target_ulong spr, target_ulong rb)
                 env->ttmr = (rb & ~TTMR_IP) | ip;
             } else {    /* Clear IP bit.  */
                 env->ttmr = rb & ~TTMR_IP;
-                cs->interrupt_request &= ~CPU_INTERRUPT_TIMER;
+                cpu_reset_interrupt(cs, CPU_INTERRUPT_TIMER);
             }
 
             cpu_openrisc_timer_update(cpu);
-- 
2.17.1


* [Qemu-devel] [PATCH v6 38/73] arm: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (36 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 37/73] openrisc: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-07 20:55   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 39/73] i386: " Emilio G. Cota
                   ` (35 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Peter Maydell, qemu-arm

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/arm/cpu.c     |  6 +++---
 target/arm/helper.c  | 16 +++++++---------
 target/arm/machine.c |  2 +-
 3 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 8cf2f5466b..ef652ebdc2 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -49,7 +49,7 @@ static bool arm_cpu_has_work(CPUState *cs)
     ARMCPU *cpu = ARM_CPU(cs);
 
     return (cpu->power_state != PSCI_OFF)
-        && cs->interrupt_request &
+        && cpu_interrupt_request(cs) &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
          | CPU_INTERRUPT_EXITTB);
@@ -451,7 +451,7 @@ void arm_cpu_update_virq(ARMCPU *cpu)
     bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
         (env->irq_line_state & CPU_INTERRUPT_VIRQ);
 
-    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
+    if (new_state != ((cpu_interrupt_request(cs) & CPU_INTERRUPT_VIRQ) != 0)) {
         if (new_state) {
             cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
         } else {
@@ -472,7 +472,7 @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
         (env->irq_line_state & CPU_INTERRUPT_VFIQ);
 
-    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
+    if (new_state != ((cpu_interrupt_request(cs) & CPU_INTERRUPT_VFIQ) != 0)) {
         if (new_state) {
             cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
         } else {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 66faebea8e..cb42d19fe8 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -1782,23 +1782,24 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
     CPUState *cs = ENV_GET_CPU(env);
     uint64_t hcr_el2 = arm_hcr_el2_eff(env);
     uint64_t ret = 0;
+    uint32_t interrupt_request = cpu_interrupt_request(cs);
 
     if (hcr_el2 & HCR_IMO) {
-        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+        if (interrupt_request & CPU_INTERRUPT_VIRQ) {
             ret |= CPSR_I;
         }
     } else {
-        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+        if (interrupt_request & CPU_INTERRUPT_HARD) {
             ret |= CPSR_I;
         }
     }
 
     if (hcr_el2 & HCR_FMO) {
-        if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+        if (interrupt_request & CPU_INTERRUPT_VFIQ) {
             ret |= CPSR_F;
         }
     } else {
-        if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
+        if (interrupt_request & CPU_INTERRUPT_FIQ) {
             ret |= CPSR_F;
         }
     }
@@ -9435,10 +9436,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
         return;
     }
 
-    /* Hooks may change global state so BQL should be held, also the
-     * BQL needs to be held for any modification of
-     * cs->interrupt_request.
-     */
+    /* Hooks may change global state so BQL should be held */
     g_assert(qemu_mutex_iothread_locked());
 
     arm_call_pre_el_change_hook(cpu);
@@ -9453,7 +9451,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
     arm_call_el_change_hook(cpu);
 
     if (!kvm_enabled()) {
-        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_EXITTB);
     }
 }
 
diff --git a/target/arm/machine.c b/target/arm/machine.c
index b292549614..4f2099ecde 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -693,7 +693,7 @@ static int cpu_post_load(void *opaque, int version_id)
     if (env->irq_line_state == UINT32_MAX) {
         CPUState *cs = CPU(cpu);
 
-        env->irq_line_state = cs->interrupt_request &
+        env->irq_line_state = cpu_interrupt_request(cs) &
             (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
              CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
     }
-- 
2.17.1


* [Qemu-devel] [PATCH v6 39/73] i386: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (37 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 38/73] arm: convert to cpu_interrupt_request Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-08 11:00   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 40/73] i386/kvm: " Emilio G. Cota
                   ` (34 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/cpu.c        | 2 +-
 target/i386/helper.c     | 4 ++--
 target/i386/svm_helper.c | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index a37b984b61..35dea8c152 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5678,7 +5678,7 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
 
 static bool x86_cpu_has_work(CPUState *cs)
 {
-    return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
+    return x86_cpu_pending_interrupt(cs, cpu_interrupt_request(cs)) != 0;
 }
 
 static void x86_disas_set_info(CPUState *cs, disassemble_info *info)
diff --git a/target/i386/helper.c b/target/i386/helper.c
index a75278f954..9197fb4edc 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -1035,12 +1035,12 @@ void do_cpu_init(X86CPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUX86State *env = &cpu->env;
     CPUX86State *save = g_new(CPUX86State, 1);
-    int sipi = cs->interrupt_request & CPU_INTERRUPT_SIPI;
+    int sipi = cpu_interrupt_request(cs) & CPU_INTERRUPT_SIPI;
 
     *save = *env;
 
     cpu_reset(cs);
-    cs->interrupt_request = sipi;
+    cpu_interrupt_request_set(cs, sipi);
     memcpy(&env->start_init_save, &save->start_init_save,
            offsetof(CPUX86State, end_init_save) -
            offsetof(CPUX86State, start_init_save));
diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index a6d33e55d8..ebf3643ba7 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -316,7 +316,7 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
     if (int_ctl & V_IRQ_MASK) {
         CPUState *cs = CPU(x86_env_get_cpu(env));
 
-        cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_VIRQ);
     }
 
     /* maybe we need to inject an event */
@@ -674,7 +674,7 @@ void do_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
                        env->vm_vmcb + offsetof(struct vmcb, control.int_ctl));
     int_ctl &= ~(V_TPR_MASK | V_IRQ_MASK);
     int_ctl |= env->v_tpr & V_TPR_MASK;
-    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_VIRQ) {
         int_ctl |= V_IRQ_MASK;
     }
     x86_stl_phys(cs,
-- 
2.17.1


* [Qemu-devel] [PATCH v6 40/73] i386/kvm: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (38 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 39/73] i386: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-08 11:15   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 41/73] i386/hax-all: " Emilio G. Cota
                   ` (33 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/kvm.c | 54 +++++++++++++++++++++++++++--------------------
 1 file changed, 31 insertions(+), 23 deletions(-)

diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index ca2629f0fe..3f3c670897 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2888,11 +2888,14 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
         events.smi.smm = !!(env->hflags & HF_SMM_MASK);
         events.smi.smm_inside_nmi = !!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK);
         if (kvm_irqchip_in_kernel()) {
+            uint32_t interrupt_request;
+
             /* As soon as these are moved to the kernel, remove them
              * from cs->interrupt_request.
              */
-            events.smi.pending = cs->interrupt_request & CPU_INTERRUPT_SMI;
-            events.smi.latched_init = cs->interrupt_request & CPU_INTERRUPT_INIT;
+            interrupt_request = cpu_interrupt_request(cs);
+            events.smi.pending = interrupt_request & CPU_INTERRUPT_SMI;
+            events.smi.latched_init = interrupt_request & CPU_INTERRUPT_INIT;
             cpu_reset_interrupt(cs, CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
         } else {
             /* Keep these in cs->interrupt_request.  */
@@ -3183,14 +3186,14 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
+    uint32_t interrupt_request;
     int ret;
 
+    interrupt_request = cpu_interrupt_request(cpu);
     /* Inject NMI */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
-        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
-            qemu_mutex_lock_iothread();
+    if (interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
-            qemu_mutex_unlock_iothread();
             DPRINTF("injected NMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
             if (ret < 0) {
@@ -3198,10 +3201,8 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
                         strerror(-ret));
             }
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
-            qemu_mutex_lock_iothread();
+        if (interrupt_request & CPU_INTERRUPT_SMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
-            qemu_mutex_unlock_iothread();
             DPRINTF("injected SMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
             if (ret < 0) {
@@ -3215,16 +3216,18 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
         qemu_mutex_lock_iothread();
     }
 
+    interrupt_request = cpu_interrupt_request(cpu);
+
     /* Force the VCPU out of its inner loop to process any INIT requests
      * or (for userspace APIC, but it is cheap to combine the checks here)
      * pending TPR access reports.
      */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
-        if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if (interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        if ((interrupt_request & CPU_INTERRUPT_INIT) &&
             !(env->hflags & HF_SMM_MASK)) {
             cpu->exit_request = 1;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        if (interrupt_request & CPU_INTERRUPT_TPR) {
             cpu->exit_request = 1;
         }
     }
@@ -3232,7 +3235,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     if (!kvm_pic_in_kernel()) {
         /* Try to inject an interrupt if the guest can accept it */
         if (run->ready_for_interrupt_injection &&
-            (cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+            (interrupt_request & CPU_INTERRUPT_HARD) &&
             (env->eflags & IF_MASK)) {
             int irq;
 
@@ -3256,7 +3259,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
          * interrupt, request an interrupt window exit.  This will
          * cause a return to userspace as soon as the guest is ready to
          * receive interrupts. */
-        if ((cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
             run->request_interrupt_window = 1;
         } else {
             run->request_interrupt_window = 0;
@@ -3302,8 +3305,9 @@ int kvm_arch_process_async_events(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
     CPUX86State *env = &cpu->env;
+    uint32_t interrupt_request;
 
-    if (cs->interrupt_request & CPU_INTERRUPT_MCE) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_MCE) {
         /* We must not raise CPU_INTERRUPT_MCE if it's not supported. */
         assert(env->mcg_cap);
 
@@ -3326,7 +3330,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         }
     }
 
-    if ((cs->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
         kvm_cpu_synchronize_state(cs);
         do_cpu_init(cpu);
@@ -3336,20 +3340,21 @@ int kvm_arch_process_async_events(CPUState *cs)
         return 0;
     }
 
-    if (cs->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
-    if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    interrupt_request = cpu_interrupt_request(cs);
+    if (((interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
-        (cs->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cs, 0);
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (interrupt_request & CPU_INTERRUPT_SIPI) {
         kvm_cpu_synchronize_state(cs);
         do_cpu_sipi(cpu);
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
         kvm_cpu_synchronize_state(cs);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
@@ -3363,10 +3368,13 @@ static int kvm_handle_halt(X86CPU *cpu)
 {
     CPUState *cs = CPU(cpu);
     CPUX86State *env = &cpu->env;
+    uint32_t interrupt_request;
+
+    interrupt_request = cpu_interrupt_request(cs);
 
-    if (!((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if (!((interrupt_request & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
-        !(cs->interrupt_request & CPU_INTERRUPT_NMI)) {
+        !(interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cs, 1);
         return EXCP_HLT;
     }
-- 
2.17.1


* [Qemu-devel] [PATCH v6 41/73] i386/hax-all: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (39 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 40/73] i386/kvm: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-08 11:20   ` Alex Bennée
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 42/73] i386/whpx-all: " Emilio G. Cota
                   ` (32 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/hax-all.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index 518c6ff103..18da1808c6 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -284,7 +284,7 @@ int hax_vm_destroy(struct hax_vm *vm)
 
 static void hax_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
@@ -418,7 +418,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
      * Unlike KVM, HAX kernel check for the eflags, instead of qemu
      */
     if (ht->ready_for_interrupt_injection &&
-        (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        (cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
         int irq;
 
         irq = cpu_get_pic_interrupt(env);
@@ -432,7 +432,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
      * interrupt, request an interrupt window exit.  This will
      * cause a return to userspace as soon as the guest is ready to
      * receive interrupts. */
-    if ((cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+    if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
         ht->request_interrupt_window = 1;
     } else {
         ht->request_interrupt_window = 0;
@@ -473,19 +473,19 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
 
     cpu_halted_set(cpu, 0);
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) {
         DPRINTF("\nhax_vcpu_hax_exec: handling INIT for %d\n",
                 cpu->cpu_index);
         do_cpu_init(x86_cpu);
         hax_vcpu_sync_state(env, 1);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_SIPI) {
         DPRINTF("hax_vcpu_hax_exec: handling SIPI for %d\n",
                 cpu->cpu_index);
         hax_vcpu_sync_state(env, 0);
@@ -544,13 +544,17 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
             ret = -1;
             break;
         case HAX_EXIT_HLT:
-            if (!(cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
-                !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
-                /* hlt instruction with interrupt disabled is shutdown */
-                env->eflags |= IF_MASK;
-                cpu_halted_set(cpu, 1);
-                cpu->exception_index = EXCP_HLT;
-                ret = 1;
+            {
+                uint32_t interrupt_request = cpu_interrupt_request(cpu);
+
+                if (!(interrupt_request & CPU_INTERRUPT_HARD) &&
+                    !(interrupt_request & CPU_INTERRUPT_NMI)) {
+                    /* hlt instruction with interrupt disabled is shutdown */
+                    env->eflags |= IF_MASK;
+                    cpu_halted_set(cpu, 1);
+                    cpu->exception_index = EXCP_HLT;
+                    ret = 1;
+                }
             }
             break;
         /* these situations will continue to hax module */
-- 
2.17.1


* [Qemu-devel] [PATCH v6 42/73] i386/whpx-all: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (40 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 41/73] i386/hax-all: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 43/73] i386/hvf: convert to cpu_interrupt_request Emilio G. Cota
                   ` (31 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/whpx-all.c | 41 ++++++++++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index 9673bdc219..0d8cfa3a19 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -690,12 +690,14 @@ static int whpx_handle_portio(CPUState *cpu,
 static int whpx_handle_halt(CPUState *cpu)
 {
     struct CPUX86State *env = (CPUArchState *)(cpu->env_ptr);
+    uint32_t interrupt_request;
     int ret = 0;
 
     qemu_mutex_lock_iothread();
-    if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+    interrupt_request = cpu_interrupt_request(cpu);
+    if (!((interrupt_request & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
-        !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        !(interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu->exception_index = EXCP_HLT;
         cpu_halted_set(cpu, true);
         ret = 1;
@@ -713,6 +715,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     struct CPUX86State *env = (CPUArchState *)(cpu->env_ptr);
     X86CPU *x86_cpu = X86_CPU(cpu);
     int irq;
+    uint32_t interrupt_request;
     uint8_t tpr;
     WHV_X64_PENDING_INTERRUPTION_REGISTER new_int;
     UINT32 reg_count = 0;
@@ -724,17 +727,19 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
     qemu_mutex_lock_iothread();
 
+    interrupt_request = cpu_interrupt_request(cpu);
+
     /* Inject NMI */
     if (!vcpu->interruption_pending &&
-        cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
-        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
+        interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             vcpu->interruptable = false;
             new_int.InterruptionType = WHvX64PendingNmi;
             new_int.InterruptionPending = 1;
             new_int.InterruptionVector = 2;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
+        if (interrupt_request & CPU_INTERRUPT_SMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
         }
     }
@@ -743,12 +748,12 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
      * Force the VCPU out of its inner loop to process any INIT requests or
      * commit pending TPR access.
      */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
-        if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if (interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        if ((interrupt_request & CPU_INTERRUPT_INIT) &&
             !(env->hflags & HF_SMM_MASK)) {
             cpu->exit_request = 1;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        if (interrupt_request & CPU_INTERRUPT_TPR) {
             cpu->exit_request = 1;
         }
     }
@@ -757,7 +762,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     if (!vcpu->interruption_pending &&
         vcpu->interruptable && (env->eflags & IF_MASK)) {
         assert(!new_int.InterruptionPending);
-        if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        if (interrupt_request & CPU_INTERRUPT_HARD) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
@@ -787,7 +792,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
     /* Update the state of the interrupt delivery notification */
     if (!vcpu->window_registered &&
-        cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) {
         reg_values[reg_count].DeliverabilityNotifications.InterruptNotification
             = 1;
         vcpu->window_registered = 1;
@@ -840,8 +845,9 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     struct CPUX86State *env = (CPUArchState *)(cpu->env_ptr);
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    uint32_t interrupt_request;
 
-    if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
 
         do_cpu_init(x86_cpu);
@@ -849,25 +855,26 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
         vcpu->interruptable = true;
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
-    if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+    interrupt_request = cpu_interrupt_request(cpu);
+    if (((interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
-        (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cpu, false);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (interrupt_request & CPU_INTERRUPT_SIPI) {
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
         }
         do_cpu_sipi(x86_cpu);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
@@ -1350,7 +1357,7 @@ static void whpx_memory_init(void)
 
 static void whpx_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
-- 
2.17.1
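A recurring pattern in this and the previous patch: wherever several interrupt flags are tested together (e.g. the HARD/NMI pair in the HLT handlers), the converted code reads the lock-protected field once into a local `interrupt_request`, so every related test sees one consistent snapshot and the per-CPU lock is taken only once rather than per test. A reduced illustration of the halt predicate — flag values here are invented for the sketch, not QEMU's CPU_INTERRUPT_* constants:

```c
#include <stdint.h>

/* Illustrative stand-ins for CPU_INTERRUPT_HARD / CPU_INTERRUPT_NMI. */
#define SKETCH_INTERRUPT_HARD (1u << 0)
#define SKETCH_INTERRUPT_NMI  (1u << 1)

/*
 * Takes a snapshot of interrupt_request rather than an accessor:
 * two separate accessor calls each take and drop the per-CPU lock,
 * so they could observe two different values.  Testing one snapshot
 * keeps the HARD and NMI checks mutually consistent.
 */
static int should_halt(uint32_t interrupt_request)
{
    return !(interrupt_request & SKETCH_INTERRUPT_HARD) &&
           !(interrupt_request & SKETCH_INTERRUPT_NMI);
}
```

Callers do `uint32_t ir = cpu_interrupt_request(cpu);` once and then test `ir`, exactly as the HAX_EXIT_HLT and whpx_handle_halt hunks above do.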

* [Qemu-devel] [PATCH v6 43/73] i386/hvf: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (41 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 42/73] i386/whpx-all: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 44/73] ppc: convert to cpu_interrupt_request Emilio G. Cota
                   ` (30 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/hvf/hvf.c    |  8 +++++---
 target/i386/hvf/x86hvf.c | 26 +++++++++++++++-----------
 2 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index c1ff220985..619a4dd565 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -249,7 +249,7 @@ void update_apic_tpr(CPUState *cpu)
 
 static void hvf_handle_interrupt(CPUState * cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
     }
@@ -706,10 +706,12 @@ int hvf_vcpu_exec(CPUState *cpu)
         ret = 0;
         switch (exit_reason) {
         case EXIT_REASON_HLT: {
+            uint32_t interrupt_request = cpu_interrupt_request(cpu);
+
             macvm_set_rip(cpu, rip + ins_len);
-            if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+            if (!((interrupt_request & CPU_INTERRUPT_HARD) &&
                 (EFLAGS(env) & IF_MASK))
-                && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
+                && !(interrupt_request & CPU_INTERRUPT_NMI) &&
                 !(idtvec_info & VMCS_IDT_VEC_VALID)) {
                 cpu_halted_set(cpu, 1);
                 ret = EXCP_HLT;
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index e8b13ed534..91feafeedc 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -358,6 +358,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
 
     uint8_t vector;
     uint64_t intr_type;
+    uint32_t interrupt_request;
     bool have_event = true;
     if (env->interrupt_injected != -1) {
         vector = env->interrupt_injected;
@@ -400,7 +401,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         };
     }
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_NMI) {
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
             cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
@@ -411,7 +412,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
     }
 
     if (!(env->hflags & HF_INHIBIT_IRQ_MASK) &&
-        (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
+        (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
         int line = cpu_get_pic_interrupt(&x86cpu->env);
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);
@@ -420,39 +421,42 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
         }
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) {
         vmx_set_int_window_exiting(cpu_state);
     }
-    return (cpu_state->interrupt_request
-            & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR));
+    return cpu_interrupt_request(cpu_state) & (CPU_INTERRUPT_INIT |
+                                               CPU_INTERRUPT_TPR);
 }
 
 int hvf_process_events(CPUState *cpu_state)
 {
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
+    uint32_t interrupt_request;
 
     EFLAGS(env) = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_INIT) {
         hvf_cpu_synchronize_state(cpu_state);
         do_cpu_init(cpu);
     }
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
-    if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
+
+    interrupt_request = cpu_interrupt_request(cpu_state);
+    if (((interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK)) ||
-        (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cpu_state, 0);
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (interrupt_request & CPU_INTERRUPT_SIPI) {
         hvf_cpu_synchronize_state(cpu_state);
         do_cpu_sipi(cpu);
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_TPR);
         hvf_cpu_synchronize_state(cpu_state);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
-- 
2.17.1

* [Qemu-devel] [PATCH v6 44/73] ppc: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (42 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 43/73] i386/hvf: convert to cpu_interrupt_request Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 45/73] sh4: " Emilio G. Cota
                   ` (29 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, David Gibson, qemu-ppc

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/ppc/ppc.c                    |  2 +-
 target/ppc/excp_helper.c        |  2 +-
 target/ppc/kvm.c                |  4 ++--
 target/ppc/translate_init.inc.c | 14 +++++++-------
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index d1a5a0b877..bc1cefa13f 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -91,7 +91,7 @@ void ppc_set_irq(PowerPCCPU *cpu, int n_IRQ, int level)
 
     LOG_IRQ("%s: %p n_IRQ %d level %d => pending %08" PRIx32
                 "req %08x\n", __func__, env, n_IRQ, level,
-                env->pending_interrupts, CPU(cpu)->interrupt_request);
+                env->pending_interrupts, cpu_interrupt_request(CPU(cpu)));
 
     if (locked) {
         qemu_mutex_unlock_iothread();
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 737c9c72be..75a434f46b 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -753,7 +753,7 @@ static void ppc_hw_interrupt(CPUPPCState *env)
 
     qemu_log_mask(CPU_LOG_INT, "%s: %p pending %08x req %08x me %d ee %d\n",
                   __func__, env, env->pending_interrupts,
-                  cs->interrupt_request, (int)msr_me, (int)msr_ee);
+                  cpu_interrupt_request(cs), (int)msr_me, (int)msr_ee);
 #endif
     /* External reset */
     if (env->pending_interrupts & (1 << PPC_INTERRUPT_RESET)) {
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index 0efdb71532..50919da6af 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -1340,7 +1340,7 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
      * interrupt, reset, etc) in PPC-specific env->irq_input_state. */
     if (!cap_interrupt_level &&
         run->ready_for_interrupt_injection &&
-        (cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+        (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
         (env->irq_input_state & (1<<PPC_INPUT_INT)))
     {
         /* For now KVM disregards the 'irq' argument. However, in the
@@ -1382,7 +1382,7 @@ static int kvmppc_handle_halt(PowerPCCPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
 
-    if (!(cs->interrupt_request & CPU_INTERRUPT_HARD) && (msr_ee)) {
+    if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) && (msr_ee)) {
         cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index a757e02f52..e1059d9ed6 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8455,7 +8455,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         if ((env->pending_interrupts & (1u << PPC_INTERRUPT_EXT)) &&
@@ -8479,7 +8479,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -8609,7 +8609,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         if ((env->pending_interrupts & (1u << PPC_INTERRUPT_EXT)) &&
@@ -8641,7 +8641,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -8801,7 +8801,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         /* External Exception */
@@ -8834,7 +8834,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -10247,7 +10247,7 @@ static bool ppc_cpu_has_work(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+    return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
 }
 
 /* CPUClass::reset() */
-- 
2.17.1
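The POWER7/8/9 cpu_has_work hooks converted above all share one shape: a halted CPU wakes only for a pending hard interrupt that is also individually enabled, while a running CPU has work iff msr_ee is set and a hard interrupt is pending. A reduced, hypothetical model of that shape — field names and flag values are invented for illustration, and the real hooks additionally check per-interrupt LPCR enable bits:

```c
#include <stdbool.h>
#include <stdint.h>

#define SKETCH_INTERRUPT_HARD (1u << 0)  /* stand-in for CPU_INTERRUPT_HARD */

/* Reduced stand-in for the state the real hooks consult. */
struct sketch_cpu {
    bool halted;                 /* cpu_halted(cs) */
    bool msr_ee;                 /* external interrupts enabled */
    uint32_t interrupt_request;  /* value of cpu_interrupt_request(cs) */
    uint32_t pending_enabled;    /* pending_interrupts masked by enables */
};

static bool sketch_cpu_has_work(const struct sketch_cpu *cs)
{
    if (cs->halted) {
        /* Halted: wake only for a pending, enabled hard interrupt. */
        if (!(cs->interrupt_request & SKETCH_INTERRUPT_HARD)) {
            return false;
        }
        return cs->pending_enabled != 0;
    }
    /* Running: work exists iff EE is set and a hard interrupt pends. */
    return cs->msr_ee && (cs->interrupt_request & SKETCH_INTERRUPT_HARD);
}
```

Because the hook now goes through cpu_interrupt_request(cs), it can be called without the BQL held, which is what the later patches in the series rely on.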

* [Qemu-devel] [PATCH v6 45/73] sh4: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (43 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 44/73] ppc: convert to cpu_interrupt_request Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 46/73] cris: " Emilio G. Cota
                   ` (28 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Aurelien Jarno

Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sh4/cpu.c    | 2 +-
 target/sh4/helper.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index b9f393b7c7..58ea212f53 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -45,7 +45,7 @@ static void superh_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
 
 static bool superh_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 /* CPUClass::reset() */
diff --git a/target/sh4/helper.c b/target/sh4/helper.c
index 2ff0cf4060..8463da5bc8 100644
--- a/target/sh4/helper.c
+++ b/target/sh4/helper.c
@@ -83,7 +83,7 @@ void superh_cpu_do_interrupt(CPUState *cs)
 {
     SuperHCPU *cpu = SUPERH_CPU(cs);
     CPUSH4State *env = &cpu->env;
-    int do_irq = cs->interrupt_request & CPU_INTERRUPT_HARD;
+    int do_irq = cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
     int do_exp, irq_vector = cs->exception_index;
 
     /* prioritize exceptions over interrupts */
-- 
2.17.1

* [Qemu-devel] [PATCH v6 46/73] cris: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (44 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 45/73] sh4: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 47/73] hppa: " Emilio G. Cota
                   ` (27 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/cris/cpu.c    | 2 +-
 target/cris/helper.c | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/cris/cpu.c b/target/cris/cpu.c
index a23aba2688..3cdba581e6 100644
--- a/target/cris/cpu.c
+++ b/target/cris/cpu.c
@@ -37,7 +37,7 @@ static void cris_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool cris_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 /* CPUClass::reset() */
diff --git a/target/cris/helper.c b/target/cris/helper.c
index b2dbb2075c..5c453d5221 100644
--- a/target/cris/helper.c
+++ b/target/cris/helper.c
@@ -116,7 +116,7 @@ int cris_cpu_handle_mmu_fault(CPUState *cs, vaddr address, int size, int rw,
     if (r > 0) {
         qemu_log_mask(CPU_LOG_MMU,
                 "%s returns %d irqreq=%x addr=%" VADDR_PRIx " phy=%x vec=%x"
-                " pc=%x\n", __func__, r, cs->interrupt_request, address,
+                " pc=%x\n", __func__, r, cpu_interrupt_request(cs), address,
                 res.phy, res.bf_vec, env->pc);
     }
     return r;
@@ -130,7 +130,7 @@ void crisv10_cpu_do_interrupt(CPUState *cs)
 
     D_LOG("exception index=%d interrupt_req=%d\n",
           cs->exception_index,
-          cs->interrupt_request);
+          cpu_interrupt_request(cs));
 
     if (env->dslot) {
         /* CRISv10 never takes interrupts while in a delay-slot.  */
@@ -192,7 +192,7 @@ void cris_cpu_do_interrupt(CPUState *cs)
 
     D_LOG("exception index=%d interrupt_req=%d\n",
           cs->exception_index,
-          cs->interrupt_request);
+          cpu_interrupt_request(cs));
 
     switch (cs->exception_index) {
     case EXCP_BREAK:
-- 
2.17.1

* [Qemu-devel] [PATCH v6 47/73] hppa: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (45 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 46/73] cris: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 48/73] lm32: " Emilio G. Cota
                   ` (26 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/hppa/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 00bf444620..1ab4e62850 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -60,7 +60,7 @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
 
 static bool hppa_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void hppa_cpu_disas_set_info(CPUState *cs, disassemble_info *info)
-- 
2.17.1

* [Qemu-devel] [PATCH v6 48/73] lm32: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (46 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 47/73] hppa: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 49/73] m68k: " Emilio G. Cota
                   ` (25 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Michael Walle

Cc: Michael Walle <michael@walle.cc>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/lm32/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/lm32/cpu.c b/target/lm32/cpu.c
index b7499cb627..1508bb6199 100644
--- a/target/lm32/cpu.c
+++ b/target/lm32/cpu.c
@@ -101,7 +101,7 @@ static void lm32_cpu_init_cfg_reg(LM32CPU *cpu)
 
 static bool lm32_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 /* CPUClass::reset() */
-- 
2.17.1

* [Qemu-devel] [PATCH v6 49/73] m68k: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (47 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 48/73] lm32: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 50/73] mips: " Emilio G. Cota
                   ` (24 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index 582e3a73b3..99a7eb4340 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -34,7 +34,7 @@ static void m68k_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool m68k_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void m68k_set_feature(CPUM68KState *env, int feature)
-- 
2.17.1

* [Qemu-devel] [PATCH v6 50/73] mips: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (48 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 49/73] m68k: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 51/73] nios: " Emilio G. Cota
                   ` (23 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Aurelien Jarno,
	Aleksandar Markovic, James Hogan

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: James Hogan <jhogan@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/mips/cpu.c | 7 ++++---
 target/mips/kvm.c | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index e217fb3e36..fdae8cf440 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -56,11 +56,12 @@ static bool mips_cpu_has_work(CPUState *cs)
     MIPSCPU *cpu = MIPS_CPU(cs);
     CPUMIPSState *env = &cpu->env;
     bool has_work = false;
+    uint32_t interrupt_request = cpu_interrupt_request(cs);
 
     /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
        interrupts wake-up the CPU, however most of the implementations only
        check for interrupts that can be taken. */
-    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if ((interrupt_request & CPU_INTERRUPT_HARD) &&
         cpu_mips_hw_interrupts_pending(env)) {
         if (cpu_mips_hw_interrupts_enabled(env) ||
             (env->insn_flags & ISA_MIPS32R6)) {
@@ -72,7 +73,7 @@ static bool mips_cpu_has_work(CPUState *cs)
     if (env->CP0_Config3 & (1 << CP0C3_MT)) {
         /* The QEMU model will issue an _WAKE request whenever the CPUs
            should be woken up.  */
-        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
+        if (interrupt_request & CPU_INTERRUPT_WAKE) {
             has_work = true;
         }
 
@@ -82,7 +83,7 @@ static bool mips_cpu_has_work(CPUState *cs)
     }
     /* MIPS Release 6 has the ability to halt the CPU.  */
     if (env->CP0_Config5 & (1 << CP0C5_VP)) {
-        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
+        if (interrupt_request & CPU_INTERRUPT_WAKE) {
             has_work = true;
         }
         if (!mips_vp_active(env)) {
diff --git a/target/mips/kvm.c b/target/mips/kvm.c
index 0b177a7577..568c3d8f4a 100644
--- a/target/mips/kvm.c
+++ b/target/mips/kvm.c
@@ -135,7 +135,7 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 
     qemu_mutex_lock_iothread();
 
-    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
             cpu_mips_io_interrupts_pending(cpu)) {
         intr.cpu = -1;
         intr.irq = 2;
-- 
2.17.1

* [Qemu-devel] [PATCH v6 51/73] nios: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (49 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 50/73] mips: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 52/73] s390x: " Emilio G. Cota
                   ` (22 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Chris Wulff, Marek Vasut

Cc: Chris Wulff <crwulff@gmail.com>
Cc: Marek Vasut <marex@denx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/nios2/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/nios2/cpu.c b/target/nios2/cpu.c
index fbfaa2ce26..49a75414d3 100644
--- a/target/nios2/cpu.c
+++ b/target/nios2/cpu.c
@@ -36,7 +36,7 @@ static void nios2_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool nios2_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 /* CPUClass::reset() */
-- 
2.17.1

* [Qemu-devel] [PATCH v6 52/73] s390x: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (50 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 51/73] nios: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 53/73] alpha: " Emilio G. Cota
                   ` (21 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Cornelia Huck,
	Christian Borntraeger, David Hildenbrand, qemu-s390x

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/intc/s390_flic.c | 2 +-
 target/s390x/cpu.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index bfb5cf1d07..d944824e67 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -189,7 +189,7 @@ static void qemu_s390_flic_notify(uint32_t type)
     CPU_FOREACH(cs) {
         S390CPU *cpu = S390_CPU(cs);
 
-        cs->interrupt_request |= CPU_INTERRUPT_HARD;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_HARD);
 
         /* ignore CPUs that are not sleeping */
         if (s390_cpu_get_state(cpu) != S390_CPU_STATE_OPERATING &&
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 4d70ba785c..d1594c90d9 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -65,7 +65,7 @@ static bool s390_cpu_has_work(CPUState *cs)
         return false;
     }
 
-    if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+    if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
         return false;
     }
 
-- 
2.17.1

* [Qemu-devel] [PATCH v6 53/73] alpha: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (51 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 52/73] s390x: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 54/73] moxie: " Emilio G. Cota
                   ` (20 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/alpha/cpu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index 1fd95d6c0f..cebd459251 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -42,10 +42,10 @@ static bool alpha_cpu_has_work(CPUState *cs)
        assume that if a CPU really wants to stay asleep, it will mask
        interrupts at the chipset level, which will prevent these bits
        from being set in the first place.  */
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD
-                                    | CPU_INTERRUPT_TIMER
-                                    | CPU_INTERRUPT_SMP
-                                    | CPU_INTERRUPT_MCHK);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD
+                                        | CPU_INTERRUPT_TIMER
+                                        | CPU_INTERRUPT_SMP
+                                        | CPU_INTERRUPT_MCHK);
 }
 
 static void alpha_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
-- 
2.17.1

* [Qemu-devel] [PATCH v6 54/73] moxie: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (52 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 53/73] alpha: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 55/73] sparc: " Emilio G. Cota
                   ` (19 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Anthony Green

Cc: Anthony Green <green@moxielogic.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/moxie/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/moxie/cpu.c b/target/moxie/cpu.c
index 8d67eb6727..bad92cfc61 100644
--- a/target/moxie/cpu.c
+++ b/target/moxie/cpu.c
@@ -33,7 +33,7 @@ static void moxie_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool moxie_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void moxie_cpu_reset(CPUState *s)
-- 
2.17.1

* [Qemu-devel] [PATCH v6 55/73] sparc: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (53 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 54/73] moxie: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 56/73] openrisc: " Emilio G. Cota
                   ` (18 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Mark Cave-Ayland, Artyom Tarasenko

Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/sparc64/sparc64.c | 4 ++--
 target/sparc/cpu.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/sparc64/sparc64.c b/hw/sparc64/sparc64.c
index 372bbd4f5b..58faeb111a 100644
--- a/hw/sparc64/sparc64.c
+++ b/hw/sparc64/sparc64.c
@@ -56,7 +56,7 @@ void cpu_check_irqs(CPUSPARCState *env)
     /* The bit corresponding to psrpil is (1<< psrpil), the next bit
        is (2 << psrpil). */
     if (pil < (2 << env->psrpil)) {
-        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
             trace_sparc64_cpu_check_irqs_reset_irq(env->interrupt_index);
             env->interrupt_index = 0;
             cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
@@ -87,7 +87,7 @@ void cpu_check_irqs(CPUSPARCState *env)
                 break;
             }
         }
-    } else if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+    } else if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
         trace_sparc64_cpu_check_irqs_disabled(pil, env->pil_in, env->softint,
                                               env->interrupt_index);
         env->interrupt_index = 0;
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 4a4445bdf5..933fd8e954 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -708,7 +708,7 @@ static bool sparc_cpu_has_work(CPUState *cs)
     SPARCCPU *cpu = SPARC_CPU(cs);
     CPUSPARCState *env = &cpu->env;
 
-    return (cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    return (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
            cpu_interrupts_enabled(env);
 }
 
-- 
2.17.1

* [Qemu-devel] [PATCH v6 56/73] openrisc: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (54 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 55/73] sparc: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 57/73] unicore32: " Emilio G. Cota
                   ` (17 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Stafford Horne

Cc: Stafford Horne <shorne@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/openrisc/cputimer.c | 2 +-
 target/openrisc/cpu.c  | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/openrisc/cputimer.c b/hw/openrisc/cputimer.c
index 850f88761c..739404e4f5 100644
--- a/hw/openrisc/cputimer.c
+++ b/hw/openrisc/cputimer.c
@@ -102,7 +102,7 @@ static void openrisc_timer_cb(void *opaque)
         CPUState *cs = CPU(cpu);
 
         cpu->env.ttmr |= TTMR_IP;
-        cs->interrupt_request |= CPU_INTERRUPT_TIMER;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_TIMER);
     }
 
     switch (cpu->env.ttmr & TTMR_M) {
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index fb7cb5c507..cdbc9353b7 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -32,8 +32,8 @@ static void openrisc_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool openrisc_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD |
-                                    CPU_INTERRUPT_TIMER);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD |
+                                        CPU_INTERRUPT_TIMER);
 }
 
 static void openrisc_disas_set_info(CPUState *cpu, disassemble_info *info)
-- 
2.17.1

* [Qemu-devel] [PATCH v6 57/73] unicore32: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (55 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 56/73] openrisc: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 58/73] microblaze: " Emilio G. Cota
                   ` (16 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Guan Xuetao

Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/unicore32/cpu.c     | 2 +-
 target/unicore32/softmmu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/unicore32/cpu.c b/target/unicore32/cpu.c
index 2b49d1ca40..65c5334551 100644
--- a/target/unicore32/cpu.c
+++ b/target/unicore32/cpu.c
@@ -29,7 +29,7 @@ static void uc32_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool uc32_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request &
+    return cpu_interrupt_request(cs) &
         (CPU_INTERRUPT_HARD | CPU_INTERRUPT_EXITTB);
 }
 
diff --git a/target/unicore32/softmmu.c b/target/unicore32/softmmu.c
index 00c7e0d028..f58e2361e0 100644
--- a/target/unicore32/softmmu.c
+++ b/target/unicore32/softmmu.c
@@ -119,7 +119,7 @@ void uc32_cpu_do_interrupt(CPUState *cs)
     /* The PC already points to the proper instruction.  */
     env->regs[30] = env->regs[31];
     env->regs[31] = addr;
-    cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
+    cpu_interrupt_request_or(cs, CPU_INTERRUPT_EXITTB);
 }
 
 static int get_phys_addr_ucv2(CPUUniCore32State *env, uint32_t address,
-- 
2.17.1

* [Qemu-devel] [PATCH v6 58/73] microblaze: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (56 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 57/73] unicore32: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 59/73] accel/tcg: " Emilio G. Cota
                   ` (15 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/microblaze/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index 5596cd5485..0cdd7fe917 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -84,7 +84,7 @@ static void mb_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool mb_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 #ifndef CONFIG_USER_ONLY
-- 
2.17.1

* [Qemu-devel] [PATCH v6 59/73] accel/tcg: convert to cpu_interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (57 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 58/73] microblaze: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request Emilio G. Cota
                   ` (14 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cpu-exec.c      | 15 ++++++++-------
 accel/tcg/tcg-all.c       | 12 +++++++++---
 accel/tcg/translate-all.c |  2 +-
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index e3d72897e8..e4ae04f72c 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -431,7 +431,7 @@ static inline bool cpu_handle_halt_locked(CPUState *cpu)
 
     if (cpu_halted(cpu)) {
 #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
-        if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
 
@@ -544,16 +544,17 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
      */
     atomic_mb_set(&cpu->icount_decr.u16.high, 0);
 
-    if (unlikely(atomic_read(&cpu->interrupt_request))) {
+    if (unlikely(cpu_interrupt_request(cpu))) {
         int interrupt_request;
+
         qemu_mutex_lock_iothread();
-        interrupt_request = cpu->interrupt_request;
+        interrupt_request = cpu_interrupt_request(cpu);
         if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
             /* Mask out external interrupts for this step. */
             interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
         }
         if (interrupt_request & CPU_INTERRUPT_DEBUG) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
             cpu->exception_index = EXCP_DEBUG;
             qemu_mutex_unlock_iothread();
             return true;
@@ -562,7 +563,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             /* Do nothing */
         } else if (interrupt_request & CPU_INTERRUPT_HALT) {
             replay_interrupt();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
             cpu_halted_set(cpu, 1);
             cpu->exception_index = EXCP_HLT;
             qemu_mutex_unlock_iothread();
@@ -599,10 +600,10 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             }
             /* The target hook may have updated the 'cpu->interrupt_request';
              * reload the 'interrupt_request' value */
-            interrupt_request = cpu->interrupt_request;
+            interrupt_request = cpu_interrupt_request(cpu);
         }
         if (interrupt_request & CPU_INTERRUPT_EXITTB) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_EXITTB);
             /* ensure that no TB jump will be modified as
                the program flow was changed */
             *last_tb = NULL;
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
index 3d25bdcc17..4e2fe70350 100644
--- a/accel/tcg/tcg-all.c
+++ b/accel/tcg/tcg-all.c
@@ -39,10 +39,16 @@ unsigned long tcg_tb_size;
 static void tcg_handle_interrupt(CPUState *cpu, int mask)
 {
     int old_mask;
-    g_assert(qemu_mutex_iothread_locked());
 
-    old_mask = cpu->interrupt_request;
-    cpu->interrupt_request |= mask;
+    if (!cpu_mutex_locked(cpu)) {
+        cpu_mutex_lock(cpu);
+        old_mask = cpu_interrupt_request(cpu);
+        cpu_interrupt_request_or(cpu, mask);
+        cpu_mutex_unlock(cpu);
+    } else {
+        old_mask = cpu_interrupt_request(cpu);
+        cpu_interrupt_request_or(cpu, mask);
+    }
 
     /*
      * If called from iothread context, wake the target cpu in
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 7364e8a071..ec708208b6 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -2343,7 +2343,7 @@ void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf)
 void cpu_interrupt(CPUState *cpu, int mask)
 {
     g_assert(qemu_mutex_iothread_locked());
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
     atomic_set(&cpu->icount_decr.u16.high, -1);
 }
 
-- 
2.17.1

* [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (58 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 59/73] accel/tcg: " Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-08 11:21   ` Alex Bennée
  2019-02-20 16:55   ` Richard Henderson
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 61/73] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
                   ` (13 subsequent siblings)
  73 siblings, 2 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

This finishes the conversion to interrupt_request.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 qom/cpu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/qom/cpu.c b/qom/cpu.c
index 00add81a7f..f2695be9b2 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -275,7 +275,7 @@ static void cpu_common_reset(CPUState *cpu)
         log_cpu_state(cpu, cc->reset_dump_flags);
     }
 
-    cpu->interrupt_request = 0;
+    cpu_interrupt_request_set(cpu, 0);
     cpu_halted_set(cpu, 0);
     cpu->mem_io_pc = 0;
     cpu->mem_io_vaddr = 0;
@@ -412,7 +412,7 @@ static vaddr cpu_adjust_watchpoint_address(CPUState *cpu, vaddr addr, int len)
 
 static void generic_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
-- 
2.17.1

* [Qemu-devel] [PATCH v6 61/73] cpu: call .cpu_has_work with the CPU lock held
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (59 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request Emilio G. Cota
@ 2019-01-30  0:47 ` Emilio G. Cota
  2019-02-08 11:22   ` Alex Bennée
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 62/73] cpu: introduce cpu_has_work_with_iothread_lock Emilio G. Cota
                   ` (12 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 4a87c1fef7..96a5d0cb94 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -85,7 +85,8 @@ struct TranslationBlock;
  * @parse_features: Callback to parse command line arguments.
  * @reset: Callback to reset the #CPUState to its initial state.
  * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
- * @has_work: Callback for checking if there is work to do.
+ * @has_work: Callback for checking if there is work to do. Called with the
+ * CPU lock held.
  * @do_interrupt: Callback for interrupt handling.
  * @do_unassigned_access: Callback for unassigned access handling.
  * (this is deprecated: new targets should use do_transaction_failed instead)
@@ -795,9 +796,16 @@ const char *parse_cpu_model(const char *cpu_model);
 static inline bool cpu_has_work(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
+    bool ret;
 
     g_assert(cc->has_work);
-    return cc->has_work(cpu);
+    if (cpu_mutex_locked(cpu)) {
+        return cc->has_work(cpu);
+    }
+    cpu_mutex_lock(cpu);
+    ret = cc->has_work(cpu);
+    cpu_mutex_unlock(cpu);
+    return ret;
 }
 
 /**
-- 
2.17.1

* [Qemu-devel] [PATCH v6 62/73] cpu: introduce cpu_has_work_with_iothread_lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (60 preceding siblings ...)
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 61/73] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-02-08 11:33   ` Alex Bennée
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 63/73] ppc: convert to cpu_has_work_with_iothread_lock Emilio G. Cota
                   ` (11 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

It will gain some users soon.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 96a5d0cb94..27a80bc113 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -27,6 +27,7 @@
 #include "qapi/qapi-types-run-state.h"
 #include "qemu/bitmap.h"
 #include "qemu/fprintf-fn.h"
+#include "qemu/main-loop.h"
 #include "qemu/rcu_queue.h"
 #include "qemu/queue.h"
 #include "qemu/thread.h"
@@ -87,6 +88,8 @@ struct TranslationBlock;
  * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
  * @has_work: Callback for checking if there is work to do. Called with the
  * CPU lock held.
+ * @has_work_with_iothread_lock: Callback for checking if there is work to do.
+ * Called with both the BQL and the CPU lock held.
  * @do_interrupt: Callback for interrupt handling.
  * @do_unassigned_access: Callback for unassigned access handling.
  * (this is deprecated: new targets should use do_transaction_failed instead)
@@ -158,6 +161,7 @@ typedef struct CPUClass {
     void (*reset)(CPUState *cpu);
     int reset_dump_flags;
     bool (*has_work)(CPUState *cpu);
+    bool (*has_work_with_iothread_lock)(CPUState *cpu);
     void (*do_interrupt)(CPUState *cpu);
     CPUUnassignedAccess do_unassigned_access;
     void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
@@ -796,14 +800,40 @@ const char *parse_cpu_model(const char *cpu_model);
 static inline bool cpu_has_work(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
+    bool has_cpu_lock = cpu_mutex_locked(cpu);
+    bool (*func)(CPUState *cpu);
     bool ret;
 
+    if (cc->has_work_with_iothread_lock) {
+        if (qemu_mutex_iothread_locked()) {
+            func = cc->has_work_with_iothread_lock;
+            goto call_func;
+        }
+
+        if (has_cpu_lock) {
+            /* avoid deadlock by acquiring the locks in order */
+            cpu_mutex_unlock(cpu);
+        }
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cpu);
+
+        ret = cc->has_work_with_iothread_lock(cpu);
+
+        qemu_mutex_unlock_iothread();
+        if (!has_cpu_lock) {
+            cpu_mutex_unlock(cpu);
+        }
+        return ret;
+    }
+
     g_assert(cc->has_work);
-    if (cpu_mutex_locked(cpu)) {
-        return cc->has_work(cpu);
+    func = cc->has_work;
+ call_func:
+    if (has_cpu_lock) {
+        return func(cpu);
     }
     cpu_mutex_lock(cpu);
-    ret = cc->has_work(cpu);
+    ret = func(cpu);
     cpu_mutex_unlock(cpu);
     return ret;
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v6 63/73] ppc: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (61 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 62/73] cpu: introduce cpu_has_work_with_iothread_lock Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 64/73] mips: " Emilio G. Cota
                   ` (10 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, David Gibson, qemu-ppc

Soon we will call cpu_has_work without the BQL.

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/translate_init.inc.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index e1059d9ed6..78bb766f3f 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8454,6 +8454,8 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8497,7 +8499,7 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
     pcc->pcr_supported = PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
     pcc->init_proc = init_proc_POWER7;
     pcc->check_pow = check_pow_nocheck;
-    cc->has_work = cpu_has_work_POWER7;
+    cc->has_work_with_iothread_lock = cpu_has_work_POWER7;
     pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
                        PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
                        PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
@@ -8608,6 +8610,8 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8659,7 +8663,7 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
     pcc->pcr_supported = PCR_COMPAT_2_07 | PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
     pcc->init_proc = init_proc_POWER8;
     pcc->check_pow = check_pow_nocheck;
-    cc->has_work = cpu_has_work_POWER8;
+    cc->has_work_with_iothread_lock = cpu_has_work_POWER8;
     pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
                        PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
                        PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
@@ -8800,6 +8804,8 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8853,7 +8859,7 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
                          PCR_COMPAT_2_05;
     pcc->init_proc = init_proc_POWER9;
     pcc->check_pow = check_pow_nocheck;
-    cc->has_work = cpu_has_work_POWER9;
+    cc->has_work_with_iothread_lock = cpu_has_work_POWER9;
     pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
                        PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
                        PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
@@ -10247,6 +10253,8 @@ static bool ppc_cpu_has_work(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
 }
 
@@ -10446,7 +10454,7 @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
     cc->class_by_name = ppc_cpu_class_by_name;
     pcc->parent_parse_features = cc->parse_features;
     cc->parse_features = ppc_cpu_parse_featurestr;
-    cc->has_work = ppc_cpu_has_work;
+    cc->has_work_with_iothread_lock = ppc_cpu_has_work;
     cc->do_interrupt = ppc_cpu_do_interrupt;
     cc->cpu_exec_interrupt = ppc_cpu_exec_interrupt;
     cc->dump_state = ppc_cpu_dump_state;
-- 
2.17.1

* [Qemu-devel] [PATCH v6 64/73] mips: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (62 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 63/73] ppc: convert to cpu_has_work_with_iothread_lock Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 65/73] s390x: " Emilio G. Cota
                   ` (9 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Aurelien Jarno, Aleksandar Markovic

Soon we will call cpu_has_work without the BQL.

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/mips/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index fdae8cf440..de1eb771fa 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -58,6 +58,8 @@ static bool mips_cpu_has_work(CPUState *cs)
     bool has_work = false;
     uint32_t interrupt_request = cpu_interrupt_request(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
        interrupts wake-up the CPU, however most of the implementations only
        check for interrupts that can be taken. */
@@ -190,7 +192,7 @@ static void mips_cpu_class_init(ObjectClass *c, void *data)
     cc->reset = mips_cpu_reset;
 
     cc->class_by_name = mips_cpu_class_by_name;
-    cc->has_work = mips_cpu_has_work;
+    cc->has_work_with_iothread_lock = mips_cpu_has_work;
     cc->do_interrupt = mips_cpu_do_interrupt;
     cc->cpu_exec_interrupt = mips_cpu_exec_interrupt;
     cc->dump_state = mips_cpu_dump_state;
-- 
2.17.1

* [Qemu-devel] [PATCH v6 65/73] s390x: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (63 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 64/73] mips: " Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-01-30 10:35   ` Cornelia Huck
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 66/73] riscv: " Emilio G. Cota
                   ` (8 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Cornelia Huck,
	David Hildenbrand, qemu-s390x

Soon we will call cpu_has_work without the BQL.

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/s390x/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index d1594c90d9..5c38abaa49 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -59,6 +59,8 @@ static bool s390_cpu_has_work(CPUState *cs)
 {
     S390CPU *cpu = S390_CPU(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     /* STOPPED cpus can never wake up */
     if (s390_cpu_get_state(cpu) != S390_CPU_STATE_LOAD &&
         s390_cpu_get_state(cpu) != S390_CPU_STATE_OPERATING) {
@@ -473,7 +475,7 @@ static void s390_cpu_class_init(ObjectClass *oc, void *data)
     scc->initial_cpu_reset = s390_cpu_initial_reset;
     cc->reset = s390_cpu_full_reset;
     cc->class_by_name = s390_cpu_class_by_name,
-    cc->has_work = s390_cpu_has_work;
+    cc->has_work_with_iothread_lock = s390_cpu_has_work;
 #ifdef CONFIG_TCG
     cc->do_interrupt = s390_cpu_do_interrupt;
 #endif
-- 
2.17.1


* [Qemu-devel] [PATCH v6 66/73] riscv: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (64 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 65/73] s390x: " Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-02-06 23:51   ` Alistair Francis
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 67/73] sparc: " Emilio G. Cota
                   ` (7 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Palmer Dabbelt,
	Sagar Karandikar, Bastian Koppelmann

Soon we will call cpu_has_work without the BQL.

Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/riscv/cpu.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 28d7e5302f..7b36c09fe0 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -257,6 +257,9 @@ static bool riscv_cpu_has_work(CPUState *cs)
 #ifndef CONFIG_USER_ONLY
     RISCVCPU *cpu = RISCV_CPU(cs);
     CPURISCVState *env = &cpu->env;
+
+    g_assert(qemu_mutex_iothread_locked());
+
     /*
      * Definition of the WFI instruction requires it to ignore the privilege
      * mode and delegation registers, but respect individual enables
@@ -343,7 +346,7 @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
     cc->reset = riscv_cpu_reset;
 
     cc->class_by_name = riscv_cpu_class_by_name;
-    cc->has_work = riscv_cpu_has_work;
+    cc->has_work_with_iothread_lock = riscv_cpu_has_work;
     cc->do_interrupt = riscv_cpu_do_interrupt;
     cc->cpu_exec_interrupt = riscv_cpu_exec_interrupt;
     cc->dump_state = riscv_cpu_dump_state;
-- 
2.17.1


* [Qemu-devel] [PATCH v6 67/73] sparc: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (65 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 66/73] riscv: " Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 68/73] xtensa: " Emilio G. Cota
                   ` (6 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Paolo Bonzini, Mark Cave-Ayland, Artyom Tarasenko

Soon we will call cpu_has_work without the BQL.

Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sparc/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 933fd8e954..b15f6ef180 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -708,6 +708,8 @@ static bool sparc_cpu_has_work(CPUState *cs)
     SPARCCPU *cpu = SPARC_CPU(cs);
     CPUSPARCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
            cpu_interrupts_enabled(env);
 }
@@ -869,7 +871,7 @@ static void sparc_cpu_class_init(ObjectClass *oc, void *data)
 
     cc->class_by_name = sparc_cpu_class_by_name;
     cc->parse_features = sparc_cpu_parse_features;
-    cc->has_work = sparc_cpu_has_work;
+    cc->has_work_with_iothread_lock = sparc_cpu_has_work;
     cc->do_interrupt = sparc_cpu_do_interrupt;
     cc->cpu_exec_interrupt = sparc_cpu_exec_interrupt;
     cc->dump_state = sparc_cpu_dump_state;
-- 
2.17.1


* [Qemu-devel] [PATCH v6 68/73] xtensa: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (66 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 67/73] sparc: " Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
                   ` (5 subsequent siblings)
  73 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini, Max Filippov

Soon we will call cpu_has_work without the BQL.

Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/xtensa/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index d4ca35e6cc..5f3b4a70b0 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -47,6 +47,8 @@ static bool xtensa_cpu_has_work(CPUState *cs)
 #ifndef CONFIG_USER_ONLY
     XtensaCPU *cpu = XTENSA_CPU(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return !cpu->env.runstall && cpu->env.pending_irq_level;
 #else
     return true;
@@ -173,7 +175,7 @@ static void xtensa_cpu_class_init(ObjectClass *oc, void *data)
     cc->reset = xtensa_cpu_reset;
 
     cc->class_by_name = xtensa_cpu_class_by_name;
-    cc->has_work = xtensa_cpu_has_work;
+    cc->has_work_with_iothread_lock = xtensa_cpu_has_work;
     cc->do_interrupt = xtensa_cpu_do_interrupt;
     cc->cpu_exec_interrupt = xtensa_cpu_exec_interrupt;
     cc->dump_state = xtensa_cpu_dump_state;
-- 
2.17.1


* [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (67 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 68/73] xtensa: " Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-02-08 11:34   ` Alex Bennée
  2019-02-20 17:01   ` Richard Henderson
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
                   ` (4 subsequent siblings)
  73 siblings, 2 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

This function is only called from TCG rr mode, so rename it
with a qemu_tcg_rr_ prefix and add an assertion that we are
indeed in that mode.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 cpus.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/cpus.c b/cpus.c
index aee129c0b3..0d255c2655 100644
--- a/cpus.c
+++ b/cpus.c
@@ -211,10 +211,12 @@ static bool cpu_thread_is_idle(CPUState *cpu)
     return true;
 }
 
-static bool all_cpu_threads_idle(void)
+static bool qemu_tcg_rr_all_cpu_threads_idle(void)
 {
     CPUState *cpu;
 
+    g_assert(qemu_is_tcg_rr());
+
     CPU_FOREACH(cpu) {
         if (!cpu_thread_is_idle(cpu)) {
             return false;
@@ -692,7 +694,7 @@ void qemu_start_warp_timer(void)
     }
 
     if (replay_mode != REPLAY_MODE_PLAY) {
-        if (!all_cpu_threads_idle()) {
+        if (!qemu_tcg_rr_all_cpu_threads_idle()) {
             return;
         }
 
@@ -1325,7 +1327,7 @@ static void qemu_tcg_rr_wait_io_event(void)
 {
     CPUState *cpu;
 
-    while (all_cpu_threads_idle()) {
+    while (qemu_tcg_rr_all_cpu_threads_idle()) {
         stop_tcg_kick_timer();
         qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
     }
@@ -1659,7 +1661,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             atomic_mb_set(&cpu->exit_request, 0);
         }
 
-        if (use_icount && all_cpu_threads_idle()) {
+        if (use_icount && qemu_tcg_rr_all_cpu_threads_idle()) {
             /*
              * When all cpus are sleeping (e.g in WFI), to avoid a deadlock
              * in the main_loop, wake it up in order to start the warp timer.
-- 
2.17.1


* [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (68 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-02-08 14:33   ` Alex Bennée
  2019-02-20 17:25   ` Richard Henderson
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 71/73] cpus-common: release BQL earlier in run_on_cpu Emilio G. Cota
                   ` (3 subsequent siblings)
  73 siblings, 2 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Use the per-CPU locks to protect the CPUs' state, instead of
using the BQL. These locks are uncontended (they are mostly
acquired by the corresponding vCPU thread), so acquiring them
is cheaper than acquiring the BQL, which, particularly in
MTTCG, can be contended at high core counts.

In this conversion we drop qemu_cpu_cond and qemu_pause_cond,
and use cpu->cond instead.

In qom/cpu.c we can finally remove the ugliness that
results from having to hold both the BQL and the CPU lock;
now we just have to grab the CPU lock.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  20 ++--
 cpus.c            | 280 ++++++++++++++++++++++++++++++++++------------
 qom/cpu.c         |  29 +----
 3 files changed, 225 insertions(+), 104 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 27a80bc113..30ed2fae0b 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -297,10 +297,6 @@ struct qemu_work_item;
  * valid under cpu_list_lock.
  * @created: Indicates whether the CPU thread has been successfully created.
  * @interrupt_request: Indicates a pending interrupt request.
- * @halted: Nonzero if the CPU is in suspended state.
- * @stop: Indicates a pending stop request.
- * @stopped: Indicates the CPU has been artificially stopped.
- * @unplug: Indicates a pending CPU unplug request.
  * @crash_occurred: Indicates the OS reported a crash (panic) for this CPU
  * @singlestep_enabled: Flags for single-stepping.
  * @icount_extra: Instructions until next timer event.
@@ -329,6 +325,10 @@ struct qemu_work_item;
  * @lock: Lock to prevent multiple access to per-CPU fields.
  * @cond: Condition variable for per-CPU events.
  * @work_list: List of pending asynchronous work.
+ * @halted: Nonzero if the CPU is in suspended state.
+ * @stop: Indicates a pending stop request.
+ * @stopped: Indicates the CPU has been artificially stopped.
+ * @unplug: Indicates a pending CPU unplug request.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
  * @trace_dstate: Dynamic tracing state of events for this vCPU (bitmask).
@@ -352,12 +352,7 @@ struct CPUState {
 #endif
     int thread_id;
     bool running, has_waiter;
-    struct QemuCond *halt_cond;
     bool thread_kicked;
-    bool created;
-    bool stop;
-    bool stopped;
-    bool unplug;
     bool crash_occurred;
     bool exit_request;
     uint32_t cflags_next_tb;
@@ -371,7 +366,13 @@ struct CPUState {
     QemuMutex *lock;
     /* fields below protected by @lock */
     QemuCond cond;
+    QemuCond *halt_cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
+    uint32_t halted;
+    bool created;
+    bool stop;
+    bool stopped;
+    bool unplug;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
@@ -419,7 +420,6 @@ struct CPUState {
     /* TODO Move common fields from CPUArchState here. */
     int cpu_index;
     int cluster_index;
-    uint32_t halted;
     uint32_t can_do_io;
     int32_t exception_index;
 
diff --git a/cpus.c b/cpus.c
index 0d255c2655..4f17fe25bf 100644
--- a/cpus.c
+++ b/cpus.c
@@ -181,24 +181,30 @@ bool cpu_mutex_locked(const CPUState *cpu)
     return test_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
 }
 
-bool cpu_is_stopped(CPUState *cpu)
+/* Called with the CPU's lock held */
+static bool cpu_is_stopped_locked(CPUState *cpu)
 {
     return cpu->stopped || !runstate_is_running();
 }
 
-static inline bool cpu_work_list_empty(CPUState *cpu)
+bool cpu_is_stopped(CPUState *cpu)
 {
-    bool ret;
+    if (!cpu_mutex_locked(cpu)) {
+        bool ret;
 
-    cpu_mutex_lock(cpu);
-    ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    cpu_mutex_unlock(cpu);
-    return ret;
+        cpu_mutex_lock(cpu);
+        ret = cpu_is_stopped_locked(cpu);
+        cpu_mutex_unlock(cpu);
+        return ret;
+    }
+    return cpu_is_stopped_locked(cpu);
 }
 
 static bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || !cpu_work_list_empty(cpu)) {
+    g_assert(cpu_mutex_locked(cpu));
+
+    if (cpu->stop || !QSIMPLEQ_EMPTY(&cpu->work_list)) {
         return false;
     }
     if (cpu_is_stopped(cpu)) {
@@ -216,9 +222,17 @@ static bool qemu_tcg_rr_all_cpu_threads_idle(void)
     CPUState *cpu;
 
     g_assert(qemu_is_tcg_rr());
+    g_assert(qemu_mutex_iothread_locked());
+    g_assert(no_cpu_mutex_locked());
 
     CPU_FOREACH(cpu) {
-        if (!cpu_thread_is_idle(cpu)) {
+        bool is_idle;
+
+        cpu_mutex_lock(cpu);
+        is_idle = cpu_thread_is_idle(cpu);
+        cpu_mutex_unlock(cpu);
+
+        if (!is_idle) {
             return false;
         }
     }
@@ -780,6 +794,8 @@ void qemu_start_warp_timer(void)
 
 static void qemu_account_warp_timer(void)
 {
+    g_assert(qemu_mutex_iothread_locked());
+
     if (!use_icount || !icount_sleep) {
         return;
     }
@@ -1090,6 +1106,7 @@ static void kick_tcg_thread(void *opaque)
 static void start_tcg_kick_timer(void)
 {
     assert(!mttcg_enabled);
+    g_assert(qemu_mutex_iothread_locked());
     if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
         tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
                                            kick_tcg_thread, NULL);
@@ -1102,6 +1119,7 @@ static void start_tcg_kick_timer(void)
 static void stop_tcg_kick_timer(void)
 {
     assert(!mttcg_enabled);
+    g_assert(qemu_mutex_iothread_locked());
     if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) {
         timer_del(tcg_kick_vcpu_timer);
     }
@@ -1204,6 +1222,8 @@ int vm_shutdown(void)
 
 static bool cpu_can_run(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+
     if (cpu->stop) {
         return false;
     }
@@ -1276,16 +1296,9 @@ static void qemu_init_sigbus(void)
 
 static QemuThread io_thread;
 
-/* cpu creation */
-static QemuCond qemu_cpu_cond;
-/* system init */
-static QemuCond qemu_pause_cond;
-
 void qemu_init_cpu_loop(void)
 {
     qemu_init_sigbus();
-    qemu_cond_init(&qemu_cpu_cond);
-    qemu_cond_init(&qemu_pause_cond);
     qemu_mutex_init(&qemu_global_mutex);
 
     qemu_thread_get_self(&io_thread);
@@ -1303,46 +1316,70 @@ static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 {
 }
 
-static void qemu_cpu_stop(CPUState *cpu, bool exit)
+static void qemu_cpu_stop_locked(CPUState *cpu, bool exit)
 {
+    g_assert(cpu_mutex_locked(cpu));
     g_assert(qemu_cpu_is_self(cpu));
     cpu->stop = false;
     cpu->stopped = true;
     if (exit) {
         cpu_exit(cpu);
     }
-    qemu_cond_broadcast(&qemu_pause_cond);
+    qemu_cond_broadcast(&cpu->cond);
+}
+
+static void qemu_cpu_stop(CPUState *cpu, bool exit)
+{
+    cpu_mutex_lock(cpu);
+    qemu_cpu_stop_locked(cpu, exit);
+    cpu_mutex_unlock(cpu);
 }
 
 static void qemu_wait_io_event_common(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+
     atomic_mb_set(&cpu->thread_kicked, false);
     if (cpu->stop) {
-        qemu_cpu_stop(cpu, false);
+        qemu_cpu_stop_locked(cpu, false);
     }
+    /*
+     * unlock+lock cpu_mutex, so that other vCPUs have a chance to grab the
+     * lock and queue some work for this vCPU.
+     */
+    cpu_mutex_unlock(cpu);
     process_queued_cpu_work(cpu);
+    cpu_mutex_lock(cpu);
 }
 
 static void qemu_tcg_rr_wait_io_event(void)
 {
     CPUState *cpu;
 
+    g_assert(qemu_mutex_iothread_locked());
+    g_assert(no_cpu_mutex_locked());
+
     while (qemu_tcg_rr_all_cpu_threads_idle()) {
         stop_tcg_kick_timer();
-        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(first_cpu->halt_cond, first_cpu->lock);
     }
 
     start_tcg_kick_timer();
 
     CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
         qemu_wait_io_event_common(cpu);
+        cpu_mutex_unlock(cpu);
     }
 }
 
 static void qemu_wait_io_event(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+    g_assert(!qemu_mutex_iothread_locked());
+
     while (cpu_thread_is_idle(cpu)) {
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(cpu->halt_cond, cpu->lock);
     }
 
 #ifdef _WIN32
@@ -1362,6 +1399,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1374,14 +1412,20 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     }
 
     kvm_init_cpu_signals(cpu);
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = kvm_cpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1389,10 +1433,16 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     qemu_kvm_destroy_vcpu(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1409,7 +1459,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     rcu_register_thread();
 
-    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1420,10 +1470,10 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
-        qemu_mutex_unlock_iothread();
+        cpu_mutex_unlock(cpu);
         do {
             int sig;
             r = sigwait(&waitset, &sig);
@@ -1432,10 +1482,11 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
             perror("sigwait");
             exit(1);
         }
-        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cpu);
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug);
 
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 #endif
@@ -1466,6 +1517,8 @@ static int64_t tcg_get_icount_limit(void)
 static void handle_icount_deadline(void)
 {
     assert(qemu_in_vcpu_thread());
+    g_assert(qemu_mutex_iothread_locked());
+
     if (use_icount) {
         int64_t deadline =
             qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
@@ -1546,12 +1599,15 @@ static void deal_with_unplugged_cpus(void)
     CPUState *cpu;
 
     CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
         if (cpu->unplug && !cpu_can_run(cpu)) {
             qemu_tcg_destroy_vcpu(cpu);
             cpu->created = false;
-            qemu_cond_signal(&qemu_cpu_cond);
+            qemu_cond_signal(&cpu->cond);
+            cpu_mutex_unlock(cpu);
             break;
         }
+        cpu_mutex_unlock(cpu);
     }
 }
 
@@ -1572,24 +1628,36 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
     rcu_register_thread();
     tcg_register_thread();
 
+    /*
+     * We call cpu_mutex_lock/unlock just to please the assertions in common
+     * code, since here cpu->lock is an alias to the BQL.
+     */
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
-
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
     cpu->can_do_io = 1;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
 
     /* wait for initial kick-off after machine start */
+    cpu_mutex_lock(first_cpu);
     while (first_cpu->stopped) {
-        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(first_cpu->halt_cond, first_cpu->lock);
+        cpu_mutex_unlock(first_cpu);
 
         /* process any pending work */
         CPU_FOREACH(cpu) {
             current_cpu = cpu;
+            cpu_mutex_lock(cpu);
             qemu_wait_io_event_common(cpu);
+            cpu_mutex_unlock(cpu);
         }
+
+        cpu_mutex_lock(first_cpu);
     }
+    cpu_mutex_unlock(first_cpu);
 
     start_tcg_kick_timer();
 
@@ -1616,7 +1684,12 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             cpu = first_cpu;
         }
 
-        while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
+        while (cpu) {
+            cpu_mutex_lock(cpu);
+            if (!QSIMPLEQ_EMPTY(&cpu->work_list) || cpu->exit_request) {
+                cpu_mutex_unlock(cpu);
+                break;
+            }
 
             atomic_mb_set(&tcg_current_rr_cpu, cpu);
             current_cpu = cpu;
@@ -1627,6 +1700,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             if (cpu_can_run(cpu)) {
                 int r;
 
+                cpu_mutex_unlock(cpu);
                 qemu_mutex_unlock_iothread();
                 prepare_icount_for_run(cpu);
 
@@ -1634,11 +1708,14 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
 
                 process_icount_data(cpu);
                 qemu_mutex_lock_iothread();
+                cpu_mutex_lock(cpu);
 
                 if (r == EXCP_DEBUG) {
                     cpu_handle_guest_debug(cpu);
+                    cpu_mutex_unlock(cpu);
                     break;
                 } else if (r == EXCP_ATOMIC) {
+                    cpu_mutex_unlock(cpu);
                     qemu_mutex_unlock_iothread();
                     cpu_exec_step_atomic(cpu);
                     qemu_mutex_lock_iothread();
@@ -1648,11 +1725,15 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
                 if (cpu->unplug) {
                     cpu = CPU_NEXT(cpu);
                 }
+                cpu_mutex_unlock(current_cpu);
                 break;
             }
 
+            cpu_mutex_unlock(cpu);
             cpu = CPU_NEXT(cpu);
-        } /* while (cpu && !cpu->exit_request).. */
+        } /* for (;;) .. */
+
+        g_assert(no_cpu_mutex_locked());
 
         /* Does not need atomic_mb_set because a spurious wakeup is okay.  */
         atomic_set(&tcg_current_rr_cpu, NULL);
@@ -1684,6 +1765,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
     rcu_register_thread();
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
@@ -1692,11 +1774,17 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_mutex_unlock_iothread();
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = hax_smp_cpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1704,6 +1792,8 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
+
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 }
@@ -1721,6 +1811,7 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
@@ -1728,14 +1819,20 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hvf_init_vcpu(cpu);
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = hvf_vcpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1743,10 +1840,16 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     hvf_vcpu_destroy(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1759,6 +1862,7 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     current_cpu = cpu;
@@ -1768,28 +1872,40 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
         fprintf(stderr, "whpx_init_vcpu failed: %s\n", strerror(-r));
         exit(1);
     }
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = whpx_vcpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
         }
         while (cpu_thread_is_idle(cpu)) {
-            qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+            qemu_cond_wait(cpu->halt_cond, cpu->lock);
         }
         qemu_wait_io_event_common(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     whpx_destroy_vcpu(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1817,14 +1933,14 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     rcu_register_thread();
     tcg_register_thread();
 
-    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
     cpu->can_do_io = 1;
     current_cpu = cpu;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     /* process any pending work */
     cpu->exit_request = 1;
@@ -1832,9 +1948,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     do {
         if (cpu_can_run(cpu)) {
             int r;
-            qemu_mutex_unlock_iothread();
+            cpu_mutex_unlock(cpu);
             r = tcg_cpu_exec(cpu);
-            qemu_mutex_lock_iothread();
+            cpu_mutex_lock(cpu);
             switch (r) {
             case EXCP_DEBUG:
                 cpu_handle_guest_debug(cpu);
@@ -1850,9 +1966,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 g_assert(cpu_halted(cpu));
                 break;
             case EXCP_ATOMIC:
-                qemu_mutex_unlock_iothread();
+                cpu_mutex_unlock(cpu);
                 cpu_exec_step_atomic(cpu);
-                qemu_mutex_lock_iothread();
+                cpu_mutex_lock(cpu);
             default:
                 /* Ignore everything else? */
                 break;
@@ -1865,8 +1981,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     qemu_tcg_destroy_vcpu(cpu);
     cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
-    qemu_mutex_unlock_iothread();
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 }
@@ -1966,54 +2082,69 @@ void qemu_mutex_unlock_iothread(void)
     }
 }
 
-static bool all_vcpus_paused(void)
-{
-    CPUState *cpu;
-
-    CPU_FOREACH(cpu) {
-        if (!cpu->stopped) {
-            return false;
-        }
-    }
-
-    return true;
-}
-
 void pause_all_vcpus(void)
 {
     CPUState *cpu;
 
+    g_assert(no_cpu_mutex_locked());
+
     qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
     CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
         if (qemu_cpu_is_self(cpu)) {
             qemu_cpu_stop(cpu, true);
         } else {
             cpu->stop = true;
             qemu_cpu_kick(cpu);
         }
+        cpu_mutex_unlock(cpu);
     }
 
+ drop_locks_and_stop_all_vcpus:
     /* We need to drop the replay_lock so any vCPU threads woken up
      * can finish their replay tasks
      */
     replay_mutex_unlock();
+    qemu_mutex_unlock_iothread();
 
-    while (!all_vcpus_paused()) {
-        qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
-        CPU_FOREACH(cpu) {
+    CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
+        if (!cpu->stopped) {
+            cpu->stop = true;
             qemu_cpu_kick(cpu);
+            qemu_cond_wait(&cpu->cond, cpu->lock);
         }
+        cpu_mutex_unlock(cpu);
     }
 
-    qemu_mutex_unlock_iothread();
     replay_mutex_lock();
     qemu_mutex_lock_iothread();
+
+    /* a CPU might have been hot-plugged while we weren't holding the BQL */
+    CPU_FOREACH(cpu) {
+        bool stopped;
+
+        cpu_mutex_lock(cpu);
+        stopped = cpu->stopped;
+        cpu_mutex_unlock(cpu);
+
+        if (!stopped) {
+            goto drop_locks_and_stop_all_vcpus;
+        }
+    }
 }
 
 void cpu_resume(CPUState *cpu)
 {
-    cpu->stop = false;
-    cpu->stopped = false;
+    if (cpu_mutex_locked(cpu)) {
+        cpu->stop = false;
+        cpu->stopped = false;
+    } else {
+        cpu_mutex_lock(cpu);
+        cpu->stop = false;
+        cpu->stopped = false;
+        cpu_mutex_unlock(cpu);
+    }
     qemu_cpu_kick(cpu);
 }
 
@@ -2029,8 +2160,11 @@ void resume_all_vcpus(void)
 
 void cpu_remove_sync(CPUState *cpu)
 {
+    cpu_mutex_lock(cpu);
     cpu->stop = true;
     cpu->unplug = true;
+    cpu_mutex_unlock(cpu);
+
     qemu_cpu_kick(cpu);
     qemu_mutex_unlock_iothread();
     qemu_thread_join(cpu->thread);
@@ -2211,9 +2345,15 @@ void qemu_init_vcpu(CPUState *cpu)
         qemu_dummy_start_vcpu(cpu);
     }
 
+    qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+        qemu_cond_wait(&cpu->cond, cpu->lock);
     }
+    cpu_mutex_unlock(cpu);
+
+    qemu_mutex_lock_iothread();
 }
 
 void cpu_stop_current(void)
diff --git a/qom/cpu.c b/qom/cpu.c
index f2695be9b2..65b070a570 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -94,32 +94,13 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
     error_setg(errp, "Obtaining memory mappings is unsupported on this CPU.");
 }
 
-/* Resetting the IRQ comes from across the code base so we take the
- * BQL here if we need to.  cpu_interrupt assumes it is held.*/
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool has_bql = qemu_mutex_iothread_locked();
-    bool has_cpu_lock = cpu_mutex_locked(cpu);
-
-    if (has_bql) {
-        if (has_cpu_lock) {
-            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
-        } else {
-            cpu_mutex_lock(cpu);
-            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
-            cpu_mutex_unlock(cpu);
-        }
-        return;
-    }
-
-    if (has_cpu_lock) {
-        cpu_mutex_unlock(cpu);
-    }
-    qemu_mutex_lock_iothread();
-    cpu_mutex_lock(cpu);
-    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
-    qemu_mutex_unlock_iothread();
-    if (!has_cpu_lock) {
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+    } else {
+        cpu_mutex_lock(cpu);
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
         cpu_mutex_unlock(cpu);
     }
 }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [Qemu-devel] [PATCH v6 71/73] cpus-common: release BQL earlier in run_on_cpu
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (69 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-02-08 14:34   ` Alex Bennée
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 72/73] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
                   ` (2 subsequent siblings)
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

After completing the conversion to per-CPU locks, there is no need
to release the BQL after having called cpu_kick.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 cpus-common.c | 20 +++++---------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/cpus-common.c b/cpus-common.c
index 62e282bff1..1241024b2c 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -145,6 +145,11 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
         return;
     }
 
+    /* We are going to sleep on the CPU lock, so release the BQL */
+    if (has_bql) {
+        qemu_mutex_unlock_iothread();
+    }
+
     wi.func = func;
     wi.data = data;
     wi.done = false;
@@ -153,21 +158,6 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 
     cpu_mutex_lock(cpu);
     queue_work_on_cpu_locked(cpu, &wi);
-
-    /*
-     * We are going to sleep on the CPU lock, so release the BQL.
-     *
-     * During the transition to per-CPU locks, we release the BQL _after_
-     * having kicked the destination CPU (from queue_work_on_cpu_locked above).
-     * This makes sure that the enqueued work will be seen by the CPU
-     * after being woken up from the kick, since the CPU sleeps on the BQL.
-     * Once we complete the transition to per-CPU locks, we will release
-     * the BQL earlier in this function.
-     */
-    if (has_bql) {
-        qemu_mutex_unlock_iothread();
-    }
-
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
 
-- 
2.17.1



* [Qemu-devel] [PATCH v6 72/73] cpu: add async_run_on_cpu_no_bql
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (70 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 71/73] cpus-common: release BQL earlier in run_on_cpu Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-02-08 14:58   ` Alex Bennée
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
  2019-02-20 17:27 ` [Qemu-devel] [PATCH v6 00/73] per-CPU locks Richard Henderson
  73 siblings, 1 reply; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

Some async jobs do not need the BQL.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 14 ++++++++++++++
 cpus-common.c     | 39 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 30ed2fae0b..bb0729f969 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -884,9 +884,23 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
  * @data: Data to pass to the function.
  *
  * Schedules the function @func for execution on the vCPU @cpu asynchronously.
+ * See also: async_run_on_cpu_no_bql()
  */
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
 
+/**
+ * async_run_on_cpu_no_bql:
+ * @cpu: The vCPU to run on.
+ * @func: The function to be executed.
+ * @data: Data to pass to the function.
+ *
+ * Schedules the function @func for execution on the vCPU @cpu asynchronously.
+ * This function is run outside the BQL.
+ * See also: async_run_on_cpu()
+ */
+void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
+                             run_on_cpu_data data);
+
 /**
  * async_safe_run_on_cpu:
  * @cpu: The vCPU to run on.
diff --git a/cpus-common.c b/cpus-common.c
index 1241024b2c..5832a8bf37 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -109,6 +109,7 @@ struct qemu_work_item {
     run_on_cpu_func func;
     run_on_cpu_data data;
     bool free, exclusive, done;
+    bool bql;
 };
 
 /* Called with the CPU's lock held */
@@ -155,6 +156,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     wi.done = false;
     wi.free = false;
     wi.exclusive = false;
+    wi.bql = true;
 
     cpu_mutex_lock(cpu);
     queue_work_on_cpu_locked(cpu, &wi);
@@ -179,6 +181,21 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     wi->func = func;
     wi->data = data;
     wi->free = true;
+    wi->bql = true;
+
+    queue_work_on_cpu(cpu, wi);
+}
+
+void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
+                             run_on_cpu_data data)
+{
+    struct qemu_work_item *wi;
+
+    wi = g_malloc0(sizeof(struct qemu_work_item));
+    wi->func = func;
+    wi->data = data;
+    wi->free = true;
+    /* wi->bql initialized to false */
 
     queue_work_on_cpu(cpu, wi);
 }
@@ -323,6 +340,7 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     wi->data = data;
     wi->free = true;
     wi->exclusive = true;
+    /* wi->bql initialized to false */
 
     queue_work_on_cpu(cpu, wi);
 }
@@ -347,6 +365,7 @@ static void process_queued_cpu_work_locked(CPUState *cpu)
              * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
              * neither CPU can proceed.
              */
+            g_assert(!wi->bql);
             if (has_bql) {
                 qemu_mutex_unlock_iothread();
             }
@@ -357,12 +376,22 @@ static void process_queued_cpu_work_locked(CPUState *cpu)
                 qemu_mutex_lock_iothread();
             }
         } else {
-            if (has_bql) {
-                wi->func(cpu, wi->data);
+            if (wi->bql) {
+                if (has_bql) {
+                    wi->func(cpu, wi->data);
+                } else {
+                    qemu_mutex_lock_iothread();
+                    wi->func(cpu, wi->data);
+                    qemu_mutex_unlock_iothread();
+                }
             } else {
-                qemu_mutex_lock_iothread();
-                wi->func(cpu, wi->data);
-                qemu_mutex_unlock_iothread();
+                if (has_bql) {
+                    qemu_mutex_unlock_iothread();
+                    wi->func(cpu, wi->data);
+                    qemu_mutex_lock_iothread();
+                } else {
+                    wi->func(cpu, wi->data);
+                }
             }
         }
         cpu_mutex_lock(cpu);
-- 
2.17.1


* [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (71 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 72/73] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
@ 2019-01-30  0:48 ` Emilio G. Cota
  2019-02-08 15:58   ` Alex Bennée
  2019-02-20 17:18   ` Richard Henderson
  2019-02-20 17:27 ` [Qemu-devel] [PATCH v6 00/73] per-CPU locks Richard Henderson
  73 siblings, 2 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-01-30  0:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Paolo Bonzini

This yields sizable scalability improvements, as the below results show.

Host: Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)

Workload: Ubuntu 18.04 ppc64 compiling the linux kernel with
"make -j N", where N is the number of cores in the guest.

                      Speedup vs a single thread (higher is better):

         14 +---------------------------------------------------------------+
            |       +    +       +      +       +      +      $$$$$$  +     |
            |                                            $$$$$              |
            |                                      $$$$$$                   |
         12 |-+                                $A$$                       +-|
            |                                $$                             |
            |                             $$$                               |
         10 |-+                         $$    ##D#####################D   +-|
            |                        $$$ #####**B****************           |
            |                      $$####*****                   *****      |
            |                    A$#*****                             B     |
          8 |-+                $$B**                                      +-|
            |                $$**                                           |
            |               $**                                             |
          6 |-+           $$*                                             +-|
            |            A**                                                |
            |           $B                                                  |
            |           $                                                   |
          4 |-+        $*                                                 +-|
            |          $                                                    |
            |         $                                                     |
          2 |-+      $                                                    +-|
            |        $                                 +cputlb-no-bql $$A$$ |
            |       A                                   +per-cpu-lock ##D## |
            |       +    +       +      +       +      +     baseline **B** |
          0 +---------------------------------------------------------------+
                    1    4       8      12      16     20      24     28
                                       Guest vCPUs
  png: https://imgur.com/zZRvS7q

Some notes:
- baseline corresponds to the commit before this series

- per-cpu-lock is the commit that converts the CPU loop to per-cpu locks.

- cputlb-no-bql is this commit.

- I'm using taskset to assign cores to threads, favouring locality whenever
  possible but not using SMT. When N=1, I'm using a single host core, which
  leads to superlinear speedups (since with more cores the I/O thread can execute
  while vCPU threads sleep). In the future I might use N+1 host cores for N
  guest cores to avoid this, or perhaps pin guest threads to cores one-by-one.

Single-threaded performance is affected very lightly. Results
below for debian aarch64 bootup+test for the entire series
on an Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz host:

- Before:

 Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):

       7269.033478      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.06% )
    30,659,870,302      cycles                    #    4.218 GHz                      ( +-  0.06% )
    54,790,540,051      instructions              #    1.79  insns per cycle          ( +-  0.05% )
     9,796,441,380      branches                  # 1347.695 M/sec                    ( +-  0.05% )
       165,132,201      branch-misses             #    1.69% of all branches          ( +-  0.12% )

       7.287011656 seconds time elapsed                                          ( +-  0.10% )

- After:

       7375.924053      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.13% )
    31,107,548,846      cycles                    #    4.217 GHz                      ( +-  0.12% )
    55,355,668,947      instructions              #    1.78  insns per cycle          ( +-  0.05% )
     9,929,917,664      branches                  # 1346.261 M/sec                    ( +-  0.04% )
       166,547,442      branch-misses             #    1.68% of all branches          ( +-  0.09% )

       7.389068145 seconds time elapsed                                          ( +-  0.13% )

That is, a 1.37% slowdown.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cputlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index dad9b7796c..8491d36bcf 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -260,7 +260,7 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
 
     CPU_FOREACH(cpu) {
         if (cpu != src) {
-            async_run_on_cpu(cpu, fn, d);
+            async_run_on_cpu_no_bql(cpu, fn, d);
         }
     }
 }
@@ -336,8 +336,8 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
     tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
 
     if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
-                         RUN_ON_CPU_HOST_INT(idxmap));
+        async_run_on_cpu_no_bql(cpu, tlb_flush_by_mmuidx_async_work,
+                                RUN_ON_CPU_HOST_INT(idxmap));
     } else {
         tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
     }
@@ -481,8 +481,8 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
     addr_and_mmu_idx |= idxmap;
 
     if (!qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_work,
-                         RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+        async_run_on_cpu_no_bql(cpu, tlb_flush_page_by_mmuidx_async_work,
+                                RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
     } else {
         tlb_flush_page_by_mmuidx_async_work(
             cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
-- 
2.17.1


* Re: [Qemu-devel] [PATCH v6 25/73] s390x: convert to cpu_halted
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 25/73] s390x: " Emilio G. Cota
@ 2019-01-30 10:30   ` Cornelia Huck
  0 siblings, 0 replies; 109+ messages in thread
From: Cornelia Huck @ 2019-01-30 10:30 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel, Richard Henderson, Paolo Bonzini,
	Christian Borntraeger, David Hildenbrand, qemu-s390x

On Tue, 29 Jan 2019 19:47:23 -0500
"Emilio G. Cota" <cota@braap.org> wrote:

> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: qemu-s390x@nongnu.org
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/intc/s390_flic.c        |  2 +-
>  target/s390x/cpu.c         | 22 +++++++++++++++-------
>  target/s390x/excp_helper.c |  2 +-
>  target/s390x/kvm.c         |  2 +-
>  target/s390x/sigp.c        |  8 ++++----
>  5 files changed, 22 insertions(+), 14 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>


* Re: [Qemu-devel] [PATCH v6 65/73] s390x: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 65/73] s390x: " Emilio G. Cota
@ 2019-01-30 10:35   ` Cornelia Huck
  0 siblings, 0 replies; 109+ messages in thread
From: Cornelia Huck @ 2019-01-30 10:35 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel, Richard Henderson, Paolo Bonzini, David Hildenbrand,
	qemu-s390x

On Tue, 29 Jan 2019 19:48:03 -0500
"Emilio G. Cota" <cota@braap.org> wrote:

> Soon we will call cpu_has_work without the BQL.
> 
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: qemu-s390x@nongnu.org
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/s390x/cpu.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>


* Re: [Qemu-devel] [PATCH v6 03/73] cpu: introduce cpu_mutex_lock/unlock
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 03/73] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
@ 2019-02-06 17:21   ` Alex Bennée
  2019-02-06 20:02     ` Emilio G. Cota
  0 siblings, 1 reply; 109+ messages in thread
From: Alex Bennée @ 2019-02-06 17:21 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> The few direct users of &cpu->lock will be converted soon.
>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h   | 33 +++++++++++++++++++++++++++++++
>  cpus.c              | 48 +++++++++++++++++++++++++++++++++++++++++++--
>  stubs/cpu-lock.c    | 28 ++++++++++++++++++++++++++
>  stubs/Makefile.objs |  1 +
>  4 files changed, 108 insertions(+), 2 deletions(-)
>  create mode 100644 stubs/cpu-lock.c
<snip>
> diff --git a/cpus.c b/cpus.c
> index 9a3a1d8a6a..187aed2533 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -83,6 +83,47 @@ static unsigned int throttle_percentage;
>  #define CPU_THROTTLE_PCT_MAX 99
>  #define CPU_THROTTLE_TIMESLICE_NS 10000000
>
> +/* XXX: is this really the max number of CPUs? */
> +#define CPU_LOCK_BITMAP_SIZE 2048
> +
> +/*
> + * Note: we index the bitmap with cpu->cpu_index + 1 so that the logic
> + * also works during early CPU initialization, when cpu->cpu_index is set to
> + * UNASSIGNED_CPU_INDEX == -1.
> + */
> +static __thread DECLARE_BITMAP(cpu_lock_bitmap,
> CPU_LOCK_BITMAP_SIZE);

I'm a little confused by this __thread bitmap. So by being a __thread
this is a per thread record (like __thread bool iothread_locked) of the
lock. However, the test below:

> +
> +bool no_cpu_mutex_locked(void)
> +{
> +    return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
> +}

which is used for asserts implies the only case we care about is
ensuring one thread doesn't take multiple cpu locks (which seems
reasonable). The only other test is used by cpu_mutex_locked to see if
the current thread has locked a given CPU index.

Given that a thread can only be in one of two states:

  - no lock held
  - holding lock for cpu->index

Why the bitmap? Surely it could simply be:

  static __thread int current_cpu_lock_held;

Where:

  bool no_cpu_mutex_locked(void)
  {
      return current_cpu_lock_held == 0;
  }

  bool cpu_mutex_locked(const CPUState *cpu)
  {
      return current_cpu_lock_held == cpu->cpu_index + 1;
  }

And then we scale to INT_MAX cpus ;-)

If I've missed something I think the doc comment needs to be a bit more
explicit about our locking rules.

<snip>
>
> +    /* prevent deadlock with CPU mutex */
> +    g_assert(no_cpu_mutex_locked());
> +

Technically asserts don't prevent this - they are just enforcing the
locking rules; otherwise we would deadlock.

--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 05/73] cpu: move run_on_cpu to cpus-common
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 05/73] cpu: move run_on_cpu to cpus-common Emilio G. Cota
@ 2019-02-06 17:22   ` Alex Bennée
  0 siblings, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-06 17:22 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> We don't pass a pointer to qemu_global_mutex anymore.
>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  include/qom/cpu.h | 10 ----------
>  cpus-common.c     |  2 +-
>  cpus.c            |  5 -----
>  3 files changed, 1 insertion(+), 16 deletions(-)
>
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 46e3c164aa..fe389037c5 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -768,16 +768,6 @@ void qemu_cpu_kick(CPUState *cpu);
>   */
>  bool cpu_is_stopped(CPUState *cpu);
>
> -/**
> - * do_run_on_cpu:
> - * @cpu: The vCPU to run on.
> - * @func: The function to be executed.
> - * @data: Data to pass to the function.
> - *
> - * Used internally in the implementation of run_on_cpu.
> - */
> -void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
> -
>  /**
>   * run_on_cpu:
>   * @cpu: The vCPU to run on.
> diff --git a/cpus-common.c b/cpus-common.c
> index daf1531868..85a61eb970 100644
> --- a/cpus-common.c
> +++ b/cpus-common.c
> @@ -127,7 +127,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
>      cpu_mutex_unlock(cpu);
>  }
>
> -void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
> +void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
>  {
>      struct qemu_work_item wi;
>      bool has_bql = qemu_mutex_iothread_locked();
> diff --git a/cpus.c b/cpus.c
> index 42ea8cfbb5..755e4addab 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -1234,11 +1234,6 @@ void qemu_init_cpu_loop(void)
>      qemu_thread_get_self(&io_thread);
>  }
>
> -void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
> -{
> -    do_run_on_cpu(cpu, func, data);
> -}
> -
>  static void qemu_kvm_destroy_vcpu(CPUState *cpu)
>  {
>      if (kvm_destroy_vcpu(cpu) < 0) {


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 03/73] cpu: introduce cpu_mutex_lock/unlock
  2019-02-06 17:21   ` Alex Bennée
@ 2019-02-06 20:02     ` Emilio G. Cota
  0 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-02-06 20:02 UTC (permalink / raw)
  To: Alex Bennée; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson

On Wed, Feb 06, 2019 at 17:21:15 +0000, Alex Bennée wrote:
> Emilio G. Cota <cota@braap.org> writes:
> > +/*
> > + * Note: we index the bitmap with cpu->cpu_index + 1 so that the logic
> > + * also works during early CPU initialization, when cpu->cpu_index is set to
> > + * UNASSIGNED_CPU_INDEX == -1.
> > + */
> > +static __thread DECLARE_BITMAP(cpu_lock_bitmap,
> > CPU_LOCK_BITMAP_SIZE);
> 
> I'm a little confused by this __thread bitmap. So by being a __thread
> this is a per thread record (like __thread bool iothread_locked) of the
> lock. However, the test below:
> 
> > +
> > +bool no_cpu_mutex_locked(void)
> > +{
> > +    return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
> > +}
> 
> which is used for asserts implies the only case we care about is
> ensuring one thread doesn't take multiple cpu locks (which seems
> reasonable). The only other test is used by cpu_mutex_locked to see if
> the current thread has locked a given CPU index.
> 
> Given that a thread can only be in two conditions:
> 
>   - no lock held
>   - holding lock for cpu->index
> 
> Why the bitmap?

(snip)

> If I've missed something I think the doc comment needs to be a bit more
> explicit about our locking rules.

The missing bit is that the bitmap is not only used for asserts; by the
end of the series, we sometimes acquire all cpu locks (in CPU_FOREACH order
to avoid deadlock), e.g. in pause_all_vcpus(). Given that this happens
in patch 70, I think your comment here is reasonable.

I'll update the commit message to explain why we add the bitmap
now, even though it isn't needed yet in this patch.

> <snip>
> >
> > +    /* prevent deadlock with CPU mutex */
> > +    g_assert(no_cpu_mutex_locked());
> > +
> 
> Technically asserts don't prevent this - they are just enforcing the
> locking rules; otherwise we would deadlock.

Agreed. With that comment I mean "make sure we're following the
locking order". Will fix.

Thanks,

		E.


* Re: [Qemu-devel] [PATCH v6 24/73] riscv: convert to cpu_halted
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 24/73] riscv: " Emilio G. Cota
@ 2019-02-06 23:50   ` Alistair Francis
  0 siblings, 0 replies; 109+ messages in thread
From: Alistair Francis @ 2019-02-06 23:50 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel@nongnu.org Developers, Richard Henderson,
	Paolo Bonzini, Palmer Dabbelt, Sagar Karandikar,
	Bastian Koppelmann

On Tue, Jan 29, 2019 at 4:48 PM Emilio G. Cota <cota@braap.org> wrote:
>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
> Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
> Cc: Alistair Francis <alistair23@gmail.com>
> Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  target/riscv/op_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
> index 81bd1a77ea..261d6fbfe9 100644
> --- a/target/riscv/op_helper.c
> +++ b/target/riscv/op_helper.c
> @@ -125,7 +125,7 @@ void helper_wfi(CPURISCVState *env)
>  {
>      CPUState *cs = CPU(riscv_env_get_cpu(env));
>
> -    cs->halted = 1;
> +    cpu_halted_set(cs, 1);
>      cs->exception_index = EXCP_HLT;
>      cpu_loop_exit(cs);
>  }
> --
> 2.17.1
>


* Re: [Qemu-devel] [PATCH v6 66/73] riscv: convert to cpu_has_work_with_iothread_lock
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 66/73] riscv: " Emilio G. Cota
@ 2019-02-06 23:51   ` Alistair Francis
  0 siblings, 0 replies; 109+ messages in thread
From: Alistair Francis @ 2019-02-06 23:51 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel@nongnu.org Developers, Paolo Bonzini, Palmer Dabbelt,
	Richard Henderson, Sagar Karandikar, Bastian Koppelmann

On Tue, Jan 29, 2019 at 5:30 PM Emilio G. Cota <cota@braap.org> wrote:
>
> Soon we will call cpu_has_work without the BQL.
>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
> Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
> Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  target/riscv/cpu.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 28d7e5302f..7b36c09fe0 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -257,6 +257,9 @@ static bool riscv_cpu_has_work(CPUState *cs)
>  #ifndef CONFIG_USER_ONLY
>      RISCVCPU *cpu = RISCV_CPU(cs);
>      CPURISCVState *env = &cpu->env;
> +
> +    g_assert(qemu_mutex_iothread_locked());
> +
>      /*
>       * Definition of the WFI instruction requires it to ignore the privilege
>       * mode and delegation registers, but respect individual enables
> @@ -343,7 +346,7 @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
>      cc->reset = riscv_cpu_reset;
>
>      cc->class_by_name = riscv_cpu_class_by_name;
> -    cc->has_work = riscv_cpu_has_work;
> +    cc->has_work_with_iothread_lock = riscv_cpu_has_work;
>      cc->do_interrupt = riscv_cpu_do_interrupt;
>      cc->cpu_exec_interrupt = riscv_cpu_exec_interrupt;
>      cc->dump_state = riscv_cpu_dump_state;
> --
> 2.17.1
>
>


* Re: [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode Emilio G. Cota
@ 2019-02-07 12:40   ` Alex Bennée
  2019-02-20 16:12   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-07 12:40 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Before we can switch from the BQL to per-CPU locks in
> the CPU loop, we have to accommodate the fact that TCG
> rr mode (i.e. !MTTCG) cannot work with separate per-vCPU
> locks. That would lead to deadlock since we need a single
> lock/condvar pair on which to wait for events that affect
> any vCPU, e.g. in qemu_tcg_rr_wait_io_event.
>
> At the same time, we are moving towards an interface where
> the BQL and CPU locks are independent, and the only requirement
> is that the locking order is respected, i.e. the BQL is
> acquired first if both locks have to be held at the same time.
>
> In this patch we make the BQL a recursive lock under the hood.
> This allows us to (1) keep the BQL and CPU locks interfaces
> separate, and (2) use a single lock for all vCPUs in TCG rr mode.
>
> Note that the BQL's API (qemu_mutex_lock/unlock_iothread) remains
> non-recursive.
>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  include/qom/cpu.h |  2 +-
>  cpus-common.c     |  2 +-
>  cpus.c            | 90 +++++++++++++++++++++++++++++++++++++++++------
>  qom/cpu.c         |  3 +-
>  stubs/cpu-lock.c  |  6 ++--
>  5 files changed, 86 insertions(+), 17 deletions(-)
>
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index fe389037c5..8b85a036cf 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -363,7 +363,7 @@ struct CPUState {
>      int64_t icount_extra;
>      sigjmp_buf jmp_env;
>
> -    QemuMutex lock;
> +    QemuMutex *lock;
>      /* fields below protected by @lock */
>      QemuCond cond;
>      QSIMPLEQ_HEAD(, qemu_work_item) work_list;
> diff --git a/cpus-common.c b/cpus-common.c
> index 99662bfa87..62e282bff1 100644
> --- a/cpus-common.c
> +++ b/cpus-common.c
> @@ -171,7 +171,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
>      while (!atomic_mb_read(&wi.done)) {
>          CPUState *self_cpu = current_cpu;
>
> -        qemu_cond_wait(&cpu->cond, &cpu->lock);
> +        qemu_cond_wait(&cpu->cond, cpu->lock);
>          current_cpu = self_cpu;
>      }
>      cpu_mutex_unlock(cpu);
> diff --git a/cpus.c b/cpus.c
> index 755e4addab..c4fa3cc876 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -83,6 +83,12 @@ static unsigned int throttle_percentage;
>  #define CPU_THROTTLE_PCT_MAX 99
>  #define CPU_THROTTLE_TIMESLICE_NS 10000000
>
> +static inline bool qemu_is_tcg_rr(void)
> +{
> +    /* in `make check-qtest', "use_icount && !tcg_enabled()" might be true */
> +    return use_icount || (tcg_enabled() && !qemu_tcg_mttcg_enabled());
> +}
> +
>  /* XXX: is this really the max number of CPUs? */
>  #define CPU_LOCK_BITMAP_SIZE 2048
>
> @@ -98,25 +104,76 @@ bool no_cpu_mutex_locked(void)
>      return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
>  }
>
> -void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
> +static QemuMutex qemu_global_mutex;
> +static __thread bool iothread_locked;
> +/*
> + * In TCG rr mode, we make the BQL a recursive mutex, so that we can use it for
> + * all vCPUs while keeping the interface as if the locks were per-CPU.
> + *
> + * The fact that the BQL is implemented recursively is invisible to BQL users;
> + * the mutex API we export (qemu_mutex_lock_iothread() etc.) is non-recursive.
> + *
> + * Locking order: the BQL is always acquired before CPU locks.
> + */
> +static __thread int iothread_lock_count;
> +
> +static void rr_cpu_mutex_lock(void)
> +{
> +    if (iothread_lock_count++ == 0) {
> +        /*
> +         * Circumvent qemu_mutex_lock_iothread()'s state keeping by
> +         * acquiring the BQL directly.
> +         */
> +        qemu_mutex_lock(&qemu_global_mutex);
> +    }
> +}
> +
> +static void rr_cpu_mutex_unlock(void)
> +{
> +    g_assert(iothread_lock_count > 0);
> +    if (--iothread_lock_count == 0) {
> +        /*
> +         * Circumvent qemu_mutex_unlock_iothread()'s state keeping by
> +         * releasing the BQL directly.
> +         */
> +        qemu_mutex_unlock(&qemu_global_mutex);
> +    }
> +}
> +
> +static void do_cpu_mutex_lock(CPUState *cpu, const char *file, int line)
>  {
> -/* coverity gets confused by the indirect function call */
> +    /* coverity gets confused by the indirect function call */
>  #ifdef __COVERITY__
> -    qemu_mutex_lock_impl(&cpu->lock, file, line);
> +    qemu_mutex_lock_impl(cpu->lock, file, line);
>  #else
>      QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
>
> +    f(cpu->lock, file, line);
> +#endif
> +}
> +
> +void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
> +{
>      g_assert(!cpu_mutex_locked(cpu));
>      set_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
> -    f(&cpu->lock, file, line);
> -#endif
> +
> +    if (qemu_is_tcg_rr()) {
> +        rr_cpu_mutex_lock();
> +    } else {
> +        do_cpu_mutex_lock(cpu, file, line);
> +    }
>  }
>
>  void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
>  {
>      g_assert(cpu_mutex_locked(cpu));
> -    qemu_mutex_unlock_impl(&cpu->lock, file, line);
>      clear_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
> +
> +    if (qemu_is_tcg_rr()) {
> +        rr_cpu_mutex_unlock();
> +        return;
> +    }
> +    qemu_mutex_unlock_impl(cpu->lock, file, line);
>  }
>
>  bool cpu_mutex_locked(const CPUState *cpu)
> @@ -1215,8 +1272,6 @@ static void qemu_init_sigbus(void)
>  }
>  #endif /* !CONFIG_LINUX */
>
> -static QemuMutex qemu_global_mutex;
> -
>  static QemuThread io_thread;
>
>  /* cpu creation */
> @@ -1876,8 +1931,6 @@ bool qemu_in_vcpu_thread(void)
>      return current_cpu && qemu_cpu_is_self(current_cpu);
>  }
>
> -static __thread bool iothread_locked = false;
> -
>  bool qemu_mutex_iothread_locked(void)
>  {
>      return iothread_locked;
> @@ -1896,6 +1949,8 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
>
>      g_assert(!qemu_mutex_iothread_locked());
>      bql_lock(&qemu_global_mutex, file, line);
> +    g_assert(iothread_lock_count == 0);
> +    iothread_lock_count++;
>      iothread_locked = true;
>  }
>
> @@ -1903,7 +1958,10 @@ void qemu_mutex_unlock_iothread(void)
>  {
>      g_assert(qemu_mutex_iothread_locked());
>      iothread_locked = false;
> -    qemu_mutex_unlock(&qemu_global_mutex);
> +    g_assert(iothread_lock_count > 0);
> +    if (--iothread_lock_count == 0) {
> +        qemu_mutex_unlock(&qemu_global_mutex);
> +    }
>  }
>
>  static bool all_vcpus_paused(void)
> @@ -2127,6 +2185,16 @@ void qemu_init_vcpu(CPUState *cpu)
>          cpu_address_space_init(cpu, 0, "cpu-memory", cpu->memory);
>      }
>
> +    /*
> +     * In TCG RR, cpu->lock is the BQL under the hood. In all other modes,
> +     * cpu->lock is a standalone per-CPU lock.
> +     */
> +    if (qemu_is_tcg_rr()) {
> +        qemu_mutex_destroy(cpu->lock);
> +        g_free(cpu->lock);
> +        cpu->lock = &qemu_global_mutex;
> +    }
> +
>      if (kvm_enabled()) {
>          qemu_kvm_start_vcpu(cpu);
>      } else if (hax_enabled()) {
> diff --git a/qom/cpu.c b/qom/cpu.c
> index be8393e589..2c05aa1bca 100644
> --- a/qom/cpu.c
> +++ b/qom/cpu.c
> @@ -371,7 +371,8 @@ static void cpu_common_initfn(Object *obj)
>      cpu->nr_cores = 1;
>      cpu->nr_threads = 1;
>
> -    qemu_mutex_init(&cpu->lock);
> +    cpu->lock = g_new(QemuMutex, 1);
> +    qemu_mutex_init(cpu->lock);
>      qemu_cond_init(&cpu->cond);
>      QSIMPLEQ_INIT(&cpu->work_list);
>      QTAILQ_INIT(&cpu->breakpoints);
> diff --git a/stubs/cpu-lock.c b/stubs/cpu-lock.c
> index 3f07d3a28b..7406a66d97 100644
> --- a/stubs/cpu-lock.c
> +++ b/stubs/cpu-lock.c
> @@ -5,16 +5,16 @@ void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
>  {
>  /* coverity gets confused by the indirect function call */
>  #ifdef __COVERITY__
> -    qemu_mutex_lock_impl(&cpu->lock, file, line);
> +    qemu_mutex_lock_impl(cpu->lock, file, line);
>  #else
>      QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
> -    f(&cpu->lock, file, line);
> +    f(cpu->lock, file, line);
>  #endif
>  }
>
>  void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
>  {
> -    qemu_mutex_unlock_impl(&cpu->lock, file, line);
> +    qemu_mutex_unlock_impl(cpu->lock, file, line);
>  }
>
>  bool cpu_mutex_locked(const CPUState *cpu)


--
Alex Bennée
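
The counting trick used by rr_cpu_mutex_lock()/rr_cpu_mutex_unlock() in the patch above — a non-recursive mutex made effectively recursive by a thread-local acquisition count — can be sketched outside QEMU. This is an illustrative, single-threaded model, not QEMU code: `fake_mutex_lock`/`fake_mutex_unlock` and `mutex_held` are stand-ins for `qemu_mutex_lock(&qemu_global_mutex)` and its state.

```c
/* Illustrative sketch (not QEMU code): a non-recursive lock made
 * effectively recursive for one thread by keeping a per-thread
 * acquisition count, mirroring iothread_lock_count in the patch. */
#include <assert.h>
#include <stdbool.h>

static bool mutex_held;               /* stand-in for qemu_global_mutex */
static _Thread_local int lock_count;  /* like iothread_lock_count */

static void fake_mutex_lock(void)   { assert(!mutex_held); mutex_held = true; }
static void fake_mutex_unlock(void) { assert(mutex_held);  mutex_held = false; }

/* Acquire the underlying mutex only on the outermost lock. */
static void rr_lock(void)
{
    if (lock_count++ == 0) {
        fake_mutex_lock();
    }
}

/* Release it only when the outermost lock level is dropped. */
static void rr_unlock(void)
{
    assert(lock_count > 0);
    if (--lock_count == 0) {
        fake_mutex_unlock();
    }
}
```

Nested rr_lock() calls from the same thread never touch the underlying mutex again, which is why the BQL can stay non-recursive at the qemu_mutex_lock_iothread() API level.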


* Re: [Qemu-devel] [PATCH v6 08/73] tcg-runtime: define helper_cpu_halted_set
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 08/73] tcg-runtime: define helper_cpu_halted_set Emilio G. Cota
@ 2019-02-07 12:40   ` Alex Bennée
  0 siblings, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-07 12:40 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  accel/tcg/tcg-runtime.h | 2 ++
>  accel/tcg/tcg-runtime.c | 7 +++++++
>  2 files changed, 9 insertions(+)
>
> diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h
> index dfe325625c..46386bb564 100644
> --- a/accel/tcg/tcg-runtime.h
> +++ b/accel/tcg/tcg-runtime.h
> @@ -28,6 +28,8 @@ DEF_HELPER_FLAGS_1(lookup_tb_ptr, TCG_CALL_NO_WG_SE, ptr, env)
>
>  DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env)
>
> +DEF_HELPER_FLAGS_2(cpu_halted_set, TCG_CALL_NO_RWG, void, env, i32)
> +
>  #ifdef CONFIG_SOFTMMU
>
>  DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG,
> diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
> index d0d4484406..4aa038465f 100644
> --- a/accel/tcg/tcg-runtime.c
> +++ b/accel/tcg/tcg-runtime.c
> @@ -167,3 +167,10 @@ void HELPER(exit_atomic)(CPUArchState *env)
>  {
>      cpu_loop_exit_atomic(ENV_GET_CPU(env), GETPC());
>  }
> +
> +void HELPER(cpu_halted_set)(CPUArchState *env, uint32_t val)
> +{
> +    CPUState *cpu = ENV_GET_CPU(env);
> +
> +    cpu->halted = val;
> +}


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 30/73] cpu-exec: convert to cpu_halted
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 30/73] cpu-exec: " Emilio G. Cota
@ 2019-02-07 12:44   ` Alex Bennée
  0 siblings, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-07 12:44 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  accel/tcg/cpu-exec.c | 25 +++++++++++++++++++++----
>  1 file changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> index 6c4a33262f..e3d72897e8 100644
> --- a/accel/tcg/cpu-exec.c
> +++ b/accel/tcg/cpu-exec.c
> @@ -425,14 +425,21 @@ static inline TranslationBlock *tb_find(CPUState *cpu,
>      return tb;
>  }
>
> -static inline bool cpu_handle_halt(CPUState *cpu)
> +static inline bool cpu_handle_halt_locked(CPUState *cpu)
>  {
> -    if (cpu->halted) {
> +    g_assert(cpu_mutex_locked(cpu));
> +
> +    if (cpu_halted(cpu)) {
>  #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
>          if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
>              && replay_interrupt()) {
>              X86CPU *x86_cpu = X86_CPU(cpu);
> +
> +            /* prevent deadlock; cpu_mutex must be acquired _after_ the BQL */
> +            cpu_mutex_unlock(cpu);
>              qemu_mutex_lock_iothread();
> +            cpu_mutex_lock(cpu);
> +

*sigh* this is still ugly code that I wish we could abstract out of the
common code path. But I guess x86 wants to be special...

Nevertheless:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>


>              apic_poll_irq(x86_cpu->apic_state);
>              cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
>              qemu_mutex_unlock_iothread();
> @@ -442,12 +449,22 @@ static inline bool cpu_handle_halt(CPUState *cpu)
>              return true;
>          }
>
> -        cpu->halted = 0;
> +        cpu_halted_set(cpu, 0);
>      }
>
>      return false;
>  }
>
> +static inline bool cpu_handle_halt(CPUState *cpu)
> +{
> +    bool ret;
> +
> +    cpu_mutex_lock(cpu);
> +    ret = cpu_handle_halt_locked(cpu);
> +    cpu_mutex_unlock(cpu);
> +    return ret;
> +}
> +
>  static inline void cpu_handle_debug_exception(CPUState *cpu)
>  {
>      CPUClass *cc = CPU_GET_CLASS(cpu);
> @@ -546,7 +563,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>          } else if (interrupt_request & CPU_INTERRUPT_HALT) {
>              replay_interrupt();
>              cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
> -            cpu->halted = 1;
> +            cpu_halted_set(cpu, 1);
>              cpu->exception_index = EXCP_HLT;
>              qemu_mutex_unlock_iothread();
>              return true;


--
Alex Bennée
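
The "drop the CPU lock, take the BQL, retake the CPU lock" dance from cpu_handle_halt_locked() above can be reduced to a sketch. This is an illustrative, single-threaded model, not QEMU code: `bql_lock`/`cpu_lock` and the held flags are stand-ins that only check the documented invariant that the BQL is acquired before the CPU lock.

```c
/* Illustrative sketch (not QEMU code): respecting the locking order
 * "BQL before CPU lock" when the BQL is needed from a context that
 * already holds the CPU lock. */
#include <assert.h>
#include <stdbool.h>

static bool bql_held;
static bool cpu_lock_held;

static void bql_lock(void)   { assert(!bql_held); bql_held = true; }
static void bql_unlock(void) { assert(bql_held);  bql_held = false; }
static void cpu_lock(void)   { assert(!cpu_lock_held); cpu_lock_held = true; }
static void cpu_unlock(void) { assert(cpu_lock_held);  cpu_lock_held = false; }

/* Taking the BQL while holding only the CPU lock would invert the
 * documented order, so release the CPU lock and reacquire it after. */
static void take_bql_from_cpu_context(void)
{
    assert(cpu_lock_held && !bql_held);
    cpu_unlock();
    bql_lock();   /* BQL first...            */
    cpu_lock();   /* ...CPU lock second: order respected */
}
```

Another thread may of course slip in between the unlock and the relock, so any state read under the CPU lock before the dance has to be revalidated afterwards — which is exactly why cpu_handle_halt_locked() rechecks its conditions.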


* Re: [Qemu-devel] [PATCH v6 31/73] cpu: convert to cpu_halted
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 31/73] cpu: " Emilio G. Cota
@ 2019-02-07 20:39   ` Alex Bennée
  2019-02-20 16:21   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-07 20:39 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> This finishes the conversion to cpu_halted.
>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  cpus.c    | 8 ++++----
>  qom/cpu.c | 2 +-
>  2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index c4fa3cc876..aee129c0b3 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -204,7 +204,7 @@ static bool cpu_thread_is_idle(CPUState *cpu)
>      if (cpu_is_stopped(cpu)) {
>          return true;
>      }
> -    if (!cpu->halted || cpu_has_work(cpu) ||
> +    if (!cpu_halted(cpu) || cpu_has_work(cpu) ||
>          kvm_halt_in_kernel()) {
>          return false;
>      }
> @@ -1686,7 +1686,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
>
>      cpu->thread_id = qemu_get_thread_id();
>      cpu->created = true;
> -    cpu->halted = 0;
> +    cpu_halted_set(cpu, 0);
>      current_cpu = cpu;
>
>      hax_init_vcpu(cpu);
> @@ -1845,7 +1845,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>                   *
>                   * cpu->halted should ensure we sleep in wait_io_event
>                   */
> -                g_assert(cpu->halted);
> +                g_assert(cpu_halted(cpu));
>                  break;
>              case EXCP_ATOMIC:
>                  qemu_mutex_unlock_iothread();
> @@ -2342,7 +2342,7 @@ CpuInfoList *qmp_query_cpus(Error **errp)
>          info->value = g_malloc0(sizeof(*info->value));
>          info->value->CPU = cpu->cpu_index;
>          info->value->current = (cpu == first_cpu);
> -        info->value->halted = cpu->halted;
> +        info->value->halted = cpu_halted(cpu);
>          info->value->qom_path = object_get_canonical_path(OBJECT(cpu));
>          info->value->thread_id = cpu->thread_id;
>  #if defined(TARGET_I386)
> diff --git a/qom/cpu.c b/qom/cpu.c
> index 2c05aa1bca..c5106d5af8 100644
> --- a/qom/cpu.c
> +++ b/qom/cpu.c
> @@ -261,7 +261,7 @@ static void cpu_common_reset(CPUState *cpu)
>      }
>
>      cpu->interrupt_request = 0;
> -    cpu->halted = 0;
> +    cpu_halted_set(cpu, 0);
>      cpu->mem_io_pc = 0;
>      cpu->mem_io_vaddr = 0;
>      cpu->icount_extra = 0;


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 38/73] arm: convert to cpu_interrupt_request
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 38/73] arm: convert to cpu_interrupt_request Emilio G. Cota
@ 2019-02-07 20:55   ` Alex Bennée
  0 siblings, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-07 20:55 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel, Paolo Bonzini, qemu-arm, Richard Henderson, Peter Maydell


Emilio G. Cota <cota@braap.org> writes:

> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: qemu-arm@nongnu.org
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
<snip>
>
> -    /* Hooks may change global state so BQL should be held, also the
> -     * BQL needs to be held for any modification of
> -     * cs->interrupt_request.
> -     */
> +    /* Hooks may change global state so BQL should be held */
>      g_assert(qemu_mutex_iothread_locked());
>
>      arm_call_pre_el_change_hook(cpu);

So I dug into this, and currently the only user is pmu_pre_el_change,
which updates a bunch of the performance counters. While profiling the
running system, the other thing that showed up was:

  void HELPER(set_cp_reg)(CPUARMState *env, void *rip, uint32_t value)
  {
      const ARMCPRegInfo *ri = rip;

      if (ri->type & ARM_CP_IO) {
          qemu_mutex_lock_iothread();
          ri->writefn(env, ri, value);
          qemu_mutex_unlock_iothread();
      } else {
          ri->writefn(env, ri, value);
      }
  }

And friends. I'm wondering now if these are candidates for using CPU
locks. We mention it in our docs (which might be a little out of date now):

  MMIO access automatically serialises hardware emulation by way of the
  BQL. Currently ARM targets serialise all ARM_CP_IO register accesses
  and also defer the reset/startup of vCPUs to the vCPU context by way
  of async_run_on_cpu().

For registers that affect things like the GIC, which can have cross-vCPU
effects, I suspect the race-free IRQ raising might be enough. Are we
over-locking if a vCPU can only read/write its own CP regs?

Peter any thoughts?

Anyway as far as this patch goes:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 39/73] i386: convert to cpu_interrupt_request
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 39/73] i386: " Emilio G. Cota
@ 2019-02-08 11:00   ` Alex Bennée
  2019-03-02 22:48     ` Emilio G. Cota
  0 siblings, 1 reply; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 11:00 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/i386/cpu.c        | 2 +-
>  target/i386/helper.c     | 4 ++--
>  target/i386/svm_helper.c | 4 ++--
>  3 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index a37b984b61..35dea8c152 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -5678,7 +5678,7 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
>
>  static bool x86_cpu_has_work(CPUState *cs)
>  {
> -    return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
> +    return x86_cpu_pending_interrupt(cs, cpu_interrupt_request(cs))
>  != 0;

This is fine in itself, but is there a chance of a race with the
env->eflags/hflags/hflags2 that x86_cpu_pending_interrupt deals with?
Are they only ever self-vCPU references?

Anyway:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 40/73] i386/kvm: convert to cpu_interrupt_request
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 40/73] i386/kvm: " Emilio G. Cota
@ 2019-02-08 11:15   ` Alex Bennée
  2019-03-02 23:14     ` Emilio G. Cota
  0 siblings, 1 reply; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 11:15 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/i386/kvm.c | 54 +++++++++++++++++++++++++++--------------------
>  1 file changed, 31 insertions(+), 23 deletions(-)
>
> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> index ca2629f0fe..3f3c670897 100644
> --- a/target/i386/kvm.c
> +++ b/target/i386/kvm.c
<snip>
> @@ -3183,14 +3186,14 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
>  {
>      X86CPU *x86_cpu = X86_CPU(cpu);
>      CPUX86State *env = &x86_cpu->env;
> +    uint32_t interrupt_request;
>      int ret;
>
> +    interrupt_request = cpu_interrupt_request(cpu);
>      /* Inject NMI */
> -    if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
> -        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
> -            qemu_mutex_lock_iothread();
> +    if (interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
> +        if (interrupt_request & CPU_INTERRUPT_NMI) {
>              cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
> -            qemu_mutex_unlock_iothread();
>              DPRINTF("injected NMI\n");
>              ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
>              if (ret < 0) {
> @@ -3198,10 +3201,8 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
>                          strerror(-ret));
>              }
>          }
> -        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
> -            qemu_mutex_lock_iothread();
> +        if (interrupt_request & CPU_INTERRUPT_SMI) {
>              cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
> -            qemu_mutex_unlock_iothread();
>              DPRINTF("injected SMI\n");
>              ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
>              if (ret < 0) {
> @@ -3215,16 +3216,18 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
>          qemu_mutex_lock_iothread();
>      }
>
> +    interrupt_request = cpu_interrupt_request(cpu);
> +

This seems a bit smelly, as we have already read interrupt_request once
before. It implies that something may have triggered an IRQ while we
were dealing with the above. It deserves a comment at least.

Otherwise:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 41/73] i386/hax-all: convert to cpu_interrupt_request
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 41/73] i386/hax-all: " Emilio G. Cota
@ 2019-02-08 11:20   ` Alex Bennée
  0 siblings, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 11:20 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  target/i386/hax-all.c | 30 +++++++++++++++++-------------
>  1 file changed, 17 insertions(+), 13 deletions(-)
>
> diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
> index 518c6ff103..18da1808c6 100644
> --- a/target/i386/hax-all.c
> +++ b/target/i386/hax-all.c
> @@ -284,7 +284,7 @@ int hax_vm_destroy(struct hax_vm *vm)
>
>  static void hax_handle_interrupt(CPUState *cpu, int mask)
>  {
> -    cpu->interrupt_request |= mask;
> +    cpu_interrupt_request_or(cpu, mask);
>
>      if (!qemu_cpu_is_self(cpu)) {
>          qemu_cpu_kick(cpu);
> @@ -418,7 +418,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
>       * Unlike KVM, HAX kernel check for the eflags, instead of qemu
>       */
>      if (ht->ready_for_interrupt_injection &&
> -        (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
> +        (cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
>          int irq;
>
>          irq = cpu_get_pic_interrupt(env);
> @@ -432,7 +432,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
>       * interrupt, request an interrupt window exit.  This will
>       * cause a return to userspace as soon as the guest is ready to
>       * receive interrupts. */
> -    if ((cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
> +    if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
>          ht->request_interrupt_window = 1;
>      } else {
>          ht->request_interrupt_window = 0;
> @@ -473,19 +473,19 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
>
>      cpu_halted_set(cpu, 0);
>
> -    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
> +    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL) {
>          cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
>          apic_poll_irq(x86_cpu->apic_state);
>      }
>
> -    if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
> +    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) {
>          DPRINTF("\nhax_vcpu_hax_exec: handling INIT for %d\n",
>                  cpu->cpu_index);
>          do_cpu_init(x86_cpu);
>          hax_vcpu_sync_state(env, 1);
>      }
>
> -    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
> +    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_SIPI) {
>          DPRINTF("hax_vcpu_hax_exec: handling SIPI for %d\n",
>                  cpu->cpu_index);
>          hax_vcpu_sync_state(env, 0);
> @@ -544,13 +544,17 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
>              ret = -1;
>              break;
>          case HAX_EXIT_HLT:
> -            if (!(cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
> -                !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
> -                /* hlt instruction with interrupt disabled is shutdown */
> -                env->eflags |= IF_MASK;
> -                cpu_halted_set(cpu, 1);
> -                cpu->exception_index = EXCP_HLT;
> -                ret = 1;
> +            {
> +                uint32_t interrupt_request = cpu_interrupt_request(cpu);
> +
> +                if (!(interrupt_request & CPU_INTERRUPT_HARD) &&
> +                    !(interrupt_request & CPU_INTERRUPT_NMI)) {
> +                    /* hlt instruction with interrupt disabled is shutdown */
> +                    env->eflags |= IF_MASK;
> +                    cpu_halted_set(cpu, 1);
> +                    cpu->exception_index = EXCP_HLT;
> +                    ret = 1;
> +                }
>              }
>              break;
>          /* these situations will continue to hax module */


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request Emilio G. Cota
@ 2019-02-08 11:21   ` Alex Bennée
  2019-02-20 16:55   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 11:21 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> This finishes the conversion to interrupt_request.
>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  qom/cpu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/qom/cpu.c b/qom/cpu.c
> index 00add81a7f..f2695be9b2 100644
> --- a/qom/cpu.c
> +++ b/qom/cpu.c
> @@ -275,7 +275,7 @@ static void cpu_common_reset(CPUState *cpu)
>          log_cpu_state(cpu, cc->reset_dump_flags);
>      }
>
> -    cpu->interrupt_request = 0;
> +    cpu_interrupt_request_set(cpu, 0);
>      cpu_halted_set(cpu, 0);
>      cpu->mem_io_pc = 0;
>      cpu->mem_io_vaddr = 0;
> @@ -412,7 +412,7 @@ static vaddr cpu_adjust_watchpoint_address(CPUState *cpu, vaddr addr, int len)
>
>  static void generic_handle_interrupt(CPUState *cpu, int mask)
>  {
> -    cpu->interrupt_request |= mask;
> +    cpu_interrupt_request_or(cpu, mask);
>
>      if (!qemu_cpu_is_self(cpu)) {
>          qemu_cpu_kick(cpu);


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 61/73] cpu: call .cpu_has_work with the CPU lock held
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 61/73] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
@ 2019-02-08 11:22   ` Alex Bennée
  0 siblings, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 11:22 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  include/qom/cpu.h | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 4a87c1fef7..96a5d0cb94 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -85,7 +85,8 @@ struct TranslationBlock;
>   * @parse_features: Callback to parse command line arguments.
>   * @reset: Callback to reset the #CPUState to its initial state.
>   * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
> - * @has_work: Callback for checking if there is work to do.
> + * @has_work: Callback for checking if there is work to do. Called with the
> + * CPU lock held.
>   * @do_interrupt: Callback for interrupt handling.
>   * @do_unassigned_access: Callback for unassigned access handling.
>   * (this is deprecated: new targets should use do_transaction_failed instead)
> @@ -795,9 +796,16 @@ const char *parse_cpu_model(const char *cpu_model);
>  static inline bool cpu_has_work(CPUState *cpu)
>  {
>      CPUClass *cc = CPU_GET_CLASS(cpu);
> +    bool ret;
>
>      g_assert(cc->has_work);
> -    return cc->has_work(cpu);
> +    if (cpu_mutex_locked(cpu)) {
> +        return cc->has_work(cpu);
> +    }
> +    cpu_mutex_lock(cpu);
> +    ret = cc->has_work(cpu);
> +    cpu_mutex_unlock(cpu);
> +    return ret;
>  }
>
>  /**


--
Alex Bennée
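
The "take the lock only if the caller does not already hold it" idiom that cpu_has_work() uses above can be sketched in isolation. This is an illustrative model, not QEMU code: FakeCPU and its helpers are hypothetical stand-ins for CPUState, cpu_mutex_lock/unlock, cpu_mutex_locked and the cc->has_work callback.

```c
/* Illustrative sketch (not QEMU code): call a callback whose contract
 * requires the per-CPU lock, whether or not the caller already holds
 * it, and leave the caller's locking state unchanged. */
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool lock_held;     /* stand-in for cpu_mutex_locked(cpu) */
    bool work_pending;
} FakeCPU;

static void fake_cpu_lock(FakeCPU *c)   { assert(!c->lock_held); c->lock_held = true; }
static void fake_cpu_unlock(FakeCPU *c) { assert(c->lock_held);  c->lock_held = false; }

/* Callback contract (like cc->has_work): lock must be held. */
static bool has_work_cb(FakeCPU *c)
{
    assert(c->lock_held);
    return c->work_pending;
}

static bool fake_cpu_has_work(FakeCPU *c)
{
    bool ret;

    if (c->lock_held) {
        return has_work_cb(c);      /* caller holds the lock: just call */
    }
    fake_cpu_lock(c);               /* otherwise take it around the call */
    ret = has_work_cb(c);
    fake_cpu_unlock(c);
    return ret;
}
```

Because the helper checks ownership first, it can be called from both locked and unlocked contexts without deadlocking on a non-recursive per-CPU mutex.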


* Re: [Qemu-devel] [PATCH v6 62/73] cpu: introduce cpu_has_work_with_iothread_lock
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 62/73] cpu: introduce cpu_has_work_with_iothread_lock Emilio G. Cota
@ 2019-02-08 11:33   ` Alex Bennée
  2019-03-03 19:52     ` Emilio G. Cota
  0 siblings, 1 reply; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 11:33 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> It will gain some users soon.
>
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h | 36 +++++++++++++++++++++++++++++++++---
>  1 file changed, 33 insertions(+), 3 deletions(-)
>
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 96a5d0cb94..27a80bc113 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -27,6 +27,7 @@
>  #include "qapi/qapi-types-run-state.h"
>  #include "qemu/bitmap.h"
>  #include "qemu/fprintf-fn.h"
> +#include "qemu/main-loop.h"
>  #include "qemu/rcu_queue.h"
>  #include "qemu/queue.h"
>  #include "qemu/thread.h"
> @@ -87,6 +88,8 @@ struct TranslationBlock;
>   * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
>   * @has_work: Callback for checking if there is work to do. Called with the
>   * CPU lock held.
> + * @has_work_with_iothread_lock: Callback for checking if there is work to do.
> + * Called with both the BQL and the CPU lock held.
>   * @do_interrupt: Callback for interrupt handling.
>   * @do_unassigned_access: Callback for unassigned access handling.
>   * (this is deprecated: new targets should use do_transaction_failed instead)
> @@ -158,6 +161,7 @@ typedef struct CPUClass {
>      void (*reset)(CPUState *cpu);
>      int reset_dump_flags;
>      bool (*has_work)(CPUState *cpu);
> +    bool (*has_work_with_iothread_lock)(CPUState *cpu);
>      void (*do_interrupt)(CPUState *cpu);
>      CPUUnassignedAccess do_unassigned_access;
>      void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
> @@ -796,14 +800,40 @@ const char *parse_cpu_model(const char *cpu_model);
>  static inline bool cpu_has_work(CPUState *cpu)
>  {
>      CPUClass *cc = CPU_GET_CLASS(cpu);
> +    bool has_cpu_lock = cpu_mutex_locked(cpu);
> +    bool (*func)(CPUState *cpu);
>      bool ret;
>
> +    if (cc->has_work_with_iothread_lock) {
> +        if (qemu_mutex_iothread_locked()) {
> +            func = cc->has_work_with_iothread_lock;
> +            goto call_func;
> +        }
> +
> +        if (has_cpu_lock) {
> +            /* avoid deadlock by acquiring the locks in order */

This is fine here but can we expand the comment above:

 * cpu_has_work:
 * @cpu: The vCPU to check.
 *
 * Checks whether the CPU has work to do. If the vCPU helper needs to
 * check its work status with the BQL held, ensure we hold the BQL
 * before taking the CPU lock.

Where is our canonical description of the locking interaction between
BQL and CPU locks?

Otherwise:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>


> +            cpu_mutex_unlock(cpu);
> +        }
> +        qemu_mutex_lock_iothread();
> +        cpu_mutex_lock(cpu);
> +
> +        ret = cc->has_work_with_iothread_lock(cpu);
> +
> +        qemu_mutex_unlock_iothread();
> +        if (!has_cpu_lock) {
> +            cpu_mutex_unlock(cpu);
> +        }
> +        return ret;
> +    }
> +
>      g_assert(cc->has_work);
> -    if (cpu_mutex_locked(cpu)) {
> -        return cc->has_work(cpu);
> +    func = cc->has_work;
> + call_func:
> +    if (has_cpu_lock) {
> +        return func(cpu);
>      }
>      cpu_mutex_lock(cpu);
> -    ret = cc->has_work(cpu);
> +    ret = func(cpu);
>      cpu_mutex_unlock(cpu);
>      return ret;
>  }


--
Alex Bennée

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
@ 2019-02-08 11:34   ` Alex Bennée
  2019-02-20 17:01   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 11:34 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> This function is only called from TCG rr mode, so add
> a prefix to mark this as well as an assertion.
>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  cpus.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index aee129c0b3..0d255c2655 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -211,10 +211,12 @@ static bool cpu_thread_is_idle(CPUState *cpu)
>      return true;
>  }
>
> -static bool all_cpu_threads_idle(void)
> +static bool qemu_tcg_rr_all_cpu_threads_idle(void)
>  {
>      CPUState *cpu;
>
> +    g_assert(qemu_is_tcg_rr());
> +
>      CPU_FOREACH(cpu) {
>          if (!cpu_thread_is_idle(cpu)) {
>              return false;
> @@ -692,7 +694,7 @@ void qemu_start_warp_timer(void)
>      }
>
>      if (replay_mode != REPLAY_MODE_PLAY) {
> -        if (!all_cpu_threads_idle()) {
> +        if (!qemu_tcg_rr_all_cpu_threads_idle()) {
>              return;
>          }
>
> @@ -1325,7 +1327,7 @@ static void qemu_tcg_rr_wait_io_event(void)
>  {
>      CPUState *cpu;
>
> -    while (all_cpu_threads_idle()) {
> +    while (qemu_tcg_rr_all_cpu_threads_idle()) {
>          stop_tcg_kick_timer();
>          qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
>      }
> @@ -1659,7 +1661,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
>              atomic_mb_set(&cpu->exit_request, 0);
>          }
>
> -        if (use_icount && all_cpu_threads_idle()) {
> +        if (use_icount && qemu_tcg_rr_all_cpu_threads_idle()) {
>              /*
>               * When all cpus are sleeping (e.g in WFI), to avoid a deadlock
>               * in the main_loop, wake it up in order to start the warp timer.


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
@ 2019-02-08 14:33   ` Alex Bennée
  2019-02-20 17:25   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 14:33 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Use the per-CPU locks to protect the CPUs' state, instead of
> using the BQL. These locks are uncontended (they are mostly
> acquired by the corresponding vCPU thread), so acquiring them
> is cheaper than acquiring the BQL, which particularly in
> MTTCG can be contended at high core counts.
>
> In this conversion we drop qemu_cpu_cond and qemu_pause_cond,
> and use cpu->cond instead.
>
> In qom/cpu.c we can finally remove the ugliness that
> results from having to hold both the BQL and the CPU lock;
> now we just have to grab the CPU lock.

Ahh I see....

>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  include/qom/cpu.h |  20 ++--
>  cpus.c            | 280 ++++++++++++++++++++++++++++++++++------------
>  qom/cpu.c         |  29 +----
>  3 files changed, 225 insertions(+), 104 deletions(-)
>
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 27a80bc113..30ed2fae0b 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -297,10 +297,6 @@ struct qemu_work_item;
>   * valid under cpu_list_lock.
>   * @created: Indicates whether the CPU thread has been successfully created.
>   * @interrupt_request: Indicates a pending interrupt request.
> - * @halted: Nonzero if the CPU is in suspended state.
> - * @stop: Indicates a pending stop request.
> - * @stopped: Indicates the CPU has been artificially stopped.
> - * @unplug: Indicates a pending CPU unplug request.
>   * @crash_occurred: Indicates the OS reported a crash (panic) for this CPU
>   * @singlestep_enabled: Flags for single-stepping.
>   * @icount_extra: Instructions until next timer event.
> @@ -329,6 +325,10 @@ struct qemu_work_item;
>   * @lock: Lock to prevent multiple access to per-CPU fields.
>   * @cond: Condition variable for per-CPU events.
>   * @work_list: List of pending asynchronous work.
> + * @halted: Nonzero if the CPU is in suspended state.
> + * @stop: Indicates a pending stop request.
> + * @stopped: Indicates the CPU has been artificially stopped.
> + * @unplug: Indicates a pending CPU unplug request.
>   * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
>   *                        to @trace_dstate).
>   * @trace_dstate: Dynamic tracing state of events for this vCPU (bitmask).
> @@ -352,12 +352,7 @@ struct CPUState {
>  #endif
>      int thread_id;
>      bool running, has_waiter;
> -    struct QemuCond *halt_cond;
>      bool thread_kicked;
> -    bool created;
> -    bool stop;
> -    bool stopped;
> -    bool unplug;
>      bool crash_occurred;
>      bool exit_request;
>      uint32_t cflags_next_tb;
> @@ -371,7 +366,13 @@ struct CPUState {
>      QemuMutex *lock;
>      /* fields below protected by @lock */
>      QemuCond cond;
> +    QemuCond *halt_cond;
>      QSIMPLEQ_HEAD(, qemu_work_item) work_list;
> +    uint32_t halted;
> +    bool created;
> +    bool stop;
> +    bool stopped;
> +    bool unplug;
>
>      CPUAddressSpace *cpu_ases;
>      int num_ases;
> @@ -419,7 +420,6 @@ struct CPUState {
>      /* TODO Move common fields from CPUArchState here. */
>      int cpu_index;
>      int cluster_index;
> -    uint32_t halted;
>      uint32_t can_do_io;
>      int32_t exception_index;
>
> diff --git a/cpus.c b/cpus.c
> index 0d255c2655..4f17fe25bf 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -181,24 +181,30 @@ bool cpu_mutex_locked(const CPUState *cpu)
>      return test_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
>  }
>
> -bool cpu_is_stopped(CPUState *cpu)
> +/* Called with the CPU's lock held */
> +static bool cpu_is_stopped_locked(CPUState *cpu)
>  {
>      return cpu->stopped || !runstate_is_running();
>  }
>
> -static inline bool cpu_work_list_empty(CPUState *cpu)
> +bool cpu_is_stopped(CPUState *cpu)
>  {
> -    bool ret;
> +    if (!cpu_mutex_locked(cpu)) {
> +        bool ret;
>
> -    cpu_mutex_lock(cpu);
> -    ret = QSIMPLEQ_EMPTY(&cpu->work_list);
> -    cpu_mutex_unlock(cpu);
> -    return ret;
> +        cpu_mutex_lock(cpu);
> +        ret = cpu_is_stopped_locked(cpu);
> +        cpu_mutex_unlock(cpu);
> +        return ret;
> +    }
> +    return cpu_is_stopped_locked(cpu);
>  }
>
>  static bool cpu_thread_is_idle(CPUState *cpu)
>  {
> -    if (cpu->stop || !cpu_work_list_empty(cpu)) {
> +    g_assert(cpu_mutex_locked(cpu));
> +
> +    if (cpu->stop || !QSIMPLEQ_EMPTY(&cpu->work_list)) {
>          return false;
>      }
>      if (cpu_is_stopped(cpu)) {
> @@ -216,9 +222,17 @@ static bool qemu_tcg_rr_all_cpu_threads_idle(void)
>      CPUState *cpu;
>
>      g_assert(qemu_is_tcg_rr());
> +    g_assert(qemu_mutex_iothread_locked());
> +    g_assert(no_cpu_mutex_locked());
>
>      CPU_FOREACH(cpu) {
> -        if (!cpu_thread_is_idle(cpu)) {
> +        bool is_idle;
> +
> +        cpu_mutex_lock(cpu);
> +        is_idle = cpu_thread_is_idle(cpu);
> +        cpu_mutex_unlock(cpu);
> +
> +        if (!is_idle) {
>              return false;
>          }
>      }
> @@ -780,6 +794,8 @@ void qemu_start_warp_timer(void)
>
>  static void qemu_account_warp_timer(void)
>  {
> +    g_assert(qemu_mutex_iothread_locked());
> +
>      if (!use_icount || !icount_sleep) {
>          return;
>      }
> @@ -1090,6 +1106,7 @@ static void kick_tcg_thread(void *opaque)
>  static void start_tcg_kick_timer(void)
>  {
>      assert(!mttcg_enabled);
> +    g_assert(qemu_mutex_iothread_locked());
>      if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
>          tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
>                                             kick_tcg_thread, NULL);
> @@ -1102,6 +1119,7 @@ static void start_tcg_kick_timer(void)
>  static void stop_tcg_kick_timer(void)
>  {
>      assert(!mttcg_enabled);
> +    g_assert(qemu_mutex_iothread_locked());
>      if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) {
>          timer_del(tcg_kick_vcpu_timer);
>      }
> @@ -1204,6 +1222,8 @@ int vm_shutdown(void)
>
>  static bool cpu_can_run(CPUState *cpu)
>  {
> +    g_assert(cpu_mutex_locked(cpu));
> +
>      if (cpu->stop) {
>          return false;
>      }
> @@ -1276,16 +1296,9 @@ static void qemu_init_sigbus(void)
>
>  static QemuThread io_thread;
>
> -/* cpu creation */
> -static QemuCond qemu_cpu_cond;
> -/* system init */
> -static QemuCond qemu_pause_cond;
> -
>  void qemu_init_cpu_loop(void)
>  {
>      qemu_init_sigbus();
> -    qemu_cond_init(&qemu_cpu_cond);
> -    qemu_cond_init(&qemu_pause_cond);
>      qemu_mutex_init(&qemu_global_mutex);
>
>      qemu_thread_get_self(&io_thread);
> @@ -1303,46 +1316,70 @@ static void qemu_tcg_destroy_vcpu(CPUState *cpu)
>  {
>  }
>
> -static void qemu_cpu_stop(CPUState *cpu, bool exit)
> +static void qemu_cpu_stop_locked(CPUState *cpu, bool exit)
>  {
> +    g_assert(cpu_mutex_locked(cpu));
>      g_assert(qemu_cpu_is_self(cpu));
>      cpu->stop = false;
>      cpu->stopped = true;
>      if (exit) {
>          cpu_exit(cpu);
>      }
> -    qemu_cond_broadcast(&qemu_pause_cond);
> +    qemu_cond_broadcast(&cpu->cond);
> +}
> +
> +static void qemu_cpu_stop(CPUState *cpu, bool exit)
> +{
> +    cpu_mutex_lock(cpu);
> +    qemu_cpu_stop_locked(cpu, exit);
> +    cpu_mutex_unlock(cpu);
>  }
>
>  static void qemu_wait_io_event_common(CPUState *cpu)
>  {
> +    g_assert(cpu_mutex_locked(cpu));
> +
>      atomic_mb_set(&cpu->thread_kicked, false);
>      if (cpu->stop) {
> -        qemu_cpu_stop(cpu, false);
> +        qemu_cpu_stop_locked(cpu, false);
>      }
> +    /*
> +     * unlock+lock cpu_mutex, so that other vCPUs have a chance to grab the
> +     * lock and queue some work for this vCPU.
> +     */
> +    cpu_mutex_unlock(cpu);
>      process_queued_cpu_work(cpu);
> +    cpu_mutex_lock(cpu);
>  }
>
>  static void qemu_tcg_rr_wait_io_event(void)
>  {
>      CPUState *cpu;
>
> +    g_assert(qemu_mutex_iothread_locked());
> +    g_assert(no_cpu_mutex_locked());
> +
>      while (qemu_tcg_rr_all_cpu_threads_idle()) {
>          stop_tcg_kick_timer();
> -        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
> +        qemu_cond_wait(first_cpu->halt_cond, first_cpu->lock);
>      }
>
>      start_tcg_kick_timer();
>
>      CPU_FOREACH(cpu) {
> +        cpu_mutex_lock(cpu);
>          qemu_wait_io_event_common(cpu);
> +        cpu_mutex_unlock(cpu);
>      }
>  }
>
>  static void qemu_wait_io_event(CPUState *cpu)
>  {
> +    g_assert(cpu_mutex_locked(cpu));
> +    g_assert(!qemu_mutex_iothread_locked());
> +
>      while (cpu_thread_is_idle(cpu)) {
> -        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
> +        qemu_cond_wait(cpu->halt_cond, cpu->lock);
>      }
>
>  #ifdef _WIN32
> @@ -1362,6 +1399,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>      rcu_register_thread();
>
>      qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
>      qemu_thread_get_self(cpu->thread);
>      cpu->thread_id = qemu_get_thread_id();
>      cpu->can_do_io = 1;
> @@ -1374,14 +1412,20 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>      }
>
>      kvm_init_cpu_signals(cpu);
> +    qemu_mutex_unlock_iothread();
>
>      /* signal CPU creation */
>      cpu->created = true;
> -    qemu_cond_signal(&qemu_cpu_cond);
> +    qemu_cond_signal(&cpu->cond);
>
>      do {
>          if (cpu_can_run(cpu)) {
> +            cpu_mutex_unlock(cpu);
> +            qemu_mutex_lock_iothread();
>              r = kvm_cpu_exec(cpu);
> +            qemu_mutex_unlock_iothread();
> +            cpu_mutex_lock(cpu);
> +
>              if (r == EXCP_DEBUG) {
>                  cpu_handle_guest_debug(cpu);
>              }
> @@ -1389,10 +1433,16 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>          qemu_wait_io_event(cpu);
>      } while (!cpu->unplug || cpu_can_run(cpu));
>
> +    cpu_mutex_unlock(cpu);
> +    qemu_mutex_lock_iothread();
>      qemu_kvm_destroy_vcpu(cpu);
> -    cpu->created = false;
> -    qemu_cond_signal(&qemu_cpu_cond);
>      qemu_mutex_unlock_iothread();
> +
> +    cpu_mutex_lock(cpu);
> +    cpu->created = false;
> +    qemu_cond_signal(&cpu->cond);
> +    cpu_mutex_unlock(cpu);
> +
>      rcu_unregister_thread();
>      return NULL;
>  }
> @@ -1409,7 +1459,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
>
>      rcu_register_thread();
>
> -    qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
>      qemu_thread_get_self(cpu->thread);
>      cpu->thread_id = qemu_get_thread_id();
>      cpu->can_do_io = 1;
> @@ -1420,10 +1470,10 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
>
>      /* signal CPU creation */
>      cpu->created = true;
> -    qemu_cond_signal(&qemu_cpu_cond);
> +    qemu_cond_signal(&cpu->cond);
>
>      do {
> -        qemu_mutex_unlock_iothread();
> +        cpu_mutex_unlock(cpu);
>          do {
>              int sig;
>              r = sigwait(&waitset, &sig);
> @@ -1432,10 +1482,11 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
>              perror("sigwait");
>              exit(1);
>          }
> -        qemu_mutex_lock_iothread();
> +        cpu_mutex_lock(cpu);
>          qemu_wait_io_event(cpu);
>      } while (!cpu->unplug);
>
> +    cpu_mutex_unlock(cpu);
>      rcu_unregister_thread();
>      return NULL;
>  #endif
> @@ -1466,6 +1517,8 @@ static int64_t tcg_get_icount_limit(void)
>  static void handle_icount_deadline(void)
>  {
>      assert(qemu_in_vcpu_thread());
> +    g_assert(qemu_mutex_iothread_locked());
> +
>      if (use_icount) {
>          int64_t deadline =
>              qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
> @@ -1546,12 +1599,15 @@ static void deal_with_unplugged_cpus(void)
>      CPUState *cpu;
>
>      CPU_FOREACH(cpu) {
> +        cpu_mutex_lock(cpu);
>          if (cpu->unplug && !cpu_can_run(cpu)) {
>              qemu_tcg_destroy_vcpu(cpu);
>              cpu->created = false;
> -            qemu_cond_signal(&qemu_cpu_cond);
> +            qemu_cond_signal(&cpu->cond);
> +            cpu_mutex_unlock(cpu);
>              break;
>          }
> +        cpu_mutex_unlock(cpu);
>      }
>  }
>
> @@ -1572,24 +1628,36 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
>      rcu_register_thread();
>      tcg_register_thread();
>
> +    /*
> +     * We call cpu_mutex_lock/unlock just to please the assertions in common
> +     * code, since here cpu->lock is an alias to the BQL.
> +     */
>      qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
>      qemu_thread_get_self(cpu->thread);
> -
>      cpu->thread_id = qemu_get_thread_id();
>      cpu->created = true;
>      cpu->can_do_io = 1;
> -    qemu_cond_signal(&qemu_cpu_cond);
> +    qemu_cond_signal(&cpu->cond);
> +    cpu_mutex_unlock(cpu);
>
>      /* wait for initial kick-off after machine start */
> +    cpu_mutex_lock(first_cpu);
>      while (first_cpu->stopped) {
> -        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
> +        qemu_cond_wait(first_cpu->halt_cond, first_cpu->lock);
> +        cpu_mutex_unlock(first_cpu);
>
>          /* process any pending work */
>          CPU_FOREACH(cpu) {
>              current_cpu = cpu;
> +            cpu_mutex_lock(cpu);
>              qemu_wait_io_event_common(cpu);
> +            cpu_mutex_unlock(cpu);
>          }
> +
> +        cpu_mutex_lock(first_cpu);
>      }
> +    cpu_mutex_unlock(first_cpu);
>
>      start_tcg_kick_timer();
>
> @@ -1616,7 +1684,12 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
>              cpu = first_cpu;
>          }
>
> -        while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
> +        while (cpu) {
> +            cpu_mutex_lock(cpu);
> +            if (!QSIMPLEQ_EMPTY(&cpu->work_list) || cpu->exit_request) {
> +                cpu_mutex_unlock(cpu);
> +                break;
> +            }
>
>              atomic_mb_set(&tcg_current_rr_cpu, cpu);
>              current_cpu = cpu;
> @@ -1627,6 +1700,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
>              if (cpu_can_run(cpu)) {
>                  int r;
>
> +                cpu_mutex_unlock(cpu);
>                  qemu_mutex_unlock_iothread();
>                  prepare_icount_for_run(cpu);
>
> @@ -1634,11 +1708,14 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
>
>                  process_icount_data(cpu);
>                  qemu_mutex_lock_iothread();
> +                cpu_mutex_lock(cpu);
>
>                  if (r == EXCP_DEBUG) {
>                      cpu_handle_guest_debug(cpu);
> +                    cpu_mutex_unlock(cpu);
>                      break;
>                  } else if (r == EXCP_ATOMIC) {
> +                    cpu_mutex_unlock(cpu);
>                      qemu_mutex_unlock_iothread();
>                      cpu_exec_step_atomic(cpu);
>                      qemu_mutex_lock_iothread();
> @@ -1648,11 +1725,15 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
>                  if (cpu->unplug) {
>                      cpu = CPU_NEXT(cpu);
>                  }
> +                cpu_mutex_unlock(current_cpu);
>                  break;
>              }
>
> +            cpu_mutex_unlock(cpu);
>              cpu = CPU_NEXT(cpu);
> -        } /* while (cpu && !cpu->exit_request).. */
> +        } /* for (;;) .. */
> +
> +        g_assert(no_cpu_mutex_locked());
>
>          /* Does not need atomic_mb_set because a spurious wakeup is okay.  */
>          atomic_set(&tcg_current_rr_cpu, NULL);
> @@ -1684,6 +1765,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
>
>      rcu_register_thread();
>      qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
>      qemu_thread_get_self(cpu->thread);
>
>      cpu->thread_id = qemu_get_thread_id();
> @@ -1692,11 +1774,17 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
>      current_cpu = cpu;
>
>      hax_init_vcpu(cpu);
> -    qemu_cond_signal(&qemu_cpu_cond);
> +    qemu_mutex_unlock_iothread();
> +    qemu_cond_signal(&cpu->cond);
>
>      do {
>          if (cpu_can_run(cpu)) {
> +            cpu_mutex_unlock(cpu);
> +            qemu_mutex_lock_iothread();
>              r = hax_smp_cpu_exec(cpu);
> +            qemu_mutex_unlock_iothread();
> +            cpu_mutex_lock(cpu);
> +
>              if (r == EXCP_DEBUG) {
>                  cpu_handle_guest_debug(cpu);
>              }
> @@ -1704,6 +1792,8 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
>
>          qemu_wait_io_event(cpu);
>      } while (!cpu->unplug || cpu_can_run(cpu));
> +
> +    cpu_mutex_unlock(cpu);
>      rcu_unregister_thread();
>      return NULL;
>  }
> @@ -1721,6 +1811,7 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
>      rcu_register_thread();
>
>      qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
>      qemu_thread_get_self(cpu->thread);
>
>      cpu->thread_id = qemu_get_thread_id();
> @@ -1728,14 +1819,20 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
>      current_cpu = cpu;
>
>      hvf_init_vcpu(cpu);
> +    qemu_mutex_unlock_iothread();
>
>      /* signal CPU creation */
>      cpu->created = true;
> -    qemu_cond_signal(&qemu_cpu_cond);
> +    qemu_cond_signal(&cpu->cond);
>
>      do {
>          if (cpu_can_run(cpu)) {
> +            cpu_mutex_unlock(cpu);
> +            qemu_mutex_lock_iothread();
>              r = hvf_vcpu_exec(cpu);
> +            qemu_mutex_unlock_iothread();
> +            cpu_mutex_lock(cpu);
> +
>              if (r == EXCP_DEBUG) {
>                  cpu_handle_guest_debug(cpu);
>              }
> @@ -1743,10 +1840,16 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
>          qemu_wait_io_event(cpu);
>      } while (!cpu->unplug || cpu_can_run(cpu));
>
> +    cpu_mutex_unlock(cpu);
> +    qemu_mutex_lock_iothread();
>      hvf_vcpu_destroy(cpu);
> -    cpu->created = false;
> -    qemu_cond_signal(&qemu_cpu_cond);
>      qemu_mutex_unlock_iothread();
> +
> +    cpu_mutex_lock(cpu);
> +    cpu->created = false;
> +    qemu_cond_signal(&cpu->cond);
> +    cpu_mutex_unlock(cpu);
> +
>      rcu_unregister_thread();
>      return NULL;
>  }
> @@ -1759,6 +1862,7 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
>      rcu_register_thread();
>
>      qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
>      qemu_thread_get_self(cpu->thread);
>      cpu->thread_id = qemu_get_thread_id();
>      current_cpu = cpu;
> @@ -1768,28 +1872,40 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
>          fprintf(stderr, "whpx_init_vcpu failed: %s\n", strerror(-r));
>          exit(1);
>      }
> +    qemu_mutex_unlock_iothread();
>
>      /* signal CPU creation */
>      cpu->created = true;
> -    qemu_cond_signal(&qemu_cpu_cond);
> +    qemu_cond_signal(&cpu->cond);
>
>      do {
>          if (cpu_can_run(cpu)) {
> +            cpu_mutex_unlock(cpu);
> +            qemu_mutex_lock_iothread();
>              r = whpx_vcpu_exec(cpu);
> +            qemu_mutex_unlock_iothread();
> +            cpu_mutex_lock(cpu);
> +
>              if (r == EXCP_DEBUG) {
>                  cpu_handle_guest_debug(cpu);
>              }
>          }
>          while (cpu_thread_is_idle(cpu)) {
> -            qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
> +            qemu_cond_wait(cpu->halt_cond, cpu->lock);
>          }
>          qemu_wait_io_event_common(cpu);
>      } while (!cpu->unplug || cpu_can_run(cpu));
>
> +    cpu_mutex_unlock(cpu);
> +    qemu_mutex_lock_iothread();
>      whpx_destroy_vcpu(cpu);
> -    cpu->created = false;
> -    qemu_cond_signal(&qemu_cpu_cond);
>      qemu_mutex_unlock_iothread();
> +
> +    cpu_mutex_lock(cpu);
> +    cpu->created = false;
> +    qemu_cond_signal(&cpu->cond);
> +    cpu_mutex_unlock(cpu);
> +
>      rcu_unregister_thread();
>      return NULL;
>  }
> @@ -1817,14 +1933,14 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>      rcu_register_thread();
>      tcg_register_thread();
>
> -    qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
>      qemu_thread_get_self(cpu->thread);
>
>      cpu->thread_id = qemu_get_thread_id();
>      cpu->created = true;
>      cpu->can_do_io = 1;
>      current_cpu = cpu;
> -    qemu_cond_signal(&qemu_cpu_cond);
> +    qemu_cond_signal(&cpu->cond);
>
>      /* process any pending work */
>      cpu->exit_request = 1;
> @@ -1832,9 +1948,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>      do {
>          if (cpu_can_run(cpu)) {
>              int r;
> -            qemu_mutex_unlock_iothread();
> +            cpu_mutex_unlock(cpu);
>              r = tcg_cpu_exec(cpu);
> -            qemu_mutex_lock_iothread();
> +            cpu_mutex_lock(cpu);
>              switch (r) {
>              case EXCP_DEBUG:
>                  cpu_handle_guest_debug(cpu);
> @@ -1850,9 +1966,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>                  g_assert(cpu_halted(cpu));
>                  break;
>              case EXCP_ATOMIC:
> -                qemu_mutex_unlock_iothread();
> +                cpu_mutex_unlock(cpu);
>                  cpu_exec_step_atomic(cpu);
> -                qemu_mutex_lock_iothread();
> +                cpu_mutex_lock(cpu);
>              default:
>                  /* Ignore everything else? */
>                  break;
> @@ -1865,8 +1981,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>
>      qemu_tcg_destroy_vcpu(cpu);
>      cpu->created = false;
> -    qemu_cond_signal(&qemu_cpu_cond);
> -    qemu_mutex_unlock_iothread();
> +    qemu_cond_signal(&cpu->cond);
> +    cpu_mutex_unlock(cpu);
>      rcu_unregister_thread();
>      return NULL;
>  }
> @@ -1966,54 +2082,69 @@ void qemu_mutex_unlock_iothread(void)
>      }
>  }
>
> -static bool all_vcpus_paused(void)
> -{
> -    CPUState *cpu;
> -
> -    CPU_FOREACH(cpu) {
> -        if (!cpu->stopped) {
> -            return false;
> -        }
> -    }
> -
> -    return true;
> -}
> -
>  void pause_all_vcpus(void)
>  {
>      CPUState *cpu;
>
> +    g_assert(no_cpu_mutex_locked());
> +
>      qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
>      CPU_FOREACH(cpu) {
> +        cpu_mutex_lock(cpu);
>          if (qemu_cpu_is_self(cpu)) {
>              qemu_cpu_stop(cpu, true);
>          } else {
>              cpu->stop = true;
>              qemu_cpu_kick(cpu);
>          }
> +        cpu_mutex_unlock(cpu);
>      }
>
> + drop_locks_and_stop_all_vcpus:
>      /* We need to drop the replay_lock so any vCPU threads woken up
>       * can finish their replay tasks
>       */
>      replay_mutex_unlock();
> +    qemu_mutex_unlock_iothread();
>
> -    while (!all_vcpus_paused()) {
> -        qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
> -        CPU_FOREACH(cpu) {
> +    CPU_FOREACH(cpu) {
> +        cpu_mutex_lock(cpu);
> +        if (!cpu->stopped) {
> +            cpu->stop = true;
>              qemu_cpu_kick(cpu);
> +            qemu_cond_wait(&cpu->cond, cpu->lock);
>          }
> +        cpu_mutex_unlock(cpu);
>      }
>
> -    qemu_mutex_unlock_iothread();
>      replay_mutex_lock();
>      qemu_mutex_lock_iothread();
> +
> +    /* a CPU might have been hot-plugged while we weren't holding the BQL */
> +    CPU_FOREACH(cpu) {
> +        bool stopped;
> +
> +        cpu_mutex_lock(cpu);
> +        stopped = cpu->stopped;
> +        cpu_mutex_unlock(cpu);
> +
> +        if (!stopped) {
> +            goto drop_locks_and_stop_all_vcpus;
> +        }
> +    }
>  }
>
>  void cpu_resume(CPUState *cpu)
>  {
> -    cpu->stop = false;
> -    cpu->stopped = false;
> +    if (cpu_mutex_locked(cpu)) {
> +        cpu->stop = false;
> +        cpu->stopped = false;
> +    } else {
> +        cpu_mutex_lock(cpu);
> +        cpu->stop = false;
> +        cpu->stopped = false;
> +        cpu_mutex_unlock(cpu);
> +    }
>      qemu_cpu_kick(cpu);
>  }
>
> @@ -2029,8 +2160,11 @@ void resume_all_vcpus(void)
>
>  void cpu_remove_sync(CPUState *cpu)
>  {
> +    cpu_mutex_lock(cpu);
>      cpu->stop = true;
>      cpu->unplug = true;
> +    cpu_mutex_unlock(cpu);
> +
>      qemu_cpu_kick(cpu);
>      qemu_mutex_unlock_iothread();
>      qemu_thread_join(cpu->thread);
> @@ -2211,9 +2345,15 @@ void qemu_init_vcpu(CPUState *cpu)
>          qemu_dummy_start_vcpu(cpu);
>      }
>
> +    qemu_mutex_unlock_iothread();
> +
> +    cpu_mutex_lock(cpu);
>      while (!cpu->created) {
> -        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
> +        qemu_cond_wait(&cpu->cond, cpu->lock);
>      }
> +    cpu_mutex_unlock(cpu);
> +
> +    qemu_mutex_lock_iothread();
>  }
>
>  void cpu_stop_current(void)
> diff --git a/qom/cpu.c b/qom/cpu.c
> index f2695be9b2..65b070a570 100644
> --- a/qom/cpu.c
> +++ b/qom/cpu.c
> @@ -94,32 +94,13 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
>      error_setg(errp, "Obtaining memory mappings is unsupported on this CPU.");
>  }
>
> -/* Resetting the IRQ comes from across the code base so we take the
> - * BQL here if we need to.  cpu_interrupt assumes it is held.*/
>  void cpu_reset_interrupt(CPUState *cpu, int mask)
>  {
> -    bool has_bql = qemu_mutex_iothread_locked();
> -    bool has_cpu_lock = cpu_mutex_locked(cpu);
> -
> -    if (has_bql) {
> -        if (has_cpu_lock) {
> -            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
> -        } else {
> -            cpu_mutex_lock(cpu);
> -            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
> -            cpu_mutex_unlock(cpu);
> -        }
> -        return;
> -    }
> -
> -    if (has_cpu_lock) {
> -        cpu_mutex_unlock(cpu);
> -    }
> -    qemu_mutex_lock_iothread();
> -    cpu_mutex_lock(cpu);
> -    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
> -    qemu_mutex_unlock_iothread();
> -    if (!has_cpu_lock) {
> +    if (cpu_mutex_locked(cpu)) {
> +        atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
> +    } else {
> +        cpu_mutex_lock(cpu);
> +        atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
>          cpu_mutex_unlock(cpu);
>      }
>  }


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 71/73] cpus-common: release BQL earlier in run_on_cpu
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 71/73] cpus-common: release BQL earlier in run_on_cpu Emilio G. Cota
@ 2019-02-08 14:34   ` Alex Bennée
  0 siblings, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 14:34 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> After completing the conversion to per-CPU locks, there is no need
> to release the BQL after having called cpu_kick.
>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  cpus-common.c | 20 +++++---------------
>  1 file changed, 5 insertions(+), 15 deletions(-)
>
> diff --git a/cpus-common.c b/cpus-common.c
> index 62e282bff1..1241024b2c 100644
> --- a/cpus-common.c
> +++ b/cpus-common.c
> @@ -145,6 +145,11 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
>          return;
>      }
>
> +    /* We are going to sleep on the CPU lock, so release the BQL */
> +    if (has_bql) {
> +        qemu_mutex_unlock_iothread();
> +    }
> +
>      wi.func = func;
>      wi.data = data;
>      wi.done = false;
> @@ -153,21 +158,6 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
>
>      cpu_mutex_lock(cpu);
>      queue_work_on_cpu_locked(cpu, &wi);
> -
> -    /*
> -     * We are going to sleep on the CPU lock, so release the BQL.
> -     *
> -     * During the transition to per-CPU locks, we release the BQL _after_
> -     * having kicked the destination CPU (from queue_work_on_cpu_locked above).
> -     * This makes sure that the enqueued work will be seen by the CPU
> -     * after being woken up from the kick, since the CPU sleeps on the BQL.
> -     * Once we complete the transition to per-CPU locks, we will release
> -     * the BQL earlier in this function.
> -     */
> -    if (has_bql) {
> -        qemu_mutex_unlock_iothread();
> -    }
> -
>      while (!atomic_mb_read(&wi.done)) {
>          CPUState *self_cpu = current_cpu;


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v6 72/73] cpu: add async_run_on_cpu_no_bql
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 72/73] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
@ 2019-02-08 14:58   ` Alex Bennée
  2019-03-03 20:47     ` Emilio G. Cota
  0 siblings, 1 reply; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 14:58 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> Some async jobs do not need the BQL.
>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h | 14 ++++++++++++++
>  cpus-common.c     | 39 ++++++++++++++++++++++++++++++++++-----
>  2 files changed, 48 insertions(+), 5 deletions(-)
>
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 30ed2fae0b..bb0729f969 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -884,9 +884,23 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
>   * @data: Data to pass to the function.
>   *
>   * Schedules the function @func for execution on the vCPU @cpu asynchronously.
> + * See also: async_run_on_cpu_no_bql()
>   */
>  void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
>
> +/**
> + * async_run_on_cpu_no_bql:
> + * @cpu: The vCPU to run on.
> + * @func: The function to be executed.
> + * @data: Data to pass to the function.
> + *
> + * Schedules the function @func for execution on the vCPU @cpu asynchronously.
> + * This function is run outside the BQL.
> + * See also: async_run_on_cpu()
> + */
> +void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
> +                             run_on_cpu_data data);
> +

So we now have a locking/scheduling hierarchy that goes:

  - run_on_cpu - synchronously wait until target cpu has done the thing
  - async_run_on_cpu - schedule work on cpu at some point (soon) resources protected by BQL
  - async_run_on_cpu_no_bql - as above but only protected by cpu_lock
  - async_safe_run_on_cpu - as above but locking (probably) not required as everything else asleep

So the BQL is only really needed to manipulate data that is shared
across multiple vCPUs, like device emulation or other cross-vCPU state.
For all "just do it over there" cases we should be able to stick to
CPU locks.

It would be nice if we could expand the documentation in
multi-thread-tcg.txt to cover this in long form for people trying to
work out the best thing to use.

Anyway:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>


>  /**
>   * async_safe_run_on_cpu:
>   * @cpu: The vCPU to run on.
> diff --git a/cpus-common.c b/cpus-common.c
> index 1241024b2c..5832a8bf37 100644
> --- a/cpus-common.c
> +++ b/cpus-common.c
> @@ -109,6 +109,7 @@ struct qemu_work_item {
>      run_on_cpu_func func;
>      run_on_cpu_data data;
>      bool free, exclusive, done;
> +    bool bql;
>  };
>
>  /* Called with the CPU's lock held */
> @@ -155,6 +156,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
>      wi.done = false;
>      wi.free = false;
>      wi.exclusive = false;
> +    wi.bql = true;
>
>      cpu_mutex_lock(cpu);
>      queue_work_on_cpu_locked(cpu, &wi);
> @@ -179,6 +181,21 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
>      wi->func = func;
>      wi->data = data;
>      wi->free = true;
> +    wi->bql = true;
> +
> +    queue_work_on_cpu(cpu, wi);
> +}
> +
> +void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
> +                             run_on_cpu_data data)
> +{
> +    struct qemu_work_item *wi;
> +
> +    wi = g_malloc0(sizeof(struct qemu_work_item));
> +    wi->func = func;
> +    wi->data = data;
> +    wi->free = true;
> +    /* wi->bql initialized to false */
>
>      queue_work_on_cpu(cpu, wi);
>  }
> @@ -323,6 +340,7 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
>      wi->data = data;
>      wi->free = true;
>      wi->exclusive = true;
> +    /* wi->bql initialized to false */
>
>      queue_work_on_cpu(cpu, wi);
>  }
> @@ -347,6 +365,7 @@ static void process_queued_cpu_work_locked(CPUState *cpu)
>               * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
>               * neither CPU can proceed.
>               */
> +            g_assert(!wi->bql);
>              if (has_bql) {
>                  qemu_mutex_unlock_iothread();
>              }
> @@ -357,12 +376,22 @@ static void process_queued_cpu_work_locked(CPUState *cpu)
>                  qemu_mutex_lock_iothread();
>              }
>          } else {
> -            if (has_bql) {
> -                wi->func(cpu, wi->data);
> +            if (wi->bql) {
> +                if (has_bql) {
> +                    wi->func(cpu, wi->data);
> +                } else {
> +                    qemu_mutex_lock_iothread();
> +                    wi->func(cpu, wi->data);
> +                    qemu_mutex_unlock_iothread();
> +                }
>              } else {
> -                qemu_mutex_lock_iothread();
> -                wi->func(cpu, wi->data);
> -                qemu_mutex_unlock_iothread();
> +                if (has_bql) {
> +                    qemu_mutex_unlock_iothread();
> +                    wi->func(cpu, wi->data);
> +                    qemu_mutex_lock_iothread();
> +                } else {
> +                    wi->func(cpu, wi->data);
> +                }
>              }
>          }
>          cpu_mutex_lock(cpu);


--
Alex Bennée

* Re: [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
@ 2019-02-08 15:58   ` Alex Bennée
  2019-02-20 17:18   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Alex Bennée @ 2019-02-08 15:58 UTC (permalink / raw)
  To: Emilio G. Cota; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson


Emilio G. Cota <cota@braap.org> writes:

> This yields sizable scalability improvements, as the below results show.
>
> Host: Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)
>
> Workload: Ubuntu 18.04 ppc64 compiling the linux kernel with
> "make -j N", where N is the number of cores in the guest.

I can verify my pigz benchmark starts levelling out at 12-14 guest vCPUs
on the 36-core host box I'm testing on. It's not a super-controlled
environment, but it certainly shows how far MTTCG has come since it was
first introduced. Good stuff.

> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index dad9b7796c..8491d36bcf 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -260,7 +260,7 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
>
>      CPU_FOREACH(cpu) {
>          if (cpu != src) {
> -            async_run_on_cpu(cpu, fn, d);
> +            async_run_on_cpu_no_bql(cpu, fn, d);
>          }
>      }
>  }
> @@ -336,8 +336,8 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
>      tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
>
>      if (cpu->created && !qemu_cpu_is_self(cpu)) {
> -        async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
> -                         RUN_ON_CPU_HOST_INT(idxmap));
> +        async_run_on_cpu_no_bql(cpu, tlb_flush_by_mmuidx_async_work,
> +                                RUN_ON_CPU_HOST_INT(idxmap));
>      } else {
>          tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
>      }
> @@ -481,8 +481,8 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
>      addr_and_mmu_idx |= idxmap;
>
>      if (!qemu_cpu_is_self(cpu)) {
> -        async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_work,
> -                         RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
> +        async_run_on_cpu_no_bql(cpu, tlb_flush_page_by_mmuidx_async_work,
> +                                RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
>      } else {
>          tlb_flush_page_by_mmuidx_async_work(
>              cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));


Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>

I think that brings my run through this patch series to a conclusion.
Looking good all round.

--
Alex Bennée

* Re: [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode Emilio G. Cota
  2019-02-07 12:40   ` Alex Bennée
@ 2019-02-20 16:12   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Richard Henderson @ 2019-02-20 16:12 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 1/29/19 4:47 PM, Emilio G. Cota wrote:
> Before we can switch from the BQL to per-CPU locks in
> the CPU loop, we have to accommodate the fact that TCG
> rr mode (i.e. !MTTCG) cannot work with separate per-vCPU
> locks. That would lead to deadlock since we need a single
> lock/condvar pair on which to wait for events that affect
> any vCPU, e.g. in qemu_tcg_rr_wait_io_event.
> 
> At the same time, we are moving towards an interface where
> the BQL and CPU locks are independent, and the only requirement
> is that the locking order is respected, i.e. the BQL is
> acquired first if both locks have to be held at the same time.
> 
> In this patch we make the BQL a recursive lock under the hood.
> This allows us to (1) keep the BQL and CPU locks interfaces
> separate, and (2) use a single lock for all vCPUs in TCG rr mode.
> 
> Note that the BQL's API (qemu_mutex_lock/unlock_iothread) remains
> non-recursive.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h |  2 +-
>  cpus-common.c     |  2 +-
>  cpus.c            | 90 +++++++++++++++++++++++++++++++++++++++++------
>  qom/cpu.c         |  3 +-
>  stubs/cpu-lock.c  |  6 ++--
>  5 files changed, 86 insertions(+), 17 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

* Re: [Qemu-devel] [PATCH v6 31/73] cpu: convert to cpu_halted
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 31/73] cpu: " Emilio G. Cota
  2019-02-07 20:39   ` Alex Bennée
@ 2019-02-20 16:21   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Richard Henderson @ 2019-02-20 16:21 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 1/29/19 4:47 PM, Emilio G. Cota wrote:
> This finishes the conversion to cpu_halted.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  cpus.c    | 8 ++++----
>  qom/cpu.c | 2 +-
>  2 files changed, 5 insertions(+), 5 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

* Re: [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request
  2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request Emilio G. Cota
  2019-02-08 11:21   ` Alex Bennée
@ 2019-02-20 16:55   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Richard Henderson @ 2019-02-20 16:55 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 1/29/19 4:47 PM, Emilio G. Cota wrote:
> This finishes the conversion to interrupt_request.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  qom/cpu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

* Re: [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
  2019-02-08 11:34   ` Alex Bennée
@ 2019-02-20 17:01   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Richard Henderson @ 2019-02-20 17:01 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 1/29/19 4:48 PM, Emilio G. Cota wrote:
> This function is only called from TCG rr mode, so add
> a prefix to mark this as well as an assertion.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  cpus.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

* Re: [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
  2019-02-08 15:58   ` Alex Bennée
@ 2019-02-20 17:18   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Richard Henderson @ 2019-02-20 17:18 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 1/29/19 4:48 PM, Emilio G. Cota wrote:
> This yields sizable scalability improvements, as the below results show.
> 
> Host: Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)
> 
> Workload: Ubuntu 18.04 ppc64 compiling the linux kernel with
> "make -j N", where N is the number of cores in the guest.
> 
>                       Speedup vs a single thread (higher is better):
> 
>          14 +---------------------------------------------------------------+
>             |       +    +       +      +       +      +      $$$$$$  +     |
>             |                                            $$$$$              |
>             |                                      $$$$$$                   |
>          12 |-+                                $A$$                       +-|
>             |                                $$                             |
>             |                             $$$                               |
>          10 |-+                         $$    ##D#####################D   +-|
>             |                        $$$ #####**B****************           |
>             |                      $$####*****                   *****      |
>             |                    A$#*****                             B     |
>           8 |-+                $$B**                                      +-|
>             |                $$**                                           |
>             |               $**                                             |
>           6 |-+           $$*                                             +-|
>             |            A**                                                |
>             |           $B                                                  |
>             |           $                                                   |
>           4 |-+        $*                                                 +-|
>             |          $                                                    |
>             |         $                                                     |
>           2 |-+      $                                                    +-|
>             |        $                                 +cputlb-no-bql $$A$$ |
>             |       A                                   +per-cpu-lock ##D## |
>             |       +    +       +      +       +      +     baseline **B** |
>           0 +---------------------------------------------------------------+
>                     1    4       8      12      16     20      24     28
>                                        Guest vCPUs
>   png: https://imgur.com/zZRvS7q
> 
> Some notes:
> - baseline corresponds to the commit before this series
> 
> - per-cpu-lock is the commit that converts the CPU loop to per-cpu locks.
> 
> - cputlb-no-bql is this commit.
> 
> - I'm using taskset to assign cores to threads, favouring locality whenever
>   possible but not using SMT. When N=1, I'm using a single host core, which
>   leads to superlinear speedups (since with more cores the I/O thread can execute
>   while vCPU threads sleep). In the future I might use N+1 host cores for N
>   guest cores to avoid this, or perhaps pin guest threads to cores one-by-one.
> 
> Single-threaded performance is affected very lightly. Results
> below for debian aarch64 bootup+test for the entire series
> on an Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz host:
> 
> - Before:
> 
>  Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):
> 
>        7269.033478      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.06% )
>     30,659,870,302      cycles                    #    4.218 GHz                      ( +-  0.06% )
>     54,790,540,051      instructions              #    1.79  insns per cycle          ( +-  0.05% )
>      9,796,441,380      branches                  # 1347.695 M/sec                    ( +-  0.05% )
>        165,132,201      branch-misses             #    1.69% of all branches          ( +-  0.12% )
> 
>        7.287011656 seconds time elapsed                                          ( +-  0.10% )
> 
> - After:
> 
>        7375.924053      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.13% )
>     31,107,548,846      cycles                    #    4.217 GHz                      ( +-  0.12% )
>     55,355,668,947      instructions              #    1.78  insns per cycle          ( +-  0.05% )
>      9,929,917,664      branches                  # 1346.261 M/sec                    ( +-  0.04% )
>        166,547,442      branch-misses             #    1.68% of all branches          ( +-  0.09% )
> 
>        7.389068145 seconds time elapsed                                          ( +-  0.13% )
> 
> That is, a 1.37% slowdown.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  accel/tcg/cputlb.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

* Re: [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
  2019-02-08 14:33   ` Alex Bennée
@ 2019-02-20 17:25   ` Richard Henderson
  1 sibling, 0 replies; 109+ messages in thread
From: Richard Henderson @ 2019-02-20 17:25 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 1/29/19 4:48 PM, Emilio G. Cota wrote:
> Use the per-CPU locks to protect the CPUs' state, instead of
> using the BQL. These locks are uncontended (they are mostly
> acquired by the corresponding vCPU thread), so acquiring them
> is cheaper than acquiring the BQL, which particularly in
> MTTCG can be contended at high core counts.
> 
> In this conversion we drop qemu_cpu_cond and qemu_pause_cond,
> and use cpu->cond instead.
> 
> In qom/cpu.c we can finally remove the ugliness that
> results from having to hold both the BQL and the CPU lock;
> now we just have to grab the CPU lock.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h |  20 ++--
>  cpus.c            | 280 ++++++++++++++++++++++++++++++++++------------
>  qom/cpu.c         |  29 +----
>  3 files changed, 225 insertions(+), 104 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

* Re: [Qemu-devel] [PATCH v6 00/73] per-CPU locks
  2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
                   ` (72 preceding siblings ...)
  2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
@ 2019-02-20 17:27 ` Richard Henderson
  2019-02-20 22:50   ` Emilio G. Cota
  73 siblings, 1 reply; 109+ messages in thread
From: Richard Henderson @ 2019-02-20 17:27 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Aleksandar Markovic, Alistair Francis,
	Andrzej Zaborowski, Anthony Green, Artyom Tarasenko,
	Aurelien Jarno, Bastian Koppelmann, Christian Borntraeger,
	Chris Wulff, Cornelia Huck, David Gibson, David Hildenbrand,
	Edgar E. Iglesias, Eduardo Habkost, Fabien Chouteau, Guan Xuetao,
	James Hogan, Laurent Vivier, Marek Vasut, Mark Cave-Ayland,
	Max Filippov, Michael Walle, Palmer Dabbelt, Peter Maydell,
	qemu-arm, qemu-ppc, qemu-s390x, Sagar Karandikar, Stafford Horne

On 1/29/19 4:46 PM, Emilio G. Cota wrote:
> v5: https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg02979.html
> 
> For context, the goal of this series is to substitute the BQL for the
> per-CPU locks in many places, notably the execution loop in cpus.c.
> This leads to better scalability for MTTCG, since CPUs don't have
> to acquire a contended global lock (the BQL) every time they
> stop executing code.
> 
> See the last commit for some performance numbers.
> 
> After this series, the remaining obstacles to achieving KVM-like
> scalability in MTTCG are: (1) interrupt handling, which
> in some targets requires the BQL, and (2) frequent execution of
> "async safe" work.
> That said, some targets scale great on MTTCG even before this
> series -- for instance, when running a parallel compilation job
> in an x86_64 guest, scalability is comparable to what we get with
> KVM.
> 
> This series is very long. If you only have time to look at a few patches,
> I suggest the following, which do most of the heavy lifting and
> have not yet been reviewed:
> 
> - Patch 7: cpu: make per-CPU locks an alias of the BQL in TCG rr mode
> - Patch 70: cpu: protect CPU state with cpu->lock instead of the BQL
> 
> I've tested all patches with `make check-qtest -j' for all targets.
> The series is checkpatch-clean (just some warnings about __COVERITY__).
> 
> You can fetch the series from:
>   https://github.com/cota/qemu/tree/cpu-lock-v6

Thanks for the patience.  Both Alex and I have now completed review, and I
think this is ready for merge.

There are some patch conflicts with master, so if you can fix those and post a
v7, we'll get it merged right away.


r~

* Re: [Qemu-devel] [PATCH v6 00/73] per-CPU locks
  2019-02-20 17:27 ` [Qemu-devel] [PATCH v6 00/73] per-CPU locks Richard Henderson
@ 2019-02-20 22:50   ` Emilio G. Cota
  0 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-02-20 22:50 UTC (permalink / raw)
  To: Richard Henderson
  Cc: qemu-devel, Paolo Bonzini, Aleksandar Markovic, Alistair Francis,
	Andrzej Zaborowski, Anthony Green, Artyom Tarasenko,
	Aurelien Jarno, Bastian Koppelmann, Christian Borntraeger,
	Chris Wulff, Cornelia Huck, David Gibson, David Hildenbrand,
	Edgar E. Iglesias, Eduardo Habkost, Fabien Chouteau, Guan Xuetao,
	James Hogan, Laurent Vivier, Marek Vasut, Mark Cave-Ayland,
	Max Filippov, Michael Walle, Palmer Dabbelt, Peter Maydell,
	qemu-arm, qemu-ppc, qemu-s390x, Sagar Karandikar, Stafford Horne

On Wed, Feb 20, 2019 at 09:27:06 -0800, Richard Henderson wrote:
> Thanks for the patience.  Both Alex and I have now completed review, and I
> think this is ready for merge.
> 
> There are some patch conflicts with master, so if you can fix those and post a
> v7, we'll get it merged right away.

Thanks for reviewing! Will send a v7 in a few days.

		Emilio

* Re: [Qemu-devel] [PATCH v6 39/73] i386: convert to cpu_interrupt_request
  2019-02-08 11:00   ` Alex Bennée
@ 2019-03-02 22:48     ` Emilio G. Cota
  0 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-03-02 22:48 UTC (permalink / raw)
  To: Alex Bennée; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson

On Fri, Feb 08, 2019 at 11:00:23 +0000, Alex Bennée wrote:
> 
> Emilio G. Cota <cota@braap.org> writes:
> 
> > Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> > Signed-off-by: Emilio G. Cota <cota@braap.org>
> > ---
> >  target/i386/cpu.c        | 2 +-
> >  target/i386/helper.c     | 4 ++--
> >  target/i386/svm_helper.c | 4 ++--
> >  3 files changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> > index a37b984b61..35dea8c152 100644
> > --- a/target/i386/cpu.c
> > +++ b/target/i386/cpu.c
> > @@ -5678,7 +5678,7 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
> >
> >  static bool x86_cpu_has_work(CPUState *cs)
> >  {
> > -    return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
> > +    return x86_cpu_pending_interrupt(cs, cpu_interrupt_request(cs))
> >  != 0;
> 
> This is fine in itself but is there a chance of a race with the
> env->eflags/hflags/hflags2 that x86_cpu_pending_interrupt deals with?
> Are they only ever self vCPU references?

AFAICT they're all self-references; I have checked this via inspection
and with helgrind.

> Anyway:
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

Thanks!

		E.

* Re: [Qemu-devel] [PATCH v6 40/73] i386/kvm: convert to cpu_interrupt_request
  2019-02-08 11:15   ` Alex Bennée
@ 2019-03-02 23:14     ` Emilio G. Cota
  0 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-03-02 23:14 UTC (permalink / raw)
  To: Alex Bennée; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson

On Fri, Feb 08, 2019 at 11:15:27 +0000, Alex Bennée wrote:
> 
> Emilio G. Cota <cota@braap.org> writes:
> 
> > Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> > Signed-off-by: Emilio G. Cota <cota@braap.org>
> > ---
> >  target/i386/kvm.c | 54 +++++++++++++++++++++++++++--------------------
> >  1 file changed, 31 insertions(+), 23 deletions(-)
> >
> > diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> > index ca2629f0fe..3f3c670897 100644
> > --- a/target/i386/kvm.c
> > +++ b/target/i386/kvm.c
> <snip>
> > @@ -3183,14 +3186,14 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
> >  {
> >      X86CPU *x86_cpu = X86_CPU(cpu);
> >      CPUX86State *env = &x86_cpu->env;
> > +    uint32_t interrupt_request;
> >      int ret;
> >
> > +    interrupt_request = cpu_interrupt_request(cpu);
> >      /* Inject NMI */
> > -    if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
> > -        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
> > -            qemu_mutex_lock_iothread();
> > +    if (interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
> > +        if (interrupt_request & CPU_INTERRUPT_NMI) {
> >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
> > -            qemu_mutex_unlock_iothread();
> >              DPRINTF("injected NMI\n");
> >              ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
> >              if (ret < 0) {
> > @@ -3198,10 +3201,8 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
> >                          strerror(-ret));
> >              }
> >          }
> > -        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
> > -            qemu_mutex_lock_iothread();
> > +        if (interrupt_request & CPU_INTERRUPT_SMI) {
> >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
> > -            qemu_mutex_unlock_iothread();
> >              DPRINTF("injected SMI\n");
> >              ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
> >              if (ret < 0) {
> > @@ -3215,16 +3216,18 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
> >          qemu_mutex_lock_iothread();
> >      }
> >
> > +    interrupt_request = cpu_interrupt_request(cpu);
> > +
> 
> This seems a bit smelly as we have already read interrupt_request once
> before. So this says that something may have triggered an IRQ while we
> were dealing with the above. It requires a comment at least.

There are a few cpu_reset_interrupt() calls above, so I thought a comment
wasn't necessary. I've added the following comment:

+    /*
+     * We might have cleared some bits in cpu->interrupt_request since reading
+     * it; read it again.
+     */
     interrupt_request = cpu_interrupt_request(cpu);

> Otherwise:
> 
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

Thanks,

		E.

* Re: [Qemu-devel] [PATCH v6 62/73] cpu: introduce cpu_has_work_with_iothread_lock
  2019-02-08 11:33   ` Alex Bennée
@ 2019-03-03 19:52     ` Emilio G. Cota
  0 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-03-03 19:52 UTC (permalink / raw)
  To: Alex Bennée; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson

On Fri, Feb 08, 2019 at 11:33:32 +0000, Alex Bennée wrote:
> 
> Emilio G. Cota <cota@braap.org> writes:
> 
> > It will gain some users soon.
> >
> > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> > Signed-off-by: Emilio G. Cota <cota@braap.org>
> > ---
> >  include/qom/cpu.h | 36 +++++++++++++++++++++++++++++++++---
(snip)
> >  static inline bool cpu_has_work(CPUState *cpu)
> >  {
> >      CPUClass *cc = CPU_GET_CLASS(cpu);
> > +    bool has_cpu_lock = cpu_mutex_locked(cpu);
> > +    bool (*func)(CPUState *cpu);
> >      bool ret;
> >
> > +    if (cc->has_work_with_iothread_lock) {
> > +        if (qemu_mutex_iothread_locked()) {
> > +            func = cc->has_work_with_iothread_lock;
> > +            goto call_func;
> > +        }
> > +
> > +        if (has_cpu_lock) {
> > +            /* avoid deadlock by acquiring the locks in order */
> 
> This is fine here but can we expand the comment above:
> 
>  * cpu_has_work:
>  * @cpu: The vCPU to check.
>  *
>  * Checks whether the CPU has work to do. If the vCPU helper needs to
>  * check its work status with the BQL held, ensure we hold the BQL
>  * before taking the CPU lock.

I added a comment to the body of the function:

+    /* some targets require us to hold the BQL when checking for work */
     if (cc->has_work_with_iothread_lock) {

> Where is our canonical description of the locking interaction between
> BQL and CPU locks?

It's in a few places, for instance cpu_mutex_lock's documentation
in include/qom/cpu.h.

I've added a comment about the locking order to @lock's documentation,
also in cpu.h:

- * @lock: Lock to prevent multiple access to per-CPU fields.
+ * @lock: Lock to prevent multiple access to per-CPU fields. Must be acquired
+ *        after the BQL.

> Otherwise:
> 
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

Thanks!

		Emilio

* Re: [Qemu-devel] [PATCH v6 72/73] cpu: add async_run_on_cpu_no_bql
  2019-02-08 14:58   ` Alex Bennée
@ 2019-03-03 20:47     ` Emilio G. Cota
  0 siblings, 0 replies; 109+ messages in thread
From: Emilio G. Cota @ 2019-03-03 20:47 UTC (permalink / raw)
  To: Alex Bennée; +Cc: qemu-devel, Paolo Bonzini, Richard Henderson

On Fri, Feb 08, 2019 at 14:58:40 +0000, Alex Bennée wrote:
> 
> Emilio G. Cota <cota@braap.org> writes:
> 
> > Some async jobs do not need the BQL.
> >
> > Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> > Signed-off-by: Emilio G. Cota <cota@braap.org>
(snip)
> So we now have a locking/scheduling hierarchy that goes:
> 
>   - run_on_cpu - synchronously wait until target cpu has done the thing
>   - async_run_on_cpu - schedule work on cpu at some point (soon) resources protected by BQL
>   - async_run_on_cpu_no_bql - as above but only protected by cpu_lock

This one doesn't hold any locks when calling the passed function,
though. The CPU lock is just to serialise the queueing/dequeueing.

>   - async_safe_run_on_cpu - as above but locking (probably) not required as everything else asleep
> 
> So the BQL is only really needed to manipulate data that is shared
> across multiple vCPUs, like device emulation or other cross-vCPU state.
> For all "just do it over there" cases we should be able to stick to
> CPU locks.
> 
> It would be nice if we could expand the documentation in
> multi-thread-tcg.txt to cover this in long form for people trying to
> work out the best thing to use.

I think the documentation in the functions is probably enough -- they
point to each other, so figuring out what to use should be pretty easy.

Besides, these functions aren't MTTCG-only, they're QEMU-wide, so
perhaps their documentation should go in a different file?

Since this can be done later, I'll post a v7 with the other
changes you suggested.

> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

Thanks!

		Emilio


end of thread, other threads:[~2019-03-03 20:47 UTC | newest]

Thread overview: 109+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-30  0:46 [Qemu-devel] [PATCH v6 00/73] per-CPU locks Emilio G. Cota
2019-01-30  0:46 ` [Qemu-devel] [PATCH v6 01/73] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 02/73] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 03/73] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
2019-02-06 17:21   ` Alex Bennée
2019-02-06 20:02     ` Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 04/73] cpu: make qemu_work_cond per-cpu Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 05/73] cpu: move run_on_cpu to cpus-common Emilio G. Cota
2019-02-06 17:22   ` Alex Bennée
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 06/73] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode Emilio G. Cota
2019-02-07 12:40   ` Alex Bennée
2019-02-20 16:12   ` Richard Henderson
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 08/73] tcg-runtime: define helper_cpu_halted_set Emilio G. Cota
2019-02-07 12:40   ` Alex Bennée
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 09/73] ppc: convert to helper_cpu_halted_set Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 10/73] cris: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 11/73] hppa: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 12/73] m68k: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 13/73] alpha: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 14/73] microblaze: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 15/73] cpu: define cpu_halted helpers Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 16/73] tcg-runtime: convert to cpu_halted_set Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 17/73] arm: convert to cpu_halted Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 18/73] ppc: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 19/73] sh4: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 20/73] i386: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 21/73] lm32: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 22/73] m68k: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 23/73] mips: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 24/73] riscv: " Emilio G. Cota
2019-02-06 23:50   ` Alistair Francis
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 25/73] s390x: " Emilio G. Cota
2019-01-30 10:30   ` Cornelia Huck
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 26/73] sparc: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 27/73] xtensa: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 28/73] gdbstub: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 29/73] openrisc: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 30/73] cpu-exec: " Emilio G. Cota
2019-02-07 12:44   ` Alex Bennée
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 31/73] cpu: " Emilio G. Cota
2019-02-07 20:39   ` Alex Bennée
2019-02-20 16:21   ` Richard Henderson
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 32/73] cpu: define cpu_interrupt_request helpers Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 33/73] ppc: use cpu_reset_interrupt Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 34/73] exec: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 35/73] i386: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 36/73] s390x: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 37/73] openrisc: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 38/73] arm: convert to cpu_interrupt_request Emilio G. Cota
2019-02-07 20:55   ` Alex Bennée
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 39/73] i386: " Emilio G. Cota
2019-02-08 11:00   ` Alex Bennée
2019-03-02 22:48     ` Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 40/73] i386/kvm: " Emilio G. Cota
2019-02-08 11:15   ` Alex Bennée
2019-03-02 23:14     ` Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 41/73] i386/hax-all: " Emilio G. Cota
2019-02-08 11:20   ` Alex Bennée
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 42/73] i386/whpx-all: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 43/73] i386/hvf: convert to cpu_request_interrupt Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 44/73] ppc: convert to cpu_interrupt_request Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 45/73] sh4: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 46/73] cris: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 47/73] hppa: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 48/73] lm32: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 49/73] m68k: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 50/73] mips: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 51/73] nios: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 52/73] s390x: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 53/73] alpha: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 54/73] moxie: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 55/73] sparc: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 56/73] openrisc: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 57/73] unicore32: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 58/73] microblaze: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 59/73] accel/tcg: " Emilio G. Cota
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 60/73] cpu: convert to interrupt_request Emilio G. Cota
2019-02-08 11:21   ` Alex Bennée
2019-02-20 16:55   ` Richard Henderson
2019-01-30  0:47 ` [Qemu-devel] [PATCH v6 61/73] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
2019-02-08 11:22   ` Alex Bennée
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 62/73] cpu: introduce cpu_has_work_with_iothread_lock Emilio G. Cota
2019-02-08 11:33   ` Alex Bennée
2019-03-03 19:52     ` Emilio G. Cota
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 63/73] ppc: convert to cpu_has_work_with_iothread_lock Emilio G. Cota
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 64/73] mips: " Emilio G. Cota
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 65/73] s390x: " Emilio G. Cota
2019-01-30 10:35   ` Cornelia Huck
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 66/73] riscv: " Emilio G. Cota
2019-02-06 23:51   ` Alistair Francis
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 67/73] sparc: " Emilio G. Cota
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 68/73] xtensa: " Emilio G. Cota
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
2019-02-08 11:34   ` Alex Bennée
2019-02-20 17:01   ` Richard Henderson
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
2019-02-08 14:33   ` Alex Bennée
2019-02-20 17:25   ` Richard Henderson
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 71/73] cpus-common: release BQL earlier in run_on_cpu Emilio G. Cota
2019-02-08 14:34   ` Alex Bennée
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 72/73] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
2019-02-08 14:58   ` Alex Bennée
2019-03-03 20:47     ` Emilio G. Cota
2019-01-30  0:48 ` [Qemu-devel] [PATCH v6 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
2019-02-08 15:58   ` Alex Bennée
2019-02-20 17:18   ` Richard Henderson
2019-02-20 17:27 ` [Qemu-devel] [PATCH v6 00/73] per-CPU locks Richard Henderson
2019-02-20 22:50   ` Emilio G. Cota
