* [Qemu-devel] [PATCH v7 00/73] per-CPU locks
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Aleksandar Markovic,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Walle,
	Palmer Dabbelt, Paolo Bonzini, Peter Maydell, qemu-arm, qemu-ppc,
	qemu-s390x, Sagar Karandikar, Stafford Horne

v6: https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg07650.html

All patches in the series have reviews now. Thanks everyone!

I've tested all patches with `make check-qtest -j' for all targets.
The series is checkpatch-clean (just some warnings about __COVERITY__).

You can fetch the series from:
  https://github.com/cota/qemu/tree/cpu-lock-v7

---
v6->v7:

- Rebase on master
  - Add a cpu_halted_set call to arm code that wasn't there in v6

- Add R-b's and Ack's.

- Add comment to patch 3's log to explain why the bitmap is added
  there, even though it only gains a user at the end of the series.

- Fix "prevent deadlock" comments before assertions; use
  "enforce locking order" instead, which is more accurate.

- Add a few more comments, as suggested by Alex.

v6->v7 diff (before rebase) below.

Thanks,

		Emilio
---
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index e4ae04f72c..a513457520 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -435,7 +435,7 @@ static inline bool cpu_handle_halt_locked(CPUState *cpu)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
 
-            /* prevent deadlock; cpu_mutex must be acquired _after_ the BQL */
+            /* locking order: cpu_mutex must be acquired _after_ the BQL */
             cpu_mutex_unlock(cpu);
             qemu_mutex_lock_iothread();
             cpu_mutex_lock(cpu);
diff --git a/cpus.c b/cpus.c
index 4f17fe25bf..82a93f2a5a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -2062,7 +2062,7 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
 {
     QemuMutexLockFunc bql_lock = atomic_read(&qemu_bql_mutex_lock_func);
 
-    /* prevent deadlock with CPU mutex */
+    /* enforce locking order */
     g_assert(no_cpu_mutex_locked());
 
     g_assert(!qemu_mutex_iothread_locked());
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index bb0729f969..726cb7b090 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -322,7 +322,8 @@ struct qemu_work_item;
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
- * @lock: Lock to prevent multiple access to per-CPU fields.
+ * @lock: Lock to prevent multiple access to per-CPU fields. Must be acquired
+ *        after the BQL.
  * @cond: Condition variable for per-CPU events.
  * @work_list: List of pending asynchronous work.
  * @halted: Nonzero if the CPU is in suspended state.
@@ -804,6 +805,7 @@ static inline bool cpu_has_work(CPUState *cpu)
     bool (*func)(CPUState *cpu);
     bool ret;
 
+    /* some targets require us to hold the BQL when checking for work */
     if (cc->has_work_with_iothread_lock) {
         if (qemu_mutex_iothread_locked()) {
             func = cc->has_work_with_iothread_lock;
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 3f3c670897..65a14deb2f 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -3216,6 +3216,10 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
         qemu_mutex_lock_iothread();
     }
 
+    /*
+     * We might have cleared some bits in cpu->interrupt_request since reading
+     * it; read it again.
+     */
     interrupt_request = cpu_interrupt_request(cpu);
 
     /* Force the VCPU out of its inner loop to process any INIT requests


* [Qemu-devel] [PATCH v7 01/73] cpu: convert queued work to a QSIMPLEQ
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Instead of open-coding it.

While at it, make sure that all accesses to the list are
performed while holding the list's lock.
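
For reference, a minimal, generic sketch of the queue.h API this patch
switches to (requires "qemu/queue.h"; the Item type and helper below are
made up for illustration and are not part of the patch):

    /* Generic QSIMPLEQ sketch; Item and push_then_drain() are made up. */
    typedef struct Item {
        QSIMPLEQ_ENTRY(Item) node;      /* linkage within the queue */
        int payload;
    } Item;

    static QSIMPLEQ_HEAD(, Item) items = QSIMPLEQ_HEAD_INITIALIZER(items);

    static void push_then_drain(Item *it)
    {
        QSIMPLEQ_INSERT_TAIL(&items, it, node);

        while (!QSIMPLEQ_EMPTY(&items)) {
            Item *head = QSIMPLEQ_FIRST(&items);

            QSIMPLEQ_REMOVE_HEAD(&items, node);
            /* ... process head ... */
        }
    }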

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  6 +++---
 cpus-common.c     | 25 ++++++++-----------------
 cpus.c            | 14 ++++++++++++--
 qom/cpu.c         |  1 +
 4 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 1d6099e5d4..b47f3507fa 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -333,8 +333,8 @@ struct qemu_work_item;
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
- * @work_mutex: Lock to prevent multiple access to queued_work_*.
- * @queued_work_first: First asynchronous work pending.
+ * @work_mutex: Lock to prevent multiple access to @work_list.
+ * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
  * @trace_dstate: Dynamic tracing state of events for this vCPU (bitmask).
@@ -375,7 +375,7 @@ struct CPUState {
     sigjmp_buf jmp_env;
 
     QemuMutex work_mutex;
-    struct qemu_work_item *queued_work_first, *queued_work_last;
+    QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
diff --git a/cpus-common.c b/cpus-common.c
index 3ca58c64e8..a1bbf4139f 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -107,7 +107,7 @@ void cpu_list_remove(CPUState *cpu)
 }
 
 struct qemu_work_item {
-    struct qemu_work_item *next;
+    QSIMPLEQ_ENTRY(qemu_work_item) node;
     run_on_cpu_func func;
     run_on_cpu_data data;
     bool free, exclusive, done;
@@ -116,13 +116,7 @@ struct qemu_work_item {
 static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 {
     qemu_mutex_lock(&cpu->work_mutex);
-    if (cpu->queued_work_first == NULL) {
-        cpu->queued_work_first = wi;
-    } else {
-        cpu->queued_work_last->next = wi;
-    }
-    cpu->queued_work_last = wi;
-    wi->next = NULL;
+    QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
     qemu_mutex_unlock(&cpu->work_mutex);
 
@@ -314,17 +308,14 @@ void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
 
-    if (cpu->queued_work_first == NULL) {
+    qemu_mutex_lock(&cpu->work_mutex);
+    if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
+        qemu_mutex_unlock(&cpu->work_mutex);
         return;
     }
-
-    qemu_mutex_lock(&cpu->work_mutex);
-    while (cpu->queued_work_first != NULL) {
-        wi = cpu->queued_work_first;
-        cpu->queued_work_first = wi->next;
-        if (!cpu->queued_work_first) {
-            cpu->queued_work_last = NULL;
-        }
+    while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
+        wi = QSIMPLEQ_FIRST(&cpu->work_list);
+        QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
         qemu_mutex_unlock(&cpu->work_mutex);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
diff --git a/cpus.c b/cpus.c
index e83f72b48b..90eabcf977 100644
--- a/cpus.c
+++ b/cpus.c
@@ -88,9 +88,19 @@ bool cpu_is_stopped(CPUState *cpu)
     return cpu->stopped || !runstate_is_running();
 }
 
+static inline bool cpu_work_list_empty(CPUState *cpu)
+{
+    bool ret;
+
+    qemu_mutex_lock(&cpu->work_mutex);
+    ret = QSIMPLEQ_EMPTY(&cpu->work_list);
+    qemu_mutex_unlock(&cpu->work_mutex);
+    return ret;
+}
+
 static bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || cpu->queued_work_first) {
+    if (cpu->stop || !cpu_work_list_empty(cpu)) {
         return false;
     }
     if (cpu_is_stopped(cpu)) {
@@ -1514,7 +1524,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             cpu = first_cpu;
         }
 
-        while (cpu && !cpu->queued_work_first && !cpu->exit_request) {
+        while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
 
             atomic_mb_set(&tcg_current_rr_cpu, cpu);
             current_cpu = cpu;
diff --git a/qom/cpu.c b/qom/cpu.c
index f5579b1cd5..06d6b6044d 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -372,6 +372,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_threads = 1;
 
     qemu_mutex_init(&cpu->work_mutex);
+    QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
 
-- 
2.17.1


* [Qemu-devel] [PATCH v7 02/73] cpu: rename cpu->work_mutex to cpu->lock
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

This lock will soon protect more fields of the struct. Give
it a more appropriate name.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  6 ++++--
 cpus-common.c     | 14 +++++++-------
 cpus.c            |  4 ++--
 qom/cpu.c         |  2 +-
 4 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index b47f3507fa..721a324199 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -333,7 +333,8 @@ struct qemu_work_item;
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
- * @work_mutex: Lock to prevent multiple access to @work_list.
+ * @lock: Lock to prevent multiple access to per-CPU fields. Must be acquired
+ *        after the BQL.
  * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
@@ -374,7 +375,8 @@ struct CPUState {
     int64_t icount_extra;
     sigjmp_buf jmp_env;
 
-    QemuMutex work_mutex;
+    QemuMutex lock;
+    /* fields below protected by @lock */
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
diff --git a/cpus-common.c b/cpus-common.c
index a1bbf4139f..d0e619f149 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -115,10 +115,10 @@ struct qemu_work_item {
 
 static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 {
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
 
     qemu_cpu_kick(cpu);
 }
@@ -308,15 +308,15 @@ void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
 
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
-        qemu_mutex_unlock(&cpu->work_mutex);
+        qemu_mutex_unlock(&cpu->lock);
         return;
     }
     while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
         wi = QSIMPLEQ_FIRST(&cpu->work_list);
         QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
-        qemu_mutex_unlock(&cpu->work_mutex);
+        qemu_mutex_unlock(&cpu->lock);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
              * 1) start_exclusive() is called with the BQL taken while another
@@ -332,13 +332,13 @@ void process_queued_cpu_work(CPUState *cpu)
         } else {
             wi->func(cpu, wi->data);
         }
-        qemu_mutex_lock(&cpu->work_mutex);
+        qemu_mutex_lock(&cpu->lock);
         if (wi->free) {
             g_free(wi);
         } else {
             atomic_mb_set(&wi->done, true);
         }
     }
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
     qemu_cond_broadcast(&qemu_work_cond);
 }
diff --git a/cpus.c b/cpus.c
index 90eabcf977..acd453a3ae 100644
--- a/cpus.c
+++ b/cpus.c
@@ -92,9 +92,9 @@ static inline bool cpu_work_list_empty(CPUState *cpu)
 {
     bool ret;
 
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
     return ret;
 }
 
diff --git a/qom/cpu.c b/qom/cpu.c
index 06d6b6044d..a2964e53c6 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -371,7 +371,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_cores = 1;
     cpu->nr_threads = 1;
 
-    qemu_mutex_init(&cpu->work_mutex);
+    qemu_mutex_init(&cpu->lock);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
-- 
2.17.1


* [Qemu-devel] [PATCH v7 03/73] cpu: introduce cpu_mutex_lock/unlock
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

The few direct users of &cpu->lock will be converted soon.

The per-thread bitmap introduced here might seem unnecessary,
since a single bool would appear to be enough. However, once we
complete the conversion to per-vCPU locks, we will need to cover
the use case in which all vCPUs are locked by the same thread,
which is why the bitmap is introduced here.
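
A rough sketch of that use case (the helper below is hypothetical and
not part of this patch): a single thread may hold more than one vCPU
lock at a time, which a per-thread bool could not track:

    /* Hypothetical helper, for illustration only. */
    static void with_two_cpus_locked(CPUState *a, CPUState *b)
    {
        cpu_mutex_lock(a);      /* sets bit a->cpu_index + 1 */
        cpu_mutex_lock(b);      /* sets bit b->cpu_index + 1 */

        /* ... access @lock-protected fields of both vCPUs ... */

        cpu_mutex_unlock(b);    /* clears only b's bit */
        cpu_mutex_unlock(a);    /* a is still known to be held until here */
    }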

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h   | 33 +++++++++++++++++++++++++++++++
 cpus.c              | 48 +++++++++++++++++++++++++++++++++++++++++++--
 stubs/cpu-lock.c    | 28 ++++++++++++++++++++++++++
 stubs/Makefile.objs |  1 +
 4 files changed, 108 insertions(+), 2 deletions(-)
 create mode 100644 stubs/cpu-lock.c

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 721a324199..44b42d86dd 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -467,6 +467,39 @@ extern CPUTailQ cpus;
 
 extern __thread CPUState *current_cpu;
 
+/**
+ * cpu_mutex_lock - lock a CPU's mutex
+ * @cpu: the CPU whose mutex is to be locked
+ *
+ * To avoid deadlock, a CPU's mutex must be acquired after the BQL.
+ */
+#define cpu_mutex_lock(cpu)                             \
+    cpu_mutex_lock_impl(cpu, __FILE__, __LINE__)
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line);
+
+/**
+ * cpu_mutex_unlock - unlock a CPU's mutex
+ * @cpu: the CPU whose mutex is to be unlocked
+ */
+#define cpu_mutex_unlock(cpu)                           \
+    cpu_mutex_unlock_impl(cpu, __FILE__, __LINE__)
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line);
+
+/**
+ * cpu_mutex_locked - check whether a CPU's mutex is locked
+ * @cpu: the CPU of interest
+ *
+ * Returns true if the calling thread is currently holding the CPU's mutex.
+ */
+bool cpu_mutex_locked(const CPUState *cpu);
+
+/**
+ * no_cpu_mutex_locked - check whether any CPU mutex is held
+ *
+ * Returns true if the calling thread is not holding any CPU mutex.
+ */
+bool no_cpu_mutex_locked(void);
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
diff --git a/cpus.c b/cpus.c
index acd453a3ae..e4b54706ce 100644
--- a/cpus.c
+++ b/cpus.c
@@ -83,6 +83,47 @@ static unsigned int throttle_percentage;
 #define CPU_THROTTLE_PCT_MAX 99
 #define CPU_THROTTLE_TIMESLICE_NS 10000000
 
+/* XXX: is this really the max number of CPUs? */
+#define CPU_LOCK_BITMAP_SIZE 2048
+
+/*
+ * Note: we index the bitmap with cpu->cpu_index + 1 so that the logic
+ * also works during early CPU initialization, when cpu->cpu_index is set to
+ * UNASSIGNED_CPU_INDEX == -1.
+ */
+static __thread DECLARE_BITMAP(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
+
+bool no_cpu_mutex_locked(void)
+{
+    return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
+}
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
+/* coverity gets confused by the indirect function call */
+#ifdef __COVERITY__
+    qemu_mutex_lock_impl(&cpu->lock, file, line);
+#else
+    QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
+
+    g_assert(!cpu_mutex_locked(cpu));
+    set_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+    f(&cpu->lock, file, line);
+#endif
+}
+
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
+{
+    g_assert(cpu_mutex_locked(cpu));
+    qemu_mutex_unlock_impl(&cpu->lock, file, line);
+    clear_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+}
+
+bool cpu_mutex_locked(const CPUState *cpu)
+{
+    return test_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+}
+
 bool cpu_is_stopped(CPUState *cpu)
 {
     return cpu->stopped || !runstate_is_running();
@@ -92,9 +133,9 @@ static inline bool cpu_work_list_empty(CPUState *cpu)
 {
     bool ret;
 
-    qemu_mutex_lock(&cpu->lock);
+    cpu_mutex_lock(cpu);
     ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    qemu_mutex_unlock(&cpu->lock);
+    cpu_mutex_unlock(cpu);
     return ret;
 }
 
@@ -1856,6 +1897,9 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
 {
     QemuMutexLockFunc bql_lock = atomic_read(&qemu_bql_mutex_lock_func);
 
+    /* enforce locking order */
+    g_assert(no_cpu_mutex_locked());
+
     g_assert(!qemu_mutex_iothread_locked());
     bql_lock(&qemu_global_mutex, file, line);
     iothread_locked = true;
diff --git a/stubs/cpu-lock.c b/stubs/cpu-lock.c
new file mode 100644
index 0000000000..3f07d3a28b
--- /dev/null
+++ b/stubs/cpu-lock.c
@@ -0,0 +1,28 @@
+#include "qemu/osdep.h"
+#include "qom/cpu.h"
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
+/* coverity gets confused by the indirect function call */
+#ifdef __COVERITY__
+    qemu_mutex_lock_impl(&cpu->lock, file, line);
+#else
+    QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
+    f(&cpu->lock, file, line);
+#endif
+}
+
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
+{
+    qemu_mutex_unlock_impl(&cpu->lock, file, line);
+}
+
+bool cpu_mutex_locked(const CPUState *cpu)
+{
+    return true;
+}
+
+bool no_cpu_mutex_locked(void)
+{
+    return true;
+}
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index 269dfa5832..2e47e5edf4 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -4,6 +4,7 @@ stub-obj-y += blockdev-close-all-bdrv-states.o
 stub-obj-y += clock-warp.o
 stub-obj-y += cpu-get-clock.o
 stub-obj-y += cpu-get-icount.o
+stub-obj-y += cpu-lock.o
 stub-obj-y += dump.o
 stub-obj-y += error-printf.o
 stub-obj-y += fdset.o
-- 
2.17.1


* [Qemu-devel] [PATCH v7 04/73] cpu: make qemu_work_cond per-cpu
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

This eliminates the need to use the BQL to queue CPU work.

While at it, give the per-cpu field a generic name ("cond") since
it will soon be used for more than just queueing CPU work.
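
To illustrate the caller-visible effect (this particular call site is
made up; run_on_cpu() and RUN_ON_CPU_HOST_INT() are existing QEMU
interfaces): a thread no longer has to hold the BQL just to queue and
wait for work on a remote vCPU; it sleeps on that vCPU's @cond while
holding @lock instead:

    /* Hypothetical call site, for illustration only. */
    static void clear_halted_work(CPUState *cpu, run_on_cpu_data data)
    {
        cpu->halted = data.host_int;
    }

    static void example_clear_halted(CPUState *cpu)
    {
        /* Queues the item under cpu->lock and waits on cpu->cond. */
        run_on_cpu(cpu, clear_halted_work, RUN_ON_CPU_HOST_INT(0));
    }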

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  6 ++--
 cpus-common.c     | 72 ++++++++++++++++++++++++++++++++++++++---------
 cpus.c            |  2 +-
 qom/cpu.c         |  1 +
 4 files changed, 63 insertions(+), 18 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 44b42d86dd..62f4dc9025 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -335,6 +335,7 @@ struct qemu_work_item;
  * @kvm_fd: vCPU file descriptor for KVM.
  * @lock: Lock to prevent multiple access to per-CPU fields. Must be acquired
  *        after the BQL.
+ * @cond: Condition variable for per-CPU events.
  * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
@@ -377,6 +378,7 @@ struct CPUState {
 
     QemuMutex lock;
     /* fields below protected by @lock */
+    QemuCond cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
@@ -784,12 +786,10 @@ bool cpu_is_stopped(CPUState *cpu);
  * @cpu: The vCPU to run on.
  * @func: The function to be executed.
  * @data: Data to pass to the function.
- * @mutex: Mutex to release while waiting for @func to run.
  *
  * Used internally in the implementation of run_on_cpu.
  */
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
-                   QemuMutex *mutex);
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
 
 /**
  * run_on_cpu:
diff --git a/cpus-common.c b/cpus-common.c
index d0e619f149..daf1531868 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -26,7 +26,6 @@
 static QemuMutex qemu_cpu_list_lock;
 static QemuCond exclusive_cond;
 static QemuCond exclusive_resume;
-static QemuCond qemu_work_cond;
 
 /* >= 1 if a thread is inside start_exclusive/end_exclusive.  Written
  * under qemu_cpu_list_lock, read with atomic operations.
@@ -42,7 +41,6 @@ void qemu_init_cpu_list(void)
     qemu_mutex_init(&qemu_cpu_list_lock);
     qemu_cond_init(&exclusive_cond);
     qemu_cond_init(&exclusive_resume);
-    qemu_cond_init(&qemu_work_cond);
 }
 
 void cpu_list_lock(void)
@@ -113,23 +111,37 @@ struct qemu_work_item {
     bool free, exclusive, done;
 };
 
-static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+/* Called with the CPU's lock held */
+static void queue_work_on_cpu_locked(CPUState *cpu, struct qemu_work_item *wi)
 {
-    qemu_mutex_lock(&cpu->lock);
     QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
-    qemu_mutex_unlock(&cpu->lock);
 
     qemu_cpu_kick(cpu);
 }
 
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
-                   QemuMutex *mutex)
+static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+{
+    cpu_mutex_lock(cpu);
+    queue_work_on_cpu_locked(cpu, wi);
+    cpu_mutex_unlock(cpu);
+}
+
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
     struct qemu_work_item wi;
+    bool has_bql = qemu_mutex_iothread_locked();
+
+    g_assert(no_cpu_mutex_locked());
 
     if (qemu_cpu_is_self(cpu)) {
-        func(cpu, data);
+        if (has_bql) {
+            func(cpu, data);
+        } else {
+            qemu_mutex_lock_iothread();
+            func(cpu, data);
+            qemu_mutex_unlock_iothread();
+        }
         return;
     }
 
@@ -139,13 +151,34 @@ void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
     wi.free = false;
     wi.exclusive = false;
 
-    queue_work_on_cpu(cpu, &wi);
+    cpu_mutex_lock(cpu);
+    queue_work_on_cpu_locked(cpu, &wi);
+
+    /*
+     * We are going to sleep on the CPU lock, so release the BQL.
+     *
+     * During the transition to per-CPU locks, we release the BQL _after_
+     * having kicked the destination CPU (from queue_work_on_cpu_locked above).
+     * This makes sure that the enqueued work will be seen by the CPU
+     * after being woken up from the kick, since the CPU sleeps on the BQL.
+     * Once we complete the transition to per-CPU locks, we will release
+     * the BQL earlier in this function.
+     */
+    if (has_bql) {
+        qemu_mutex_unlock_iothread();
+    }
+
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
 
-        qemu_cond_wait(&qemu_work_cond, mutex);
+        qemu_cond_wait(&cpu->cond, &cpu->lock);
         current_cpu = self_cpu;
     }
+    cpu_mutex_unlock(cpu);
+
+    if (has_bql) {
+        qemu_mutex_lock_iothread();
+    }
 }
 
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
@@ -307,6 +340,7 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
 void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
+    bool has_bql = qemu_mutex_iothread_locked();
 
     qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
@@ -324,13 +358,23 @@ void process_queued_cpu_work(CPUState *cpu)
              * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
              * neither CPU can proceed.
              */
-            qemu_mutex_unlock_iothread();
+            if (has_bql) {
+                qemu_mutex_unlock_iothread();
+            }
             start_exclusive();
             wi->func(cpu, wi->data);
             end_exclusive();
-            qemu_mutex_lock_iothread();
+            if (has_bql) {
+                qemu_mutex_lock_iothread();
+            }
         } else {
-            wi->func(cpu, wi->data);
+            if (has_bql) {
+                wi->func(cpu, wi->data);
+            } else {
+                qemu_mutex_lock_iothread();
+                wi->func(cpu, wi->data);
+                qemu_mutex_unlock_iothread();
+            }
         }
         qemu_mutex_lock(&cpu->lock);
         if (wi->free) {
@@ -340,5 +384,5 @@ void process_queued_cpu_work(CPUState *cpu)
         }
     }
     qemu_mutex_unlock(&cpu->lock);
-    qemu_cond_broadcast(&qemu_work_cond);
+    qemu_cond_broadcast(&cpu->cond);
 }
diff --git a/cpus.c b/cpus.c
index e4b54706ce..4057c3a7e5 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1236,7 +1236,7 @@ void qemu_init_cpu_loop(void)
 
 void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
-    do_run_on_cpu(cpu, func, data, &qemu_global_mutex);
+    do_run_on_cpu(cpu, func, data);
 }
 
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
diff --git a/qom/cpu.c b/qom/cpu.c
index a2964e53c6..be8393e589 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -372,6 +372,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_threads = 1;
 
     qemu_mutex_init(&cpu->lock);
+    qemu_cond_init(&cpu->cond);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
-- 
2.17.1


* [Qemu-devel] [PATCH v7 05/73] cpu: move run_on_cpu to cpus-common
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

We don't pass a pointer to qemu_global_mutex anymore.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 10 ----------
 cpus-common.c     |  2 +-
 cpus.c            |  5 -----
 3 files changed, 1 insertion(+), 16 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 62f4dc9025..eab86205c1 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -781,16 +781,6 @@ void qemu_cpu_kick(CPUState *cpu);
  */
 bool cpu_is_stopped(CPUState *cpu);
 
-/**
- * do_run_on_cpu:
- * @cpu: The vCPU to run on.
- * @func: The function to be executed.
- * @data: Data to pass to the function.
- *
- * Used internally in the implementation of run_on_cpu.
- */
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
-
 /**
  * run_on_cpu:
  * @cpu: The vCPU to run on.
diff --git a/cpus-common.c b/cpus-common.c
index daf1531868..85a61eb970 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -127,7 +127,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
     cpu_mutex_unlock(cpu);
 }
 
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
     struct qemu_work_item wi;
     bool has_bql = qemu_mutex_iothread_locked();
diff --git a/cpus.c b/cpus.c
index 4057c3a7e5..253c1f1a59 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1234,11 +1234,6 @@ void qemu_init_cpu_loop(void)
     qemu_thread_get_self(&io_thread);
 }
 
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
-{
-    do_run_on_cpu(cpu, func, data);
-}
-
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 {
     if (kvm_destroy_vcpu(cpu) < 0) {
-- 
2.17.1


* [Qemu-devel] [PATCH v7 06/73] cpu: introduce process_queued_cpu_work_locked
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

This completes the conversion to cpu_mutex_lock/unlock in the file.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 cpus-common.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/cpus-common.c b/cpus-common.c
index 85a61eb970..99662bfa87 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -337,20 +337,19 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     queue_work_on_cpu(cpu, wi);
 }
 
-void process_queued_cpu_work(CPUState *cpu)
+/* Called with the CPU's lock held */
+static void process_queued_cpu_work_locked(CPUState *cpu)
 {
     struct qemu_work_item *wi;
     bool has_bql = qemu_mutex_iothread_locked();
 
-    qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
-        qemu_mutex_unlock(&cpu->lock);
         return;
     }
     while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
         wi = QSIMPLEQ_FIRST(&cpu->work_list);
         QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
-        qemu_mutex_unlock(&cpu->lock);
+        cpu_mutex_unlock(cpu);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
              * 1) start_exclusive() is called with the BQL taken while another
@@ -376,13 +375,19 @@ void process_queued_cpu_work(CPUState *cpu)
                 qemu_mutex_unlock_iothread();
             }
         }
-        qemu_mutex_lock(&cpu->lock);
+        cpu_mutex_lock(cpu);
         if (wi->free) {
             g_free(wi);
         } else {
             atomic_mb_set(&wi->done, true);
         }
     }
-    qemu_mutex_unlock(&cpu->lock);
     qemu_cond_broadcast(&cpu->cond);
 }
+
+void process_queued_cpu_work(CPUState *cpu)
+{
+    cpu_mutex_lock(cpu);
+    process_queued_cpu_work_locked(cpu);
+    cpu_mutex_unlock(cpu);
+}
-- 
2.17.1


* [Qemu-devel] [PATCH v7 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Before we can switch from the BQL to per-CPU locks in
the CPU loop, we have to accommodate the fact that TCG
rr mode (i.e. !MTTCG) cannot work with separate per-vCPU
locks. That would lead to deadlock since we need a single
lock/condvar pair on which to wait for events that affect
any vCPU, e.g. in qemu_tcg_rr_wait_io_event.

At the same time, we are moving towards an interface where
the BQL and CPU locks are independent, and the only requirement
is that the locking order is respected, i.e. the BQL is
acquired first if both locks have to be held at the same time.

In this patch we make the BQL a recursive lock under the hood.
This allows us to (1) keep the BQL and CPU locks interfaces
separate, and (2) use a single lock for all vCPUs in TCG rr mode.

Note that the BQL's API (qemu_mutex_lock/unlock_iothread) remains
non-recursive.
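
A minimal sketch of the resulting rule at a call site (the function
below is hypothetical, not part of this patch): when both locks are
needed, the BQL comes first, and in TCG rr mode the cpu_mutex_lock()
call simply takes the (recursive) BQL again under the hood:

    /* Hypothetical call site, for illustration only. */
    static void example_with_both_locks(CPUState *cpu)
    {
        qemu_mutex_lock_iothread();     /* BQL first ...           */
        cpu_mutex_lock(cpu);            /* ... then the CPU's lock */

        /* ... touch BQL- and @lock-protected state ... */

        cpu_mutex_unlock(cpu);
        qemu_mutex_unlock_iothread();
    }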

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  2 +-
 cpus-common.c     |  2 +-
 cpus.c            | 90 +++++++++++++++++++++++++++++++++++++++++------
 qom/cpu.c         |  3 +-
 stubs/cpu-lock.c  |  6 ++--
 5 files changed, 86 insertions(+), 17 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index eab86205c1..d826a2852e 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -376,7 +376,7 @@ struct CPUState {
     int64_t icount_extra;
     sigjmp_buf jmp_env;
 
-    QemuMutex lock;
+    QemuMutex *lock;
     /* fields below protected by @lock */
     QemuCond cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
diff --git a/cpus-common.c b/cpus-common.c
index 99662bfa87..62e282bff1 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -171,7 +171,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
 
-        qemu_cond_wait(&cpu->cond, &cpu->lock);
+        qemu_cond_wait(&cpu->cond, cpu->lock);
         current_cpu = self_cpu;
     }
     cpu_mutex_unlock(cpu);
diff --git a/cpus.c b/cpus.c
index 253c1f1a59..4b967e4674 100644
--- a/cpus.c
+++ b/cpus.c
@@ -83,6 +83,12 @@ static unsigned int throttle_percentage;
 #define CPU_THROTTLE_PCT_MAX 99
 #define CPU_THROTTLE_TIMESLICE_NS 10000000
 
+static inline bool qemu_is_tcg_rr(void)
+{
+    /* in `make check-qtest', "use_icount && !tcg_enabled()" might be true */
+    return use_icount || (tcg_enabled() && !qemu_tcg_mttcg_enabled());
+}
+
 /* XXX: is this really the max number of CPUs? */
 #define CPU_LOCK_BITMAP_SIZE 2048
 
@@ -98,25 +104,76 @@ bool no_cpu_mutex_locked(void)
     return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
 }
 
-void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+static QemuMutex qemu_global_mutex;
+static __thread bool iothread_locked;
+/*
+ * In TCG rr mode, we make the BQL a recursive mutex, so that we can use it for
+ * all vCPUs while keeping the interface as if the locks were per-CPU.
+ *
+ * The fact that the BQL is implemented recursively is invisible to BQL users;
+ * the mutex API we export (qemu_mutex_lock_iothread() etc.) is non-recursive.
+ *
+ * Locking order: the BQL is always acquired before CPU locks.
+ */
+static __thread int iothread_lock_count;
+
+static void rr_cpu_mutex_lock(void)
+{
+    if (iothread_lock_count++ == 0) {
+        /*
+         * Circumvent qemu_mutex_lock_iothread()'s state keeping by
+         * acquiring the BQL directly.
+         */
+        qemu_mutex_lock(&qemu_global_mutex);
+    }
+}
+
+static void rr_cpu_mutex_unlock(void)
+{
+    g_assert(iothread_lock_count > 0);
+    if (--iothread_lock_count == 0) {
+        /*
+         * Circumvent qemu_mutex_unlock_iothread()'s state keeping by
+         * releasing the BQL directly.
+         */
+        qemu_mutex_unlock(&qemu_global_mutex);
+    }
+}
+
+static void do_cpu_mutex_lock(CPUState *cpu, const char *file, int line)
 {
-/* coverity gets confused by the indirect function call */
+    /* coverity gets confused by the indirect function call */
 #ifdef __COVERITY__
-    qemu_mutex_lock_impl(&cpu->lock, file, line);
+    qemu_mutex_lock_impl(cpu->lock, file, line);
 #else
     QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
 
+    f(cpu->lock, file, line);
+#endif
+}
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
     g_assert(!cpu_mutex_locked(cpu));
     set_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
-    f(&cpu->lock, file, line);
-#endif
+
+    if (qemu_is_tcg_rr()) {
+        rr_cpu_mutex_lock();
+    } else {
+        do_cpu_mutex_lock(cpu, file, line);
+    }
 }
 
 void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
 {
     g_assert(cpu_mutex_locked(cpu));
-    qemu_mutex_unlock_impl(&cpu->lock, file, line);
     clear_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+
+    if (qemu_is_tcg_rr()) {
+        rr_cpu_mutex_unlock();
+        return;
+    }
+    qemu_mutex_unlock_impl(cpu->lock, file, line);
 }
 
 bool cpu_mutex_locked(const CPUState *cpu)
@@ -1215,8 +1272,6 @@ static void qemu_init_sigbus(void)
 }
 #endif /* !CONFIG_LINUX */
 
-static QemuMutex qemu_global_mutex;
-
 static QemuThread io_thread;
 
 /* cpu creation */
@@ -1877,8 +1932,6 @@ bool qemu_in_vcpu_thread(void)
     return current_cpu && qemu_cpu_is_self(current_cpu);
 }
 
-static __thread bool iothread_locked = false;
-
 bool qemu_mutex_iothread_locked(void)
 {
     return iothread_locked;
@@ -1897,6 +1950,8 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
 
     g_assert(!qemu_mutex_iothread_locked());
     bql_lock(&qemu_global_mutex, file, line);
+    g_assert(iothread_lock_count == 0);
+    iothread_lock_count++;
     iothread_locked = true;
 }
 
@@ -1904,7 +1959,10 @@ void qemu_mutex_unlock_iothread(void)
 {
     g_assert(qemu_mutex_iothread_locked());
     iothread_locked = false;
-    qemu_mutex_unlock(&qemu_global_mutex);
+    g_assert(iothread_lock_count > 0);
+    if (--iothread_lock_count == 0) {
+        qemu_mutex_unlock(&qemu_global_mutex);
+    }
 }
 
 static bool all_vcpus_paused(void)
@@ -2128,6 +2186,16 @@ void qemu_init_vcpu(CPUState *cpu)
         cpu_address_space_init(cpu, 0, "cpu-memory", cpu->memory);
     }
 
+    /*
+     * In TCG RR, cpu->lock is the BQL under the hood. In all other modes,
+     * cpu->lock is a standalone per-CPU lock.
+     */
+    if (qemu_is_tcg_rr()) {
+        qemu_mutex_destroy(cpu->lock);
+        g_free(cpu->lock);
+        cpu->lock = &qemu_global_mutex;
+    }
+
     if (kvm_enabled()) {
         qemu_kvm_start_vcpu(cpu);
     } else if (hax_enabled()) {
diff --git a/qom/cpu.c b/qom/cpu.c
index be8393e589..2c05aa1bca 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -371,7 +371,8 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_cores = 1;
     cpu->nr_threads = 1;
 
-    qemu_mutex_init(&cpu->lock);
+    cpu->lock = g_new(QemuMutex, 1);
+    qemu_mutex_init(cpu->lock);
     qemu_cond_init(&cpu->cond);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
diff --git a/stubs/cpu-lock.c b/stubs/cpu-lock.c
index 3f07d3a28b..7406a66d97 100644
--- a/stubs/cpu-lock.c
+++ b/stubs/cpu-lock.c
@@ -5,16 +5,16 @@ void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
 {
 /* coverity gets confused by the indirect function call */
 #ifdef __COVERITY__
-    qemu_mutex_lock_impl(&cpu->lock, file, line);
+    qemu_mutex_lock_impl(cpu->lock, file, line);
 #else
     QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
-    f(&cpu->lock, file, line);
+    f(cpu->lock, file, line);
 #endif
 }
 
 void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
 {
-    qemu_mutex_unlock_impl(&cpu->lock, file, line);
+    qemu_mutex_unlock_impl(cpu->lock, file, line);
 }
 
 bool cpu_mutex_locked(const CPUState *cpu)
-- 
2.17.1


* [Qemu-devel] [PATCH v7 08/73] tcg-runtime: define helper_cpu_halted_set
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/tcg-runtime.h | 2 ++
 accel/tcg/tcg-runtime.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h
index dfe325625c..46386bb564 100644
--- a/accel/tcg/tcg-runtime.h
+++ b/accel/tcg/tcg-runtime.h
@@ -28,6 +28,8 @@ DEF_HELPER_FLAGS_1(lookup_tb_ptr, TCG_CALL_NO_WG_SE, ptr, env)
 
 DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env)
 
+DEF_HELPER_FLAGS_2(cpu_halted_set, TCG_CALL_NO_RWG, void, env, i32)
+
 #ifdef CONFIG_SOFTMMU
 
 DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG,
diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
index d0d4484406..4aa038465f 100644
--- a/accel/tcg/tcg-runtime.c
+++ b/accel/tcg/tcg-runtime.c
@@ -167,3 +167,10 @@ void HELPER(exit_atomic)(CPUArchState *env)
 {
     cpu_loop_exit_atomic(ENV_GET_CPU(env), GETPC());
 }
+
+void HELPER(cpu_halted_set)(CPUArchState *env, uint32_t val)
+{
+    CPUState *cpu = ENV_GET_CPU(env);
+
+    cpu->halted = val;
+}
-- 
2.17.1


* [Qemu-devel] [PATCH v7 09/73] ppc: convert to helper_cpu_halted_set
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, David Gibson, qemu-ppc

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/translate.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 819221f246..5835e85a2f 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -1572,8 +1572,7 @@ GEN_LOGICAL2(nor, tcg_gen_nor_tl, 0x03, PPC_INTEGER);
 static void gen_pause(DisasContext *ctx)
 {
     TCGv_i32 t0 = tcg_const_i32(0);
-    tcg_gen_st_i32(t0, cpu_env,
-                   -offsetof(PowerPCCPU, env) + offsetof(CPUState, halted));
+    gen_helper_cpu_halted_set(cpu_env, t0);
     tcg_temp_free_i32(t0);
 
     /* Stop translation, this gives other CPUs a chance to run */
@@ -3547,8 +3546,7 @@ static void gen_sync(DisasContext *ctx)
 static void gen_wait(DisasContext *ctx)
 {
     TCGv_i32 t0 = tcg_const_i32(1);
-    tcg_gen_st_i32(t0, cpu_env,
-                   -offsetof(PowerPCCPU, env) + offsetof(CPUState, halted));
+    gen_helper_cpu_halted_set(cpu_env, t0);
     tcg_temp_free_i32(t0);
     /* Stop translation, as the CPU is supposed to sleep from now */
     gen_exception_nip(ctx, EXCP_HLT, ctx->base.pc_next);
-- 
2.17.1


* [Qemu-devel] [PATCH v7 10/73] cris: convert to helper_cpu_halted_set
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Edgar E. Iglesias

And fix the temp leak along the way.

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/cris/translate.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/cris/translate.c b/target/cris/translate.c
index 11b2c11174..f059745ec0 100644
--- a/target/cris/translate.c
+++ b/target/cris/translate.c
@@ -2829,8 +2829,9 @@ static int dec_rfe_etc(CPUCRISState *env, DisasContext *dc)
     cris_cc_mask(dc, 0);
 
     if (dc->op2 == 15) {
-        tcg_gen_st_i32(tcg_const_i32(1), cpu_env,
-                       -offsetof(CRISCPU, env) + offsetof(CPUState, halted));
+        TCGv_i32 tmp = tcg_const_i32(1);
+        gen_helper_cpu_halted_set(cpu_env, tmp);
+        tcg_temp_free_i32(tmp);
         tcg_gen_movi_tl(env_pc, dc->pc + 2);
         t_gen_raise_exception(EXCP_HLT);
         return 2;
-- 
2.17.1


* [Qemu-devel] [PATCH v7 11/73] hppa: convert to helper_cpu_halted_set
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/hppa/translate.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index b4fd307b77..3c83a0c5b6 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -2662,8 +2662,7 @@ static bool trans_or(DisasContext *ctx, arg_rrr_cf *a)
 
             /* Tell the qemu main loop to halt until this cpu has work.  */
             tmp = tcg_const_i32(1);
-            tcg_gen_st_i32(tmp, cpu_env, -offsetof(HPPACPU, env) +
-                                         offsetof(CPUState, halted));
+            gen_helper_cpu_halted_set(cpu_env, tmp);
             tcg_temp_free_i32(tmp);
             gen_excp_1(EXCP_HALTED);
             ctx->base.is_jmp = DISAS_NORETURN;
-- 
2.17.1


* [Qemu-devel] [PATCH v7 12/73] m68k: convert to helper_cpu_halted_set
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/translate.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 6217a683f1..aa3237999f 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -43,7 +43,6 @@
 #undef DEFO32
 #undef DEFO64
 
-static TCGv_i32 cpu_halted;
 static TCGv_i32 cpu_exception_index;
 
 static char cpu_reg_names[2 * 8 * 3 + 5 * 4];
@@ -79,9 +78,6 @@ void m68k_tcg_init(void)
 #undef DEFO32
 #undef DEFO64
 
-    cpu_halted = tcg_global_mem_new_i32(cpu_env,
-                                        -offsetof(M68kCPU, env) +
-                                        offsetof(CPUState, halted), "HALTED");
     cpu_exception_index = tcg_global_mem_new_i32(cpu_env,
                                                  -offsetof(M68kCPU, env) +
                                                  offsetof(CPUState, exception_index),
@@ -4637,6 +4633,7 @@ DISAS_INSN(halt)
 DISAS_INSN(stop)
 {
     uint16_t ext;
+    TCGv_i32 tmp;
 
     if (IS_USER(s)) {
         gen_exception(s, s->base.pc_next, EXCP_PRIVILEGE);
@@ -4646,7 +4643,9 @@ DISAS_INSN(stop)
     ext = read_im16(env, s);
 
     gen_set_sr_im(s, ext, 0);
-    tcg_gen_movi_i32(cpu_halted, 1);
+    tmp = tcg_const_i32(1);
+    gen_helper_cpu_halted_set(cpu_env, tmp);
+    tcg_temp_free_i32(tmp);
     gen_exception(s, s->pc, EXCP_HLT);
 }
 
-- 
2.17.1


* [Qemu-devel] [PATCH v7 13/73] alpha: convert to helper_cpu_halted_set
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/alpha/translate.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 9d8f9b3eea..a75413e9b5 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -1226,8 +1226,7 @@ static DisasJumpType gen_call_pal(DisasContext *ctx, int palcode)
             /* WTINT */
             {
                 TCGv_i32 tmp = tcg_const_i32(1);
-                tcg_gen_st_i32(tmp, cpu_env, -offsetof(AlphaCPU, env) +
-                                             offsetof(CPUState, halted));
+                gen_helper_cpu_halted_set(cpu_env, tmp);
                 tcg_temp_free_i32(tmp);
             }
             tcg_gen_movi_i64(ctx->ir[IR_V0], 0);
@@ -1382,8 +1381,7 @@ static DisasJumpType gen_mtpr(DisasContext *ctx, TCGv vb, int regno)
         /* WAIT */
         {
             TCGv_i32 tmp = tcg_const_i32(1);
-            tcg_gen_st_i32(tmp, cpu_env, -offsetof(AlphaCPU, env) +
-                                         offsetof(CPUState, halted));
+            gen_helper_cpu_halted_set(cpu_env, tmp);
             tcg_temp_free_i32(tmp);
         }
         return gen_excp(ctx, EXCP_HALTED, 0);
-- 
2.17.1


* [Qemu-devel] [PATCH v7 14/73] microblaze: convert to helper_cpu_halted_set
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/microblaze/translate.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 78ca265b04..008b84d456 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -1233,9 +1233,7 @@ static void dec_br(DisasContext *dc)
             LOG_DIS("sleep\n");
 
             t_sync_flags(dc);
-            tcg_gen_st_i32(tmp_1, cpu_env,
-                           -offsetof(MicroBlazeCPU, env)
-                           +offsetof(CPUState, halted));
+            gen_helper_cpu_halted_set(cpu_env, tmp_1);
             tcg_gen_movi_i64(cpu_SR[SR_PC], dc->pc + 4);
             gen_helper_raise_exception(cpu_env, tmp_hlt);
             tcg_temp_free_i32(tmp_hlt);
-- 
2.17.1


* [Qemu-devel] [PATCH v7 15/73] cpu: define cpu_halted helpers
From: Emilio G. Cota @ 2019-03-04 18:17 UTC
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

cpu->halted will soon be protected by cpu->lock.
We will use these helpers to ease the transition,
since right now cpu->halted has many direct callers.
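
A sketch of the intended conversion at a typical (hypothetical) call
site; both helpers acquire cpu->lock themselves unless the caller
already holds it, so callers need no extra locking:

    /* Hypothetical call site, for illustration only. */
    static void example_wake(CPUState *cpu)
    {
        if (cpu_halted(cpu)) {          /* was: if (cpu->halted)  */
            cpu_halted_set(cpu, 0);     /* was: cpu->halted = 0;  */
            qemu_cpu_kick(cpu);
        }
    }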

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index d826a2852e..9fe74e4392 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -502,6 +502,30 @@ bool cpu_mutex_locked(const CPUState *cpu);
  */
 bool no_cpu_mutex_locked(void);
 
+static inline uint32_t cpu_halted(CPUState *cpu)
+{
+    uint32_t ret;
+
+    if (cpu_mutex_locked(cpu)) {
+        return cpu->halted;
+    }
+    cpu_mutex_lock(cpu);
+    ret = cpu->halted;
+    cpu_mutex_unlock(cpu);
+    return ret;
+}
+
+static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        cpu->halted = val;
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    cpu->halted = val;
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
-- 
2.17.1

* [Qemu-devel] [PATCH v7 16/73] tcg-runtime: convert to cpu_halted_set
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (14 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 15/73] cpu: define cpu_halted helpers Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 17/73] arm: convert to cpu_halted Emilio G. Cota
                   ` (58 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/tcg-runtime.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
index 4aa038465f..70e3c9de71 100644
--- a/accel/tcg/tcg-runtime.c
+++ b/accel/tcg/tcg-runtime.c
@@ -172,5 +172,5 @@ void HELPER(cpu_halted_set)(CPUArchState *env, uint32_t val)
 {
     CPUState *cpu = ENV_GET_CPU(env);
 
-    cpu->halted = val;
+    cpu_halted_set(cpu, val);
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v7 17/73] arm: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (15 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 16/73] tcg-runtime: convert to cpu_halted_set Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 18/73] ppc: " Emilio G. Cota
                   ` (57 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Andrzej Zaborowski,
	Peter Maydell, qemu-arm

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/arm/omap1.c            | 4 ++--
 hw/arm/pxa2xx_gpio.c      | 2 +-
 hw/arm/pxa2xx_pic.c       | 2 +-
 target/arm/arm-powerctl.c | 6 +++---
 target/arm/cpu.c          | 2 +-
 target/arm/op_helper.c    | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
index 539d29ef9c..55a7672976 100644
--- a/hw/arm/omap1.c
+++ b/hw/arm/omap1.c
@@ -1769,7 +1769,7 @@ static uint64_t omap_clkdsp_read(void *opaque, hwaddr addr,
     case 0x18:	/* DSP_SYSST */
         cpu = CPU(s->cpu);
         return (s->clkm.clocking_scheme << 11) | s->clkm.cold_start |
-                (cpu->halted << 6);      /* Quite useless... */
+                (cpu_halted(cpu) << 6);      /* Quite useless... */
     }
 
     OMAP_BAD_REG(addr);
@@ -3790,7 +3790,7 @@ void omap_mpu_wakeup(void *opaque, int irq, int req)
     struct omap_mpu_state_s *mpu = (struct omap_mpu_state_s *) opaque;
     CPUState *cpu = CPU(mpu->cpu);
 
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         cpu_interrupt(cpu, CPU_INTERRUPT_EXITTB);
     }
 }
diff --git a/hw/arm/pxa2xx_gpio.c b/hw/arm/pxa2xx_gpio.c
index e15070188e..5c3fea42e9 100644
--- a/hw/arm/pxa2xx_gpio.c
+++ b/hw/arm/pxa2xx_gpio.c
@@ -128,7 +128,7 @@ static void pxa2xx_gpio_set(void *opaque, int line, int level)
         pxa2xx_gpio_irq_update(s);
 
     /* Wake-up GPIOs */
-    if (cpu->halted && (mask & ~s->dir[bank] & pxa2xx_gpio_wake[bank])) {
+    if (cpu_halted(cpu) && (mask & ~s->dir[bank] & pxa2xx_gpio_wake[bank])) {
         cpu_interrupt(cpu, CPU_INTERRUPT_EXITTB);
     }
 }
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
index 61275fa040..46ab4c3fc2 100644
--- a/hw/arm/pxa2xx_pic.c
+++ b/hw/arm/pxa2xx_pic.c
@@ -58,7 +58,7 @@ static void pxa2xx_pic_update(void *opaque)
     PXA2xxPICState *s = (PXA2xxPICState *) opaque;
     CPUState *cpu = CPU(s->cpu);
 
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         mask[0] = s->int_pending[0] & (s->int_enabled[0] | s->int_idle);
         mask[1] = s->int_pending[1] & (s->int_enabled[1] | s->int_idle);
         if (mask[0] || mask[1]) {
diff --git a/target/arm/arm-powerctl.c b/target/arm/arm-powerctl.c
index f77a950db6..bd94ac2235 100644
--- a/target/arm/arm-powerctl.c
+++ b/target/arm/arm-powerctl.c
@@ -64,7 +64,7 @@ static void arm_set_cpu_on_async_work(CPUState *target_cpu_state,
 
     /* Initialize the cpu we are turning on */
     cpu_reset(target_cpu_state);
-    target_cpu_state->halted = 0;
+    cpu_halted_set(target_cpu_state, 0);
 
     if (info->target_aa64) {
         if ((info->target_el < 3) && arm_feature(&target_cpu->env,
@@ -235,7 +235,7 @@ static void arm_set_cpu_on_and_reset_async_work(CPUState *target_cpu_state,
 
     /* Initialize the cpu we are turning on */
     cpu_reset(target_cpu_state);
-    target_cpu_state->halted = 0;
+    cpu_halted_set(target_cpu_state, 0);
 
     /* Finally set the power status */
     assert(qemu_mutex_iothread_locked());
@@ -291,7 +291,7 @@ static void arm_set_cpu_off_async_work(CPUState *target_cpu_state,
 
     assert(qemu_mutex_iothread_locked());
     target_cpu->power_state = PSCI_OFF;
-    target_cpu_state->halted = 1;
+    cpu_halted_set(target_cpu_state, 1);
     target_cpu_state->exception_index = EXCP_HLT;
 }
 
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 54b61f917b..845730c00c 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -173,7 +173,7 @@ static void arm_cpu_reset(CPUState *s)
     env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
 
     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
-    s->halted = cpu->start_powered_off;
+    cpu_halted_set(s, cpu->start_powered_off);
 
     if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
         env->iwmmxt.cregs[ARM_IWMMXT_wCID] = 0x69051000 | 'Q';
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index c998eadfaa..f9ccdc9abf 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -479,7 +479,7 @@ void HELPER(wfi)(CPUARMState *env, uint32_t insn_len)
     }
 
     cs->exception_index = EXCP_HLT;
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_loop_exit(cs);
 }
 
-- 
2.17.1

* [Qemu-devel] [PATCH v7 18/73] ppc: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (16 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 17/73] arm: convert to cpu_halted Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 19/73] sh4: " Emilio G. Cota
                   ` (56 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, David Gibson, qemu-ppc

In ppce500_spin.c, acquire the lock just once to update
both cpu->halted and cpu->stopped.

In hw/ppc/spapr_hcall.c, acquire the lock just once to
update cpu->halted and call cpu_has_work, since later
in the series we'll acquire the BQL (if not already held)
from cpu_has_work.
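
The ppce500_spin.c hunk is the clearest example of the first point.
A rough, hypothetical miniature of that shape (mini_cpu and mini_wake
are invented names, not QEMU code):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical miniature of the spin_kick() change: take the per-CPU
 * lock once and update the related fields in one critical section,
 * instead of letting each accessor lock and unlock on its own. */
struct mini_cpu {
    pthread_mutex_t lock;
    uint32_t halted;
    bool stopped;
};

static void mini_wake(struct mini_cpu *cpu)
{
    pthread_mutex_lock(&cpu->lock);
    cpu->halted = 0;      /* cpu_halted_set() would see the lock held */
    cpu->stopped = false; /* and therefore not try to re-acquire it   */
    pthread_mutex_unlock(&cpu->lock);
}

int main(void)
{
    struct mini_cpu cpu = { .lock = PTHREAD_MUTEX_INITIALIZER, .halted = 1,
                            .stopped = true };

    mini_wake(&cpu);
    return (cpu.halted == 0 && !cpu.stopped) ? 0 : 1;
}

Holding the lock across both stores keeps the halted/stopped update in
a single critical section, and cpu_halted_set() does not re-lock
because it detects that the lock is already held.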

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/helper_regs.h        |  2 +-
 hw/ppc/e500.c                   |  4 ++--
 hw/ppc/ppc.c                    | 10 +++++-----
 hw/ppc/ppce500_spin.c           |  6 ++++--
 hw/ppc/spapr_cpu_core.c         |  4 ++--
 hw/ppc/spapr_hcall.c            |  4 +++-
 hw/ppc/spapr_rtas.c             |  6 +++---
 target/ppc/excp_helper.c        |  4 ++--
 target/ppc/kvm.c                |  4 ++--
 target/ppc/translate_init.inc.c |  6 +++---
 10 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/target/ppc/helper_regs.h b/target/ppc/helper_regs.h
index a2205e1044..5afce54318 100644
--- a/target/ppc/helper_regs.h
+++ b/target/ppc/helper_regs.h
@@ -161,7 +161,7 @@ static inline int hreg_store_msr(CPUPPCState *env, target_ulong value,
 #if !defined(CONFIG_USER_ONLY)
     if (unlikely(msr_pow == 1)) {
         if (!env->pending_interrupts && (*env->check_pow)(env)) {
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             excp = EXCP_HALTED;
         }
     }
diff --git a/hw/ppc/e500.c b/hw/ppc/e500.c
index 7553f674c9..4e82906ef2 100644
--- a/hw/ppc/e500.c
+++ b/hw/ppc/e500.c
@@ -657,7 +657,7 @@ static void ppce500_cpu_reset_sec(void *opaque)
 
     /* Secondary CPU starts in halted state for now. Needs to change when
        implementing non-kernel boot. */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
 }
 
@@ -671,7 +671,7 @@ static void ppce500_cpu_reset(void *opaque)
     cpu_reset(cs);
 
     /* Set initial guest state. */
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     env->gpr[1] = (16 * MiB) - 8;
     env->gpr[3] = bi->dt_base;
     env->gpr[4] = 0;
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index d1e3d4cd20..45e212390a 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -149,7 +149,7 @@ static void ppc6xx_set_irq(void *opaque, int pin, int level)
             /* XXX: Note that the only way to restart the CPU is to reset it */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             }
             break;
         case PPC6xx_INPUT_HRESET:
@@ -228,10 +228,10 @@ static void ppc970_set_irq(void *opaque, int pin, int level)
             /* XXX: TODO: relay the signal to CKSTP_OUT pin */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             } else {
                 LOG_IRQ("%s: restart the CPU\n", __func__);
-                cs->halted = 0;
+                cpu_halted_set(cs, 0);
                 qemu_cpu_kick(cs);
             }
             break;
@@ -457,10 +457,10 @@ static void ppc40x_set_irq(void *opaque, int pin, int level)
             /* Level sensitive - active low */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             } else {
                 LOG_IRQ("%s: restart the CPU\n", __func__);
-                cs->halted = 0;
+                cpu_halted_set(cs, 0);
                 qemu_cpu_kick(cs);
             }
             break;
diff --git a/hw/ppc/ppce500_spin.c b/hw/ppc/ppce500_spin.c
index c45fc858de..4b3532730f 100644
--- a/hw/ppc/ppce500_spin.c
+++ b/hw/ppc/ppce500_spin.c
@@ -107,9 +107,11 @@ static void spin_kick(CPUState *cs, run_on_cpu_data data)
     map_start = ldq_p(&curspin->addr) & ~(map_size - 1);
     mmubooke_create_initial_mapping(env, 0, map_start, map_size);
 
-    cs->halted = 0;
-    cs->exception_index = -1;
+    cpu_mutex_lock(cs);
+    cpu_halted_set(cs, 0);
     cs->stopped = false;
+    cpu_mutex_unlock(cs);
+    cs->exception_index = -1;
     qemu_cpu_kick(cs);
 }
 
diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index ef6cbb9c29..cbb8474cad 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -36,7 +36,7 @@ static void spapr_cpu_reset(void *opaque)
     /* All CPUs start halted.  CPU0 is unhalted from the machine level
      * reset code and the rest are explicitly started up by the guest
      * using an RTAS call */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 
     /* Set compatibility mode to match the boot CPU, which was either set
      * by the machine reset code or by CAS. This should never fail.
@@ -90,7 +90,7 @@ void spapr_cpu_set_entry_state(PowerPCCPU *cpu, target_ulong nip, target_ulong r
     env->nip = nip;
     env->gpr[3] = r3;
     kvmppc_set_reg_ppc_online(cpu, 1);
-    CPU(cpu)->halted = 0;
+    cpu_halted_set(CPU(cpu), 0);
     /* Enable Power-saving mode Exit Cause exceptions */
     ppc_store_lpcr(cpu, env->spr[SPR_LPCR] | pcc->lpcr_pm);
 }
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 476bad6271..2ff49ece8e 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1057,11 +1057,13 @@ static target_ulong h_cede(PowerPCCPU *cpu, sPAPRMachineState *spapr,
 
     env->msr |= (1ULL << MSR_EE);
     hreg_compute_hflags(env);
+    cpu_mutex_lock(cs);
     if (!cpu_has_work(cs)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
         cs->exit_request = 1;
     }
+    cpu_mutex_unlock(cs);
     return H_SUCCESS;
 }
 
diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
index 7a2cb786a3..60b7fd8a79 100644
--- a/hw/ppc/spapr_rtas.c
+++ b/hw/ppc/spapr_rtas.c
@@ -109,7 +109,7 @@ static void rtas_query_cpu_stopped_state(PowerPCCPU *cpu_,
     id = rtas_ld(args, 0);
     cpu = spapr_find_cpu(id);
     if (cpu != NULL) {
-        if (CPU(cpu)->halted) {
+        if (cpu_halted(CPU(cpu))) {
             rtas_st(rets, 1, 0);
         } else {
             rtas_st(rets, 1, 2);
@@ -153,7 +153,7 @@ static void rtas_start_cpu(PowerPCCPU *callcpu, sPAPRMachineState *spapr,
     env = &newcpu->env;
     pcc = POWERPC_CPU_GET_CLASS(newcpu);
 
-    if (!CPU(newcpu)->halted) {
+    if (!cpu_halted(CPU(newcpu))) {
         rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
         return;
     }
@@ -207,7 +207,7 @@ static void rtas_stop_self(PowerPCCPU *cpu, sPAPRMachineState *spapr,
      * This could deliver an interrupt on a dying CPU and crash the
      * guest */
     ppc_store_lpcr(cpu, env->spr[SPR_LPCR] & ~pcc->lpcr_pm);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     kvmppc_set_reg_ppc_online(cpu, 0);
     qemu_cpu_kick(cs);
 }
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 39bedbb11d..d0092cc13d 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -220,7 +220,7 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
                 qemu_log("Machine check while not allowed. "
                         "Entering checkstop state\n");
             }
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             cpu_interrupt_exittb(cs);
         }
         if (env->msr_mask & MSR_HVB) {
@@ -1005,7 +1005,7 @@ void helper_pminsn(CPUPPCState *env, powerpc_pm_insn_t insn)
     CPUState *cs;
 
     cs = CPU(ppc_env_get_cpu(env));
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 
     /* The architecture specifies that HDEC interrupts are
      * discarded in PM states
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index d01852fe31..faa757c417 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -1373,7 +1373,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int kvmppc_handle_halt(PowerPCCPU *cpu)
@@ -1382,7 +1382,7 @@ static int kvmppc_handle_halt(PowerPCCPU *cpu)
     CPUPPCState *env = &cpu->env;
 
     if (!(cs->interrupt_request & CPU_INTERRUPT_HARD) && (msr_ee)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
 
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 58542c0fe0..b2cad43e2a 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8468,7 +8468,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
@@ -8622,7 +8622,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
@@ -8814,7 +8814,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         uint64_t psscr = env->spr[SPR_PSSCR];
 
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
-- 
2.17.1

* [Qemu-devel] [PATCH v7 19/73] sh4: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (17 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 18/73] ppc: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 20/73] i386: " Emilio G. Cota
                   ` (55 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Aurelien Jarno

Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sh4/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/sh4/op_helper.c b/target/sh4/op_helper.c
index 4f825bae5a..57cc363ccc 100644
--- a/target/sh4/op_helper.c
+++ b/target/sh4/op_helper.c
@@ -105,7 +105,7 @@ void helper_sleep(CPUSH4State *env)
 {
     CPUState *cs = CPU(sh_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     env->in_sleep = 1;
     raise_exception(env, EXCP_HLT, 0);
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v7 20/73] i386: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (18 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 19/73] sh4: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 21/73] lm32: " Emilio G. Cota
                   ` (54 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Eduardo Habkost

Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/cpu.h         |  2 +-
 target/i386/cpu.c         |  2 +-
 target/i386/hax-all.c     |  4 ++--
 target/i386/helper.c      |  4 ++--
 target/i386/hvf/hvf.c     |  4 ++--
 target/i386/hvf/x86hvf.c  |  4 ++--
 target/i386/kvm.c         | 10 +++++-----
 target/i386/misc_helper.c |  2 +-
 target/i386/whpx-all.c    |  6 +++---
 9 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 95112b9118..65b4e5341b 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1617,7 +1617,7 @@ static inline void cpu_x86_load_seg_cache_sipi(X86CPU *cpu,
                            sipi_vector << 12,
                            env->segs[R_CS].limit,
                            env->segs[R_CS].flags);
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
 }
 
 int cpu_x86_get_descr_debug(CPUX86State *env, unsigned int selector,
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d3aa6a815b..fe23d684db 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4731,7 +4731,7 @@ static void x86_cpu_reset(CPUState *s)
     /* We hard-wire the BSP to the first CPU. */
     apic_designate_bsp(cpu->apic_state, s->cpu_index == 0);
 
-    s->halted = !cpu_is_bsp(cpu);
+    cpu_halted_set(s, !cpu_is_bsp(cpu));
 
     if (kvm_enabled()) {
         kvm_arch_reset_vcpu(cpu);
diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index b978a9b821..22951017cf 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -471,7 +471,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
         return 0;
     }
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
         cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
@@ -548,7 +548,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
                 !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
                 /* hlt instruction with interrupt disabled is shutdown */
                 env->eflags |= IF_MASK;
-                cpu->halted = 1;
+                cpu_halted_set(cpu, 1);
                 cpu->exception_index = EXCP_HLT;
                 ret = 1;
             }
diff --git a/target/i386/helper.c b/target/i386/helper.c
index e695f8ba7a..a75278f954 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -454,7 +454,7 @@ void x86_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     (env->hflags >> HF_INHIBIT_IRQ_SHIFT) & 1,
                     (env->a20_mask >> 20) & 1,
                     (env->hflags >> HF_SMM_SHIFT) & 1,
-                    cs->halted);
+                    cpu_halted(cs));
     } else
 #endif
     {
@@ -481,7 +481,7 @@ void x86_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     (env->hflags >> HF_INHIBIT_IRQ_SHIFT) & 1,
                     (env->a20_mask >> 20) & 1,
                     (env->hflags >> HF_SMM_SHIFT) & 1,
-                    cs->halted);
+                    cpu_halted(cs));
     }
 
     for(i = 0; i < 6; i++) {
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 42f9447303..7dc1d721ff 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -672,7 +672,7 @@ int hvf_vcpu_exec(CPUState *cpu)
         vmx_update_tpr(cpu);
 
         qemu_mutex_unlock_iothread();
-        if (!cpu_is_bsp(X86_CPU(cpu)) && cpu->halted) {
+        if (!cpu_is_bsp(X86_CPU(cpu)) && cpu_halted(cpu)) {
             qemu_mutex_lock_iothread();
             return EXCP_HLT;
         }
@@ -706,7 +706,7 @@ int hvf_vcpu_exec(CPUState *cpu)
                 (EFLAGS(env) & IF_MASK))
                 && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
                 !(idtvec_info & VMCS_IDT_VEC_VALID)) {
-                cpu->halted = 1;
+                cpu_halted_set(cpu, 1);
                 ret = EXCP_HLT;
             }
             ret = EXCP_INTERRUPT;
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index df8e946fbc..163bbed23f 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -446,7 +446,7 @@ int hvf_process_events(CPUState *cpu_state)
     if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK)) ||
         (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cpu_state->halted = 0;
+        cpu_halted_set(cpu_state, 0);
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
         hvf_cpu_synchronize_state(cpu_state);
@@ -458,5 +458,5 @@ int hvf_process_events(CPUState *cpu_state)
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
     }
-    return cpu_state->halted;
+    return cpu_halted(cpu);
 }
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index beae1b99da..81e3d65cc9 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2836,7 +2836,7 @@ static int kvm_get_mp_state(X86CPU *cpu)
     }
     env->mp_state = mp_state.mp_state;
     if (kvm_irqchip_in_kernel()) {
-        cs->halted = (mp_state.mp_state == KVM_MP_STATE_HALTED);
+        cpu_halted_set(cs, mp_state.mp_state == KVM_MP_STATE_HALTED);
     }
     return 0;
 }
@@ -3320,7 +3320,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         env->exception_injected = EXCP12_MCHK;
         env->has_error_code = 0;
 
-        cs->halted = 0;
+        cpu_halted_set(cs, 0);
         if (kvm_irqchip_in_kernel() && env->mp_state == KVM_MP_STATE_HALTED) {
             env->mp_state = KVM_MP_STATE_RUNNABLE;
         }
@@ -3343,7 +3343,7 @@ int kvm_arch_process_async_events(CPUState *cs)
     if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
         (cs->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cs->halted = 0;
+        cpu_halted_set(cs, 0);
     }
     if (cs->interrupt_request & CPU_INTERRUPT_SIPI) {
         kvm_cpu_synchronize_state(cs);
@@ -3356,7 +3356,7 @@ int kvm_arch_process_async_events(CPUState *cs)
                                       env->tpr_access_type);
     }
 
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int kvm_handle_halt(X86CPU *cpu)
@@ -3367,7 +3367,7 @@ static int kvm_handle_halt(X86CPU *cpu)
     if (!((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
         !(cs->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         return EXCP_HLT;
     }
 
diff --git a/target/i386/misc_helper.c b/target/i386/misc_helper.c
index 78f2020ef2..fcd6d833e8 100644
--- a/target/i386/misc_helper.c
+++ b/target/i386/misc_helper.c
@@ -554,7 +554,7 @@ static void do_hlt(X86CPU *cpu)
     CPUX86State *env = &cpu->env;
 
     env->hflags &= ~HF_INHIBIT_IRQ_MASK; /* needed if sti is just before */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index 57e53e1f1f..b9c79ccd99 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -697,7 +697,7 @@ static int whpx_handle_halt(CPUState *cpu)
           (env->eflags & IF_MASK)) &&
         !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu->exception_index = EXCP_HLT;
-        cpu->halted = true;
+        cpu_halted_set(cpu, true);
         ret = 1;
     }
     qemu_mutex_unlock_iothread();
@@ -857,7 +857,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
         (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cpu->halted = false;
+        cpu_halted_set(cpu, false);
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
@@ -887,7 +887,7 @@ static int whpx_vcpu_run(CPUState *cpu)
     int ret;
 
     whpx_vcpu_process_async_events(cpu);
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         cpu->exception_index = EXCP_HLT;
         atomic_set(&cpu->exit_request, false);
         return 0;
-- 
2.17.1

* [Qemu-devel] [PATCH v7 21/73] lm32: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (19 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 20/73] i386: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 22/73] m68k: " Emilio G. Cota
                   ` (53 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Michael Walle

Cc: Michael Walle <michael@walle.cc>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/lm32/op_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/lm32/op_helper.c b/target/lm32/op_helper.c
index 234d55e056..392634441b 100644
--- a/target/lm32/op_helper.c
+++ b/target/lm32/op_helper.c
@@ -31,7 +31,7 @@ void HELPER(hlt)(CPULM32State *env)
 {
     CPUState *cs = CPU(lm32_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
@@ -44,7 +44,7 @@ void HELPER(ill)(CPULM32State *env)
             "Connect a debugger or switch to the monitor console "
             "to find out more.\n");
     vm_stop(RUN_STATE_PAUSED);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     raise_exception(env, EXCP_HALTED);
 #endif
 }
-- 
2.17.1

* [Qemu-devel] [PATCH v7 22/73] m68k: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (20 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 21/73] lm32: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 23/73] mips: " Emilio G. Cota
                   ` (52 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/m68k/op_helper.c b/target/m68k/op_helper.c
index 76f439985a..9f717dfec1 100644
--- a/target/m68k/op_helper.c
+++ b/target/m68k/op_helper.c
@@ -237,7 +237,7 @@ static void cf_interrupt_all(CPUM68KState *env, int is_hw)
                 do_m68k_semihosting(env, env->dregs[0]);
                 return;
             }
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             cs->exception_index = EXCP_HLT;
             cpu_loop_exit(cs);
             return;
-- 
2.17.1

* [Qemu-devel] [PATCH v7 23/73] mips: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (21 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 22/73] m68k: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 24/73] riscv: " Emilio G. Cota
                   ` (51 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Aurelien Jarno,
	Aleksandar Markovic, James Hogan

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: James Hogan <jhogan@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/mips/cps.c           | 2 +-
 hw/misc/mips_itu.c      | 4 ++--
 target/mips/kvm.c       | 2 +-
 target/mips/op_helper.c | 8 ++++----
 target/mips/translate.c | 4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/hw/mips/cps.c b/hw/mips/cps.c
index fc97f59af4..57f3c9e5fc 100644
--- a/hw/mips/cps.c
+++ b/hw/mips/cps.c
@@ -49,7 +49,7 @@ static void main_cpu_reset(void *opaque)
     cpu_reset(cs);
 
     /* All VPs are halted on reset. Leave powering up to CPC. */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 }
 
 static bool cpu_mips_itu_supported(CPUMIPSState *env)
diff --git a/hw/misc/mips_itu.c b/hw/misc/mips_itu.c
index 3afdbe69c6..397dfa7a3c 100644
--- a/hw/misc/mips_itu.c
+++ b/hw/misc/mips_itu.c
@@ -181,7 +181,7 @@ static void wake_blocked_threads(ITCStorageCell *c)
 {
     CPUState *cs;
     CPU_FOREACH(cs) {
-        if (cs->halted && (c->blocked_threads & (1ULL << cs->cpu_index))) {
+        if (cpu_halted(cs) && (c->blocked_threads & (1ULL << cs->cpu_index))) {
             cpu_interrupt(cs, CPU_INTERRUPT_WAKE);
         }
     }
@@ -191,7 +191,7 @@ static void wake_blocked_threads(ITCStorageCell *c)
 static void QEMU_NORETURN block_thread_and_exit(ITCStorageCell *c)
 {
     c->blocked_threads |= 1ULL << current_cpu->cpu_index;
-    current_cpu->halted = 1;
+    cpu_halted_set(current_cpu, 1);
     current_cpu->exception_index = EXCP_HLT;
     cpu_loop_exit_restore(current_cpu, current_cpu->mem_io_pc);
 }
diff --git a/target/mips/kvm.c b/target/mips/kvm.c
index 8e72850962..0b177a7577 100644
--- a/target/mips/kvm.c
+++ b/target/mips/kvm.c
@@ -156,7 +156,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index 0f272a5b93..704cb81147 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -623,7 +623,7 @@ static bool mips_vpe_is_wfi(MIPSCPU *c)
 
     /* If the VPE is halted but otherwise active, it means it's waiting for
        an interrupt.  */
-    return cpu->halted && mips_vpe_active(env);
+    return cpu_halted(cpu) && mips_vpe_active(env);
 }
 
 static bool mips_vp_is_wfi(MIPSCPU *c)
@@ -631,7 +631,7 @@ static bool mips_vp_is_wfi(MIPSCPU *c)
     CPUState *cpu = CPU(c);
     CPUMIPSState *env = &c->env;
 
-    return cpu->halted && mips_vp_active(env);
+    return cpu_halted(cpu) && mips_vp_active(env);
 }
 
 static inline void mips_vpe_wake(MIPSCPU *c)
@@ -650,7 +650,7 @@ static inline void mips_vpe_sleep(MIPSCPU *cpu)
 
     /* The VPE was shut off, really go to bed.
        Reset any old _WAKE requests.  */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_reset_interrupt(cs, CPU_INTERRUPT_WAKE);
 }
 
@@ -2635,7 +2635,7 @@ void helper_wait(CPUMIPSState *env)
 {
     CPUState *cs = CPU(mips_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_reset_interrupt(cs, CPU_INTERRUPT_WAKE);
     /* Last instruction in the block, PC was updated before
        - no need to recover PC and icount */
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 364bd6dc4f..9550320ada 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -30044,7 +30044,7 @@ void cpu_state_reset(CPUMIPSState *env)
             env->tcs[i].CP0_TCHalt = 1;
         }
         env->active_tc.CP0_TCHalt = 1;
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
 
         if (cs->cpu_index == 0) {
             /* VPE0 starts up enabled.  */
@@ -30052,7 +30052,7 @@ void cpu_state_reset(CPUMIPSState *env)
             env->CP0_VPEConf0 |= (1 << CP0VPEC0_MVP) | (1 << CP0VPEC0_VPA);
 
             /* TC0 starts up unhalted.  */
-            cs->halted = 0;
+            cpu_halted_set(cs, 0);
             env->active_tc.CP0_TCHalt = 0;
             env->tcs[0].CP0_TCHalt = 0;
             /* With thread 0 active.  */
-- 
2.17.1

* [Qemu-devel] [PATCH v7 24/73] riscv: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (22 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 23/73] mips: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 25/73] s390x: " Emilio G. Cota
                   ` (50 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Palmer Dabbelt,
	Sagar Karandikar, Bastian Koppelmann, Alistair Francis

Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Alistair Francis <alistair23@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/riscv/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
index b7dc18a41e..7732b4ba32 100644
--- a/target/riscv/op_helper.c
+++ b/target/riscv/op_helper.c
@@ -135,7 +135,7 @@ void helper_wfi(CPURISCVState *env)
         get_field(env->mstatus, MSTATUS_TW)) {
         riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
     } else {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
         cpu_loop_exit(cs);
     }
-- 
2.17.1

* [Qemu-devel] [PATCH v7 25/73] s390x: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (23 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 24/73] riscv: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 26/73] sparc: " Emilio G. Cota
                   ` (49 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Cornelia Huck,
	Christian Borntraeger, David Hildenbrand, qemu-s390x

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/intc/s390_flic.c        |  2 +-
 target/s390x/cpu.c         | 22 +++++++++++++++-------
 target/s390x/excp_helper.c |  2 +-
 target/s390x/kvm.c         |  2 +-
 target/s390x/sigp.c        |  8 ++++----
 5 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index 5f8168f0f0..bfb5cf1d07 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -198,7 +198,7 @@ static void qemu_s390_flic_notify(uint32_t type)
         }
 
         /* we always kick running CPUs for now, this is tricky */
-        if (cs->halted) {
+        if (cpu_halted(cs)) {
             /* don't check for subclasses, CPUs double check when waking up */
             if (type & FLIC_PENDING_SERVICE) {
                 if (!(cpu->env.psw.mask & PSW_MASK_EXT)) {
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 698dd9cb82..4e17df5c39 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -285,7 +285,7 @@ static void s390_cpu_initfn(Object *obj)
     CPUS390XState *env = &cpu->env;
 
     cs->env_ptr = env;
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     object_property_add(obj, "crash-information", "GuestPanicInformation",
                         s390_cpu_get_crash_info_qom, NULL, NULL, NULL, NULL);
@@ -310,8 +310,8 @@ static void s390_cpu_finalize(Object *obj)
 #if !defined(CONFIG_USER_ONLY)
 static bool disabled_wait(CPUState *cpu)
 {
-    return cpu->halted && !(S390_CPU(cpu)->env.psw.mask &
-                            (PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK));
+    return cpu_halted(cpu) && !(S390_CPU(cpu)->env.psw.mask &
+                                (PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK));
 }
 
 static unsigned s390_count_running_cpus(void)
@@ -337,10 +337,16 @@ unsigned int s390_cpu_halt(S390CPU *cpu)
     CPUState *cs = CPU(cpu);
     trace_cpu_halt(cs->cpu_index);
 
-    if (!cs->halted) {
-        cs->halted = 1;
+    /*
+     * cpu_halted and cpu_halted_set acquire the cpu lock if it
+     * isn't already held, so acquire it first.
+     */
+    cpu_mutex_lock(cs);
+    if (!cpu_halted(cs)) {
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
+    cpu_mutex_unlock(cs);
 
     return s390_count_running_cpus();
 }
@@ -350,10 +356,12 @@ void s390_cpu_unhalt(S390CPU *cpu)
     CPUState *cs = CPU(cpu);
     trace_cpu_unhalt(cs->cpu_index);
 
-    if (cs->halted) {
-        cs->halted = 0;
+    cpu_mutex_lock(cs);
+    if (cpu_halted(cs)) {
+        cpu_halted_set(cs, 0);
         cs->exception_index = -1;
     }
+    cpu_mutex_unlock(cs);
 }
 
 unsigned int s390_cpu_set_state(uint8_t cpu_state, S390CPU *cpu)
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index a758649f47..633d59346b 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -461,7 +461,7 @@ try_deliver:
     if ((env->psw.mask & PSW_MASK_WAIT) || stopped) {
         /* don't trigger a cpu_loop_exit(), use an interrupt instead */
         cpu_interrupt(CPU(cpu), CPU_INTERRUPT_HALT);
-    } else if (cs->halted) {
+    } else if (cpu_halted(cs)) {
         /* unhalt if we had a WAIT PSW somehwere in our injection chain */
         s390_cpu_unhalt(cpu);
     }
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index 19530fb94e..433a7e4573 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -1003,7 +1003,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int s390_kvm_irq_to_interrupt(struct kvm_s390_irq *irq,
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
index c1f9245797..d410da797a 100644
--- a/target/s390x/sigp.c
+++ b/target/s390x/sigp.c
@@ -115,7 +115,7 @@ static void sigp_stop(CPUState *cs, run_on_cpu_data arg)
     }
 
     /* disabled wait - sleeping in user space */
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);
     } else {
         /* execute the stop function */
@@ -131,7 +131,7 @@ static void sigp_stop_and_store_status(CPUState *cs, run_on_cpu_data arg)
     SigpInfo *si = arg.host_ptr;
 
     /* disabled wait - sleeping in user space */
-    if (s390_cpu_get_state(cpu) == S390_CPU_STATE_OPERATING && cs->halted) {
+    if (s390_cpu_get_state(cpu) == S390_CPU_STATE_OPERATING && cpu_halted(cs)) {
         s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);
     }
 
@@ -313,7 +313,7 @@ static void sigp_cond_emergency(S390CPU *src_cpu, S390CPU *dst_cpu,
     }
 
     /* this looks racy, but these values are only used when STOPPED */
-    idle = CPU(dst_cpu)->halted;
+    idle = cpu_halted(CPU(dst_cpu));
     psw_addr = dst_cpu->env.psw.addr;
     psw_mask = dst_cpu->env.psw.mask;
     asn = si->param;
@@ -347,7 +347,7 @@ static void sigp_sense_running(S390CPU *dst_cpu, SigpInfo *si)
     }
 
     /* If halted (which includes also STOPPED), it is not running */
-    if (CPU(dst_cpu)->halted) {
+    if (cpu_halted(CPU(dst_cpu))) {
         si->cc = SIGP_CC_ORDER_CODE_ACCEPTED;
     } else {
         set_sigp_status(si, SIGP_STAT_NOT_RUNNING);
-- 
2.17.1

* [Qemu-devel] [PATCH v7 26/73] sparc: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (24 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 25/73] s390x: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 27/73] xtensa: " Emilio G. Cota
                   ` (48 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Fabien Chouteau,
	Mark Cave-Ayland, Artyom Tarasenko

Cc: Fabien Chouteau <chouteau@adacore.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/sparc/leon3.c      | 2 +-
 hw/sparc/sun4m.c      | 8 ++++----
 hw/sparc64/sparc64.c  | 4 ++--
 target/sparc/helper.c | 2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/hw/sparc/leon3.c b/hw/sparc/leon3.c
index 774639af33..7b9239af87 100644
--- a/hw/sparc/leon3.c
+++ b/hw/sparc/leon3.c
@@ -61,7 +61,7 @@ static void main_cpu_reset(void *opaque)
 
     cpu_reset(cpu);
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     env->pc     = s->entry;
     env->npc    = s->entry + 4;
     env->regbase[6] = s->sp;
diff --git a/hw/sparc/sun4m.c b/hw/sparc/sun4m.c
index ca1e3825d5..4aee7eef48 100644
--- a/hw/sparc/sun4m.c
+++ b/hw/sparc/sun4m.c
@@ -167,7 +167,7 @@ static void cpu_kick_irq(SPARCCPU *cpu)
     CPUSPARCState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
 
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     cpu_check_irqs(env);
     qemu_cpu_kick(cs);
 }
@@ -198,7 +198,7 @@ static void main_cpu_reset(void *opaque)
     CPUState *cs = CPU(cpu);
 
     cpu_reset(cs);
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
 }
 
 static void secondary_cpu_reset(void *opaque)
@@ -207,7 +207,7 @@ static void secondary_cpu_reset(void *opaque)
     CPUState *cs = CPU(cpu);
 
     cpu_reset(cs);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 }
 
 static void cpu_halt_signal(void *opaque, int irq, int level)
@@ -828,7 +828,7 @@ static void cpu_devinit(const char *cpu_type, unsigned int id,
     } else {
         qemu_register_reset(secondary_cpu_reset, cpu);
         cs = CPU(cpu);
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
     }
     *cpu_irqs = qemu_allocate_irqs(cpu_set_irq, cpu, MAX_PILS);
     env->prom_addr = prom_addr;
diff --git a/hw/sparc64/sparc64.c b/hw/sparc64/sparc64.c
index 408388945e..372bbd4f5b 100644
--- a/hw/sparc64/sparc64.c
+++ b/hw/sparc64/sparc64.c
@@ -100,7 +100,7 @@ static void cpu_kick_irq(SPARCCPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUSPARCState *env = &cpu->env;
 
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     cpu_check_irqs(env);
     qemu_cpu_kick(cs);
 }
@@ -115,7 +115,7 @@ void sparc64_cpu_set_ivec_irq(void *opaque, int irq, int level)
         if (!(env->ivec_status & 0x20)) {
             trace_sparc64_cpu_ivec_raise_irq(irq);
             cs = CPU(cpu);
-            cs->halted = 0;
+            cpu_halted_set(cs, 0);
             env->interrupt_index = TT_IVEC;
             env->ivec_status |= 0x20;
             env->ivec_data[0] = (0x1f << 6) | irq;
diff --git a/target/sparc/helper.c b/target/sparc/helper.c
index 46232788c8..dd00cf7cac 100644
--- a/target/sparc/helper.c
+++ b/target/sparc/helper.c
@@ -245,7 +245,7 @@ void helper_power_down(CPUSPARCState *env)
 {
     CPUState *cs = CPU(sparc_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     env->pc = env->npc;
     env->npc = env->pc + 4;
-- 
2.17.1

* [Qemu-devel] [PATCH v7 27/73] xtensa: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (25 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 26/73] sparc: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 28/73] gdbstub: " Emilio G. Cota
                   ` (47 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Max Filippov

Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/xtensa/cpu.c        | 2 +-
 target/xtensa/exc_helper.c | 2 +-
 target/xtensa/helper.c     | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index a54dbe4260..d4ca35e6cc 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -86,7 +86,7 @@ static void xtensa_cpu_reset(CPUState *s)
 
 #ifndef CONFIG_USER_ONLY
     reset_mmu(env);
-    s->halted = env->runstall;
+    cpu_halted_set(s, env->runstall);
 #endif
 }
 
diff --git a/target/xtensa/exc_helper.c b/target/xtensa/exc_helper.c
index 4a1f7aef5d..ee4b803be8 100644
--- a/target/xtensa/exc_helper.c
+++ b/target/xtensa/exc_helper.c
@@ -116,7 +116,7 @@ void HELPER(waiti)(CPUXtensaState *env, uint32_t pc, uint32_t intlevel)
     }
 
     cpu = CPU(xtensa_env_get_cpu(env));
-    cpu->halted = 1;
+    cpu_halted_set(cpu, 1);
     HELPER(exception)(env, EXCP_HLT);
 }
 
diff --git a/target/xtensa/helper.c b/target/xtensa/helper.c
index f4867a9b56..58876a7622 100644
--- a/target/xtensa/helper.c
+++ b/target/xtensa/helper.c
@@ -318,7 +318,7 @@ void xtensa_runstall(CPUXtensaState *env, bool runstall)
     CPUState *cpu = CPU(xtensa_env_get_cpu(env));
 
     env->runstall = runstall;
-    cpu->halted = runstall;
+    cpu_halted_set(cpu, runstall);
     if (runstall) {
         cpu_interrupt(cpu, CPU_INTERRUPT_HALT);
     } else {
-- 
2.17.1

* [Qemu-devel] [PATCH v7 28/73] gdbstub: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (26 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 27/73] xtensa: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 29/73] openrisc: " Emilio G. Cota
                   ` (46 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 gdbstub.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index bc774ae992..569b504552 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1664,13 +1664,13 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
                         object_get_canonical_path_component(OBJECT(cpu));
                     len = snprintf((char *)mem_buf, sizeof(buf) / 2,
                                    "%s %s [%s]", cpu_model, cpu_name,
-                                   cpu->halted ? "halted " : "running");
+                                   cpu_halted(cpu) ? "halted " : "running");
                     g_free(cpu_name);
                 } else {
                     /* memtohex() doubles the required space */
                     len = snprintf((char *)mem_buf, sizeof(buf) / 2,
                                    "CPU#%d [%s]", cpu->cpu_index,
-                                   cpu->halted ? "halted " : "running");
+                                   cpu_halted(cpu) ? "halted " : "running");
                 }
                 trace_gdbstub_op_extra_info((char *)mem_buf);
                 memtohex(buf, mem_buf, len);
-- 
2.17.1

* [Qemu-devel] [PATCH v7 29/73] openrisc: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (27 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 28/73] gdbstub: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 30/73] cpu-exec: " Emilio G. Cota
                   ` (45 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Stafford Horne

Cc: Stafford Horne <shorne@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/openrisc/sys_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/openrisc/sys_helper.c b/target/openrisc/sys_helper.c
index 05f66c455b..0915e622e6 100644
--- a/target/openrisc/sys_helper.c
+++ b/target/openrisc/sys_helper.c
@@ -137,7 +137,7 @@ void HELPER(mtspr)(CPUOpenRISCState *env, target_ulong spr, target_ulong rb)
         if (env->pmr & PMR_DME || env->pmr & PMR_SME) {
             cpu_restore_state(cs, GETPC(), true);
             env->pc += 4;
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             raise_exception(cpu, EXCP_HALTED);
         }
         break;
-- 
2.17.1

* [Qemu-devel] [PATCH v7 30/73] cpu-exec: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (28 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 29/73] openrisc: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 31/73] cpu: " Emilio G. Cota
                   ` (44 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cpu-exec.c | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 45ef41ebb2..78dca325ba 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -425,14 +425,21 @@ static inline TranslationBlock *tb_find(CPUState *cpu,
     return tb;
 }
 
-static inline bool cpu_handle_halt(CPUState *cpu)
+static inline bool cpu_handle_halt_locked(CPUState *cpu)
 {
-    if (cpu->halted) {
+    g_assert(cpu_mutex_locked(cpu));
+
+    if (cpu_halted(cpu)) {
 #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
         if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
+
+            /* prevent deadlock; cpu_mutex must be acquired _after_ the BQL */
+            cpu_mutex_unlock(cpu);
             qemu_mutex_lock_iothread();
+            cpu_mutex_lock(cpu);
+
             apic_poll_irq(x86_cpu->apic_state);
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
             qemu_mutex_unlock_iothread();
@@ -442,12 +449,22 @@ static inline bool cpu_handle_halt(CPUState *cpu)
             return true;
         }
 
-        cpu->halted = 0;
+        cpu_halted_set(cpu, 0);
     }
 
     return false;
 }
 
+static inline bool cpu_handle_halt(CPUState *cpu)
+{
+    bool ret;
+
+    cpu_mutex_lock(cpu);
+    ret = cpu_handle_halt_locked(cpu);
+    cpu_mutex_unlock(cpu);
+    return ret;
+}
+
 static inline void cpu_handle_debug_exception(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
@@ -546,7 +563,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
         } else if (interrupt_request & CPU_INTERRUPT_HALT) {
             replay_interrupt();
             cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
-            cpu->halted = 1;
+            cpu_halted_set(cpu, 1);
             cpu->exception_index = EXCP_HLT;
             qemu_mutex_unlock_iothread();
             return true;
-- 
2.17.1

* [Qemu-devel] [PATCH v7 31/73] cpu: convert to cpu_halted
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (29 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 30/73] cpu-exec: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 32/73] cpu: define cpu_interrupt_request helpers Emilio G. Cota
                   ` (43 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

This finishes the conversion to cpu_halted.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cpu-exec.c | 2 +-
 cpus.c               | 8 ++++----
 qom/cpu.c            | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 78dca325ba..8af0b5de07 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -435,7 +435,7 @@ static inline bool cpu_handle_halt_locked(CPUState *cpu)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
 
-            /* prevent deadlock; cpu_mutex must be acquired _after_ the BQL */
+            /* locking order: cpu_mutex must be acquired _after_ the BQL */
             cpu_mutex_unlock(cpu);
             qemu_mutex_lock_iothread();
             cpu_mutex_lock(cpu);
diff --git a/cpus.c b/cpus.c
index 4b967e4674..c6806184ee 100644
--- a/cpus.c
+++ b/cpus.c
@@ -204,7 +204,7 @@ static bool cpu_thread_is_idle(CPUState *cpu)
     if (cpu_is_stopped(cpu)) {
         return true;
     }
-    if (!cpu->halted || cpu_has_work(cpu) ||
+    if (!cpu_halted(cpu) || cpu_has_work(cpu) ||
         kvm_halt_in_kernel()) {
         return false;
     }
@@ -1687,7 +1687,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
@@ -1846,7 +1846,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                  *
                  * cpu->halted should ensure we sleep in wait_io_event
                  */
-                g_assert(cpu->halted);
+                g_assert(cpu_halted(cpu));
                 break;
             case EXCP_ATOMIC:
                 qemu_mutex_unlock_iothread();
@@ -2343,7 +2343,7 @@ CpuInfoList *qmp_query_cpus(Error **errp)
         info->value = g_malloc0(sizeof(*info->value));
         info->value->CPU = cpu->cpu_index;
         info->value->current = (cpu == first_cpu);
-        info->value->halted = cpu->halted;
+        info->value->halted = cpu_halted(cpu);
         info->value->qom_path = object_get_canonical_path(OBJECT(cpu));
         info->value->thread_id = cpu->thread_id;
 #if defined(TARGET_I386)
diff --git a/qom/cpu.c b/qom/cpu.c
index 2c05aa1bca..c5106d5af8 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -261,7 +261,7 @@ static void cpu_common_reset(CPUState *cpu)
     }
 
     cpu->interrupt_request = 0;
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     cpu->mem_io_pc = 0;
     cpu->mem_io_vaddr = 0;
     cpu->icount_extra = 0;
-- 
2.17.1


* [Qemu-devel] [PATCH v7 32/73] cpu: define cpu_interrupt_request helpers
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (30 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 31/73] cpu: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 33/73] ppc: use cpu_reset_interrupt Emilio G. Cota
                   ` (42 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Add a comment explaining why reading interrupt_request with atomic_read,
i.e. without holding the CPU lock, is safe. The comment refers to a
"BQL-less CPU loop", which will materialize toward the end of this
series.

Note that the modifications to cpu_reset_interrupt are there to
avoid deadlock during the CPU lock transition; once that transition is
complete, cpu_reset_interrupt will be simple again.
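
For illustration only (not part of this patch): a device or interrupt
controller model would use the new helpers roughly as follows. The
names raise_hard_irq and hard_irq_pending are made up for this sketch;
the helpers, CPU_INTERRUPT_HARD and the kick functions are the ones
already used elsewhere in this series.

    /* Sender side: may run in any thread. The setter takes the per-CPU
     * lock internally; the kick forces the vCPU out of its exec loop. */
    static void raise_hard_irq(CPUState *cpu)
    {
        cpu_interrupt_request_or(cpu, CPU_INTERRUPT_HARD);
        if (!qemu_cpu_is_self(cpu)) {
            qemu_cpu_kick(cpu);
        }
    }

    /* Reader side: the vCPU thread may poll without taking its own lock,
     * since the kick above guarantees it observes the updated value. */
    static bool hard_irq_pending(CPUState *cpu)
    {
        return cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD;
    }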

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 37 +++++++++++++++++++++++++++++++++++++
 qom/cpu.c         | 27 +++++++++++++++++++++------
 2 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 9fe74e4392..c92a8c544a 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -526,6 +526,43 @@ static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
     cpu_mutex_unlock(cpu);
 }
 
+/*
+ * When sending an interrupt, setters OR the appropriate bit and kick the
+ * destination vCPU. The latter can then read interrupt_request without
+ * acquiring the CPU lock, because once the kick-induced exit completes,
+ * it will read an up-to-date interrupt_request.
+ * Setters always acquire the lock, which guarantees that (1) concurrent
+ * updates from different threads won't result in data races, and (2) the
+ * BQL-less CPU loop will always see an up-to-date interrupt_request, since
+ * the loop holds the CPU lock.
+ */
+static inline uint32_t cpu_interrupt_request(CPUState *cpu)
+{
+    return atomic_read(&cpu->interrupt_request);
+}
+
+static inline void cpu_interrupt_request_or(CPUState *cpu, uint32_t mask)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+    cpu_mutex_unlock(cpu);
+}
+
+static inline void cpu_interrupt_request_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, val);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, val);
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
diff --git a/qom/cpu.c b/qom/cpu.c
index c5106d5af8..00add81a7f 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -98,14 +98,29 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
  * BQL here if we need to.  cpu_interrupt assumes it is held.*/
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool need_lock = !qemu_mutex_iothread_locked();
+    bool has_bql = qemu_mutex_iothread_locked();
+    bool has_cpu_lock = cpu_mutex_locked(cpu);
 
-    if (need_lock) {
-        qemu_mutex_lock_iothread();
+    if (has_bql) {
+        if (has_cpu_lock) {
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+        } else {
+            cpu_mutex_lock(cpu);
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+            cpu_mutex_unlock(cpu);
+        }
+        return;
+    }
+
+    if (has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
-    cpu->interrupt_request &= ~mask;
-    if (need_lock) {
-        qemu_mutex_unlock_iothread();
+    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+    qemu_mutex_unlock_iothread();
+    if (!has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
 }
 
-- 
2.17.1


* [Qemu-devel] [PATCH v7 33/73] ppc: use cpu_reset_interrupt
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (31 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 32/73] cpu: define cpu_interrupt_request helpers Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 34/73] exec: " Emilio G. Cota
                   ` (41 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Paolo Bonzini, David Gibson,
	qemu-ppc

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/excp_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index d0092cc13d..484ac9b9e8 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -931,7 +931,7 @@ bool ppc_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     if (interrupt_request & CPU_INTERRUPT_HARD) {
         ppc_hw_interrupt(env);
         if (env->pending_interrupts == 0) {
-            cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
         }
         return true;
     }
-- 
2.17.1


* [Qemu-devel] [PATCH v7 34/73] exec: use cpu_reset_interrupt
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (32 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 33/73] ppc: use cpu_reset_interrupt Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 35/73] i386: " Emilio G. Cota
                   ` (40 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index 518064530b..5ed64e274b 100644
--- a/exec.c
+++ b/exec.c
@@ -779,7 +779,7 @@ static int cpu_common_post_load(void *opaque, int version_id)
 
     /* 0x01 was CPU_INTERRUPT_EXIT. This line can be removed when the
        version_id is increased. */
-    cpu->interrupt_request &= ~0x01;
+    cpu_reset_interrupt(cpu, 1);
     tlb_flush(cpu);
 
     /* loadvm has just updated the content of RAM, bypassing the
-- 
2.17.1


* [Qemu-devel] [PATCH v7 35/73] i386: use cpu_reset_interrupt
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (33 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 34/73] exec: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 36/73] s390x: " Emilio G. Cota
                   ` (39 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Paolo Bonzini

From: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/hax-all.c    |  4 ++--
 target/i386/hvf/x86hvf.c |  8 ++++----
 target/i386/kvm.c        | 14 +++++++-------
 target/i386/seg_helper.c | 13 ++++++-------
 target/i386/svm_helper.c |  2 +-
 target/i386/whpx-all.c   | 10 +++++-----
 6 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index 22951017cf..518c6ff103 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -424,7 +424,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
         irq = cpu_get_pic_interrupt(env);
         if (irq >= 0) {
             hax_inject_interrupt(env, irq);
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
         }
     }
 
@@ -474,7 +474,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
     cpu_halted_set(cpu, 0);
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 163bbed23f..e8b13ed534 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -402,7 +402,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
-            cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
             wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
         } else {
@@ -414,7 +414,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
         int line = cpu_get_pic_interrupt(&x86cpu->env);
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);
         if (line >= 0) {
             wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
@@ -440,7 +440,7 @@ int hvf_process_events(CPUState *cpu_state)
     }
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
     if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
@@ -453,7 +453,7 @@ int hvf_process_events(CPUState *cpu_state)
         do_cpu_sipi(cpu);
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_TPR);
         hvf_cpu_synchronize_state(cpu_state);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 81e3d65cc9..f2e187e40f 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2893,7 +2893,7 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
              */
             events.smi.pending = cs->interrupt_request & CPU_INTERRUPT_SMI;
             events.smi.latched_init = cs->interrupt_request & CPU_INTERRUPT_INIT;
-            cs->interrupt_request &= ~(CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
         } else {
             /* Keep these in cs->interrupt_request.  */
             events.smi.pending = 0;
@@ -3189,7 +3189,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
         if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
             qemu_mutex_lock_iothread();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             qemu_mutex_unlock_iothread();
             DPRINTF("injected NMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
@@ -3200,7 +3200,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
         }
         if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
             qemu_mutex_lock_iothread();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
             qemu_mutex_unlock_iothread();
             DPRINTF("injected SMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
@@ -3236,7 +3236,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
             (env->eflags & IF_MASK)) {
             int irq;
 
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
                 struct kvm_interrupt intr;
@@ -3307,7 +3307,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         /* We must not raise CPU_INTERRUPT_MCE if it's not supported. */
         assert(env->mcg_cap);
 
-        cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
 
         kvm_cpu_synchronize_state(cs);
 
@@ -3337,7 +3337,7 @@ int kvm_arch_process_async_events(CPUState *cs)
     }
 
     if (cs->interrupt_request & CPU_INTERRUPT_POLL) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
     if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
@@ -3350,7 +3350,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         do_cpu_sipi(cpu);
     }
     if (cs->interrupt_request & CPU_INTERRUPT_TPR) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
         kvm_cpu_synchronize_state(cs);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
diff --git a/target/i386/seg_helper.c b/target/i386/seg_helper.c
index 63e265cb38..b580403ef4 100644
--- a/target/i386/seg_helper.c
+++ b/target/i386/seg_helper.c
@@ -1332,7 +1332,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     switch (interrupt_request) {
 #if !defined(CONFIG_USER_ONLY)
     case CPU_INTERRUPT_POLL:
-        cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
         break;
 #endif
@@ -1341,23 +1341,22 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         break;
     case CPU_INTERRUPT_SMI:
         cpu_svm_check_intercept_param(env, SVM_EXIT_SMI, 0, 0);
-        cs->interrupt_request &= ~CPU_INTERRUPT_SMI;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_SMI);
         do_smm_enter(cpu);
         break;
     case CPU_INTERRUPT_NMI:
         cpu_svm_check_intercept_param(env, SVM_EXIT_NMI, 0, 0);
-        cs->interrupt_request &= ~CPU_INTERRUPT_NMI;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_NMI);
         env->hflags2 |= HF2_NMI_MASK;
         do_interrupt_x86_hardirq(env, EXCP02_NMI, 1);
         break;
     case CPU_INTERRUPT_MCE:
-        cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
         do_interrupt_x86_hardirq(env, EXCP12_MCHK, 0);
         break;
     case CPU_INTERRUPT_HARD:
         cpu_svm_check_intercept_param(env, SVM_EXIT_INTR, 0, 0);
-        cs->interrupt_request &= ~(CPU_INTERRUPT_HARD |
-                                   CPU_INTERRUPT_VIRQ);
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD | CPU_INTERRUPT_VIRQ);
         intno = cpu_get_pic_interrupt(env);
         qemu_log_mask(CPU_LOG_TB_IN_ASM,
                       "Servicing hardware INT=0x%02x\n", intno);
@@ -1372,7 +1371,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         qemu_log_mask(CPU_LOG_TB_IN_ASM,
                       "Servicing virtual hardware INT=0x%02x\n", intno);
         do_interrupt_x86_hardirq(env, intno, 1);
-        cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
         break;
 #endif
     }
diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index 9fd22a883b..a6d33e55d8 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -700,7 +700,7 @@ void do_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
     env->hflags &= ~HF_GUEST_MASK;
     env->intercept = 0;
     env->intercept_exceptions = 0;
-    cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+    cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
     env->tsc_offset = 0;
 
     env->gdt.base  = x86_ldq_phys(cs, env->vm_hsave + offsetof(struct vmcb,
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index b9c79ccd99..9673bdc219 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -728,14 +728,14 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     if (!vcpu->interruption_pending &&
         cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
         if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             vcpu->interruptable = false;
             new_int.InterruptionType = WHvX64PendingNmi;
             new_int.InterruptionPending = 1;
             new_int.InterruptionVector = 2;
         }
         if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
         }
     }
 
@@ -758,7 +758,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
         vcpu->interruptable && (env->eflags & IF_MASK)) {
         assert(!new_int.InterruptionPending);
         if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
                 new_int.InterruptionType = WHvX64PendingInterrupt;
@@ -850,7 +850,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
@@ -868,7 +868,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
         }
-- 
2.17.1


* [Qemu-devel] [PATCH v7 36/73] s390x: use cpu_reset_interrupt
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (34 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 35/73] i386: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 37/73] openrisc: " Emilio G. Cota
                   ` (38 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Paolo Bonzini,
	Cornelia Huck, David Hildenbrand, qemu-s390x

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/s390x/excp_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index 633d59346b..56bb107b48 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -454,7 +454,7 @@ try_deliver:
 
     /* we might still have pending interrupts, but not deliverable */
     if (!env->pending_int && !qemu_s390_flic_has_any(flic)) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
     }
 
     /* WAIT PSW during interrupt injection or STOP interrupt */
-- 
2.17.1


* [Qemu-devel] [PATCH v7 37/73] openrisc: use cpu_reset_interrupt
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (35 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 36/73] s390x: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 38/73] arm: convert to cpu_interrupt_request Emilio G. Cota
                   ` (37 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Paolo Bonzini, Stafford Horne

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/openrisc/sys_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/openrisc/sys_helper.c b/target/openrisc/sys_helper.c
index 0915e622e6..1f367de6f2 100644
--- a/target/openrisc/sys_helper.c
+++ b/target/openrisc/sys_helper.c
@@ -170,7 +170,7 @@ void HELPER(mtspr)(CPUOpenRISCState *env, target_ulong spr, target_ulong rb)
                 env->ttmr = (rb & ~TTMR_IP) | ip;
             } else {    /* Clear IP bit.  */
                 env->ttmr = rb & ~TTMR_IP;
-                cs->interrupt_request &= ~CPU_INTERRUPT_TIMER;
+                cpu_reset_interrupt(cs, CPU_INTERRUPT_TIMER);
             }
 
             cpu_openrisc_timer_update(cpu);
-- 
2.17.1


* [Qemu-devel] [PATCH v7 38/73] arm: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (36 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 37/73] openrisc: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 39/73] i386: " Emilio G. Cota
                   ` (36 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Peter Maydell, qemu-arm

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/arm/cpu.c     |  6 +++---
 target/arm/helper.c  | 16 +++++++---------
 target/arm/machine.c |  2 +-
 3 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 845730c00c..7037e22580 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -73,7 +73,7 @@ static bool arm_cpu_has_work(CPUState *cs)
     ARMCPU *cpu = ARM_CPU(cs);
 
     return (cpu->power_state != PSCI_OFF)
-        && cs->interrupt_request &
+        && cpu_interrupt_request(cs) &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
          | CPU_INTERRUPT_EXITTB);
@@ -484,7 +484,7 @@ void arm_cpu_update_virq(ARMCPU *cpu)
     bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
         (env->irq_line_state & CPU_INTERRUPT_VIRQ);
 
-    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
+    if (new_state != ((cpu_interrupt_request(cs) & CPU_INTERRUPT_VIRQ) != 0)) {
         if (new_state) {
             cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
         } else {
@@ -505,7 +505,7 @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
         (env->irq_line_state & CPU_INTERRUPT_VFIQ);
 
-    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
+    if (new_state != ((cpu_interrupt_request(cs) & CPU_INTERRUPT_VFIQ) != 0)) {
         if (new_state) {
             cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
         } else {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 1fa282a7fc..2affe7965c 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -1893,23 +1893,24 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
     CPUState *cs = ENV_GET_CPU(env);
     uint64_t hcr_el2 = arm_hcr_el2_eff(env);
     uint64_t ret = 0;
+    uint32_t interrupt_request = cpu_interrupt_request(cs);
 
     if (hcr_el2 & HCR_IMO) {
-        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+        if (interrupt_request & CPU_INTERRUPT_VIRQ) {
             ret |= CPSR_I;
         }
     } else {
-        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+        if (interrupt_request & CPU_INTERRUPT_HARD) {
             ret |= CPSR_I;
         }
     }
 
     if (hcr_el2 & HCR_FMO) {
-        if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+        if (interrupt_request & CPU_INTERRUPT_VFIQ) {
             ret |= CPSR_F;
         }
     } else {
-        if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
+        if (interrupt_request & CPU_INTERRUPT_FIQ) {
             ret |= CPSR_F;
         }
     }
@@ -9647,10 +9648,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
         return;
     }
 
-    /* Hooks may change global state so BQL should be held, also the
-     * BQL needs to be held for any modification of
-     * cs->interrupt_request.
-     */
+    /* Hooks may change global state so BQL should be held */
     g_assert(qemu_mutex_iothread_locked());
 
     arm_call_pre_el_change_hook(cpu);
@@ -9665,7 +9663,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
     arm_call_el_change_hook(cpu);
 
     if (!kvm_enabled()) {
-        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_EXITTB);
     }
 }
 #endif /* !CONFIG_USER_ONLY */
diff --git a/target/arm/machine.c b/target/arm/machine.c
index b292549614..4f2099ecde 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -693,7 +693,7 @@ static int cpu_post_load(void *opaque, int version_id)
     if (env->irq_line_state == UINT32_MAX) {
         CPUState *cs = CPU(cpu);
 
-        env->irq_line_state = cs->interrupt_request &
+        env->irq_line_state = cpu_interrupt_request(cs) &
             (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
              CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
     }
-- 
2.17.1


* [Qemu-devel] [PATCH v7 39/73] i386: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (37 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 38/73] arm: convert to cpu_interrupt_request Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 40/73] i386/kvm: " Emilio G. Cota
                   ` (35 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/cpu.c        | 2 +-
 target/i386/helper.c     | 4 ++--
 target/i386/svm_helper.c | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index fe23d684db..ec87f611ba 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5686,7 +5686,7 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
 
 static bool x86_cpu_has_work(CPUState *cs)
 {
-    return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
+    return x86_cpu_pending_interrupt(cs, cpu_interrupt_request(cs)) != 0;
 }
 
 static void x86_disas_set_info(CPUState *cs, disassemble_info *info)
diff --git a/target/i386/helper.c b/target/i386/helper.c
index a75278f954..9197fb4edc 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -1035,12 +1035,12 @@ void do_cpu_init(X86CPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUX86State *env = &cpu->env;
     CPUX86State *save = g_new(CPUX86State, 1);
-    int sipi = cs->interrupt_request & CPU_INTERRUPT_SIPI;
+    int sipi = cpu_interrupt_request(cs) & CPU_INTERRUPT_SIPI;
 
     *save = *env;
 
     cpu_reset(cs);
-    cs->interrupt_request = sipi;
+    cpu_interrupt_request_set(cs, sipi);
     memcpy(&env->start_init_save, &save->start_init_save,
            offsetof(CPUX86State, end_init_save) -
            offsetof(CPUX86State, start_init_save));
diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index a6d33e55d8..ebf3643ba7 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -316,7 +316,7 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
     if (int_ctl & V_IRQ_MASK) {
         CPUState *cs = CPU(x86_env_get_cpu(env));
 
-        cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_VIRQ);
     }
 
     /* maybe we need to inject an event */
@@ -674,7 +674,7 @@ void do_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
                        env->vm_vmcb + offsetof(struct vmcb, control.int_ctl));
     int_ctl &= ~(V_TPR_MASK | V_IRQ_MASK);
     int_ctl |= env->v_tpr & V_TPR_MASK;
-    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_VIRQ) {
         int_ctl |= V_IRQ_MASK;
     }
     x86_stl_phys(cs,
-- 
2.17.1


* [Qemu-devel] [PATCH v7 40/73] i386/kvm: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (38 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 39/73] i386: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 41/73] i386/hax-all: " Emilio G. Cota
                   ` (34 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée
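
With cpu_reset_interrupt() now handling its own locking (see the rework
earlier in this series), the explicit BQL lock/unlock pairs around it in
kvm_arch_pre_run() are dropped, and interrupt_request is read into a
local snapshot that is refreshed after the BQL is taken. A rough sketch
of that idiom, not taken from the patch (example_pre_run is a made-up
name):

    static void example_pre_run(CPUState *cpu)
    {
        uint32_t interrupt_request = cpu_interrupt_request(cpu);

        if (interrupt_request & CPU_INTERRUPT_NMI) {
            /* no explicit BQL here; the helper handles its own locking */
            cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
        }

        qemu_mutex_lock_iothread();
        /* bits may have changed since the first read; refresh the snapshot */
        interrupt_request = cpu_interrupt_request(cpu);
        if (interrupt_request & CPU_INTERRUPT_HARD) {
            /* ... inject the interrupt while holding the BQL ... */
        }
        qemu_mutex_unlock_iothread();
    }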

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/kvm.c | 58 ++++++++++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 23 deletions(-)

diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index f2e187e40f..44a9e3d243 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2888,11 +2888,14 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
         events.smi.smm = !!(env->hflags & HF_SMM_MASK);
         events.smi.smm_inside_nmi = !!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK);
         if (kvm_irqchip_in_kernel()) {
+            uint32_t interrupt_request;
+
             /* As soon as these are moved to the kernel, remove them
              * from cs->interrupt_request.
              */
-            events.smi.pending = cs->interrupt_request & CPU_INTERRUPT_SMI;
-            events.smi.latched_init = cs->interrupt_request & CPU_INTERRUPT_INIT;
+            interrupt_request = cpu_interrupt_request(cs);
+            events.smi.pending = interrupt_request & CPU_INTERRUPT_SMI;
+            events.smi.latched_init = interrupt_request & CPU_INTERRUPT_INIT;
             cpu_reset_interrupt(cs, CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
         } else {
             /* Keep these in cs->interrupt_request.  */
@@ -3183,14 +3186,14 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
+    uint32_t interrupt_request;
     int ret;
 
+    interrupt_request = cpu_interrupt_request(cpu);
     /* Inject NMI */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
-        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
-            qemu_mutex_lock_iothread();
+    if (interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
-            qemu_mutex_unlock_iothread();
             DPRINTF("injected NMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
             if (ret < 0) {
@@ -3198,10 +3201,8 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
                         strerror(-ret));
             }
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
-            qemu_mutex_lock_iothread();
+        if (interrupt_request & CPU_INTERRUPT_SMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
-            qemu_mutex_unlock_iothread();
             DPRINTF("injected SMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
             if (ret < 0) {
@@ -3215,16 +3216,22 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
         qemu_mutex_lock_iothread();
     }
 
+    /*
+     * We might have cleared some bits in cpu->interrupt_request since reading
+     * it; read it again.
+     */
+    interrupt_request = cpu_interrupt_request(cpu);
+
     /* Force the VCPU out of its inner loop to process any INIT requests
      * or (for userspace APIC, but it is cheap to combine the checks here)
      * pending TPR access reports.
      */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
-        if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if (interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        if ((interrupt_request & CPU_INTERRUPT_INIT) &&
             !(env->hflags & HF_SMM_MASK)) {
             cpu->exit_request = 1;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        if (interrupt_request & CPU_INTERRUPT_TPR) {
             cpu->exit_request = 1;
         }
     }
@@ -3232,7 +3239,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     if (!kvm_pic_in_kernel()) {
         /* Try to inject an interrupt if the guest can accept it */
         if (run->ready_for_interrupt_injection &&
-            (cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+            (interrupt_request & CPU_INTERRUPT_HARD) &&
             (env->eflags & IF_MASK)) {
             int irq;
 
@@ -3256,7 +3263,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
          * interrupt, request an interrupt window exit.  This will
          * cause a return to userspace as soon as the guest is ready to
          * receive interrupts. */
-        if ((cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
             run->request_interrupt_window = 1;
         } else {
             run->request_interrupt_window = 0;
@@ -3302,8 +3309,9 @@ int kvm_arch_process_async_events(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
     CPUX86State *env = &cpu->env;
+    uint32_t interrupt_request;
 
-    if (cs->interrupt_request & CPU_INTERRUPT_MCE) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_MCE) {
         /* We must not raise CPU_INTERRUPT_MCE if it's not supported. */
         assert(env->mcg_cap);
 
@@ -3326,7 +3334,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         }
     }
 
-    if ((cs->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
         kvm_cpu_synchronize_state(cs);
         do_cpu_init(cpu);
@@ -3336,20 +3344,21 @@ int kvm_arch_process_async_events(CPUState *cs)
         return 0;
     }
 
-    if (cs->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
-    if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    interrupt_request = cpu_interrupt_request(cs);
+    if (((interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
-        (cs->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cs, 0);
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (interrupt_request & CPU_INTERRUPT_SIPI) {
         kvm_cpu_synchronize_state(cs);
         do_cpu_sipi(cpu);
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
         kvm_cpu_synchronize_state(cs);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
@@ -3363,10 +3372,13 @@ static int kvm_handle_halt(X86CPU *cpu)
 {
     CPUState *cs = CPU(cpu);
     CPUX86State *env = &cpu->env;
+    uint32_t interrupt_request;
+
+    interrupt_request = cpu_interrupt_request(cs);
 
-    if (!((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if (!((interrupt_request & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
-        !(cs->interrupt_request & CPU_INTERRUPT_NMI)) {
+        !(interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cs, 1);
         return EXCP_HLT;
     }
-- 
2.17.1


* [Qemu-devel] [PATCH v7 41/73] i386/hax-all: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (39 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 40/73] i386/kvm: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 42/73] i386/whpx-all: " Emilio G. Cota
                   ` (33 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/hax-all.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index 518c6ff103..18da1808c6 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -284,7 +284,7 @@ int hax_vm_destroy(struct hax_vm *vm)
 
 static void hax_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
@@ -418,7 +418,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
      * Unlike KVM, HAX kernel check for the eflags, instead of qemu
      */
     if (ht->ready_for_interrupt_injection &&
-        (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        (cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
         int irq;
 
         irq = cpu_get_pic_interrupt(env);
@@ -432,7 +432,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
      * interrupt, request an interrupt window exit.  This will
      * cause a return to userspace as soon as the guest is ready to
      * receive interrupts. */
-    if ((cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+    if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
         ht->request_interrupt_window = 1;
     } else {
         ht->request_interrupt_window = 0;
@@ -473,19 +473,19 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
 
     cpu_halted_set(cpu, 0);
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) {
         DPRINTF("\nhax_vcpu_hax_exec: handling INIT for %d\n",
                 cpu->cpu_index);
         do_cpu_init(x86_cpu);
         hax_vcpu_sync_state(env, 1);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_SIPI) {
         DPRINTF("hax_vcpu_hax_exec: handling SIPI for %d\n",
                 cpu->cpu_index);
         hax_vcpu_sync_state(env, 0);
@@ -544,13 +544,17 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
             ret = -1;
             break;
         case HAX_EXIT_HLT:
-            if (!(cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
-                !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
-                /* hlt instruction with interrupt disabled is shutdown */
-                env->eflags |= IF_MASK;
-                cpu_halted_set(cpu, 1);
-                cpu->exception_index = EXCP_HLT;
-                ret = 1;
+            {
+                uint32_t interrupt_request = cpu_interrupt_request(cpu);
+
+                if (!(interrupt_request & CPU_INTERRUPT_HARD) &&
+                    !(interrupt_request & CPU_INTERRUPT_NMI)) {
+                    /* hlt instruction with interrupt disabled is shutdown */
+                    env->eflags |= IF_MASK;
+                    cpu_halted_set(cpu, 1);
+                    cpu->exception_index = EXCP_HLT;
+                    ret = 1;
+                }
             }
             break;
         /* these situations will continue to hax module */
-- 
2.17.1


* [Qemu-devel] [PATCH v7 42/73] i386/whpx-all: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (40 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 41/73] i386/hax-all: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 43/73] i386/hvf: convert to cpu_interrupt_request Emilio G. Cota
                   ` (32 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/whpx-all.c | 41 ++++++++++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index 9673bdc219..0d8cfa3a19 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -690,12 +690,14 @@ static int whpx_handle_portio(CPUState *cpu,
 static int whpx_handle_halt(CPUState *cpu)
 {
     struct CPUX86State *env = (CPUArchState *)(cpu->env_ptr);
+    uint32_t interrupt_request;
     int ret = 0;
 
     qemu_mutex_lock_iothread();
-    if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+    interrupt_request = cpu_interrupt_request(cpu);
+    if (!((interrupt_request & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
-        !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        !(interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu->exception_index = EXCP_HLT;
         cpu_halted_set(cpu, true);
         ret = 1;
@@ -713,6 +715,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     struct CPUX86State *env = (CPUArchState *)(cpu->env_ptr);
     X86CPU *x86_cpu = X86_CPU(cpu);
     int irq;
+    uint32_t interrupt_request;
     uint8_t tpr;
     WHV_X64_PENDING_INTERRUPTION_REGISTER new_int;
     UINT32 reg_count = 0;
@@ -724,17 +727,19 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
     qemu_mutex_lock_iothread();
 
+    interrupt_request = cpu_interrupt_request(cpu);
+
     /* Inject NMI */
     if (!vcpu->interruption_pending &&
-        cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
-        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
+        interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             vcpu->interruptable = false;
             new_int.InterruptionType = WHvX64PendingNmi;
             new_int.InterruptionPending = 1;
             new_int.InterruptionVector = 2;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
+        if (interrupt_request & CPU_INTERRUPT_SMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
         }
     }
@@ -743,12 +748,12 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
      * Force the VCPU out of its inner loop to process any INIT requests or
      * commit pending TPR access.
      */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
-        if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if (interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        if ((interrupt_request & CPU_INTERRUPT_INIT) &&
             !(env->hflags & HF_SMM_MASK)) {
             cpu->exit_request = 1;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        if (interrupt_request & CPU_INTERRUPT_TPR) {
             cpu->exit_request = 1;
         }
     }
@@ -757,7 +762,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     if (!vcpu->interruption_pending &&
         vcpu->interruptable && (env->eflags & IF_MASK)) {
         assert(!new_int.InterruptionPending);
-        if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        if (interrupt_request & CPU_INTERRUPT_HARD) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
@@ -787,7 +792,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
     /* Update the state of the interrupt delivery notification */
     if (!vcpu->window_registered &&
-        cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) {
         reg_values[reg_count].DeliverabilityNotifications.InterruptNotification
             = 1;
         vcpu->window_registered = 1;
@@ -840,8 +845,9 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     struct CPUX86State *env = (CPUArchState *)(cpu->env_ptr);
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    uint32_t interrupt_request;
 
-    if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
 
         do_cpu_init(x86_cpu);
@@ -849,25 +855,26 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
         vcpu->interruptable = true;
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
-    if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+    interrupt_request = cpu_interrupt_request(cpu);
+    if (((interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
-        (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cpu, false);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (interrupt_request & CPU_INTERRUPT_SIPI) {
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
         }
         do_cpu_sipi(x86_cpu);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
@@ -1350,7 +1357,7 @@ static void whpx_memory_init(void)
 
 static void whpx_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
-- 
2.17.1


* [Qemu-devel] [PATCH v7 43/73] i386/hvf: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (41 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 42/73] i386/whpx-all: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 44/73] ppc: convert to cpu_interrupt_request Emilio G. Cota
                   ` (31 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/hvf/hvf.c    |  8 +++++---
 target/i386/hvf/x86hvf.c | 26 +++++++++++++++-----------
 2 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 7dc1d721ff..3ecf91f682 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -249,7 +249,7 @@ void update_apic_tpr(CPUState *cpu)
 
 static void hvf_handle_interrupt(CPUState * cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
     }
@@ -701,10 +701,12 @@ int hvf_vcpu_exec(CPUState *cpu)
         ret = 0;
         switch (exit_reason) {
         case EXIT_REASON_HLT: {
+            uint32_t interrupt_request = cpu_interrupt_request(cpu);
+
             macvm_set_rip(cpu, rip + ins_len);
-            if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+            if (!((interrupt_request & CPU_INTERRUPT_HARD) &&
                 (EFLAGS(env) & IF_MASK))
-                && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
+                && !(interrupt_request & CPU_INTERRUPT_NMI) &&
                 !(idtvec_info & VMCS_IDT_VEC_VALID)) {
                 cpu_halted_set(cpu, 1);
                 ret = EXCP_HLT;
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index e8b13ed534..91feafeedc 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -358,6 +358,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
 
     uint8_t vector;
     uint64_t intr_type;
+    uint32_t interrupt_request;
     bool have_event = true;
     if (env->interrupt_injected != -1) {
         vector = env->interrupt_injected;
@@ -400,7 +401,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         };
     }
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_NMI) {
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
             cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
@@ -411,7 +412,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
     }
 
     if (!(env->hflags & HF_INHIBIT_IRQ_MASK) &&
-        (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
+        (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
         int line = cpu_get_pic_interrupt(&x86cpu->env);
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);
@@ -420,39 +421,42 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
         }
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) {
         vmx_set_int_window_exiting(cpu_state);
     }
-    return (cpu_state->interrupt_request
-            & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR));
+    return cpu_interrupt_request(cpu_state) & (CPU_INTERRUPT_INIT |
+                                               CPU_INTERRUPT_TPR);
 }
 
 int hvf_process_events(CPUState *cpu_state)
 {
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
+    uint32_t interrupt_request;
 
     EFLAGS(env) = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_INIT) {
         hvf_cpu_synchronize_state(cpu_state);
         do_cpu_init(cpu);
     }
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
-    if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
+
+    interrupt_request = cpu_interrupt_request(cpu_state);
+    if (((interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK)) ||
-        (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cpu_state, 0);
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (interrupt_request & CPU_INTERRUPT_SIPI) {
         hvf_cpu_synchronize_state(cpu_state);
         do_cpu_sipi(cpu);
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_TPR);
         hvf_cpu_synchronize_state(cpu_state);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 44/73] ppc: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (42 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 43/73] i386/hvf: convert to cpu_interrupt_request Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 45/73] sh4: " Emilio G. Cota
                   ` (30 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, David Gibson, qemu-ppc

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/ppc/ppc.c                    |  2 +-
 target/ppc/kvm.c                |  4 ++--
 target/ppc/translate_init.inc.c | 14 +++++++-------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index 45e212390a..100badefe4 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -89,7 +89,7 @@ void ppc_set_irq(PowerPCCPU *cpu, int n_IRQ, int level)
 
     LOG_IRQ("%s: %p n_IRQ %d level %d => pending %08" PRIx32
                 "req %08x\n", __func__, env, n_IRQ, level,
-                env->pending_interrupts, CPU(cpu)->interrupt_request);
+                env->pending_interrupts, cpu_interrupt_request(CPU(cpu)));
 
     if (locked) {
         qemu_mutex_unlock_iothread();
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index faa757c417..16f09c504d 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -1339,7 +1339,7 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
      * interrupt, reset, etc) in PPC-specific env->irq_input_state. */
     if (!cap_interrupt_level &&
         run->ready_for_interrupt_injection &&
-        (cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+        (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
         (env->irq_input_state & (1<<PPC_INPUT_INT)))
     {
         /* For now KVM disregards the 'irq' argument. However, in the
@@ -1381,7 +1381,7 @@ static int kvmppc_handle_halt(PowerPCCPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
 
-    if (!(cs->interrupt_request & CPU_INTERRUPT_HARD) && (msr_ee)) {
+    if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) && (msr_ee)) {
         cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index b2cad43e2a..f387404913 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8469,7 +8469,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         if ((env->pending_interrupts & (1u << PPC_INTERRUPT_EXT)) &&
@@ -8493,7 +8493,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -8623,7 +8623,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         if ((env->pending_interrupts & (1u << PPC_INTERRUPT_EXT)) &&
@@ -8655,7 +8655,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -8817,7 +8817,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     if (cpu_halted(cs)) {
         uint64_t psscr = env->spr[SPR_PSSCR];
 
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
 
@@ -8863,7 +8863,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -10332,7 +10332,7 @@ static bool ppc_cpu_has_work(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+    return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
 }
 
 /* CPUClass::reset() */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 45/73] sh4: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (43 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 44/73] ppc: convert to cpu_interrupt_request Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 46/73] cris: " Emilio G. Cota
                   ` (29 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Aurelien Jarno

Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sh4/cpu.c    | 2 +-
 target/sh4/helper.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index b9f393b7c7..58ea212f53 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -45,7 +45,7 @@ static void superh_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
 
 static bool superh_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 /* CPUClass::reset() */
diff --git a/target/sh4/helper.c b/target/sh4/helper.c
index 2ff0cf4060..8463da5bc8 100644
--- a/target/sh4/helper.c
+++ b/target/sh4/helper.c
@@ -83,7 +83,7 @@ void superh_cpu_do_interrupt(CPUState *cs)
 {
     SuperHCPU *cpu = SUPERH_CPU(cs);
     CPUSH4State *env = &cpu->env;
-    int do_irq = cs->interrupt_request & CPU_INTERRUPT_HARD;
+    int do_irq = cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
     int do_exp, irq_vector = cs->exception_index;
 
     /* prioritize exceptions over interrupts */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 46/73] cris: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (44 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 45/73] sh4: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 47/73] hppa: " Emilio G. Cota
                   ` (28 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/cris/cpu.c    | 2 +-
 target/cris/helper.c | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/cris/cpu.c b/target/cris/cpu.c
index a23aba2688..3cdba581e6 100644
--- a/target/cris/cpu.c
+++ b/target/cris/cpu.c
@@ -37,7 +37,7 @@ static void cris_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool cris_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 /* CPUClass::reset() */
diff --git a/target/cris/helper.c b/target/cris/helper.c
index b2dbb2075c..5c453d5221 100644
--- a/target/cris/helper.c
+++ b/target/cris/helper.c
@@ -116,7 +116,7 @@ int cris_cpu_handle_mmu_fault(CPUState *cs, vaddr address, int size, int rw,
     if (r > 0) {
         qemu_log_mask(CPU_LOG_MMU,
                 "%s returns %d irqreq=%x addr=%" VADDR_PRIx " phy=%x vec=%x"
-                " pc=%x\n", __func__, r, cs->interrupt_request, address,
+                " pc=%x\n", __func__, r, cpu_interrupt_request(cs), address,
                 res.phy, res.bf_vec, env->pc);
     }
     return r;
@@ -130,7 +130,7 @@ void crisv10_cpu_do_interrupt(CPUState *cs)
 
     D_LOG("exception index=%d interrupt_req=%d\n",
           cs->exception_index,
-          cs->interrupt_request);
+          cpu_interrupt_request(cs));
 
     if (env->dslot) {
         /* CRISv10 never takes interrupts while in a delay-slot.  */
@@ -192,7 +192,7 @@ void cris_cpu_do_interrupt(CPUState *cs)
 
     D_LOG("exception index=%d interrupt_req=%d\n",
           cs->exception_index,
-          cs->interrupt_request);
+          cpu_interrupt_request(cs));
 
     switch (cs->exception_index) {
     case EXCP_BREAK:
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 47/73] hppa: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (45 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 46/73] cris: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 48/73] lm32: " Emilio G. Cota
                   ` (27 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/hppa/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 00bf444620..1ab4e62850 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -60,7 +60,7 @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
 
 static bool hppa_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void hppa_cpu_disas_set_info(CPUState *cs, disassemble_info *info)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 48/73] lm32: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (46 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 47/73] hppa: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 49/73] m68k: " Emilio G. Cota
                   ` (26 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Michael Walle

Cc: Michael Walle <michael@walle.cc>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/lm32/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/lm32/cpu.c b/target/lm32/cpu.c
index b7499cb627..1508bb6199 100644
--- a/target/lm32/cpu.c
+++ b/target/lm32/cpu.c
@@ -101,7 +101,7 @@ static void lm32_cpu_init_cfg_reg(LM32CPU *cpu)
 
 static bool lm32_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 /* CPUClass::reset() */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 49/73] m68k: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (47 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 48/73] lm32: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 50/73] mips: " Emilio G. Cota
                   ` (25 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index 582e3a73b3..99a7eb4340 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -34,7 +34,7 @@ static void m68k_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool m68k_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void m68k_set_feature(CPUM68KState *env, int feature)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 50/73] mips: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (48 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 49/73] m68k: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 51/73] nios: " Emilio G. Cota
                   ` (24 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Aurelien Jarno,
	Aleksandar Markovic, James Hogan

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: James Hogan <jhogan@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/mips/cpu.c | 7 ++++---
 target/mips/kvm.c | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index e217fb3e36..fdae8cf440 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -56,11 +56,12 @@ static bool mips_cpu_has_work(CPUState *cs)
     MIPSCPU *cpu = MIPS_CPU(cs);
     CPUMIPSState *env = &cpu->env;
     bool has_work = false;
+    uint32_t interrupt_request = cpu_interrupt_request(cs);
 
     /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
        interrupts wake-up the CPU, however most of the implementations only
        check for interrupts that can be taken. */
-    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if ((interrupt_request & CPU_INTERRUPT_HARD) &&
         cpu_mips_hw_interrupts_pending(env)) {
         if (cpu_mips_hw_interrupts_enabled(env) ||
             (env->insn_flags & ISA_MIPS32R6)) {
@@ -72,7 +73,7 @@ static bool mips_cpu_has_work(CPUState *cs)
     if (env->CP0_Config3 & (1 << CP0C3_MT)) {
         /* The QEMU model will issue an _WAKE request whenever the CPUs
            should be woken up.  */
-        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
+        if (interrupt_request & CPU_INTERRUPT_WAKE) {
             has_work = true;
         }
 
@@ -82,7 +83,7 @@ static bool mips_cpu_has_work(CPUState *cs)
     }
     /* MIPS Release 6 has the ability to halt the CPU.  */
     if (env->CP0_Config5 & (1 << CP0C5_VP)) {
-        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
+        if (interrupt_request & CPU_INTERRUPT_WAKE) {
             has_work = true;
         }
         if (!mips_vp_active(env)) {
diff --git a/target/mips/kvm.c b/target/mips/kvm.c
index 0b177a7577..568c3d8f4a 100644
--- a/target/mips/kvm.c
+++ b/target/mips/kvm.c
@@ -135,7 +135,7 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 
     qemu_mutex_lock_iothread();
 
-    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
             cpu_mips_io_interrupts_pending(cpu)) {
         intr.cpu = -1;
         intr.irq = 2;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 51/73] nios: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (49 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 50/73] mips: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 52/73] s390x: " Emilio G. Cota
                   ` (23 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Chris Wulff, Marek Vasut

Cc: Chris Wulff <crwulff@gmail.com>
Cc: Marek Vasut <marex@denx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/nios2/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/nios2/cpu.c b/target/nios2/cpu.c
index fbfaa2ce26..49a75414d3 100644
--- a/target/nios2/cpu.c
+++ b/target/nios2/cpu.c
@@ -36,7 +36,7 @@ static void nios2_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool nios2_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 /* CPUClass::reset() */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 52/73] s390x: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (50 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 51/73] nios: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 53/73] alpha: " Emilio G. Cota
                   ` (22 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Cornelia Huck,
	Christian Borntraeger, David Hildenbrand, qemu-s390x

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/intc/s390_flic.c | 2 +-
 target/s390x/cpu.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index bfb5cf1d07..d944824e67 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -189,7 +189,7 @@ static void qemu_s390_flic_notify(uint32_t type)
     CPU_FOREACH(cs) {
         S390CPU *cpu = S390_CPU(cs);
 
-        cs->interrupt_request |= CPU_INTERRUPT_HARD;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_HARD);
 
         /* ignore CPUs that are not sleeping */
         if (s390_cpu_get_state(cpu) != S390_CPU_STATE_OPERATING &&
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 4e17df5c39..5169adb6a5 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -62,7 +62,7 @@ static bool s390_cpu_has_work(CPUState *cs)
         return false;
     }
 
-    if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+    if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
         return false;
     }
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 53/73] alpha: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (51 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 52/73] s390x: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 54/73] moxie: " Emilio G. Cota
                   ` (21 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/alpha/cpu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index 1fd95d6c0f..cebd459251 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -42,10 +42,10 @@ static bool alpha_cpu_has_work(CPUState *cs)
        assume that if a CPU really wants to stay asleep, it will mask
        interrupts at the chipset level, which will prevent these bits
        from being set in the first place.  */
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD
-                                    | CPU_INTERRUPT_TIMER
-                                    | CPU_INTERRUPT_SMP
-                                    | CPU_INTERRUPT_MCHK);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD
+                                        | CPU_INTERRUPT_TIMER
+                                        | CPU_INTERRUPT_SMP
+                                        | CPU_INTERRUPT_MCHK);
 }
 
 static void alpha_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 54/73] moxie: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (52 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 53/73] alpha: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 55/73] sparc: " Emilio G. Cota
                   ` (20 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Anthony Green

Cc: Anthony Green <green@moxielogic.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/moxie/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/moxie/cpu.c b/target/moxie/cpu.c
index 46434e65ba..72206a03ee 100644
--- a/target/moxie/cpu.c
+++ b/target/moxie/cpu.c
@@ -33,7 +33,7 @@ static void moxie_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool moxie_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void moxie_cpu_reset(CPUState *s)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 55/73] sparc: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (53 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 54/73] moxie: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 56/73] openrisc: " Emilio G. Cota
                   ` (19 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Mark Cave-Ayland, Artyom Tarasenko

Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/sparc64/sparc64.c | 4 ++--
 target/sparc/cpu.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/sparc64/sparc64.c b/hw/sparc64/sparc64.c
index 372bbd4f5b..58faeb111a 100644
--- a/hw/sparc64/sparc64.c
+++ b/hw/sparc64/sparc64.c
@@ -56,7 +56,7 @@ void cpu_check_irqs(CPUSPARCState *env)
     /* The bit corresponding to psrpil is (1<< psrpil), the next bit
        is (2 << psrpil). */
     if (pil < (2 << env->psrpil)) {
-        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
             trace_sparc64_cpu_check_irqs_reset_irq(env->interrupt_index);
             env->interrupt_index = 0;
             cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
@@ -87,7 +87,7 @@ void cpu_check_irqs(CPUSPARCState *env)
                 break;
             }
         }
-    } else if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+    } else if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
         trace_sparc64_cpu_check_irqs_disabled(pil, env->pil_in, env->softint,
                                               env->interrupt_index);
         env->interrupt_index = 0;
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 4a4445bdf5..933fd8e954 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -708,7 +708,7 @@ static bool sparc_cpu_has_work(CPUState *cs)
     SPARCCPU *cpu = SPARC_CPU(cs);
     CPUSPARCState *env = &cpu->env;
 
-    return (cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    return (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
            cpu_interrupts_enabled(env);
 }
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 56/73] openrisc: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (54 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 55/73] sparc: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 57/73] unicore32: " Emilio G. Cota
                   ` (18 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Stafford Horne

Cc: Stafford Horne <shorne@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/openrisc/cputimer.c | 2 +-
 target/openrisc/cpu.c  | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/openrisc/cputimer.c b/hw/openrisc/cputimer.c
index 850f88761c..739404e4f5 100644
--- a/hw/openrisc/cputimer.c
+++ b/hw/openrisc/cputimer.c
@@ -102,7 +102,7 @@ static void openrisc_timer_cb(void *opaque)
         CPUState *cs = CPU(cpu);
 
         cpu->env.ttmr |= TTMR_IP;
-        cs->interrupt_request |= CPU_INTERRUPT_TIMER;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_TIMER);
     }
 
     switch (cpu->env.ttmr & TTMR_M) {
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index 541b2a66c7..a01737dfda 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -32,8 +32,8 @@ static void openrisc_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool openrisc_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD |
-                                    CPU_INTERRUPT_TIMER);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD |
+                                        CPU_INTERRUPT_TIMER);
 }
 
 static void openrisc_disas_set_info(CPUState *cpu, disassemble_info *info)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 57/73] unicore32: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (55 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 56/73] openrisc: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 58/73] microblaze: " Emilio G. Cota
                   ` (17 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Guan Xuetao

Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/unicore32/cpu.c     | 2 +-
 target/unicore32/softmmu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/unicore32/cpu.c b/target/unicore32/cpu.c
index 2b49d1ca40..65c5334551 100644
--- a/target/unicore32/cpu.c
+++ b/target/unicore32/cpu.c
@@ -29,7 +29,7 @@ static void uc32_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool uc32_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request &
+    return cpu_interrupt_request(cs) &
         (CPU_INTERRUPT_HARD | CPU_INTERRUPT_EXITTB);
 }
 
diff --git a/target/unicore32/softmmu.c b/target/unicore32/softmmu.c
index 00c7e0d028..f58e2361e0 100644
--- a/target/unicore32/softmmu.c
+++ b/target/unicore32/softmmu.c
@@ -119,7 +119,7 @@ void uc32_cpu_do_interrupt(CPUState *cs)
     /* The PC already points to the proper instruction.  */
     env->regs[30] = env->regs[31];
     env->regs[31] = addr;
-    cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
+    cpu_interrupt_request_or(cs, CPU_INTERRUPT_EXITTB);
 }
 
 static int get_phys_addr_ucv2(CPUUniCore32State *env, uint32_t address,
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 58/73] microblaze: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (56 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 57/73] unicore32: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 59/73] accel/tcg: " Emilio G. Cota
                   ` (16 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/microblaze/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index 5596cd5485..0cdd7fe917 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -84,7 +84,7 @@ static void mb_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool mb_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 #ifndef CONFIG_USER_ONLY
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 59/73] accel/tcg: convert to cpu_interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (57 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 58/73] microblaze: " Emilio G. Cota
@ 2019-03-04 18:17 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 60/73] cpu: convert to interrupt_request Emilio G. Cota
                   ` (15 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cpu-exec.c      | 15 ++++++++-------
 accel/tcg/tcg-all.c       | 12 +++++++++---
 accel/tcg/translate-all.c |  2 +-
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 8af0b5de07..ddd38fe9ac 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -431,7 +431,7 @@ static inline bool cpu_handle_halt_locked(CPUState *cpu)
 
     if (cpu_halted(cpu)) {
 #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
-        if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
 
@@ -544,16 +544,17 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
      */
     atomic_mb_set(&cpu->icount_decr.u16.high, 0);
 
-    if (unlikely(atomic_read(&cpu->interrupt_request))) {
+    if (unlikely(cpu_interrupt_request(cpu))) {
         int interrupt_request;
+
         qemu_mutex_lock_iothread();
-        interrupt_request = cpu->interrupt_request;
+        interrupt_request = cpu_interrupt_request(cpu);
         if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
             /* Mask out external interrupts for this step. */
             interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
         }
         if (interrupt_request & CPU_INTERRUPT_DEBUG) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
             cpu->exception_index = EXCP_DEBUG;
             qemu_mutex_unlock_iothread();
             return true;
@@ -562,7 +563,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             /* Do nothing */
         } else if (interrupt_request & CPU_INTERRUPT_HALT) {
             replay_interrupt();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
             cpu_halted_set(cpu, 1);
             cpu->exception_index = EXCP_HLT;
             qemu_mutex_unlock_iothread();
@@ -599,10 +600,10 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             }
             /* The target hook may have updated the 'cpu->interrupt_request';
              * reload the 'interrupt_request' value */
-            interrupt_request = cpu->interrupt_request;
+            interrupt_request = cpu_interrupt_request(cpu);
         }
         if (interrupt_request & CPU_INTERRUPT_EXITTB) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_EXITTB);
             /* ensure that no TB jump will be modified as
                the program flow was changed */
             *last_tb = NULL;
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
index 3d25bdcc17..4e2fe70350 100644
--- a/accel/tcg/tcg-all.c
+++ b/accel/tcg/tcg-all.c
@@ -39,10 +39,16 @@ unsigned long tcg_tb_size;
 static void tcg_handle_interrupt(CPUState *cpu, int mask)
 {
     int old_mask;
-    g_assert(qemu_mutex_iothread_locked());
 
-    old_mask = cpu->interrupt_request;
-    cpu->interrupt_request |= mask;
+    if (!cpu_mutex_locked(cpu)) {
+        cpu_mutex_lock(cpu);
+        old_mask = cpu_interrupt_request(cpu);
+        cpu_interrupt_request_or(cpu, mask);
+        cpu_mutex_unlock(cpu);
+    } else {
+        old_mask = cpu_interrupt_request(cpu);
+        cpu_interrupt_request_or(cpu, mask);
+    }
 
     /*
      * If called from iothread context, wake the target cpu in
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 8f593b926f..62585fd95c 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -2343,7 +2343,7 @@ void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf)
 void cpu_interrupt(CPUState *cpu, int mask)
 {
     g_assert(qemu_mutex_iothread_locked());
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
     atomic_set(&cpu->icount_decr.u16.high, -1);
 }
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 60/73] cpu: convert to interrupt_request
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (58 preceding siblings ...)
  2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 59/73] accel/tcg: " Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 61/73] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
                   ` (14 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

This finishes the conversion to the cpu_interrupt_request() accessors.

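For reference, a minimal sketch of the accessor pattern the converted call
sites rely on. The helper names (cpu_interrupt_request, cpu_interrupt_request_or,
cpu_mutex_*) come from earlier patches in this series; the bodies below are
illustrative only, not copied from the actual implementation:

    /*
     * Illustrative sketch, not the upstream implementation: accesses to
     * cpu->interrupt_request are serialized by cpu->lock, and the helpers
     * tolerate the caller already holding the lock.
     */
    static inline uint32_t cpu_interrupt_request(CPUState *cpu)
    {
        uint32_t ret;

        if (cpu_mutex_locked(cpu)) {
            return cpu->interrupt_request;
        }
        cpu_mutex_lock(cpu);
        ret = cpu->interrupt_request;
        cpu_mutex_unlock(cpu);
        return ret;
    }

    static inline void cpu_interrupt_request_or(CPUState *cpu, uint32_t mask)
    {
        if (cpu_mutex_locked(cpu)) {
            cpu->interrupt_request |= mask;
            return;
        }
        cpu_mutex_lock(cpu);
        cpu->interrupt_request |= mask;
        cpu_mutex_unlock(cpu);
    }

    /* cpu_interrupt_request_set() follows the same pattern with '='. */
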
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 qom/cpu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/qom/cpu.c b/qom/cpu.c
index 00add81a7f..f2695be9b2 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -275,7 +275,7 @@ static void cpu_common_reset(CPUState *cpu)
         log_cpu_state(cpu, cc->reset_dump_flags);
     }
 
-    cpu->interrupt_request = 0;
+    cpu_interrupt_request_set(cpu, 0);
     cpu_halted_set(cpu, 0);
     cpu->mem_io_pc = 0;
     cpu->mem_io_vaddr = 0;
@@ -412,7 +412,7 @@ static vaddr cpu_adjust_watchpoint_address(CPUState *cpu, vaddr addr, int len)
 
 static void generic_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 61/73] cpu: call .cpu_has_work with the CPU lock held
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (59 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 60/73] cpu: convert to interrupt_request Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 62/73] cpu: introduce cpu_has_work_with_iothread_lock Emilio G. Cota
                   ` (13 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index c92a8c544a..651e1ab4d8 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -85,7 +85,8 @@ struct TranslationBlock;
  * @parse_features: Callback to parse command line arguments.
  * @reset: Callback to reset the #CPUState to its initial state.
  * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
- * @has_work: Callback for checking if there is work to do.
+ * @has_work: Callback for checking if there is work to do. Called with the
+ * CPU lock held.
  * @do_interrupt: Callback for interrupt handling.
  * @do_unassigned_access: Callback for unassigned access handling.
  * (this is deprecated: new targets should use do_transaction_failed instead)
@@ -808,9 +809,16 @@ const char *parse_cpu_model(const char *cpu_model);
 static inline bool cpu_has_work(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
+    bool ret;
 
     g_assert(cc->has_work);
-    return cc->has_work(cpu);
+    if (cpu_mutex_locked(cpu)) {
+        return cc->has_work(cpu);
+    }
+    cpu_mutex_lock(cpu);
+    ret = cc->has_work(cpu);
+    cpu_mutex_unlock(cpu);
+    return ret;
 }
 
 /**
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 62/73] cpu: introduce cpu_has_work_with_iothread_lock
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (60 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 61/73] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 63/73] ppc: convert to cpu_has_work_with_iothread_lock Emilio G. Cota
                   ` (12 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

The new hook will gain users later in this series.

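As a preview of the intended usage (the target conversions later in the
series follow this shape), a hypothetical target "foo" would register the
new hook roughly as below; the foo_* names are placeholders, not real QEMU
code:

    /* Hook variant that runs with both the BQL and cpu->lock held. */
    static bool foo_cpu_has_work(CPUState *cs)
    {
        g_assert(qemu_mutex_iothread_locked());

        return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
    }

    static void foo_cpu_class_init(ObjectClass *oc, void *data)
    {
        CPUClass *cc = CPU_CLASS(oc);

        /* register the BQL-requiring variant instead of cc->has_work */
        cc->has_work_with_iothread_lock = foo_cpu_has_work;
    }
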
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 37 ++++++++++++++++++++++++++++++++++---
 1 file changed, 34 insertions(+), 3 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 651e1ab4d8..fc768bbe00 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -27,6 +27,7 @@
 #include "qapi/qapi-types-run-state.h"
 #include "qemu/bitmap.h"
 #include "qemu/fprintf-fn.h"
+#include "qemu/main-loop.h"
 #include "qemu/rcu_queue.h"
 #include "qemu/queue.h"
 #include "qemu/thread.h"
@@ -87,6 +88,8 @@ struct TranslationBlock;
  * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
  * @has_work: Callback for checking if there is work to do. Called with the
  * CPU lock held.
+ * @has_work_with_iothread_lock: Callback for checking if there is work to do.
+ * Called with both the BQL and the CPU lock held.
  * @do_interrupt: Callback for interrupt handling.
  * @do_unassigned_access: Callback for unassigned access handling.
  * (this is deprecated: new targets should use do_transaction_failed instead)
@@ -170,6 +173,7 @@ typedef struct CPUClass {
     void (*reset)(CPUState *cpu);
     int reset_dump_flags;
     bool (*has_work)(CPUState *cpu);
+    bool (*has_work_with_iothread_lock)(CPUState *cpu);
     void (*do_interrupt)(CPUState *cpu);
     CPUUnassignedAccess do_unassigned_access;
     void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
@@ -809,14 +813,41 @@ const char *parse_cpu_model(const char *cpu_model);
 static inline bool cpu_has_work(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
+    bool has_cpu_lock = cpu_mutex_locked(cpu);
+    bool (*func)(CPUState *cpu);
     bool ret;
 
+    /* some targets require us to hold the BQL when checking for work */
+    if (cc->has_work_with_iothread_lock) {
+        if (qemu_mutex_iothread_locked()) {
+            func = cc->has_work_with_iothread_lock;
+            goto call_func;
+        }
+
+        if (has_cpu_lock) {
+            /* avoid deadlock by acquiring the locks in order */
+            cpu_mutex_unlock(cpu);
+        }
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cpu);
+
+        ret = cc->has_work_with_iothread_lock(cpu);
+
+        qemu_mutex_unlock_iothread();
+        if (!has_cpu_lock) {
+            cpu_mutex_unlock(cpu);
+        }
+        return ret;
+    }
+
     g_assert(cc->has_work);
-    if (cpu_mutex_locked(cpu)) {
-        return cc->has_work(cpu);
+    func = cc->has_work;
+ call_func:
+    if (has_cpu_lock) {
+        return func(cpu);
     }
     cpu_mutex_lock(cpu);
-    ret = cc->has_work(cpu);
+    ret = func(cpu);
     cpu_mutex_unlock(cpu);
     return ret;
 }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 63/73] ppc: convert to cpu_has_work_with_iothread_lock
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (61 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 62/73] cpu: introduce cpu_has_work_with_iothread_lock Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 64/73] mips: " Emilio G. Cota
                   ` (11 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, David Gibson, qemu-ppc

Soon we will call cpu_has_work without the BQL.

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/translate_init.inc.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index f387404913..8c2ac99707 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8468,6 +8468,8 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8511,7 +8513,7 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
     pcc->pcr_supported = PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
     pcc->init_proc = init_proc_POWER7;
     pcc->check_pow = check_pow_nocheck;
-    cc->has_work = cpu_has_work_POWER7;
+    cc->has_work_with_iothread_lock = cpu_has_work_POWER7;
     pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
                        PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
                        PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
@@ -8622,6 +8624,8 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8673,7 +8677,7 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
     pcc->pcr_supported = PCR_COMPAT_2_07 | PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
     pcc->init_proc = init_proc_POWER8;
     pcc->check_pow = check_pow_nocheck;
-    cc->has_work = cpu_has_work_POWER8;
+    cc->has_work_with_iothread_lock = cpu_has_work_POWER8;
     pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
                        PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
                        PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
@@ -8814,6 +8818,8 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         uint64_t psscr = env->spr[SPR_PSSCR];
 
@@ -8882,7 +8888,7 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
                          PCR_COMPAT_2_05;
     pcc->init_proc = init_proc_POWER9;
     pcc->check_pow = check_pow_nocheck;
-    cc->has_work = cpu_has_work_POWER9;
+    cc->has_work_with_iothread_lock = cpu_has_work_POWER9;
     pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
                        PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
                        PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
@@ -10332,6 +10338,8 @@ static bool ppc_cpu_has_work(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
 }
 
@@ -10531,7 +10539,7 @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
     cc->class_by_name = ppc_cpu_class_by_name;
     pcc->parent_parse_features = cc->parse_features;
     cc->parse_features = ppc_cpu_parse_featurestr;
-    cc->has_work = ppc_cpu_has_work;
+    cc->has_work_with_iothread_lock = ppc_cpu_has_work;
     cc->do_interrupt = ppc_cpu_do_interrupt;
     cc->cpu_exec_interrupt = ppc_cpu_exec_interrupt;
     cc->dump_state = ppc_cpu_dump_state;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 64/73] mips: convert to cpu_has_work_with_iothread_lock
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (62 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 63/73] ppc: convert to cpu_has_work_with_iothread_lock Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 65/73] s390x: " Emilio G. Cota
                   ` (10 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Aurelien Jarno, Aleksandar Markovic

Soon we will call cpu_has_work without the BQL.

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/mips/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index fdae8cf440..de1eb771fa 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -58,6 +58,8 @@ static bool mips_cpu_has_work(CPUState *cs)
     bool has_work = false;
     uint32_t interrupt_request = cpu_interrupt_request(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
        interrupts wake-up the CPU, however most of the implementations only
        check for interrupts that can be taken. */
@@ -190,7 +192,7 @@ static void mips_cpu_class_init(ObjectClass *c, void *data)
     cc->reset = mips_cpu_reset;
 
     cc->class_by_name = mips_cpu_class_by_name;
-    cc->has_work = mips_cpu_has_work;
+    cc->has_work_with_iothread_lock = mips_cpu_has_work;
     cc->do_interrupt = mips_cpu_do_interrupt;
     cc->cpu_exec_interrupt = mips_cpu_exec_interrupt;
     cc->dump_state = mips_cpu_dump_state;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 65/73] s390x: convert to cpu_has_work_with_iothread_lock
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (63 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 64/73] mips: " Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 66/73] riscv: " Emilio G. Cota
                   ` (9 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Cornelia Huck,
	David Hildenbrand, qemu-s390x

Soon we will call cpu_has_work without the BQL.

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/s390x/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 5169adb6a5..27b76d7d8c 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -56,6 +56,8 @@ static bool s390_cpu_has_work(CPUState *cs)
 {
     S390CPU *cpu = S390_CPU(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     /* STOPPED cpus can never wake up */
     if (s390_cpu_get_state(cpu) != S390_CPU_STATE_LOAD &&
         s390_cpu_get_state(cpu) != S390_CPU_STATE_OPERATING) {
@@ -470,7 +472,7 @@ static void s390_cpu_class_init(ObjectClass *oc, void *data)
     scc->initial_cpu_reset = s390_cpu_initial_reset;
     cc->reset = s390_cpu_full_reset;
     cc->class_by_name = s390_cpu_class_by_name,
-    cc->has_work = s390_cpu_has_work;
+    cc->has_work_with_iothread_lock = s390_cpu_has_work;
 #ifdef CONFIG_TCG
     cc->do_interrupt = s390_cpu_do_interrupt;
 #endif
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 66/73] riscv: convert to cpu_has_work_with_iothread_lock
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (64 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 65/73] s390x: " Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 67/73] sparc: " Emilio G. Cota
                   ` (8 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Palmer Dabbelt,
	Sagar Karandikar, Bastian Koppelmann

Soon we will call cpu_has_work without the BQL.

Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/riscv/cpu.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index cc3ddc0ae4..a9a0a6728d 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -257,6 +257,9 @@ static bool riscv_cpu_has_work(CPUState *cs)
 #ifndef CONFIG_USER_ONLY
     RISCVCPU *cpu = RISCV_CPU(cs);
     CPURISCVState *env = &cpu->env;
+
+    g_assert(qemu_mutex_iothread_locked());
+
     /*
      * Definition of the WFI instruction requires it to ignore the privilege
      * mode and delegation registers, but respect individual enables
@@ -343,7 +346,7 @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
     cc->reset = riscv_cpu_reset;
 
     cc->class_by_name = riscv_cpu_class_by_name;
-    cc->has_work = riscv_cpu_has_work;
+    cc->has_work_with_iothread_lock = riscv_cpu_has_work;
     cc->do_interrupt = riscv_cpu_do_interrupt;
     cc->cpu_exec_interrupt = riscv_cpu_exec_interrupt;
     cc->dump_state = riscv_cpu_dump_state;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 67/73] sparc: convert to cpu_has_work_with_iothread_lock
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (65 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 66/73] riscv: " Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 68/73] xtensa: " Emilio G. Cota
                   ` (7 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Alex Bennée, Mark Cave-Ayland, Artyom Tarasenko

Soon we will call cpu_has_work without the BQL.

Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sparc/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 933fd8e954..b15f6ef180 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -708,6 +708,8 @@ static bool sparc_cpu_has_work(CPUState *cs)
     SPARCCPU *cpu = SPARC_CPU(cs);
     CPUSPARCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
            cpu_interrupts_enabled(env);
 }
@@ -869,7 +871,7 @@ static void sparc_cpu_class_init(ObjectClass *oc, void *data)
 
     cc->class_by_name = sparc_cpu_class_by_name;
     cc->parse_features = sparc_cpu_parse_features;
-    cc->has_work = sparc_cpu_has_work;
+    cc->has_work_with_iothread_lock = sparc_cpu_has_work;
     cc->do_interrupt = sparc_cpu_do_interrupt;
     cc->cpu_exec_interrupt = sparc_cpu_exec_interrupt;
     cc->dump_state = sparc_cpu_dump_state;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 68/73] xtensa: convert to cpu_has_work_with_iothread_lock
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (66 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 67/73] sparc: " Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
                   ` (6 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée, Max Filippov

Soon we will call cpu_has_work without the BQL.

Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/xtensa/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index d4ca35e6cc..5f3b4a70b0 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -47,6 +47,8 @@ static bool xtensa_cpu_has_work(CPUState *cs)
 #ifndef CONFIG_USER_ONLY
     XtensaCPU *cpu = XTENSA_CPU(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return !cpu->env.runstall && cpu->env.pending_irq_level;
 #else
     return true;
@@ -173,7 +175,7 @@ static void xtensa_cpu_class_init(ObjectClass *oc, void *data)
     cc->reset = xtensa_cpu_reset;
 
     cc->class_by_name = xtensa_cpu_class_by_name;
-    cc->has_work = xtensa_cpu_has_work;
+    cc->has_work_with_iothread_lock = xtensa_cpu_has_work;
     cc->do_interrupt = xtensa_cpu_do_interrupt;
     cc->cpu_exec_interrupt = xtensa_cpu_exec_interrupt;
     cc->dump_state = xtensa_cpu_dump_state;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (67 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 68/73] xtensa: " Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
                   ` (5 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

This function is only called from TCG rr mode, so add
a prefix to make that clear, as well as an assertion to enforce it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 cpus.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/cpus.c b/cpus.c
index c6806184ee..61fb6e033f 100644
--- a/cpus.c
+++ b/cpus.c
@@ -211,10 +211,12 @@ static bool cpu_thread_is_idle(CPUState *cpu)
     return true;
 }
 
-static bool all_cpu_threads_idle(void)
+static bool qemu_tcg_rr_all_cpu_threads_idle(void)
 {
     CPUState *cpu;
 
+    g_assert(qemu_is_tcg_rr());
+
     CPU_FOREACH(cpu) {
         if (!cpu_thread_is_idle(cpu)) {
             return false;
@@ -692,7 +694,7 @@ void qemu_start_warp_timer(void)
     }
 
     if (replay_mode != REPLAY_MODE_PLAY) {
-        if (!all_cpu_threads_idle()) {
+        if (!qemu_tcg_rr_all_cpu_threads_idle()) {
             return;
         }
 
@@ -1325,7 +1327,7 @@ static void qemu_tcg_rr_wait_io_event(void)
 {
     CPUState *cpu;
 
-    while (all_cpu_threads_idle()) {
+    while (qemu_tcg_rr_all_cpu_threads_idle()) {
         stop_tcg_kick_timer();
         qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
     }
@@ -1660,7 +1662,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             atomic_mb_set(&cpu->exit_request, 0);
         }
 
-        if (use_icount && all_cpu_threads_idle()) {
+        if (use_icount && qemu_tcg_rr_all_cpu_threads_idle()) {
             /*
              * When all cpus are sleeping (e.g in WFI), to avoid a deadlock
              * in the main_loop, wake it up in order to start the warp timer.
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 70/73] cpu: protect CPU state with cpu->lock instead of the BQL
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (68 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 71/73] cpus-common: release BQL earlier in run_on_cpu Emilio G. Cota
                   ` (4 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Use the per-CPU locks to protect the CPUs' state, instead of
using the BQL. These locks are uncontended (they are mostly
acquired by the corresponding vCPU thread), so acquiring them
is cheaper than acquiring the BQL, which, particularly in
MTTCG, can be contended at high core counts.

In this conversion we drop qemu_cpu_cond and qemu_pause_cond,
and use cpu->cond instead.

In qom/cpu.c we can finally remove the ugliness that
results from having to hold both the BQL and the CPU lock;
now we just have to grab the CPU lock.
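
The per-thread pattern this patch applies to the vCPU loops can be
summarised with the following sketch (modelled on the kvm/hax/hvf/whpx
hunks below; simplified, not literal code):

    /*
     * cpu->lock protects the per-CPU state (halted, stop, stopped, unplug,
     * work_list).  The BQL is only taken around code that still needs it,
     * and always before re-acquiring cpu->lock, to respect the locking
     * order (BQL first, then cpu->lock).
     */
    static void vcpu_loop_sketch(CPUState *cpu)
    {
        cpu_mutex_lock(cpu);
        do {
            if (cpu_can_run(cpu)) {        /* requires cpu->lock */
                cpu_mutex_unlock(cpu);     /* drop it: the BQL comes first */
                qemu_mutex_lock_iothread();
                /* run the vCPU under the BQL, e.g. kvm_cpu_exec() */
                qemu_mutex_unlock_iothread();
                cpu_mutex_lock(cpu);
            }
            /* sleeps on cpu->halt_cond using cpu->lock, without the BQL */
            qemu_wait_io_event(cpu);
        } while (!cpu->unplug || cpu_can_run(cpu));
        cpu_mutex_unlock(cpu);
    }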

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  20 ++--
 cpus.c            | 281 ++++++++++++++++++++++++++++++++++------------
 qom/cpu.c         |  29 +----
 3 files changed, 225 insertions(+), 105 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index fc768bbe00..44e6e83ecd 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -309,10 +309,6 @@ struct qemu_work_item;
  * valid under cpu_list_lock.
  * @created: Indicates whether the CPU thread has been successfully created.
  * @interrupt_request: Indicates a pending interrupt request.
- * @halted: Nonzero if the CPU is in suspended state.
- * @stop: Indicates a pending stop request.
- * @stopped: Indicates the CPU has been artificially stopped.
- * @unplug: Indicates a pending CPU unplug request.
  * @crash_occurred: Indicates the OS reported a crash (panic) for this CPU
  * @singlestep_enabled: Flags for single-stepping.
  * @icount_extra: Instructions until next timer event.
@@ -342,6 +338,10 @@ struct qemu_work_item;
  *        after the BQL.
  * @cond: Condition variable for per-CPU events.
  * @work_list: List of pending asynchronous work.
+ * @halted: Nonzero if the CPU is in suspended state.
+ * @stop: Indicates a pending stop request.
+ * @stopped: Indicates the CPU has been artificially stopped.
+ * @unplug: Indicates a pending CPU unplug request.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
  * @trace_dstate: Dynamic tracing state of events for this vCPU (bitmask).
@@ -365,12 +365,7 @@ struct CPUState {
 #endif
     int thread_id;
     bool running, has_waiter;
-    struct QemuCond *halt_cond;
     bool thread_kicked;
-    bool created;
-    bool stop;
-    bool stopped;
-    bool unplug;
     bool crash_occurred;
     bool exit_request;
     uint32_t cflags_next_tb;
@@ -384,7 +379,13 @@ struct CPUState {
     QemuMutex *lock;
     /* fields below protected by @lock */
     QemuCond cond;
+    QemuCond *halt_cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
+    uint32_t halted;
+    bool created;
+    bool stop;
+    bool stopped;
+    bool unplug;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
@@ -432,7 +433,6 @@ struct CPUState {
     /* TODO Move common fields from CPUArchState here. */
     int cpu_index;
     int cluster_index;
-    uint32_t halted;
     uint32_t can_do_io;
     int32_t exception_index;
 
diff --git a/cpus.c b/cpus.c
index 61fb6e033f..d3abf5d3dc 100644
--- a/cpus.c
+++ b/cpus.c
@@ -181,24 +181,30 @@ bool cpu_mutex_locked(const CPUState *cpu)
     return test_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
 }
 
-bool cpu_is_stopped(CPUState *cpu)
+/* Called with the CPU's lock held */
+static bool cpu_is_stopped_locked(CPUState *cpu)
 {
     return cpu->stopped || !runstate_is_running();
 }
 
-static inline bool cpu_work_list_empty(CPUState *cpu)
+bool cpu_is_stopped(CPUState *cpu)
 {
-    bool ret;
+    if (!cpu_mutex_locked(cpu)) {
+        bool ret;
 
-    cpu_mutex_lock(cpu);
-    ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    cpu_mutex_unlock(cpu);
-    return ret;
+        cpu_mutex_lock(cpu);
+        ret = cpu_is_stopped_locked(cpu);
+        cpu_mutex_unlock(cpu);
+        return ret;
+    }
+    return cpu_is_stopped_locked(cpu);
 }
 
 static bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || !cpu_work_list_empty(cpu)) {
+    g_assert(cpu_mutex_locked(cpu));
+
+    if (cpu->stop || !QSIMPLEQ_EMPTY(&cpu->work_list)) {
         return false;
     }
     if (cpu_is_stopped(cpu)) {
@@ -216,9 +222,17 @@ static bool qemu_tcg_rr_all_cpu_threads_idle(void)
     CPUState *cpu;
 
     g_assert(qemu_is_tcg_rr());
+    g_assert(qemu_mutex_iothread_locked());
+    g_assert(no_cpu_mutex_locked());
 
     CPU_FOREACH(cpu) {
-        if (!cpu_thread_is_idle(cpu)) {
+        bool is_idle;
+
+        cpu_mutex_lock(cpu);
+        is_idle = cpu_thread_is_idle(cpu);
+        cpu_mutex_unlock(cpu);
+
+        if (!is_idle) {
             return false;
         }
     }
@@ -780,6 +794,8 @@ void qemu_start_warp_timer(void)
 
 static void qemu_account_warp_timer(void)
 {
+    g_assert(qemu_mutex_iothread_locked());
+
     if (!use_icount || !icount_sleep) {
         return;
     }
@@ -1090,6 +1106,7 @@ static void kick_tcg_thread(void *opaque)
 static void start_tcg_kick_timer(void)
 {
     assert(!mttcg_enabled);
+    g_assert(qemu_mutex_iothread_locked());
     if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
         tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
                                            kick_tcg_thread, NULL);
@@ -1102,6 +1119,7 @@ static void start_tcg_kick_timer(void)
 static void stop_tcg_kick_timer(void)
 {
     assert(!mttcg_enabled);
+    g_assert(qemu_mutex_iothread_locked());
     if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) {
         timer_del(tcg_kick_vcpu_timer);
     }
@@ -1204,6 +1222,8 @@ int vm_shutdown(void)
 
 static bool cpu_can_run(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+
     if (cpu->stop) {
         return false;
     }
@@ -1276,16 +1296,9 @@ static void qemu_init_sigbus(void)
 
 static QemuThread io_thread;
 
-/* cpu creation */
-static QemuCond qemu_cpu_cond;
-/* system init */
-static QemuCond qemu_pause_cond;
-
 void qemu_init_cpu_loop(void)
 {
     qemu_init_sigbus();
-    qemu_cond_init(&qemu_cpu_cond);
-    qemu_cond_init(&qemu_pause_cond);
     qemu_mutex_init(&qemu_global_mutex);
 
     qemu_thread_get_self(&io_thread);
@@ -1303,46 +1316,70 @@ static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 {
 }
 
-static void qemu_cpu_stop(CPUState *cpu, bool exit)
+static void qemu_cpu_stop_locked(CPUState *cpu, bool exit)
 {
+    g_assert(cpu_mutex_locked(cpu));
     g_assert(qemu_cpu_is_self(cpu));
     cpu->stop = false;
     cpu->stopped = true;
     if (exit) {
         cpu_exit(cpu);
     }
-    qemu_cond_broadcast(&qemu_pause_cond);
+    qemu_cond_broadcast(&cpu->cond);
+}
+
+static void qemu_cpu_stop(CPUState *cpu, bool exit)
+{
+    cpu_mutex_lock(cpu);
+    qemu_cpu_stop_locked(cpu, exit);
+    cpu_mutex_unlock(cpu);
 }
 
 static void qemu_wait_io_event_common(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+
     atomic_mb_set(&cpu->thread_kicked, false);
     if (cpu->stop) {
-        qemu_cpu_stop(cpu, false);
+        qemu_cpu_stop_locked(cpu, false);
     }
+    /*
+     * unlock+lock cpu_mutex, so that other vCPUs have a chance to grab the
+     * lock and queue some work for this vCPU.
+     */
+    cpu_mutex_unlock(cpu);
     process_queued_cpu_work(cpu);
+    cpu_mutex_lock(cpu);
 }
 
 static void qemu_tcg_rr_wait_io_event(void)
 {
     CPUState *cpu;
 
+    g_assert(qemu_mutex_iothread_locked());
+    g_assert(no_cpu_mutex_locked());
+
     while (qemu_tcg_rr_all_cpu_threads_idle()) {
         stop_tcg_kick_timer();
-        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(first_cpu->halt_cond, first_cpu->lock);
     }
 
     start_tcg_kick_timer();
 
     CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
         qemu_wait_io_event_common(cpu);
+        cpu_mutex_unlock(cpu);
     }
 }
 
 static void qemu_wait_io_event(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+    g_assert(!qemu_mutex_iothread_locked());
+
     while (cpu_thread_is_idle(cpu)) {
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(cpu->halt_cond, cpu->lock);
     }
 
 #ifdef _WIN32
@@ -1362,6 +1399,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1374,14 +1412,20 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     }
 
     kvm_init_cpu_signals(cpu);
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = kvm_cpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1389,10 +1433,16 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     qemu_kvm_destroy_vcpu(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1409,7 +1459,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     rcu_register_thread();
 
-    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1420,10 +1470,10 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
-        qemu_mutex_unlock_iothread();
+        cpu_mutex_unlock(cpu);
         do {
             int sig;
             r = sigwait(&waitset, &sig);
@@ -1432,11 +1482,11 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
             perror("sigwait");
             exit(1);
         }
-        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cpu);
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug);
 
-    qemu_mutex_unlock_iothread();
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 #endif
@@ -1467,6 +1517,8 @@ static int64_t tcg_get_icount_limit(void)
 static void handle_icount_deadline(void)
 {
     assert(qemu_in_vcpu_thread());
+    g_assert(qemu_mutex_iothread_locked());
+
     if (use_icount) {
         int64_t deadline =
             qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
@@ -1547,12 +1599,15 @@ static void deal_with_unplugged_cpus(void)
     CPUState *cpu;
 
     CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
         if (cpu->unplug && !cpu_can_run(cpu)) {
             qemu_tcg_destroy_vcpu(cpu);
             cpu->created = false;
-            qemu_cond_signal(&qemu_cpu_cond);
+            qemu_cond_signal(&cpu->cond);
+            cpu_mutex_unlock(cpu);
             break;
         }
+        cpu_mutex_unlock(cpu);
     }
 }
 
@@ -1573,24 +1628,36 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
     rcu_register_thread();
     tcg_register_thread();
 
+    /*
+     * We call cpu_mutex_lock/unlock just to please the assertions in common
+     * code, since here cpu->lock is an alias to the BQL.
+     */
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
-
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
     cpu->can_do_io = 1;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
 
     /* wait for initial kick-off after machine start */
+    cpu_mutex_lock(first_cpu);
     while (first_cpu->stopped) {
-        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(first_cpu->halt_cond, first_cpu->lock);
+        cpu_mutex_unlock(first_cpu);
 
         /* process any pending work */
         CPU_FOREACH(cpu) {
             current_cpu = cpu;
+            cpu_mutex_lock(cpu);
             qemu_wait_io_event_common(cpu);
+            cpu_mutex_unlock(cpu);
         }
+
+        cpu_mutex_lock(first_cpu);
     }
+    cpu_mutex_unlock(first_cpu);
 
     start_tcg_kick_timer();
 
@@ -1617,7 +1684,12 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             cpu = first_cpu;
         }
 
-        while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
+        while (cpu) {
+            cpu_mutex_lock(cpu);
+            if (!QSIMPLEQ_EMPTY(&cpu->work_list) || cpu->exit_request) {
+                cpu_mutex_unlock(cpu);
+                break;
+            }
 
             atomic_mb_set(&tcg_current_rr_cpu, cpu);
             current_cpu = cpu;
@@ -1628,6 +1700,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             if (cpu_can_run(cpu)) {
                 int r;
 
+                cpu_mutex_unlock(cpu);
                 qemu_mutex_unlock_iothread();
                 prepare_icount_for_run(cpu);
 
@@ -1635,11 +1708,14 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
 
                 process_icount_data(cpu);
                 qemu_mutex_lock_iothread();
+                cpu_mutex_lock(cpu);
 
                 if (r == EXCP_DEBUG) {
                     cpu_handle_guest_debug(cpu);
+                    cpu_mutex_unlock(cpu);
                     break;
                 } else if (r == EXCP_ATOMIC) {
+                    cpu_mutex_unlock(cpu);
                     qemu_mutex_unlock_iothread();
                     cpu_exec_step_atomic(cpu);
                     qemu_mutex_lock_iothread();
@@ -1649,11 +1725,15 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
                 if (cpu->unplug) {
                     cpu = CPU_NEXT(cpu);
                 }
+                cpu_mutex_unlock(current_cpu);
                 break;
             }
 
+            cpu_mutex_unlock(cpu);
             cpu = CPU_NEXT(cpu);
-        } /* while (cpu && !cpu->exit_request).. */
+        } /* for (;;) .. */
+
+        g_assert(no_cpu_mutex_locked());
 
         /* Does not need atomic_mb_set because a spurious wakeup is okay.  */
         atomic_set(&tcg_current_rr_cpu, NULL);
@@ -1685,6 +1765,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
     rcu_register_thread();
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
@@ -1693,11 +1774,17 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_mutex_unlock_iothread();
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = hax_smp_cpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1705,6 +1792,8 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
+
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 }
@@ -1722,6 +1811,7 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
@@ -1729,14 +1819,20 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hvf_init_vcpu(cpu);
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = hvf_vcpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1744,10 +1840,16 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     hvf_vcpu_destroy(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1760,6 +1862,7 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     current_cpu = cpu;
@@ -1769,28 +1872,40 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
         fprintf(stderr, "whpx_init_vcpu failed: %s\n", strerror(-r));
         exit(1);
     }
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = whpx_vcpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
         }
         while (cpu_thread_is_idle(cpu)) {
-            qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+            qemu_cond_wait(cpu->halt_cond, cpu->lock);
         }
         qemu_wait_io_event_common(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     whpx_destroy_vcpu(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1818,14 +1933,14 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     rcu_register_thread();
     tcg_register_thread();
 
-    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
     cpu->can_do_io = 1;
     current_cpu = cpu;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     /* process any pending work */
     cpu->exit_request = 1;
@@ -1833,9 +1948,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     do {
         if (cpu_can_run(cpu)) {
             int r;
-            qemu_mutex_unlock_iothread();
+            cpu_mutex_unlock(cpu);
             r = tcg_cpu_exec(cpu);
-            qemu_mutex_lock_iothread();
+            cpu_mutex_lock(cpu);
             switch (r) {
             case EXCP_DEBUG:
                 cpu_handle_guest_debug(cpu);
@@ -1851,9 +1966,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 g_assert(cpu_halted(cpu));
                 break;
             case EXCP_ATOMIC:
-                qemu_mutex_unlock_iothread();
+                cpu_mutex_unlock(cpu);
                 cpu_exec_step_atomic(cpu);
-                qemu_mutex_lock_iothread();
+                cpu_mutex_lock(cpu);
             default:
                 /* Ignore everything else? */
                 break;
@@ -1866,8 +1981,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     qemu_tcg_destroy_vcpu(cpu);
     cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
-    qemu_mutex_unlock_iothread();
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 }
@@ -1967,54 +2082,69 @@ void qemu_mutex_unlock_iothread(void)
     }
 }
 
-static bool all_vcpus_paused(void)
-{
-    CPUState *cpu;
-
-    CPU_FOREACH(cpu) {
-        if (!cpu->stopped) {
-            return false;
-        }
-    }
-
-    return true;
-}
-
 void pause_all_vcpus(void)
 {
     CPUState *cpu;
 
+    g_assert(no_cpu_mutex_locked());
+
     qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
     CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
         if (qemu_cpu_is_self(cpu)) {
             qemu_cpu_stop(cpu, true);
         } else {
             cpu->stop = true;
             qemu_cpu_kick(cpu);
         }
+        cpu_mutex_unlock(cpu);
     }
 
+ drop_locks_and_stop_all_vcpus:
     /* We need to drop the replay_lock so any vCPU threads woken up
      * can finish their replay tasks
      */
     replay_mutex_unlock();
+    qemu_mutex_unlock_iothread();
 
-    while (!all_vcpus_paused()) {
-        qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
-        CPU_FOREACH(cpu) {
+    CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
+        if (!cpu->stopped) {
+            cpu->stop = true;
             qemu_cpu_kick(cpu);
+            qemu_cond_wait(&cpu->cond, cpu->lock);
         }
+        cpu_mutex_unlock(cpu);
     }
 
-    qemu_mutex_unlock_iothread();
     replay_mutex_lock();
     qemu_mutex_lock_iothread();
+
+    /* a CPU might have been hot-plugged while we weren't holding the BQL */
+    CPU_FOREACH(cpu) {
+        bool stopped;
+
+        cpu_mutex_lock(cpu);
+        stopped = cpu->stopped;
+        cpu_mutex_unlock(cpu);
+
+        if (!stopped) {
+            goto drop_locks_and_stop_all_vcpus;
+        }
+    }
 }
 
 void cpu_resume(CPUState *cpu)
 {
-    cpu->stop = false;
-    cpu->stopped = false;
+    if (cpu_mutex_locked(cpu)) {
+        cpu->stop = false;
+        cpu->stopped = false;
+    } else {
+        cpu_mutex_lock(cpu);
+        cpu->stop = false;
+        cpu->stopped = false;
+        cpu_mutex_unlock(cpu);
+    }
     qemu_cpu_kick(cpu);
 }
 
@@ -2030,8 +2160,11 @@ void resume_all_vcpus(void)
 
 void cpu_remove_sync(CPUState *cpu)
 {
+    cpu_mutex_lock(cpu);
     cpu->stop = true;
     cpu->unplug = true;
+    cpu_mutex_unlock(cpu);
+
     qemu_cpu_kick(cpu);
     qemu_mutex_unlock_iothread();
     qemu_thread_join(cpu->thread);
@@ -2212,9 +2345,15 @@ void qemu_init_vcpu(CPUState *cpu)
         qemu_dummy_start_vcpu(cpu);
     }
 
+    qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+        qemu_cond_wait(&cpu->cond, cpu->lock);
     }
+    cpu_mutex_unlock(cpu);
+
+    qemu_mutex_lock_iothread();
 }
 
 void cpu_stop_current(void)
diff --git a/qom/cpu.c b/qom/cpu.c
index f2695be9b2..65b070a570 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -94,32 +94,13 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
     error_setg(errp, "Obtaining memory mappings is unsupported on this CPU.");
 }
 
-/* Resetting the IRQ comes from across the code base so we take the
- * BQL here if we need to.  cpu_interrupt assumes it is held.*/
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool has_bql = qemu_mutex_iothread_locked();
-    bool has_cpu_lock = cpu_mutex_locked(cpu);
-
-    if (has_bql) {
-        if (has_cpu_lock) {
-            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
-        } else {
-            cpu_mutex_lock(cpu);
-            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
-            cpu_mutex_unlock(cpu);
-        }
-        return;
-    }
-
-    if (has_cpu_lock) {
-        cpu_mutex_unlock(cpu);
-    }
-    qemu_mutex_lock_iothread();
-    cpu_mutex_lock(cpu);
-    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
-    qemu_mutex_unlock_iothread();
-    if (!has_cpu_lock) {
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+    } else {
+        cpu_mutex_lock(cpu);
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
         cpu_mutex_unlock(cpu);
     }
 }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 71/73] cpus-common: release BQL earlier in run_on_cpu
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (69 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 72/73] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
                   ` (3 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

After completing the conversion to per-CPU locks, there is no need
to delay releasing the BQL until after cpu_kick has been called; we
can release it as soon as we know the caller will sleep on the CPU lock.
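
The resulting shape of run_on_cpu() is roughly the following (hand-written
summary, not the literal code; the self-CPU fast path and the current_cpu
save/restore around the wait are omitted, and the use of cpu->cond for the
wait is an assumption based on the per-CPU condvar introduced earlier in
the series):

    void run_on_cpu_sketch(CPUState *cpu, run_on_cpu_func func,
                           run_on_cpu_data data)
    {
        struct qemu_work_item wi = { .func = func, .data = data };
        bool has_bql = qemu_mutex_iothread_locked();

        /* we are going to sleep on the CPU lock, so drop the BQL up front */
        if (has_bql) {
            qemu_mutex_unlock_iothread();
        }

        cpu_mutex_lock(cpu);
        queue_work_on_cpu_locked(cpu, &wi);   /* enqueues and kicks the vCPU */
        while (!atomic_mb_read(&wi.done)) {
            qemu_cond_wait(&cpu->cond, cpu->lock);
        }
        cpu_mutex_unlock(cpu);

        if (has_bql) {
            qemu_mutex_lock_iothread();       /* restore the caller's BQL state */
        }
    }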

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 cpus-common.c | 20 +++++---------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/cpus-common.c b/cpus-common.c
index 62e282bff1..1241024b2c 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -145,6 +145,11 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
         return;
     }
 
+    /* We are going to sleep on the CPU lock, so release the BQL */
+    if (has_bql) {
+        qemu_mutex_unlock_iothread();
+    }
+
     wi.func = func;
     wi.data = data;
     wi.done = false;
@@ -153,21 +158,6 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 
     cpu_mutex_lock(cpu);
     queue_work_on_cpu_locked(cpu, &wi);
-
-    /*
-     * We are going to sleep on the CPU lock, so release the BQL.
-     *
-     * During the transition to per-CPU locks, we release the BQL _after_
-     * having kicked the destination CPU (from queue_work_on_cpu_locked above).
-     * This makes sure that the enqueued work will be seen by the CPU
-     * after being woken up from the kick, since the CPU sleeps on the BQL.
-     * Once we complete the transition to per-CPU locks, we will release
-     * the BQL earlier in this function.
-     */
-    if (has_bql) {
-        qemu_mutex_unlock_iothread();
-    }
-
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 72/73] cpu: add async_run_on_cpu_no_bql
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (70 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 71/73] cpus-common: release BQL earlier in run_on_cpu Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
                   ` (2 subsequent siblings)
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

Some async jobs do not need the BQL. Add async_run_on_cpu_no_bql so that
such jobs can be queued and executed without taking it.
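
A hypothetical usage sketch (the flush_foo* names are made up for
illustration; only the async_run_on_cpu_no_bql() API itself comes from
this patch): a job that touches only state owned by the target vCPU can
be queued with the new helper and will then run without the BQL.

    /* runs in the context of @cpu, without the BQL */
    static void flush_foo_async_work(CPUState *cpu, run_on_cpu_data data)
    {
        int idxmap = data.host_int;

        /* ... touch only per-CPU state belonging to @cpu ... */
        (void)idxmap;
    }

    static void flush_foo(CPUState *cpu, int idxmap)
    {
        async_run_on_cpu_no_bql(cpu, flush_foo_async_work,
                                RUN_ON_CPU_HOST_INT(idxmap));
    }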

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 14 ++++++++++++++
 cpus-common.c     | 39 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 44e6e83ecd..f75f5e74ca 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -898,9 +898,23 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
  * @data: Data to pass to the function.
  *
  * Schedules the function @func for execution on the vCPU @cpu asynchronously.
+ * See also: async_run_on_cpu_no_bql()
  */
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
 
+/**
+ * async_run_on_cpu_no_bql:
+ * @cpu: The vCPU to run on.
+ * @func: The function to be executed.
+ * @data: Data to pass to the function.
+ *
+ * Schedules the function @func for execution on the vCPU @cpu asynchronously.
+ * This function is run outside the BQL.
+ * See also: async_run_on_cpu()
+ */
+void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
+                             run_on_cpu_data data);
+
 /**
  * async_safe_run_on_cpu:
  * @cpu: The vCPU to run on.
diff --git a/cpus-common.c b/cpus-common.c
index 1241024b2c..5832a8bf37 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -109,6 +109,7 @@ struct qemu_work_item {
     run_on_cpu_func func;
     run_on_cpu_data data;
     bool free, exclusive, done;
+    bool bql;
 };
 
 /* Called with the CPU's lock held */
@@ -155,6 +156,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     wi.done = false;
     wi.free = false;
     wi.exclusive = false;
+    wi.bql = true;
 
     cpu_mutex_lock(cpu);
     queue_work_on_cpu_locked(cpu, &wi);
@@ -179,6 +181,21 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     wi->func = func;
     wi->data = data;
     wi->free = true;
+    wi->bql = true;
+
+    queue_work_on_cpu(cpu, wi);
+}
+
+void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
+                             run_on_cpu_data data)
+{
+    struct qemu_work_item *wi;
+
+    wi = g_malloc0(sizeof(struct qemu_work_item));
+    wi->func = func;
+    wi->data = data;
+    wi->free = true;
+    /* wi->bql initialized to false */
 
     queue_work_on_cpu(cpu, wi);
 }
@@ -323,6 +340,7 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     wi->data = data;
     wi->free = true;
     wi->exclusive = true;
+    /* wi->bql initialized to false */
 
     queue_work_on_cpu(cpu, wi);
 }
@@ -347,6 +365,7 @@ static void process_queued_cpu_work_locked(CPUState *cpu)
              * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
              * neither CPU can proceed.
              */
+            g_assert(!wi->bql);
             if (has_bql) {
                 qemu_mutex_unlock_iothread();
             }
@@ -357,12 +376,22 @@ static void process_queued_cpu_work_locked(CPUState *cpu)
                 qemu_mutex_lock_iothread();
             }
         } else {
-            if (has_bql) {
-                wi->func(cpu, wi->data);
+            if (wi->bql) {
+                if (has_bql) {
+                    wi->func(cpu, wi->data);
+                } else {
+                    qemu_mutex_lock_iothread();
+                    wi->func(cpu, wi->data);
+                    qemu_mutex_unlock_iothread();
+                }
             } else {
-                qemu_mutex_lock_iothread();
-                wi->func(cpu, wi->data);
-                qemu_mutex_unlock_iothread();
+                if (has_bql) {
+                    qemu_mutex_unlock_iothread();
+                    wi->func(cpu, wi->data);
+                    qemu_mutex_lock_iothread();
+                } else {
+                    wi->func(cpu, wi->data);
+                }
             }
         }
         cpu_mutex_lock(cpu);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v7 73/73] cputlb: queue async flush jobs without the BQL
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (71 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 72/73] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
@ 2019-03-04 18:18 ` Emilio G. Cota
  2019-03-05  9:16 ` [Qemu-devel] [PATCH v7 00/73] per-CPU locks Alex Bennée
  2019-03-06 19:40 ` Richard Henderson
  74 siblings, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-04 18:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson, Alex Bennée

This yields sizable scalability improvements, as the below results show.

Host: Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)

Workload: Ubuntu 18.04 ppc64 compiling the linux kernel with
"make -j N", where N is the number of cores in the guest.

                      Speedup vs a single thread (higher is better):

         14 +---------------------------------------------------------------+
            |       +    +       +      +       +      +      $$$$$$  +     |
            |                                            $$$$$              |
            |                                      $$$$$$                   |
         12 |-+                                $A$$                       +-|
            |                                $$                             |
            |                             $$$                               |
         10 |-+                         $$    ##D#####################D   +-|
            |                        $$$ #####**B****************           |
            |                      $$####*****                   *****      |
            |                    A$#*****                             B     |
          8 |-+                $$B**                                      +-|
            |                $$**                                           |
            |               $**                                             |
          6 |-+           $$*                                             +-|
            |            A**                                                |
            |           $B                                                  |
            |           $                                                   |
          4 |-+        $*                                                 +-|
            |          $                                                    |
            |         $                                                     |
          2 |-+      $                                                    +-|
            |        $                                 +cputlb-no-bql $$A$$ |
            |       A                                   +per-cpu-lock ##D## |
            |       +    +       +      +       +      +     baseline **B** |
          0 +---------------------------------------------------------------+
                    1    4       8      12      16     20      24     28
                                       Guest vCPUs
  png: https://imgur.com/zZRvS7q

Some notes:
- baseline corresponds to the commit before this series

- per-cpu-lock is the commit that converts the CPU loop to per-cpu locks.

- cputlb-no-bql is this commit.

- I'm using taskset to assign cores to threads, favouring locality whenever
  possible but not using SMT. When N=1, I'm using a single host core, which
  leads to superlinear speedups at larger N (since with more host cores the
  I/O thread can execute while vCPU threads sleep, instead of competing with
  the lone vCPU thread for a single core). In the future I might use N+1 host
  cores for N guest cores to avoid this, or perhaps pin guest threads to
  cores one-by-one.

Single-threaded performance is only lightly affected. Results
below are for a debian aarch64 bootup+test run with the entire series
applied, on an Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz host:

- Before:

 Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):

       7269.033478      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.06% )
    30,659,870,302      cycles                    #    4.218 GHz                      ( +-  0.06% )
    54,790,540,051      instructions              #    1.79  insns per cycle          ( +-  0.05% )
     9,796,441,380      branches                  # 1347.695 M/sec                    ( +-  0.05% )
       165,132,201      branch-misses             #    1.69% of all branches          ( +-  0.12% )

       7.287011656 seconds time elapsed                                          ( +-  0.10% )

- After:

       7375.924053      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.13% )
    31,107,548,846      cycles                    #    4.217 GHz                      ( +-  0.12% )
    55,355,668,947      instructions              #    1.78  insns per cycle          ( +-  0.05% )
     9,929,917,664      branches                  # 1346.261 M/sec                    ( +-  0.04% )
       166,547,442      branch-misses             #    1.68% of all branches          ( +-  0.09% )

       7.389068145 seconds time elapsed                                          ( +-  0.13% )

That is, roughly a 1.4% slowdown in elapsed time.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cputlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 88cc8389e9..d9e0814b5c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -260,7 +260,7 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
 
     CPU_FOREACH(cpu) {
         if (cpu != src) {
-            async_run_on_cpu(cpu, fn, d);
+            async_run_on_cpu_no_bql(cpu, fn, d);
         }
     }
 }
@@ -336,8 +336,8 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
     tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
 
     if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
-                         RUN_ON_CPU_HOST_INT(idxmap));
+        async_run_on_cpu_no_bql(cpu, tlb_flush_by_mmuidx_async_work,
+                                RUN_ON_CPU_HOST_INT(idxmap));
     } else {
         tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
     }
@@ -481,8 +481,8 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
     addr_and_mmu_idx |= idxmap;
 
     if (!qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_work,
-                         RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+        async_run_on_cpu_no_bql(cpu, tlb_flush_page_by_mmuidx_async_work,
+                                RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
     } else {
         tlb_flush_page_by_mmuidx_async_work(
             cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v7 00/73] per-CPU locks
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (72 preceding siblings ...)
  2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
@ 2019-03-05  9:16 ` Alex Bennée
  2019-03-06 19:40 ` Richard Henderson
  74 siblings, 0 replies; 78+ messages in thread
From: Alex Bennée @ 2019-03-05  9:16 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel, Richard Henderson, Aleksandar Markovic,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Walle,
	Palmer Dabbelt, Paolo Bonzini, Peter Maydell, qemu-arm, qemu-ppc,
	qemu-s390x, Sagar Karandikar, Stafford Horne


Emilio G. Cota <cota@braap.org> writes:

> v6: https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg07650.html
>
> All patches in the series have reviews now. Thanks everyone!
>
> I've tested all patches with `make check-qtest -j' for all targets.
> The series is checkpatch-clean (just some warnings about __COVERITY__).
>
> You can fetch the series from:
>   https://github.com/cota/qemu/tree/cpu-lock-v7

Just to say I've applied and tested the whole series and I'm still
seeing the improvements so I think it's ready to be picked up:

  Tested-by: Alex Bennée <alex.bennee@linaro.org>


>
> ---
> v6->v7:
>
> - Rebase on master
>   - Add a cpu_halted_set call to arm code that wasn't there in v6
>
> - Add R-b's and Ack's.
>
> - Add comment to patch 3's log to explain why the bitmap is added
>   there, even though it only gains a user at the end of the series.
>
> - Fix "prevent deadlock" comments before assertions; use
>   "enforce locking order" instead, which is more accurate.
>
> - Add a few more comments, as suggested by Alex.
>
> v6->v7 diff (before rebase) below.
>
> Thanks,
>
> 		Emilio
> ---
> diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> index e4ae04f72c..a513457520 100644
> --- a/accel/tcg/cpu-exec.c
> +++ b/accel/tcg/cpu-exec.c
> @@ -435,7 +435,7 @@ static inline bool cpu_handle_halt_locked(CPUState *cpu)
>              && replay_interrupt()) {
>              X86CPU *x86_cpu = X86_CPU(cpu);
>
> -            /* prevent deadlock; cpu_mutex must be acquired _after_ the BQL */
> +            /* locking order: cpu_mutex must be acquired _after_ the BQL */
>              cpu_mutex_unlock(cpu);
>              qemu_mutex_lock_iothread();
>              cpu_mutex_lock(cpu);
> diff --git a/cpus.c b/cpus.c
> index 4f17fe25bf..82a93f2a5a 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -2062,7 +2062,7 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
>  {
>      QemuMutexLockFunc bql_lock = atomic_read(&qemu_bql_mutex_lock_func);
>
> -    /* prevent deadlock with CPU mutex */
> +    /* enforce locking order */
>      g_assert(no_cpu_mutex_locked());
>
>      g_assert(!qemu_mutex_iothread_locked());
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index bb0729f969..726cb7b090 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -322,7 +322,8 @@ struct qemu_work_item;
>   * @mem_io_pc: Host Program Counter at which the memory was accessed.
>   * @mem_io_vaddr: Target virtual address at which the memory was accessed.
>   * @kvm_fd: vCPU file descriptor for KVM.
> - * @lock: Lock to prevent multiple access to per-CPU fields.
> + * @lock: Lock to prevent multiple access to per-CPU fields. Must be acquired
> + *        after the BQL.
>   * @cond: Condition variable for per-CPU events.
>   * @work_list: List of pending asynchronous work.
>   * @halted: Nonzero if the CPU is in suspended state.
> @@ -804,6 +805,7 @@ static inline bool cpu_has_work(CPUState *cpu)
>      bool (*func)(CPUState *cpu);
>      bool ret;
>
> +    /* some targets require us to hold the BQL when checking for work */
>      if (cc->has_work_with_iothread_lock) {
>          if (qemu_mutex_iothread_locked()) {
>              func = cc->has_work_with_iothread_lock;
> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> index 3f3c670897..65a14deb2f 100644
> --- a/target/i386/kvm.c
> +++ b/target/i386/kvm.c
> @@ -3216,6 +3216,10 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
>          qemu_mutex_lock_iothread();
>      }
>
> +    /*
> +     * We might have cleared some bits in cpu->interrupt_request since reading
> +     * it; read it again.
> +     */
>      interrupt_request = cpu_interrupt_request(cpu);
>
>      /* Force the VCPU out of its inner loop to process any INIT requests


--
Alex Bennée

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v7 00/73] per-CPU locks
  2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
                   ` (73 preceding siblings ...)
  2019-03-05  9:16 ` [Qemu-devel] [PATCH v7 00/73] per-CPU locks Alex Bennée
@ 2019-03-06 19:40 ` Richard Henderson
  2019-03-06 19:57   ` Peter Maydell
  2019-03-06 23:49   ` Emilio G. Cota
  74 siblings, 2 replies; 78+ messages in thread
From: Richard Henderson @ 2019-03-06 19:40 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Alex Bennée, Aleksandar Markovic, Alistair Francis,
	Andrzej Zaborowski, Anthony Green, Artyom Tarasenko,
	Aurelien Jarno, Bastian Koppelmann, Christian Borntraeger,
	Chris Wulff, Cornelia Huck, David Gibson, David Hildenbrand,
	Edgar E. Iglesias, Eduardo Habkost, Fabien Chouteau, Guan Xuetao,
	James Hogan, Laurent Vivier, Marek Vasut, Mark Cave-Ayland,
	Max Filippov, Michael Walle, Palmer Dabbelt, Paolo Bonzini,
	Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x, Sagar Karandikar,
	Stafford Horne

On 3/4/19 10:17 AM, Emilio G. Cota wrote:
> v6: https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg07650.html
> 
> All patches in the series have reviews now. Thanks everyone!
> 
> I've tested all patches with `make check-qtest -j' for all targets.
> The series is checkpatch-clean (just some warnings about __COVERITY__).
> 
> You can fetch the series from:
>   https://github.com/cota/qemu/tree/cpu-lock-v7

Having fixed the OSX build errors, I got a report from Peter of a hang on
the x86_64 bios-tables-test with an arm32 host.  I have been able to
replicate this.


(gdb) info thr
  Id   Target Id                                       Frame
  1    Thread 0xb35a72e0 (LWP 10966) "qemu-system-x86" 0xb606d138 in
pthread_cond_wait@@GLIBC_2.4 () from /usr/lib/libpthread.so.0
  2    Thread 0xb35a3d30 (LWP 10967) "qemu-system-x86" 0xb5fe70fc in syscall ()
   from /usr/lib/libc.so.6
  3    Thread 0xa07fed30 (LWP 10968) "qemu-system-x86" 0xb5fe0688 in poll ()
   from /usr/lib/libc.so.6
* 4    Thread 0x9fdfed30 (LWP 10969) "qemu-system-x86" 0xb606d138 in
pthread_cond_wait@@GLIBC_2.4 () from /usr/lib/libpthread.so.0

(gdb) thr 1
[Switching to thread 1 (Thread 0xb35a72e0 (LWP 10966))]
#0  0xb606d138 in pthread_cond_wait@@GLIBC_2.4 () from /usr/lib/libpthread.so.0
(gdb) where
#0  0xb606d138 in pthread_cond_wait@@GLIBC_2.4 () from /usr/lib/libpthread.so.0
#1  0x009a66a4 in qemu_cond_wait_impl (cond=cond@entry=0x16727b0,
    mutex=0xcad4c8 <qemu_global_mutex>,
    file=file@entry=0xa057f0 "/home/rth/qemu/qemu/cpus-common.c",
    line=line@entry=166) at /home/rth/qemu/qemu/util/qemu-thread-posix.c:161
#2  0x00692378 in run_on_cpu (cpu=cpu@entry=0x16725a8, func=<optimized out>,
    data=...) at /home/rth/qemu/qemu/cpus-common.c:166
#3  0x00604c18 in vapic_enable_tpr_reporting (enable=<optimized out>)
    at /home/rth/qemu/qemu/hw/i386/kvmvapic.c:507
#4  0x00710160 in qdev_reset_one (dev=<optimized out>, opaque=<optimized out>)
    at /home/rth/qemu/qemu/hw/core/qdev.c:250

(gdb) thr 4
[Switching to thread 4 (Thread 0x9fdfed30 (LWP 10969))]
#0  0xb606d138 in pthread_cond_wait@@GLIBC_2.4 () from /usr/lib/libpthread.so.0
(gdb) where
#0  0xb606d138 in pthread_cond_wait@@GLIBC_2.4 () from /usr/lib/libpthread.so.0
#1  0x009a66a4 in qemu_cond_wait_impl (cond=0x1692ed8,
    mutex=0xcad4c8 <qemu_global_mutex>,
    file=file@entry=0x9d0108 "/home/rth/qemu/qemu/cpus.c",
    line=line@entry=1647) at /home/rth/qemu/qemu/util/qemu-thread-posix.c:161
#2  0x00553070 in qemu_tcg_rr_cpu_thread_fn (arg=<optimized out>)
    at /home/rth/qemu/qemu/cpus.c:1647
#3  0x009a5f20 in qemu_thread_start (args=<optimized out>)
    at /home/rth/qemu/qemu/util/qemu-thread-posix.c:502


I had hoped that I would have been able to reproduce this on another host by
forcing rr mode.  But so far no luck.

Thoughts?


r~

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v7 00/73] per-CPU locks
  2019-03-06 19:40 ` Richard Henderson
@ 2019-03-06 19:57   ` Peter Maydell
  2019-03-06 23:49   ` Emilio G. Cota
  1 sibling, 0 replies; 78+ messages in thread
From: Peter Maydell @ 2019-03-06 19:57 UTC (permalink / raw)
  To: Richard Henderson
  Cc: Emilio G. Cota, QEMU Developers, Alex Bennée,
	Aleksandar Markovic, Alistair Francis, Andrzej Zaborowski,
	Anthony Green, Artyom Tarasenko, Aurelien Jarno,
	Bastian Koppelmann, Christian Borntraeger, Chris Wulff,
	Cornelia Huck, David Gibson, David Hildenbrand,
	Edgar E. Iglesias, Eduardo Habkost, Fabien Chouteau, Guan Xuetao,
	James Hogan, Laurent Vivier, Marek Vasut, Mark Cave-Ayland,
	Max Filippov, Michael Walle, Palmer Dabbelt, Paolo Bonzini,
	qemu-arm, qemu-ppc, qemu-s390x, Sagar Karandikar, Stafford Horne

On Wed, 6 Mar 2019 at 19:41, Richard Henderson
<richard.henderson@linaro.org> wrote:
> I had hoped that I would have been able to reproduce this on another host by
> forcing rr mode.  But so far no luck.

Have you tried rr's "chaos mode" switch?

thanks
-- PMM

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v7 00/73] per-CPU locks
  2019-03-06 19:40 ` Richard Henderson
  2019-03-06 19:57   ` Peter Maydell
@ 2019-03-06 23:49   ` Emilio G. Cota
  1 sibling, 0 replies; 78+ messages in thread
From: Emilio G. Cota @ 2019-03-06 23:49 UTC (permalink / raw)
  To: Richard Henderson
  Cc: qemu-devel, Alex Bennée, Aleksandar Markovic,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Walle,
	Palmer Dabbelt, Paolo Bonzini, Peter Maydell, qemu-arm, qemu-ppc,
	qemu-s390x, Sagar Karandikar, Stafford Horne

On Wed, Mar 06, 2019 at 11:40:57 -0800, Richard Henderson wrote:
> Peter reported to me a hang on the x86_64
> bios-tables-test with an arm32 host.  I have been able to replicate this.
(snip)

I reproduced this on a power7 machine (gcc110). However, after a few
git checkout->build cycles I was not able to reproduce this again (!?).

I currently have very little time to spend on this. Given how close we
are to the soft freeze, I suggest we defer the merge of this series
to post-v4.0.

Thanks,

		Emilio

^ permalink raw reply	[flat|nested] 78+ messages in thread

end of thread, other threads:[~2019-03-06 23:49 UTC | newest]

Thread overview: 78+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-04 18:17 [Qemu-devel] [PATCH v7 00/73] per-CPU locks Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 01/73] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 02/73] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 03/73] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 04/73] cpu: make qemu_work_cond per-cpu Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 05/73] cpu: move run_on_cpu to cpus-common Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 06/73] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 07/73] cpu: make per-CPU locks an alias of the BQL in TCG rr mode Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 08/73] tcg-runtime: define helper_cpu_halted_set Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 09/73] ppc: convert to helper_cpu_halted_set Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 10/73] cris: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 11/73] hppa: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 12/73] m68k: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 13/73] alpha: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 14/73] microblaze: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 15/73] cpu: define cpu_halted helpers Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 16/73] tcg-runtime: convert to cpu_halted_set Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 17/73] arm: convert to cpu_halted Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 18/73] ppc: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 19/73] sh4: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 20/73] i386: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 21/73] lm32: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 22/73] m68k: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 23/73] mips: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 24/73] riscv: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 25/73] s390x: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 26/73] sparc: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 27/73] xtensa: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 28/73] gdbstub: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 29/73] openrisc: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 30/73] cpu-exec: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 31/73] cpu: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 32/73] cpu: define cpu_interrupt_request helpers Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 33/73] ppc: use cpu_reset_interrupt Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 34/73] exec: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 35/73] i386: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 36/73] s390x: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 37/73] openrisc: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 38/73] arm: convert to cpu_interrupt_request Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 39/73] i386: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 40/73] i386/kvm: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 41/73] i386/hax-all: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 42/73] i386/whpx-all: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 43/73] i386/hvf: convert to cpu_request_interrupt Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 44/73] ppc: convert to cpu_interrupt_request Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 45/73] sh4: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 46/73] cris: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 47/73] hppa: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 48/73] lm32: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 49/73] m68k: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 50/73] mips: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 51/73] nios: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 52/73] s390x: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 53/73] alpha: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 54/73] moxie: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 55/73] sparc: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 56/73] openrisc: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 57/73] unicore32: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 58/73] microblaze: " Emilio G. Cota
2019-03-04 18:17 ` [Qemu-devel] [PATCH v7 59/73] accel/tcg: " Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 60/73] cpu: convert to interrupt_request Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 61/73] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 62/73] cpu: introduce cpu_has_work_with_iothread_lock Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 63/73] ppc: convert to cpu_has_work_with_iothread_lock Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 64/73] mips: " Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 65/73] s390x: " Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 66/73] riscv: " Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 67/73] sparc: " Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 68/73] xtensa: " Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 69/73] cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 70/73] cpu: protect CPU state with cpu->lock instead of the BQL Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 71/73] cpus-common: release BQL earlier in run_on_cpu Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 72/73] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
2019-03-04 18:18 ` [Qemu-devel] [PATCH v7 73/73] cputlb: queue async flush jobs without the BQL Emilio G. Cota
2019-03-05  9:16 ` [Qemu-devel] [PATCH v7 00/73] per-CPU locks Alex Bennée
2019-03-06 19:40 ` Richard Henderson
2019-03-06 19:57   ` Peter Maydell
2019-03-06 23:49   ` Emilio G. Cota
