* [Qemu-devel] [RFC v3 0/56] per-CPU locks
@ 2018-10-19  1:05 Emilio G. Cota
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 01/56] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
                   ` (56 more replies)
  0 siblings, 57 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Aleksandar Markovic, Alexander Graf,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Clark,
	Michael Walle, Palmer Dabbelt, Pavel Dovgalyuk,
	Peter Crosthwaite, Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x,
	Richard Henderson, Sagar Karandikar, Stafford Horne

Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Alistair Francis <alistair23@gmail.com>
Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Chris Wulff <crwulff@gmail.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Fabien Chouteau <chouteau@adacore.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: James Hogan <jhogan@kernel.org>
Cc: Laurent Vivier <laurent@vivier.eu>
Cc: Marek Vasut <marex@denx.de>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Clark <mjc@sifive.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Pavel Dovgalyuk <dovgaluk@ispras.ru>
Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org
Cc: qemu-ppc@nongnu.org
Cc: qemu-s390x@nongnu.org
Cc: Richard Henderson <rth@twiddle.net>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Stafford Horne <shorne@gmail.com>

I'm calling this series a v3 because it supersedes the two series
I previously sent about using atomics for interrupt_request:
  https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg02013.html
The approach in that series cannot work reliably; using (locked) atomics
to set interrupt_request but not using (locked) atomics to read it
can lead to missed updates.
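To make the failure mode concrete, here is a minimal C11 sketch (hypothetical, not QEMU's actual atomics API) of the pattern: the writer updates interrupt_request with a locked read-modify-write, but a reader using a plain load can have that load hoisted out of a polling loop by the compiler and miss the update indefinitely. Making the read atomic as well, or serializing both sides with a lock as this series does, closes the hole.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-in for CPUState.interrupt_request. */
static _Atomic uint32_t interrupt_request;

/* Writer (e.g. cpu_interrupt): a locked atomic OR -- this side was
 * already correct in the superseded series. */
void set_interrupt(uint32_t mask)
{
    atomic_fetch_or(&interrupt_request, mask);
}

/*
 * Reader: if the field were plain (non-atomic) and read directly, a
 * polling loop such as
 *
 *     while (!(cpu->interrupt_request & mask)) { ... }
 *
 * could legally be compiled to load the value once and spin on a
 * register, missing a concurrent set_interrupt() forever. An atomic
 * load forces every iteration to re-read shared memory.
 */
uint32_t get_interrupt(void)
{
    return atomic_load(&interrupt_request);
}
```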

This series takes a different approach: it serializes access to many
CPUState fields, including .interrupt_request, with a per-CPU lock.

Protecting more fields of CPUState with the lock then allows us to
substitute the BQL for the per-CPU lock in many places, notably
the execution loop in cpus.c. This leads to better scalability
for MTTCG, since CPUs don't have to acquire a contended lock
(the BQL) every time they stop executing code.
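As a rough stand-alone illustration of the locking model (a simplified pthreads sketch with hypothetical names; the real CPUState and its lock live in include/qom/cpu.h), each vCPU's state is serialized by that vCPU's own mutex, so stopping or interrupting one CPU no longer contends on a single global lock:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified, hypothetical CPUState; field names only mirror the series. */
typedef struct CPUState {
    pthread_mutex_t lock;       /* per-CPU lock (cpu->lock) */
    uint32_t interrupt_request; /* protected by @lock */
    bool halted;                /* protected by @lock */
} CPUState;

/* Before this series, callers would take the BQL (one mutex shared by
 * all CPUs) to touch these fields; with per-CPU locks, only the target
 * CPU's own lock is needed. */
void cpu_interrupt_sketch(CPUState *cpu, uint32_t mask)
{
    pthread_mutex_lock(&cpu->lock);
    cpu->interrupt_request |= mask;
    cpu->halted = false;        /* wake the CPU */
    pthread_mutex_unlock(&cpu->lock);
}

bool cpu_halted_sketch(CPUState *cpu)
{
    pthread_mutex_lock(&cpu->lock);
    bool ret = cpu->halted;
    pthread_mutex_unlock(&cpu->lock);
    return ret;
}
```

Two vCPU threads taking their own locks here never serialize against each other, which is where the MTTCG scalability win comes from.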

Some hurdles that remain:

1. I am not happy with the shutdown path via pause_all_vcpus.
   What happens if
   (a) A CPU is added while we're calling pause_all_vcpus?
   (b) Some CPUs are trying to run exclusive work while we
       call pause_all_vcpus?
   Am I being overly paranoid here?

2. I have done very light testing with x86_64 KVM, and no
   testing with the other accelerators (hvf, hax, whpx). check-qtest
   passes, except for an s390x test that appears to be broken
   in master -- I reported the problem here:
     https://lists.gnu.org/archive/html/qemu-devel/2018-10/msg03728.html

3. This might break record-replay. That said, a quick test with icount
   on aarch64 seems to work, although I haven't tested icount extensively.

4. Some architectures still need the BQL in cpu_has_work.
   This leads to some contortions to avoid deadlock, since
   in this series cpu_has_work is called with the CPU lock held.

5. The interrupt handling path still runs with the BQL held, mostly
   because the ISAs I routinely work with need the BQL anyway
   when handling the interrupt. We can complete the pushdown
   of the BQL to .do_interrupt/.exec_interrupt later on; this
   series is already way too long.

Points (1)-(3) make this series an RFC and not a proper patch series.
I'd appreciate feedback on this approach and/or testing.

Note that this series fixes a bug whereby cpu_has_work was
called without the BQL (from cpu_handle_halt). After
this series, cpu_has_work is called with the CPU lock held,
and only the targets that need the BQL in cpu_has_work
acquire it.
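Given the lock-ordering rule this series adopts (the BQL must be acquired before a CPU lock, see patch 03), a cpu_has_work implementation that needs the BQL while its caller already holds the CPU lock cannot simply take the BQL in place. A plausible shape of the resulting "contortion", as a hypothetical pthreads sketch rather than the series' actual helpers:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical lock standing in for the BQL. */
static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;

typedef struct CPUState {
    pthread_mutex_t lock;   /* per-CPU lock */
    bool work_pending;      /* stand-in for BQL-protected target state */
} CPUState;

/* Lock order: BQL first, then cpu->lock. A callee entered with
 * cpu->lock held that needs the BQL must therefore release the CPU
 * lock first; taking the BQL while holding cpu->lock could deadlock
 * against a thread doing bql -> cpu->lock. */
static bool cpu_has_work_with_bql(CPUState *cpu)
{
    bool ret;

    pthread_mutex_unlock(&cpu->lock);   /* drop the CPU lock ...     */
    pthread_mutex_lock(&bql);           /* ... take the BQL in order */
    pthread_mutex_lock(&cpu->lock);     /* retake CPU lock after BQL */

    ret = cpu->work_pending;            /* BQL-protected check */

    pthread_mutex_unlock(&bql);
    return ret;                         /* cpu->lock still held */
}
```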

For some performance numbers, see the last patch.

The series is checkpatch-clean apart from one warning, due to the
use of __COVERITY__ in cpus.c.

You can fetch this series from:

  https://github.com/cota/qemu/tree/cpu-lock-v3

Note that it applies on top of tcg-next plus my dynamic TLB series,
which I'm using in the faint hope that the Ubuntu experiments might
run a bit faster.

Thanks!

		Emilio

^ permalink raw reply	[flat|nested] 118+ messages in thread

* [Qemu-devel] [RFC v3 01/56] cpu: convert queued work to a QSIMPLEQ
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19  6:26   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 02/56] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
                   ` (55 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

Instead of open-coding it.

While at it, make sure that all accesses to the list are
performed while holding the list's lock.
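For readers without qemu/queue.h at hand: QSIMPLEQ is QEMU's singly linked tail queue, modeled on the BSD <sys/queue.h> macros. The conversion below can be mimicked stand-alone with the analogous STAILQ macros (assuming a libc whose sys/queue.h provides them; names here are illustrative, not QEMU's):

```c
#include <sys/queue.h>
#include <stdlib.h>

/* Hypothetical work item mirroring struct qemu_work_item. */
struct work_item {
    STAILQ_ENTRY(work_item) node;   /* ~ QSIMPLEQ_ENTRY */
    int payload;
};

STAILQ_HEAD(work_list, work_item);  /* ~ QSIMPLEQ_HEAD */

/* ~ the QSIMPLEQ_EMPTY / QSIMPLEQ_FIRST / QSIMPLEQ_REMOVE_HEAD loop
 * that process_queued_cpu_work() gains in this patch. */
int drain(struct work_list *list)
{
    int sum = 0;

    while (!STAILQ_EMPTY(list)) {
        struct work_item *wi = STAILQ_FIRST(list);
        STAILQ_REMOVE_HEAD(list, node);
        sum += wi->payload;
        free(wi);
    }
    return sum;
}
```

The macro form replaces the hand-rolled queued_work_first/queued_work_last pointer surgery, which is the whole point of the patch.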

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  6 +++---
 cpus-common.c     | 25 ++++++++-----------------
 cpus.c            | 14 ++++++++++++--
 qom/cpu.c         |  1 +
 4 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index dc130cd307..53488b202f 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -315,8 +315,8 @@ struct qemu_work_item;
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
- * @work_mutex: Lock to prevent multiple access to queued_work_*.
- * @queued_work_first: First asynchronous work pending.
+ * @work_mutex: Lock to prevent multiple access to @work_list.
+ * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
  * @trace_dstate: Dynamic tracing state of events for this vCPU (bitmask).
@@ -357,7 +357,7 @@ struct CPUState {
     sigjmp_buf jmp_env;
 
     QemuMutex work_mutex;
-    struct qemu_work_item *queued_work_first, *queued_work_last;
+    QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
diff --git a/cpus-common.c b/cpus-common.c
index 98dd8c6ff1..a2a6cd93a1 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -107,7 +107,7 @@ void cpu_list_remove(CPUState *cpu)
 }
 
 struct qemu_work_item {
-    struct qemu_work_item *next;
+    QSIMPLEQ_ENTRY(qemu_work_item) node;
     run_on_cpu_func func;
     run_on_cpu_data data;
     bool free, exclusive, done;
@@ -116,13 +116,7 @@ struct qemu_work_item {
 static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 {
     qemu_mutex_lock(&cpu->work_mutex);
-    if (cpu->queued_work_first == NULL) {
-        cpu->queued_work_first = wi;
-    } else {
-        cpu->queued_work_last->next = wi;
-    }
-    cpu->queued_work_last = wi;
-    wi->next = NULL;
+    QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
     qemu_mutex_unlock(&cpu->work_mutex);
 
@@ -314,17 +308,14 @@ void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
 
-    if (cpu->queued_work_first == NULL) {
+    qemu_mutex_lock(&cpu->work_mutex);
+    if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
+        qemu_mutex_unlock(&cpu->work_mutex);
         return;
     }
-
-    qemu_mutex_lock(&cpu->work_mutex);
-    while (cpu->queued_work_first != NULL) {
-        wi = cpu->queued_work_first;
-        cpu->queued_work_first = wi->next;
-        if (!cpu->queued_work_first) {
-            cpu->queued_work_last = NULL;
-        }
+    while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
+        wi = QSIMPLEQ_FIRST(&cpu->work_list);
+        QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
         qemu_mutex_unlock(&cpu->work_mutex);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
diff --git a/cpus.c b/cpus.c
index cce64874e6..6d86522031 100644
--- a/cpus.c
+++ b/cpus.c
@@ -88,9 +88,19 @@ bool cpu_is_stopped(CPUState *cpu)
     return cpu->stopped || !runstate_is_running();
 }
 
+static inline bool cpu_work_list_empty(CPUState *cpu)
+{
+    bool ret;
+
+    qemu_mutex_lock(&cpu->work_mutex);
+    ret = QSIMPLEQ_EMPTY(&cpu->work_list);
+    qemu_mutex_unlock(&cpu->work_mutex);
+    return ret;
+}
+
 static bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || cpu->queued_work_first) {
+    if (cpu->stop || !cpu_work_list_empty(cpu)) {
         return false;
     }
     if (cpu_is_stopped(cpu)) {
@@ -1509,7 +1519,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             cpu = first_cpu;
         }
 
-        while (cpu && !cpu->queued_work_first && !cpu->exit_request) {
+        while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
 
             atomic_mb_set(&tcg_current_rr_cpu, cpu);
             current_cpu = cpu;
diff --git a/qom/cpu.c b/qom/cpu.c
index 20ad54d43f..c47169896e 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -373,6 +373,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_threads = 1;
 
     qemu_mutex_init(&cpu->work_mutex);
+    QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
 
-- 
2.17.1

* [Qemu-devel] [RFC v3 02/56] cpu: rename cpu->work_mutex to cpu->lock
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 01/56] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19  6:26   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 03/56] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
                   ` (54 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

This lock will soon protect more fields of the struct. Give
it a more appropriate name.

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  5 +++--
 cpus-common.c     | 14 +++++++-------
 cpus.c            |  4 ++--
 qom/cpu.c         |  2 +-
 4 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 53488b202f..b813ca28fa 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -315,7 +315,7 @@ struct qemu_work_item;
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
- * @work_mutex: Lock to prevent multiple access to @work_list.
+ * @lock: Lock to prevent multiple access to per-CPU fields.
  * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
@@ -356,7 +356,8 @@ struct CPUState {
     int64_t icount_extra;
     sigjmp_buf jmp_env;
 
-    QemuMutex work_mutex;
+    QemuMutex lock;
+    /* fields below protected by @lock */
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
diff --git a/cpus-common.c b/cpus-common.c
index a2a6cd93a1..2913294cb7 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -115,10 +115,10 @@ struct qemu_work_item {
 
 static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 {
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
 
     qemu_cpu_kick(cpu);
 }
@@ -308,15 +308,15 @@ void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
 
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
-        qemu_mutex_unlock(&cpu->work_mutex);
+        qemu_mutex_unlock(&cpu->lock);
         return;
     }
     while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
         wi = QSIMPLEQ_FIRST(&cpu->work_list);
         QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
-        qemu_mutex_unlock(&cpu->work_mutex);
+        qemu_mutex_unlock(&cpu->lock);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
              * 1) start_exclusive() is called with the BQL taken while another
@@ -332,13 +332,13 @@ void process_queued_cpu_work(CPUState *cpu)
         } else {
             wi->func(cpu, wi->data);
         }
-        qemu_mutex_lock(&cpu->work_mutex);
+        qemu_mutex_lock(&cpu->lock);
         if (wi->free) {
             g_free(wi);
         } else {
             atomic_mb_set(&wi->done, true);
         }
     }
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
     qemu_cond_broadcast(&qemu_work_cond);
 }
diff --git a/cpus.c b/cpus.c
index 6d86522031..b2a9698dc0 100644
--- a/cpus.c
+++ b/cpus.c
@@ -92,9 +92,9 @@ static inline bool cpu_work_list_empty(CPUState *cpu)
 {
     bool ret;
 
-    qemu_mutex_lock(&cpu->work_mutex);
+    qemu_mutex_lock(&cpu->lock);
     ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_mutex_unlock(&cpu->lock);
     return ret;
 }
 
diff --git a/qom/cpu.c b/qom/cpu.c
index c47169896e..d0758c907d 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -372,7 +372,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_cores = 1;
     cpu->nr_threads = 1;
 
-    qemu_mutex_init(&cpu->work_mutex);
+    qemu_mutex_init(&cpu->lock);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
-- 
2.17.1

* [Qemu-devel] [RFC v3 03/56] cpu: introduce cpu_mutex_lock/unlock
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 01/56] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 02/56] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 04/56] cpu: make qemu_work_cond per-cpu Emilio G. Cota
                   ` (53 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

The few direct users of &cpu->lock will be converted soon.

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h   | 26 ++++++++++++++++++++++++
 cpus.c              | 48 +++++++++++++++++++++++++++++++++++++++++++--
 stubs/cpu-lock.c    | 15 ++++++++++++++
 stubs/Makefile.objs |  1 +
 4 files changed, 88 insertions(+), 2 deletions(-)
 create mode 100644 stubs/cpu-lock.c

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index b813ca28fa..1292e7aa33 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -452,6 +452,32 @@ extern struct CPUTailQ cpus;
 
 extern __thread CPUState *current_cpu;
 
+/**
+ * cpu_mutex_lock - lock a CPU's mutex
+ * @cpu: the CPU whose mutex is to be locked
+ *
+ * To avoid deadlock, a CPU's mutex must be acquired after the BQL.
+ */
+#define cpu_mutex_lock(cpu)                             \
+    cpu_mutex_lock_impl(cpu, __FILE__, __LINE__)
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line);
+
+/**
+ * cpu_mutex_unlock - unlock a CPU's mutex
+ * @cpu: the CPU whose mutex is to be unlocked
+ */
+#define cpu_mutex_unlock(cpu)                           \
+    cpu_mutex_unlock_impl(cpu, __FILE__, __LINE__)
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line);
+
+/**
+ * cpu_mutex_locked - check whether a CPU's mutex is locked
+ * @cpu: the CPU of interest
+ *
+ * Returns true if the calling thread is currently holding the CPU's mutex.
+ */
+bool cpu_mutex_locked(const CPUState *cpu);
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
diff --git a/cpus.c b/cpus.c
index b2a9698dc0..a190651653 100644
--- a/cpus.c
+++ b/cpus.c
@@ -83,6 +83,47 @@ static unsigned int throttle_percentage;
 #define CPU_THROTTLE_PCT_MAX 99
 #define CPU_THROTTLE_TIMESLICE_NS 10000000
 
+/* XXX: is this really the max number of CPUs? */
+#define CPU_LOCK_BITMAP_SIZE 2048
+
+/*
+ * Note: we index the bitmap with cpu->cpu_index + 1 so that the logic
+ * also works during early CPU initialization, when cpu->cpu_index is set to
+ * UNASSIGNED_CPU_INDEX == -1.
+ */
+static __thread DECLARE_BITMAP(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
+
+static bool no_cpu_mutex_locked(void)
+{
+    return bitmap_empty(cpu_lock_bitmap, CPU_LOCK_BITMAP_SIZE);
+}
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
+/* coverity gets confused by the indirect function call */
+#ifdef __COVERITY__
+    qemu_mutex_lock_impl(&cpu->lock, file, line);
+#else
+    QemuMutexLockFunc f = atomic_read(&qemu_mutex_lock_func);
+
+    g_assert(!cpu_mutex_locked(cpu));
+    set_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+    f(&cpu->lock, file, line);
+#endif
+}
+
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
+{
+    g_assert(cpu_mutex_locked(cpu));
+    qemu_mutex_unlock_impl(&cpu->lock, file, line);
+    clear_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+}
+
+bool cpu_mutex_locked(const CPUState *cpu)
+{
+    return test_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
+}
+
 bool cpu_is_stopped(CPUState *cpu)
 {
     return cpu->stopped || !runstate_is_running();
@@ -92,9 +133,9 @@ static inline bool cpu_work_list_empty(CPUState *cpu)
 {
     bool ret;
 
-    qemu_mutex_lock(&cpu->lock);
+    cpu_mutex_lock(cpu);
     ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    qemu_mutex_unlock(&cpu->lock);
+    cpu_mutex_unlock(cpu);
     return ret;
 }
 
@@ -1843,6 +1884,9 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line)
 {
     QemuMutexLockFunc bql_lock = atomic_read(&qemu_bql_mutex_lock_func);
 
+    /* prevent deadlock with CPU mutex */
+    g_assert(no_cpu_mutex_locked());
+
     g_assert(!qemu_mutex_iothread_locked());
     bql_lock(&qemu_global_mutex, file, line);
     iothread_locked = true;
diff --git a/stubs/cpu-lock.c b/stubs/cpu-lock.c
new file mode 100644
index 0000000000..bc54f00b78
--- /dev/null
+++ b/stubs/cpu-lock.c
@@ -0,0 +1,15 @@
+#include "qemu/osdep.h"
+#include "qom/cpu.h"
+
+void cpu_mutex_lock_impl(CPUState *cpu, const char *file, int line)
+{
+}
+
+void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line)
+{
+}
+
+bool cpu_mutex_locked(const CPUState *cpu)
+{
+    return true;
+}
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index 53d3f32cb2..fbcdc0256d 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -8,6 +8,7 @@ stub-obj-y += blockdev-close-all-bdrv-states.o
 stub-obj-y += clock-warp.o
 stub-obj-y += cpu-get-clock.o
 stub-obj-y += cpu-get-icount.o
+stub-obj-y += cpu-lock.o
 stub-obj-y += dump.o
 stub-obj-y += error-printf.o
 stub-obj-y += fdset.o
-- 
2.17.1

* [Qemu-devel] [RFC v3 04/56] cpu: make qemu_work_cond per-cpu
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (2 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 03/56] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 05/56] cpu: move run_on_cpu to cpus-common Emilio G. Cota
                   ` (52 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

This eliminates the need to use the BQL to queue CPU work.

While at it, give the per-cpu field a generic name ("cond") since
it will soon be used for more than just queueing CPU work.
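The resulting wait protocol can be sketched stand-alone (hypothetical names, heavily simplified from do_run_on_cpu): the requester sleeps on the target CPU's own condition variable under that CPU's lock, so no global mutex is needed either to queue work or to wait for its completion:

```c
#include <pthread.h>
#include <stdbool.h>

/* Minimal, hypothetical model of per-CPU work completion. */
typedef struct CPUState {
    pthread_mutex_t lock;   /* cpu->lock */
    pthread_cond_t cond;    /* cpu->cond: per-CPU, replacing the
                             * global qemu_work_cond */
    bool work_done;         /* ~ wi->done, protected by @lock */
} CPUState;

/* Requester: wait for this CPU's work to finish, touching only the
 * per-CPU lock and condvar. */
void wait_for_work(CPUState *cpu)
{
    pthread_mutex_lock(&cpu->lock);
    while (!cpu->work_done) {
        pthread_cond_wait(&cpu->cond, &cpu->lock);
    }
    pthread_mutex_unlock(&cpu->lock);
}

/* CPU thread: complete the work, then wake waiters on this CPU only. */
void *cpu_thread(void *arg)
{
    CPUState *cpu = arg;

    pthread_mutex_lock(&cpu->lock);
    cpu->work_done = true;
    pthread_mutex_unlock(&cpu->lock);
    pthread_cond_broadcast(&cpu->cond);
    return NULL;
}
```

With a single global condition variable, every completion broadcast wakes waiters on all CPUs; a per-CPU condvar wakes only the threads that actually care.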

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  6 +++---
 cpus-common.c     | 48 ++++++++++++++++++++++++++++++++++-------------
 cpus.c            |  2 +-
 qom/cpu.c         |  1 +
 4 files changed, 40 insertions(+), 17 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 1292e7aa33..82937881ef 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -316,6 +316,7 @@ struct qemu_work_item;
  * @mem_io_vaddr: Target virtual address at which the memory was accessed.
  * @kvm_fd: vCPU file descriptor for KVM.
  * @lock: Lock to prevent multiple access to per-CPU fields.
+ * @cond: Condition variable for per-CPU events.
  * @work_list: List of pending asynchronous work.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
@@ -358,6 +359,7 @@ struct CPUState {
 
     QemuMutex lock;
     /* fields below protected by @lock */
+    QemuCond cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
 
     CPUAddressSpace *cpu_ases;
@@ -762,12 +764,10 @@ bool cpu_is_stopped(CPUState *cpu);
  * @cpu: The vCPU to run on.
  * @func: The function to be executed.
  * @data: Data to pass to the function.
- * @mutex: Mutex to release while waiting for @func to run.
  *
  * Used internally in the implementation of run_on_cpu.
  */
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
-                   QemuMutex *mutex);
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
 
 /**
  * run_on_cpu:
diff --git a/cpus-common.c b/cpus-common.c
index 2913294cb7..2881707c35 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -26,7 +26,6 @@
 static QemuMutex qemu_cpu_list_lock;
 static QemuCond exclusive_cond;
 static QemuCond exclusive_resume;
-static QemuCond qemu_work_cond;
 
 /* >= 1 if a thread is inside start_exclusive/end_exclusive.  Written
  * under qemu_cpu_list_lock, read with atomic operations.
@@ -42,7 +41,6 @@ void qemu_init_cpu_list(void)
     qemu_mutex_init(&qemu_cpu_list_lock);
     qemu_cond_init(&exclusive_cond);
     qemu_cond_init(&exclusive_resume);
-    qemu_cond_init(&qemu_work_cond);
 }
 
 void cpu_list_lock(void)
@@ -113,39 +111,52 @@ struct qemu_work_item {
     bool free, exclusive, done;
 };
 
-static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+/* Called with the CPU's lock held */
+static void queue_work_on_cpu_locked(CPUState *cpu, struct qemu_work_item *wi)
 {
-    qemu_mutex_lock(&cpu->lock);
     QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
     wi->done = false;
-    qemu_mutex_unlock(&cpu->lock);
 
     qemu_cpu_kick(cpu);
 }
 
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
-                   QemuMutex *mutex)
+static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+{
+    cpu_mutex_lock(cpu);
+    queue_work_on_cpu_locked(cpu, wi);
+    cpu_mutex_unlock(cpu);
+}
+
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
     struct qemu_work_item wi;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (qemu_cpu_is_self(cpu)) {
         func(cpu, data);
         return;
     }
 
+    qemu_mutex_unlock_iothread();
+
     wi.func = func;
     wi.data = data;
     wi.done = false;
     wi.free = false;
     wi.exclusive = false;
 
-    queue_work_on_cpu(cpu, &wi);
+    cpu_mutex_lock(cpu);
+    queue_work_on_cpu_locked(cpu, &wi);
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
 
-        qemu_cond_wait(&qemu_work_cond, mutex);
+        qemu_cond_wait(&cpu->cond, &cpu->lock);
         current_cpu = self_cpu;
     }
+    cpu_mutex_unlock(cpu);
+
+    qemu_mutex_lock_iothread();
 }
 
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
@@ -307,6 +318,7 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
 void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
+    bool has_bql = qemu_mutex_iothread_locked();
 
     qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
@@ -324,13 +336,23 @@ void process_queued_cpu_work(CPUState *cpu)
              * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
              * neither CPU can proceed.
              */
-            qemu_mutex_unlock_iothread();
+            if (has_bql) {
+                qemu_mutex_unlock_iothread();
+            }
             start_exclusive();
             wi->func(cpu, wi->data);
             end_exclusive();
-            qemu_mutex_lock_iothread();
+            if (has_bql) {
+                qemu_mutex_lock_iothread();
+            }
         } else {
-            wi->func(cpu, wi->data);
+            if (has_bql) {
+                wi->func(cpu, wi->data);
+            } else {
+                qemu_mutex_lock_iothread();
+                wi->func(cpu, wi->data);
+                qemu_mutex_unlock_iothread();
+            }
         }
         qemu_mutex_lock(&cpu->lock);
         if (wi->free) {
@@ -340,5 +362,5 @@ void process_queued_cpu_work(CPUState *cpu)
         }
     }
     qemu_mutex_unlock(&cpu->lock);
-    qemu_cond_broadcast(&qemu_work_cond);
+    qemu_cond_broadcast(&cpu->cond);
 }
diff --git a/cpus.c b/cpus.c
index a190651653..e844335386 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1236,7 +1236,7 @@ void qemu_init_cpu_loop(void)
 
 void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
-    do_run_on_cpu(cpu, func, data, &qemu_global_mutex);
+    do_run_on_cpu(cpu, func, data);
 }
 
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
diff --git a/qom/cpu.c b/qom/cpu.c
index d0758c907d..bb031a3a6a 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -373,6 +373,7 @@ static void cpu_common_initfn(Object *obj)
     cpu->nr_threads = 1;
 
     qemu_mutex_init(&cpu->lock);
+    qemu_cond_init(&cpu->cond);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
-- 
2.17.1

* [Qemu-devel] [RFC v3 05/56] cpu: move run_on_cpu to cpus-common
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (3 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 04/56] cpu: make qemu_work_cond per-cpu Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19  6:39   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 06/56] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
                   ` (51 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

We don't pass a pointer to qemu_global_mutex anymore.

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 10 ----------
 cpus-common.c     |  2 +-
 cpus.c            |  5 -----
 3 files changed, 1 insertion(+), 16 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 82937881ef..90fd685899 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -759,16 +759,6 @@ void qemu_cpu_kick(CPUState *cpu);
  */
 bool cpu_is_stopped(CPUState *cpu);
 
-/**
- * do_run_on_cpu:
- * @cpu: The vCPU to run on.
- * @func: The function to be executed.
- * @data: Data to pass to the function.
- *
- * Used internally in the implementation of run_on_cpu.
- */
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
-
 /**
  * run_on_cpu:
  * @cpu: The vCPU to run on.
diff --git a/cpus-common.c b/cpus-common.c
index 2881707c35..20096ec3c6 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -127,7 +127,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
     cpu_mutex_unlock(cpu);
 }
 
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
     struct qemu_work_item wi;
 
diff --git a/cpus.c b/cpus.c
index e844335386..a101e8863c 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1234,11 +1234,6 @@ void qemu_init_cpu_loop(void)
     qemu_thread_get_self(&io_thread);
 }
 
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
-{
-    do_run_on_cpu(cpu, func, data);
-}
-
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 {
     if (kvm_destroy_vcpu(cpu) < 0) {
-- 
2.17.1

* [Qemu-devel] [RFC v3 06/56] cpu: introduce process_queued_cpu_work_locked
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (4 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 05/56] cpu: move run_on_cpu to cpus-common Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19  6:41   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt Emilio G. Cota
                   ` (50 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

It will gain a user once we protect more of CPUState under cpu->lock.

This completes the conversion to cpu_mutex_lock/unlock in the file.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  9 +++++++++
 cpus-common.c     | 17 +++++++++++------
 2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 90fd685899..30d1c260dc 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -986,6 +986,15 @@ void cpu_remove_sync(CPUState *cpu);
  */
 void process_queued_cpu_work(CPUState *cpu);
 
+/**
+ * process_queued_cpu_work_locked - process all items on CPU work queue
+ * @cpu: The CPU which work queue to process.
+ *
+ * Call with @cpu->lock held.
+ * See also: process_queued_cpu_work()
+ */
+void process_queued_cpu_work_locked(CPUState *cpu);
+
 /**
  * cpu_exec_start:
  * @cpu: The CPU for the current thread.
diff --git a/cpus-common.c b/cpus-common.c
index 20096ec3c6..d559f94ef1 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -315,20 +315,19 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     queue_work_on_cpu(cpu, wi);
 }
 
-void process_queued_cpu_work(CPUState *cpu)
+/* Called with the CPU's lock held */
+void process_queued_cpu_work_locked(CPUState *cpu)
 {
     struct qemu_work_item *wi;
     bool has_bql = qemu_mutex_iothread_locked();
 
-    qemu_mutex_lock(&cpu->lock);
     if (QSIMPLEQ_EMPTY(&cpu->work_list)) {
-        qemu_mutex_unlock(&cpu->lock);
         return;
     }
     while (!QSIMPLEQ_EMPTY(&cpu->work_list)) {
         wi = QSIMPLEQ_FIRST(&cpu->work_list);
         QSIMPLEQ_REMOVE_HEAD(&cpu->work_list, node);
-        qemu_mutex_unlock(&cpu->lock);
+        cpu_mutex_unlock(cpu);
         if (wi->exclusive) {
             /* Running work items outside the BQL avoids the following deadlock:
              * 1) start_exclusive() is called with the BQL taken while another
@@ -354,13 +353,19 @@ void process_queued_cpu_work(CPUState *cpu)
                 qemu_mutex_unlock_iothread();
             }
         }
-        qemu_mutex_lock(&cpu->lock);
+        cpu_mutex_lock(cpu);
         if (wi->free) {
             g_free(wi);
         } else {
             atomic_mb_set(&wi->done, true);
         }
     }
-    qemu_mutex_unlock(&cpu->lock);
     qemu_cond_broadcast(&cpu->cond);
 }
+
+void process_queued_cpu_work(CPUState *cpu)
+{
+    cpu_mutex_lock(cpu);
+    process_queued_cpu_work_locked(cpu);
+    cpu_mutex_unlock(cpu);
+}
-- 
2.17.1

* [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (5 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 06/56] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 12:53   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 08/56] cpu: define cpu_halted helpers Emilio G. Cota
                   ` (49 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Laurent Vivier

To avoid a name clash with the soon-to-be-defined cpu_halted() helper.

Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/translate.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index ae3651b867..86491048f8 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -43,7 +43,7 @@
 #undef DEFO32
 #undef DEFO64
 
-static TCGv_i32 cpu_halted;
+static TCGv_i32 cpu_halt;
 static TCGv_i32 cpu_exception_index;
 
 static char cpu_reg_names[2 * 8 * 3 + 5 * 4];
@@ -79,7 +79,7 @@ void m68k_tcg_init(void)
 #undef DEFO32
 #undef DEFO64
 
-    cpu_halted = tcg_global_mem_new_i32(cpu_env,
+    cpu_halt = tcg_global_mem_new_i32(cpu_env,
                                         -offsetof(M68kCPU, env) +
                                         offsetof(CPUState, halted), "HALTED");
     cpu_exception_index = tcg_global_mem_new_i32(cpu_env,
@@ -4646,7 +4646,7 @@ DISAS_INSN(stop)
     ext = read_im16(env, s);
 
     gen_set_sr_im(s, ext, 0);
-    tcg_gen_movi_i32(cpu_halted, 1);
+    tcg_gen_movi_i32(cpu_halt, 1);
     gen_exception(s, s->pc, EXCP_HLT);
 }
 
-- 
2.17.1

* [Qemu-devel] [RFC v3 08/56] cpu: define cpu_halted helpers
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (6 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 12:54   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 09/56] arm: convert to cpu_halted Emilio G. Cota
                   ` (48 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

cpu->halted will soon be protected by cpu->lock.
We will use these helpers to ease the transition,
since right now cpu->halted has many direct callers.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 30d1c260dc..3bf6767cb0 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -480,6 +480,30 @@ void cpu_mutex_unlock_impl(CPUState *cpu, const char *file, int line);
  */
 bool cpu_mutex_locked(const CPUState *cpu);
 
+static inline uint32_t cpu_halted(CPUState *cpu)
+{
+    uint32_t ret;
+
+    if (cpu_mutex_locked(cpu)) {
+        return cpu->halted;
+    }
+    cpu_mutex_lock(cpu);
+    ret = cpu->halted;
+    cpu_mutex_unlock(cpu);
+    return ret;
+}
+
+static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        cpu->halted = val;
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    cpu->halted = val;
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
-- 
2.17.1

* [Qemu-devel] [RFC v3 09/56] arm: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (7 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 08/56] cpu: define cpu_halted helpers Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 12:55   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 10/56] ppc: " Emilio G. Cota
                   ` (47 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Andrzej Zaborowski, Peter Maydell, qemu-arm

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/arm/omap1.c            | 4 ++--
 hw/arm/pxa2xx_gpio.c      | 2 +-
 hw/arm/pxa2xx_pic.c       | 2 +-
 target/arm/arm-powerctl.c | 4 ++--
 target/arm/cpu.c          | 2 +-
 target/arm/op_helper.c    | 2 +-
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
index 539d29ef9c..55a7672976 100644
--- a/hw/arm/omap1.c
+++ b/hw/arm/omap1.c
@@ -1769,7 +1769,7 @@ static uint64_t omap_clkdsp_read(void *opaque, hwaddr addr,
     case 0x18:	/* DSP_SYSST */
         cpu = CPU(s->cpu);
         return (s->clkm.clocking_scheme << 11) | s->clkm.cold_start |
-                (cpu->halted << 6);      /* Quite useless... */
+                (cpu_halted(cpu) << 6);      /* Quite useless... */
     }
 
     OMAP_BAD_REG(addr);
@@ -3790,7 +3790,7 @@ void omap_mpu_wakeup(void *opaque, int irq, int req)
     struct omap_mpu_state_s *mpu = (struct omap_mpu_state_s *) opaque;
     CPUState *cpu = CPU(mpu->cpu);
 
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         cpu_interrupt(cpu, CPU_INTERRUPT_EXITTB);
     }
 }
diff --git a/hw/arm/pxa2xx_gpio.c b/hw/arm/pxa2xx_gpio.c
index e15070188e..5c3fea42e9 100644
--- a/hw/arm/pxa2xx_gpio.c
+++ b/hw/arm/pxa2xx_gpio.c
@@ -128,7 +128,7 @@ static void pxa2xx_gpio_set(void *opaque, int line, int level)
         pxa2xx_gpio_irq_update(s);
 
     /* Wake-up GPIOs */
-    if (cpu->halted && (mask & ~s->dir[bank] & pxa2xx_gpio_wake[bank])) {
+    if (cpu_halted(cpu) && (mask & ~s->dir[bank] & pxa2xx_gpio_wake[bank])) {
         cpu_interrupt(cpu, CPU_INTERRUPT_EXITTB);
     }
 }
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
index 61275fa040..46ab4c3fc2 100644
--- a/hw/arm/pxa2xx_pic.c
+++ b/hw/arm/pxa2xx_pic.c
@@ -58,7 +58,7 @@ static void pxa2xx_pic_update(void *opaque)
     PXA2xxPICState *s = (PXA2xxPICState *) opaque;
     CPUState *cpu = CPU(s->cpu);
 
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         mask[0] = s->int_pending[0] & (s->int_enabled[0] | s->int_idle);
         mask[1] = s->int_pending[1] & (s->int_enabled[1] | s->int_idle);
         if (mask[0] || mask[1]) {
diff --git a/target/arm/arm-powerctl.c b/target/arm/arm-powerctl.c
index ce55eeb682..e4477444fc 100644
--- a/target/arm/arm-powerctl.c
+++ b/target/arm/arm-powerctl.c
@@ -64,7 +64,7 @@ static void arm_set_cpu_on_async_work(CPUState *target_cpu_state,
 
     /* Initialize the cpu we are turning on */
     cpu_reset(target_cpu_state);
-    target_cpu_state->halted = 0;
+    cpu_halted_set(target_cpu_state, 0);
 
     if (info->target_aa64) {
         if ((info->target_el < 3) && arm_feature(&target_cpu->env,
@@ -228,7 +228,7 @@ static void arm_set_cpu_off_async_work(CPUState *target_cpu_state,
 
     assert(qemu_mutex_iothread_locked());
     target_cpu->power_state = PSCI_OFF;
-    target_cpu_state->halted = 1;
+    cpu_halted_set(target_cpu_state, 1);
     target_cpu_state->exception_index = EXCP_HLT;
 }
 
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index b5e61cc177..9c5cda8eb7 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -149,7 +149,7 @@ static void arm_cpu_reset(CPUState *s)
     env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
 
     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
-    s->halted = cpu->start_powered_off;
+    cpu_halted_set(s, cpu->start_powered_off);
 
     if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
         env->iwmmxt.cregs[ARM_IWMMXT_wCID] = 0x69051000 | 'Q';
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index fb15a13e6c..8e393823f8 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -465,7 +465,7 @@ void HELPER(wfi)(CPUARMState *env, uint32_t insn_len)
     }
 
     cs->exception_index = EXCP_HLT;
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_loop_exit(cs);
 }
 
-- 
2.17.1

* [Qemu-devel] [RFC v3 10/56] ppc: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (8 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 09/56] arm: convert to cpu_halted Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 12:56   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 11/56] sh4: " Emilio G. Cota
                   ` (46 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, David Gibson, Alexander Graf, qemu-ppc

In ppce500_spin.c, acquire the lock just once to update
both cpu->halted and cpu->stopped.

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/helper_regs.h        |  2 +-
 hw/ppc/e500.c                   |  4 ++--
 hw/ppc/ppc.c                    | 10 +++++-----
 hw/ppc/ppce500_spin.c           |  6 ++++--
 hw/ppc/spapr_cpu_core.c         |  4 ++--
 hw/ppc/spapr_hcall.c            |  4 +++-
 hw/ppc/spapr_rtas.c             |  6 +++---
 target/ppc/excp_helper.c        |  4 ++--
 target/ppc/kvm.c                |  4 ++--
 target/ppc/translate_init.inc.c |  6 +++---
 10 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/target/ppc/helper_regs.h b/target/ppc/helper_regs.h
index 5efd18049e..9298052ac5 100644
--- a/target/ppc/helper_regs.h
+++ b/target/ppc/helper_regs.h
@@ -161,7 +161,7 @@ static inline int hreg_store_msr(CPUPPCState *env, target_ulong value,
 #if !defined(CONFIG_USER_ONLY)
     if (unlikely(msr_pow == 1)) {
         if (!env->pending_interrupts && (*env->check_pow)(env)) {
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             excp = EXCP_HALTED;
         }
     }
diff --git a/hw/ppc/e500.c b/hw/ppc/e500.c
index e6747fce28..6843c545b7 100644
--- a/hw/ppc/e500.c
+++ b/hw/ppc/e500.c
@@ -657,7 +657,7 @@ static void ppce500_cpu_reset_sec(void *opaque)
 
     /* Secondary CPU starts in halted state for now. Needs to change when
        implementing non-kernel boot. */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
 }
 
@@ -671,7 +671,7 @@ static void ppce500_cpu_reset(void *opaque)
     cpu_reset(cs);
 
     /* Set initial guest state. */
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     env->gpr[1] = (16 * MiB) - 8;
     env->gpr[3] = bi->dt_base;
     env->gpr[4] = 0;
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index ec4be25f49..d1a5a0b877 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -151,7 +151,7 @@ static void ppc6xx_set_irq(void *opaque, int pin, int level)
             /* XXX: Note that the only way to restart the CPU is to reset it */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             }
             break;
         case PPC6xx_INPUT_HRESET:
@@ -230,10 +230,10 @@ static void ppc970_set_irq(void *opaque, int pin, int level)
             /* XXX: TODO: relay the signal to CKSTP_OUT pin */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             } else {
                 LOG_IRQ("%s: restart the CPU\n", __func__);
-                cs->halted = 0;
+                cpu_halted_set(cs, 0);
                 qemu_cpu_kick(cs);
             }
             break;
@@ -361,10 +361,10 @@ static void ppc40x_set_irq(void *opaque, int pin, int level)
             /* Level sensitive - active low */
             if (level) {
                 LOG_IRQ("%s: stop the CPU\n", __func__);
-                cs->halted = 1;
+                cpu_halted_set(cs, 1);
             } else {
                 LOG_IRQ("%s: restart the CPU\n", __func__);
-                cs->halted = 0;
+                cpu_halted_set(cs, 0);
                 qemu_cpu_kick(cs);
             }
             break;
diff --git a/hw/ppc/ppce500_spin.c b/hw/ppc/ppce500_spin.c
index c45fc858de..4b3532730f 100644
--- a/hw/ppc/ppce500_spin.c
+++ b/hw/ppc/ppce500_spin.c
@@ -107,9 +107,11 @@ static void spin_kick(CPUState *cs, run_on_cpu_data data)
     map_start = ldq_p(&curspin->addr) & ~(map_size - 1);
     mmubooke_create_initial_mapping(env, 0, map_start, map_size);
 
-    cs->halted = 0;
-    cs->exception_index = -1;
+    cpu_mutex_lock(cs);
+    cpu_halted_set(cs, 0);
     cs->stopped = false;
+    cpu_mutex_unlock(cs);
+    cs->exception_index = -1;
     qemu_cpu_kick(cs);
 }
 
diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index 2398ce62c0..4c9c60b53b 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -37,7 +37,7 @@ static void spapr_cpu_reset(void *opaque)
     /* All CPUs start halted.  CPU0 is unhalted from the machine level
      * reset code and the rest are explicitly started up by the guest
      * using an RTAS call */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 
     /* Set compatibility mode to match the boot CPU, which was either set
      * by the machine reset code or by CAS. This should never fail.
@@ -91,7 +91,7 @@ void spapr_cpu_set_entry_state(PowerPCCPU *cpu, target_ulong nip, target_ulong r
     env->nip = nip;
     env->gpr[3] = r3;
     kvmppc_set_reg_ppc_online(cpu, 1);
-    CPU(cpu)->halted = 0;
+    cpu_halted_set(CPU(cpu), 0);
     /* Enable Power-saving mode Exit Cause exceptions */
     ppc_store_lpcr(cpu, env->spr[SPR_LPCR] | pcc->lpcr_pm);
 }
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index ae913d070f..9891fc7740 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1088,11 +1088,13 @@ static target_ulong h_cede(PowerPCCPU *cpu, sPAPRMachineState *spapr,
 
     env->msr |= (1ULL << MSR_EE);
     hreg_compute_hflags(env);
+    cpu_mutex_lock(cs);
     if (!cpu_has_work(cs)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
         cs->exit_request = 1;
     }
+    cpu_mutex_unlock(cs);
     return H_SUCCESS;
 }
 
diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
index d6a0952154..925f67123c 100644
--- a/hw/ppc/spapr_rtas.c
+++ b/hw/ppc/spapr_rtas.c
@@ -109,7 +109,7 @@ static void rtas_query_cpu_stopped_state(PowerPCCPU *cpu_,
     id = rtas_ld(args, 0);
     cpu = spapr_find_cpu(id);
     if (cpu != NULL) {
-        if (CPU(cpu)->halted) {
+        if (cpu_halted(CPU(cpu))) {
             rtas_st(rets, 1, 0);
         } else {
             rtas_st(rets, 1, 2);
@@ -153,7 +153,7 @@ static void rtas_start_cpu(PowerPCCPU *callcpu, sPAPRMachineState *spapr,
     env = &newcpu->env;
     pcc = POWERPC_CPU_GET_CLASS(newcpu);
 
-    if (!CPU(newcpu)->halted) {
+    if (!cpu_halted(CPU(newcpu))) {
         rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
         return;
     }
@@ -207,7 +207,7 @@ static void rtas_stop_self(PowerPCCPU *cpu, sPAPRMachineState *spapr,
      * This could deliver an interrupt on a dying CPU and crash the
      * guest */
     ppc_store_lpcr(cpu, env->spr[SPR_LPCR] & ~pcc->lpcr_pm);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     kvmppc_set_reg_ppc_online(cpu, 0);
     qemu_cpu_kick(cs);
 }
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 0ec7ae1ad4..5e1778584a 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -206,7 +206,7 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
                 qemu_log("Machine check while not allowed. "
                         "Entering checkstop state\n");
             }
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             cpu_interrupt_exittb(cs);
         }
         if (env->msr_mask & MSR_HVB) {
@@ -954,7 +954,7 @@ void helper_pminsn(CPUPPCState *env, powerpc_pm_insn_t insn)
     CPUState *cs;
 
     cs = CPU(ppc_env_get_cpu(env));
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     env->in_pm_state = true;
 
     /* The architecture specifies that HDEC interrupts are
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index 30aeafa7de..dc6b8d5e9e 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -1368,7 +1368,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int kvmppc_handle_halt(PowerPCCPU *cpu)
@@ -1377,7 +1377,7 @@ static int kvmppc_handle_halt(PowerPCCPU *cpu)
     CPUPPCState *env = &cpu->env;
 
     if (!(cs->interrupt_request & CPU_INTERRUPT_HARD) && (msr_ee)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
 
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 263e63cb03..0e423bea69 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8445,7 +8445,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
@@ -8599,7 +8599,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
@@ -8791,7 +8791,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
             return false;
         }
-- 
2.17.1

* [Qemu-devel] [RFC v3 11/56] sh4: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (9 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 10/56] ppc: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 12:57   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 12/56] i386: " Emilio G. Cota
                   ` (45 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Aurelien Jarno

Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sh4/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/sh4/op_helper.c b/target/sh4/op_helper.c
index 4f825bae5a..57cc363ccc 100644
--- a/target/sh4/op_helper.c
+++ b/target/sh4/op_helper.c
@@ -105,7 +105,7 @@ void helper_sleep(CPUSH4State *env)
 {
     CPUState *cs = CPU(sh_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     env->in_sleep = 1;
     raise_exception(env, EXCP_HLT, 0);
 }
-- 
2.17.1

* [Qemu-devel] [RFC v3 12/56] i386: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (10 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 11/56] sh4: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 12:59   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 13/56] lm32: " Emilio G. Cota
                   ` (44 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Eduardo Habkost

Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/cpu.h         |  2 +-
 target/i386/cpu.c         |  2 +-
 target/i386/hax-all.c     |  4 ++--
 target/i386/helper.c      |  4 ++--
 target/i386/hvf/hvf.c     |  8 ++++----
 target/i386/hvf/x86hvf.c  |  4 ++--
 target/i386/kvm.c         | 10 +++++-----
 target/i386/misc_helper.c |  2 +-
 target/i386/whpx-all.c    |  6 +++---
 9 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 730c06f80a..461459520a 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1600,7 +1600,7 @@ static inline void cpu_x86_load_seg_cache_sipi(X86CPU *cpu,
                            sipi_vector << 12,
                            env->segs[R_CS].limit,
                            env->segs[R_CS].flags);
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
 }
 
 int cpu_x86_get_descr_debug(CPUX86State *env, unsigned int selector,
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index c88876dfe3..b91d80af0a 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4524,7 +4524,7 @@ static void x86_cpu_reset(CPUState *s)
     /* We hard-wire the BSP to the first CPU. */
     apic_designate_bsp(cpu->apic_state, s->cpu_index == 0);
 
-    s->halted = !cpu_is_bsp(cpu);
+    cpu_halted_set(s, !cpu_is_bsp(cpu));
 
     if (kvm_enabled()) {
         kvm_arch_reset_vcpu(cpu);
diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index d2e512856b..f095c527e3 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -480,7 +480,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
         return 0;
     }
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
         cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
@@ -557,7 +557,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
                 !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
                 /* hlt instruction with interrupt disabled is shutdown */
                 env->eflags |= IF_MASK;
-                cpu->halted = 1;
+                cpu_halted_set(cpu, 1);
                 cpu->exception_index = EXCP_HLT;
                 ret = 1;
             }
diff --git a/target/i386/helper.c b/target/i386/helper.c
index e695f8ba7a..a75278f954 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -454,7 +454,7 @@ void x86_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     (env->hflags >> HF_INHIBIT_IRQ_SHIFT) & 1,
                     (env->a20_mask >> 20) & 1,
                     (env->hflags >> HF_SMM_SHIFT) & 1,
-                    cs->halted);
+                    cpu_halted(cs));
     } else
 #endif
     {
@@ -481,7 +481,7 @@ void x86_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     (env->hflags >> HF_INHIBIT_IRQ_SHIFT) & 1,
                     (env->a20_mask >> 20) & 1,
                     (env->hflags >> HF_SMM_SHIFT) & 1,
-                    cs->halted);
+                    cpu_halted(cs));
     }
 
     for(i = 0; i < 6; i++) {
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 9f52bc413a..fb3b2a26a1 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -500,7 +500,7 @@ void hvf_reset_vcpu(CPUState *cpu) {
     }
 
     hv_vm_sync_tsc(0);
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     hv_vcpu_invalidate_tlb(cpu->hvf_fd);
     hv_vcpu_flush(cpu->hvf_fd);
 }
@@ -665,7 +665,7 @@ int hvf_vcpu_exec(CPUState *cpu)
     int ret = 0;
     uint64_t rip = 0;
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
 
     if (hvf_process_events(cpu)) {
         return EXCP_HLT;
@@ -683,7 +683,7 @@ int hvf_vcpu_exec(CPUState *cpu)
         vmx_update_tpr(cpu);
 
         qemu_mutex_unlock_iothread();
-        if (!cpu_is_bsp(X86_CPU(cpu)) && cpu->halted) {
+        if (!cpu_is_bsp(X86_CPU(cpu)) && cpu_halted(cpu)) {
             qemu_mutex_lock_iothread();
             return EXCP_HLT;
         }
@@ -717,7 +717,7 @@ int hvf_vcpu_exec(CPUState *cpu)
                 (EFLAGS(env) & IF_MASK))
                 && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
                 !(idtvec_info & VMCS_IDT_VEC_VALID)) {
-                cpu->halted = 1;
+                cpu_halted_set(cpu, 1);
                 ret = EXCP_HLT;
             }
             ret = EXCP_INTERRUPT;
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index df8e946fbc..163bbed23f 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -446,7 +446,7 @@ int hvf_process_events(CPUState *cpu_state)
     if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK)) ||
         (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cpu_state->halted = 0;
+        cpu_halted_set(cpu_state, 0);
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
         hvf_cpu_synchronize_state(cpu_state);
@@ -458,5 +458,5 @@ int hvf_process_events(CPUState *cpu_state)
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
     }
-    return cpu_state->halted;
+    return cpu_halted(cpu_state);
 }
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index dc4047b02f..d593818cd5 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2650,7 +2650,7 @@ static int kvm_get_mp_state(X86CPU *cpu)
     }
     env->mp_state = mp_state.mp_state;
     if (kvm_irqchip_in_kernel()) {
-        cs->halted = (mp_state.mp_state == KVM_MP_STATE_HALTED);
+        cpu_halted_set(cs, mp_state.mp_state == KVM_MP_STATE_HALTED);
     }
     return 0;
 }
@@ -3136,7 +3136,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         env->exception_injected = EXCP12_MCHK;
         env->has_error_code = 0;
 
-        cs->halted = 0;
+        cpu_halted_set(cs, 0);
         if (kvm_irqchip_in_kernel() && env->mp_state == KVM_MP_STATE_HALTED) {
             env->mp_state = KVM_MP_STATE_RUNNABLE;
         }
@@ -3159,7 +3159,7 @@ int kvm_arch_process_async_events(CPUState *cs)
     if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
         (cs->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cs->halted = 0;
+        cpu_halted_set(cs, 0);
     }
     if (cs->interrupt_request & CPU_INTERRUPT_SIPI) {
         kvm_cpu_synchronize_state(cs);
@@ -3172,7 +3172,7 @@ int kvm_arch_process_async_events(CPUState *cs)
                                       env->tpr_access_type);
     }
 
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int kvm_handle_halt(X86CPU *cpu)
@@ -3183,7 +3183,7 @@ static int kvm_handle_halt(X86CPU *cpu)
     if (!((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
         !(cs->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
         return EXCP_HLT;
     }
 
diff --git a/target/i386/misc_helper.c b/target/i386/misc_helper.c
index 78f2020ef2..fcd6d833e8 100644
--- a/target/i386/misc_helper.c
+++ b/target/i386/misc_helper.c
@@ -554,7 +554,7 @@ static void do_hlt(X86CPU *cpu)
     CPUX86State *env = &cpu->env;
 
     env->hflags &= ~HF_INHIBIT_IRQ_MASK; /* needed if sti is just before */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index 57e53e1f1f..b9c79ccd99 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -697,7 +697,7 @@ static int whpx_handle_halt(CPUState *cpu)
           (env->eflags & IF_MASK)) &&
         !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
         cpu->exception_index = EXCP_HLT;
-        cpu->halted = true;
+        cpu_halted_set(cpu, true);
         ret = 1;
     }
     qemu_mutex_unlock_iothread();
@@ -857,7 +857,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
         (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
-        cpu->halted = false;
+        cpu_halted_set(cpu, false);
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
@@ -887,7 +887,7 @@ static int whpx_vcpu_run(CPUState *cpu)
     int ret;
 
     whpx_vcpu_process_async_events(cpu);
-    if (cpu->halted) {
+    if (cpu_halted(cpu)) {
         cpu->exception_index = EXCP_HLT;
         atomic_set(&cpu->exit_request, false);
         return 0;
-- 
2.17.1

* [Qemu-devel] [RFC v3 13/56] lm32: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (11 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 12/56] i386: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:00   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 14/56] m68k: " Emilio G. Cota
                   ` (43 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Michael Walle

Cc: Michael Walle <michael@walle.cc>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/lm32/op_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/lm32/op_helper.c b/target/lm32/op_helper.c
index 234d55e056..392634441b 100644
--- a/target/lm32/op_helper.c
+++ b/target/lm32/op_helper.c
@@ -31,7 +31,7 @@ void HELPER(hlt)(CPULM32State *env)
 {
     CPUState *cs = CPU(lm32_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
@@ -44,7 +44,7 @@ void HELPER(ill)(CPULM32State *env)
             "Connect a debugger or switch to the monitor console "
             "to find out more.\n");
     vm_stop(RUN_STATE_PAUSED);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     raise_exception(env, EXCP_HALTED);
 #endif
 }
-- 
2.17.1

* [Qemu-devel] [RFC v3 14/56] m68k: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (12 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 13/56] lm32: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:01   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 15/56] mips: " Emilio G. Cota
                   ` (42 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/m68k/op_helper.c b/target/m68k/op_helper.c
index 8d09ed91c4..61ba1a6dec 100644
--- a/target/m68k/op_helper.c
+++ b/target/m68k/op_helper.c
@@ -237,7 +237,7 @@ static void cf_interrupt_all(CPUM68KState *env, int is_hw)
                 do_m68k_semihosting(env, env->dregs[0]);
                 return;
             }
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             cs->exception_index = EXCP_HLT;
             cpu_loop_exit(cs);
             return;
-- 
2.17.1

* [Qemu-devel] [RFC v3 15/56] mips: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (13 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 14/56] m68k: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:02   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 16/56] riscv: " Emilio G. Cota
                   ` (41 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Aurelien Jarno, Aleksandar Markovic, James Hogan

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: James Hogan <jhogan@kernel.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/mips/cps.c           | 2 +-
 hw/misc/mips_itu.c      | 4 ++--
 target/mips/kvm.c       | 2 +-
 target/mips/op_helper.c | 8 ++++----
 target/mips/translate.c | 4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/hw/mips/cps.c b/hw/mips/cps.c
index 4285d1964e..a8b27eee78 100644
--- a/hw/mips/cps.c
+++ b/hw/mips/cps.c
@@ -49,7 +49,7 @@ static void main_cpu_reset(void *opaque)
     cpu_reset(cs);
 
     /* All VPs are halted on reset. Leave powering up to CPC. */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 }
 
 static bool cpu_mips_itu_supported(CPUMIPSState *env)
diff --git a/hw/misc/mips_itu.c b/hw/misc/mips_itu.c
index 43bbec46cf..7c383939a7 100644
--- a/hw/misc/mips_itu.c
+++ b/hw/misc/mips_itu.c
@@ -162,7 +162,7 @@ static void wake_blocked_threads(ITCStorageCell *c)
 {
     CPUState *cs;
     CPU_FOREACH(cs) {
-        if (cs->halted && (c->blocked_threads & (1ULL << cs->cpu_index))) {
+        if (cpu_halted(cs) && (c->blocked_threads & (1ULL << cs->cpu_index))) {
             cpu_interrupt(cs, CPU_INTERRUPT_WAKE);
         }
     }
@@ -172,7 +172,7 @@ static void wake_blocked_threads(ITCStorageCell *c)
 static void QEMU_NORETURN block_thread_and_exit(ITCStorageCell *c)
 {
     c->blocked_threads |= 1ULL << current_cpu->cpu_index;
-    current_cpu->halted = 1;
+    cpu_halted_set(current_cpu, 1);
     current_cpu->exception_index = EXCP_HLT;
     cpu_loop_exit_restore(current_cpu, current_cpu->mem_io_pc);
 }
diff --git a/target/mips/kvm.c b/target/mips/kvm.c
index 8e72850962..0b177a7577 100644
--- a/target/mips/kvm.c
+++ b/target/mips/kvm.c
@@ -156,7 +156,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index c148b310cd..8904dfa2b4 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -649,7 +649,7 @@ static bool mips_vpe_is_wfi(MIPSCPU *c)
 
     /* If the VPE is halted but otherwise active, it means it's waiting for
        an interrupt.  */
-    return cpu->halted && mips_vpe_active(env);
+    return cpu_halted(cpu) && mips_vpe_active(env);
 }
 
 static bool mips_vp_is_wfi(MIPSCPU *c)
@@ -657,7 +657,7 @@ static bool mips_vp_is_wfi(MIPSCPU *c)
     CPUState *cpu = CPU(c);
     CPUMIPSState *env = &c->env;
 
-    return cpu->halted && mips_vp_active(env);
+    return cpu_halted(cpu) && mips_vp_active(env);
 }
 
 static inline void mips_vpe_wake(MIPSCPU *c)
@@ -674,7 +674,7 @@ static inline void mips_vpe_sleep(MIPSCPU *cpu)
 
     /* The VPE was shut off, really go to bed.
        Reset any old _WAKE requests.  */
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_reset_interrupt(cs, CPU_INTERRUPT_WAKE);
 }
 
@@ -2519,7 +2519,7 @@ void helper_wait(CPUMIPSState *env)
 {
     CPUState *cs = CPU(mips_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cpu_reset_interrupt(cs, CPU_INTERRUPT_WAKE);
     /* Last instruction in the block, PC was updated before
        - no need to recover PC and icount */
diff --git a/target/mips/translate.c b/target/mips/translate.c
index ab16cdb911..544e4dc19c 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -25753,7 +25753,7 @@ void cpu_state_reset(CPUMIPSState *env)
             env->tcs[i].CP0_TCHalt = 1;
         }
         env->active_tc.CP0_TCHalt = 1;
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
 
         if (cs->cpu_index == 0) {
             /* VPE0 starts up enabled.  */
@@ -25761,7 +25761,7 @@ void cpu_state_reset(CPUMIPSState *env)
             env->CP0_VPEConf0 |= (1 << CP0VPEC0_MVP) | (1 << CP0VPEC0_VPA);
 
             /* TC0 starts up unhalted.  */
-            cs->halted = 0;
+            cpu_halted_set(cs, 0);
             env->active_tc.CP0_TCHalt = 0;
             env->tcs[0].CP0_TCHalt = 0;
             /* With thread 0 active.  */
-- 
2.17.1

* [Qemu-devel] [RFC v3 16/56] riscv: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (14 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 15/56] mips: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19 17:24   ` Palmer Dabbelt
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 17/56] s390x: " Emilio G. Cota
                   ` (40 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Michael Clark, Palmer Dabbelt, Sagar Karandikar,
	Bastian Koppelmann, Alistair Francis

Cc: Michael Clark <mjc@sifive.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Alistair Francis <alistair23@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/riscv/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
index aec7558e1b..b5c32241dd 100644
--- a/target/riscv/op_helper.c
+++ b/target/riscv/op_helper.c
@@ -736,7 +736,7 @@ void helper_wfi(CPURISCVState *env)
 {
     CPUState *cs = CPU(riscv_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     cpu_loop_exit(cs);
 }
-- 
2.17.1

* [Qemu-devel] [RFC v3 17/56] s390x: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (15 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 16/56] riscv: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:04   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 18/56] sparc: " Emilio G. Cota
                   ` (39 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Cornelia Huck, Christian Borntraeger,
	Alexander Graf, David Hildenbrand, qemu-s390x

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/intc/s390_flic.c        |  2 +-
 target/s390x/cpu.c         | 18 +++++++++++-------
 target/s390x/excp_helper.c |  2 +-
 target/s390x/kvm.c         |  2 +-
 target/s390x/sigp.c        |  8 ++++----
 5 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index 5f8168f0f0..bfb5cf1d07 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -198,7 +198,7 @@ static void qemu_s390_flic_notify(uint32_t type)
         }
 
         /* we always kick running CPUs for now, this is tricky */
-        if (cs->halted) {
+        if (cpu_halted(cs)) {
             /* don't check for subclasses, CPUs double check when waking up */
             if (type & FLIC_PENDING_SERVICE) {
                 if (!(cpu->env.psw.mask & PSW_MASK_EXT)) {
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 18ba7f85a5..956d4e1d18 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -288,7 +288,7 @@ static void s390_cpu_initfn(Object *obj)
     CPUS390XState *env = &cpu->env;
 
     cs->env_ptr = env;
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     object_property_add(obj, "crash-information", "GuestPanicInformation",
                         s390_cpu_get_crash_info_qom, NULL, NULL, NULL, NULL);
@@ -313,8 +313,8 @@ static void s390_cpu_finalize(Object *obj)
 #if !defined(CONFIG_USER_ONLY)
 static bool disabled_wait(CPUState *cpu)
 {
-    return cpu->halted && !(S390_CPU(cpu)->env.psw.mask &
-                            (PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK));
+    return cpu_halted(cpu) && !(S390_CPU(cpu)->env.psw.mask &
+                                (PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK));
 }
 
 static unsigned s390_count_running_cpus(void)
@@ -340,10 +340,12 @@ unsigned int s390_cpu_halt(S390CPU *cpu)
     CPUState *cs = CPU(cpu);
     trace_cpu_halt(cs->cpu_index);
 
-    if (!cs->halted) {
-        cs->halted = 1;
+    cpu_mutex_lock(cs);
+    if (!cpu_halted(cs)) {
+        cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
+    cpu_mutex_unlock(cs);
 
     return s390_count_running_cpus();
 }
@@ -353,10 +355,12 @@ void s390_cpu_unhalt(S390CPU *cpu)
     CPUState *cs = CPU(cpu);
     trace_cpu_unhalt(cs->cpu_index);
 
-    if (cs->halted) {
-        cs->halted = 0;
+    cpu_mutex_lock(cs);
+    if (cpu_halted(cs)) {
+        cpu_halted_set(cs, 0);
         cs->exception_index = -1;
     }
+    cpu_mutex_unlock(cs);
 }
 
 unsigned int s390_cpu_set_state(uint8_t cpu_state, S390CPU *cpu)
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index 2a33222f7e..d22c5b3ce5 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -461,7 +461,7 @@ try_deliver:
     if ((env->psw.mask & PSW_MASK_WAIT) || stopped) {
         /* don't trigger a cpu_loop_exit(), use an interrupt instead */
         cpu_interrupt(CPU(cpu), CPU_INTERRUPT_HALT);
-    } else if (cs->halted) {
+    } else if (cpu_halted(cs)) {
         /* unhalt if we had a WAIT PSW somewhere in our injection chain */
         s390_cpu_unhalt(cpu);
     }
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index 2ebf26adfe..ffb52888c0 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -1005,7 +1005,7 @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
 
 int kvm_arch_process_async_events(CPUState *cs)
 {
-    return cs->halted;
+    return cpu_halted(cs);
 }
 
 static int s390_kvm_irq_to_interrupt(struct kvm_s390_irq *irq,
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
index c1f9245797..d410da797a 100644
--- a/target/s390x/sigp.c
+++ b/target/s390x/sigp.c
@@ -115,7 +115,7 @@ static void sigp_stop(CPUState *cs, run_on_cpu_data arg)
     }
 
     /* disabled wait - sleeping in user space */
-    if (cs->halted) {
+    if (cpu_halted(cs)) {
         s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);
     } else {
         /* execute the stop function */
@@ -131,7 +131,7 @@ static void sigp_stop_and_store_status(CPUState *cs, run_on_cpu_data arg)
     SigpInfo *si = arg.host_ptr;
 
     /* disabled wait - sleeping in user space */
-    if (s390_cpu_get_state(cpu) == S390_CPU_STATE_OPERATING && cs->halted) {
+    if (s390_cpu_get_state(cpu) == S390_CPU_STATE_OPERATING && cpu_halted(cs)) {
         s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);
     }
 
@@ -313,7 +313,7 @@ static void sigp_cond_emergency(S390CPU *src_cpu, S390CPU *dst_cpu,
     }
 
     /* this looks racy, but these values are only used when STOPPED */
-    idle = CPU(dst_cpu)->halted;
+    idle = cpu_halted(CPU(dst_cpu));
     psw_addr = dst_cpu->env.psw.addr;
     psw_mask = dst_cpu->env.psw.mask;
     asn = si->param;
@@ -347,7 +347,7 @@ static void sigp_sense_running(S390CPU *dst_cpu, SigpInfo *si)
     }
 
     /* If halted (which includes also STOPPED), it is not running */
-    if (CPU(dst_cpu)->halted) {
+    if (cpu_halted(CPU(dst_cpu))) {
         si->cc = SIGP_CC_ORDER_CODE_ACCEPTED;
     } else {
         set_sigp_status(si, SIGP_STAT_NOT_RUNNING);
-- 
2.17.1

* [Qemu-devel] [RFC v3 18/56] sparc: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (16 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 17/56] s390x: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:04   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 19/56] xtensa: " Emilio G. Cota
                   ` (38 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Fabien Chouteau, Mark Cave-Ayland, Artyom Tarasenko

Cc: Fabien Chouteau <chouteau@adacore.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/sparc/leon3.c      | 2 +-
 hw/sparc/sun4m.c      | 8 ++++----
 hw/sparc64/sparc64.c  | 4 ++--
 target/sparc/helper.c | 2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/hw/sparc/leon3.c b/hw/sparc/leon3.c
index fa98ab8177..0746001f91 100644
--- a/hw/sparc/leon3.c
+++ b/hw/sparc/leon3.c
@@ -61,7 +61,7 @@ static void main_cpu_reset(void *opaque)
 
     cpu_reset(cpu);
 
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     env->pc     = s->entry;
     env->npc    = s->entry + 4;
     env->regbase[6] = s->sp;
diff --git a/hw/sparc/sun4m.c b/hw/sparc/sun4m.c
index 3c29b68e67..5bd6512b2b 100644
--- a/hw/sparc/sun4m.c
+++ b/hw/sparc/sun4m.c
@@ -168,7 +168,7 @@ static void cpu_kick_irq(SPARCCPU *cpu)
     CPUSPARCState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
 
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     cpu_check_irqs(env);
     qemu_cpu_kick(cs);
 }
@@ -199,7 +199,7 @@ static void main_cpu_reset(void *opaque)
     CPUState *cs = CPU(cpu);
 
     cpu_reset(cs);
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
 }
 
 static void secondary_cpu_reset(void *opaque)
@@ -208,7 +208,7 @@ static void secondary_cpu_reset(void *opaque)
     CPUState *cs = CPU(cpu);
 
     cpu_reset(cs);
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
 }
 
 static void cpu_halt_signal(void *opaque, int irq, int level)
@@ -825,7 +825,7 @@ static void cpu_devinit(const char *cpu_type, unsigned int id,
     } else {
         qemu_register_reset(secondary_cpu_reset, cpu);
         cs = CPU(cpu);
-        cs->halted = 1;
+        cpu_halted_set(cs, 1);
     }
     *cpu_irqs = qemu_allocate_irqs(cpu_set_irq, cpu, MAX_PILS);
     env->prom_addr = prom_addr;
diff --git a/hw/sparc64/sparc64.c b/hw/sparc64/sparc64.c
index 408388945e..372bbd4f5b 100644
--- a/hw/sparc64/sparc64.c
+++ b/hw/sparc64/sparc64.c
@@ -100,7 +100,7 @@ static void cpu_kick_irq(SPARCCPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUSPARCState *env = &cpu->env;
 
-    cs->halted = 0;
+    cpu_halted_set(cs, 0);
     cpu_check_irqs(env);
     qemu_cpu_kick(cs);
 }
@@ -115,7 +115,7 @@ void sparc64_cpu_set_ivec_irq(void *opaque, int irq, int level)
         if (!(env->ivec_status & 0x20)) {
             trace_sparc64_cpu_ivec_raise_irq(irq);
             cs = CPU(cpu);
-            cs->halted = 0;
+            cpu_halted_set(cs, 0);
             env->interrupt_index = TT_IVEC;
             env->ivec_status |= 0x20;
             env->ivec_data[0] = (0x1f << 6) | irq;
diff --git a/target/sparc/helper.c b/target/sparc/helper.c
index 46232788c8..dd00cf7cac 100644
--- a/target/sparc/helper.c
+++ b/target/sparc/helper.c
@@ -245,7 +245,7 @@ void helper_power_down(CPUSPARCState *env)
 {
     CPUState *cs = CPU(sparc_env_get_cpu(env));
 
-    cs->halted = 1;
+    cpu_halted_set(cs, 1);
     cs->exception_index = EXCP_HLT;
     env->pc = env->npc;
     env->npc = env->pc + 4;
-- 
2.17.1

* [Qemu-devel] [RFC v3 19/56] xtensa: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (17 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 18/56] sparc: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:10   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 20/56] gdbstub: " Emilio G. Cota
                   ` (37 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Max Filippov

Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/xtensa/cpu.c       | 2 +-
 target/xtensa/helper.c    | 2 +-
 target/xtensa/op_helper.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index a54dbe4260..d4ca35e6cc 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -86,7 +86,7 @@ static void xtensa_cpu_reset(CPUState *s)
 
 #ifndef CONFIG_USER_ONLY
     reset_mmu(env);
-    s->halted = env->runstall;
+    cpu_halted_set(s, env->runstall);
 #endif
 }
 
diff --git a/target/xtensa/helper.c b/target/xtensa/helper.c
index 501082f55b..dd6819fbad 100644
--- a/target/xtensa/helper.c
+++ b/target/xtensa/helper.c
@@ -807,7 +807,7 @@ void xtensa_runstall(CPUXtensaState *env, bool runstall)
     CPUState *cpu = CPU(xtensa_env_get_cpu(env));
 
     env->runstall = runstall;
-    cpu->halted = runstall;
+    cpu_halted_set(cpu, runstall);
     if (runstall) {
         cpu_interrupt(cpu, CPU_INTERRUPT_HALT);
     } else {
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
index e4b42ab3e5..510040b593 100644
--- a/target/xtensa/op_helper.c
+++ b/target/xtensa/op_helper.c
@@ -414,7 +414,7 @@ void HELPER(waiti)(CPUXtensaState *env, uint32_t pc, uint32_t intlevel)
     }
 
     cpu = CPU(xtensa_env_get_cpu(env));
-    cpu->halted = 1;
+    cpu_halted_set(cpu, 1);
     HELPER(exception)(env, EXCP_HLT);
 }
 
-- 
2.17.1

* [Qemu-devel] [RFC v3 20/56] gdbstub: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (18 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 19/56] xtensa: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:10   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 21/56] openrisc: " Emilio G. Cota
                   ` (36 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 gdbstub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gdbstub.c b/gdbstub.c
index c8478de8f5..a5ff50d9e7 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1305,7 +1305,7 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
                 /* memtohex() doubles the required space */
                 len = snprintf((char *)mem_buf, sizeof(buf) / 2,
                                "CPU#%d [%s]", cpu->cpu_index,
-                               cpu->halted ? "halted " : "running");
+                               cpu_halted(cpu) ? "halted " : "running");
                 trace_gdbstub_op_extra_info((char *)mem_buf);
                 memtohex(buf, mem_buf, len);
                 put_packet(s, buf);
-- 
2.17.1

* [Qemu-devel] [RFC v3 21/56] openrisc: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (19 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 20/56] gdbstub: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:11   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 22/56] cpu-exec: " Emilio G. Cota
                   ` (35 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Stafford Horne

Cc: Stafford Horne <shorne@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/openrisc/sys_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/openrisc/sys_helper.c b/target/openrisc/sys_helper.c
index b66a45c1e0..ab4d8fb520 100644
--- a/target/openrisc/sys_helper.c
+++ b/target/openrisc/sys_helper.c
@@ -137,7 +137,7 @@ void HELPER(mtspr)(CPUOpenRISCState *env, target_ulong spr, target_ulong rb)
         if (env->pmr & PMR_DME || env->pmr & PMR_SME) {
             cpu_restore_state(cs, GETPC(), true);
             env->pc += 4;
-            cs->halted = 1;
+            cpu_halted_set(cs, 1);
             raise_exception(cpu, EXCP_HALTED);
         }
         break;
-- 
2.17.1

* [Qemu-devel] [RFC v3 22/56] cpu-exec: convert to cpu_halted
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (20 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 21/56] openrisc: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 23/56] cpu: define cpu_interrupt_request helpers Emilio G. Cota
                   ` (34 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cpu-exec.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 870027d435..f37c9b1e94 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -422,14 +422,20 @@ static inline TranslationBlock *tb_find(CPUState *cpu,
     return tb;
 }
 
-static inline bool cpu_handle_halt(CPUState *cpu)
+static inline bool cpu_handle_halt_locked(CPUState *cpu)
 {
-    if (cpu->halted) {
+    g_assert(cpu_mutex_locked(cpu));
+
+    if (cpu_halted(cpu)) {
 #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
         if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
+
+            cpu_mutex_unlock(cpu);
             qemu_mutex_lock_iothread();
+            cpu_mutex_lock(cpu);
+
             apic_poll_irq(x86_cpu->apic_state);
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
             qemu_mutex_unlock_iothread();
@@ -439,12 +445,22 @@ static inline bool cpu_handle_halt(CPUState *cpu)
             return true;
         }
 
-        cpu->halted = 0;
+        cpu_halted_set(cpu, 0);
     }
 
     return false;
 }
 
+static inline bool cpu_handle_halt(CPUState *cpu)
+{
+    bool ret;
+
+    cpu_mutex_lock(cpu);
+    ret = cpu_handle_halt_locked(cpu);
+    cpu_mutex_unlock(cpu);
+    return ret;
+}
+
 static inline void cpu_handle_debug_exception(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
@@ -543,7 +559,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
         } else if (interrupt_request & CPU_INTERRUPT_HALT) {
             replay_interrupt();
             cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
-            cpu->halted = 1;
+            cpu_halted_set(cpu, 1);
             cpu->exception_index = EXCP_HLT;
             qemu_mutex_unlock_iothread();
             return true;
-- 
2.17.1

* [Qemu-devel] [RFC v3 23/56] cpu: define cpu_interrupt_request helpers
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (21 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 22/56] cpu-exec: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:15   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 24/56] ppc: use cpu_reset_interrupt Emilio G. Cota
                   ` (33 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 3bf6767cb0..cd66b8828a 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -504,6 +504,41 @@ static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
     cpu_mutex_unlock(cpu);
 }
 
+static inline uint32_t cpu_interrupt_request(CPUState *cpu)
+{
+    uint32_t ret;
+
+    if (cpu_mutex_locked(cpu)) {
+        return cpu->interrupt_request;
+    }
+    cpu_mutex_lock(cpu);
+    ret = cpu->interrupt_request;
+    cpu_mutex_unlock(cpu);
+    return ret;
+}
+
+static inline void cpu_interrupt_request_or(CPUState *cpu, uint32_t mask)
+{
+    if (cpu_mutex_locked(cpu)) {
+        cpu->interrupt_request |= mask;
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    cpu->interrupt_request |= mask;
+    cpu_mutex_unlock(cpu);
+}
+
+static inline void cpu_interrupt_request_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        cpu->interrupt_request = val;
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    cpu->interrupt_request = val;
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
-- 
2.17.1

* [Qemu-devel] [RFC v3 24/56] ppc: use cpu_reset_interrupt
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (22 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 23/56] cpu: define cpu_interrupt_request helpers Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:15   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 25/56] exec: " Emilio G. Cota
                   ` (32 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, David Gibson, Alexander Graf, qemu-ppc

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/excp_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 5e1778584a..737c9c72be 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -880,7 +880,7 @@ bool ppc_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     if (interrupt_request & CPU_INTERRUPT_HARD) {
         ppc_hw_interrupt(env);
         if (env->pending_interrupts == 0) {
-            cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
         }
         return true;
     }
-- 
2.17.1

* [Qemu-devel] [RFC v3 25/56] exec: use cpu_reset_interrupt
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (23 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 24/56] ppc: use cpu_reset_interrupt Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:17   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 26/56] i386: " Emilio G. Cota
                   ` (31 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index 4fd831ef06..6006902975 100644
--- a/exec.c
+++ b/exec.c
@@ -776,7 +776,7 @@ static int cpu_common_post_load(void *opaque, int version_id)
 
     /* 0x01 was CPU_INTERRUPT_EXIT. This line can be removed when the
        version_id is increased. */
-    cpu->interrupt_request &= ~0x01;
+    cpu_reset_interrupt(cpu, 0x01);
     tlb_flush(cpu);
 
     /* loadvm has just updated the content of RAM, bypassing the
-- 
2.17.1

* [Qemu-devel] [RFC v3 26/56] i386: use cpu_reset_interrupt
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (24 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 25/56] exec: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:18   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 27/56] s390x: " Emilio G. Cota
                   ` (30 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Richard Henderson, Eduardo Habkost

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/hax-all.c    |  4 ++--
 target/i386/hvf/x86hvf.c |  8 ++++----
 target/i386/kvm.c        | 14 +++++++-------
 target/i386/seg_helper.c | 13 ++++++-------
 target/i386/svm_helper.c |  2 +-
 target/i386/whpx-all.c   | 10 +++++-----
 6 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index f095c527e3..8b53a9708f 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -433,7 +433,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
         irq = cpu_get_pic_interrupt(env);
         if (irq >= 0) {
             hax_inject_interrupt(env, irq);
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
         }
     }
 
@@ -483,7 +483,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
     cpu_halted_set(cpu, 0);
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 163bbed23f..e8b13ed534 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -402,7 +402,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
-            cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
             wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
         } else {
@@ -414,7 +414,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
         int line = cpu_get_pic_interrupt(&x86cpu->env);
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);
         if (line >= 0) {
             wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
@@ -440,7 +440,7 @@ int hvf_process_events(CPUState *cpu_state)
     }
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
     if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
@@ -453,7 +453,7 @@ int hvf_process_events(CPUState *cpu_state)
         do_cpu_sipi(cpu);
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
-        cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_TPR);
         hvf_cpu_synchronize_state(cpu_state);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index d593818cd5..effaf87f01 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2709,7 +2709,7 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
              */
             events.smi.pending = cs->interrupt_request & CPU_INTERRUPT_SMI;
             events.smi.latched_init = cs->interrupt_request & CPU_INTERRUPT_INIT;
-            cs->interrupt_request &= ~(CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
         } else {
             /* Keep these in cs->interrupt_request.  */
             events.smi.pending = 0;
@@ -3005,7 +3005,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
         if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
             qemu_mutex_lock_iothread();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             qemu_mutex_unlock_iothread();
             DPRINTF("injected NMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
@@ -3016,7 +3016,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
         }
         if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
             qemu_mutex_lock_iothread();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
             qemu_mutex_unlock_iothread();
             DPRINTF("injected SMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
@@ -3052,7 +3052,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
             (env->eflags & IF_MASK)) {
             int irq;
 
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
                 struct kvm_interrupt intr;
@@ -3123,7 +3123,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         /* We must not raise CPU_INTERRUPT_MCE if it's not supported. */
         assert(env->mcg_cap);
 
-        cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
 
         kvm_cpu_synchronize_state(cs);
 
@@ -3153,7 +3153,7 @@ int kvm_arch_process_async_events(CPUState *cs)
     }
 
     if (cs->interrupt_request & CPU_INTERRUPT_POLL) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
     if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
@@ -3166,7 +3166,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         do_cpu_sipi(cpu);
     }
     if (cs->interrupt_request & CPU_INTERRUPT_TPR) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
         kvm_cpu_synchronize_state(cs);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
diff --git a/target/i386/seg_helper.c b/target/i386/seg_helper.c
index 33714bc6e1..ac5497de79 100644
--- a/target/i386/seg_helper.c
+++ b/target/i386/seg_helper.c
@@ -1332,7 +1332,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     switch (interrupt_request) {
 #if !defined(CONFIG_USER_ONLY)
     case CPU_INTERRUPT_POLL:
-        cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
         break;
 #endif
@@ -1341,23 +1341,22 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         break;
     case CPU_INTERRUPT_SMI:
         cpu_svm_check_intercept_param(env, SVM_EXIT_SMI, 0, 0);
-        cs->interrupt_request &= ~CPU_INTERRUPT_SMI;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_SMI);
         do_smm_enter(cpu);
         break;
     case CPU_INTERRUPT_NMI:
         cpu_svm_check_intercept_param(env, SVM_EXIT_NMI, 0, 0);
-        cs->interrupt_request &= ~CPU_INTERRUPT_NMI;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_NMI);
         env->hflags2 |= HF2_NMI_MASK;
         do_interrupt_x86_hardirq(env, EXCP02_NMI, 1);
         break;
     case CPU_INTERRUPT_MCE:
-        cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
         do_interrupt_x86_hardirq(env, EXCP12_MCHK, 0);
         break;
     case CPU_INTERRUPT_HARD:
         cpu_svm_check_intercept_param(env, SVM_EXIT_INTR, 0, 0);
-        cs->interrupt_request &= ~(CPU_INTERRUPT_HARD |
-                                   CPU_INTERRUPT_VIRQ);
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD | CPU_INTERRUPT_VIRQ);
         intno = cpu_get_pic_interrupt(env);
         qemu_log_mask(CPU_LOG_TB_IN_ASM,
                       "Servicing hardware INT=0x%02x\n", intno);
@@ -1372,7 +1371,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         qemu_log_mask(CPU_LOG_TB_IN_ASM,
                       "Servicing virtual hardware INT=0x%02x\n", intno);
         do_interrupt_x86_hardirq(env, intno, 1);
-        cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
         break;
 #endif
     }
diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index 9fd22a883b..a6d33e55d8 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -700,7 +700,7 @@ void do_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
     env->hflags &= ~HF_GUEST_MASK;
     env->intercept = 0;
     env->intercept_exceptions = 0;
-    cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+    cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
     env->tsc_offset = 0;
 
     env->gdt.base  = x86_ldq_phys(cs, env->vm_hsave + offsetof(struct vmcb,
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index b9c79ccd99..9673bdc219 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -728,14 +728,14 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     if (!vcpu->interruption_pending &&
         cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
         if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             vcpu->interruptable = false;
             new_int.InterruptionType = WHvX64PendingNmi;
             new_int.InterruptionPending = 1;
             new_int.InterruptionVector = 2;
         }
         if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
         }
     }
 
@@ -758,7 +758,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
         vcpu->interruptable && (env->eflags & IF_MASK)) {
         assert(!new_int.InterruptionPending);
         if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
                 new_int.InterruptionType = WHvX64PendingInterrupt;
@@ -850,7 +850,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
@@ -868,7 +868,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
-        cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
         }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 118+ messages in thread
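
Many hunks in this patch follow the same shape: test a bit in `interrupt_request`, then clear it with `cpu_reset_interrupt()`. As a sketch only (this combined helper is hypothetical and not part of the series), the pair can be folded into a single critical section so the test and the clear are atomic; the series instead achieves the same effect by holding the CPU mutex around both operations:

```c
#include <pthread.h>
#include <stdbool.h>

/* Simplified stand-in for QEMU's CPUState (see the note above:
 * the real struct and lock discipline are more involved). */
typedef struct CPUState {
    pthread_mutex_t lock;
    unsigned int interrupt_request;
} CPUState;

/* Hypothetical helper: atomically test whether any bit of @mask is
 * pending and clear those bits; returns true if one was pending. */
static bool cpu_test_and_reset_interrupt(CPUState *cpu, unsigned int mask)
{
    bool pending;

    pthread_mutex_lock(&cpu->lock);
    pending = (cpu->interrupt_request & mask) != 0;
    cpu->interrupt_request &= ~mask;
    pthread_mutex_unlock(&cpu->lock);
    return pending;
}
```

With such a helper, a call site like the CPU_INTERRUPT_POLL check above would collapse to `if (cpu_test_and_reset_interrupt(cs, CPU_INTERRUPT_POLL)) { apic_poll_irq(...); }`.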

* [Qemu-devel] [RFC v3 27/56] s390x: use cpu_reset_interrupt
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (25 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 26/56] i386: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:18   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 28/56] openrisc: " Emilio G. Cota
                   ` (29 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Cornelia Huck, Richard Henderson, Alexander Graf,
	David Hildenbrand, qemu-s390x

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Alexander Graf <agraf@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/s390x/excp_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index d22c5b3ce5..7ca50b3df6 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -454,7 +454,7 @@ try_deliver:
 
     /* we might still have pending interrupts, but not deliverable */
     if (!env->pending_int && !qemu_s390_flic_has_any(flic)) {
-        cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
     }
 
     /* WAIT PSW during interrupt injection or STOP interrupt */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 118+ messages in thread

* [Qemu-devel] [RFC v3 28/56] openrisc: use cpu_reset_interrupt
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (26 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 27/56] s390x: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:18   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 29/56] arm: convert to cpu_interrupt_request Emilio G. Cota
                   ` (28 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Stafford Horne

From: Paolo Bonzini <pbonzini@redhat.com>

Cc: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/openrisc/sys_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/openrisc/sys_helper.c b/target/openrisc/sys_helper.c
index ab4d8fb520..c645cc896d 100644
--- a/target/openrisc/sys_helper.c
+++ b/target/openrisc/sys_helper.c
@@ -170,7 +170,7 @@ void HELPER(mtspr)(CPUOpenRISCState *env, target_ulong spr, target_ulong rb)
                 env->ttmr = (rb & ~TTMR_IP) | ip;
             } else {    /* Clear IP bit.  */
                 env->ttmr = rb & ~TTMR_IP;
-                cs->interrupt_request &= ~CPU_INTERRUPT_TIMER;
+                cpu_reset_interrupt(cs, CPU_INTERRUPT_TIMER);
             }
 
             cpu_openrisc_timer_update(cpu);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 118+ messages in thread

* [Qemu-devel] [RFC v3 29/56] arm: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (27 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 28/56] openrisc: " Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:21   ` Richard Henderson
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 30/56] i386: " Emilio G. Cota
                   ` (27 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Maydell, qemu-arm

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/arm/cpu.c    |  2 +-
 target/arm/helper.c | 13 ++++++-------
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 9c5cda8eb7..7330c2dae1 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -49,7 +49,7 @@ static bool arm_cpu_has_work(CPUState *cs)
     ARMCPU *cpu = ARM_CPU(cs);
 
     return (cpu->power_state != PSCI_OFF)
-        && cs->interrupt_request &
+        && cpu_interrupt_request(cs) &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
          | CPU_INTERRUPT_EXITTB);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index c83f7c1109..85e0b9645e 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -1295,12 +1295,14 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
     CPUState *cs = ENV_GET_CPU(env);
     uint64_t ret = 0;
 
-    if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+    cpu_mutex_lock(cs);
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
         ret |= CPSR_I;
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_FIQ) {
         ret |= CPSR_F;
     }
+    cpu_mutex_unlock(cs);
     /* External aborts are not possible in QEMU so A bit is always clear */
     return ret;
 }
@@ -8579,10 +8581,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
         return;
     }
 
-    /* Hooks may change global state so BQL should be held, also the
-     * BQL needs to be held for any modification of
-     * cs->interrupt_request.
-     */
+    /* Hooks may change global state so BQL should be held */
     g_assert(qemu_mutex_iothread_locked());
 
     arm_call_pre_el_change_hook(cpu);
@@ -8597,7 +8596,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
     arm_call_el_change_hook(cpu);
 
     if (!kvm_enabled()) {
-        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_EXITTB);
     }
 }
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 118+ messages in thread
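
The arm conversion above introduces the read side of the same scheme: `cpu_interrupt_request()` and `cpu_interrupt_request_or()` access the word under the CPU mutex, and callers such as `isr_read()` take `cpu_mutex_lock()` across several reads so they see one consistent snapshot. A minimal sketch of those accessors (the bodies are assumptions; only the names come from the patch — in particular, the real helpers must cope with the current thread already holding the lock, as in `isr_read()`, a detail omitted here):

```c
#include <pthread.h>

/* Simplified stand-in for QEMU's CPUState, as in the sketches above. */
typedef struct CPUState {
    pthread_mutex_t lock;
    unsigned int interrupt_request;
} CPUState;

/* Read interrupt_request under the lock and return the snapshot. */
static unsigned int cpu_interrupt_request(CPUState *cpu)
{
    unsigned int ret;

    pthread_mutex_lock(&cpu->lock);
    ret = cpu->interrupt_request;
    pthread_mutex_unlock(&cpu->lock);
    return ret;
}

/* OR bits into interrupt_request under the lock. */
static void cpu_interrupt_request_or(CPUState *cpu, unsigned int mask)
{
    pthread_mutex_lock(&cpu->lock);
    cpu->interrupt_request |= mask;
    pthread_mutex_unlock(&cpu->lock);
}
```

This is why the BQL comment removed in `arm_cpu_do_interrupt()` is safe: modifications of `interrupt_request` are now serialized by the per-CPU lock rather than by the BQL.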

* [Qemu-devel] [RFC v3 30/56] i386: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (28 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 29/56] arm: convert to cpu_interrupt_request Emilio G. Cota
@ 2018-10-19  1:05 ` Emilio G. Cota
  2018-10-21 13:27   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 31/56] ppc: " Emilio G. Cota
                   ` (26 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Richard Henderson, Eduardo Habkost

Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/i386/cpu.c        |  2 +-
 target/i386/hax-all.c    | 16 +++++------
 target/i386/helper.c     |  4 +--
 target/i386/hvf/hvf.c    |  6 ++--
 target/i386/hvf/x86hvf.c | 32 ++++++++++++++--------
 target/i386/kvm.c        | 59 ++++++++++++++++++++++++----------------
 target/i386/svm_helper.c |  4 +--
 target/i386/whpx-all.c   | 44 ++++++++++++++++++------------
 8 files changed, 98 insertions(+), 69 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index b91d80af0a..9eaf3274b2 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5473,7 +5473,7 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
 
 static bool x86_cpu_has_work(CPUState *cs)
 {
-    return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
+    return x86_cpu_pending_interrupt(cs, cpu_interrupt_request(cs)) != 0;
 }
 
 static void x86_disas_set_info(CPUState *cs, disassemble_info *info)
diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index 8b53a9708f..11751d78ad 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -293,7 +293,7 @@ int hax_vm_destroy(struct hax_vm *vm)
 
 static void hax_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
@@ -427,7 +427,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
      * Unlike KVM, HAX kernel check for the eflags, instead of qemu
      */
     if (ht->ready_for_interrupt_injection &&
-        (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        (cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
         int irq;
 
         irq = cpu_get_pic_interrupt(env);
@@ -441,7 +441,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
      * interrupt, request an interrupt window exit.  This will
      * cause a return to userspace as soon as the guest is ready to
      * receive interrupts. */
-    if ((cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+    if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
         ht->request_interrupt_window = 1;
     } else {
         ht->request_interrupt_window = 0;
@@ -482,19 +482,19 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
 
     cpu_halted_set(cpu, 0);
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) {
         DPRINTF("\nhax_vcpu_hax_exec: handling INIT for %d\n",
                 cpu->cpu_index);
         do_cpu_init(x86_cpu);
         hax_vcpu_sync_state(env, 1);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_SIPI) {
         DPRINTF("hax_vcpu_hax_exec: handling SIPI for %d\n",
                 cpu->cpu_index);
         hax_vcpu_sync_state(env, 0);
@@ -553,8 +553,8 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
             ret = -1;
             break;
         case HAX_EXIT_HLT:
-            if (!(cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
-                !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+            if (!(cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) &&
+                !(cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI)) {
                 /* hlt instruction with interrupt disabled is shutdown */
                 env->eflags |= IF_MASK;
                 cpu_halted_set(cpu, 1);
diff --git a/target/i386/helper.c b/target/i386/helper.c
index a75278f954..9197fb4edc 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -1035,12 +1035,12 @@ void do_cpu_init(X86CPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUX86State *env = &cpu->env;
     CPUX86State *save = g_new(CPUX86State, 1);
-    int sipi = cs->interrupt_request & CPU_INTERRUPT_SIPI;
+    int sipi = cpu_interrupt_request(cs) & CPU_INTERRUPT_SIPI;
 
     *save = *env;
 
     cpu_reset(cs);
-    cs->interrupt_request = sipi;
+    cpu_interrupt_request_set(cs, sipi);
     memcpy(&env->start_init_save, &save->start_init_save,
            offsetof(CPUX86State, end_init_save) -
            offsetof(CPUX86State, start_init_save));
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index fb3b2a26a1..513a7ef417 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -250,7 +250,7 @@ void update_apic_tpr(CPUState *cpu)
 
 static void hvf_handle_interrupt(CPUState * cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
     }
@@ -713,9 +713,9 @@ int hvf_vcpu_exec(CPUState *cpu)
         switch (exit_reason) {
         case EXIT_REASON_HLT: {
             macvm_set_rip(cpu, rip + ins_len);
-            if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+            if (!((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) &&
                 (EFLAGS(env) & IF_MASK))
-                && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
+                && !(cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI) &&
                 !(idtvec_info & VMCS_IDT_VEC_VALID)) {
                 cpu_halted_set(cpu, 1);
                 ret = EXCP_HLT;
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index e8b13ed534..aae1324533 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -358,6 +358,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
 
     uint8_t vector;
     uint64_t intr_type;
+    bool ret;
     bool have_event = true;
     if (env->interrupt_injected != -1) {
         vector = env->interrupt_injected;
@@ -400,7 +401,8 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         };
     }
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
+    cpu_mutex_lock(cpu_state);
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_NMI) {
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
             cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
@@ -411,7 +413,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
     }
 
     if (!(env->hflags & HF_INHIBIT_IRQ_MASK) &&
-        (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
+        (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
         int line = cpu_get_pic_interrupt(&x86cpu->env);
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);
@@ -420,43 +422,49 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
         }
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) {
         vmx_set_int_window_exiting(cpu_state);
     }
-    return (cpu_state->interrupt_request
-            & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR));
+    ret = cpu_interrupt_request(cpu_state)
+          & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR);
+    cpu_mutex_unlock(cpu_state);
+    return ret;
 }
 
 int hvf_process_events(CPUState *cpu_state)
 {
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
+    int ret;
 
     EFLAGS(env) = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
+    cpu_mutex_lock(cpu_state);
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_INIT) {
         hvf_cpu_synchronize_state(cpu_state);
         do_cpu_init(cpu);
     }
 
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
-    if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if (((cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK)) ||
-        (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cpu_state, 0);
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_SIPI) {
         hvf_cpu_synchronize_state(cpu_state);
         do_cpu_sipi(cpu);
     }
-    if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_TPR);
         hvf_cpu_synchronize_state(cpu_state);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
     }
-    return cpu_halted(cpu);
+    ret = cpu_halted(cpu);
+    cpu_mutex_unlock(cpu_state);
+    return ret;
 }
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index effaf87f01..2e98a0ac63 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -2707,9 +2707,12 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
             /* As soon as these are moved to the kernel, remove them
              * from cs->interrupt_request.
              */
-            events.smi.pending = cs->interrupt_request & CPU_INTERRUPT_SMI;
-            events.smi.latched_init = cs->interrupt_request & CPU_INTERRUPT_INIT;
+            cpu_mutex_lock(cs);
+            events.smi.pending = cpu_interrupt_request(cs) & CPU_INTERRUPT_SMI;
+            events.smi.latched_init = cpu_interrupt_request(cs) &
+                CPU_INTERRUPT_INIT;
             cpu_reset_interrupt(cs, CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
+            cpu_mutex_unlock(cs);
         } else {
             /* Keep these in cs->interrupt_request.  */
             events.smi.pending = 0;
@@ -3001,12 +3004,12 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     CPUX86State *env = &x86_cpu->env;
     int ret;
 
+    cpu_mutex_lock(cpu);
+
     /* Inject NMI */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
-        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
-            qemu_mutex_lock_iothread();
+    if (cpu_interrupt_request(cpu) & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
+        if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
-            qemu_mutex_unlock_iothread();
             DPRINTF("injected NMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
             if (ret < 0) {
@@ -3014,10 +3017,8 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
                         strerror(-ret));
             }
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
-            qemu_mutex_lock_iothread();
+        if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_SMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
-            qemu_mutex_unlock_iothread();
             DPRINTF("injected SMI\n");
             ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
             if (ret < 0) {
@@ -3028,19 +3029,21 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     }
 
     if (!kvm_pic_in_kernel()) {
+        cpu_mutex_unlock(cpu);
         qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cpu);
     }
 
     /* Force the VCPU out of its inner loop to process any INIT requests
      * or (for userspace APIC, but it is cheap to combine the checks here)
      * pending TPR access reports.
      */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
-        if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if (cpu_interrupt_request(cpu) & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) &&
             !(env->hflags & HF_SMM_MASK)) {
             cpu->exit_request = 1;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_TPR) {
             cpu->exit_request = 1;
         }
     }
@@ -3048,7 +3051,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     if (!kvm_pic_in_kernel()) {
         /* Try to inject an interrupt if the guest can accept it */
         if (run->ready_for_interrupt_injection &&
-            (cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+            (cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) &&
             (env->eflags & IF_MASK)) {
             int irq;
 
@@ -3072,7 +3075,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
          * interrupt, request an interrupt window exit.  This will
          * cause a return to userspace as soon as the guest is ready to
          * receive interrupts. */
-        if ((cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD)) {
             run->request_interrupt_window = 1;
         } else {
             run->request_interrupt_window = 0;
@@ -3083,6 +3086,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
 
         qemu_mutex_unlock_iothread();
     }
+    cpu_mutex_unlock(cpu);
 }
 
 MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
@@ -3118,8 +3122,9 @@ int kvm_arch_process_async_events(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
     CPUX86State *env = &cpu->env;
+    int ret;
 
-    if (cs->interrupt_request & CPU_INTERRUPT_MCE) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_MCE) {
         /* We must not raise CPU_INTERRUPT_MCE if it's not supported. */
         assert(env->mcg_cap);
 
@@ -3142,7 +3147,7 @@ int kvm_arch_process_async_events(CPUState *cs)
         }
     }
 
-    if ((cs->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
         kvm_cpu_synchronize_state(cs);
         do_cpu_init(cpu);
@@ -3152,27 +3157,30 @@ int kvm_arch_process_async_events(CPUState *cs)
         return 0;
     }
 
-    if (cs->interrupt_request & CPU_INTERRUPT_POLL) {
+    cpu_mutex_lock(cs);
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
         apic_poll_irq(cpu->apic_state);
     }
-    if (((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if (((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
-        (cs->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (cpu_interrupt_request(cs) & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cs, 0);
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_SIPI) {
         kvm_cpu_synchronize_state(cs);
         do_cpu_sipi(cpu);
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
         kvm_cpu_synchronize_state(cs);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
     }
+    ret = cpu_halted(cs);
+    cpu_mutex_unlock(cs);
 
-    return cpu_halted(cs);
+    return ret;
 }
 
 static int kvm_handle_halt(X86CPU *cpu)
@@ -3180,12 +3188,15 @@ static int kvm_handle_halt(X86CPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUX86State *env = &cpu->env;
 
-    if (!((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    cpu_mutex_lock(cs);
+    if (!((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
-        !(cs->interrupt_request & CPU_INTERRUPT_NMI)) {
+        !(cpu_interrupt_request(cs) & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cs, 1);
+        cpu_mutex_unlock(cs);
         return EXCP_HLT;
     }
+    cpu_mutex_unlock(cs);
 
     return 0;
 }
diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index a6d33e55d8..ebf3643ba7 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -316,7 +316,7 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
     if (int_ctl & V_IRQ_MASK) {
         CPUState *cs = CPU(x86_env_get_cpu(env));
 
-        cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_VIRQ);
     }
 
     /* maybe we need to inject an event */
@@ -674,7 +674,7 @@ void do_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
                        env->vm_vmcb + offsetof(struct vmcb, control.int_ctl));
     int_ctl &= ~(V_TPR_MASK | V_IRQ_MASK);
     int_ctl |= env->v_tpr & V_TPR_MASK;
-    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_VIRQ) {
         int_ctl |= V_IRQ_MASK;
     }
     x86_stl_phys(cs,
diff --git a/target/i386/whpx-all.c b/target/i386/whpx-all.c
index 9673bdc219..5456f26d8c 100644
--- a/target/i386/whpx-all.c
+++ b/target/i386/whpx-all.c
@@ -693,14 +693,16 @@ static int whpx_handle_halt(CPUState *cpu)
     int ret = 0;
 
     qemu_mutex_lock_iothread();
-    if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+    cpu_mutex_lock(cpu);
+    if (!((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) &&
           (env->eflags & IF_MASK)) &&
-        !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        !(cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI)) {
         cpu->exception_index = EXCP_HLT;
         cpu_halted_set(cpu, true);
         ret = 1;
     }
     qemu_mutex_unlock_iothread();
+    cpu_mutex_unlock(cpu);
 
     return ret;
 }
@@ -724,17 +726,20 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
     qemu_mutex_lock_iothread();
 
+    cpu_mutex_lock(cpu);
+
     /* Inject NMI */
     if (!vcpu->interruption_pending &&
-        cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
-        if (cpu->interrupt_request & CPU_INTERRUPT_NMI) {
+        cpu_interrupt_request(cpu) & (CPU_INTERRUPT_NMI |
+                                             CPU_INTERRUPT_SMI)) {
+        if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
             vcpu->interruptable = false;
             new_int.InterruptionType = WHvX64PendingNmi;
             new_int.InterruptionPending = 1;
             new_int.InterruptionVector = 2;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
+        if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_SMI) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
         }
     }
@@ -743,12 +748,12 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
      * Force the VCPU out of its inner loop to process any INIT requests or
      * commit pending TPR access.
      */
-    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
-        if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    if (cpu_interrupt_request(cpu) & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) &&
             !(env->hflags & HF_SMM_MASK)) {
             cpu->exit_request = 1;
         }
-        if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_TPR) {
             cpu->exit_request = 1;
         }
     }
@@ -757,7 +762,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     if (!vcpu->interruption_pending &&
         vcpu->interruptable && (env->eflags & IF_MASK)) {
         assert(!new_int.InterruptionPending);
-        if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
             irq = cpu_get_pic_interrupt(env);
             if (irq >= 0) {
@@ -787,7 +792,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
     /* Update the state of the interrupt delivery notification */
     if (!vcpu->window_registered &&
-        cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) {
         reg_values[reg_count].DeliverabilityNotifications.InterruptNotification
             = 1;
         vcpu->window_registered = 1;
@@ -796,6 +801,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
     }
 
     qemu_mutex_unlock_iothread();
+    cpu_mutex_unlock(cpu);
 
     if (reg_count) {
         hr = whp_dispatch.WHvSetVirtualProcessorRegisters(
@@ -841,7 +847,9 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
 
-    if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
+    cpu_mutex_lock(cpu);
+
+    if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
 
         do_cpu_init(x86_cpu);
@@ -849,25 +857,25 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
         vcpu->interruptable = true;
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
         apic_poll_irq(x86_cpu->apic_state);
     }
 
-    if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if (((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) &&
          (env->eflags & IF_MASK)) ||
-        (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        (cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI)) {
         cpu_halted_set(cpu, false);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_SIPI) {
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
         }
         do_cpu_sipi(x86_cpu);
     }
 
-    if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+    if (cpu_interrupt_request(cpu) & CPU_INTERRUPT_TPR) {
         cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
         if (!cpu->vcpu_dirty) {
             whpx_get_registers(cpu);
@@ -876,6 +884,8 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
                                       env->tpr_access_type);
     }
 
+    cpu_mutex_unlock(cpu);
+
     return;
 }
 
@@ -1350,7 +1360,7 @@ static void whpx_memory_init(void)
 
 static void whpx_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
-- 
2.17.1
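For readers landing mid-series: the mechanical change in every hunk above is `cs->interrupt_request` → `cpu_interrupt_request(cs)`, i.e. a locked accessor replaces a direct field read, with explicit `cpu_mutex_lock`/`cpu_mutex_unlock` pairs added where several accesses must form one critical section. Below is a minimal, self-contained sketch of that accessor pattern. It is illustrative only: the real helpers are defined in the series' earlier `qom/cpu` patches, and QEMU tracks lock recursion differently than the simple owner field used here.

```c
#include <pthread.h>
#include <stdbool.h>

/* Simplified stand-in for QEMU's CPUState; field names follow the series,
 * but the owner-tracking scheme below is illustrative only. */
typedef struct CPUState {
    pthread_mutex_t lock;         /* the per-CPU lock this series adds */
    pthread_t lock_owner;         /* valid only while lock_held is true */
    bool lock_held;
    unsigned int interrupt_request;
} CPUState;

static void cpu_mutex_lock(CPUState *cpu)
{
    pthread_mutex_lock(&cpu->lock);
    cpu->lock_owner = pthread_self();
    cpu->lock_held = true;
}

static void cpu_mutex_unlock(CPUState *cpu)
{
    cpu->lock_held = false;
    pthread_mutex_unlock(&cpu->lock);
}

/* True iff the calling thread already holds this CPU's lock. */
static bool cpu_mutex_locked(CPUState *cpu)
{
    return cpu->lock_held && pthread_equal(cpu->lock_owner, pthread_self());
}

/* Locked read, replacing direct "cpu->interrupt_request" loads. Taking the
 * lock only when not already held is what lets callers such as
 * kvm_handle_halt() wrap several accesses in one explicit
 * cpu_mutex_lock/unlock critical section without deadlocking. */
static unsigned int cpu_interrupt_request(CPUState *cpu)
{
    unsigned int ret;

    if (cpu_mutex_locked(cpu)) {
        return cpu->interrupt_request;
    }
    cpu_mutex_lock(cpu);
    ret = cpu->interrupt_request;
    cpu_mutex_unlock(cpu);
    return ret;
}

/* Locked OR, replacing "cpu->interrupt_request |= mask". */
static void cpu_interrupt_request_or(CPUState *cpu, unsigned int mask)
{
    if (cpu_mutex_locked(cpu)) {
        cpu->interrupt_request |= mask;
        return;
    }
    cpu_mutex_lock(cpu);
    cpu->interrupt_request |= mask;
    cpu_mutex_unlock(cpu);
}
```

With helpers of this shape, most hunks in the series reduce to mechanical substitutions; the remaining explicit lock/unlock pairs mark the places where a read-test-write sequence must be atomic.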

^ permalink raw reply related	[flat|nested] 118+ messages in thread

* [Qemu-devel] [RFC v3 31/56] ppc: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (29 preceding siblings ...)
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 30/56] i386: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 32/56] sh4: " Emilio G. Cota
                   ` (25 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, David Gibson, Alexander Graf, qemu-ppc

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/ppc/ppc.c                    |  2 +-
 target/ppc/excp_helper.c        |  2 +-
 target/ppc/kvm.c                |  6 ++++--
 target/ppc/translate_init.inc.c | 14 +++++++-------
 4 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index d1a5a0b877..bc1cefa13f 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -91,7 +91,7 @@ void ppc_set_irq(PowerPCCPU *cpu, int n_IRQ, int level)
 
     LOG_IRQ("%s: %p n_IRQ %d level %d => pending %08" PRIx32
                 "req %08x\n", __func__, env, n_IRQ, level,
-                env->pending_interrupts, CPU(cpu)->interrupt_request);
+                env->pending_interrupts, cpu_interrupt_request(CPU(cpu)));
 
     if (locked) {
         qemu_mutex_unlock_iothread();
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 737c9c72be..75a434f46b 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -753,7 +753,7 @@ static void ppc_hw_interrupt(CPUPPCState *env)
 
     qemu_log_mask(CPU_LOG_INT, "%s: %p pending %08x req %08x me %d ee %d\n",
                   __func__, env, env->pending_interrupts,
-                  cs->interrupt_request, (int)msr_me, (int)msr_ee);
+                  cpu_interrupt_request(cs), (int)msr_me, (int)msr_ee);
 #endif
     /* External reset */
     if (env->pending_interrupts & (1 << PPC_INTERRUPT_RESET)) {
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index dc6b8d5e9e..1e1a69f49e 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -1334,7 +1334,7 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
      * interrupt, reset, etc) in PPC-specific env->irq_input_state. */
     if (!cap_interrupt_level &&
         run->ready_for_interrupt_injection &&
-        (cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+        (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
         (env->irq_input_state & (1<<PPC_INPUT_INT)))
     {
         /* For now KVM disregards the 'irq' argument. However, in the
@@ -1376,10 +1376,12 @@ static int kvmppc_handle_halt(PowerPCCPU *cpu)
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
 
-    if (!(cs->interrupt_request & CPU_INTERRUPT_HARD) && (msr_ee)) {
+    cpu_mutex_lock(cs);
+    if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) && (msr_ee)) {
         cpu_halted_set(cs, 1);
         cs->exception_index = EXCP_HLT;
     }
+    cpu_mutex_unlock(cs);
 
     return 0;
 }
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 0e423bea69..6827db14b6 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8446,7 +8446,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         if ((env->pending_interrupts & (1u << PPC_INTERRUPT_EXT)) &&
@@ -8470,7 +8470,7 @@ static bool cpu_has_work_POWER7(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -8600,7 +8600,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         if ((env->pending_interrupts & (1u << PPC_INTERRUPT_EXT)) &&
@@ -8632,7 +8632,7 @@ static bool cpu_has_work_POWER8(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -8792,7 +8792,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     CPUPPCState *env = &cpu->env;
 
     if (cpu_halted(cs)) {
-        if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
         }
         /* External Exception */
@@ -8825,7 +8825,7 @@ static bool cpu_has_work_POWER9(CPUState *cs)
         }
         return false;
     } else {
-        return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+        return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
     }
 }
 
@@ -10236,7 +10236,7 @@ static bool ppc_cpu_has_work(CPUState *cs)
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
-    return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
+    return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
 }
 
 /* CPUClass::reset() */
-- 
2.17.1
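One subtlety in the `kvmppc_handle_halt()` hunk above: the conversion is not just a locked read — an explicit `cpu_mutex_lock`/`cpu_mutex_unlock` pair now spans both the interrupt test and the `cpu_halted_set()` write, so the two act as one critical section. A toy model of why that matters (names and fields are illustrative, not QEMU's actual code):

```c
#include <pthread.h>

#define CPU_INTERRUPT_HARD 0x2

typedef struct CPUState {
    pthread_mutex_t lock;          /* stands in for the per-CPU lock */
    unsigned int interrupt_request;
    int halted;
} CPUState;

/* Model of the halt path: sleep only when no hard interrupt is pending,
 * with the test and the write inside a single critical section. */
static void handle_halt(CPUState *cs)
{
    pthread_mutex_lock(&cs->lock);
    if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
        cs->halted = 1;
    }
    pthread_mutex_unlock(&cs->lock);
}

/* Model of posting an interrupt: set the request bit and wake the CPU
 * atomically. Without the shared lock, this could run between
 * handle_halt()'s test and its "halted = 1" write, and the vCPU would go
 * to sleep with an interrupt already pending — the lost-wakeup race the
 * per-CPU lock closes. */
static void post_interrupt(CPUState *cs, unsigned int mask)
{
    pthread_mutex_lock(&cs->lock);
    cs->interrupt_request |= mask;
    cs->halted = 0;
    pthread_mutex_unlock(&cs->lock);
}
```

The same test-then-set shape recurs in the whpx and sparc64 hunks elsewhere in the series, which is why those conversions also grow explicit lock/unlock pairs rather than plain accessor calls.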


* [Qemu-devel] [RFC v3 32/56] sh4: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (30 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 31/56] ppc: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:28   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 33/56] cris: " Emilio G. Cota
                   ` (24 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Aurelien Jarno

Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sh4/cpu.c    | 2 +-
 target/sh4/helper.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index b9f393b7c7..58ea212f53 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -45,7 +45,7 @@ static void superh_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
 
 static bool superh_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 /* CPUClass::reset() */
diff --git a/target/sh4/helper.c b/target/sh4/helper.c
index 2ff0cf4060..8463da5bc8 100644
--- a/target/sh4/helper.c
+++ b/target/sh4/helper.c
@@ -83,7 +83,7 @@ void superh_cpu_do_interrupt(CPUState *cs)
 {
     SuperHCPU *cpu = SUPERH_CPU(cs);
     CPUSH4State *env = &cpu->env;
-    int do_irq = cs->interrupt_request & CPU_INTERRUPT_HARD;
+    int do_irq = cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
     int do_exp, irq_vector = cs->exception_index;
 
     /* prioritize exceptions over interrupts */
-- 
2.17.1


* [Qemu-devel] [RFC v3 33/56] cris: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (31 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 32/56] sh4: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:29   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 34/56] hppa: " Emilio G. Cota
                   ` (23 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/cris/cpu.c    | 2 +-
 target/cris/helper.c | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/cris/cpu.c b/target/cris/cpu.c
index a23aba2688..3cdba581e6 100644
--- a/target/cris/cpu.c
+++ b/target/cris/cpu.c
@@ -37,7 +37,7 @@ static void cris_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool cris_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 /* CPUClass::reset() */
diff --git a/target/cris/helper.c b/target/cris/helper.c
index d2ec349191..e3fa19363f 100644
--- a/target/cris/helper.c
+++ b/target/cris/helper.c
@@ -116,7 +116,7 @@ int cris_cpu_handle_mmu_fault(CPUState *cs, vaddr address, int size, int rw,
     if (r > 0) {
         qemu_log_mask(CPU_LOG_MMU,
                 "%s returns %d irqreq=%x addr=%" VADDR_PRIx " phy=%x vec=%x"
-                " pc=%x\n", __func__, r, cs->interrupt_request, address,
+                " pc=%x\n", __func__, r, cpu_interrupt_request(cs), address,
                 res.phy, res.bf_vec, env->pc);
     }
     return r;
@@ -130,7 +130,7 @@ void crisv10_cpu_do_interrupt(CPUState *cs)
 
     D_LOG("exception index=%d interrupt_req=%d\n",
           cs->exception_index,
-          cs->interrupt_request);
+          cpu_interrupt_request(cs));
 
     if (env->dslot) {
         /* CRISv10 never takes interrupts while in a delay-slot.  */
@@ -192,7 +192,7 @@ void cris_cpu_do_interrupt(CPUState *cs)
 
     D_LOG("exception index=%d interrupt_req=%d\n",
           cs->exception_index,
-          cs->interrupt_request);
+          cpu_interrupt_request(cs));
 
     switch (cs->exception_index) {
     case EXCP_BREAK:
-- 
2.17.1


* [Qemu-devel] [RFC v3 34/56] hppa: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (32 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 33/56] cris: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:29   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 35/56] lm32: " Emilio G. Cota
                   ` (22 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Richard Henderson

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/hppa/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 00bf444620..1ab4e62850 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -60,7 +60,7 @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
 
 static bool hppa_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void hppa_cpu_disas_set_info(CPUState *cs, disassemble_info *info)
-- 
2.17.1


* [Qemu-devel] [RFC v3 35/56] lm32: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (33 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 34/56] hppa: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:29   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 36/56] m68k: " Emilio G. Cota
                   ` (21 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Michael Walle

Cc: Michael Walle <michael@walle.cc>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/lm32/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/lm32/cpu.c b/target/lm32/cpu.c
index b7499cb627..1508bb6199 100644
--- a/target/lm32/cpu.c
+++ b/target/lm32/cpu.c
@@ -101,7 +101,7 @@ static void lm32_cpu_init_cfg_reg(LM32CPU *cpu)
 
 static bool lm32_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 /* CPUClass::reset() */
-- 
2.17.1


* [Qemu-devel] [RFC v3 36/56] m68k: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (34 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 35/56] lm32: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:29   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 37/56] mips: " Emilio G. Cota
                   ` (20 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Laurent Vivier

Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/m68k/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index 582e3a73b3..99a7eb4340 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -34,7 +34,7 @@ static void m68k_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool m68k_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void m68k_set_feature(CPUM68KState *env, int feature)
-- 
2.17.1


* [Qemu-devel] [RFC v3 37/56] mips: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (35 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 36/56] m68k: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:30   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 38/56] nios: " Emilio G. Cota
                   ` (19 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Aurelien Jarno, Aleksandar Markovic, James Hogan

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: James Hogan <jhogan@kernel.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/mips/cpu.c | 6 +++---
 target/mips/kvm.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index 497706b669..e30aec6851 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -60,7 +60,7 @@ static bool mips_cpu_has_work(CPUState *cs)
     /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
        interrupts wake-up the CPU, however most of the implementations only
        check for interrupts that can be taken. */
-    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
         cpu_mips_hw_interrupts_pending(env)) {
         if (cpu_mips_hw_interrupts_enabled(env) ||
             (env->insn_flags & ISA_MIPS32R6)) {
@@ -72,7 +72,7 @@ static bool mips_cpu_has_work(CPUState *cs)
     if (env->CP0_Config3 & (1 << CP0C3_MT)) {
         /* The QEMU model will issue an _WAKE request whenever the CPUs
            should be woken up.  */
-        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
+        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_WAKE) {
             has_work = true;
         }
 
@@ -82,7 +82,7 @@ static bool mips_cpu_has_work(CPUState *cs)
     }
     /* MIPS Release 6 has the ability to halt the CPU.  */
     if (env->CP0_Config5 & (1 << CP0C5_VP)) {
-        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
+        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_WAKE) {
             has_work = true;
         }
         if (!mips_vp_active(env)) {
diff --git a/target/mips/kvm.c b/target/mips/kvm.c
index 0b177a7577..568c3d8f4a 100644
--- a/target/mips/kvm.c
+++ b/target/mips/kvm.c
@@ -135,7 +135,7 @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 
     qemu_mutex_lock_iothread();
 
-    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
             cpu_mips_io_interrupts_pending(cpu)) {
         intr.cpu = -1;
         intr.irq = 2;
-- 
2.17.1


* [Qemu-devel] [RFC v3 38/56] nios: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (36 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 37/56] mips: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:30   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 39/56] s390x: " Emilio G. Cota
                   ` (18 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Chris Wulff, Marek Vasut

Cc: Chris Wulff <crwulff@gmail.com>
Cc: Marek Vasut <marex@denx.de>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/nios2/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/nios2/cpu.c b/target/nios2/cpu.c
index fbfaa2ce26..49a75414d3 100644
--- a/target/nios2/cpu.c
+++ b/target/nios2/cpu.c
@@ -36,7 +36,7 @@ static void nios2_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool nios2_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 /* CPUClass::reset() */
-- 
2.17.1


* [Qemu-devel] [RFC v3 39/56] s390x: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (37 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 38/56] nios: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:30   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 40/56] alpha: " Emilio G. Cota
                   ` (17 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Cornelia Huck, Christian Borntraeger,
	Alexander Graf, Richard Henderson, David Hildenbrand, qemu-s390x

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/intc/s390_flic.c | 2 +-
 target/s390x/cpu.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index bfb5cf1d07..d944824e67 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -189,7 +189,7 @@ static void qemu_s390_flic_notify(uint32_t type)
     CPU_FOREACH(cs) {
         S390CPU *cpu = S390_CPU(cs);
 
-        cs->interrupt_request |= CPU_INTERRUPT_HARD;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_HARD);
 
         /* ignore CPUs that are not sleeping */
         if (s390_cpu_get_state(cpu) != S390_CPU_STATE_OPERATING &&
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 956d4e1d18..1f91df57bc 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -65,7 +65,7 @@ static bool s390_cpu_has_work(CPUState *cs)
         return false;
     }
 
-    if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
+    if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
         return false;
     }
 
-- 
2.17.1


* [Qemu-devel] [RFC v3 40/56] alpha: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (38 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 39/56] s390x: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:31   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 41/56] moxie: " Emilio G. Cota
                   ` (16 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Richard Henderson

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/alpha/cpu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index a953897fcc..4e8965fb6c 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -42,10 +42,10 @@ static bool alpha_cpu_has_work(CPUState *cs)
        assume that if a CPU really wants to stay asleep, it will mask
        interrupts at the chipset level, which will prevent these bits
        from being set in the first place.  */
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD
-                                    | CPU_INTERRUPT_TIMER
-                                    | CPU_INTERRUPT_SMP
-                                    | CPU_INTERRUPT_MCHK);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD
+                                        | CPU_INTERRUPT_TIMER
+                                        | CPU_INTERRUPT_SMP
+                                        | CPU_INTERRUPT_MCHK);
 }
 
 static void alpha_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
-- 
2.17.1


* [Qemu-devel] [RFC v3 41/56] moxie: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (39 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 40/56] alpha: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:31   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 42/56] sparc: " Emilio G. Cota
                   ` (15 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Anthony Green

Cc: Anthony Green <green@moxielogic.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/moxie/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/moxie/cpu.c b/target/moxie/cpu.c
index 8d67eb6727..bad92cfc61 100644
--- a/target/moxie/cpu.c
+++ b/target/moxie/cpu.c
@@ -33,7 +33,7 @@ static void moxie_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool moxie_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & CPU_INTERRUPT_HARD;
+    return cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD;
 }
 
 static void moxie_cpu_reset(CPUState *s)
-- 
2.17.1


* [Qemu-devel] [RFC v3 42/56] sparc: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (40 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 41/56] moxie: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:32   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 43/56] openrisc: " Emilio G. Cota
                   ` (14 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Mark Cave-Ayland, Artyom Tarasenko

Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/sparc64/sparc64.c | 19 +++++++++++++------
 target/sparc/cpu.c   |  2 +-
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/hw/sparc64/sparc64.c b/hw/sparc64/sparc64.c
index 372bbd4f5b..640c4b6a30 100644
--- a/hw/sparc64/sparc64.c
+++ b/hw/sparc64/sparc64.c
@@ -56,11 +56,13 @@ void cpu_check_irqs(CPUSPARCState *env)
     /* The bit corresponding to psrpil is (1<< psrpil), the next bit
        is (2 << psrpil). */
     if (pil < (2 << env->psrpil)) {
-        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+        cpu_mutex_lock(cs);
+        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
             trace_sparc64_cpu_check_irqs_reset_irq(env->interrupt_index);
             env->interrupt_index = 0;
             cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
         }
+        cpu_mutex_unlock(cs);
         return;
     }
 
@@ -87,11 +89,16 @@ void cpu_check_irqs(CPUSPARCState *env)
                 break;
             }
         }
-    } else if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
-        trace_sparc64_cpu_check_irqs_disabled(pil, env->pil_in, env->softint,
-                                              env->interrupt_index);
-        env->interrupt_index = 0;
-        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
+    } else {
+        cpu_mutex_lock(cs);
+        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
+            trace_sparc64_cpu_check_irqs_disabled(pil, env->pil_in,
+                                                  env->softint,
+                                                  env->interrupt_index);
+            env->interrupt_index = 0;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
+        }
+        cpu_mutex_unlock(cs);
     }
 }
 
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 0f090ece54..88427283c1 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -709,7 +709,7 @@ static bool sparc_cpu_has_work(CPUState *cs)
     SPARCCPU *cpu = SPARC_CPU(cs);
     CPUSPARCState *env = &cpu->env;
 
-    return (cs->interrupt_request & CPU_INTERRUPT_HARD) &&
+    return (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
            cpu_interrupts_enabled(env);
 }
 
-- 
2.17.1

* [Qemu-devel] [RFC v3 43/56] openrisc: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (41 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 42/56] sparc: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:32   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 44/56] unicore32: " Emilio G. Cota
                   ` (13 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Stafford Horne

Cc: Stafford Horne <shorne@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 hw/openrisc/cputimer.c | 2 +-
 target/openrisc/cpu.c  | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/openrisc/cputimer.c b/hw/openrisc/cputimer.c
index 850f88761c..739404e4f5 100644
--- a/hw/openrisc/cputimer.c
+++ b/hw/openrisc/cputimer.c
@@ -102,7 +102,7 @@ static void openrisc_timer_cb(void *opaque)
         CPUState *cs = CPU(cpu);
 
         cpu->env.ttmr |= TTMR_IP;
-        cs->interrupt_request |= CPU_INTERRUPT_TIMER;
+        cpu_interrupt_request_or(cs, CPU_INTERRUPT_TIMER);
     }
 
     switch (cpu->env.ttmr & TTMR_M) {
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index fb7cb5c507..cdbc9353b7 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -32,8 +32,8 @@ static void openrisc_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool openrisc_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD |
-                                    CPU_INTERRUPT_TIMER);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD |
+                                        CPU_INTERRUPT_TIMER);
 }
 
 static void openrisc_disas_set_info(CPUState *cpu, disassemble_info *info)
-- 
2.17.1

* [Qemu-devel] [RFC v3 44/56] unicore32: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (42 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 43/56] openrisc: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:33   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 45/56] microblaze: " Emilio G. Cota
                   ` (12 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Guan Xuetao

Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/unicore32/cpu.c     | 2 +-
 target/unicore32/softmmu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/unicore32/cpu.c b/target/unicore32/cpu.c
index 2b49d1ca40..65c5334551 100644
--- a/target/unicore32/cpu.c
+++ b/target/unicore32/cpu.c
@@ -29,7 +29,7 @@ static void uc32_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool uc32_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request &
+    return cpu_interrupt_request(cs) &
         (CPU_INTERRUPT_HARD | CPU_INTERRUPT_EXITTB);
 }
 
diff --git a/target/unicore32/softmmu.c b/target/unicore32/softmmu.c
index 00c7e0d028..f58e2361e0 100644
--- a/target/unicore32/softmmu.c
+++ b/target/unicore32/softmmu.c
@@ -119,7 +119,7 @@ void uc32_cpu_do_interrupt(CPUState *cs)
     /* The PC already points to the proper instruction.  */
     env->regs[30] = env->regs[31];
     env->regs[31] = addr;
-    cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
+    cpu_interrupt_request_or(cs, CPU_INTERRUPT_EXITTB);
 }
 
 static int get_phys_addr_ucv2(CPUUniCore32State *env, uint32_t address,
-- 
2.17.1

* [Qemu-devel] [RFC v3 45/56] microblaze: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (43 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 44/56] unicore32: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:33   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 46/56] accel/tcg: " Emilio G. Cota
                   ` (11 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Edgar E. Iglesias

Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/microblaze/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index 9b546a2c18..206fdd8651 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -84,7 +84,7 @@ static void mb_cpu_set_pc(CPUState *cs, vaddr value)
 
 static bool mb_cpu_has_work(CPUState *cs)
 {
-    return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
+    return cpu_interrupt_request(cs) & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
 }
 
 #ifndef CONFIG_USER_ONLY
-- 
2.17.1

* [Qemu-devel] [RFC v3 46/56] accel/tcg: convert to cpu_interrupt_request
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (44 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 45/56] microblaze: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-21 13:34   ` Richard Henderson
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 47/56] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
                   ` (10 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cpu-exec.c      | 14 +++++++-------
 accel/tcg/tcg-all.c       | 12 +++++++++---
 accel/tcg/translate-all.c |  2 +-
 3 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index f37c9b1e94..25027a1fc6 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -428,7 +428,7 @@ static inline bool cpu_handle_halt_locked(CPUState *cpu)
 
     if (cpu_halted(cpu)) {
 #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
-        if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
+        if ((cpu_interrupt_request(cpu) & CPU_INTERRUPT_POLL)
             && replay_interrupt()) {
             X86CPU *x86_cpu = X86_CPU(cpu);
 
@@ -540,16 +540,16 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
      */
     atomic_mb_set(&cpu->icount_decr.u16.high, 0);
 
-    if (unlikely(atomic_read(&cpu->interrupt_request))) {
+    if (unlikely(cpu_interrupt_request(cpu))) {
         int interrupt_request;
         qemu_mutex_lock_iothread();
-        interrupt_request = cpu->interrupt_request;
+        interrupt_request = cpu_interrupt_request(cpu);
         if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
             /* Mask out external interrupts for this step. */
             interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
         }
         if (interrupt_request & CPU_INTERRUPT_DEBUG) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
             cpu->exception_index = EXCP_DEBUG;
             qemu_mutex_unlock_iothread();
             return true;
@@ -558,7 +558,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             /* Do nothing */
         } else if (interrupt_request & CPU_INTERRUPT_HALT) {
             replay_interrupt();
-            cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
             cpu_halted_set(cpu, 1);
             cpu->exception_index = EXCP_HLT;
             qemu_mutex_unlock_iothread();
@@ -595,10 +595,10 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             }
             /* The target hook may have updated the 'cpu->interrupt_request';
              * reload the 'interrupt_request' value */
-            interrupt_request = cpu->interrupt_request;
+            interrupt_request = cpu_interrupt_request(cpu);
         }
         if (interrupt_request & CPU_INTERRUPT_EXITTB) {
-            cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_EXITTB);
             /* ensure that no TB jump will be modified as
                the program flow was changed */
             *last_tb = NULL;
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
index 3d25bdcc17..4e2fe70350 100644
--- a/accel/tcg/tcg-all.c
+++ b/accel/tcg/tcg-all.c
@@ -39,10 +39,16 @@ unsigned long tcg_tb_size;
 static void tcg_handle_interrupt(CPUState *cpu, int mask)
 {
     int old_mask;
-    g_assert(qemu_mutex_iothread_locked());
 
-    old_mask = cpu->interrupt_request;
-    cpu->interrupt_request |= mask;
+    if (!cpu_mutex_locked(cpu)) {
+        cpu_mutex_lock(cpu);
+        old_mask = cpu_interrupt_request(cpu);
+        cpu_interrupt_request_or(cpu, mask);
+        cpu_mutex_unlock(cpu);
+    } else {
+        old_mask = cpu_interrupt_request(cpu);
+        cpu_interrupt_request_or(cpu, mask);
+    }
 
     /*
      * If called from iothread context, wake the target cpu in
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 356dcd0948..038d82fdb5 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -2340,7 +2340,7 @@ void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf)
 void cpu_interrupt(CPUState *cpu, int mask)
 {
     g_assert(qemu_mutex_iothread_locked());
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
     atomic_set(&cpu->icount_decr.u16.high, -1);
 }
 
-- 
2.17.1

* [Qemu-devel] [RFC v3 47/56] cpu: call .cpu_has_work with the CPU lock held
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (45 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 46/56] accel/tcg: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work Emilio G. Cota
                   ` (9 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index cd66b8828a..ca7d92c360 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -784,9 +784,16 @@ const char *parse_cpu_model(const char *cpu_model);
 static inline bool cpu_has_work(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
+    bool ret;
 
     g_assert(cc->has_work);
-    return cc->has_work(cpu);
+    if (cpu_mutex_locked(cpu)) {
+        return cc->has_work(cpu);
+    }
+    cpu_mutex_lock(cpu);
+    ret = cc->has_work(cpu);
+    cpu_mutex_unlock(cpu);
+    return ret;
 }
 
 /**
-- 
2.17.1

* [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (46 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 47/56] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  6:58   ` Paolo Bonzini
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 49/56] mips: " Emilio G. Cota
                   ` (8 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, David Gibson, Alexander Graf, qemu-ppc

Soon we will call cpu_has_work without the BQL.

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/ppc/translate_init.inc.c | 77 +++++++++++++++++++++++++++++++--
 1 file changed, 73 insertions(+), 4 deletions(-)

diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 6827db14b6..a206715873 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -18,6 +18,7 @@
  * License along with this library; if not, see <http://www.gnu.org/licenses/>.
  */
 
+#include "qemu/main-loop.h"
 #include "disas/bfd.h"
 #include "exec/gdbstub.h"
 #include "kvm_ppc.h"
@@ -8440,11 +8441,13 @@ static bool ppc_pvr_match_power7(PowerPCCPUClass *pcc, uint32_t pvr)
     return false;
 }
 
-static bool cpu_has_work_POWER7(CPUState *cs)
+static bool cpu_has_work_POWER7_locked(CPUState *cs)
 {
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8474,6 +8477,21 @@ static bool cpu_has_work_POWER7(CPUState *cs)
     }
 }
 
+static bool cpu_has_work_POWER7(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = cpu_has_work_POWER7_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return cpu_has_work_POWER7_locked(cs);
+}
+
 POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(oc);
@@ -8594,11 +8612,13 @@ static bool ppc_pvr_match_power8(PowerPCCPUClass *pcc, uint32_t pvr)
     return false;
 }
 
-static bool cpu_has_work_POWER8(CPUState *cs)
+static bool cpu_has_work_POWER8_locked(CPUState *cs)
 {
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8636,6 +8656,21 @@ static bool cpu_has_work_POWER8(CPUState *cs)
     }
 }
 
+static bool cpu_has_work_POWER8(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = cpu_has_work_POWER8_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return cpu_has_work_POWER8_locked(cs);
+}
+
 POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(oc);
@@ -8786,11 +8821,13 @@ static bool ppc_pvr_match_power9(PowerPCCPUClass *pcc, uint32_t pvr)
     return false;
 }
 
-static bool cpu_has_work_POWER9(CPUState *cs)
+static bool cpu_has_work_POWER9_locked(CPUState *cs)
 {
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     if (cpu_halted(cs)) {
         if (!(cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD)) {
             return false;
@@ -8829,6 +8866,21 @@ static bool cpu_has_work_POWER9(CPUState *cs)
     }
 }
 
+static bool cpu_has_work_POWER9(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = cpu_has_work_POWER9_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return cpu_has_work_POWER9_locked(cs);
+}
+
 POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(oc);
@@ -10231,14 +10283,31 @@ static void ppc_cpu_set_pc(CPUState *cs, vaddr value)
     cpu->env.nip = value;
 }
 
-static bool ppc_cpu_has_work(CPUState *cs)
+static bool ppc_cpu_has_work_locked(CPUState *cs)
 {
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return msr_ee && (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD);
 }
 
+static bool ppc_cpu_has_work(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = ppc_cpu_has_work_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return ppc_cpu_has_work_locked(cs);
+}
+
 /* CPUClass::reset() */
 static void ppc_cpu_reset(CPUState *s)
 {
-- 
2.17.1

* [Qemu-devel] [RFC v3 49/56] mips: acquire the BQL in cpu_has_work
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (47 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 50/56] s390: " Emilio G. Cota
                   ` (7 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Aurelien Jarno, Aleksandar Markovic

Soon we will call cpu_has_work without the BQL.

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/mips/cpu.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index e30aec6851..ea30c3c474 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -19,6 +19,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
 #include "qapi/error.h"
 #include "cpu.h"
 #include "internal.h"
@@ -51,12 +52,14 @@ static void mips_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
     env->hflags |= tb->flags & MIPS_HFLAG_BMASK;
 }
 
-static bool mips_cpu_has_work(CPUState *cs)
+static bool mips_cpu_has_work_locked(CPUState *cs)
 {
     MIPSCPU *cpu = MIPS_CPU(cs);
     CPUMIPSState *env = &cpu->env;
     bool has_work = false;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
        interrupts wake-up the CPU, however most of the implementations only
        check for interrupts that can be taken. */
@@ -92,6 +95,21 @@ static bool mips_cpu_has_work(CPUState *cs)
     return has_work;
 }
 
+static bool mips_cpu_has_work(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = mips_cpu_has_work_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return mips_cpu_has_work_locked(cs);
+}
+
 /* CPUClass::reset() */
 static void mips_cpu_reset(CPUState *s)
 {
-- 
2.17.1

* [Qemu-devel] [RFC v3 50/56] s390: acquire the BQL in cpu_has_work
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (48 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 49/56] mips: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 51/56] riscv: " Emilio G. Cota
                   ` (6 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Cornelia Huck, Richard Henderson, Alexander Graf,
	David Hildenbrand, qemu-s390x

Soon we will call cpu_has_work without the BQL.

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Alexander Graf <agraf@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/s390x/cpu.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 1f91df57bc..64c10ad115 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -24,6 +24,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
 #include "qapi/error.h"
 #include "cpu.h"
 #include "internal.h"
@@ -55,10 +56,12 @@ static void s390_cpu_set_pc(CPUState *cs, vaddr value)
     cpu->env.psw.addr = value;
 }
 
-static bool s390_cpu_has_work(CPUState *cs)
+static bool s390_cpu_has_work_locked(CPUState *cs)
 {
     S390CPU *cpu = S390_CPU(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     /* STOPPED cpus can never wake up */
     if (s390_cpu_get_state(cpu) != S390_CPU_STATE_LOAD &&
         s390_cpu_get_state(cpu) != S390_CPU_STATE_OPERATING) {
@@ -72,6 +75,21 @@ static bool s390_cpu_has_work(CPUState *cs)
     return s390_cpu_has_int(cpu);
 }
 
+static bool s390_cpu_has_work(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = s390_cpu_has_work_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return s390_cpu_has_work_locked(cs);
+}
+
 #if !defined(CONFIG_USER_ONLY)
 /* S390CPUClass::load_normal() */
 static void s390_cpu_load_normal(CPUState *s)
-- 
2.17.1

* [Qemu-devel] [RFC v3 51/56] riscv: acquire the BQL in cpu_has_work
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (49 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 50/56] s390: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19 17:24   ` Palmer Dabbelt
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 52/56] sparc: " Emilio G. Cota
                   ` (5 subsequent siblings)
  56 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Michael Clark, Palmer Dabbelt, Sagar Karandikar,
	Bastian Koppelmann

Soon we will call cpu_has_work without the BQL.

Cc: Michael Clark <mjc@sifive.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/riscv/cpu.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index d630e8fd6c..b10995c807 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -18,6 +18,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
 #include "qemu/log.h"
 #include "cpu.h"
 #include "exec/exec-all.h"
@@ -244,11 +245,14 @@ static void riscv_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
     env->pc = tb->pc;
 }
 
-static bool riscv_cpu_has_work(CPUState *cs)
+static bool riscv_cpu_has_work_locked(CPUState *cs)
 {
 #ifndef CONFIG_USER_ONLY
     RISCVCPU *cpu = RISCV_CPU(cs);
     CPURISCVState *env = &cpu->env;
+
+    g_assert(qemu_mutex_iothread_locked());
+
     /*
      * Definition of the WFI instruction requires it to ignore the privilege
      * mode and delegation registers, but respect individual enables
@@ -259,6 +263,21 @@ static bool riscv_cpu_has_work(CPUState *cs)
 #endif
 }
 
+static bool riscv_cpu_has_work(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = riscv_cpu_has_work_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return riscv_cpu_has_work_locked(cs);
+}
+
 void restore_state_to_opc(CPURISCVState *env, TranslationBlock *tb,
                           target_ulong *data)
 {
-- 
2.17.1

* [Qemu-devel] [RFC v3 52/56] sparc: acquire the BQL in cpu_has_work
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (50 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 51/56] riscv: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 53/56] xtensa: " Emilio G. Cota
                   ` (4 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Mark Cave-Ayland, Artyom Tarasenko

Soon we will call cpu_has_work without the BQL.

Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/sparc/cpu.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 88427283c1..854c1733c8 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -18,6 +18,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
 #include "qapi/error.h"
 #include "cpu.h"
 #include "qemu/error-report.h"
@@ -704,15 +705,30 @@ static void sparc_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
     cpu->env.npc = tb->cs_base;
 }
 
-static bool sparc_cpu_has_work(CPUState *cs)
+static bool sparc_cpu_has_work_locked(CPUState *cs)
 {
     SPARCCPU *cpu = SPARC_CPU(cs);
     CPUSPARCState *env = &cpu->env;
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
            cpu_interrupts_enabled(env);
 }
 
+static bool sparc_cpu_has_work(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        qemu_mutex_lock_iothread();
+        ret = sparc_cpu_has_work_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return sparc_cpu_has_work_locked(cs);
+}
+
 static char *sparc_cpu_type_name(const char *cpu_model)
 {
     char *name = g_strdup_printf(SPARC_CPU_TYPE_NAME("%s"), cpu_model);
-- 
2.17.1

* [Qemu-devel] [RFC v3 53/56] xtensa: acquire the BQL in cpu_has_work
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (51 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 52/56] sparc: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 54/56] cpu: protect most CPU state with cpu->lock Emilio G. Cota
                   ` (3 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Max Filippov

Soon we will call cpu_has_work without the BQL.

Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 target/xtensa/cpu.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index d4ca35e6cc..5cb881f89b 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -29,6 +29,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
 #include "qapi/error.h"
 #include "cpu.h"
 #include "qemu-common.h"
@@ -42,17 +43,34 @@ static void xtensa_cpu_set_pc(CPUState *cs, vaddr value)
     cpu->env.pc = value;
 }
 
-static bool xtensa_cpu_has_work(CPUState *cs)
+static bool xtensa_cpu_has_work_locked(CPUState *cs)
 {
 #ifndef CONFIG_USER_ONLY
     XtensaCPU *cpu = XTENSA_CPU(cs);
 
+    g_assert(qemu_mutex_iothread_locked());
+
     return !cpu->env.runstall && cpu->env.pending_irq_level;
 #else
     return true;
 #endif
 }
 
+static bool xtensa_cpu_has_work(CPUState *cs)
+{
+    if (!qemu_mutex_iothread_locked()) {
+        bool ret;
+
+        cpu_mutex_unlock(cs);
+        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cs);
+        ret = xtensa_cpu_has_work_locked(cs);
+        qemu_mutex_unlock_iothread();
+        return ret;
+    }
+    return xtensa_cpu_has_work_locked(cs);
+}
+
 /* CPUClass::reset() */
 static void xtensa_cpu_reset(CPUState *s)
 {
-- 
2.17.1

* [Qemu-devel] [RFC v3 54/56] cpu: protect most CPU state with cpu->lock
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (52 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 53/56] xtensa: " Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 55/56] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
                   ` (2 subsequent siblings)
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

Instead of taking the BQL every time we exit the exec loop,
have a per-CPU lock to serialize accesses to the CPU's state.

Unlike the BQL, this lock is uncontended, so
acquiring it is cheap.

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h |  20 ++--
 cpus.c            | 290 ++++++++++++++++++++++++++++++++--------------
 qom/cpu.c         |  23 ++--
 3 files changed, 221 insertions(+), 112 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index ca7d92c360..b351fe6164 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -286,10 +286,6 @@ struct qemu_work_item;
  * valid under cpu_list_lock.
  * @created: Indicates whether the CPU thread has been successfully created.
  * @interrupt_request: Indicates a pending interrupt request.
- * @halted: Nonzero if the CPU is in suspended state.
- * @stop: Indicates a pending stop request.
- * @stopped: Indicates the CPU has been artificially stopped.
- * @unplug: Indicates a pending CPU unplug request.
  * @crash_occurred: Indicates the OS reported a crash (panic) for this CPU
  * @singlestep_enabled: Flags for single-stepping.
  * @icount_extra: Instructions until next timer event.
@@ -318,6 +314,10 @@ struct qemu_work_item;
  * @lock: Lock to prevent multiple access to per-CPU fields.
  * @cond: Condition variable for per-CPU events.
  * @work_list: List of pending asynchronous work.
+ * @halted: Nonzero if the CPU is in suspended state.
+ * @stop: Indicates a pending stop request.
+ * @stopped: Indicates the CPU has been artificially stopped.
+ * @unplug: Indicates a pending CPU unplug request.
  * @trace_dstate_delayed: Delayed changes to trace_dstate (includes all changes
  *                        to @trace_dstate).
  * @trace_dstate: Dynamic tracing state of events for this vCPU (bitmask).
@@ -341,12 +341,7 @@ struct CPUState {
 #endif
     int thread_id;
     bool running, has_waiter;
-    struct QemuCond *halt_cond;
     bool thread_kicked;
-    bool created;
-    bool stop;
-    bool stopped;
-    bool unplug;
     bool crash_occurred;
     bool exit_request;
     uint32_t cflags_next_tb;
@@ -360,7 +355,13 @@ struct CPUState {
     QemuMutex lock;
     /* fields below protected by @lock */
     QemuCond cond;
+    QemuCond halt_cond;
     QSIMPLEQ_HEAD(, qemu_work_item) work_list;
+    uint32_t halted;
+    bool created;
+    bool stop;
+    bool stopped;
+    bool unplug;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
@@ -407,7 +408,6 @@ struct CPUState {
 
     /* TODO Move common fields from CPUArchState here. */
     int cpu_index;
-    uint32_t halted;
     uint32_t can_do_io;
     int32_t exception_index;
 
diff --git a/cpus.c b/cpus.c
index a101e8863c..c92776507e 100644
--- a/cpus.c
+++ b/cpus.c
@@ -124,30 +124,36 @@ bool cpu_mutex_locked(const CPUState *cpu)
     return test_bit(cpu->cpu_index + 1, cpu_lock_bitmap);
 }
 
-bool cpu_is_stopped(CPUState *cpu)
+/* Called with the CPU's lock held */
+static bool cpu_is_stopped_locked(CPUState *cpu)
 {
     return cpu->stopped || !runstate_is_running();
 }
 
-static inline bool cpu_work_list_empty(CPUState *cpu)
+bool cpu_is_stopped(CPUState *cpu)
 {
-    bool ret;
+    if (!cpu_mutex_locked(cpu)) {
+        bool ret;
 
-    cpu_mutex_lock(cpu);
-    ret = QSIMPLEQ_EMPTY(&cpu->work_list);
-    cpu_mutex_unlock(cpu);
-    return ret;
+        cpu_mutex_lock(cpu);
+        ret = cpu_is_stopped_locked(cpu);
+        cpu_mutex_unlock(cpu);
+        return ret;
+    }
+    return cpu_is_stopped_locked(cpu);
 }
 
 static bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || !cpu_work_list_empty(cpu)) {
+    g_assert(cpu_mutex_locked(cpu));
+
+    if (cpu->stop || !QSIMPLEQ_EMPTY(&cpu->work_list)) {
         return false;
     }
     if (cpu_is_stopped(cpu)) {
         return true;
     }
-    if (!cpu->halted || cpu_has_work(cpu) ||
+    if (!cpu_halted(cpu) || cpu_has_work(cpu) ||
         kvm_halt_in_kernel()) {
         return false;
     }
@@ -157,13 +163,23 @@ static bool cpu_thread_is_idle(CPUState *cpu)
 static bool all_cpu_threads_idle(void)
 {
     CPUState *cpu;
+    bool ret = true;
+
+    g_assert(no_cpu_mutex_locked());
 
+    CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
+    }
     CPU_FOREACH(cpu) {
         if (!cpu_thread_is_idle(cpu)) {
-            return false;
+            ret = false;
+            break;
         }
     }
-    return true;
+    CPU_FOREACH(cpu) {
+        cpu_mutex_unlock(cpu);
+    }
+    return ret;
 }
 
 /***********************************************************/
@@ -721,6 +737,8 @@ void qemu_start_warp_timer(void)
 
 static void qemu_account_warp_timer(void)
 {
+    g_assert(qemu_mutex_iothread_locked());
+
     if (!use_icount || !icount_sleep) {
         return;
     }
@@ -1031,6 +1049,7 @@ static void kick_tcg_thread(void *opaque)
 static void start_tcg_kick_timer(void)
 {
     assert(!mttcg_enabled);
+    g_assert(qemu_mutex_iothread_locked());
     if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
         tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
                                            kick_tcg_thread, NULL);
@@ -1043,6 +1062,7 @@ static void start_tcg_kick_timer(void)
 static void stop_tcg_kick_timer(void)
 {
     assert(!mttcg_enabled);
+    g_assert(qemu_mutex_iothread_locked());
     if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) {
         timer_del(tcg_kick_vcpu_timer);
     }
@@ -1145,6 +1165,8 @@ int vm_shutdown(void)
 
 static bool cpu_can_run(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+
     if (cpu->stop) {
         return false;
     }
@@ -1221,14 +1243,11 @@ static QemuThread io_thread;
 
 /* cpu creation */
 static QemuCond qemu_cpu_cond;
-/* system init */
-static QemuCond qemu_pause_cond;
 
 void qemu_init_cpu_loop(void)
 {
     qemu_init_sigbus();
     qemu_cond_init(&qemu_cpu_cond);
-    qemu_cond_init(&qemu_pause_cond);
     qemu_mutex_init(&qemu_global_mutex);
 
     qemu_thread_get_self(&io_thread);
@@ -1246,42 +1265,66 @@ static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 {
 }
 
-static void qemu_cpu_stop(CPUState *cpu, bool exit)
+static void qemu_cpu_stop_locked(CPUState *cpu, bool exit)
 {
+    g_assert(cpu_mutex_locked(cpu));
     g_assert(qemu_cpu_is_self(cpu));
     cpu->stop = false;
     cpu->stopped = true;
     if (exit) {
         cpu_exit(cpu);
     }
-    qemu_cond_broadcast(&qemu_pause_cond);
+    qemu_cond_broadcast(&cpu->cond);
+}
+
+static void qemu_cpu_stop(CPUState *cpu, bool exit)
+{
+    cpu_mutex_lock(cpu);
+    qemu_cpu_stop_locked(cpu, exit);
+    cpu_mutex_unlock(cpu);
 }
 
 static void qemu_wait_io_event_common(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+
     atomic_mb_set(&cpu->thread_kicked, false);
     if (cpu->stop) {
-        qemu_cpu_stop(cpu, false);
+        qemu_cpu_stop_locked(cpu, false);
     }
-    process_queued_cpu_work(cpu);
+    process_queued_cpu_work_locked(cpu);
 }
 
 static void qemu_tcg_rr_wait_io_event(CPUState *cpu)
 {
+    g_assert(qemu_mutex_iothread_locked());
+    g_assert(no_cpu_mutex_locked());
+
     while (all_cpu_threads_idle()) {
         stop_tcg_kick_timer();
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_mutex_unlock_iothread();
+
+        cpu_mutex_lock(cpu);
+        qemu_cond_wait(&cpu->halt_cond, &cpu->lock);
+        cpu_mutex_unlock(cpu);
+
+        qemu_mutex_lock_iothread();
     }
 
     start_tcg_kick_timer();
 
+    cpu_mutex_lock(cpu);
     qemu_wait_io_event_common(cpu);
+    cpu_mutex_unlock(cpu);
 }
 
 static void qemu_wait_io_event(CPUState *cpu)
 {
+    g_assert(cpu_mutex_locked(cpu));
+    g_assert(!qemu_mutex_iothread_locked());
+
     while (cpu_thread_is_idle(cpu)) {
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(&cpu->halt_cond, &cpu->lock);
     }
 
 #ifdef _WIN32
@@ -1301,6 +1344,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1313,14 +1357,20 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     }
 
     kvm_init_cpu_signals(cpu);
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = kvm_cpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1328,10 +1378,16 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     qemu_kvm_destroy_vcpu(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1348,7 +1404,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     rcu_register_thread();
 
-    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1359,10 +1415,10 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
-        qemu_mutex_unlock_iothread();
+        cpu_mutex_unlock(cpu);
         do {
             int sig;
             r = sigwait(&waitset, &sig);
@@ -1371,10 +1427,11 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
             perror("sigwait");
             exit(1);
         }
-        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cpu);
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug);
 
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 #endif
@@ -1405,6 +1462,8 @@ static int64_t tcg_get_icount_limit(void)
 static void handle_icount_deadline(void)
 {
     assert(qemu_in_vcpu_thread());
+    g_assert(qemu_mutex_iothread_locked());
+
     if (use_icount) {
         int64_t deadline =
             qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
@@ -1485,12 +1544,15 @@ static void deal_with_unplugged_cpus(void)
     CPUState *cpu;
 
     CPU_FOREACH(cpu) {
+        cpu_mutex_lock(cpu);
         if (cpu->unplug && !cpu_can_run(cpu)) {
             qemu_tcg_destroy_vcpu(cpu);
             cpu->created = false;
             qemu_cond_signal(&qemu_cpu_cond);
+            cpu_mutex_unlock(cpu);
             break;
         }
+        cpu_mutex_unlock(cpu);
     }
 }
 
@@ -1511,25 +1573,33 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
     rcu_register_thread();
     tcg_register_thread();
 
-    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
-
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
     cpu->can_do_io = 1;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
 
     /* wait for initial kick-off after machine start */
+    cpu_mutex_lock(first_cpu);
     while (first_cpu->stopped) {
-        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait(&first_cpu->halt_cond, &first_cpu->lock);
+        cpu_mutex_unlock(first_cpu);
 
         /* process any pending work */
         CPU_FOREACH(cpu) {
             current_cpu = cpu;
+            cpu_mutex_lock(cpu);
             qemu_wait_io_event_common(cpu);
+            cpu_mutex_unlock(cpu);
         }
+
+        cpu_mutex_lock(first_cpu);
     }
+    cpu_mutex_unlock(first_cpu);
 
+    qemu_mutex_lock_iothread();
     start_tcg_kick_timer();
 
     cpu = first_cpu;
@@ -1555,7 +1625,12 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             cpu = first_cpu;
         }
 
-        while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
+        while (cpu) {
+            cpu_mutex_lock(cpu);
+            if (!QSIMPLEQ_EMPTY(&cpu->work_list) || cpu->exit_request) {
+                cpu_mutex_unlock(cpu);
+                break;
+            }
 
             atomic_mb_set(&tcg_current_rr_cpu, cpu);
             current_cpu = cpu;
@@ -1566,6 +1641,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             if (cpu_can_run(cpu)) {
                 int r;
 
+                cpu_mutex_unlock(cpu);
                 qemu_mutex_unlock_iothread();
                 prepare_icount_for_run(cpu);
 
@@ -1573,11 +1649,14 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
 
                 process_icount_data(cpu);
                 qemu_mutex_lock_iothread();
+                cpu_mutex_lock(cpu);
 
                 if (r == EXCP_DEBUG) {
                     cpu_handle_guest_debug(cpu);
+                    cpu_mutex_unlock(cpu);
                     break;
                 } else if (r == EXCP_ATOMIC) {
+                    cpu_mutex_unlock(cpu);
                     qemu_mutex_unlock_iothread();
                     cpu_exec_step_atomic(cpu);
                     qemu_mutex_lock_iothread();
@@ -1587,11 +1666,13 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
                 if (cpu->unplug) {
                     cpu = CPU_NEXT(cpu);
                 }
+                cpu_mutex_unlock(current_cpu);
                 break;
             }
 
+            cpu_mutex_unlock(cpu);
             cpu = CPU_NEXT(cpu);
-        } /* while (cpu && !cpu->exit_request).. */
+        } /* for (;;) .. */
 
         /* Does not need atomic_mb_set because a spurious wakeup is okay.  */
         atomic_set(&tcg_current_rr_cpu, NULL);
@@ -1615,19 +1696,26 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
     rcu_register_thread();
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
-    cpu->halted = 0;
+    cpu_halted_set(cpu, 0);
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_mutex_unlock_iothread();
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = hax_smp_cpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1635,6 +1723,8 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
 
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
+
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 }
@@ -1652,6 +1742,7 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
@@ -1659,14 +1750,20 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hvf_init_vcpu(cpu);
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = hvf_vcpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
@@ -1674,10 +1771,16 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     hvf_vcpu_destroy(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1690,6 +1793,7 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
     rcu_register_thread();
 
     qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     current_cpu = cpu;
@@ -1699,28 +1803,40 @@ static void *qemu_whpx_cpu_thread_fn(void *arg)
         fprintf(stderr, "whpx_init_vcpu failed: %s\n", strerror(-r));
         exit(1);
     }
+    qemu_mutex_unlock_iothread();
 
     /* signal CPU creation */
     cpu->created = true;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     do {
         if (cpu_can_run(cpu)) {
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             r = whpx_vcpu_exec(cpu);
+            qemu_mutex_unlock_iothread();
+            cpu_mutex_lock(cpu);
+
             if (r == EXCP_DEBUG) {
                 cpu_handle_guest_debug(cpu);
             }
         }
         while (cpu_thread_is_idle(cpu)) {
-            qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+            qemu_cond_wait(&cpu->halt_cond, &cpu->lock);
         }
         qemu_wait_io_event_common(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
 
+    cpu_mutex_unlock(cpu);
+    qemu_mutex_lock_iothread();
     whpx_destroy_vcpu(cpu);
-    cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
     qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
+    cpu->created = false;
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
+
     rcu_unregister_thread();
     return NULL;
 }
@@ -1748,14 +1864,14 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     rcu_register_thread();
     tcg_register_thread();
 
-    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
     cpu->created = true;
     cpu->can_do_io = 1;
     current_cpu = cpu;
-    qemu_cond_signal(&qemu_cpu_cond);
+    qemu_cond_signal(&cpu->cond);
 
     /* process any pending work */
     cpu->exit_request = 1;
@@ -1763,9 +1879,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     do {
         if (cpu_can_run(cpu)) {
             int r;
-            qemu_mutex_unlock_iothread();
+            cpu_mutex_unlock(cpu);
             r = tcg_cpu_exec(cpu);
-            qemu_mutex_lock_iothread();
+            cpu_mutex_lock(cpu);
             switch (r) {
             case EXCP_DEBUG:
                 cpu_handle_guest_debug(cpu);
@@ -1778,12 +1894,12 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                  *
                  * cpu->halted should ensure we sleep in wait_io_event
                  */
-                g_assert(cpu->halted);
+                g_assert(cpu_halted(cpu));
                 break;
             case EXCP_ATOMIC:
-                qemu_mutex_unlock_iothread();
+                cpu_mutex_unlock(cpu);
                 cpu_exec_step_atomic(cpu);
-                qemu_mutex_lock_iothread();
+                cpu_mutex_lock(cpu);
             default:
                 /* Ignore everything else? */
                 break;
@@ -1796,8 +1912,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     qemu_tcg_destroy_vcpu(cpu);
     cpu->created = false;
-    qemu_cond_signal(&qemu_cpu_cond);
-    qemu_mutex_unlock_iothread();
+    qemu_cond_signal(&cpu->cond);
+    cpu_mutex_unlock(cpu);
     rcu_unregister_thread();
     return NULL;
 }
@@ -1831,7 +1947,7 @@ static void qemu_cpu_kick_thread(CPUState *cpu)
 
 void qemu_cpu_kick(CPUState *cpu)
 {
-    qemu_cond_broadcast(cpu->halt_cond);
+    qemu_cond_broadcast(&cpu->halt_cond);
     if (tcg_enabled()) {
         cpu_exit(cpu);
         /* NOP unless doing single-thread RR */
@@ -1894,19 +2010,6 @@ void qemu_mutex_unlock_iothread(void)
     qemu_mutex_unlock(&qemu_global_mutex);
 }
 
-static bool all_vcpus_paused(void)
-{
-    CPUState *cpu;
-
-    CPU_FOREACH(cpu) {
-        if (!cpu->stopped) {
-            return false;
-        }
-    }
-
-    return true;
-}
-
 void pause_all_vcpus(void)
 {
     CPUState *cpu;
@@ -1925,23 +2028,38 @@ void pause_all_vcpus(void)
      * can finish their replay tasks
      */
     replay_mutex_unlock();
+    qemu_mutex_unlock_iothread();
 
-    while (!all_vcpus_paused()) {
-        qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
-        CPU_FOREACH(cpu) {
-            qemu_cpu_kick(cpu);
+    CPU_FOREACH(cpu) {
+        CPUState *cs;
+
+        /* XXX: is this necessary, or just paranoid? */
+        CPU_FOREACH(cs) {
+            qemu_cpu_kick(cs);
+        }
+
+        cpu_mutex_lock(cpu);
+        if (!cpu->stopped) {
+            qemu_cond_wait(&cpu->cond, &cpu->lock);
         }
+        cpu_mutex_unlock(cpu);
     }
 
-    qemu_mutex_unlock_iothread();
     replay_mutex_lock();
     qemu_mutex_lock_iothread();
 }
 
 void cpu_resume(CPUState *cpu)
 {
-    cpu->stop = false;
-    cpu->stopped = false;
+    if (cpu_mutex_locked(cpu)) {
+        cpu->stop = false;
+        cpu->stopped = false;
+    } else {
+        cpu_mutex_lock(cpu);
+        cpu->stop = false;
+        cpu->stopped = false;
+        cpu_mutex_unlock(cpu);
+    }
     qemu_cpu_kick(cpu);
 }
 
@@ -1957,8 +2075,11 @@ void resume_all_vcpus(void)
 
 void cpu_remove_sync(CPUState *cpu)
 {
+    cpu_mutex_lock(cpu);
     cpu->stop = true;
     cpu->unplug = true;
+    cpu_mutex_unlock(cpu);
+
     qemu_cpu_kick(cpu);
     qemu_mutex_unlock_iothread();
     qemu_thread_join(cpu->thread);
@@ -1971,7 +2092,6 @@ void cpu_remove_sync(CPUState *cpu)
 static void qemu_tcg_init_vcpu(CPUState *cpu)
 {
     char thread_name[VCPU_THREAD_NAME_SIZE];
-    static QemuCond *single_tcg_halt_cond;
     static QemuThread *single_tcg_cpu_thread;
     static int tcg_region_inited;
 
@@ -1989,8 +2109,6 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
 
     if (qemu_tcg_mttcg_enabled() || !single_tcg_cpu_thread) {
         cpu->thread = g_malloc0(sizeof(QemuThread));
-        cpu->halt_cond = g_malloc0(sizeof(QemuCond));
-        qemu_cond_init(cpu->halt_cond);
 
         if (qemu_tcg_mttcg_enabled()) {
             /* create a thread per vCPU with TCG (MTTCG) */
@@ -2008,7 +2126,6 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
                                qemu_tcg_rr_cpu_thread_fn,
                                cpu, QEMU_THREAD_JOINABLE);
 
-            single_tcg_halt_cond = cpu->halt_cond;
             single_tcg_cpu_thread = cpu->thread;
         }
 #ifdef _WIN32
@@ -2017,7 +2134,6 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
     } else {
         /* For non-MTTCG cases we share the thread */
         cpu->thread = single_tcg_cpu_thread;
-        cpu->halt_cond = single_tcg_halt_cond;
         cpu->thread_id = first_cpu->thread_id;
         cpu->can_do_io = 1;
         cpu->created = true;
@@ -2029,8 +2145,6 @@ static void qemu_hax_start_vcpu(CPUState *cpu)
     char thread_name[VCPU_THREAD_NAME_SIZE];
 
     cpu->thread = g_malloc0(sizeof(QemuThread));
-    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
-    qemu_cond_init(cpu->halt_cond);
 
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/HAX",
              cpu->cpu_index);
@@ -2046,8 +2160,6 @@ static void qemu_kvm_start_vcpu(CPUState *cpu)
     char thread_name[VCPU_THREAD_NAME_SIZE];
 
     cpu->thread = g_malloc0(sizeof(QemuThread));
-    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
-    qemu_cond_init(cpu->halt_cond);
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/KVM",
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, qemu_kvm_cpu_thread_fn,
@@ -2063,8 +2175,6 @@ static void qemu_hvf_start_vcpu(CPUState *cpu)
     assert(hvf_enabled());
 
     cpu->thread = g_malloc0(sizeof(QemuThread));
-    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
-    qemu_cond_init(cpu->halt_cond);
 
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/HVF",
              cpu->cpu_index);
@@ -2077,8 +2187,6 @@ static void qemu_whpx_start_vcpu(CPUState *cpu)
     char thread_name[VCPU_THREAD_NAME_SIZE];
 
     cpu->thread = g_malloc0(sizeof(QemuThread));
-    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
-    qemu_cond_init(cpu->halt_cond);
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/WHPX",
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, qemu_whpx_cpu_thread_fn,
@@ -2093,8 +2201,6 @@ static void qemu_dummy_start_vcpu(CPUState *cpu)
     char thread_name[VCPU_THREAD_NAME_SIZE];
 
     cpu->thread = g_malloc0(sizeof(QemuThread));
-    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
-    qemu_cond_init(cpu->halt_cond);
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/DUMMY",
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, qemu_dummy_cpu_thread_fn, cpu,
@@ -2129,9 +2235,15 @@ void qemu_init_vcpu(CPUState *cpu)
         qemu_dummy_start_vcpu(cpu);
     }
 
+    qemu_mutex_unlock_iothread();
+
+    cpu_mutex_lock(cpu);
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+        qemu_cond_wait(&cpu->cond, &cpu->lock);
     }
+    cpu_mutex_unlock(cpu);
+
+    qemu_mutex_lock_iothread();
 }
 
 void cpu_stop_current(void)
@@ -2261,7 +2373,7 @@ CpuInfoList *qmp_query_cpus(Error **errp)
         info->value = g_malloc0(sizeof(*info->value));
         info->value->CPU = cpu->cpu_index;
         info->value->current = (cpu == first_cpu);
-        info->value->halted = cpu->halted;
+        info->value->halted = cpu_halted(cpu);
         info->value->qom_path = object_get_canonical_path(OBJECT(cpu));
         info->value->thread_id = cpu->thread_id;
 #if defined(TARGET_I386)
diff --git a/qom/cpu.c b/qom/cpu.c
index bb031a3a6a..2aa760eed0 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -94,18 +94,14 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
     error_setg(errp, "Obtaining memory mappings is unsupported on this CPU.");
 }
 
-/* Resetting the IRQ comes from across the code base so we take the
- * BQL here if we need to.  cpu_interrupt assumes it is held.*/
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool need_lock = !qemu_mutex_iothread_locked();
-
-    if (need_lock) {
-        qemu_mutex_lock_iothread();
-    }
-    cpu->interrupt_request &= ~mask;
-    if (need_lock) {
-        qemu_mutex_unlock_iothread();
+    if (cpu_mutex_locked(cpu)) {
+        cpu->interrupt_request &= ~mask;
+    } else {
+        cpu_mutex_lock(cpu);
+        cpu->interrupt_request &= ~mask;
+        cpu_mutex_unlock(cpu);
     }
 }
 
@@ -261,8 +257,8 @@ static void cpu_common_reset(CPUState *cpu)
         log_cpu_state(cpu, cc->reset_dump_flags);
     }
 
-    cpu->interrupt_request = 0;
-    cpu->halted = 0;
+    cpu_interrupt_request_set(cpu, 0);
+    cpu_halted_set(cpu, 0);
     cpu->mem_io_pc = 0;
     cpu->mem_io_vaddr = 0;
     cpu->icount_extra = 0;
@@ -374,6 +370,7 @@ static void cpu_common_initfn(Object *obj)
 
     qemu_mutex_init(&cpu->lock);
     qemu_cond_init(&cpu->cond);
+    qemu_cond_init(&cpu->halt_cond);
     QSIMPLEQ_INIT(&cpu->work_list);
     QTAILQ_INIT(&cpu->breakpoints);
     QTAILQ_INIT(&cpu->watchpoints);
@@ -397,7 +394,7 @@ static vaddr cpu_adjust_watchpoint_address(CPUState *cpu, vaddr addr, int len)
 
 static void generic_handle_interrupt(CPUState *cpu, int mask)
 {
-    cpu->interrupt_request |= mask;
+    cpu_interrupt_request_or(cpu, mask);
 
     if (!qemu_cpu_is_self(cpu)) {
         qemu_cpu_kick(cpu);
-- 
2.17.1


* [Qemu-devel] [RFC v3 55/56] cpu: add async_run_on_cpu_no_bql
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (53 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 54/56] cpu: protect most CPU state with cpu->lock Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 56/56] cputlb: queue async flush jobs without the BQL Emilio G. Cota
  2018-10-19  6:59 ` [Qemu-devel] [RFC v3 0/56] per-CPU locks Paolo Bonzini
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

Some async jobs do not need the BQL.

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 14 ++++++++++++++
 cpus-common.c     | 39 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index b351fe6164..3028447002 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -842,9 +842,23 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
  * @data: Data to pass to the function.
  *
  * Schedules the function @func for execution on the vCPU @cpu asynchronously.
+ * See also: async_run_on_cpu_no_bql()
  */
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
 
+/**
+ * async_run_on_cpu_no_bql:
+ * @cpu: The vCPU to run on.
+ * @func: The function to be executed.
+ * @data: Data to pass to the function.
+ *
+ * Schedules the function @func for execution on the vCPU @cpu asynchronously.
+ * This function is run outside the BQL.
+ * See also: async_run_on_cpu()
+ */
+void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
+                             run_on_cpu_data data);
+
 /**
  * async_safe_run_on_cpu:
  * @cpu: The vCPU to run on.
diff --git a/cpus-common.c b/cpus-common.c
index d559f94ef1..9f33cc94d5 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -109,6 +109,7 @@ struct qemu_work_item {
     run_on_cpu_func func;
     run_on_cpu_data data;
     bool free, exclusive, done;
+    bool bql;
 };
 
 /* Called with the CPU's lock held */
@@ -145,6 +146,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     wi.done = false;
     wi.free = false;
     wi.exclusive = false;
+    wi.bql = true;
 
     cpu_mutex_lock(cpu);
     queue_work_on_cpu_locked(cpu, &wi);
@@ -167,6 +169,21 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     wi->func = func;
     wi->data = data;
     wi->free = true;
+    wi->bql = true;
+
+    queue_work_on_cpu(cpu, wi);
+}
+
+void async_run_on_cpu_no_bql(CPUState *cpu, run_on_cpu_func func,
+                             run_on_cpu_data data)
+{
+    struct qemu_work_item *wi;
+
+    wi = g_malloc0(sizeof(struct qemu_work_item));
+    wi->func = func;
+    wi->data = data;
+    wi->free = true;
+    /* wi->bql initialized to false */
 
     queue_work_on_cpu(cpu, wi);
 }
@@ -311,6 +328,7 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     wi->data = data;
     wi->free = true;
     wi->exclusive = true;
+    /* wi->bql initialized to false */
 
     queue_work_on_cpu(cpu, wi);
 }
@@ -335,6 +353,7 @@ void process_queued_cpu_work_locked(CPUState *cpu)
              * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
              * neither CPU can proceed.
              */
+            g_assert(!wi->bql);
             if (has_bql) {
                 qemu_mutex_unlock_iothread();
             }
@@ -345,12 +364,22 @@ void process_queued_cpu_work_locked(CPUState *cpu)
                 qemu_mutex_lock_iothread();
             }
         } else {
-            if (has_bql) {
-                wi->func(cpu, wi->data);
+            if (wi->bql) {
+                if (has_bql) {
+                    wi->func(cpu, wi->data);
+                } else {
+                    qemu_mutex_lock_iothread();
+                    wi->func(cpu, wi->data);
+                    qemu_mutex_unlock_iothread();
+                }
             } else {
-                qemu_mutex_lock_iothread();
-                wi->func(cpu, wi->data);
-                qemu_mutex_unlock_iothread();
+                if (has_bql) {
+                    qemu_mutex_unlock_iothread();
+                    wi->func(cpu, wi->data);
+                    qemu_mutex_lock_iothread();
+                } else {
+                    wi->func(cpu, wi->data);
+                }
             }
         }
         cpu_mutex_lock(cpu);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 118+ messages in thread

* [Qemu-devel] [RFC v3 56/56] cputlb: queue async flush jobs without the BQL
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (54 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 55/56] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
@ 2018-10-19  1:06 ` Emilio G. Cota
  2018-10-19  6:59 ` [Qemu-devel] [RFC v3 0/56] per-CPU locks Paolo Bonzini
  56 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19  1:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Peter Crosthwaite, Richard Henderson

This yields sizable scalability improvements, as the results below show.

Host: Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)

Workload: Ubuntu 18.04 ppc64 compiling the linux kernel with
"make -j N", where N is the number of cores in the guest.

                      Speedup vs a single thread (higher is better):

         14 +---------------------------------------------------------------+
            |       +    +       +      +       +      +      $$$$$$  +     |
            |                                            $$$$$              |
            |                                      $$$$$$                   |
         12 |-+                                $A$$                       +-|
            |                                $$                             |
            |                             $$$                               |
         10 |-+                         $$    ##D#####################D   +-|
            |                        $$$ #####**B****************           |
            |                      $$####*****                   *****      |
            |                    A$#*****                             B     |
          8 |-+                $$B**                                      +-|
            |                $$**                                           |
            |               $**                                             |
          6 |-+           $$*                                             +-|
            |            A**                                                |
            |           $B                                                  |
            |           $                                                   |
          4 |-+        $*                                                 +-|
            |          $                                                    |
            |         $                                                     |
          2 |-+      $                                                    +-|
            |        $                                 +cputlb-no-bql $$A$$ |
            |       A                                   +per-cpu-lock ##D## |
            |       +    +       +      +       +      +     baseline **B** |
          0 +---------------------------------------------------------------+
                    1    4       8      12      16     20      24     28
                                       Guest vCPUs
  png: https://imgur.com/zZRvS7q

Some notes:
- baseline corresponds to the commit before this series

- per-cpu-lock is the commit that converts the CPU loop to per-cpu locks.

- cputlb-no-bql is this commit.

- I'm using taskset to assign cores to threads, favouring locality whenever
  possible but not using SMT. When N=1, I'm using a single host core, so the
  I/O thread competes with the lone vCPU thread; this slows down the baseline
  and leads to superlinear speedups at higher N (with more cores, the I/O thread
  can execute while vCPU threads sleep). In the future I might use N+1 host
  cores for N guest cores to avoid this, or perhaps pin guest threads to cores
  one-by-one.

- Scalability is not good at 64 cores, where the BQL for handling
  interrupts dominates. I got these numbers from another machine (a 64-core
  one), which unfortunately is much slower than this 28-core one, so I don't
  have the numbers for 1-16 cores. The plot is normalized to the 16-core
  baseline performance, and is therefore very ugly :-) https://imgur.com/XyKGkAw
  See below for an example of the *huge* amount of waiting on the BQL:

(qemu) info sync-profile
Type               Object  Call site                             Wait Time (s)         Count  Average (us)
----------------------------------------------------------------------------------------------------------
BQL mutex  0x55ba286c9800  accel/tcg/cpu-exec.c:545                 2868.85676      14872596        192.90
BQL mutex  0x55ba286c9800  hw/ppc/ppc.c:70                           539.58924       3666820        147.15
BQL mutex  0x55ba286c9800  target/ppc/helper_regs.h:105              323.49283       2544959        127.11
mutex      [           2]  util/qemu-timer.c:426                     181.38420       3666839         49.47
condvar    [          61]  cpus.c:1327                               136.50872         15379       8876.31
BQL mutex  0x55ba286c9800  accel/tcg/cpu-exec.c:516                   86.14785        946301         91.04
condvar    0x55ba286eb6a0  cpus-common.c:196                          78.41010           126     622302.35
BQL mutex  0x55ba286c9800  util/main-loop.c:236                       28.14795        272940        103.13
mutex      [          64]  include/qom/cpu.h:514                      17.87662      75139413          0.24
BQL mutex  0x55ba286c9800  target/ppc/translate_init.inc.c:8665        7.04738         36528        192.93
----------------------------------------------------------------------------------------------------------

Single-threaded performance is only lightly affected. Results
below are for a debian aarch64 bootup+test of the entire series
on an Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz host:

- Before:

 Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):

       7269.033478      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.06% )
    30,659,870,302      cycles                    #    4.218 GHz                      ( +-  0.06% )
    54,790,540,051      instructions              #    1.79  insns per cycle          ( +-  0.05% )
     9,796,441,380      branches                  # 1347.695 M/sec                    ( +-  0.05% )
       165,132,201      branch-misses             #    1.69% of all branches          ( +-  0.12% )

       7.287011656 seconds time elapsed                                          ( +-  0.10% )

- After:

       7375.924053      task-clock (msec)         #    0.998 CPUs utilized            ( +-  0.13% )
    31,107,548,846      cycles                    #    4.217 GHz                      ( +-  0.12% )
    55,355,668,947      instructions              #    1.78  insns per cycle          ( +-  0.05% )
     9,929,917,664      branches                  # 1346.261 M/sec                    ( +-  0.04% )
       166,547,442      branch-misses             #    1.68% of all branches          ( +-  0.09% )

       7.389068145 seconds time elapsed                                          ( +-  0.13% )

That is, a ~1.4% slowdown in elapsed time.

Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 accel/tcg/cputlb.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 353d76d6a5..e3582f2f1d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -212,7 +212,7 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
 
     CPU_FOREACH(cpu) {
         if (cpu != src) {
-            async_run_on_cpu(cpu, fn, d);
+            async_run_on_cpu_no_bql(cpu, fn, d);
         }
     }
 }
@@ -280,8 +280,8 @@ void tlb_flush(CPUState *cpu)
     if (cpu->created && !qemu_cpu_is_self(cpu)) {
         if (atomic_mb_read(&cpu->pending_tlb_flush) != ALL_MMUIDX_BITS) {
             atomic_mb_set(&cpu->pending_tlb_flush, ALL_MMUIDX_BITS);
-            async_run_on_cpu(cpu, tlb_flush_global_async_work,
-                             RUN_ON_CPU_NULL);
+            async_run_on_cpu_no_bql(cpu, tlb_flush_global_async_work,
+                                    RUN_ON_CPU_NULL);
         }
     } else {
         tlb_flush_nocheck(cpu);
@@ -341,8 +341,8 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
             tlb_debug("reduced mmu_idx: 0x%" PRIx16 "\n", pending_flushes);
 
             atomic_or(&cpu->pending_tlb_flush, pending_flushes);
-            async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
-                             RUN_ON_CPU_HOST_INT(pending_flushes));
+            async_run_on_cpu_no_bql(cpu, tlb_flush_by_mmuidx_async_work,
+                                    RUN_ON_CPU_HOST_INT(pending_flushes));
         }
     } else {
         tlb_flush_by_mmuidx_async_work(cpu,
@@ -442,8 +442,8 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     tlb_debug("page :" TARGET_FMT_lx "\n", addr);
 
     if (!qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_flush_page_async_work,
-                         RUN_ON_CPU_TARGET_PTR(addr));
+        async_run_on_cpu_no_bql(cpu, tlb_flush_page_async_work,
+                                RUN_ON_CPU_TARGET_PTR(addr));
     } else {
         tlb_flush_page_async_work(cpu, RUN_ON_CPU_TARGET_PTR(addr));
     }
@@ -514,8 +514,9 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
     addr_and_mmu_idx |= idxmap;
 
     if (!qemu_cpu_is_self(cpu)) {
-        async_run_on_cpu(cpu, tlb_check_page_and_flush_by_mmuidx_async_work,
-                         RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+        async_run_on_cpu_no_bql(cpu,
+                                tlb_check_page_and_flush_by_mmuidx_async_work,
+                                RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
     } else {
         tlb_check_page_and_flush_by_mmuidx_async_work(
             cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 01/56] cpu: convert queued work to a QSIMPLEQ
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 01/56] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
@ 2018-10-19  6:26   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-19  6:26 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Richard Henderson, Peter Crosthwaite

On 10/18/18 6:05 PM, Emilio G. Cota wrote:
> Instead of open-coding it.
> 
> While at it, make sure that all accesses to the list are
> performed while holding the list's lock.
> 
> Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h |  6 +++---
>  cpus-common.c     | 25 ++++++++-----------------
>  cpus.c            | 14 ++++++++++++--
>  qom/cpu.c         |  1 +
>  4 files changed, 24 insertions(+), 22 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 02/56] cpu: rename cpu->work_mutex to cpu->lock
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 02/56] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
@ 2018-10-19  6:26   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-19  6:26 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Richard Henderson, Peter Crosthwaite

On 10/18/18 6:05 PM, Emilio G. Cota wrote:
> This lock will soon protect more fields of the struct. Give
> it a more appropriate name.
> 
> Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h |  5 +++--
>  cpus-common.c     | 14 +++++++-------
>  cpus.c            |  4 ++--
>  qom/cpu.c         |  2 +-
>  4 files changed, 13 insertions(+), 12 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 05/56] cpu: move run_on_cpu to cpus-common
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 05/56] cpu: move run_on_cpu to cpus-common Emilio G. Cota
@ 2018-10-19  6:39   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-19  6:39 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Richard Henderson, Peter Crosthwaite

On 10/18/18 6:05 PM, Emilio G. Cota wrote:
> We don't pass a pointer to qemu_global_mutex anymore.
> 
> Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h | 10 ----------
>  cpus-common.c     |  2 +-
>  cpus.c            |  5 -----
>  3 files changed, 1 insertion(+), 16 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 06/56] cpu: introduce process_queued_cpu_work_locked
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 06/56] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
@ 2018-10-19  6:41   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-19  6:41 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 10/18/18 6:05 PM, Emilio G. Cota wrote:
> It will gain a user once we protect more of CPUState under cpu->lock.
> 
> This completes the conversion to cpu_mutex_lock/unlock in the file.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h |  9 +++++++++
>  cpus-common.c     | 17 +++++++++++------
>  2 files changed, 20 insertions(+), 6 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work Emilio G. Cota
@ 2018-10-19  6:58   ` Paolo Bonzini
  2018-10-20 16:31     ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Paolo Bonzini @ 2018-10-19  6:58 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: David Gibson, Alexander Graf, qemu-ppc

On 19/10/2018 03:06, Emilio G. Cota wrote:
> Soon we will call cpu_has_work without the BQL.
> 
> Cc: David Gibson <david@gibson.dropbear.id.au>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: qemu-ppc@nongnu.org
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/ppc/translate_init.inc.c | 77 +++++++++++++++++++++++++++++++--
>  1 file changed, 73 insertions(+), 4 deletions(-)
> 

Perhaps we should instead define both ->cpu_has_work and
->cpu_has_work_with_iothread_lock members, and move the generic
unlock/relock code to common code?

Paolo

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 0/56] per-CPU locks
  2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
                   ` (55 preceding siblings ...)
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 56/56] cputlb: queue async flush jobs without the BQL Emilio G. Cota
@ 2018-10-19  6:59 ` Paolo Bonzini
  2018-10-19 14:50   ` Emilio G. Cota
  56 siblings, 1 reply; 118+ messages in thread
From: Paolo Bonzini @ 2018-10-19  6:59 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Aleksandar Markovic, Alexander Graf, Alistair Francis,
	Andrzej Zaborowski, Anthony Green, Artyom Tarasenko,
	Aurelien Jarno, Bastian Koppelmann, Christian Borntraeger,
	Chris Wulff, Cornelia Huck, David Gibson, David Hildenbrand,
	Edgar E. Iglesias, Eduardo Habkost, Fabien Chouteau, Guan Xuetao,
	James Hogan, Laurent Vivier, Marek Vasut, Mark Cave-Ayland,
	Max Filippov, Michael Clark, Michael Walle, Palmer Dabbelt,
	Pavel Dovgalyuk, Peter Crosthwaite, Peter Maydell, qemu-arm,
	qemu-ppc, qemu-s390x, Richard Henderson, Sagar Karandikar,
	Stafford Horne

On 19/10/2018 03:05, Emilio G. Cota wrote:
> I'm calling this series a v3 because it supersedes the two series
> I previously sent about using atomics for interrupt_request:
>   https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg02013.html
> The approach in that series cannot work reliably; using (locked) atomics
> to set interrupt_request but not using (locked) atomics to read it
> can lead to missed updates.

The idea here was that changes to protected fields are all followed by
kick.  That may not have been the case, granted, but I wonder if the
plan is unworkable.

Paolo

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 0/56] per-CPU locks
  2018-10-19  6:59 ` [Qemu-devel] [RFC v3 0/56] per-CPU locks Paolo Bonzini
@ 2018-10-19 14:50   ` Emilio G. Cota
  2018-10-19 16:01     ` Paolo Bonzini
  0 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19 14:50 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, Aleksandar Markovic, Alexander Graf,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Clark,
	Michael Walle, Palmer Dabbelt, Pavel Dovgalyuk,
	Peter Crosthwaite, Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x,
	Richard Henderson, Sagar Karandikar, Stafford Horne

On Fri, Oct 19, 2018 at 08:59:24 +0200, Paolo Bonzini wrote:
> On 19/10/2018 03:05, Emilio G. Cota wrote:
> > I'm calling this series a v3 because it supersedes the two series
> > I previously sent about using atomics for interrupt_request:
> >   https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg02013.html
> > The approach in that series cannot work reliably; using (locked) atomics
> > to set interrupt_request but not using (locked) atomics to read it
> > can lead to missed updates.
> 
> The idea here was that changes to protected fields are all followed by
> kick.  That may not have been the case, granted, but I wonder if the
> plan is unworkable.

I suspect that the cpu->interrupt_request+kick mechanism is not the issue,
otherwise master would not work--we do atomic_read(cpu->interrupt_request)
and take the BQL only if that read is != 0.

My guess is that the problem is with other reads of cpu->interrupt_request,
e.g. those in cpu_has_work. Currently those reads happen with the
BQL held, and updates to cpu->interrupt_request take the BQL. If we drop
the BQL from the setters to instead use locked atomics (like in the
aforementioned series), those BQL-protected readers might miss updates.

Given that we need a per-CPU lock anyway to remove the BQL from the
CPU loop, extending this lock to protect cpu->interrupt_request is
a simple solution that keeps the current logic and allows for
greater scalability.

Thanks,

		Emilio

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 0/56] per-CPU locks
  2018-10-19 14:50   ` Emilio G. Cota
@ 2018-10-19 16:01     ` Paolo Bonzini
  2018-10-19 19:29       ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Paolo Bonzini @ 2018-10-19 16:01 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel, Aleksandar Markovic, Alexander Graf,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Clark,
	Michael Walle, Palmer Dabbelt, Pavel Dovgalyuk,
	Peter Crosthwaite, Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x,
	Richard Henderson, Sagar Karandikar, Stafford Horne

On 19/10/2018 16:50, Emilio G. Cota wrote:
> On Fri, Oct 19, 2018 at 08:59:24 +0200, Paolo Bonzini wrote:
>> On 19/10/2018 03:05, Emilio G. Cota wrote:
>>> I'm calling this series a v3 because it supersedes the two series
>>> I previously sent about using atomics for interrupt_request:
>>>   https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg02013.html
>>> The approach in that series cannot work reliably; using (locked) atomics
>>> to set interrupt_request but not using (locked) atomics to read it
>>> can lead to missed updates.
>>
>> The idea here was that changes to protected fields are all followed by
>> kick.  That may not have been the case, granted, but I wonder if the
>> plan is unworkable.
> 
> I suspect that the cpu->interrupt_request+kick mechanism is not the issue,
> otherwise master should not work--we do atomic_read(cpu->interrupt_request)
> and only if that read != 0 we take the BQL.
> 
> My guess is that the problem is with other reads of cpu->interrupt_request,
> e.g. those in cpu_has_work. Currently those reads happen with the
> BQL held, and updates to cpu->interrupt_request take the BQL. If we drop
> the BQL from the setters to instead use locked atomics (like in the
> aforementioned series), those BQL-protected readers might miss updates.

cpu_has_work is only needed to handle the processor's halted state (or
is it?).  If it is, OR+kick should work.

> Given that we need a per-CPU lock anyway to remove the BQL from the
> CPU loop, extending this lock to protect cpu->interrupt_request is
> a simple solution that keeps the current logic and allows for
> greater scalability.

Sure, I was just curious what the problem was.  KVM uses OR+kick with no
problems.

Paolo

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 16/56] riscv: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 16/56] riscv: " Emilio G. Cota
@ 2018-10-19 17:24   ` Palmer Dabbelt
  0 siblings, 0 replies; 118+ messages in thread
From: Palmer Dabbelt @ 2018-10-19 17:24 UTC (permalink / raw)
  To: cota; +Cc: qemu-devel, pbonzini, Michael Clark, sagark, kbastian, alistair23

On Thu, 18 Oct 2018 18:05:45 PDT (-0700), cota@braap.org wrote:
> Cc: Michael Clark <mjc@sifive.com>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
> Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
> Cc: Alistair Francis <alistair23@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/riscv/op_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
> index aec7558e1b..b5c32241dd 100644
> --- a/target/riscv/op_helper.c
> +++ b/target/riscv/op_helper.c
> @@ -736,7 +736,7 @@ void helper_wfi(CPURISCVState *env)
>  {
>      CPUState *cs = CPU(riscv_env_get_cpu(env));
>
> -    cs->halted = 1;
> +    cpu_halted_set(cs, 1);
>      cs->exception_index = EXCP_HLT;
>      cpu_loop_exit(cs);
>  }

Reviewed-by: Palmer Dabbelt <palmer@sifive.com>

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 51/56] riscv: acquire the BQL in cpu_has_work
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 51/56] riscv: " Emilio G. Cota
@ 2018-10-19 17:24   ` Palmer Dabbelt
  0 siblings, 0 replies; 118+ messages in thread
From: Palmer Dabbelt @ 2018-10-19 17:24 UTC (permalink / raw)
  To: cota; +Cc: qemu-devel, pbonzini, Michael Clark, sagark, kbastian

On Thu, 18 Oct 2018 18:06:20 PDT (-0700), cota@braap.org wrote:
> Soon we will call cpu_has_work without the BQL.
>
> Cc: Michael Clark <mjc@sifive.com>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
> Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/riscv/cpu.c | 21 ++++++++++++++++++++-
>  1 file changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index d630e8fd6c..b10995c807 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -18,6 +18,7 @@
>   */
>
>  #include "qemu/osdep.h"
> +#include "qemu/main-loop.h"
>  #include "qemu/log.h"
>  #include "cpu.h"
>  #include "exec/exec-all.h"
> @@ -244,11 +245,14 @@ static void riscv_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb)
>      env->pc = tb->pc;
>  }
>
> -static bool riscv_cpu_has_work(CPUState *cs)
> +static bool riscv_cpu_has_work_locked(CPUState *cs)
>  {
>  #ifndef CONFIG_USER_ONLY
>      RISCVCPU *cpu = RISCV_CPU(cs);
>      CPURISCVState *env = &cpu->env;
> +
> +    g_assert(qemu_mutex_iothread_locked());
> +
>      /*
>       * Definition of the WFI instruction requires it to ignore the privilege
>       * mode and delegation registers, but respect individual enables
> @@ -259,6 +263,21 @@ static bool riscv_cpu_has_work(CPUState *cs)
>  #endif
>  }
>
> +static bool riscv_cpu_has_work(CPUState *cs)
> +{
> +    if (!qemu_mutex_iothread_locked()) {
> +        bool ret;
> +
> +        cpu_mutex_unlock(cs);
> +        qemu_mutex_lock_iothread();
> +        cpu_mutex_lock(cs);
> +        ret = riscv_cpu_has_work_locked(cs);
> +        qemu_mutex_unlock_iothread();
> +        return ret;
> +    }
> +    return riscv_cpu_has_work_locked(cs);
> +}
> +
>  void restore_state_to_opc(CPURISCVState *env, TranslationBlock *tb,
>                            target_ulong *data)
>  {

I'm afraid I don't understand the locking scheme, but as far as the RISC-V 
stuff goes this looks fine.

Reviewed-by: Palmer Dabbelt <palmer@sifive.com>

Thanks!

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 0/56] per-CPU locks
  2018-10-19 16:01     ` Paolo Bonzini
@ 2018-10-19 19:29       ` Emilio G. Cota
  2018-10-19 23:46         ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19 19:29 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, Aleksandar Markovic, Alexander Graf,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Clark,
	Michael Walle, Palmer Dabbelt, Pavel Dovgalyuk,
	Peter Crosthwaite, Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x,
	Richard Henderson, Sagar Karandikar, Stafford Horne

On Fri, Oct 19, 2018 at 18:01:18 +0200, Paolo Bonzini wrote:
> On 19/10/2018 16:50, Emilio G. Cota wrote:
> > On Fri, Oct 19, 2018 at 08:59:24 +0200, Paolo Bonzini wrote:
> >> On 19/10/2018 03:05, Emilio G. Cota wrote:
> >>> I'm calling this series a v3 because it supersedes the two series
> >>> I previously sent about using atomics for interrupt_request:
> >>>   https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg02013.html
> >>> The approach in that series cannot work reliably; using (locked) atomics
> >>> to set interrupt_request but not using (locked) atomics to read it
> >>> can lead to missed updates.
> >>
> >> The idea here was that changes to protected fields are all followed by
> >> kick.  That may not have been the case, granted, but I wonder if the
> >> plan is unworkable.
> > 
> > I suspect that the cpu->interrupt_request+kick mechanism is not the issue,
> > otherwise master should not work--we do atomic_read(cpu->interrupt_request)
> > and only if that read != 0 we take the BQL.
> > 
> > My guess is that the problem is with other reads of cpu->interrupt_request,
> > e.g. those in cpu_has_work. Currently those reads happen with the
> > BQL held, and updates to cpu->interrupt_request take the BQL. If we drop
> > the BQL from the setters to instead use locked atomics (like in the
> > aforementioned series), those BQL-protected readers might miss updates.
> 
> cpu_has_work is only needed to handle the processor's halted state (or
> is it?).  If it is, OR+kick should work.
> 
> > Given that we need a per-CPU lock anyway to remove the BQL from the
> > CPU loop, extending this lock to protect cpu->interrupt_request is
> > a simple solution that keeps the current logic and allows for
> > greater scalability.
> 
> Sure, I was just curious what the problem was.  KVM uses OR+kick with no
> problems.

I never found exactly where things break. The hangs happen
pretty early when booting a large (-smp > 16) x86_64 Ubuntu guest.
Booting never completes (ssh unresponsive) if I don't have the
console output (I suspect the console output slows things down
enough to hide some races). I only see a few threads busy:
a couple of vCPU threads, and the I/O thread.

I didn't have time to debug any further, so I moved on
to an alternative approach.

So it is possible that it was my implementation, and not the approach,
that was at fault :-)

Thanks,

		E.

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 0/56] per-CPU locks
  2018-10-19 19:29       ` Emilio G. Cota
@ 2018-10-19 23:46         ` Emilio G. Cota
  2018-10-22 15:30           ` Paolo Bonzini
  0 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-19 23:46 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, Aleksandar Markovic, Alexander Graf,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Clark,
	Michael Walle, Palmer Dabbelt, Pavel Dovgalyuk,
	Peter Crosthwaite, Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x,
	Richard Henderson, Sagar Karandikar, Stafford Horne

On Fri, Oct 19, 2018 at 15:29:32 -0400, Emilio G. Cota wrote:
> On Fri, Oct 19, 2018 at 18:01:18 +0200, Paolo Bonzini wrote:
> > > Given that we need a per-CPU lock anyway to remove the BQL from the
> > > CPU loop, extending this lock to protect cpu->interrupt_request is
> > > a simple solution that keeps the current logic and allows for
> > > greater scalability.
> > 
> > Sure, I was just curious what the problem was.  KVM uses OR+kick with no
> > problems.
> 
> I never found exactly where things break. The hangs happen
> pretty early when booting a large (-smp > 16) x86_64 Ubuntu guest.
> Booting never completes (ssh unresponsive) if I don't have the
> console output (I suspect the console output slows things down
> enough to hide some races). I only see a few threads busy:
> a couple of vCPU threads, and the I/O thread.
> 
> I didn't have time to debug any further, so I moved on
> to an alternative approach.
> 
> So it is possible that it was my implementation, and not the approach,
> what was at fault :-)

I've just observed a similar hang after adding the "BQL
pushdown" patches on top of this series. So it's likely that the
hangs come from those patches, and not from the work on
cpu->interrupt_request. I just confirmed with the prior
series, and removing the pushdown patches fixes the hangs there
as well.

Thanks,

		Emilio

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work
  2018-10-19  6:58   ` Paolo Bonzini
@ 2018-10-20 16:31     ` Emilio G. Cota
  2018-10-21 13:42       ` Richard Henderson
  0 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-20 16:31 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, David Gibson, Alexander Graf, qemu-ppc

On Fri, Oct 19, 2018 at 08:58:31 +0200, Paolo Bonzini wrote:
> On 19/10/2018 03:06, Emilio G. Cota wrote:
> > Soon we will call cpu_has_work without the BQL.
> > 
> > Cc: David Gibson <david@gibson.dropbear.id.au>
> > Cc: Alexander Graf <agraf@suse.de>
> > Cc: qemu-ppc@nongnu.org
> > Signed-off-by: Emilio G. Cota <cota@braap.org>
> > ---
> >  target/ppc/translate_init.inc.c | 77 +++++++++++++++++++++++++++++++--
> >  1 file changed, 73 insertions(+), 4 deletions(-)
> > 
> 
> Perhaps we should instead define both ->cpu_has_work and
> ->cpu_has_work_with_iothread_lock members, and move the generic
> unlock/relock code to common code?

I like this. How does the appended look?

Thanks,

		Emilio
---8<---

[PATCH] cpu: introduce cpu_has_work_with_iothread_lock

It will gain some users soon.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index ad8859d014..d9e6f5d4d2 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -26,6 +26,7 @@
 #include "exec/memattrs.h"
 #include "qapi/qapi-types-run-state.h"
 #include "qemu/bitmap.h"
+#include "qemu/main-loop.h"
 #include "qemu/rcu_queue.h"
 #include "qemu/queue.h"
 #include "qemu/thread.h"
@@ -86,6 +87,8 @@ struct TranslationBlock;
  * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
  * @has_work: Callback for checking if there is work to do. Called with the
  * CPU lock held.
+ * @has_work_with_iothread_lock: Callback for checking if there is work to do.
+ * Called with both the BQL and the CPU lock held.
  * @do_interrupt: Callback for interrupt handling.
  * @do_unassigned_access: Callback for unassigned access handling.
  * (this is deprecated: new targets should use do_transaction_failed instead)
@@ -157,6 +160,7 @@ typedef struct CPUClass {
     void (*reset)(CPUState *cpu);
     int reset_dump_flags;
     bool (*has_work)(CPUState *cpu);
+    bool (*has_work_with_iothread_lock)(CPUState *cpu);
     void (*do_interrupt)(CPUState *cpu);
     CPUUnassignedAccess do_unassigned_access;
     void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
@@ -774,6 +778,40 @@ CPUState *cpu_create(const char *typename);
  */
 const char *parse_cpu_model(const char *cpu_model);
 
+/* do not call directly; use cpu_has_work instead */
+static inline bool cpu_has_work_bql(CPUState *cpu)
+{
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+    bool has_cpu_lock = cpu_mutex_locked(cpu);
+    bool has_bql = qemu_mutex_iothread_locked();
+    bool ret;
+
+    if (has_bql) {
+        if (has_cpu_lock) {
+            return cc->has_work_with_iothread_lock(cpu);
+        }
+        cpu_mutex_lock(cpu);
+        ret = cc->has_work_with_iothread_lock(cpu);
+        cpu_mutex_unlock(cpu);
+        return ret;
+    }
+
+    if (has_cpu_lock) {
+        /* avoid deadlock by acquiring the locks in order */
+        cpu_mutex_unlock(cpu);
+    }
+    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
+
+    ret = cc->has_work_with_iothread_lock(cpu);
+
+    qemu_mutex_unlock_iothread();
+    if (!has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
+    }
+    return ret;
+}
+
 /**
  * cpu_has_work:
  * @cpu: The vCPU to check.
@@ -787,6 +825,10 @@ static inline bool cpu_has_work(CPUState *cpu)
     CPUClass *cc = CPU_GET_CLASS(cpu);
     bool ret;
 
+    if (cc->has_work_with_iothread_lock) {
+        return cpu_has_work_bql(cpu);
+    }
+
     g_assert(cc->has_work);
     if (cpu_mutex_locked(cpu)) {
         return cc->has_work(cpu);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt Emilio G. Cota
@ 2018-10-21 12:53   ` Richard Henderson
  2018-10-21 13:38     ` Richard Henderson
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 12:53 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Laurent Vivier

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> To avoid a name clash with the soon-to-be-defined cpu_halted() helper.
> 
> Cc: Laurent Vivier <laurent@vivier.eu>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/m68k/translate.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

Although for this usage it's probably better to avoid the
tcg_global_mem_new_i32 and just use tcg_gen_st_i32.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 08/56] cpu: define cpu_halted helpers
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 08/56] cpu: define cpu_halted helpers Emilio G. Cota
@ 2018-10-21 12:54   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 12:54 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> cpu->halted will soon be protected by cpu->lock.
> We will use these helpers to ease the transition,
> since right now cpu->halted has many direct callers.
> 
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 09/56] arm: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 09/56] arm: convert to cpu_halted Emilio G. Cota
@ 2018-10-21 12:55   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 12:55 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, qemu-arm, Peter Maydell

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Andrzej Zaborowski <balrogg@gmail.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: qemu-arm@nongnu.org
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/arm/omap1.c            | 4 ++--
>  hw/arm/pxa2xx_gpio.c      | 2 +-
>  hw/arm/pxa2xx_pic.c       | 2 +-
>  target/arm/arm-powerctl.c | 4 ++--
>  target/arm/cpu.c          | 2 +-
>  target/arm/op_helper.c    | 2 +-
>  6 files changed, 8 insertions(+), 8 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 10/56] ppc: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 10/56] ppc: " Emilio G. Cota
@ 2018-10-21 12:56   ` Richard Henderson
  2018-10-22 21:12     ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 12:56 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, qemu-ppc, Alexander Graf, David Gibson

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> @@ -1088,11 +1088,13 @@ static target_ulong h_cede(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>  
>      env->msr |= (1ULL << MSR_EE);
>      hreg_compute_hflags(env);
> +    cpu_mutex_lock(cs);
>      if (!cpu_has_work(cs)) {
> -        cs->halted = 1;
> +        cpu_halted_set(cs, 1);
>          cs->exception_index = EXCP_HLT;
>          cs->exit_request = 1;
>      }
> +    cpu_mutex_unlock(cs);
>      return H_SUCCESS;

Why does this one get extra locking?


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 11/56] sh4: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 11/56] sh4: " Emilio G. Cota
@ 2018-10-21 12:57   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 12:57 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Aurelien Jarno

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Aurelien Jarno <aurelien@aurel32.net>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/sh4/op_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 12/56] i386: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 12/56] i386: " Emilio G. Cota
@ 2018-10-21 12:59   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 12:59 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Eduardo Habkost

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/i386/cpu.h         |  2 +-
>  target/i386/cpu.c         |  2 +-
>  target/i386/hax-all.c     |  4 ++--
>  target/i386/helper.c      |  4 ++--
>  target/i386/hvf/hvf.c     |  8 ++++----
>  target/i386/hvf/x86hvf.c  |  4 ++--
>  target/i386/kvm.c         | 10 +++++-----
>  target/i386/misc_helper.c |  2 +-
>  target/i386/whpx-all.c    |  6 +++---
>  9 files changed, 21 insertions(+), 21 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 13/56] lm32: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 13/56] lm32: " Emilio G. Cota
@ 2018-10-21 13:00   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:00 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Michael Walle

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Michael Walle <michael@walle.cc>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/lm32/op_helper.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

+++ b/target/lm32/op_helper.c
> @@ -31,7 +31,7 @@ void HELPER(hlt)(CPULM32State *env)
>  {
>      CPUState *cs = CPU(lm32_env_get_cpu(env));
>  
> -    cs->halted = 1;
> +    cpu_halted_set(cs, 1);
>      cs->exception_index = EXCP_HLT;
>      cpu_loop_exit(cs);

I am beginning to think this sequence of three should be its own helper...


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 14/56] m68k: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 14/56] m68k: " Emilio G. Cota
@ 2018-10-21 13:01   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:01 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Laurent Vivier

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Laurent Vivier <laurent@vivier.eu>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/m68k/op_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 15/56] mips: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 15/56] mips: " Emilio G. Cota
@ 2018-10-21 13:02   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:02 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Aleksandar Markovic, Aurelien Jarno, James Hogan

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Aurelien Jarno <aurelien@aurel32.net>
> Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
> Cc: James Hogan <jhogan@kernel.org>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/mips/cps.c           | 2 +-
>  hw/misc/mips_itu.c      | 4 ++--
>  target/mips/kvm.c       | 2 +-
>  target/mips/op_helper.c | 8 ++++----
>  target/mips/translate.c | 4 ++--
>  5 files changed, 10 insertions(+), 10 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 17/56] s390x: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 17/56] s390x: " Emilio G. Cota
@ 2018-10-21 13:04   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:04 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: David Hildenbrand, Cornelia Huck, Alexander Graf,
	Christian Borntraeger, qemu-s390x, Paolo Bonzini

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: qemu-s390x@nongnu.org
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/intc/s390_flic.c        |  2 +-
>  target/s390x/cpu.c         | 18 +++++++++++-------
>  target/s390x/excp_helper.c |  2 +-
>  target/s390x/kvm.c         |  2 +-
>  target/s390x/sigp.c        |  8 ++++----
>  5 files changed, 18 insertions(+), 14 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 18/56] sparc: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 18/56] sparc: " Emilio G. Cota
@ 2018-10-21 13:04   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:04 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Mark Cave-Ayland, Artyom Tarasenko, Fabien Chouteau

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Fabien Chouteau <chouteau@adacore.com>
> Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Cc: Artyom Tarasenko <atar4qemu@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/sparc/leon3.c      | 2 +-
>  hw/sparc/sun4m.c      | 8 ++++----
>  hw/sparc64/sparc64.c  | 4 ++--
>  target/sparc/helper.c | 2 +-
>  4 files changed, 8 insertions(+), 8 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 19/56] xtensa: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 19/56] xtensa: " Emilio G. Cota
@ 2018-10-21 13:10   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:10 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Max Filippov

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Max Filippov <jcmvbkbc@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/xtensa/cpu.c       | 2 +-
>  target/xtensa/helper.c    | 2 +-
>  target/xtensa/op_helper.c | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 20/56] gdbstub: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 20/56] gdbstub: " Emilio G. Cota
@ 2018-10-21 13:10   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:10 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  gdbstub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 21/56] openrisc: convert to cpu_halted
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 21/56] openrisc: " Emilio G. Cota
@ 2018-10-21 13:11   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:11 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Stafford Horne

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Cc: Stafford Horne <shorne@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/openrisc/sys_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 23/56] cpu: define cpu_interrupt_request helpers
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 23/56] cpu: define cpu_interrupt_request helpers Emilio G. Cota
@ 2018-10-21 13:15   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:15 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qom/cpu.h | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 24/56] ppc: use cpu_reset_interrupt
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 24/56] ppc: use cpu_reset_interrupt Emilio G. Cota
@ 2018-10-21 13:15   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:15 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, qemu-ppc, Alexander Graf, David Gibson

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> Cc: David Gibson <david@gibson.dropbear.id.au>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: qemu-ppc@nongnu.org
> Acked-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/ppc/excp_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 25/56] exec: use cpu_reset_interrupt
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 25/56] exec: " Emilio G. Cota
@ 2018-10-21 13:17   ` Richard Henderson
  2018-10-22 23:28     ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:17 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Richard Henderson, Peter Crosthwaite

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> -    cpu->interrupt_request &= ~0x01;
> +    cpu_reset_interrupt(cpu, ~0x01);

cpu_reset_interrupt(cpu, 1);

Although this is during vmload, and I'm not sure what locks you really want to
play with here.  Perhaps it's ok...


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 26/56] i386: use cpu_reset_interrupt
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 26/56] i386: " Emilio G. Cota
@ 2018-10-21 13:18   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:18 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Eduardo Habkost, Richard Henderson

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/i386/hax-all.c    |  4 ++--
>  target/i386/hvf/x86hvf.c |  8 ++++----
>  target/i386/kvm.c        | 14 +++++++-------
>  target/i386/seg_helper.c | 13 ++++++-------
>  target/i386/svm_helper.c |  2 +-
>  target/i386/whpx-all.c   | 10 +++++-----
>  6 files changed, 25 insertions(+), 26 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 27/56] s390x: use cpu_reset_interrupt
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 27/56] s390x: " Emilio G. Cota
@ 2018-10-21 13:18   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:18 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: David Hildenbrand, Cornelia Huck, Alexander Graf, qemu-s390x,
	Paolo Bonzini, Richard Henderson

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: qemu-s390x@nongnu.org
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/s390x/excp_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 28/56] openrisc: use cpu_reset_interrupt
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 28/56] openrisc: " Emilio G. Cota
@ 2018-10-21 13:18   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:18 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Stafford Horne

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> Cc: Stafford Horne <shorne@gmail.com>
> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/openrisc/sys_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 29/56] arm: convert to cpu_interrupt_request
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 29/56] arm: convert to cpu_interrupt_request Emilio G. Cota
@ 2018-10-21 13:21   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:21 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, qemu-arm, Peter Maydell

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> +++ b/target/arm/helper.c
> @@ -1295,12 +1295,14 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
>      CPUState *cs = ENV_GET_CPU(env);
>      uint64_t ret = 0;
>  
> -    if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
> +    cpu_mutex_lock(cs);
> +    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) {
>          ret |= CPSR_I;
>      }
> -    if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
> +    if (cpu_interrupt_request(cs) & CPU_INTERRUPT_FIQ) {
>          ret |= CPSR_F;
>      }
> +    cpu_mutex_unlock(cs);
>      /* External aborts are not possible in QEMU so A bit is always clear */
>      return ret;
>  }

I think simply reading cpu_interrupt_request once into a local variable is
better, and no need for extra locking then.

Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 30/56] i386: convert to cpu_interrupt_request
  2018-10-19  1:05 ` [Qemu-devel] [RFC v3 30/56] i386: " Emilio G. Cota
@ 2018-10-21 13:27   ` Richard Henderson
  2018-10-23 20:28     ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:27 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Eduardo Habkost, Richard Henderson

On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> @@ -713,9 +713,9 @@ int hvf_vcpu_exec(CPUState *cpu)
>          switch (exit_reason) {
>          case EXIT_REASON_HLT: {
>              macvm_set_rip(cpu, rip + ins_len);
> -            if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
> +            if (!((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) &&
>                  (EFLAGS(env) & IF_MASK))
> -                && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
> +                && !(cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI) &&
>                  !(idtvec_info & VMCS_IDT_VEC_VALID)) {
>                  cpu_halted_set(cpu, 1);
>                  ret = EXCP_HLT;

Likewise wrt multiple calls.

> @@ -400,7 +401,8 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
>          };
>      }
>  
> -    if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
> +    cpu_mutex_lock(cpu_state);
> +    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_NMI) {
>          if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
>              cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
>              info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
> @@ -411,7 +413,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
>      }
>  
>      if (!(env->hflags & HF_INHIBIT_IRQ_MASK) &&
> -        (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
> +        (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) &&
>          (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
>          int line = cpu_get_pic_interrupt(&x86cpu->env);
>          cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);

Likewise.

I think you need to be more careful about this in the conversions.  Previously,
the compiler would CSE these two loads; now you're taking a lock twice.

Or in the second instance, once, since you explicitly take the lock around a
big block.  But I think that's papering over the fact that you make 4 calls
when you should have made one, *and* not hold the lock across all that code.


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 32/56] sh4: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 32/56] sh4: " Emilio G. Cota
@ 2018-10-21 13:28   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:28 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Aurelien Jarno

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Aurelien Jarno <aurelien@aurel32.net>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/sh4/cpu.c    | 2 +-
>  target/sh4/helper.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 33/56] cris: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 33/56] cris: " Emilio G. Cota
@ 2018-10-21 13:29   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:29 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Edgar E. Iglesias

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/cris/cpu.c    | 2 +-
>  target/cris/helper.c | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 34/56] hppa: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 34/56] hppa: " Emilio G. Cota
@ 2018-10-21 13:29   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:29 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Richard Henderson

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Richard Henderson <rth@twiddle.net>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/hppa/cpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 35/56] lm32: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 35/56] lm32: " Emilio G. Cota
@ 2018-10-21 13:29   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:29 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Michael Walle

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Michael Walle <michael@walle.cc>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/lm32/cpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 36/56] m68k: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 36/56] m68k: " Emilio G. Cota
@ 2018-10-21 13:29   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:29 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Laurent Vivier

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Laurent Vivier <laurent@vivier.eu>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/m68k/cpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 37/56] mips: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 37/56] mips: " Emilio G. Cota
@ 2018-10-21 13:30   ` Richard Henderson
  2018-10-22 23:38     ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:30 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Aleksandar Markovic, Aurelien Jarno, James Hogan

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> @@ -60,7 +60,7 @@ static bool mips_cpu_has_work(CPUState *cs)
>      /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
>         interrupts wake-up the CPU, however most of the implementations only
>         check for interrupts that can be taken. */
> -    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
> +    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
>          cpu_mips_hw_interrupts_pending(env)) {
>          if (cpu_mips_hw_interrupts_enabled(env) ||
>              (env->insn_flags & ISA_MIPS32R6)) {
> @@ -72,7 +72,7 @@ static bool mips_cpu_has_work(CPUState *cs)
>      if (env->CP0_Config3 & (1 << CP0C3_MT)) {
>          /* The QEMU model will issue an _WAKE request whenever the CPUs
>             should be woken up.  */
> -        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
> +        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_WAKE) {
>              has_work = true;
>          }
>  
> @@ -82,7 +82,7 @@ static bool mips_cpu_has_work(CPUState *cs)
>      }
>      /* MIPS Release 6 has the ability to halt the CPU.  */
>      if (env->CP0_Config5 & (1 << CP0C5_VP)) {
> -        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
> +        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_WAKE) {
>              has_work = true;
>          }
>          if (!mips_vp_active(env)) {

Multiple calls.


r~

^ permalink raw reply	[flat|nested] 118+ messages in thread

* Re: [Qemu-devel] [RFC v3 38/56] nios: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 38/56] nios: " Emilio G. Cota
@ 2018-10-21 13:30   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:30 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Marek Vasut, Paolo Bonzini, Chris Wulff

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Chris Wulff <crwulff@gmail.com>
> Cc: Marek Vasut <marex@denx.de>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/nios2/cpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 39/56] s390x: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 39/56] s390x: " Emilio G. Cota
@ 2018-10-21 13:30   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:30 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: David Hildenbrand, Cornelia Huck, Alexander Graf,
	Christian Borntraeger, qemu-s390x, Paolo Bonzini,
	Richard Henderson

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Alexander Graf <agraf@suse.de>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: qemu-s390x@nongnu.org
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/intc/s390_flic.c | 2 +-
>  target/s390x/cpu.c  | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 40/56] alpha: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 40/56] alpha: " Emilio G. Cota
@ 2018-10-21 13:31   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:31 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Richard Henderson

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Richard Henderson <rth@twiddle.net>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/alpha/cpu.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 41/56] moxie: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 41/56] moxie: " Emilio G. Cota
@ 2018-10-21 13:31   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:31 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Anthony Green

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Anthony Green <green@moxielogic.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/moxie/cpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 42/56] sparc: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 42/56] sparc: " Emilio G. Cota
@ 2018-10-21 13:32   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:32 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Mark Cave-Ayland, Artyom Tarasenko

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Cc: Artyom Tarasenko <atar4qemu@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/sparc64/sparc64.c | 19 +++++++++++++------
>  target/sparc/cpu.c   |  2 +-
>  2 files changed, 14 insertions(+), 7 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 43/56] openrisc: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 43/56] openrisc: " Emilio G. Cota
@ 2018-10-21 13:32   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:32 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Stafford Horne

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Stafford Horne <shorne@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  hw/openrisc/cputimer.c | 2 +-
>  target/openrisc/cpu.c  | 4 ++--
>  2 files changed, 3 insertions(+), 3 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 44/56] unicore32: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 44/56] unicore32: " Emilio G. Cota
@ 2018-10-21 13:33   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:33 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Guan Xuetao

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/unicore32/cpu.c     | 2 +-
>  target/unicore32/softmmu.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 45/56] microblaze: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 45/56] microblaze: " Emilio G. Cota
@ 2018-10-21 13:33   ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:33 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Edgar E. Iglesias

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  target/microblaze/cpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


* Re: [Qemu-devel] [RFC v3 46/56] accel/tcg: convert to cpu_interrupt_request
  2018-10-19  1:06 ` [Qemu-devel] [RFC v3 46/56] accel/tcg: " Emilio G. Cota
@ 2018-10-21 13:34   ` Richard Henderson
  2018-10-22 23:50     ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:34 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel
  Cc: Paolo Bonzini, Richard Henderson, Peter Crosthwaite

On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> @@ -540,16 +540,16 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>       */
>      atomic_mb_set(&cpu->icount_decr.u16.high, 0);
>  
> -    if (unlikely(atomic_read(&cpu->interrupt_request))) {
> +    if (unlikely(cpu_interrupt_request(cpu))) {
>          int interrupt_request;
>          qemu_mutex_lock_iothread();
> -        interrupt_request = cpu->interrupt_request;
> +        interrupt_request = cpu_interrupt_request(cpu);
>          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
>              /* Mask out external interrupts for this step. */
>              interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
>          }
>          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
> -            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
> +            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
>              cpu->exception_index = EXCP_DEBUG;
>              qemu_mutex_unlock_iothread();
>              return true;

Multiple calls.


r~


* Re: [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt
  2018-10-21 12:53   ` Richard Henderson
@ 2018-10-21 13:38     ` Richard Henderson
  2018-10-22 22:58       ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:38 UTC (permalink / raw)
  To: Emilio G. Cota, qemu-devel; +Cc: Paolo Bonzini, Laurent Vivier

On 10/21/18 1:53 PM, Richard Henderson wrote:
> On 10/19/18 2:05 AM, Emilio G. Cota wrote:
>> To avoid a name clash with the soon-to-be-defined cpu_halted() helper.
>>
>> Cc: Laurent Vivier <laurent@vivier.eu>
>> Signed-off-by: Emilio G. Cota <cota@braap.org>
>> ---
>>  target/m68k/translate.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> Although for this usage it's probably better to avoid the
> tcg_global_mem_new_i32 and just use tcg_gen_st_i32.

And, as I read further, you need to convert this use to a helper call.
Otherwise you've still got an unlocked direct modification to cpu->halted
from within the TCG generated code.

There are several other targets that do the same thing: alpha, cris, hppa,
mips, microblaze, ppc.  And typically they will do exactly the same thing: set
the flag and then raise the halt exception.


r~


* Re: [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work
  2018-10-20 16:31     ` Emilio G. Cota
@ 2018-10-21 13:42       ` Richard Henderson
  0 siblings, 0 replies; 118+ messages in thread
From: Richard Henderson @ 2018-10-21 13:42 UTC (permalink / raw)
  To: Emilio G. Cota, Paolo Bonzini
  Cc: Alexander Graf, qemu-ppc, qemu-devel, David Gibson

On 10/20/18 5:31 PM, Emilio G. Cota wrote:
> I like this. How does the appended look?
> 
> Thanks,
> 
> 		Emilio
> ---8<---
> 
> [PATCH] cpu: introduce cpu_has_work_with_iothread_lock

I might just inline cpu_has_work_bql into the one caller.
You could even share has_cpu_lock with the code there.


r~


* Re: [Qemu-devel] [RFC v3 0/56] per-CPU locks
  2018-10-19 23:46         ` Emilio G. Cota
@ 2018-10-22 15:30           ` Paolo Bonzini
  0 siblings, 0 replies; 118+ messages in thread
From: Paolo Bonzini @ 2018-10-22 15:30 UTC (permalink / raw)
  To: Emilio G. Cota
  Cc: qemu-devel, Aleksandar Markovic, Alexander Graf,
	Alistair Francis, Andrzej Zaborowski, Anthony Green,
	Artyom Tarasenko, Aurelien Jarno, Bastian Koppelmann,
	Christian Borntraeger, Chris Wulff, Cornelia Huck, David Gibson,
	David Hildenbrand, Edgar E. Iglesias, Eduardo Habkost,
	Fabien Chouteau, Guan Xuetao, James Hogan, Laurent Vivier,
	Marek Vasut, Mark Cave-Ayland, Max Filippov, Michael Clark,
	Michael Walle, Palmer Dabbelt, Pavel Dovgalyuk,
	Peter Crosthwaite, Peter Maydell, qemu-arm, qemu-ppc, qemu-s390x,
	Richard Henderson, Sagar Karandikar, Stafford Horne

On 20/10/2018 01:46, Emilio G. Cota wrote:
>> So it is possible that it was my implementation, and not the approach,
>> that was at fault :-)
> I've just observed a similar hang after adding the "BQL
> pushdown" patches on top of this series. So it's likely that the
> hangs come from those patches, and not from the work on
> cpu->interrupt_request. I just confirmed with the prior
> series, and removing the pushdown patches fixes the hangs there
> as well.

Oh well, not a big deal.  You already wrote these patches and I don't
have much time for MTTCG anyway, so I am okay with sticking with them.
Thanks!

Paolo


* Re: [Qemu-devel] [RFC v3 10/56] ppc: convert to cpu_halted
  2018-10-21 12:56   ` Richard Henderson
@ 2018-10-22 21:12     ` Emilio G. Cota
  0 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-22 21:12 UTC (permalink / raw)
  To: Richard Henderson
  Cc: qemu-devel, Paolo Bonzini, qemu-ppc, Alexander Graf, David Gibson

On Sun, Oct 21, 2018 at 13:56:59 +0100, Richard Henderson wrote:
> On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> > @@ -1088,11 +1088,13 @@ static target_ulong h_cede(PowerPCCPU *cpu, sPAPRMachineState *spapr,
> >  
> >      env->msr |= (1ULL << MSR_EE);
> >      hreg_compute_hflags(env);
> > +    cpu_mutex_lock(cs);
> >      if (!cpu_has_work(cs)) {
> > -        cs->halted = 1;
> > +        cpu_halted_set(cs, 1);
> >          cs->exception_index = EXCP_HLT;
> >          cs->exit_request = 1;
> >      }
> > +    cpu_mutex_unlock(cs);
> >      return H_SUCCESS;
> 
> Why does this one get extra locking?

It's taking into account that later in the series we
expand the CPU lock to cpu_has_work. I've added the
following note to this patch's commit log:

> In hw/ppc/spapr_hcall.c, acquire the lock just once to
> update cpu->halted and call cpu_has_work, since later
> in the series we'll acquire the BQL (if not already held)
> from cpu_has_work.

Thanks,

		Emilio


* Re: [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt
  2018-10-21 13:38     ` Richard Henderson
@ 2018-10-22 22:58       ` Emilio G. Cota
  0 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-22 22:58 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel, Paolo Bonzini, Laurent Vivier

On Sun, Oct 21, 2018 at 14:38:38 +0100, Richard Henderson wrote:
> On 10/21/18 1:53 PM, Richard Henderson wrote:
> > On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> >> To avoid a name clash with the soon-to-be-defined cpu_halted() helper.
> >>
> >> Cc: Laurent Vivier <laurent@vivier.eu>
> >> Signed-off-by: Emilio G. Cota <cota@braap.org>
> >> ---
> >>  target/m68k/translate.c | 6 +++---
> >>  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > Although for this usage it's probably better to avoid the
> > tcg_global_mem_new_i32 and just use tcg_gen_st_i32.
> 
> And, as I read further, you need to convert this use to a helper call.
> Otherwise you've still got an unlocked direct modification to cpu->halted
> from within the TCG generated code.
> 
> There are several other targets that do the same thing: alpha, cris, hppa,
> mips, microblaze, ppc.  And typically they will do exactly the same thing: set
> the flag and then raise the halt exception.

Ouch -- I entirely missed these!

For v4, I defined helper_cpu_halted_set in tcg-runtime, and
converted all direct setters to it.

Thanks,

		Emilio


* Re: [Qemu-devel] [RFC v3 25/56] exec: use cpu_reset_interrupt
  2018-10-21 13:17   ` Richard Henderson
@ 2018-10-22 23:28     ` Emilio G. Cota
  0 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-22 23:28 UTC (permalink / raw)
  To: Richard Henderson
  Cc: qemu-devel, Paolo Bonzini, Richard Henderson, Peter Crosthwaite

On Sun, Oct 21, 2018 at 14:17:01 +0100, Richard Henderson wrote:
> On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> > -    cpu->interrupt_request &= ~0x01;
> > +    cpu_reset_interrupt(cpu, ~0x01);
> 
> cpu_reset_interrupt(cpu, 1);

Ouch. Fixed.

> Although this is during vmload, and I'm not sure what locks you really want to
> play with here.  Perhaps it's ok...

I checked with check-qtest that it's OK -- note that the lock
is initialized right after the CPU thread is created.

I'd like to keep the locked version, so that race checkers don't
get confused.

Thanks,

		Emilio


* Re: [Qemu-devel] [RFC v3 37/56] mips: convert to cpu_interrupt_request
  2018-10-21 13:30   ` Richard Henderson
@ 2018-10-22 23:38     ` Emilio G. Cota
  0 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-22 23:38 UTC (permalink / raw)
  To: Richard Henderson
  Cc: qemu-devel, Paolo Bonzini, Aleksandar Markovic, Aurelien Jarno,
	James Hogan

On Sun, Oct 21, 2018 at 14:30:20 +0100, Richard Henderson wrote:
> On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> > @@ -60,7 +60,7 @@ static bool mips_cpu_has_work(CPUState *cs)
> >      /* Prior to MIPS Release 6 it is implementation dependent if non-enabled
> >         interrupts wake-up the CPU, however most of the implementations only
> >         check for interrupts that can be taken. */
> > -    if ((cs->interrupt_request & CPU_INTERRUPT_HARD) &&
> > +    if ((cpu_interrupt_request(cs) & CPU_INTERRUPT_HARD) &&
> >          cpu_mips_hw_interrupts_pending(env)) {
> >          if (cpu_mips_hw_interrupts_enabled(env) ||
> >              (env->insn_flags & ISA_MIPS32R6)) {
> > @@ -72,7 +72,7 @@ static bool mips_cpu_has_work(CPUState *cs)
> >      if (env->CP0_Config3 & (1 << CP0C3_MT)) {
> >          /* The QEMU model will issue an _WAKE request whenever the CPUs
> >             should be woken up.  */
> > -        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
> > +        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_WAKE) {
> >              has_work = true;
> >          }
> >  
> > @@ -82,7 +82,7 @@ static bool mips_cpu_has_work(CPUState *cs)
> >      }
> >      /* MIPS Release 6 has the ability to halt the CPU.  */
> >      if (env->CP0_Config5 & (1 << CP0C5_VP)) {
> > -        if (cs->interrupt_request & CPU_INTERRUPT_WAKE) {
> > +        if (cpu_interrupt_request(cs) & CPU_INTERRUPT_WAKE) {
> >              has_work = true;
> >          }
> >          if (!mips_vp_active(env)) {
> 
> Multiple calls.

Fixed, even though cpu_has_work ends up being called with
the lock held later in the series.

Thanks,

		E.


* Re: [Qemu-devel] [RFC v3 46/56] accel/tcg: convert to cpu_interrupt_request
  2018-10-21 13:34   ` Richard Henderson
@ 2018-10-22 23:50     ` Emilio G. Cota
  2018-10-23  2:17       ` Richard Henderson
  0 siblings, 1 reply; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-22 23:50 UTC (permalink / raw)
  To: Richard Henderson
  Cc: qemu-devel, Paolo Bonzini, Richard Henderson, Peter Crosthwaite

On Sun, Oct 21, 2018 at 14:34:25 +0100, Richard Henderson wrote:
> On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> > @@ -540,16 +540,16 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >       */
> >      atomic_mb_set(&cpu->icount_decr.u16.high, 0);
> >  
> > -    if (unlikely(atomic_read(&cpu->interrupt_request))) {
> > +    if (unlikely(cpu_interrupt_request(cpu))) {
> >          int interrupt_request;
> >          qemu_mutex_lock_iothread();
> > -        interrupt_request = cpu->interrupt_request;
> > +        interrupt_request = cpu_interrupt_request(cpu);
> >          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
> >              /* Mask out external interrupts for this step. */
> >              interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
> >          }
> >          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
> > -            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
> > +            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
> >              cpu->exception_index = EXCP_DEBUG;
> >              qemu_mutex_unlock_iothread();
> >              return true;
> 
> Multiple calls.

I'd rather keep it as is.

The first read takes the lock, and that has to stay unless
we want to use atomic_set on interrupt_request everywhere.

The second read happens after the BQL has been acquired;
note that to avoid deadlock we never acquire the BQL after
a CPU lock; the second (locked) read thus has to stay.

Subsequent accesses are all via cpu_reset_interrupt.
If we wanted to avoid reacquiring the lock, we'd have
to explicitly acquire the lock before the 2nd read,
and add unlocks everywhere (like the many qemu_mutex_unlock_iothread
calls), which would be ugly. But we'd also have to be careful
not to longjmp with the CPU mutex held, so we'd have to
unlock/lock around cc->cpu_exec_interrupt.

Given that the CPU lock is uncontended (so it's cheap to
acquire) and that the cases where we call cpu_reset_interrupt
are not that frequent (CPU_INTERRUPT_{DEBUG,HALT,EXITTB}),
I'd rather just keep the patch as is.

Thanks,

		Emilio


* Re: [Qemu-devel] [RFC v3 46/56] accel/tcg: convert to cpu_interrupt_request
  2018-10-22 23:50     ` Emilio G. Cota
@ 2018-10-23  2:17       ` Richard Henderson
  2018-10-23 20:21         ` Emilio G. Cota
  0 siblings, 1 reply; 118+ messages in thread
From: Richard Henderson @ 2018-10-23  2:17 UTC (permalink / raw)
  To: Emilio G. Cota, Richard Henderson
  Cc: qemu-devel, Paolo Bonzini, Peter Crosthwaite

On 10/23/18 12:50 AM, Emilio G. Cota wrote:
> On Sun, Oct 21, 2018 at 14:34:25 +0100, Richard Henderson wrote:
>> On 10/19/18 2:06 AM, Emilio G. Cota wrote:
>>> @@ -540,16 +540,16 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>>>       */
>>>      atomic_mb_set(&cpu->icount_decr.u16.high, 0);
>>>  
>>> -    if (unlikely(atomic_read(&cpu->interrupt_request))) {
>>> +    if (unlikely(cpu_interrupt_request(cpu))) {
>>>          int interrupt_request;
>>>          qemu_mutex_lock_iothread();
>>> -        interrupt_request = cpu->interrupt_request;
>>> +        interrupt_request = cpu_interrupt_request(cpu);
>>>          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
>>>              /* Mask out external interrupts for this step. */
>>>              interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
>>>          }
>>>          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
>>> -            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
>>> +            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
>>>              cpu->exception_index = EXCP_DEBUG;
>>>              qemu_mutex_unlock_iothread();
>>>              return true;
>>
>> Multiple calls.
> 
> I'd rather keep it as is.
> 
> The first read takes the lock, and that has to stay unless
> we want to use atomic_set on interrupt_request everywhere.

Why not?  That's even cheaper.

> Given that the CPU lock is uncontended (so it's cheap to
> acquire) ...

It still requires at minimum a "lock xchg" (or equivalent on non-x86), which
isn't free -- think 50-ish cycles minimum just for that one insn, plus call
overhead.


r~


* Re: [Qemu-devel] [RFC v3 46/56] accel/tcg: convert to cpu_interrupt_request
  2018-10-23  2:17       ` Richard Henderson
@ 2018-10-23 20:21         ` Emilio G. Cota
  0 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-23 20:21 UTC (permalink / raw)
  To: Richard Henderson
  Cc: Richard Henderson, qemu-devel, Paolo Bonzini, Peter Crosthwaite

On Tue, Oct 23, 2018 at 03:17:11 +0100, Richard Henderson wrote:
> On 10/23/18 12:50 AM, Emilio G. Cota wrote:
> > On Sun, Oct 21, 2018 at 14:34:25 +0100, Richard Henderson wrote:
> >> On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> >>> @@ -540,16 +540,16 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >>>       */
> >>>      atomic_mb_set(&cpu->icount_decr.u16.high, 0);
> >>>  
> >>> -    if (unlikely(atomic_read(&cpu->interrupt_request))) {
> >>> +    if (unlikely(cpu_interrupt_request(cpu))) {
> >>>          int interrupt_request;
> >>>          qemu_mutex_lock_iothread();
> >>> -        interrupt_request = cpu->interrupt_request;
> >>> +        interrupt_request = cpu_interrupt_request(cpu);
> >>>          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
> >>>              /* Mask out external interrupts for this step. */
> >>>              interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
> >>>          }
> >>>          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
> >>> -            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
> >>> +            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
> >>>              cpu->exception_index = EXCP_DEBUG;
> >>>              qemu_mutex_unlock_iothread();
> >>>              return true;
> >>
> >> Multiple calls.
> > 
> > I'd rather keep it as is.
> > 
> > The first read takes the lock, and that has to stay unless
> > we want to use atomic_set on interrupt_request everywhere.
> 
> Why not?  That's even cheaper.
> 
> > Given that the CPU lock is uncontended (so it's cheap to
> > acquire) ...
> 
> It still requires at minimum a "lock xchg" (or equivalent on non-x86), which
> isn't free -- think 50-ish cycles minimum just for that one insn, plus call
> overhead.

OK, I changed the first read to atomic_read (changing all the other
writers to atomic_set, but thanks to the helpers it's just very
few of them), and then I'm holding both the BQL + cpu->lock throughout.

Thanks,

		Emilio


* Re: [Qemu-devel] [RFC v3 30/56] i386: convert to cpu_interrupt_request
  2018-10-21 13:27   ` Richard Henderson
@ 2018-10-23 20:28     ` Emilio G. Cota
  0 siblings, 0 replies; 118+ messages in thread
From: Emilio G. Cota @ 2018-10-23 20:28 UTC (permalink / raw)
  To: Richard Henderson
  Cc: qemu-devel, Paolo Bonzini, Eduardo Habkost, Richard Henderson

On Sun, Oct 21, 2018 at 14:27:22 +0100, Richard Henderson wrote:
> On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> > @@ -713,9 +713,9 @@ int hvf_vcpu_exec(CPUState *cpu)
> >          switch (exit_reason) {
> >          case EXIT_REASON_HLT: {
> >              macvm_set_rip(cpu, rip + ins_len);
> > -            if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
> > +            if (!((cpu_interrupt_request(cpu) & CPU_INTERRUPT_HARD) &&
> >                  (EFLAGS(env) & IF_MASK))
> > -                && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) &&
> > +                && !(cpu_interrupt_request(cpu) & CPU_INTERRUPT_NMI) &&
> >                  !(idtvec_info & VMCS_IDT_VEC_VALID)) {
> >                  cpu_halted_set(cpu, 1);
> >                  ret = EXCP_HLT;
> 
> Likewise wrt multiple calls.
> 
> > @@ -400,7 +401,8 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
> >          };
> >      }
> >  
> > -    if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
> > +    cpu_mutex_lock(cpu_state);
> > +    if (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_NMI) {
> >          if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
> >              cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_NMI);
> >              info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
> > @@ -411,7 +413,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
> >      }
> >  
> >      if (!(env->hflags & HF_INHIBIT_IRQ_MASK) &&
> > -        (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
> > +        (cpu_interrupt_request(cpu_state) & CPU_INTERRUPT_HARD) &&
> >          (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
> >          int line = cpu_get_pic_interrupt(&x86cpu->env);
> >          cpu_reset_interrupt(cpu_state, CPU_INTERRUPT_HARD);
> 
> Likewise.
> 
> I think you need to be more careful about this in the conversions.  Previously,
> the compiler would CSE these two loads; now you're taking a lock twice.
> 
> Or in the second instance, once, since you explicitly take the lock around a
> big block.  But I think that's papering over the fact that you make 4 calls
> when you should have made one, *and* not hold the lock across all that code.

Yes, I'm aware of this. For a first pass I wanted to make sure no updates
would be lost, e.g.

	int interrupt_request = cpu_interrupt_request(cpu);
	if (interrupt_request & FOO) {
		do_foo(); /* sets cpu->interrupt_request | BAR */
	}
	if (interrupt_request & BAR) { /* wrongly misses BAR update */
		do_bar();
	}

I'll go through the entire patch to amend these.

Thanks,

		E.


end of thread, other threads:[~2018-10-23 20:28 UTC | newest]

Thread overview: 118+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-19  1:05 [Qemu-devel] [RFC v3 0/56] per-CPU locks Emilio G. Cota
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 01/56] cpu: convert queued work to a QSIMPLEQ Emilio G. Cota
2018-10-19  6:26   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 02/56] cpu: rename cpu->work_mutex to cpu->lock Emilio G. Cota
2018-10-19  6:26   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 03/56] cpu: introduce cpu_mutex_lock/unlock Emilio G. Cota
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 04/56] cpu: make qemu_work_cond per-cpu Emilio G. Cota
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 05/56] cpu: move run_on_cpu to cpus-common Emilio G. Cota
2018-10-19  6:39   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 06/56] cpu: introduce process_queued_cpu_work_locked Emilio G. Cota
2018-10-19  6:41   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 07/56] target/m68k: rename cpu_halted to cpu_halt Emilio G. Cota
2018-10-21 12:53   ` Richard Henderson
2018-10-21 13:38     ` Richard Henderson
2018-10-22 22:58       ` Emilio G. Cota
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 08/56] cpu: define cpu_halted helpers Emilio G. Cota
2018-10-21 12:54   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 09/56] arm: convert to cpu_halted Emilio G. Cota
2018-10-21 12:55   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 10/56] ppc: " Emilio G. Cota
2018-10-21 12:56   ` Richard Henderson
2018-10-22 21:12     ` Emilio G. Cota
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 11/56] sh4: " Emilio G. Cota
2018-10-21 12:57   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 12/56] i386: " Emilio G. Cota
2018-10-21 12:59   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 13/56] lm32: " Emilio G. Cota
2018-10-21 13:00   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 14/56] m68k: " Emilio G. Cota
2018-10-21 13:01   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 15/56] mips: " Emilio G. Cota
2018-10-21 13:02   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 16/56] riscv: " Emilio G. Cota
2018-10-19 17:24   ` Palmer Dabbelt
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 17/56] s390x: " Emilio G. Cota
2018-10-21 13:04   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 18/56] sparc: " Emilio G. Cota
2018-10-21 13:04   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 19/56] xtensa: " Emilio G. Cota
2018-10-21 13:10   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 20/56] gdbstub: " Emilio G. Cota
2018-10-21 13:10   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 21/56] openrisc: " Emilio G. Cota
2018-10-21 13:11   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 22/56] cpu-exec: " Emilio G. Cota
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 23/56] cpu: define cpu_interrupt_request helpers Emilio G. Cota
2018-10-21 13:15   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 24/56] ppc: use cpu_reset_interrupt Emilio G. Cota
2018-10-21 13:15   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 25/56] exec: " Emilio G. Cota
2018-10-21 13:17   ` Richard Henderson
2018-10-22 23:28     ` Emilio G. Cota
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 26/56] i386: " Emilio G. Cota
2018-10-21 13:18   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 27/56] s390x: " Emilio G. Cota
2018-10-21 13:18   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 28/56] openrisc: " Emilio G. Cota
2018-10-21 13:18   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 29/56] arm: convert to cpu_interrupt_request Emilio G. Cota
2018-10-21 13:21   ` Richard Henderson
2018-10-19  1:05 ` [Qemu-devel] [RFC v3 30/56] i386: " Emilio G. Cota
2018-10-21 13:27   ` Richard Henderson
2018-10-23 20:28     ` Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 31/56] ppc: " Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 32/56] sh4: " Emilio G. Cota
2018-10-21 13:28   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 33/56] cris: " Emilio G. Cota
2018-10-21 13:29   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 34/56] hppa: " Emilio G. Cota
2018-10-21 13:29   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 35/56] lm32: " Emilio G. Cota
2018-10-21 13:29   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 36/56] m68k: " Emilio G. Cota
2018-10-21 13:29   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 37/56] mips: " Emilio G. Cota
2018-10-21 13:30   ` Richard Henderson
2018-10-22 23:38     ` Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 38/56] nios: " Emilio G. Cota
2018-10-21 13:30   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 39/56] s390x: " Emilio G. Cota
2018-10-21 13:30   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 40/56] alpha: " Emilio G. Cota
2018-10-21 13:31   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 41/56] moxie: " Emilio G. Cota
2018-10-21 13:31   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 42/56] sparc: " Emilio G. Cota
2018-10-21 13:32   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 43/56] openrisc: " Emilio G. Cota
2018-10-21 13:32   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 44/56] unicore32: " Emilio G. Cota
2018-10-21 13:33   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 45/56] microblaze: " Emilio G. Cota
2018-10-21 13:33   ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 46/56] accel/tcg: " Emilio G. Cota
2018-10-21 13:34   ` Richard Henderson
2018-10-22 23:50     ` Emilio G. Cota
2018-10-23  2:17       ` Richard Henderson
2018-10-23 20:21         ` Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 47/56] cpu: call .cpu_has_work with the CPU lock held Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 48/56] ppc: acquire the BQL in cpu_has_work Emilio G. Cota
2018-10-19  6:58   ` Paolo Bonzini
2018-10-20 16:31     ` Emilio G. Cota
2018-10-21 13:42       ` Richard Henderson
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 49/56] mips: " Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 50/56] s390: " Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 51/56] riscv: " Emilio G. Cota
2018-10-19 17:24   ` Palmer Dabbelt
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 52/56] sparc: " Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 53/56] xtensa: " Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 54/56] cpu: protect most CPU state with cpu->lock Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 55/56] cpu: add async_run_on_cpu_no_bql Emilio G. Cota
2018-10-19  1:06 ` [Qemu-devel] [RFC v3 56/56] cputlb: queue async flush jobs without the BQL Emilio G. Cota
2018-10-19  6:59 ` [Qemu-devel] [RFC v3 0/56] per-CPU locks Paolo Bonzini
2018-10-19 14:50   ` Emilio G. Cota
2018-10-19 16:01     ` Paolo Bonzini
2018-10-19 19:29       ` Emilio G. Cota
2018-10-19 23:46         ` Emilio G. Cota
2018-10-22 15:30           ` Paolo Bonzini