* [PATCH v2 0/9] Fix missing memory barriers on ARM
@ 2023-03-06 22:32 Paolo Bonzini
  2023-03-06 22:32 ` [PATCH v2 1/9] qatomic: add smp_mb__before/after_rmw() Paolo Bonzini
                   ` (8 more replies)
  0 siblings, 9 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:32 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

This series fixes more instances of the problem fixed by commits
5710a3e09f9b ("async: use explicit memory barriers", 2020-04-09) and
7455ff1aa015 ("aio_wait_kick: add missing memory barrier", 2022-06-24).
This is an interesting case where ARM's memory ordering is somewhat
stronger than what you would expect.

On ARM, seqcst loads and stores (which QEMU does not use) are compiled
respectively as LDAR and STLR instructions.  Even though LDAR is
also used for load-acquire operations, it additionally waits for all
STLRs to leave the store buffer.  Thus, LDAR and STLR on their own are
load-acquire and store-release operations, but LDAR also provides
store-against-load ordering as long as the previous store is an STLR.

Compare this to ARMv7, where store-release is DMB+STR and load-acquire
is LDR+DMB, but an additional DMB is needed between store-seqcst and
load-seqcst (e.g. DMB+STR+DMB+LDR+DMB); or with x86, where MOV provides
load-acquire and store-release semantics and the two can be reordered.

Likewise, on ARM sequentially consistent read-modify-write operations only
need to use LDAXR and STLXR respectively for the load and the store, while
on x86 they need to use the stronger LOCK prefix.

In a strange twist of events, however, the _stronger_ semantics
of the ARM instructions can end up causing bugs on ARM, not on x86.
The problems occur when seqcst atomics are mixed with relaxed atomics.
Here is how the two are compiled on ARM:

                   load              store
   relaxed         LDR               STR
   seqcst          LDAR              STLR
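
As a sketch in plain C11 (using <stdatomic.h> directly, not QEMU's
qatomic_* macros), the four cases in the table map to:

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int v;

/* Relaxed accesses: compiled to plain LDR/STR on ARMv8.  */
static int  load_relaxed(void)   { return atomic_load_explicit(&v, memory_order_relaxed); }
static void store_relaxed(int i) { atomic_store_explicit(&v, i, memory_order_relaxed); }

/* Sequentially consistent accesses: compiled to LDAR/STLR on ARMv8.  */
static int  load_seqcst(void)    { return atomic_load(&v); }
static void store_seqcst(int i)  { atomic_store(&v, i); }
```

On x86, by contrast, all four compile to plain MOV; only the seqcst
store needs extra ordering (typically an XCHG or a trailing MFENCE).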

QEMU's atomics try to bridge the Linux API (which most developers
are familiar with) and the C11 API, and the two have a substantial
difference:

- in Linux, strongly-ordered atomics such as atomic_add_return() affect
  the global ordering of _all_ memory operations, including for example
  READ_ONCE()/WRITE_ONCE()

- in C11, sequentially consistent atomics (except for seqcst fences)
  only affect the ordering of sequentially consistent operations.
  In particular, since relaxed loads are done with LDR on ARM, they are
  not ordered against seqcst stores (which are done with STLR).

QEMU implements high-level synchronization primitives with the idea that
the primitives contain the necessary memory barriers, and the callers can
use relaxed atomics (qatomic_read/qatomic_set) or even regular accesses.
This is very much incompatible with the C11 view that seqcst accesses
are only ordered against other seqcst accesses, and requires using
seqcst fences as in the following example:

   qatomic_set(&y, 1);            qatomic_set(&x, 1);
   smp_mb();                      smp_mb();
   ... qatomic_read(&x) ...       ... qatomic_read(&y) ...
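
To make this concrete, here is a self-contained C11 rendition of the
pattern above (a sketch, not QEMU code: smp_mb() is modeled as
atomic_thread_fence(memory_order_seq_cst) and qatomic_set/qatomic_read
as relaxed accesses).  With both fences present, the outcome in which
neither thread observes the other's store is forbidden:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int x, y;
static int a, b;   /* values read by the two threads */

static void *t1(void *arg)
{
    atomic_store_explicit(&y, 1, memory_order_relaxed);  /* qatomic_set(&y, 1) */
    atomic_thread_fence(memory_order_seq_cst);           /* smp_mb()           */
    a = atomic_load_explicit(&x, memory_order_relaxed);  /* qatomic_read(&x)   */
    return NULL;
}

static void *t2(void *arg)
{
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
    b = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

/* Run both threads once; a == 0 && b == 0 is impossible, so the
 * result is always 1.  */
static int run_once(void)
{
    pthread_t th1, th2;

    atomic_store_explicit(&x, 0, memory_order_relaxed);
    atomic_store_explicit(&y, 0, memory_order_relaxed);
    pthread_create(&th1, NULL, t1, NULL);
    pthread_create(&th2, NULL, t2, NULL);
    pthread_join(th1, NULL);
    pthread_join(th2, NULL);
    return a | b;
}
```

(Compile with -pthread.)  Dropping either fence re-allows the a == 0,
b == 0 outcome on ARM, which is exactly the class of bug this series
fixes.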

Bugs ensue when a qatomic_*() read-modify-write operation is used instead
of one or both stores, for example:

   qatomic_<rmw>(&y, ...);
   smp_mb();
   ... qatomic_read(&x) ...

Code that was written when QEMU used the older GCC __sync_* builtins
(which did provide a full barrier with respect to loads on either side
of the operation) might omit the smp_mb().  That's exactly what yours
truly did in qemu_event_set() and qemu_event_reset().  After commit
a27dd2de68f3 ("KVM: keep track of running ioctls", 2023-01-11), this
showed up as hangs due to threads sleeping forever in qemu_event_wait().

This _could_ also have been the cause of occasional hangs of rcutorture,
though I have not observed them personally.

In order to fix this, while avoiding worse performance from having two
back-to-back memory barriers on x86, patch 1 introduces optimized
memory barriers smp_mb__before_rmw() and smp_mb__after_rmw().  The usage
is similar to Linux's smp_mb__before/after_atomic(), but the name is
different because they affect _all_ RMW operations.  On Linux, instead,
they are not needed around those RMW operations that return the old value.

The compiler does the same when compiling __sync_* atomics, i.e. it adds
an extra memory barrier after the STLXR instruction on ARM but it does
nothing on x86.
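
For illustration, here is a sketch using the GCC/Clang builtins (not
QEMU code).  Both operations return the old value, but only the legacy
__sync_* builtin implies a full barrier on both sides on ARM:

```c
#include <assert.h>
#include <stdatomic.h>

static int sync_counter;
static atomic_int c11_counter;

/* Legacy builtin: full barrier before and after the operation
 * (the compiler emits an extra barrier after the STLXR on ARM).  */
static int add_sync(int i)
{
    return __sync_fetch_and_add(&sync_counter, i);
}

/* C11 seqcst RMW: an LDAXR/STLXR loop on ARM with no trailing barrier;
 * pairing it with relaxed accesses needs smp_mb__after_rmw().  */
static int add_c11(int i)
{
    return atomic_fetch_add(&c11_counter, i);
}
```

On x86 the two compile to the same LOCK-prefixed instruction, which is
why the missing barrier only bites on ARM.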

The remaining patches add them everywhere they are needed.  In the
case of QemuEvent (patches 2-3), I reviewed the algorithm thoroughly,
dropping barriers that were not necessary and killing optimizations that
I wasn't entirely sure about.  For the other cases, instead, the changes
are minimal.

Note: I have a follow-up set of patches that completely gets rid of
qatomic_mb_read(); qatomic_mb_set() can instead remain and mimic Linux's
smp_store_mb() operation.  A glimpse of these changes is already visible
in patches 6 and 7.

Thanks to Emanuele Esposito and Gavin Shan for help debugging and
testing the changes!

Paolo

v1->v2:
- fix documentation in patch 1
- do not add barrier to util/async.c, update documentation for the
  existing ones
- leave acquire barrier in qemu_event_wait()

Paolo Bonzini (9):
  qatomic: add smp_mb__before/after_rmw()
  qemu-thread-posix: cleanup, fix, document QemuEvent
  qemu-thread-win32: cleanup, fix, document QemuEvent
  edu: add smp_mb__after_rmw()
  aio-wait: switch to smp_mb__after_rmw()
  qemu-coroutine-lock: add smp_mb__after_rmw()
  physmem: add missing memory barrier
  async: update documentation of the memory barriers
  async: clarify usage of barriers in the polling case

 docs/devel/atomics.rst     | 26 +++++++++---
 hw/misc/edu.c              |  5 +++
 include/block/aio-wait.h   |  2 +-
 include/qemu/atomic.h      | 17 +++++++-
 softmmu/physmem.c          |  3 ++
 util/async.c               | 43 ++++++++++++--------
 util/qemu-coroutine-lock.c |  9 ++++-
 util/qemu-thread-posix.c   | 69 ++++++++++++++++++++++----------
 util/qemu-thread-win32.c   | 82 ++++++++++++++++++++++++++------------
 9 files changed, 186 insertions(+), 70 deletions(-)

-- 
2.39.1



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v2 1/9] qatomic: add smp_mb__before/after_rmw()
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
@ 2023-03-06 22:32 ` Paolo Bonzini
  2023-03-06 22:32 ` [PATCH v2 2/9] qemu-thread-posix: cleanup, fix, document QemuEvent Paolo Bonzini
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:32 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

On ARM, seqcst loads and stores (which QEMU does not use) are compiled
respectively as LDAR and STLR instructions.  Even though LDAR is
also used for load-acquire operations, it additionally waits for all
STLRs to leave the store buffer.  Thus, LDAR and STLR on their own are
load-acquire and store-release operations, but LDAR also provides
store-against-load ordering as long as the previous store is an STLR.

Compare this to ARMv7, where store-release is DMB+STR and load-acquire
is LDR+DMB, but an additional DMB is needed between store-seqcst and
load-seqcst (e.g. DMB+STR+DMB+LDR+DMB); or with x86, where MOV provides
load-acquire and store-release semantics and the two can be reordered.

Likewise, on ARM sequentially consistent read-modify-write operations only
need to use LDAXR and STLXR respectively for the load and the store, while
on x86 they need to use the stronger LOCK prefix.

In a strange twist of events, however, the _stronger_ semantics
of the ARM instructions can end up causing bugs on ARM, not on x86.
The problems occur when seqcst atomics are mixed with relaxed atomics.

QEMU's atomics try to bridge the Linux API (which most developers
are familiar with) and the C11 API, and the two have a substantial
difference:

- in Linux, strongly-ordered atomics such as atomic_add_return() affect
  the global ordering of _all_ memory operations, including for example
  READ_ONCE()/WRITE_ONCE()

- in C11, sequentially consistent atomics (except for seqcst fences)
  only affect the ordering of sequentially consistent operations.
  In particular, since relaxed loads are done with LDR on ARM, they are
  not ordered against seqcst stores (which are done with STLR).

QEMU implements high-level synchronization primitives with the idea that
the primitives contain the necessary memory barriers, and the callers can
use relaxed atomics (qatomic_read/qatomic_set) or even regular accesses.
This is very much incompatible with the C11 view that seqcst accesses
are only ordered against other seqcst accesses, and requires using seqcst
fences as in the following example:

   qatomic_set(&y, 1);            qatomic_set(&x, 1);
   smp_mb();                      smp_mb();
   ... qatomic_read(&x) ...       ... qatomic_read(&y) ...

When a qatomic_*() read-modify-write operation is used instead of one
or both stores, developers who are more familiar with the Linux API may
be tempted to omit the smp_mb(), which will work on x86 but not on ARM.

This nasty difference between Linux and C11 read-modify-write operations
has already caused issues in util/async.c and more are being found.
Provide something similar to Linux smp_mb__before/after_atomic(); this
has the double function of documenting clearly why there is a memory
barrier, and avoiding a double barrier on x86 and s390x systems.

The new macro can already be put to use in qatomic_mb_set().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/devel/atomics.rst | 26 +++++++++++++++++++++-----
 include/qemu/atomic.h  | 17 ++++++++++++++++-
 2 files changed, 37 insertions(+), 6 deletions(-)

diff --git a/docs/devel/atomics.rst b/docs/devel/atomics.rst
index 7957310071d9..633df65a97bc 100644
--- a/docs/devel/atomics.rst
+++ b/docs/devel/atomics.rst
@@ -27,7 +27,8 @@ provides macros that fall in three camps:
 
 - weak atomic access and manual memory barriers: ``qatomic_read()``,
   ``qatomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``,
-  ``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``;
+  ``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``,
+  ``smp_mb__before_rmw()``, ``smp_mb__after_rmw()``;
 
 - sequentially consistent atomic access: everything else.
 
@@ -472,7 +473,7 @@ and memory barriers, and the equivalents in QEMU:
   sequential consistency.
 
 - in QEMU, ``qatomic_read()`` and ``qatomic_set()`` do not participate in
-  the total ordering enforced by sequentially-consistent operations.
+  the ordering enforced by read-modify-write operations.
   This is because QEMU uses the C11 memory model.  The following example
   is correct in Linux but not in QEMU:
 
@@ -488,9 +489,24 @@ and memory barriers, and the equivalents in QEMU:
   because the read of ``y`` can be moved (by either the processor or the
   compiler) before the write of ``x``.
 
-  Fixing this requires an ``smp_mb()`` memory barrier between the write
-  of ``x`` and the read of ``y``.  In the common case where only one thread
-  writes ``x``, it is also possible to write it like this:
+  Fixing this requires a full memory barrier between the write of ``x`` and
+  the read of ``y``.  QEMU provides ``smp_mb__before_rmw()`` and
+  ``smp_mb__after_rmw()``; they act both as an optimization,
+  avoiding the memory barrier on processors where it is unnecessary,
+  and as a clarification of this corner case of the C11 memory model:
+
+      +--------------------------------+
+      | QEMU (correct)                 |
+      +================================+
+      | ::                             |
+      |                                |
+      |   a = qatomic_fetch_add(&x, 2);|
+      |   smp_mb__after_rmw();         |
+      |   b = qatomic_read(&y);        |
+      +--------------------------------+
+
+  In the common case where only one thread writes ``x``, it is also possible
+  to write it like this:
 
       +--------------------------------+
       | QEMU (correct)                 |
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 874134fd19ae..f85834ee8b20 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -245,6 +245,20 @@
 #define smp_wmb()   smp_mb_release()
 #define smp_rmb()   smp_mb_acquire()
 
+/*
+ * SEQ_CST is weaker than the older __sync_* builtins and Linux
+ * kernel read-modify-write atomics.  Provide a macro to obtain
+ * the same semantics.
+ */
+#if !defined(QEMU_SANITIZE_THREAD) && \
+    (defined(__i386__) || defined(__x86_64__) || defined(__s390x__))
+# define smp_mb__before_rmw() signal_barrier()
+# define smp_mb__after_rmw() signal_barrier()
+#else
+# define smp_mb__before_rmw() smp_mb()
+# define smp_mb__after_rmw() smp_mb()
+#endif
+
 /* qatomic_mb_read/set semantics map Java volatile variables. They are
  * less expensive on some platforms (notably POWER) than fully
  * sequentially consistent operations.
@@ -259,7 +273,8 @@
 #if !defined(QEMU_SANITIZE_THREAD) && \
     (defined(__i386__) || defined(__x86_64__) || defined(__s390x__))
 /* This is more efficient than a store plus a fence.  */
-# define qatomic_mb_set(ptr, i)  ((void)qatomic_xchg(ptr, i))
+# define qatomic_mb_set(ptr, i) \
+    ({ (void)qatomic_xchg(ptr, i); smp_mb__after_rmw(); })
 #else
 # define qatomic_mb_set(ptr, i) \
    ({ qatomic_store_release(ptr, i); smp_mb(); })
-- 
2.39.1




* [PATCH v2 2/9] qemu-thread-posix: cleanup, fix, document QemuEvent
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
  2023-03-06 22:32 ` [PATCH v2 1/9] qatomic: add smp_mb__before/after_rmw() Paolo Bonzini
@ 2023-03-06 22:32 ` Paolo Bonzini
  2023-03-06 22:33 ` [PATCH v2 3/9] qemu-thread-win32: " Paolo Bonzini
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:32 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

QemuEvent is currently broken on ARM due to missing memory barriers
after qatomic_*().  Apart from adding the memory barrier, a closer look
reveals some unpaired memory barriers too.  Document more clearly what
is going on.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/qemu-thread-posix.c | 69 ++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 20 deletions(-)

diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index 93d250579741..02f674b207cc 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -384,13 +384,21 @@ void qemu_event_destroy(QemuEvent *ev)
 
 void qemu_event_set(QemuEvent *ev)
 {
-    /* qemu_event_set has release semantics, but because it *loads*
+    assert(ev->initialized);
+
+    /*
+     * Pairs with both qemu_event_reset() and qemu_event_wait().
+     *
+     * qemu_event_set has release semantics, but because it *loads*
      * ev->value we need a full memory barrier here.
      */
-    assert(ev->initialized);
     smp_mb();
     if (qatomic_read(&ev->value) != EV_SET) {
-        if (qatomic_xchg(&ev->value, EV_SET) == EV_BUSY) {
+        int old = qatomic_xchg(&ev->value, EV_SET);
+
+        /* Pairs with memory barrier in kernel futex_wait system call.  */
+        smp_mb__after_rmw();
+        if (old == EV_BUSY) {
             /* There were waiters, wake them up.  */
             qemu_futex_wake(ev, INT_MAX);
         }
@@ -399,18 +407,19 @@ void qemu_event_set(QemuEvent *ev)
 
 void qemu_event_reset(QemuEvent *ev)
 {
-    unsigned value;
-
     assert(ev->initialized);
-    value = qatomic_read(&ev->value);
-    smp_mb_acquire();
-    if (value == EV_SET) {
-        /*
-         * If there was a concurrent reset (or even reset+wait),
-         * do nothing.  Otherwise change EV_SET->EV_FREE.
-         */
-        qatomic_or(&ev->value, EV_FREE);
-    }
+
+    /*
+     * If there was a concurrent reset (or even reset+wait),
+     * do nothing.  Otherwise change EV_SET->EV_FREE.
+     */
+    qatomic_or(&ev->value, EV_FREE);
+
+    /*
+     * Order reset before checking the condition in the caller.
+     * Pairs with the first memory barrier in qemu_event_set().
+     */
+    smp_mb__after_rmw();
 }
 
 void qemu_event_wait(QemuEvent *ev)
@@ -418,20 +427,40 @@ void qemu_event_wait(QemuEvent *ev)
     unsigned value;
 
     assert(ev->initialized);
-    value = qatomic_read(&ev->value);
-    smp_mb_acquire();
+
+    /*
+     * qemu_event_wait must synchronize with qemu_event_set even if it does
+     * not go down the slow path, so this load-acquire is needed that
+     * synchronizes with the first memory barrier in qemu_event_set().
+     *
+     * If we do go down the slow path, there is no requirement at all: we
+     * might miss a qemu_event_set() here but ultimately the memory barrier in
+     * qemu_futex_wait() will ensure the check is done correctly.
+     */
+    value = qatomic_load_acquire(&ev->value);
     if (value != EV_SET) {
         if (value == EV_FREE) {
             /*
-             * Leave the event reset and tell qemu_event_set that there
-             * are waiters.  No need to retry, because there cannot be
-             * a concurrent busy->free transition.  After the CAS, the
-             * event will be either set or busy.
+             * Leave the event reset and tell qemu_event_set that there are
+             * waiters.  No need to retry, because there cannot be a concurrent
+             * busy->free transition.  After the CAS, the event will be either
+             * set or busy.
+             *
+             * This cmpxchg doesn't have particular ordering requirements if it
+             * succeeds (moving the store earlier can only cause qemu_event_set()
+             * to issue _more_ wakeups), the failing case needs acquire semantics
+             * like the load above.
              */
             if (qatomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
                 return;
             }
         }
+
+        /*
+         * This is the final check for a concurrent set, so it does need
+         * a smp_mb() pairing with the second barrier of qemu_event_set().
+         * The barrier is inside the FUTEX_WAIT system call.
+         */
         qemu_futex_wait(ev, EV_BUSY);
     }
 }
-- 
2.39.1




* [PATCH v2 3/9] qemu-thread-win32: cleanup, fix, document QemuEvent
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
  2023-03-06 22:32 ` [PATCH v2 1/9] qatomic: add smp_mb__before/after_rmw() Paolo Bonzini
  2023-03-06 22:32 ` [PATCH v2 2/9] qemu-thread-posix: cleanup, fix, document QemuEvent Paolo Bonzini
@ 2023-03-06 22:33 ` Paolo Bonzini
  2023-03-06 22:33 ` [PATCH v2 4/9] edu: add smp_mb__after_rmw() Paolo Bonzini
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:33 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

QemuEvent is currently broken on ARM due to missing memory barriers
after qatomic_*().  Apart from adding the memory barrier, a closer look
reveals some unpaired memory barriers that were not really needed and
that complicated the functions unnecessarily.  Also, the code relied on
a memory barrier in ResetEvent(); the barrier _ought_ to be there,
but there is really no documentation about it, so make it explicit.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/qemu-thread-win32.c | 82 +++++++++++++++++++++++++++-------------
 1 file changed, 56 insertions(+), 26 deletions(-)

diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
index 69db254ac7c1..970ba5642d7b 100644
--- a/util/qemu-thread-win32.c
+++ b/util/qemu-thread-win32.c
@@ -272,12 +272,20 @@ void qemu_event_destroy(QemuEvent *ev)
 void qemu_event_set(QemuEvent *ev)
 {
     assert(ev->initialized);
-    /* qemu_event_set has release semantics, but because it *loads*
+
+    /*
+     * Pairs with both qemu_event_reset() and qemu_event_wait().
+     *
+     * qemu_event_set has release semantics, but because it *loads*
      * ev->value we need a full memory barrier here.
      */
     smp_mb();
     if (qatomic_read(&ev->value) != EV_SET) {
-        if (qatomic_xchg(&ev->value, EV_SET) == EV_BUSY) {
+        int old = qatomic_xchg(&ev->value, EV_SET);
+
+        /* Pairs with memory barrier after ResetEvent.  */
+        smp_mb__after_rmw();
+        if (old == EV_BUSY) {
             /* There were waiters, wake them up.  */
             SetEvent(ev->event);
         }
@@ -286,17 +294,19 @@ void qemu_event_set(QemuEvent *ev)
 
 void qemu_event_reset(QemuEvent *ev)
 {
-    unsigned value;
-
     assert(ev->initialized);
-    value = qatomic_read(&ev->value);
-    smp_mb_acquire();
-    if (value == EV_SET) {
-        /* If there was a concurrent reset (or even reset+wait),
-         * do nothing.  Otherwise change EV_SET->EV_FREE.
-         */
-        qatomic_or(&ev->value, EV_FREE);
-    }
+
+    /*
+     * If there was a concurrent reset (or even reset+wait),
+     * do nothing.  Otherwise change EV_SET->EV_FREE.
+     */
+    qatomic_or(&ev->value, EV_FREE);
+
+    /*
+     * Order reset before checking the condition in the caller.
+     * Pairs with the first memory barrier in qemu_event_set().
+     */
+    smp_mb__after_rmw();
 }
 
 void qemu_event_wait(QemuEvent *ev)
@@ -304,29 +314,49 @@ void qemu_event_wait(QemuEvent *ev)
     unsigned value;
 
     assert(ev->initialized);
-    value = qatomic_read(&ev->value);
-    smp_mb_acquire();
+
+    /*
+     * qemu_event_wait must synchronize with qemu_event_set even if it does
+     * not go down the slow path, so this load-acquire is needed that
+     * synchronizes with the first memory barrier in qemu_event_set().
+     *
+     * If we do go down the slow path, there is no requirement at all: we
+     * might miss a qemu_event_set() here but ultimately the memory barrier in
+     * qemu_futex_wait() will ensure the check is done correctly.
+     */
+    value = qatomic_load_acquire(&ev->value);
     if (value != EV_SET) {
         if (value == EV_FREE) {
-            /* qemu_event_set is not yet going to call SetEvent, but we are
-             * going to do another check for EV_SET below when setting EV_BUSY.
-             * At that point it is safe to call WaitForSingleObject.
+            /*
+             * Here the underlying kernel event is reset, but qemu_event_set is
+             * not yet going to call SetEvent.  However, there will be another
+             * check for EV_SET below when setting EV_BUSY.  At that point it
+             * is safe to call WaitForSingleObject.
              */
             ResetEvent(ev->event);
 
-            /* Tell qemu_event_set that there are waiters.  No need to retry
-             * because there cannot be a concurrent busy->free transition.
-             * After the CAS, the event will be either set or busy.
+            /*
+             * It is not clear whether ResetEvent provides this barrier; kernel
+             * APIs (KeResetEvent/KeClearEvent) do not.  Better safe than sorry!
+             */
+            smp_mb();
+
+            /*
+             * Leave the event reset and tell qemu_event_set that there are
+             * waiters.  No need to retry, because there cannot be a concurrent
+             * busy->free transition.  After the CAS, the event will be either
+             * set or busy.
              */
             if (qatomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
-                value = EV_SET;
-            } else {
-                value = EV_BUSY;
+                return;
             }
         }
-        if (value == EV_BUSY) {
-            WaitForSingleObject(ev->event, INFINITE);
-        }
+
+        /*
+         * ev->value is now EV_BUSY.  Since we didn't observe EV_SET,
+         * qemu_event_set() must observe EV_BUSY and call SetEvent().
+         */
+        WaitForSingleObject(ev->event, INFINITE);
     }
 }
 
-- 
2.39.1




* [PATCH v2 4/9] edu: add smp_mb__after_rmw()
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
                   ` (2 preceding siblings ...)
  2023-03-06 22:33 ` [PATCH v2 3/9] qemu-thread-win32: " Paolo Bonzini
@ 2023-03-06 22:33 ` Paolo Bonzini
  2023-03-06 22:33 ` [PATCH v2 5/9] aio-wait: switch to smp_mb__after_rmw() Paolo Bonzini
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:33 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

Ensure ordering between clearing the COMPUTING flag and checking
IRQFACT, and between setting the IRQFACT flag and checking
COMPUTING.  This ensures that no wakeups are lost.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/misc/edu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/misc/edu.c b/hw/misc/edu.c
index e935c418d400..a1f8bc77e770 100644
--- a/hw/misc/edu.c
+++ b/hw/misc/edu.c
@@ -267,6 +267,8 @@ static void edu_mmio_write(void *opaque, hwaddr addr, uint64_t val,
     case 0x20:
         if (val & EDU_STATUS_IRQFACT) {
             qatomic_or(&edu->status, EDU_STATUS_IRQFACT);
+            /* Order check of the COMPUTING flag after setting IRQFACT.  */
+            smp_mb__after_rmw();
         } else {
             qatomic_and(&edu->status, ~EDU_STATUS_IRQFACT);
         }
@@ -349,6 +351,9 @@ static void *edu_fact_thread(void *opaque)
         qemu_mutex_unlock(&edu->thr_mutex);
         qatomic_and(&edu->status, ~EDU_STATUS_COMPUTING);
 
+        /* Clear COMPUTING flag before checking IRQFACT.  */
+        smp_mb__after_rmw();
+
         if (qatomic_read(&edu->status) & EDU_STATUS_IRQFACT) {
             qemu_mutex_lock_iothread();
             edu_raise_irq(edu, FACT_IRQ);
-- 
2.39.1




* [PATCH v2 5/9] aio-wait: switch to smp_mb__after_rmw()
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
                   ` (3 preceding siblings ...)
  2023-03-06 22:33 ` [PATCH v2 4/9] edu: add smp_mb__after_rmw() Paolo Bonzini
@ 2023-03-06 22:33 ` Paolo Bonzini
  2023-03-07 10:50   ` David Hildenbrand
  2023-03-06 22:33 ` [PATCH v2 6/9] qemu-coroutine-lock: add smp_mb__after_rmw() Paolo Bonzini
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:33 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

The barrier comes after an atomic increment, so it is enough to use
smp_mb__after_rmw(); this avoids a double barrier on x86 systems.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/block/aio-wait.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index dd9a7f6461ef..da13357bb8cf 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -85,7 +85,7 @@ extern AioWait global_aio_wait;
     /* Increment wait_->num_waiters before evaluating cond. */     \
     qatomic_inc(&wait_->num_waiters);                              \
     /* Paired with smp_mb in aio_wait_kick(). */                   \
-    smp_mb();                                                      \
+    smp_mb__after_rmw();                                           \
     if (ctx_ && in_aio_context_home_thread(ctx_)) {                \
         while ((cond)) {                                           \
             aio_poll(ctx_, true);                                  \
-- 
2.39.1




* [PATCH v2 6/9] qemu-coroutine-lock: add smp_mb__after_rmw()
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
                   ` (4 preceding siblings ...)
  2023-03-06 22:33 ` [PATCH v2 5/9] aio-wait: switch to smp_mb__after_rmw() Paolo Bonzini
@ 2023-03-06 22:33 ` Paolo Bonzini
  2023-03-06 22:33 ` [PATCH v2 7/9] physmem: add missing memory barrier Paolo Bonzini
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:33 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

mutex->from_push and mutex->handoff in qemu-coroutine-lock implement
the familiar pattern:

   write a                                  write b
   smp_mb()                                 smp_mb()
   read b                                   read a

The memory barrier is required by the C memory model even after a
SEQ_CST read-modify-write operation such as QSLIST_INSERT_HEAD_ATOMIC.
Add it and avoid the unclear qatomic_mb_read() operation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/qemu-coroutine-lock.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
index 58f3f771817b..84a50a9e9117 100644
--- a/util/qemu-coroutine-lock.c
+++ b/util/qemu-coroutine-lock.c
@@ -201,10 +201,16 @@ static void coroutine_fn qemu_co_mutex_lock_slowpath(AioContext *ctx,
     trace_qemu_co_mutex_lock_entry(mutex, self);
     push_waiter(mutex, &w);
 
+    /*
+     * Add waiter before reading mutex->handoff.  Pairs with qatomic_mb_set
+     * in qemu_co_mutex_unlock.
+     */
+    smp_mb__after_rmw();
+
     /* This is the "Responsibility Hand-Off" protocol; a lock() picks from
      * a concurrent unlock() the responsibility of waking somebody up.
      */
-    old_handoff = qatomic_mb_read(&mutex->handoff);
+    old_handoff = qatomic_read(&mutex->handoff);
     if (old_handoff &&
         has_waiters(mutex) &&
         qatomic_cmpxchg(&mutex->handoff, old_handoff, 0) == old_handoff) {
@@ -303,6 +309,7 @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
         }
 
         our_handoff = mutex->sequence;
+        /* Set handoff before checking for waiters.  */
         qatomic_mb_set(&mutex->handoff, our_handoff);
         if (!has_waiters(mutex)) {
             /* The concurrent lock has not added itself yet, so it
-- 
2.39.1




* [PATCH v2 7/9] physmem: add missing memory barrier
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
                   ` (5 preceding siblings ...)
  2023-03-06 22:33 ` [PATCH v2 6/9] qemu-coroutine-lock: add smp_mb__after_rmw() Paolo Bonzini
@ 2023-03-06 22:33 ` Paolo Bonzini
  2023-03-06 22:33 ` [PATCH v2 8/9] async: update documentation of the memory barriers Paolo Bonzini
  2023-03-06 22:33 ` [PATCH v2 9/9] async: clarify usage of barriers in the polling case Paolo Bonzini
  8 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:33 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 softmmu/physmem.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 47143edb4f6c..a6efd8e8dd11 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -2927,6 +2927,8 @@ void cpu_register_map_client(QEMUBH *bh)
     qemu_mutex_lock(&map_client_list_lock);
     client->bh = bh;
     QLIST_INSERT_HEAD(&map_client_list, client, link);
+    /* Write map_client_list before reading in_use.  */
+    smp_mb();
     if (!qatomic_read(&bounce.in_use)) {
         cpu_notify_map_clients_locked();
     }
@@ -3116,6 +3118,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
     qemu_vfree(bounce.buffer);
     bounce.buffer = NULL;
     memory_region_unref(bounce.mr);
+    /* Clear in_use before reading map_client_list.  */
     qatomic_mb_set(&bounce.in_use, false);
     cpu_notify_map_clients();
 }
-- 
2.39.1




* [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
                   ` (6 preceding siblings ...)
  2023-03-06 22:33 ` [PATCH v2 7/9] physmem: add missing memory barrier Paolo Bonzini
@ 2023-03-06 22:33 ` Paolo Bonzini
  2023-03-06 22:48   ` Stefan Hajnoczi
  2023-03-06 23:39   ` Richard Henderson
  2023-03-06 22:33 ` [PATCH v2 9/9] async: clarify usage of barriers in the polling case Paolo Bonzini
  8 siblings, 2 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:33 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

Ever since commit 8c6b0356b539 ("util/async: make bh_aio_poll() O(1)",
2020-02-22), synchronization between qemu_bh_schedule() and aio_bh_poll()
happens when the bottom half is enqueued in the bh_list, not
when the flags are set.  Update the documentation to match.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/async.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/util/async.c b/util/async.c
index 0657b7539777..e4b494150e7d 100644
--- a/util/async.c
+++ b/util/async.c
@@ -74,14 +74,21 @@ static void aio_bh_enqueue(QEMUBH *bh, unsigned new_flags)
     unsigned old_flags;
 
     /*
-     * The memory barrier implicit in qatomic_fetch_or makes sure that:
-     * 1. idle & any writes needed by the callback are done before the
-     *    locations are read in the aio_bh_poll.
-     * 2. ctx is loaded before the callback has a chance to execute and bh
-     *    could be freed.
+     * Synchronizes with atomic_fetch_and() in aio_bh_dequeue(), ensuring that
+     * insertion starts after BH_PENDING is set.
      */
     old_flags = qatomic_fetch_or(&bh->flags, BH_PENDING | new_flags);
+
     if (!(old_flags & BH_PENDING)) {
+        /*
+         * At this point the bottom half becomes visible to aio_bh_poll().
+         * This insertion thus synchronizes with QSLIST_MOVE_ATOMIC in
+         * aio_bh_poll(), ensuring that:
+         * 1. any writes needed by the callback are visible from the callback
+         *    after aio_bh_dequeue() returns bh.
+         * 2. ctx is loaded before the callback has a chance to execute and bh
+         *    could be freed.
+         */
         QSLIST_INSERT_HEAD_ATOMIC(&ctx->bh_list, bh, next);
     }
 
@@ -107,11 +114,8 @@ static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags)
     QSLIST_REMOVE_HEAD(head, next);
 
     /*
-     * The qatomic_and is paired with aio_bh_enqueue().  The implicit memory
-     * barrier ensures that the callback sees all writes done by the scheduling
-     * thread.  It also ensures that the scheduling thread sees the cleared
-     * flag before bh->cb has run, and thus will call aio_notify again if
-     * necessary.
+     * Synchronizes with qatomic_fetch_or() in aio_bh_enqueue(), ensuring that
+     * the removal finishes before BH_PENDING is reset.
      */
     *flags = qatomic_fetch_and(&bh->flags,
                               ~(BH_PENDING | BH_SCHEDULED | BH_IDLE));
@@ -158,6 +162,7 @@ int aio_bh_poll(AioContext *ctx)
     BHListSlice *s;
     int ret = 0;
 
+    /* Synchronizes with QSLIST_INSERT_HEAD_ATOMIC in aio_bh_enqueue().  */
     QSLIST_MOVE_ATOMIC(&slice.bh_list, &ctx->bh_list);
     QSIMPLEQ_INSERT_TAIL(&ctx->bh_slice_list, &slice, next);
 
@@ -448,15 +453,15 @@ LuringState *aio_get_linux_io_uring(AioContext *ctx)
 void aio_notify(AioContext *ctx)
 {
     /*
-     * Write e.g. bh->flags before writing ctx->notified.  Pairs with smp_mb in
-     * aio_notify_accept.
+     * Write e.g. ctx->bh_list before writing ctx->notified.  Pairs with
+     * smp_mb() in aio_notify_accept().
      */
     smp_wmb();
     qatomic_set(&ctx->notified, true);
 
     /*
-     * Write ctx->notified before reading ctx->notify_me.  Pairs
-     * with smp_mb in aio_ctx_prepare or aio_poll.
+     * Write ctx->notified (and also ctx->bh_list) before reading ctx->notify_me.
+     * Pairs with smp_mb() in aio_ctx_prepare or aio_poll.
      */
     smp_mb();
     if (qatomic_read(&ctx->notify_me)) {
-- 
2.39.1




* [PATCH v2 9/9] async: clarify usage of barriers in the polling case
  2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
                   ` (7 preceding siblings ...)
  2023-03-06 22:33 ` [PATCH v2 8/9] async: update documentation of the memory barriers Paolo Bonzini
@ 2023-03-06 22:33 ` Paolo Bonzini
  2023-03-06 22:49   ` Stefan Hajnoczi
  2023-03-07 10:51   ` David Hildenbrand
  8 siblings, 2 replies; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-06 22:33 UTC (permalink / raw)
  To: qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger, richard.henderson

Explain that aio_context_notifier_poll() relies on
aio_notify_accept() to catch all the memory writes that were
done before ctx->notified was set to true.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/async.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/util/async.c b/util/async.c
index e4b494150e7d..21016a1ac7c1 100644
--- a/util/async.c
+++ b/util/async.c
@@ -474,8 +474,9 @@ void aio_notify_accept(AioContext *ctx)
     qatomic_set(&ctx->notified, false);
 
     /*
-     * Write ctx->notified before reading e.g. bh->flags.  Pairs with smp_wmb
-     * in aio_notify.
+     * Order reads of ctx->notified (in aio_context_notifier_poll()) and the
+     * above clearing of ctx->notified before reads of e.g. bh->flags.  Pairs
+     * with smp_wmb() in aio_notify.
      */
     smp_mb();
 }
@@ -498,6 +499,11 @@ static bool aio_context_notifier_poll(void *opaque)
     EventNotifier *e = opaque;
     AioContext *ctx = container_of(e, AioContext, notifier);
 
+    /*
+     * No need for load-acquire because we just want to kick the
+     * event loop.  aio_notify_accept() takes care of synchronizing
+     * the event loop with the producers.
+     */
     return qatomic_read(&ctx->notified);
 }
 
-- 
2.39.1




* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-06 22:33 ` [PATCH v2 8/9] async: update documentation of the memory barriers Paolo Bonzini
@ 2023-03-06 22:48   ` Stefan Hajnoczi
  2023-03-06 23:39   ` Richard Henderson
  1 sibling, 0 replies; 23+ messages in thread
From: Stefan Hajnoczi @ 2023-03-06 22:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, gshan, eesposit, david, stefanha, cohuck, eauger,
	richard.henderson

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [PATCH v2 9/9] async: clarify usage of barriers in the polling case
  2023-03-06 22:33 ` [PATCH v2 9/9] async: clarify usage of barriers in the polling case Paolo Bonzini
@ 2023-03-06 22:49   ` Stefan Hajnoczi
  2023-03-07 10:51   ` David Hildenbrand
  1 sibling, 0 replies; 23+ messages in thread
From: Stefan Hajnoczi @ 2023-03-06 22:49 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, gshan, eesposit, david, stefanha, cohuck, eauger,
	richard.henderson

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-06 22:33 ` [PATCH v2 8/9] async: update documentation of the memory barriers Paolo Bonzini
  2023-03-06 22:48   ` Stefan Hajnoczi
@ 2023-03-06 23:39   ` Richard Henderson
  2023-03-07 10:49     ` Paolo Bonzini
  1 sibling, 1 reply; 23+ messages in thread
From: Richard Henderson @ 2023-03-06 23:39 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/6/23 14:33, Paolo Bonzini wrote:
> @@ -107,11 +114,8 @@ static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags)
>       QSLIST_REMOVE_HEAD(head, next);
>   
>       /*
> -     * The qatomic_and is paired with aio_bh_enqueue().  The implicit memory
> -     * barrier ensures that the callback sees all writes done by the scheduling
> -     * thread.  It also ensures that the scheduling thread sees the cleared
> -     * flag before bh->cb has run, and thus will call aio_notify again if
> -     * necessary.
> +     * Synchronizes with qatomic_fetch_or() in aio_bh_enqueue(), ensuring that
> +     * the removal finishes before BH_PENDING is reset.
>        */
>       *flags = qatomic_fetch_and(&bh->flags,

Per this new comment, about the remove finishing first, it would seem that we need 
smp_mb__before_rmw here, because QSLIST_REMOVE_HEAD is not SEQCST.


r~



* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-06 23:39   ` Richard Henderson
@ 2023-03-07 10:49     ` Paolo Bonzini
  2023-03-07 15:54       ` Richard Henderson
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-07 10:49 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/7/23 00:39, Richard Henderson wrote:
> On 3/6/23 14:33, Paolo Bonzini wrote:
>> @@ -107,11 +114,8 @@ static QEMUBH *aio_bh_dequeue(BHList *head, 
>> unsigned *flags)
>>       QSLIST_REMOVE_HEAD(head, next);
>>       /*
>> -     * The qatomic_and is paired with aio_bh_enqueue().  The implicit 
>> memory
>> -     * barrier ensures that the callback sees all writes done by the 
>> scheduling
>> -     * thread.  It also ensures that the scheduling thread sees the 
>> cleared
>> -     * flag before bh->cb has run, and thus will call aio_notify 
>> again if
>> -     * necessary.
>> +     * Synchronizes with qatomic_fetch_or() in aio_bh_enqueue(), 
>> ensuring that
>> +     * the removal finishes before BH_PENDING is reset.
>>        */
>>       *flags = qatomic_fetch_and(&bh->flags,
> 
> Per this new comment, about the remove finishing first, it would seem 
> that we need smp_mb__before_rmw here, because QSLIST_REMOVE_HEAD is not 
> SEQCST.

Only release-acquire is needed for consistent access to the list; seqcst 
and smp_mb__before/after_rmw() are only needed (as expected) to handle 
wakeup.

Just to be safe, I tried modeling this with cppmem 
(http://svr-pes20-cppmem.cl.cam.ac.uk/cppmem/); support for 
compare-and-swap is very limited, therefore the test looks nothing like 
the C code(*), but it should be ok:

int main() {
   atomic_int x = 0;
   atomic_int y = 0;
   {{{
       { cas_strong_explicit(&x, 0, 1, mo_acquire, mo_acquire);
         x.load(mo_relaxed).readsvalue(1); // fetch_or returned 0
//      r1 = y.load(mo_relaxed);
         y.store(1, mo_release); }         // bh inserted

   ||| { y.load(mo_acquire).readsvalue(1); // bh was in list
         y.store(0, mo_relaxed);           // bh removed
//      r1 = x.load(mo_relaxed);
         x.store(0, mo_release); }         // fetch_and

   ||| { cas_strong_explicit(&x, 0, 2, mo_acquire, mo_acquire);
         x.load(mo_relaxed).readsvalue(2); // fetch_or returned 0
//      r1 = y.load(mo_relaxed);
         y.store(2, mo_release); }         // bh inserted

   }}};
   return 0;
}

You can uncomment the debugging instructions (r1 = ...) too, but 
uncommenting all three at once will blow up the model.

Use the release_acquire model since it's faster and there are no seqcst 
operations in the above test.  It will take 1-2 minutes to run the model 
on a laptop, and the result shows that the only consistent (i.e. allowed 
by the C standard) execution is the one where thread 0 runs entirely 
before thread 1, and thread 1 runs entirely before thread 2.  You get 
the opposite order if the "bh was in list" line is changed to 
"readsvalue(2)".

This matches the description I had sent yesterday.

Paolo

(*) A couple points of interest.  First, there is no way to say "CAS has 
succeeded" so I am writing different values to x (this is not a problem 
because the code in QEMU only checks whether the pending bit was zero) 
and checking that they can be read back (which isn't a big limitation 
because other threads could sneak in anyway right after the checking 
load).  Second, there is no way to say "reads something other than 0", 
so you cannot get both valid executions with one run of the model, 
instead you can change the "bh was in list" line to "readsvalue(2)".





* Re: [PATCH v2 5/9] aio-wait: switch to smp_mb__after_rmw()
  2023-03-06 22:33 ` [PATCH v2 5/9] aio-wait: switch to smp_mb__after_rmw() Paolo Bonzini
@ 2023-03-07 10:50   ` David Hildenbrand
  0 siblings, 0 replies; 23+ messages in thread
From: David Hildenbrand @ 2023-03-07 10:50 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel
  Cc: gshan, eesposit, stefanha, cohuck, eauger, richard.henderson

On 06.03.23 23:33, Paolo Bonzini wrote:
> The barrier comes after an atomic increment, so it is enough to use
> smp_mb__after_rmw(); this avoids a double barrier on x86 systems.
> 
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   include/block/aio-wait.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
> index dd9a7f6461ef..da13357bb8cf 100644
> --- a/include/block/aio-wait.h
> +++ b/include/block/aio-wait.h
> @@ -85,7 +85,7 @@ extern AioWait global_aio_wait;
>       /* Increment wait_->num_waiters before evaluating cond. */     \
>       qatomic_inc(&wait_->num_waiters);                              \
>       /* Paired with smp_mb in aio_wait_kick(). */                   \
> -    smp_mb();                                                      \
> +    smp_mb__after_rmw();                                           \
>       if (ctx_ && in_aio_context_home_thread(ctx_)) {                \
>           while ((cond)) {                                           \
>               aio_poll(ctx_, true);                                  \

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 9/9] async: clarify usage of barriers in the polling case
  2023-03-06 22:33 ` [PATCH v2 9/9] async: clarify usage of barriers in the polling case Paolo Bonzini
  2023-03-06 22:49   ` Stefan Hajnoczi
@ 2023-03-07 10:51   ` David Hildenbrand
  1 sibling, 0 replies; 23+ messages in thread
From: David Hildenbrand @ 2023-03-07 10:51 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel
  Cc: gshan, eesposit, stefanha, cohuck, eauger, richard.henderson

On 06.03.23 23:33, Paolo Bonzini wrote:
> Explain that aio_context_notifier_poll() relies on
> aio_notify_accept() to catch all the memory writes that were
> done before ctx->notified was set to true.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   util/async.c | 10 ++++++++--
>   1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/util/async.c b/util/async.c
> index e4b494150e7d..21016a1ac7c1 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -474,8 +474,9 @@ void aio_notify_accept(AioContext *ctx)
>       qatomic_set(&ctx->notified, false);
>   
>       /*
> -     * Write ctx->notified before reading e.g. bh->flags.  Pairs with smp_wmb
> -     * in aio_notify.
> +     * Order reads of ctx->notified (in aio_context_notifier_poll()) and the
> +     * above clearing of ctx->notified before reads of e.g. bh->flags.  Pairs
> +     * with smp_wmb() in aio_notify.
>        */
>       smp_mb();
>   }
> @@ -498,6 +499,11 @@ static bool aio_context_notifier_poll(void *opaque)
>       EventNotifier *e = opaque;
>       AioContext *ctx = container_of(e, AioContext, notifier);
>   
> +    /*
> +     * No need for load-acquire because we just want to kick the
> +     * event loop.  aio_notify_accept() takes care of synchronizing
> +     * the event loop with the producers.
> +     */
>       return qatomic_read(&ctx->notified);
>   }
>   

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-07 10:49     ` Paolo Bonzini
@ 2023-03-07 15:54       ` Richard Henderson
  2023-03-07 17:00         ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Richard Henderson @ 2023-03-07 15:54 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/7/23 02:49, Paolo Bonzini wrote:
> On 3/7/23 00:39, Richard Henderson wrote:
>> On 3/6/23 14:33, Paolo Bonzini wrote:
>>> @@ -107,11 +114,8 @@ static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags)
>>>       QSLIST_REMOVE_HEAD(head, next);
>>>       /*
>>> -     * The qatomic_and is paired with aio_bh_enqueue().  The implicit memory
>>> -     * barrier ensures that the callback sees all writes done by the scheduling
>>> -     * thread.  It also ensures that the scheduling thread sees the cleared
>>> -     * flag before bh->cb has run, and thus will call aio_notify again if
>>> -     * necessary.
>>> +     * Synchronizes with qatomic_fetch_or() in aio_bh_enqueue(), ensuring that
>>> +     * the removal finishes before BH_PENDING is reset.
>>>        */
>>>       *flags = qatomic_fetch_and(&bh->flags,
>>
>> Per this new comment, about the remove finishing first, it would seem that we need 
>> smp_mb__before_rmw here, because QSLIST_REMOVE_HEAD is not SEQCST.
> 
> Only release-acquire is needed for consistent access to the list, seqcst and 
> smp_mb__before/after_rmw() are only needed (as expected) to handle wakeup.
> 
> Just to be safe, I tried modeling this with cppmem 
> (http://svr-pes20-cppmem.cl.cam.ac.uk/cppmem/); support for compare-and-swap is very 
> limited, therefore the test looks nothing like the C code(*), but it should be ok:

You do realize that QSLIST_REMOVE_HEAD is not a compare-and-swap, right?

#define QSLIST_REMOVE_HEAD(head, field) do {                             \
         typeof((head)->slh_first) elm = (head)->slh_first;               \
         (head)->slh_first = elm->field.sle_next;                         \
         elm->field.sle_next = NULL;                                      \
} while (/*CONSTCOND*/0)

As I read aio_bh_queue, this is exactly the situation you describe in patch 1 justifying 
the introduction of the new barriers.


r~



* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-07 15:54       ` Richard Henderson
@ 2023-03-07 17:00         ` Paolo Bonzini
  2023-03-07 17:26           ` Richard Henderson
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-07 17:00 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/7/23 16:54, Richard Henderson wrote:
>>
>> Just to be safe, I tried modeling this with cppmem 
>> (http://svr-pes20-cppmem.cl.cam.ac.uk/cppmem/); support for 
>> compare-and-swap is very limited, therefore the test looks nothing 
>> like the C code(*), but it should be ok:
> 
> You do realize that QSLIST_REMOVE_HEAD is not a compare-and-swap, right?
> 
> #define QSLIST_REMOVE_HEAD(head, field) do {                             \
>          typeof((head)->slh_first) elm = (head)->slh_first;               \
>          (head)->slh_first = elm->field.sle_next;                         \
>          elm->field.sle_next = NULL;                                      \
> } while (/*CONSTCOND*/0)

Yes, the compare-and-swap is just how I modeled the enqueuing thread's 
fetch_or

         cas_strong_explicit(&x, 0, 1, mo_acquire, mo_acquire);
         x.load(mo_relaxed).readsvalue(1); // fetch_or returned 0
         y.store(1, mo_release);           // bh inserted

while QSLIST_REMOVE_HEAD in the dequeuing thread is not ordered at all:

         y.store(0, mo_relaxed);           // QSLIST_REMOVE_HEAD
         x.store(0, mo_release);           // fetch_and

> As I read aio_bh_queue, this is exactly the situation you describe in 
> patch 1 justifying the introduction of the new barriers.

Only store-store reordering is required between QSLIST_REMOVE_HEAD and 
atomic_fetch_and(); and that one *is* blocked by atomic_fetch_and(), 
since mo_seq_cst is a superset of both mo_acquire and mo_release.  The 
new barriers are 
needed for store-load reordering.

Paolo




* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-07 17:00         ` Paolo Bonzini
@ 2023-03-07 17:26           ` Richard Henderson
  2023-03-08 10:49             ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Richard Henderson @ 2023-03-07 17:26 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/7/23 09:00, Paolo Bonzini wrote:
> while QSLIST_REMOVE_HEAD in the dequeuing thread is not ordered at all:
> 
>          y.store(0, mo_relaxed);           // QSLIST_REMOVE_HEAD
>          x.store(0, mo_release);           // fetch_and
> 
>> As I read aio_bh_queue, this is exactly the situation you describe in patch 1 justifying 
>> the introduction of the new barriers.
> 
> Only store-store reordering is required between QSLIST_REMOVE_HEAD and atomic_fetch_and(); 
> and that one *is* blocked by atomic_fetch_and(), since mo_seq_cst is a superset of both 
> mo_acquire and mo_release.  The new barriers are needed for store-load reordering.

In patch 1 you say:

# in C11, sequentially consistent atomics (except for seq-cst fences)
# only affect the ordering of sequentially consistent operations.

and the store in QSLIST_REMOVE_HEAD is not a sequentially consistent operation.
Therefore by your own logic we must have a separate barrier here.

I think this is mostly a compiler-theoretic problem, because hardware will see barriers 
that affect all memory operations, not just seq-cst.  Only compilers can be so literal as 
to decide to reschedule the two, because they have a different source-level model.

I wonder if your definition/description of smp_mb__before_rmw() isn't actively misleading 
in this case.

- We could drop it entirely and be less confusing, by not having to explain it.
- We could define it as  signal_barrier() for all hosts, simply to fix the
   compiler-theoretic reordering problem.


Thoughts?


r~



* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-07 17:26           ` Richard Henderson
@ 2023-03-08 10:49             ` Paolo Bonzini
  2023-03-08 16:47               ` Richard Henderson
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-08 10:49 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/7/23 18:26, Richard Henderson wrote:
> On 3/7/23 09:00, Paolo Bonzini wrote:
>> while QSLIST_REMOVE_HEAD in the dequeuing thread is not ordered at all:
>>
>>          y.store(0, mo_relaxed);           // QSLIST_REMOVE_HEAD
>>          x.store(0, mo_release);           // fetch_and
>>
>>> As I read aio_bh_queue, this is exactly the situation you describe in 
>>> patch 1 justifying the introduction of the new barriers.
>>
>> Only store-store reordering is required between QSLIST_REMOVE_HEAD and 
>> atomic_fetch_and(); and that one *is* blocked by atomic_fetch_and(), 
>> since mo_seq_cst is a superset of both mo_release.  The new barriers 
>> are needed for store-load reordering.
> 
> In patch 1 you say:
> 
> # in C11, sequentially consistent atomics (except for seq-cst fences)
> # only affect the ordering of sequentially consistent operations.
> 
> and the store in QSLIST_REMOVE_HEAD is not a sequentially consistent 
> operation.
> Therefore by your own logic we must have a separate barrier here.

You're right that the comment is contradictory.

It's the comment that is wrong.  The right text should be

---
in C11, with the exception of seq-cst fences, the order established by 
sequentially consistent atomics does not propagate to other memory 
accesses on either side of the seq-cst atomic.  As far as those are 
concerned, loads performed by a seq-cst atomic are just acquire loads, 
and stores are just release stores.  Even though loads that occur after 
a RMW operation cannot move above the load, they can still sneak above 
the store!
---

The first sentence is a more accurate way to say the same thing.  The 
rest says what happens (mo_seq_cst downgrades to mo_acq_rel as 
far as those other accesses are concerned) and I missed the opportunity 
to include it because it didn't matter for the cases where the series 
added barriers---except now it matters for *not* adding barriers.

Now, I also said that I don't understand mo_acq_rel and that is indeed 
true.  Instead, in this case I'm basically treating mo_seq_cst as a 
superset of either mo_release (on the dequeue side) or mo_acquire (on the 
enqueue side), individually.

> I wonder if your definition/description of smp_mb__before_rmw() isn't 
> actively misleading in this case.
> 
> - We could drop it entirely and be less confusing, by not having to 
> explain it.
> - We could define it as  signal_barrier() for all hosts, simply to fix the
>    compiler-theoretic reordering problem.

The case that I was imagining for smp_mb__before_rmw() is something like 
this:

	wake_me = true;
	smp_mb__before_rmw();
	if (qatomic_xchg(&can_sleep, true)) { ... }

where you really need a full barrier.

Paolo




* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-08 10:49             ` Paolo Bonzini
@ 2023-03-08 16:47               ` Richard Henderson
  2023-03-08 18:08                 ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Richard Henderson @ 2023-03-08 16:47 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/8/23 02:49, Paolo Bonzini wrote:
> On 3/7/23 18:26, Richard Henderson wrote:
>> On 3/7/23 09:00, Paolo Bonzini wrote:
>>> while QSLIST_REMOVE_HEAD in the dequeuing thread is not ordered at all:
>>>
>>>          y.store(0, mo_relaxed);           // QSLIST_REMOVE_HEAD
>>>          x.store(0, mo_release);           // fetch_and
>>>
>>>> As I read aio_bh_queue, this is exactly the situation you describe in patch 1 
>>>> justifying the introduction of the new barriers.
>>>
>>> Only store-store reordering is required between QSLIST_REMOVE_HEAD and 
>>> atomic_fetch_and(); and that one *is* blocked by atomic_fetch_and(), since mo_seq_cst 
>>> is a superset of both mo_acquire and mo_release.  The new barriers are needed for store-load reordering.
>>
>> In patch 1 you say:
>>
>> # in C11, sequentially consistent atomics (except for seq-cst fences)
>> # only affect the ordering of sequentially consistent operations.
>>
>> and the store in QSLIST_REMOVE_HEAD is not a sequentially consistent operation.
>> Therefore by your own logic we must have a separate barrier here.
> 
> You're right that the comment is contradictory.
> 
> It's the comment that is wrong.  The right text should be
> 
> ---
> in C11, with the exception of seq-cst fences, the order established by sequentially 
> consistent atomics does not propagate to other memory accesses on either side of the 
> seq-cst atomic.  As far as those are concerned, loads performed by a seq-cst atomic are 
> just acquire loads, and stores are just release stores.  Even though loads that occur 
> after a RMW operation cannot move above the load, they can still sneak above the store!
> ---

Ok, thanks.

>> I wonder if your definition/description of smp_mb__before_rmw() isn't actively 
>> misleading in this case.
>>
>> - We could drop it entirely and be less confusing, by not having to explain it.
>> - We could define it as  signal_barrier() for all hosts, simply to fix the
>>    compiler-theoretic reordering problem.
> 
> The case that I was imagining for smp_mb__before_rmw() is something like this:
> 
>      wake_me = true;
>      smp_mb__before_rmw();
>      if (qatomic_xchg(&can_sleep, true)) { ... }
> 
> where you really need a full barrier.

What is different about this that doesn't apply to the remove-head case we've been talking 
about?


r~




* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-08 16:47               ` Richard Henderson
@ 2023-03-08 18:08                 ` Paolo Bonzini
  2023-03-08 18:41                   ` Richard Henderson
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2023-03-08 18:08 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/8/23 17:47, Richard Henderson wrote:
>> The case that I was imagining for smp_mb__before_rmw() is something 
>> like this:
>>
>>      wake_me = true;
>>      smp_mb__before_rmw();
>>      if (qatomic_xchg(&can_sleep, true)) { ... }
>>
>> where you really need a full barrier.
> 
> What is different about this that doesn't apply to the remove-head case 
> we've been talking about?

For remove-head, nothing is going to change the BH_PENDING flag while 
the code runs.  This would be an okay transformation of the code, at 
both the compiler and the processor level:

   // first part of atomic_fetch_and
   old_flags = LDAXR of bh->flags

   // QSLIST_REMOVE_HEAD ends up between load and store of
   // atomic_fetch_and
   all the loads and stores for remove-head

   // second part of atomic_fetch_and
   new_flags = old_flags & ~(BH_PENDING|BH_SCHEDULED|BH_IDLE);
   (successful) STLXR of new_flags into bh->flags


Instead in the case above, I was thinking you would get a missed wakeup 
without the barriers.  Here is the full pseudocode:

   // this store can move below the load of can_sleep
   qatomic_set(&wake_me, true);
   if (qatomic_xchg(&can_sleep, true)) sleep;

   // this store can move below the load of wake_me
   qatomic_set(&can_sleep, false);
   if (qatomic_xchg(&wake_me, false)) wake them;

The buggy case is where the threads observe can_sleep = true, wake_me = 
false, i.e. the original values of the variables.  Let's look at it with 
CPPMEM.

Like before, the CPPMEM test must use CAS and .readsvalue() to hack 
around the lack of "if"s.  Two .readsvalue() clauses represent the buggy 
case, so we are all good if there are *no* consistent executions.

Unfortunately, it fails:

   // 2 consistent (i.e. buggy) executions
   int main() {
     atomic_int w = 0;
     atomic_int s = 1;
     {{{
       { w.store(1, mo_relaxed);
         // the buggy case is the one in which the threads read the
         // original value of w and s.  So here the CAS writes a
         // dummy value different from both 0 and 1
         cas_strong_explicit(&s, 0, 99, mo_seq_cst, mo_seq_cst);
         s.load(mo_relaxed).readsvalue(1); }
     |||
       { s.store(0, mo_relaxed);
         // same as above
         cas_strong_explicit(&w, 1, 99, mo_seq_cst, mo_seq_cst);
         w.load(mo_relaxed).readsvalue(0); }
     }}}
   }

It works with barriers, which model the extra smp_mb__before_rmw():

   // no consistent executions (i.e. it works)
   int main() {
     atomic_int w = 0;
     atomic_int s = 1;
     {{{
       { w.store(1, mo_relaxed);
         atomic_thread_fence(mo_seq_cst);
         cas_strong_explicit(&s, 0, 99, mo_relaxed, mo_relaxed);
         s.load(mo_relaxed).readsvalue(1); }
     |||
       { s.store(0, mo_relaxed);
         atomic_thread_fence(mo_seq_cst);
         cas_strong_explicit(&w, 1, 99, mo_relaxed, mo_relaxed);
         w.load(mo_relaxed).readsvalue(0); }
     }}}
   }

I think I agree with you that _in practice_ it's going to work at the 
processor level; the pseudo-assembly would be

   r1 = LDAXR(can_sleep);
                                    r2 = LDAXR(wake_me);
                                    STR(can_sleep, false);
                                    STLXR(wake_me, false); // successful
   STR(wake_me, true);
   STLXR(can_sleep, true); // successful (?)
   if r1 == 0 { ... }
                                    if r2 != 0 { ... }

and I can't think of a way in which both store-exclusives (or xchg, or 
cmpxchg) would succeed.  So perhaps smp_mb__before_rmw() could indeed be 
a signal_fence().  But that's quite scary even for the standards of this 
discussion...

Paolo




* Re: [PATCH v2 8/9] async: update documentation of the memory barriers
  2023-03-08 18:08                 ` Paolo Bonzini
@ 2023-03-08 18:41                   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2023-03-08 18:41 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel
  Cc: gshan, eesposit, david, stefanha, cohuck, eauger

On 3/8/23 10:08, Paolo Bonzini wrote:
> On 3/8/23 17:47, Richard Henderson wrote:
>>> The case that I was imagining for smp_mb__before_rmw() is something like this:
>>>
>>>      wake_me = true;
>>>      smp_mb__before_rmw();
>>>      if (qatomic_xchg(&can_sleep, true)) { ... }
>>>
>>> where you really need a full barrier.
>>
>> What is different about this that doesn't apply to the remove-head case we've been 
>> talking about?
> 
> For remove-head, nothing is going to change the BH_PENDING flag while the code runs.  This 
> would be an okay transformation of the code, at both the compiler and the processor level:
> 
>    // first part of atomic_fetch_and
>    old_flags = LDAXR of bh->flags
> 
>    // QSLIST_REMOVE_HEAD ends up between load and store of
>    // atomic_fetch_and
>    all the loads and stores for remove-head
> 
>    // second part of atomic_fetch_and
>    new_flags = old_flags & ~(BH_PENDING|BH_SCHEDULED|BH_IDLE);
>    (successful) STLXR of new_flags into bh->flags
> 
> 
> Instead in the case above, I was thinking you would get a missed wakeup without the 
> barriers.  Here is the full pseudocode:
> 
>    // this store can move below the load of can_sleep
>    qatomic_set(&wake_me, true);
>    if (qatomic_xchg(&can_sleep, true)) sleep;
> 
>    // this store can move below the load of wake_me
>    qatomic_set(&can_sleep, false);
>    if (qatomic_xchg(&wake_me, false)) wake them;
> 
> The buggy case is where the threads observe can_sleep = true, wake_me = false, i.e. the 
> original values of the variables.

Oh, I see, buggy in a larger context.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~



Thread overview: 23+ messages
2023-03-06 22:32 [PATCH v2 0/9] Fix missing memory barriers on ARM Paolo Bonzini
2023-03-06 22:32 ` [PATCH v2 1/9] qatomic: add smp_mb__before/after_rmw() Paolo Bonzini
2023-03-06 22:32 ` [PATCH v2 2/9] qemu-thread-posix: cleanup, fix, document QemuEvent Paolo Bonzini
2023-03-06 22:33 ` [PATCH v2 3/9] qemu-thread-win32: " Paolo Bonzini
2023-03-06 22:33 ` [PATCH v2 4/9] edu: add smp_mb__after_rmw() Paolo Bonzini
2023-03-06 22:33 ` [PATCH v2 5/9] aio-wait: switch to smp_mb__after_rmw() Paolo Bonzini
2023-03-07 10:50   ` David Hildenbrand
2023-03-06 22:33 ` [PATCH v2 6/9] qemu-coroutine-lock: add smp_mb__after_rmw() Paolo Bonzini
2023-03-06 22:33 ` [PATCH v2 7/9] physmem: add missing memory barrier Paolo Bonzini
2023-03-06 22:33 ` [PATCH v2 8/9] async: update documentation of the memory barriers Paolo Bonzini
2023-03-06 22:48   ` Stefan Hajnoczi
2023-03-06 23:39   ` Richard Henderson
2023-03-07 10:49     ` Paolo Bonzini
2023-03-07 15:54       ` Richard Henderson
2023-03-07 17:00         ` Paolo Bonzini
2023-03-07 17:26           ` Richard Henderson
2023-03-08 10:49             ` Paolo Bonzini
2023-03-08 16:47               ` Richard Henderson
2023-03-08 18:08                 ` Paolo Bonzini
2023-03-08 18:41                   ` Richard Henderson
2023-03-06 22:33 ` [PATCH v2 9/9] async: clarify usage of barriers in the polling case Paolo Bonzini
2023-03-06 22:49   ` Stefan Hajnoczi
2023-03-07 10:51   ` David Hildenbrand
