* [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures
@ 2020-04-06 19:13 Paolo Bonzini
  2020-04-06 19:13 ` [PATCH 1/4] atomics: convert to reStructuredText Paolo Bonzini
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-06 19:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Ying Fang, stefanha

Patch 4 fixes qemu-img and qemu-io hangs on weakly-ordered architectures.
Patches 1-3 are related documentation fixes and improvements.

This is an RFC because it relies on the iothread being locked during aio_poll
on the main AioContext.  However, if I add assertions for this, I see a
failure in test 267, so I am posting the series as a preview while I debug that.
The doc patches can also go in independently of course.

Paolo

Paolo Bonzini (4):
  atomics: convert to reStructuredText
  atomics: update documentation for C11
  rcu: do not mention atomic_mb_read/set in documentation
  async: use explicit memory barriers

 docs/devel/atomics.rst | 385 +++++++++++++++++++++++++++++++++++++++
 docs/devel/atomics.txt | 403 -----------------------------------------
 docs/devel/index.rst   |   1 +
 docs/devel/rcu.txt     |   4 +-
 util/aio-posix.c       |   9 +-
 util/aio-win32.c       |   8 +-
 util/async.c           |  12 +-
 7 files changed, 413 insertions(+), 409 deletions(-)
 create mode 100644 docs/devel/atomics.rst
 delete mode 100644 docs/devel/atomics.txt

-- 
2.18.2




* [PATCH 1/4] atomics: convert to reStructuredText
  2020-04-06 19:13 [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Paolo Bonzini
@ 2020-04-06 19:13 ` Paolo Bonzini
  2020-04-06 19:58   ` Eric Blake
  2020-04-06 19:13 ` [PATCH 2/4] atomics: update documentation for C11 Paolo Bonzini
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-06 19:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Ying Fang, stefanha

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/devel/atomics.rst | 447 +++++++++++++++++++++++++++++++++++++++++
 docs/devel/atomics.txt | 403 -------------------------------------
 docs/devel/index.rst   |   1 +
 3 files changed, 448 insertions(+), 403 deletions(-)
 create mode 100644 docs/devel/atomics.rst
 delete mode 100644 docs/devel/atomics.txt

diff --git a/docs/devel/atomics.rst b/docs/devel/atomics.rst
new file mode 100644
index 0000000000..00f3f7d3ed
--- /dev/null
+++ b/docs/devel/atomics.rst
@@ -0,0 +1,447 @@
+=========================
+Atomic operations in QEMU
+=========================
+
+CPUs perform independent memory operations effectively in random order,
+but this can be a problem for CPU-CPU interaction (including interactions
+between QEMU and the guest).  Multi-threaded programs use various tools
+to instruct the compiler and the CPU to restrict the order to something
+that is consistent with the expectations of the programmer.
+
+The most basic tool is locking.  Mutexes, condition variables and
+semaphores are used in QEMU, and should be the default approach to
+synchronization.  Anything else is considerably harder, but it's
+also justified more often than one would like.  The two tools that
+are provided by ``qemu/atomic.h`` are memory barriers and atomic operations.
+
+Macros defined by ``qemu/atomic.h`` fall in three camps:
+
+- compiler barriers: ``barrier()``;
+
+- weak atomic access and manual memory barriers: ``atomic_read()``,
+  ``atomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``, ``smp_mb_acquire()``,
+  ``smp_mb_release()``, ``smp_read_barrier_depends()``;
+
+- sequentially consistent atomic access: everything else.
+
+
+Compiler memory barrier
+=======================
+
+``barrier()`` prevents the compiler from moving the memory accesses on either
+side of it to the other side.  The compiler barrier has no direct effect
+on the CPU, which may then reorder things however it wishes.
+
+``barrier()`` is mostly used within ``qemu/atomic.h`` itself.  On some
+architectures, CPU guarantees are strong enough that blocking compiler
+optimizations already ensures the correct order of execution.  In this
+case, ``qemu/atomic.h`` will reduce stronger memory barriers to simple
+compiler barriers.
+
+Still, ``barrier()`` can be useful when writing code that can be interrupted
+by signal handlers.
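+
+For example, a loop that polls a flag set by a signal handler can use
+``barrier()`` to force the flag to be reloaded from memory on every
+iteration (an illustrative sketch; ``sigterm_received`` and ``do_work()``
+are made-up names)::
+
+    static int sigterm_received;   /* set to 1 by a signal handler */
+
+    while (!sigterm_received) {
+        do_work();
+        barrier();   /* reload sigterm_received on the next test */
+    }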
+
+
+Sequentially consistent atomic access
+=====================================
+
+Most of the operations in the ``qemu/atomic.h`` header ensure *sequential
+consistency*, where "the result of any execution is the same as if the
+operations of all the processors were executed in some sequential order,
+and the operations of each individual processor appear in this sequence
+in the order specified by its program".
+
+``qemu/atomic.h`` provides the following set of atomic read-modify-write
+operations::
+
+    void atomic_inc(ptr)
+    void atomic_dec(ptr)
+    void atomic_add(ptr, val)
+    void atomic_sub(ptr, val)
+    void atomic_and(ptr, val)
+    void atomic_or(ptr, val)
+
+    typeof(*ptr) atomic_fetch_inc(ptr)
+    typeof(*ptr) atomic_fetch_dec(ptr)
+    typeof(*ptr) atomic_fetch_add(ptr, val)
+    typeof(*ptr) atomic_fetch_sub(ptr, val)
+    typeof(*ptr) atomic_fetch_and(ptr, val)
+    typeof(*ptr) atomic_fetch_or(ptr, val)
+    typeof(*ptr) atomic_fetch_xor(ptr, val)
+    typeof(*ptr) atomic_fetch_inc_nonzero(ptr)
+    typeof(*ptr) atomic_xchg(ptr, val)
+    typeof(*ptr) atomic_cmpxchg(ptr, old, new)
+
+all of which return the old value of ``*ptr``.  These operations are
+polymorphic; they operate on any type that is as wide as a pointer.
+
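+For example, ``atomic_fetch_dec()`` can implement a reference count that
+frees an object when the last reference is dropped (a sketch; the ``Obj``
+type and ``obj_free()`` are made up)::
+
+    void obj_unref(Obj *obj)
+    {
+        if (atomic_fetch_dec(&obj->refcount) == 1) {
+            obj_free(obj);   /* this was the last reference */
+        }
+    }
+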
+Similar operations return the new value of ``*ptr``::
+
+    typeof(*ptr) atomic_inc_fetch(ptr)
+    typeof(*ptr) atomic_dec_fetch(ptr)
+    typeof(*ptr) atomic_add_fetch(ptr, val)
+    typeof(*ptr) atomic_sub_fetch(ptr, val)
+    typeof(*ptr) atomic_and_fetch(ptr, val)
+    typeof(*ptr) atomic_or_fetch(ptr, val)
+    typeof(*ptr) atomic_xor_fetch(ptr, val)
+
+Sequentially consistent loads and stores can be done using::
+
+    atomic_fetch_add(ptr, 0) for loads
+    atomic_xchg(ptr, val) for stores
+
+However, they are quite expensive on some platforms, notably POWER and
+Arm.  Therefore, qemu/atomic.h provides two primitives with slightly
+weaker constraints::
+
+    typeof(*ptr) atomic_mb_read(ptr)
+    void         atomic_mb_set(ptr, val)
+
+The semantics of these primitives map to Java volatile variables,
+and are strongly related to memory barriers as used in the Linux
+kernel (see below).
+
+As long as you use atomic_mb_read and atomic_mb_set, accesses cannot
+be reordered with each other, and it is also not possible to reorder
+"normal" accesses around them.
+
+However, and this is the important difference between
+atomic_mb_read/atomic_mb_set and sequential consistency, it is important
+for both threads to access the same volatile variable.  It is not the
+case that everything visible to thread A when it writes volatile field f
+becomes visible to thread B after it reads volatile field g. The store
+and load have to "match" (i.e., be performed on the same volatile
+field) to achieve the right semantics.
+
+
+These operations operate on any type that is as wide as an int or smaller.
+
+
+Weak atomic access and manual memory barriers
+=============================================
+
+Compared to sequentially consistent atomic access, programming with
+weaker consistency models can be considerably more complicated.
+If the algorithm you are writing includes both writes
+and reads on the same side, it is generally simpler to use sequentially
+consistent primitives.
+
+When using this model, variables are accessed with:
+
+- ``atomic_read()`` and ``atomic_set()``; these prevent the compiler from
+  optimizing accesses out of existence and creating unsolicited
+  accesses, but do not otherwise impose any ordering on loads and
+  stores: both the compiler and the processor are free to reorder
+  them.
+
+- ``atomic_load_acquire()``, which guarantees the LOAD to appear to
+  happen, with respect to the other components of the system,
+  before all the LOAD or STORE operations specified afterwards.
+  Operations coming before ``atomic_load_acquire()`` can still be
+  reordered after it.
+
+- ``atomic_store_release()``, which guarantees the STORE to appear to
+  happen, with respect to the other components of the system,
+  after all the LOAD or STORE operations specified before it.
+  Operations coming after ``atomic_store_release()`` can still be
+  reordered before it; see the sketch after this list.
+
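+For example, a release store can publish an initialized structure, which
+a consumer then reads through an acquire load of the *same* pointer (a
+sketch; ``global_ptr`` and the ``Foo`` type are made up)::
+
+    /* producer */
+    Foo *p = g_new0(Foo, 1);
+    p->value = 42;
+    atomic_store_release(&global_ptr, p);
+
+    /* consumer */
+    Foo *q = atomic_load_acquire(&global_ptr);
+    if (q) {
+        assert(q->value == 42);   /* the initialization is visible */
+    }
+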
+Restrictions to the ordering of accesses can also be specified
+using the memory barrier macros: ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``,
+``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``.
+
+Memory barriers control the order of references to shared memory.
+They come in six kinds:
+
+- ``smp_rmb()`` guarantees that all the LOAD operations specified before
+  the barrier will appear to happen before all the LOAD operations
+  specified after the barrier with respect to the other components of
+  the system.
+
+  In other words, ``smp_rmb()`` puts a partial ordering on loads, but is not
+  required to have any effect on stores.
+
+- ``smp_wmb()`` guarantees that all the STORE operations specified before
+  the barrier will appear to happen before all the STORE operations
+  specified after the barrier with respect to the other components of
+  the system.
+
+  In other words, ``smp_wmb()`` puts a partial ordering on stores, but is not
+  required to have any effect on loads.
+
+- ``smp_mb_acquire()`` guarantees that all the LOAD operations specified before
+  the barrier will appear to happen before all the LOAD or STORE operations
+  specified after the barrier with respect to the other components of
+  the system.
+
+- ``smp_mb_release()`` guarantees that all the STORE operations specified *after*
+  the barrier will appear to happen after all the LOAD or STORE operations
+  specified *before* the barrier with respect to the other components of
+  the system.
+
+- ``smp_mb()`` guarantees that all the LOAD and STORE operations specified
+  before the barrier will appear to happen before all the LOAD and
+  STORE operations specified after the barrier with respect to the other
+  components of the system.
+
+  ``smp_mb()`` puts a partial ordering on both loads and stores.  It is
+  stronger than both a read and a write memory barrier; it implies both
+  ``smp_mb_acquire()`` and ``smp_mb_release()``, but it also prevents STOREs
+  coming before the barrier from overtaking LOADs coming after the
+  barrier and vice versa.
+
+- ``smp_read_barrier_depends()`` is a weaker kind of read barrier.  On
+  most processors, whenever two loads are performed such that the
+  second depends on the result of the first (e.g., the first load
+  retrieves the address to which the second load will be directed),
+  the processor will guarantee that the first LOAD will appear to happen
+  before the second with respect to the other components of the system.
+  However, this is not always true---for example, it was not true on
+  Alpha processors.  Whenever this kind of access happens to shared
+  memory (that is not protected by a lock), a read barrier is needed,
+  and ``smp_read_barrier_depends()`` can be used instead of ``smp_rmb()``.
+
+  Note that the first load really has to have a *data* dependency and not
+  a control dependency.  If the address for the second load is dependent
+  on the first load, but the dependency is through a conditional rather
+  than actually loading the address itself, then it's a *control*
+  dependency and a full read barrier or better is required.
+
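+For example, only a full barrier prevents a STORE from being reordered
+with a subsequent LOAD of a different variable.  This is what
+Dekker-style mutual exclusion relies on (an illustrative sketch with
+made-up flags; the other thread runs the same code with the roles of
+``flag1`` and ``flag2`` swapped)::
+
+    atomic_set(&flag1, 1);
+    smp_mb();                    /* order the store before the load */
+    if (!atomic_read(&flag2)) {
+        /* at most one of the two threads gets here */
+    }
+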
+
+This is the set of barriers that is required *between* two ``atomic_read()``
+and ``atomic_set()`` operations to achieve sequential consistency:
+
+   +----------------+-------------------------------------------------------+
+   |                |                 2nd operation                         |
+   |                +------------------+-----------------+------------------+
+   | 1st operation  | (after last)     | atomic_read     | atomic_set       |
+   +----------------+------------------+-----------------+------------------+
+   | (before first) | ..               | none            | smp_mb_release() |
+   +----------------+------------------+-----------------+------------------+
+   | atomic_read    | smp_mb_acquire() | smp_rmb() [1]_  | [2]_             |
+   +----------------+------------------+-----------------+------------------+
+   | atomic_set     | none             | smp_mb() [3]_   | smp_wmb()        |
+   +----------------+------------------+-----------------+------------------+
+
+   .. [1] Or smp_read_barrier_depends().
+
+   .. [2] This requires a load-store barrier.  This is achieved by
+          either smp_mb_acquire() or smp_mb_release().
+
+   .. [3] This requires a store-load barrier.  On most machines, the only
+          way to achieve this is a full barrier.
+
+
+You can see that the two possible definitions of ``atomic_mb_read()``
+and ``atomic_mb_set()`` are the following:
+
+  1) | atomic_mb_read(p)   = atomic_read(p); smp_mb_acquire()
+     | atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v); smp_mb()
+
+  2) | atomic_mb_read(p)   = smp_mb(); atomic_read(p); smp_mb_acquire()
+     | atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v);
+
+Usually the former is used, because ``smp_mb()`` is expensive and a program
+normally has more reads than writes.  Therefore it makes more sense to
+make ``atomic_mb_set()`` the more expensive operation.
+
+There are two common cases in which atomic_mb_read and atomic_mb_set
+generate too many memory barriers, and thus it can be useful to manually
+place barriers, or use atomic_load_acquire/atomic_store_release instead:
+
+- when a data structure has one thread that is always a writer
+  and one thread that is always a reader, manual placement of
+  memory barriers makes the write side faster.  Furthermore,
+  correctness is easy to check for in this case using the "pairing"
+  trick that is explained below:
+
+    +----------------------------------------------------------------------+
+    | thread 1                                                             |
+    +-----------------------------------+----------------------------------+
+    | before                            | after                            |
+    +===================================+==================================+
+    | ::                                | ::                               |
+    |                                   |                                  |
+    |   (other writes)                  |                                  |
+    |   atomic_mb_set(&a, x)            |   atomic_store_release(&a, x)    |
+    |   atomic_mb_set(&b, y)            |   atomic_store_release(&b, y)    |
+    +-----------------------------------+----------------------------------+
+
+    +----------------------------------------------------------------------+
+    | thread 2                                                             |
+    +-----------------------------------+----------------------------------+
+    | before                            | after                            |
+    +===================================+==================================+
+    | ::                                | ::                               |
+    |                                   |                                  |
+    |   y = atomic_mb_read(&b)          |   y = atomic_load_acquire(&b)    |
+    |   x = atomic_mb_read(&a)          |   x = atomic_load_acquire(&a)    |
+    |   (other reads)                   |                                  |
+    +-----------------------------------+----------------------------------+
+
+  Note that the barrier between the stores in thread 1, and between
+  the loads in thread 2, has been optimized here to a write or a
+  read memory barrier respectively.  On some architectures, notably
+  ARMv7, smp_mb_acquire and smp_mb_release are just as expensive as
+  smp_mb, but smp_rmb and/or smp_wmb are more efficient.
+
+- sometimes, a thread is accessing many variables that are otherwise
+  unrelated to each other (for example because, apart from the current
+  thread, exactly one other thread will read or write each of these
+  variables).  In this case, it is possible to "hoist" the implicit
+  barriers provided by ``atomic_mb_read()`` and ``atomic_mb_set()`` outside
+  a loop.  For example, the above definition ``atomic_mb_read()`` gives
+  the following transformation:
+
+    +-----------------------------------+----------------------------------+
+    | before                            | after                            |
+    +===================================+==================================+
+    | ::                                | ::                               |
+    |                                   |                                  |
+    |   n = 0;                          |   n = 0;                         |
+    |   for (i = 0; i < 10; i++)        |   for (i = 0; i < 10; i++)       |
+    |     n += atomic_mb_read(&a[i]);   |     n += atomic_read(&a[i]);     |
+    |                                   |   smp_mb_acquire();              |
+    +-----------------------------------+----------------------------------+
+
+  Similarly, atomic_mb_set() can be transformed as follows:
+
+    +-----------------------------------+----------------------------------+
+    | before                            | after                            |
+    +===================================+==================================+
+    | ::                                | ::                               |
+    |                                   |                                  |
+    |                                   |   smp_mb_release();              |
+    |   for (i = 0; i < 10; i++)        |   for (i = 0; i < 10; i++)       |
+    |     atomic_mb_set(&a[i], false);  |     atomic_set(&a[i], false);    |
+    |                                   |   smp_mb();                      |
+    +-----------------------------------+----------------------------------+
+
+
+  The other thread can still use ``atomic_mb_read()``/``atomic_mb_set()``.
+
+The two tricks can be combined.  In this case, splitting a loop in
+two lets you hoist the barriers out of the loops *and* eliminate the
+expensive ``smp_mb()``:
+
+    +-----------------------------------+----------------------------------+
+    | before                            | after                            |
+    +===================================+==================================+
+    | ::                                | ::                               |
+    |                                   |                                  |
+    |                                   |     smp_mb_release();            |
+    |   for (i = 0; i < 10; i++) {      |     for (i = 0; i < 10; i++)     |
+    |     atomic_mb_set(&a[i], false);  |       atomic_set(&a[i], false);  |
+    |     atomic_mb_set(&b[i], false);  |     smp_wmb();                   |
+    |   }                               |     for (i = 0; i < 10; i++)     |
+    |                                   |       atomic_set(&b[i], false);  |
+    |                                   |     smp_mb();                    |
+    +-----------------------------------+----------------------------------+
+
+
+Memory barrier pairing
+----------------------
+
+A useful rule of thumb is that memory barriers should always, or almost
+always, be paired with another barrier.  In the case of QEMU, however,
+note that the other barrier may actually be in a driver that runs in
+the guest!
+
+For the purposes of pairing, ``smp_read_barrier_depends()`` and ``smp_rmb()``
+both count as read barriers.  A read barrier shall pair with a write
+barrier or a full barrier; a write barrier shall pair with a read
+barrier or a full barrier.  A full barrier can pair with anything.
+For example:
+
+      +--------------------+------------------------------+
+      | thread 1           | thread 2                     |
+      +====================+==============================+
+      | ::                 | ::                           |
+      |                    |                              |
+      |   a = 1;           |                              |
+      |   smp_wmb();       |                              |
+      |   b = 2;           |   x = b;                     |
+      |                    |   smp_rmb();                 |
+      |                    |   y = a;                     |
+      +--------------------+------------------------------+
+
+Note that the "writing" thread is accessing the variables in the
+opposite order as the "reading" thread.  This is expected: stores
+before the write barrier will normally match the loads after the
+read barrier, and vice versa.  The same is true for more than two
+accesses and for data dependency barriers:
+
+      +--------------------+------------------------------+
+      | thread 1           | thread 2                     |
+      +====================+==============================+
+      | ::                 | ::                           |
+      |                    |                              |
+      |   b[2] = 1;        |                              |
+      |   smp_wmb();       |                              |
+      |   x->i = 2;        |                              |
+      |   smp_wmb();       |                              |
+      |   a = x;           |  x = a;                      |
+      |                    |  smp_read_barrier_depends(); |
+      |                    |  y = x->i;                   |
+      |                    |  smp_read_barrier_depends(); |
+      |                    |  z = b[y];                   |
+      +--------------------+------------------------------+
+
+``smp_wmb()`` also pairs with ``atomic_mb_read()`` and ``smp_mb_acquire()``,
+and ``smp_rmb()`` also pairs with ``atomic_mb_set()`` and ``smp_mb_release()``.
+
+
+Comparison with Linux kernel memory barriers
+============================================
+
+Here is a list of differences between Linux kernel atomic operations
+and memory barriers, and the equivalents in QEMU:
+
+- atomic operations in Linux are always on a 32-bit int type and
+  use a boxed atomic_t type; atomic operations in QEMU are polymorphic
+  and use normal C types.
+
+- Originally, atomic_read and atomic_set in Linux gave no guarantee
+  at all. Linux 4.1 updated them to implement volatile
+  semantics via ACCESS_ONCE (or the more recent READ/WRITE_ONCE).
+
+  QEMU's atomic_read/set implement, if the compiler supports it, C11
+  atomic relaxed semantics, and volatile semantics otherwise.
+  Both semantics prevent the compiler from doing certain transformations;
+  the difference is that atomic accesses are guaranteed to be atomic,
+  while volatile accesses aren't. Thus, in the volatile case we just cross
+  our fingers hoping that the compiler will generate atomic accesses,
+  since we assume the variables passed are machine-word sized and
+  properly aligned.
+  No barriers are implied by atomic_read/set in either Linux or QEMU.
+
+- atomic read-modify-write operations in Linux are of three kinds:
+
+         ===================== =========================================
+         ``atomic_OP``         returns void
+         ``atomic_OP_return``  returns new value of the variable
+         ``atomic_fetch_OP``   returns the old value of the variable
+         ``atomic_cmpxchg``    returns the old value of the variable
+         ===================== =========================================
+
+  In QEMU, the second kind is named ``atomic_OP_fetch``; as the lists
+  earlier in this document show, both the old-value and new-value
+  variants are provided for and, or, xor, inc, dec, add and sub.
+
+- different atomic read-modify-write operations in Linux imply
+  a different set of memory barriers; in QEMU, all of them enforce
+  sequential consistency, which means they imply full memory barriers
+  before and after the operation.
+
+- Linux does not have an equivalent of ``atomic_mb_set()``.  In particular,
+  note that ``smp_store_mb()`` is a little weaker than ``atomic_mb_set()``.
+  ``atomic_mb_read()`` compiles to the same instructions as Linux's
+  ``smp_load_acquire()``, but this should be treated as an implementation
+  detail.
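+
+As a rough, illustrative mapping between the two sets of names (note
+that the argument order differs and, as described above, so does the
+implied memory ordering)::
+
+    Linux                           QEMU
+    ---------------------------     -------------------------------
+    atomic_add(i, &v)               atomic_add(&v, i)
+    atomic_add_return(i, &v)        atomic_add_fetch(&v, i)
+    atomic_fetch_add(i, &v)         atomic_fetch_add(&v, i)
+    smp_load_acquire(&x)            atomic_load_acquire(&x)
+    smp_store_release(&x, y)        atomic_store_release(&x, y)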
+
+Sources
+=======
+
+* Documentation/memory-barriers.txt from the Linux kernel
+
+* "The JSR-133 Cookbook for Compiler Writers", available at
+  http://g.oswego.edu/dl/jmm/cookbook.html
diff --git a/docs/devel/atomics.txt b/docs/devel/atomics.txt
deleted file mode 100644
index 67bdf82628..0000000000
--- a/docs/devel/atomics.txt
+++ /dev/null
@@ -1,403 +0,0 @@
-CPUs perform independent memory operations effectively in random order.
-but this can be a problem for CPU-CPU interaction (including interactions
-between QEMU and the guest).  Multi-threaded programs use various tools
-to instruct the compiler and the CPU to restrict the order to something
-that is consistent with the expectations of the programmer.
-
-The most basic tool is locking.  Mutexes, condition variables and
-semaphores are used in QEMU, and should be the default approach to
-synchronization.  Anything else is considerably harder, but it's
-also justified more often than one would like.  The two tools that
-are provided by qemu/atomic.h are memory barriers and atomic operations.
-
-Macros defined by qemu/atomic.h fall in three camps:
-
-- compiler barriers: barrier();
-
-- weak atomic access and manual memory barriers: atomic_read(),
-  atomic_set(), smp_rmb(), smp_wmb(), smp_mb(), smp_mb_acquire(),
-  smp_mb_release(), smp_read_barrier_depends();
-
-- sequentially consistent atomic access: everything else.
-
-
-COMPILER MEMORY BARRIER
-=======================
-
-barrier() prevents the compiler from moving the memory accesses either
-side of it to the other side.  The compiler barrier has no direct effect
-on the CPU, which may then reorder things however it wishes.
-
-barrier() is mostly used within qemu/atomic.h itself.  On some
-architectures, CPU guarantees are strong enough that blocking compiler
-optimizations already ensures the correct order of execution.  In this
-case, qemu/atomic.h will reduce stronger memory barriers to simple
-compiler barriers.
-
-Still, barrier() can be useful when writing code that can be interrupted
-by signal handlers.
-
-
-SEQUENTIALLY CONSISTENT ATOMIC ACCESS
-=====================================
-
-Most of the operations in the qemu/atomic.h header ensure *sequential
-consistency*, where "the result of any execution is the same as if the
-operations of all the processors were executed in some sequential order,
-and the operations of each individual processor appear in this sequence
-in the order specified by its program".
-
-qemu/atomic.h provides the following set of atomic read-modify-write
-operations:
-
-    void atomic_inc(ptr)
-    void atomic_dec(ptr)
-    void atomic_add(ptr, val)
-    void atomic_sub(ptr, val)
-    void atomic_and(ptr, val)
-    void atomic_or(ptr, val)
-
-    typeof(*ptr) atomic_fetch_inc(ptr)
-    typeof(*ptr) atomic_fetch_dec(ptr)
-    typeof(*ptr) atomic_fetch_add(ptr, val)
-    typeof(*ptr) atomic_fetch_sub(ptr, val)
-    typeof(*ptr) atomic_fetch_and(ptr, val)
-    typeof(*ptr) atomic_fetch_or(ptr, val)
-    typeof(*ptr) atomic_fetch_xor(ptr, val)
-    typeof(*ptr) atomic_fetch_inc_nonzero(ptr)
-    typeof(*ptr) atomic_xchg(ptr, val)
-    typeof(*ptr) atomic_cmpxchg(ptr, old, new)
-
-all of which return the old value of *ptr.  These operations are
-polymorphic; they operate on any type that is as wide as a pointer.
-
-Similar operations return the new value of *ptr:
-
-    typeof(*ptr) atomic_inc_fetch(ptr)
-    typeof(*ptr) atomic_dec_fetch(ptr)
-    typeof(*ptr) atomic_add_fetch(ptr, val)
-    typeof(*ptr) atomic_sub_fetch(ptr, val)
-    typeof(*ptr) atomic_and_fetch(ptr, val)
-    typeof(*ptr) atomic_or_fetch(ptr, val)
-    typeof(*ptr) atomic_xor_fetch(ptr, val)
-
-Sequentially consistent loads and stores can be done using:
-
-    atomic_fetch_add(ptr, 0) for loads
-    atomic_xchg(ptr, val) for stores
-
-However, they are quite expensive on some platforms, notably POWER and
-Arm.  Therefore, qemu/atomic.h provides two primitives with slightly
-weaker constraints:
-
-    typeof(*ptr) atomic_mb_read(ptr)
-    void         atomic_mb_set(ptr, val)
-
-The semantics of these primitives map to Java volatile variables,
-and are strongly related to memory barriers as used in the Linux
-kernel (see below).
-
-As long as you use atomic_mb_read and atomic_mb_set, accesses cannot
-be reordered with each other, and it is also not possible to reorder
-"normal" accesses around them.
-
-However, and this is the important difference between
-atomic_mb_read/atomic_mb_set and sequential consistency, it is important
-for both threads to access the same volatile variable.  It is not the
-case that everything visible to thread A when it writes volatile field f
-becomes visible to thread B after it reads volatile field g. The store
-and load have to "match" (i.e., be performed on the same volatile
-field) to achieve the right semantics.
-
-
-These operations operate on any type that is as wide as an int or smaller.
-
-
-WEAK ATOMIC ACCESS AND MANUAL MEMORY BARRIERS
-=============================================
-
-Compared to sequentially consistent atomic access, programming with
-weaker consistency models can be considerably more complicated.
-In general, if the algorithm you are writing includes both writes
-and reads on the same side, it is generally simpler to use sequentially
-consistent primitives.
-
-When using this model, variables are accessed with:
-
-- atomic_read() and atomic_set(); these prevent the compiler from
-  optimizing accesses out of existence and creating unsolicited
-  accesses, but do not otherwise impose any ordering on loads and
-  stores: both the compiler and the processor are free to reorder
-  them.
-
-- atomic_load_acquire(), which guarantees the LOAD to appear to
-  happen, with respect to the other components of the system,
-  before all the LOAD or STORE operations specified afterwards.
-  Operations coming before atomic_load_acquire() can still be
-  reordered after it.
-
-- atomic_store_release(), which guarantees the STORE to appear to
-  happen, with respect to the other components of the system,
-  after all the LOAD or STORE operations specified afterwards.
-  Operations coming after atomic_store_release() can still be
-  reordered after it.
-
-Restrictions to the ordering of accesses can also be specified
-using the memory barrier macros: smp_rmb(), smp_wmb(), smp_mb(),
-smp_mb_acquire(), smp_mb_release(), smp_read_barrier_depends().
-
-Memory barriers control the order of references to shared memory.
-They come in six kinds:
-
-- smp_rmb() guarantees that all the LOAD operations specified before
-  the barrier will appear to happen before all the LOAD operations
-  specified after the barrier with respect to the other components of
-  the system.
-
-  In other words, smp_rmb() puts a partial ordering on loads, but is not
-  required to have any effect on stores.
-
-- smp_wmb() guarantees that all the STORE operations specified before
-  the barrier will appear to happen before all the STORE operations
-  specified after the barrier with respect to the other components of
-  the system.
-
-  In other words, smp_wmb() puts a partial ordering on stores, but is not
-  required to have any effect on loads.
-
-- smp_mb_acquire() guarantees that all the LOAD operations specified before
-  the barrier will appear to happen before all the LOAD or STORE operations
-  specified after the barrier with respect to the other components of
-  the system.
-
-- smp_mb_release() guarantees that all the STORE operations specified *after*
-  the barrier will appear to happen after all the LOAD or STORE operations
-  specified *before* the barrier with respect to the other components of
-  the system.
-
-- smp_mb() guarantees that all the LOAD and STORE operations specified
-  before the barrier will appear to happen before all the LOAD and
-  STORE operations specified after the barrier with respect to the other
-  components of the system.
-
-  smp_mb() puts a partial ordering on both loads and stores.  It is
-  stronger than both a read and a write memory barrier; it implies both
-  smp_mb_acquire() and smp_mb_release(), but it also prevents STOREs
-  coming before the barrier from overtaking LOADs coming after the
-  barrier and vice versa.
-
-- smp_read_barrier_depends() is a weaker kind of read barrier.  On
-  most processors, whenever two loads are performed such that the
-  second depends on the result of the first (e.g., the first load
-  retrieves the address to which the second load will be directed),
-  the processor will guarantee that the first LOAD will appear to happen
-  before the second with respect to the other components of the system.
-  However, this is not always true---for example, it was not true on
-  Alpha processors.  Whenever this kind of access happens to shared
-  memory (that is not protected by a lock), a read barrier is needed,
-  and smp_read_barrier_depends() can be used instead of smp_rmb().
-
-  Note that the first load really has to have a _data_ dependency and not
-  a control dependency.  If the address for the second load is dependent
-  on the first load, but the dependency is through a conditional rather
-  than actually loading the address itself, then it's a _control_
-  dependency and a full read barrier or better is required.
-
-
-This is the set of barriers that is required *between* two atomic_read()
-and atomic_set() operations to achieve sequential consistency:
-
-                    |               2nd operation                   |
-                    |-----------------------------------------------|
-     1st operation  | (after last)   | atomic_read | atomic_set     |
-     ---------------+----------------+-------------+----------------|
-     (before first) |                | none        | smp_mb_release |
-     ---------------+----------------+-------------+----------------|
-     atomic_read    | smp_mb_acquire | smp_rmb     | **             |
-     ---------------+----------------+-------------+----------------|
-     atomic_set     | none           | smp_mb()*** | smp_wmb()      |
-     ---------------+----------------+-------------+----------------|
-
-       * Or smp_read_barrier_depends().
-
-      ** This requires a load-store barrier.  This is achieved by
-         either smp_mb_acquire() or smp_mb_release().
-
-     *** This requires a store-load barrier.  On most machines, the only
-         way to achieve this is a full barrier.
-
-
-You can see that the two possible definitions of atomic_mb_read()
-and atomic_mb_set() are the following:
-
-    1) atomic_mb_read(p)   = atomic_read(p); smp_mb_acquire()
-       atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v); smp_mb()
-
-    2) atomic_mb_read(p)   = smp_mb() atomic_read(p); smp_mb_acquire()
-       atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v);
-
-Usually the former is used, because smp_mb() is expensive and a program
-normally has more reads than writes.  Therefore it makes more sense to
-make atomic_mb_set() the more expensive operation.
-
-There are two common cases in which atomic_mb_read and atomic_mb_set
-generate too many memory barriers, and thus it can be useful to manually
-place barriers, or use atomic_load_acquire/atomic_store_release instead:
-
-- when a data structure has one thread that is always a writer
-  and one thread that is always a reader, manual placement of
-  memory barriers makes the write side faster.  Furthermore,
-  correctness is easy to check for in this case using the "pairing"
-  trick that is explained below:
-
-     thread 1                                thread 1
-     -------------------------               ------------------------
-     (other writes)
-     atomic_mb_set(&a, x)                    atomic_store_release(&a, x)
-     atomic_mb_set(&b, y)                    atomic_store_release(&b, y)
-
-                                       =>
-     thread 2                                thread 2
-     -------------------------               ------------------------
-     y = atomic_mb_read(&b)                  y = atomic_load_acquire(&b)
-     x = atomic_mb_read(&a)                  x = atomic_load_acquire(&a)
-     (other reads)
-
-  Note that the barrier between the stores in thread 1, and between
-  the loads in thread 2, has been optimized here to a write or a
-  read memory barrier respectively.  On some architectures, notably
-  ARMv7, smp_mb_acquire and smp_mb_release are just as expensive as
-  smp_mb, but smp_rmb and/or smp_wmb are more efficient.
-
-- sometimes, a thread is accessing many variables that are otherwise
-  unrelated to each other (for example because, apart from the current
-  thread, exactly one other thread will read or write each of these
-  variables).  In this case, it is possible to "hoist" the implicit
-  barriers provided by atomic_mb_read() and atomic_mb_set() outside
-  a loop.  For example, the above definition atomic_mb_read() gives
-  the following transformation:
-
-     n = 0;                                  n = 0;
-     for (i = 0; i < 10; i++)          =>    for (i = 0; i < 10; i++)
-       n += atomic_mb_read(&a[i]);             n += atomic_read(&a[i]);
-                                             smp_mb_acquire();
-
-  Similarly, atomic_mb_set() can be transformed as follows:
-
-                                             smp_mb_release();
-     for (i = 0; i < 10; i++)          =>    for (i = 0; i < 10; i++)
-       atomic_mb_set(&a[i], false);            atomic_set(&a[i], false);
-                                             smp_mb();
-
-
-  The other thread can still use atomic_mb_read()/atomic_mb_set().
-
-The two tricks can be combined.  In this case, splitting a loop in
-two lets you hoist the barriers out of the loops _and_ eliminate the
-expensive smp_mb():
-
-                                             smp_mb_release();
-     for (i = 0; i < 10; i++) {        =>    for (i = 0; i < 10; i++)
-       atomic_mb_set(&a[i], false);            atomic_set(&a[i], false);
-       atomic_mb_set(&b[i], false);          smb_wmb();
-     }                                       for (i = 0; i < 10; i++)
-                                               atomic_set(&a[i], false);
-                                             smp_mb();
-
-
-Memory barrier pairing
-----------------------
-
-A useful rule of thumb is that memory barriers should always, or almost
-always, be paired with another barrier.  In the case of QEMU, however,
-note that the other barrier may actually be in a driver that runs in
-the guest!
-
-For the purposes of pairing, smp_read_barrier_depends() and smp_rmb()
-both count as read barriers.  A read barrier shall pair with a write
-barrier or a full barrier; a write barrier shall pair with a read
-barrier or a full barrier.  A full barrier can pair with anything.
-For example:
-
-        thread 1             thread 2
-        ===============      ===============
-        a = 1;
-        smp_wmb();
-        b = 2;               x = b;
-                             smp_rmb();
-                             y = a;
-
-Note that the "writing" thread is accessing the variables in the
-opposite order as the "reading" thread.  This is expected: stores
-before the write barrier will normally match the loads after the
-read barrier, and vice versa.  The same is true for more than 2
-access and for data dependency barriers:
-
-        thread 1             thread 2
-        ===============      ===============
-        b[2] = 1;
-        smp_wmb();
-        x->i = 2;
-        smp_wmb();
-        a = x;               x = a;
-                             smp_read_barrier_depends();
-                             y = x->i;
-                             smp_read_barrier_depends();
-                             z = b[y];
-
-smp_wmb() also pairs with atomic_mb_read() and smp_mb_acquire().
-and smp_rmb() also pairs with atomic_mb_set() and smp_mb_release().
-
-
-COMPARISON WITH LINUX KERNEL MEMORY BARRIERS
-============================================
-
-Here is a list of differences between Linux kernel atomic operations
-and memory barriers, and the equivalents in QEMU:
-
-- atomic operations in Linux are always on a 32-bit int type and
-  use a boxed atomic_t type; atomic operations in QEMU are polymorphic
-  and use normal C types.
-
-- Originally, atomic_read and atomic_set in Linux gave no guarantee
-  at all. Linux 4.1 updated them to implement volatile
-  semantics via ACCESS_ONCE (or the more recent READ/WRITE_ONCE).
-
-  QEMU's atomic_read/set implement, if the compiler supports it, C11
-  atomic relaxed semantics, and volatile semantics otherwise.
-  Both semantics prevent the compiler from doing certain transformations;
-  the difference is that atomic accesses are guaranteed to be atomic,
-  while volatile accesses aren't. Thus, in the volatile case we just cross
-  our fingers hoping that the compiler will generate atomic accesses,
-  since we assume the variables passed are machine-word sized and
-  properly aligned.
-  No barriers are implied by atomic_read/set in either Linux or QEMU.
-
-- atomic read-modify-write operations in Linux are of three kinds:
-
-         atomic_OP          returns void
-         atomic_OP_return   returns new value of the variable
-         atomic_fetch_OP    returns the old value of the variable
-         atomic_cmpxchg     returns the old value of the variable
-
-  In QEMU, the second kind does not exist.  Currently Linux has
-  atomic_fetch_or only.  QEMU provides and, or, inc, dec, add, sub.
-
-- different atomic read-modify-write operations in Linux imply
-  a different set of memory barriers; in QEMU, all of them enforce
-  sequential consistency, which means they imply full memory barriers
-  before and after the operation.
-
-- Linux does not have an equivalent of atomic_mb_set().  In particular,
-  note that smp_store_mb() is a little weaker than atomic_mb_set().
-  atomic_mb_read() compiles to the same instructions as Linux's
-  smp_load_acquire(), but this should be treated as an implementation
-  detail.
-
-SOURCES
-=======
-
-* Documentation/memory-barriers.txt from the Linux kernel
-
-* "The JSR-133 Cookbook for Compiler Writers", available at
-  http://g.oswego.edu/dl/jmm/cookbook.html
diff --git a/docs/devel/index.rst b/docs/devel/index.rst
index b734ba4655..a9e1200dff 100644
--- a/docs/devel/index.rst
+++ b/docs/devel/index.rst
@@ -17,6 +17,7 @@ Contents:
    loads-stores
    memory
    migration
+   atomics
    stable-process
    testing
    decodetree
-- 
2.18.2





* [PATCH 2/4] atomics: update documentation for C11
  2020-04-06 19:13 [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Paolo Bonzini
  2020-04-06 19:13 ` [PATCH 1/4] atomics: convert to reStructuredText Paolo Bonzini
@ 2020-04-06 19:13 ` Paolo Bonzini
  2020-04-06 20:03   ` Eric Blake
  2020-04-07  9:06   ` Stefan Hajnoczi
  2020-04-06 19:13 ` [PATCH 3/4] rcu: do not mention atomic_mb_read/set in documentation Paolo Bonzini
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-06 19:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Ying Fang, stefanha

Deprecate atomic_mb_read and atomic_mb_set; it is not really possible to
use them correctly because they do not interoperate with sequentially-consistent
RMW operations.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/devel/atomics.rst | 290 ++++++++++++++++-------------------------
 1 file changed, 114 insertions(+), 176 deletions(-)

diff --git a/docs/devel/atomics.rst b/docs/devel/atomics.rst
index 00f3f7d3ed..e92752a64d 100644
--- a/docs/devel/atomics.rst
+++ b/docs/devel/atomics.rst
@@ -11,10 +11,15 @@ that is consistent with the expectations of the programmer.
 The most basic tool is locking.  Mutexes, condition variables and
 semaphores are used in QEMU, and should be the default approach to
 synchronization.  Anything else is considerably harder, but it's
-also justified more often than one would like.  The two tools that
-are provided by ``qemu/atomic.h`` are memory barriers and atomic operations.
+also justified more often than one would like;
+the most performance-critical parts of QEMU in particular require
+a very low-level approach to concurrency, involving memory barriers
+and atomic operations.  The semantics of concurrent memory accesses are governed
+by the C11 memory model.
 
-Macros defined by ``qemu/atomic.h`` fall in three camps:
+QEMU provides a header, ``qemu/atomic.h``, which wraps C11 atomics to
+provide better portability and a less verbose syntax.  ``qemu/atomic.h``
+provides macros that fall in three camps:
 
 - compiler barriers: ``barrier()``;
 
@@ -24,6 +29,14 @@ Macros defined by ``qemu/atomic.h`` fall in three camps:
 
 - sequentially consistent atomic access: everything else.
 
+In general, use of ``qemu/atomic.h`` should be wrapped with more easily
+used data structures (e.g. the lock-free singly-linked list operations
+``QSLIST_INSERT_HEAD_ATOMIC`` and ``QSLIST_MOVE_ATOMIC``) or synchronization
+primitives (such as RCU, ``QemuEvent`` or ``QemuLockCnt``).  Bare use of
+atomic operations and memory barriers should be limited to inter-thread
+checking of flags and documented thoroughly.
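+
+For example, the lock-free list operations let any number of producers
+hand items to a single consumer (a sketch; the ``Work`` type and the
+surrounding code are made up)::
+
+    typedef struct Work {
+        QSLIST_ENTRY(Work) next;
+        int op;
+    } Work;
+
+    static QSLIST_HEAD(, Work) pending = QSLIST_HEAD_INITIALIZER(pending);
+
+    /* any thread */
+    Work *w = g_new0(Work, 1);
+    QSLIST_INSERT_HEAD_ATOMIC(&pending, w, next);
+
+    /* consumer: detach the whole list, then process it privately */
+    QSLIST_HEAD(, Work) todo;
+    QSLIST_MOVE_ATOMIC(&todo, &pending);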
+
+
 
 Compiler memory barrier
 =======================
@@ -85,36 +98,14 @@ Similar operations return the new value of ``*ptr``::
     typeof(*ptr) atomic_or_fetch(ptr, val)
     typeof(*ptr) atomic_xor_fetch(ptr, val)
 
-Sequentially consistent loads and stores can be done using::
-
-    atomic_fetch_add(ptr, 0) for loads
-    atomic_xchg(ptr, val) for stores
+These operations operate on any type that is as wide as an int or smaller.
 
-However, they are quite expensive on some platforms, notably POWER and
-Arm.  Therefore, qemu/atomic.h provides two primitives with slightly
-weaker constraints::
+``qemu/atomic.h`` also provides sequentially consistent loads and stores::
 
     typeof(*ptr) atomic_mb_read(ptr)
     void         atomic_mb_set(ptr, val)
 
-The semantics of these primitives map to Java volatile variables,
-and are strongly related to memory barriers as used in the Linux
-kernel (see below).
-
-As long as you use atomic_mb_read and atomic_mb_set, accesses cannot
-be reordered with each other, and it is also not possible to reorder
-"normal" accesses around them.
-
-However, and this is the important difference between
-atomic_mb_read/atomic_mb_set and sequential consistency, it is important
-for both threads to access the same volatile variable.  It is not the
-case that everything visible to thread A when it writes volatile field f
-becomes visible to thread B after it reads volatile field g. The store
-and load have to "match" (i.e., be performed on the same volatile
-field) to achieve the right semantics.
-
-
-These operations operate on any type that is as wide as an int or smaller.
+Both of these, however, are deprecated and should not be used in new code.
 
 
 Weak atomic access and manual memory barriers
@@ -208,135 +199,62 @@ They come in six kinds:
   dependency and a full read barrier or better is required.
 
 
-This is the set of barriers that is required *between* two ``atomic_read()``
-and ``atomic_set()`` operations to achieve sequential consistency:
-
-   +----------------+-------------------------------------------------------+
-   |                |                 2nd operation                         |
-   |                +------------------+-----------------+------------------+
-   | 1st operation  | (after last)     | atomic_read     | atomic_set       |
-   +----------------+------------------+-----------------+------------------+
-   | (before first) | ..               | none            | smp_mb_release() |
-   +----------------+------------------+-----------------+------------------+
-   | atomic_read    | smp_mb_acquire() | smp_rmb() [1]_  | [2]_             |
-   +----------------+------------------+-----------------+------------------+
-   | atomic_set     | none             | smp_mb() [3]_   | smp_wmb()        |
-   +----------------+------------------+-----------------+------------------+
-
-   .. [1] Or smp_read_barrier_depends().
-
-   .. [2] This requires a load-store barrier.  This is achieved by
-          either smp_mb_acquire() or smp_mb_release().
-
-   .. [3] This requires a store-load barrier.  On most machines, the only
-          way to achieve this is a full barrier.
-
-
-You can see that the two possible definitions of ``atomic_mb_read()``
-and ``atomic_mb_set()`` are the following:
-
-  1) | atomic_mb_read(p)   = atomic_read(p); smp_mb_acquire()
-     | atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v); smp_mb()
-
-  2) | atomic_mb_read(p)   = smp_mb(); atomic_read(p); smp_mb_acquire()
-     | atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v);
-
-Usually the former is used, because ``smp_mb()`` is expensive and a program
-normally has more reads than writes.  Therefore it makes more sense to
-make ``atomic_mb_set()`` the more expensive operation.
-
-There are two common cases in which atomic_mb_read and atomic_mb_set
-generate too many memory barriers, and thus it can be useful to manually
-place barriers, or use atomic_load_acquire/atomic_store_release instead:
-
-- when a data structure has one thread that is always a writer
-  and one thread that is always a reader, manual placement of
-  memory barriers makes the write side faster.  Furthermore,
-  correctness is easy to check for in this case using the "pairing"
-  trick that is explained below:
-
-    +----------------------------------------------------------------------+
-    | thread 1                                                             |
-    +-----------------------------------+----------------------------------+
-    | before                            | after                            |
-    +===================================+==================================+
-    | ::                                | ::                               |
-    |                                   |                                  |
-    |   (other writes)                  |                                  |
-    |   atomic_mb_set(&a, x)            |   atomic_store_release(&a, x)    |
-    |   atomic_mb_set(&b, y)            |   atomic_store_release(&b, y)    |
-    +-----------------------------------+----------------------------------+
-
-    +----------------------------------------------------------------------+
-    | thread 2                                                             |
-    +-----------------------------------+----------------------------------+
-    | before                            | after                            |
-    +===================================+==================================+
-    | ::                                | ::                               |
-    |                                   |                                  |
-    |   y = atomic_mb_read(&b)          |   y = atomic_load_acquire(&b)    |
-    |   x = atomic_mb_read(&a)          |   x = atomic_load_acquire(&a)    |
-    |   (other reads)                   |                                  |
-    +-----------------------------------+----------------------------------+
-
-  Note that the barrier between the stores in thread 1, and between
-  the loads in thread 2, has been optimized here to a write or a
-  read memory barrier respectively.  On some architectures, notably
-  ARMv7, smp_mb_acquire and smp_mb_release are just as expensive as
-  smp_mb, but smp_rmb and/or smp_wmb are more efficient.
-
-- sometimes, a thread is accessing many variables that are otherwise
-  unrelated to each other (for example because, apart from the current
-  thread, exactly one other thread will read or write each of these
-  variables).  In this case, it is possible to "hoist" the implicit
-  barriers provided by ``atomic_mb_read()`` and ``atomic_mb_set()`` outside
-  a loop.  For example, the above definition ``atomic_mb_read()`` gives
-  the following transformation:
-
-    +-----------------------------------+----------------------------------+
-    | before                            | after                            |
-    +===================================+==================================+
-    | ::                                | ::                               |
-    |                                   |                                  |
-    |   n = 0;                          |   n = 0;                         |
-    |   for (i = 0; i < 10; i++)        |   for (i = 0; i < 10; i++)       |
-    |     n += atomic_mb_read(&a[i]);   |     n += atomic_read(&a[i]);     |
-    |                                   |   smp_mb_acquire();              |
-    +-----------------------------------+----------------------------------+
-
-  Similarly, atomic_mb_set() can be transformed as follows:
-
-    +-----------------------------------+----------------------------------+
-    | before                            | after                            |
-    +===================================+==================================+
-    | ::                                | ::                               |
-    |                                   |                                  |
-    |                                   |   smp_mb_release();              |
-    |   for (i = 0; i < 10; i++)        |   for (i = 0; i < 10; i++)       |
-    |     atomic_mb_set(&a[i], false);  |     atomic_set(&a[i], false);    |
-    |                                   |   smp_mb();                      |
-    +-----------------------------------+----------------------------------+
-
-
-  The other thread can still use ``atomic_mb_read()``/``atomic_mb_set()``.
+Memory barriers and ``atomic_load_acquire``/``atomic_store_release`` are
+mostly used when a data structure has one thread that is always a writer
+and one thread that is always a reader:
+
+    +----------------------------------+----------------------------------+
+    | thread 1                         | thread 2                         |
+    +==================================+==================================+
+    | ::                               | ::                               |
+    |                                  |                                  |
+    |   atomic_store_release(&a, x)    |   y = atomic_load_acquire(&b)    |
+    |   atomic_store_release(&b, y)    |   x = atomic_load_acquire(&a)    |
+    +----------------------------------+----------------------------------+
+
+In this case, correctness is easy to check using the
+"pairing" trick that is explained below.
+
+Sometimes, a thread is accessing many variables that are otherwise
+unrelated to each other (for example because, apart from the current
+thread, exactly one other thread will read or write each of these
+variables).  In this case, it is possible to "hoist" the barriers
+outside a loop.  For example:
+
+    +------------------------------------------+----------------------------------+
+    | before                                   | after                            |
+    +==========================================+==================================+
+    | ::                                       | ::                               |
+    |                                          |                                  |
+    |   n = 0;                                 |   n = 0;                         |
+    |   for (i = 0; i < 10; i++)               |   for (i = 0; i < 10; i++)       |
+    |     n += atomic_load_acquire(&a[i]);     |     n += atomic_read(&a[i]);     |
+    |                                          |   smp_mb_acquire();              |
+    +------------------------------------------+----------------------------------+
+    | ::                                       | ::                               |
+    |                                          |                                  |
+    |                                          |   smp_mb_release();              |
+    |   for (i = 0; i < 10; i++)               |   for (i = 0; i < 10; i++)       |
+    |     atomic_store_release(&a[i], false);  |     atomic_set(&a[i], false);    |
+    +------------------------------------------+----------------------------------+
 
 The two tricks can be combined.  In this case, splitting a loop in
-two lets you hoist the barriers out of the loops _and_ eliminate the
-expensive ``smp_mb()``:
-
-    +-----------------------------------+----------------------------------+
-    | before                            | after                            |
-    +===================================+==================================+
-    | ::                                | ::                               |
-    |                                   |                                  |
-    |                                   |     smp_mb_release();            |
-    |   for (i = 0; i < 10; i++) {      |     for (i = 0; i < 10; i++)     |
-    |     atomic_mb_set(&a[i], false);  |       atomic_set(&a[i], false);  |
-    |     atomic_mb_set(&b[i], false);  |     smb_wmb();                   |
-    |   }                               |     for (i = 0; i < 10; i++)     |
-    |                                   |       atomic_set(&a[i], false);  |
-    |                                   |     smp_mb();                    |
-    +-----------------------------------+----------------------------------+
+two lets you hoist the barriers out of the loops, and replace an
+``smp_mb_release()`` with an ``smp_wmb()``, which is possibly cheaper
+and clearer as well:
+
+    +------------------------------------------+----------------------------------+
+    | before                                   | after                            |
+    +==========================================+==================================+
+    | ::                                       | ::                               |
+    |                                          |                                  |
+    |                                          |     smp_mb_release();            |
+    |   for (i = 0; i < 10; i++) {             |     for (i = 0; i < 10; i++)     |
+    |     atomic_store_release(&a[i], false);  |       atomic_set(&a[i], false);  |
+    |     atomic_store_release(&b[i], false);  |     smp_wmb();                   |
+    |   }                                      |     for (i = 0; i < 10; i++)     |
+    |                                          |       atomic_set(&b[i], false);  |
+    +------------------------------------------+----------------------------------+
 
 
 Memory barrier pairing
@@ -347,11 +265,13 @@ always, be paired with another barrier.  In the case of QEMU, however,
 note that the other barrier may actually be in a driver that runs in
 the guest!
 
-For the purposes of pairing, ``smp_read_barrier_depends()`` and ``smp_rmb()``
-both count as read barriers.  A read barrier shall pair with a write
-barrier or a full barrier; a write barrier shall pair with a read
-barrier or a full barrier.  A full barrier can pair with anything.
-For example:
+For the purposes of pairing, ``smp_read_barrier_depends()``, ``smp_rmb()``
+and ``smp_mb_acquire()`` all count as read barriers; ``smp_wmb()`` and
+``smp_mb_release()`` both count as write barriers.
+
+A read barrier shall pair with a write barrier or a full barrier;
+a write barrier shall pair with a read barrier or a full barrier.
+A full barrier can pair with anything.  For example:
 
       +--------------------+------------------------------+
       | thread 1           | thread 2                     |
@@ -387,9 +307,6 @@ access and for data dependency barriers:
       |                    |  z = b[y];                   |
       +--------------------+------------------------------+
 
-``smp_wmb()`` also pairs with ``atomic_mb_read()`` and ``smp_mb_acquire()``.
-and ``smp_rmb()`` also pairs with ``atomic_mb_set()`` and ``smp_mb_release()``.
-
 
 Comparison with Linux kernel memory barriers
 ============================================
@@ -424,24 +341,45 @@ and memory barriers, and the equivalents in QEMU:
          ``atomic_cmpxchg``    returns the old value of the variable
          ===================== =========================================
 
-  In QEMU, the second kind does not exist.  Currently Linux has
-  atomic_fetch_or only.  QEMU provides and, or, inc, dec, add, sub.
+  In QEMU, the second kind does not exist.
 
 - different atomic read-modify-write operations in Linux imply
   a different set of memory barriers; in QEMU, all of them enforce
-  sequential consistency, which means they imply full memory barriers
-  before and after the operation.
-
-- Linux does not have an equivalent of ``atomic_mb_set()``.  In particular,
-  note that ``smp_store_mb()`` is a little weaker than ``atomic_mb_set()``.
-  ``atomic_mb_read()`` compiles to the same instructions as Linux's
-  ``smp_load_acquire()``, but this should be treated as an implementation
-  detail.
+  sequential consistency.
+
+- in QEMU, ``atomic_read()`` and ``atomic_set()`` do not participate in
+  the total ordering enforced by sequentially-consistent operations.
+  This is because QEMU uses the C11 memory model.  The following example
+  is correct in Linux but not in QEMU:
+
+      +----------------------------------+--------------------------------+
+      | Linux (correct)                  | QEMU (incorrect)               |
+      +==================================+================================+
+      | ::                               | ::                             |
+      |                                  |                                |
+      |   a = atomic_fetch_add(&x, 2);   |   a = atomic_fetch_add(&x, 2); |
+      |   b = READ_ONCE(y);              |   b = atomic_read(&y);         |
+      +----------------------------------+--------------------------------+
+
+  because the read of ``y`` can be moved (by either the processor or the
+  compiler) before the write of ``x``.
+
+  Fixing this requires an ``smp_mb()`` memory barrier between the write
+  of ``x`` and the read of ``y``.  In the common case where only one thread
+  writes ``x``, it is also possible to write it like this:
+
+      +--------------------------------+
+      | QEMU (correct)                 |
+      +================================+
+      | ::                             |
+      |                                |
+      |   a = atomic_read(&x);         |
+      |   atomic_set(&x, a + 2);       |
+      |   smp_mb();                    |
+      |   b = atomic_read(&y);         |
+      +--------------------------------+
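+
+  For comparison, the first fix (keeping the sequentially consistent
+  read-modify-write operation) would be written as::
+
+      a = atomic_fetch_add(&x, 2);
+      smp_mb();
+      b = atomic_read(&y);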
 
 Sources
 =======
 
 * Documentation/memory-barriers.txt from the Linux kernel
-
-* "The JSR-133 Cookbook for Compiler Writers", available at
-  http://g.oswego.edu/dl/jmm/cookbook.html
-- 
2.18.2




^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/4] rcu: do not mention atomic_mb_read/set in documentation
  2020-04-06 19:13 [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Paolo Bonzini
  2020-04-06 19:13 ` [PATCH 1/4] atomics: convert to reStructuredText Paolo Bonzini
  2020-04-06 19:13 ` [PATCH 2/4] atomics: update documentation for C11 Paolo Bonzini
@ 2020-04-06 19:13 ` Paolo Bonzini
  2020-04-06 19:13 ` [PATCH 4/4] async: use explicit memory barriers and relaxed accesses Paolo Bonzini
  2020-04-07  9:10 ` [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Stefan Hajnoczi
  4 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-06 19:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Ying Fang, stefanha

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/devel/rcu.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/devel/rcu.txt b/docs/devel/rcu.txt
index d83fed2f79..0ce15ba198 100644
--- a/docs/devel/rcu.txt
+++ b/docs/devel/rcu.txt
@@ -132,7 +132,7 @@ The core RCU API is small:
 
      typeof(*p) atomic_rcu_read(p);
 
-        atomic_rcu_read() is similar to atomic_mb_read(), but it makes
+        atomic_rcu_read() is similar to atomic_load_acquire(), but it makes
         some assumptions on the code that calls it.  This allows a more
         optimized implementation.
 
@@ -154,7 +154,7 @@ The core RCU API is small:
 
      void atomic_rcu_set(p, typeof(*p) v);
 
-        atomic_rcu_set() is also similar to atomic_mb_set(), and it also
+        atomic_rcu_set() is similar to atomic_store_release(), though it also
         makes assumptions on the code that calls it in order to allow a more
         optimized implementation.
 
-- 
2.18.2




^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/4] async: use explicit memory barriers and relaxed accesses
  2020-04-06 19:13 [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Paolo Bonzini
                   ` (2 preceding siblings ...)
  2020-04-06 19:13 ` [PATCH 3/4] rcu: do not mention atomic_mb_read/set in documentation Paolo Bonzini
@ 2020-04-06 19:13 ` Paolo Bonzini
  2020-04-07  9:09   ` Stefan Hajnoczi
  2020-04-07  9:10 ` [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Stefan Hajnoczi
  4 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-06 19:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Ying Fang, stefanha

When using C11 atomics, non-seqcst reads and writes do not participate
in the total order of seqcst operations.  In util/async.c and util/aio-posix.c,
in particular, the pattern that we use

          write ctx->notify_me                 write bh->scheduled
          read bh->scheduled                   read ctx->notify_me
          if !bh->scheduled, sleep             if ctx->notify_me, notify

needs to use seqcst operations for both the write and the read.  In
general this is something that we do not want, because there can be
many sources that are polled in addition to bottom halves.  The
alternative is to place a seqcst memory barrier between the write
and the read.  This also comes with a disadvantage, in that the
memory barrier is implicit on strongly-ordered architectures and
it wastes a few dozen clock cycles.
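
Concretely, placing the barrier between the write and the read turns
the pattern above into (a rough sketch of what this patch does):

          write ctx->notify_me                 write bh->scheduled
          smp_mb()                             smp_mb()
          read bh->scheduled                   read ctx->notify_me
          if !bh->scheduled, sleep             if ctx->notify_me, notify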

Fortunately, ctx->notify_me is never written concurrently by two
threads, so we can instead relax the writes to ctx->notify_me.
[This part of the commit message still needs to be written in more
detail, and it is the part I am not sure about.]

Analyzed-by: Ying Fang <fangying1@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/aio-posix.c |  9 ++++++++-
 util/aio-win32.c |  8 +++++++-
 util/async.c     | 12 ++++++++++--
 3 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index cd6cf0a4a9..37afec726f 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -569,7 +569,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
      * so disable the optimization now.
      */
     if (blocking) {
-        atomic_add(&ctx->notify_me, 2);
+        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
+        /*
+         * Write ctx->notify_me before computing the timeout
+         * (reading bottom half flags, etc.).  Pairs with
+         * atomic_xchg in aio_notify().
+         */
+        smp_mb();
     }
 
     qemu_lockcnt_inc(&ctx->list_lock);
@@ -590,6 +596,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     if (blocking) {
+        /* Finish the poll before clearing the flag.  */
         atomic_sub(&ctx->notify_me, 2);
         aio_notify_accept(ctx);
     }
diff --git a/util/aio-win32.c b/util/aio-win32.c
index a23b9c364d..2cccdb35c1 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -331,7 +331,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
      * so disable the optimization now.
      */
     if (blocking) {
-        atomic_add(&ctx->notify_me, 2);
+        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
+        /*
+         * Write ctx->notify_me before computing the timeout
+         * (reading bottom half flags, etc.).  Pairs with
+         * atomic_xchg in aio_notify().
+         */
+        smp_mb();
     }
 
     qemu_lockcnt_inc(&ctx->list_lock);
diff --git a/util/async.c b/util/async.c
index b94518b948..ee1bc87d2b 100644
--- a/util/async.c
+++ b/util/async.c
@@ -249,7 +249,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
 {
     AioContext *ctx = (AioContext *) source;
 
-    atomic_or(&ctx->notify_me, 1);
+    atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) | 1);
+
+    /*
+     * Write ctx->notify_me before computing the timeout
+     * (reading bottom half flags, etc.).  Pairs with
+     * atomic_xchg in aio_notify().
+     */
+    smp_mb();
 
     /* We assume there is no timeout already supplied */
     *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
@@ -414,7 +422,7 @@ void aio_notify(AioContext *ctx)
      * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
      */
     smp_mb();
-    if (ctx->notify_me) {
+    if (atomic_read(&ctx->notify_me)) {
         event_notifier_set(&ctx->notifier);
         atomic_mb_set(&ctx->notified, true);
     }
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/4] atomics: convert to reStructuredText
  2020-04-06 19:13 ` [PATCH 1/4] atomics: convert to reStructuredText Paolo Bonzini
@ 2020-04-06 19:58   ` Eric Blake
  2020-04-07 10:29     ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Blake @ 2020-04-06 19:58 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel; +Cc: Ying Fang, stefanha

On 4/6/20 2:13 PM, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   docs/devel/atomics.rst | 447 +++++++++++++++++++++++++++++++++++++++++
>   docs/devel/atomics.txt | 403 -------------------------------------
>   docs/devel/index.rst   |   1 +
>   3 files changed, 448 insertions(+), 403 deletions(-)
>   create mode 100644 docs/devel/atomics.rst
>   delete mode 100644 docs/devel/atomics.txt
> 

Pre-existing grammar nits, that you may want to touch up while reformatting:

> +Compiler memory barrier
> +=======================
> +
> +``barrier()`` prevents the compiler from moving the memory accesses either
> +side of it to the other side.  The compiler barrier has no direct effect

s/either/on either/


> +``qemu/atomic.h`` provides the following set of atomic read-modify-write
> +operations::
> +
> +    void atomic_inc(ptr)
> +    void atomic_dec(ptr)
> +    void atomic_add(ptr, val)
> +    void atomic_sub(ptr, val)
> +    void atomic_and(ptr, val)
> +    void atomic_or(ptr, val)
> +
> +    typeof(*ptr) atomic_fetch_inc(ptr)
> +    typeof(*ptr) atomic_fetch_dec(ptr)
> +    typeof(*ptr) atomic_fetch_add(ptr, val)
> +    typeof(*ptr) atomic_fetch_sub(ptr, val)
> +    typeof(*ptr) atomic_fetch_and(ptr, val)
> +    typeof(*ptr) atomic_fetch_or(ptr, val)
> +    typeof(*ptr) atomic_fetch_xor(ptr, val)
> +    typeof(*ptr) atomic_fetch_inc_nonzero(ptr)
> +    typeof(*ptr) atomic_xchg(ptr, val)
> +    typeof(*ptr) atomic_cmpxchg(ptr, old, new)
> +
> +all of which return the old value of ``*ptr``.  These operations are
> +polymorphic; they operate on any type that is as wide as a pointer.

Is this 'as wide as a pointer' or 'no wider than a pointer'? In other 
words, can 'val' be a narrower type?


> +However, and this is the important difference between
> +atomic_mb_read/atomic_mb_set and sequential consistency, it is important
> +for both threads to access the same volatile variable.  It is not the
> +case that everything visible to thread A when it writes volatile field f
> +becomes visible to thread B after it reads volatile field g. The store
> +and load have to "match" (i.e., be performed on the same volatile
> +field) to achieve the right semantics.
> +
> +
> +These operations operate on any type that is as wide as an int or smaller.

Is that all operations in this section, or only the last two 
operations (atomic_mb_read/set)?  What is the appropriate code to use if 
a pointer is wider than int?


> +
> +You can see that the two possible definitions of ``atomic_mb_read()``
> +and ``atomic_mb_set()`` are the following:
> +
> +  1) | atomic_mb_read(p)   = atomic_read(p); smp_mb_acquire()
> +     | atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v); smp_mb()
> +
> +  2) | atomic_mb_read(p)   = smp_mb() atomic_read(p); smp_mb_acquire()

Missing semicolon after smp_mb()

> +     | atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v);
> +
> +Usually the former is used, because ``smp_mb()`` is expensive and a program
> +normally has more reads than writes.  Therefore it makes more sense to
> +make ``atomic_mb_set()`` the more expensive operation.
> +

> +Memory barrier pairing
> +----------------------
> +
> +A useful rule of thumb is that memory barriers should always, or almost
> +always, be paired with another barrier.  In the case of QEMU, however,
> +note that the other barrier may actually be in a driver that runs in
> +the guest!
> +
> +For the purposes of pairing, ``smp_read_barrier_depends()`` and ``smp_rmb()``
> +both count as read barriers.  A read barrier shall pair with a write
> +barrier or a full barrier; a write barrier shall pair with a read

'shall' is awkward (if this is not a formal RFC-style requirement); 
better for colloquial English is 'must' or 'should' (twice)

> +barrier or a full barrier.  A full barrier can pair with anything.

> +Comparison with Linux kernel memory barriers
> +============================================
> +
> +Here is a list of differences between Linux kernel atomic operations
> +and memory barriers, and the equivalents in QEMU:
> +
> +- atomic operations in Linux are always on a 32-bit int type and
> +  use a boxed atomic_t type; atomic operations in QEMU are polymorphic
> +  and use normal C types.
> +
> +- Originally, atomic_read and atomic_set in Linux gave no guarantee
> +  at all. Linux 4.1 updated them to implement volatile
> +  semantics via ACCESS_ONCE (or the more recent READ/WRITE_ONCE).
> +
> +  QEMU's atomic_read/set implement, if the compiler supports it, C11
> +  atomic relaxed semantics, and volatile semantics otherwise.

Reads better as:

QEMU's atomic_read/set implement C11 atomic relaxed semantics if the 
compiler supports it, and volatile semantics otherwise.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/4] atomics: update documentation for C11
  2020-04-06 19:13 ` [PATCH 2/4] atomics: update documentation for C11 Paolo Bonzini
@ 2020-04-06 20:03   ` Eric Blake
  2020-04-07  9:06   ` Stefan Hajnoczi
  1 sibling, 0 replies; 13+ messages in thread
From: Eric Blake @ 2020-04-06 20:03 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel; +Cc: Ying Fang, stefanha

On 4/6/20 2:13 PM, Paolo Bonzini wrote:
> Deprecate atomic_mb_read and atomic_mb_set; it is not really possible to
> use them correctly because they do not interoperate with sequentially-consistent
> RMW operations.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   docs/devel/atomics.rst | 290 ++++++++++++++++-------------------------
>   1 file changed, 114 insertions(+), 176 deletions(-)
> 

> @@ -24,6 +29,14 @@ Macros defined by ``qemu/atomic.h`` fall in three camps:
>   
>   - sequentially consistent atomic access: everything else.
>   
> +In general, use of ``qemu/atomic.h`` should be wrapped with more easily
> +used data structures (e.g. the lock-free singly-liked list operations

linked

> +``QSLIST_INSERT_HEAD_ATOMIC`` and ``QSLIST_MOVE_ATOMIC``) or synchronization
> +primitives (such as RCU, ``QemuEvent`` or ``QemuLockCnt``).  Bare use of
> +atomic operations and memory barriers should be limited to inter-thread
> +checking of flags and documented thoroughly.
> +
> +
>   
>   Compiler memory barrier
>   =======================
> @@ -85,36 +98,14 @@ Similar operations return the new value of ``*ptr``::
>       typeof(*ptr) atomic_or_fetch(ptr, val)
>       typeof(*ptr) atomic_xor_fetch(ptr, val)
>   
> -Sequentially consistent loads and stores can be done using::
> -
> -    atomic_fetch_add(ptr, 0) for loads
> -    atomic_xchg(ptr, val) for stores
> +These operations operate on any type that is as wide as an int or smaller.
>   
> -However, they are quite expensive on some platforms, notably POWER and
> -Arm.  Therefore, qemu/atomic.h provides two primitives with slightly
> -weaker constraints::
> +``qemu/atomic.h`` also provides sequentially consistent loads and stores can::
>   

s/ can//

>       typeof(*ptr) atomic_mb_read(ptr)
>       void         atomic_mb_set(ptr, val)
>   
> -The semantics of these primitives map to Java volatile variables,
> -and are strongly related to memory barriers as used in the Linux
> -kernel (see below).
> -
> -As long as you use atomic_mb_read and atomic_mb_set, accesses cannot
> -be reordered with each other, and it is also not possible to reorder
> -"normal" accesses around them.
> -
> -However, and this is the important difference between
> -atomic_mb_read/atomic_mb_set and sequential consistency, it is important
> -for both threads to access the same volatile variable.  It is not the
> -case that everything visible to thread A when it writes volatile field f
> -becomes visible to thread B after it reads volatile field g. The store
> -and load have to "match" (i.e., be performed on the same volatile
> -field) to achieve the right semantics.
> -
> -
> -These operations operate on any type that is as wide as an int or smaller.
> +which however are deprecated.
>   
-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/4] atomics: update documentation for C11
  2020-04-06 19:13 ` [PATCH 2/4] atomics: update documentation for C11 Paolo Bonzini
  2020-04-06 20:03   ` Eric Blake
@ 2020-04-07  9:06   ` Stefan Hajnoczi
  2020-04-07  9:13     ` Paolo Bonzini
  1 sibling, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2020-04-07  9:06 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Ying Fang, qemu-devel

On Mon, Apr 06, 2020 at 03:13:18PM -0400, Paolo Bonzini wrote:
> -The semantics of these primitives map to Java volatile variables,
> -and are strongly related to memory barriers as used in the Linux
> -kernel (see below).
> -
> -As long as you use atomic_mb_read and atomic_mb_set, accesses cannot
> -be reordered with each other, and it is also not possible to reorder
> -"normal" accesses around them.
> -
> -However, and this is the important difference between
> -atomic_mb_read/atomic_mb_set and sequential consistency, it is important
> -for both threads to access the same volatile variable.  It is not the
> -case that everything visible to thread A when it writes volatile field f
> -becomes visible to thread B after it reads volatile field g. The store
> -and load have to "match" (i.e., be performed on the same volatile
> -field) to achieve the right semantics.
> -
> -
> -These operations operate on any type that is as wide as an int or smaller.
> +which however are deprecated.

Please indicate why they are deprecated and advise which alternative is
preferred.  This will help readers understand the current best practices
and make a decision about whether to avoid the deprecated APIs.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] async: use explicit memory barriers and relaxed accesses
  2020-04-06 19:13 ` [PATCH 4/4] async: use explicit memory barriers and relaxed accesses Paolo Bonzini
@ 2020-04-07  9:09   ` Stefan Hajnoczi
  2020-04-07  9:18     ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2020-04-07  9:09 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Ying Fang, qemu-devel

On Mon, Apr 06, 2020 at 03:13:20PM -0400, Paolo Bonzini wrote:
> When using C11 atomics, non-seqcst reads and writes do not participate
> in the total order of seqcst operations.  In util/async.c and util/aio-posix.c,
> in particular, the pattern that we use
> 
>           write ctx->notify_me                 write bh->scheduled
>           read bh->scheduled                   read ctx->notify_me
>           if !bh->scheduled, sleep             if ctx->notify_me, notify
> 
> needs to use seqcst operations for both the write and the read.  In
> general this is something that we do not want, because there can be
> many sources that are polled in addition to bottom halves.  The
> alternative is to place a seqcst memory barrier between the write
> and the read.  This also comes with a disadvantage, in that the
> memory barrier is implicit on strongly-ordered architectures and
> it wastes a few dozen clock cycles.
> 
> Fortunately, ctx->notify_me is never written concurrently by two
> threads, so we can instead relax the writes to ctx->notify_me.
> [This part of the commit message still needs to be written in more
> detail, and it is the part I am not sure about.]
> 
> Analyzed-by: Ying Fang <fangying1@huawei.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  util/aio-posix.c |  9 ++++++++-
>  util/aio-win32.c |  8 +++++++-
>  util/async.c     | 12 ++++++++++--
>  3 files changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/util/aio-posix.c b/util/aio-posix.c
> index cd6cf0a4a9..37afec726f 100644
> --- a/util/aio-posix.c
> +++ b/util/aio-posix.c
> @@ -569,7 +569,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>       * so disable the optimization now.
>       */
>      if (blocking) {
> -        atomic_add(&ctx->notify_me, 2);
> +        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);

Non-atomic "atomic" code looks suspicious and warrants a comment
mentioning that this is only executed from one thread.  This applies to
the other instances in this patch too.
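
For example, something along these lines (just a sketch):

    /* ctx->notify_me is only written by the AioContext's home thread,
     * so this non-atomic read-modify-write cannot race with another
     * writer.
     */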

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures
  2020-04-06 19:13 [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Paolo Bonzini
                   ` (3 preceding siblings ...)
  2020-04-06 19:13 ` [PATCH 4/4] async: use explicit memory barriers and relaxed accesses Paolo Bonzini
@ 2020-04-07  9:10 ` Stefan Hajnoczi
  4 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2020-04-07  9:10 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Ying Fang, qemu-devel

On Mon, Apr 06, 2020 at 03:13:16PM -0400, Paolo Bonzini wrote:
> Patch 4 fixes qemu-img and qemu-io hangs on weakly-ordered architectures.
> Patch 1-3 are related docs fixes and improvements.
> 
> This is RFC because it relies on the iothread being locked during aio_poll
> on the main AioContext.  If I add assertions for this however I see a
> failure for test 267, so I am posting it as a preview before I debug that.
> The doc patches can also go in independently of course.
> 
> Paolo
> 
> Paolo Bonzini (4):
>   atomics: convert to reStructuredText
>   atomics: update documentation for C11
>   rcu: do not mention atomic_mb_read/set in documentation
>   async: use explicit memory barriers
> 
>  docs/devel/atomics.rst | 385 +++++++++++++++++++++++++++++++++++++++
>  docs/devel/atomics.txt | 403 -----------------------------------------
>  docs/devel/index.rst   |   1 +
>  docs/devel/rcu.txt     |   4 +-
>  util/aio-posix.c       |   9 +-
>  util/aio-win32.c       |   8 +-
>  util/async.c           |  12 +-
>  7 files changed, 413 insertions(+), 409 deletions(-)
>  create mode 100644 docs/devel/atomics.rst
>  delete mode 100644 docs/devel/atomics.txt

I have left comments requesting clarifications, but the code change
looks fine:

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/4] atomics: update documentation for C11
  2020-04-07  9:06   ` Stefan Hajnoczi
@ 2020-04-07  9:13     ` Paolo Bonzini
  0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-07  9:13 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Ying Fang, qemu-devel


On 07/04/20 11:06, Stefan Hajnoczi wrote:
>> -
>> -However, and this is the important difference between
>> -atomic_mb_read/atomic_mb_set and sequential consistency, it is important
>> -for both threads to access the same volatile variable.  It is not the
>> -case that everything visible to thread A when it writes volatile field f
>> -becomes visible to thread B after it reads volatile field g. The store
>> -and load have to "match" (i.e., be performed on the same volatile
>> -field) to achieve the right semantics.
>> -
>> -
>> -These operations operate on any type that is as wide as an int or smaller.
>> +which however are deprecated.
> Please indicate why they are deprecated and advise which alternative is
> preferred.  This will help readers understand the current best practices
> and make a decision about whether to avoid the deprecated APIs.

True, I will put a link to the "Comparison with Linux" section and clarify.

Paolo


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] async: use explicit memory barriers and relaxed accesses
  2020-04-07  9:09   ` Stefan Hajnoczi
@ 2020-04-07  9:18     ` Paolo Bonzini
  0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-07  9:18 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Ying Fang, qemu-devel


On 07/04/20 11:09, Stefan Hajnoczi wrote:
>>      if (blocking) {
>> -        atomic_add(&ctx->notify_me, 2);
>> +        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
> Non-atomic "atomic" code looks suspicious and warrants a comment
> mentioning that this is only executed from one thread.  This applies to
> the other instances in this patch too.

Yes, that's the patch that is missing from this series, which
strengthens the assertion to ensure that we're in the home thread.
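
Something like

    assert(in_aio_context_home_thread(ctx));

before the relaxed update, give or take the exact helper.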

Paolo


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/4] atomics: convert to reStructuredText
  2020-04-06 19:58   ` Eric Blake
@ 2020-04-07 10:29     ` Paolo Bonzini
  0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-04-07 10:29 UTC (permalink / raw)
  To: Eric Blake, qemu-devel; +Cc: Ying Fang, stefanha

On 06/04/20 21:58, Eric Blake wrote:
>>
>> +For the purposes of pairing, ``smp_read_barrier_depends()`` and
>> ``smp_rmb()``
>> +both count as read barriers.  A read barrier shall pair with a write
>> +barrier or a full barrier; a write barrier shall pair with a read
> 
> 'shall' is awkward (if this is not a formal RFC-style requirement),
> better for colloquial English is 'must' or 'should' (twice)

Yes, it is a formal requirement.  If you don't pair barriers, you don't
get any synchronization between the threads.

Thanks for the edits!  I have included them, except when the text is
going to be removed by the next patch.

Paolo



^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2020-04-07 10:29 UTC | newest]

Thread overview: 13+ messages
2020-04-06 19:13 [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Paolo Bonzini
2020-04-06 19:13 ` [PATCH 1/4] atomics: convert to reStructuredText Paolo Bonzini
2020-04-06 19:58   ` Eric Blake
2020-04-07 10:29     ` Paolo Bonzini
2020-04-06 19:13 ` [PATCH 2/4] atomics: update documentation for C11 Paolo Bonzini
2020-04-06 20:03   ` Eric Blake
2020-04-07  9:06   ` Stefan Hajnoczi
2020-04-07  9:13     ` Paolo Bonzini
2020-04-06 19:13 ` [PATCH 3/4] rcu: do not mention atomic_mb_read/set in documentation Paolo Bonzini
2020-04-06 19:13 ` [PATCH 4/4] async: use explicit memory barriers and relaxed accesses Paolo Bonzini
2020-04-07  9:09   ` Stefan Hajnoczi
2020-04-07  9:18     ` Paolo Bonzini
2020-04-07  9:10 ` [RFC PATCH 0/4] async: fix hangs on weakly-ordered architectures Stefan Hajnoczi
