linux-kernel.vger.kernel.org archive mirror
* [PATCH memory-model 0/9] LKMM updates for v5.10
@ 2020-08-31 18:20 Paul E. McKenney
From: Paul E. McKenney @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

Hello!

This series provides LKMM updates:

1.	Fix references for DMA*.txt files.

2.	Replace HTTP links with HTTPS ones: LKMM.

3.	tools/memory-model: Update recipes.txt prime_numbers.c path.

4.	tools/memory-model: Improve litmus-test documentation.

5.	tools/memory-model: Add a simple entry point document.

6.	tools/memory-model: Expand the cheatsheet.txt notion of relaxed.

7.	tools/memory-model: Move Documentation description to
	Documentation/README.

8.	tools/memory-model: Document categories of ordering primitives.

9.	tools/memory-model: Document locking corner cases.

						Thanx, Paul

------------------------------------------------------------------------

 Documentation/litmus-tests/locking/DCL-broken.litmus |   55 
 Documentation/litmus-tests/locking/DCL-fixed.litmus  |   56 
 Documentation/litmus-tests/locking/RM-broken.litmus  |   42 
 Documentation/litmus-tests/locking/RM-fixed.litmus   |   42 
 Documentation/memory-barriers.txt                    |    6 
 tools/memory-model/Documentation/README              |   86 +
 tools/memory-model/Documentation/cheatsheet.txt      |   27 
 tools/memory-model/Documentation/litmus-tests.txt    | 1078 ++++++++++++++++++-
 tools/memory-model/Documentation/locking.txt         |  320 +++++
 tools/memory-model/Documentation/ordering.txt        |  462 ++++++++
 tools/memory-model/Documentation/recipes.txt         |    4 
 tools/memory-model/Documentation/references.txt      |    2 
 tools/memory-model/Documentation/simple.txt          |  271 ++++
 tools/memory-model/README                            |  182 ---
 tools/memory-model/control-dependencies.txt          |  256 ++++
 15 files changed, 2730 insertions(+), 159 deletions(-)


* [PATCH kcsan 1/9] docs: fix references for DMA*.txt files
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Mauro Carvalho Chehab,
	Paul E . McKenney

From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

As we moved those files to core-api, fix references to point
to their newer locations.

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/memory-barriers.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 9618633..39a5115 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -546,8 +546,8 @@ There are certain things that the Linux kernel memory barriers do not guarantee:
 	[*] For information on bus mastering DMA and coherency please read:
 
 	    Documentation/driver-api/pci/pci.rst
-	    Documentation/DMA-API-HOWTO.txt
-	    Documentation/DMA-API.txt
+	    Documentation/core-api/dma-api-howto.rst
+	    Documentation/core-api/dma-api.rst
 
 
 DATA DEPENDENCY BARRIERS (HISTORICAL)
@@ -1932,7 +1932,7 @@ There are some more advanced barrier functions:
      here.
 
      See the subsection "Kernel I/O barrier effects" for more information on
-     relaxed I/O accessors and the Documentation/DMA-API.txt file for more
+     relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for more
      information on consistent memory.
 
  (*) pmem_wmb();
-- 
2.9.5



* [PATCH kcsan 2/9] Replace HTTP links with HTTPS ones: LKMM
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Alexander A. Klimov,
	Paul E . McKenney

From: "Alexander A. Klimov" <grandmaster@al2klimov.de>

Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
    For each line:
      If doesn't contain `\bxmlns\b`:
        For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
          If both the HTTP and HTTPS versions
          return 200 OK and serve the same content:
            Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov <grandmaster@al2klimov.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/references.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/memory-model/Documentation/references.txt b/tools/memory-model/Documentation/references.txt
index ecbbaa5..c5fdfd1 100644
--- a/tools/memory-model/Documentation/references.txt
+++ b/tools/memory-model/Documentation/references.txt
@@ -120,7 +120,7 @@ o	Jade Alglave, Luc Maranget, and Michael Tautschnig. 2014. "Herding
 
 o	Jade Alglave, Patrick Cousot, and Luc Maranget. 2016. "Syntax and
 	semantics of the weak consistency model specification language
-	cat". CoRR abs/1608.07531 (2016). http://arxiv.org/abs/1608.07531
+	cat". CoRR abs/1608.07531 (2016). https://arxiv.org/abs/1608.07531
 
 
 Memory-model comparisons
-- 
2.9.5



* [PATCH kcsan 3/9] tools/memory-model: Update recipes.txt prime_numbers.c path
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

The expand_to_next_prime() and next_prime_number() functions have moved
from lib/prime_numbers.c to lib/math/prime_numbers.c, so this commit
updates recipes.txt to reflect this change.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/recipes.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/Documentation/recipes.txt b/tools/memory-model/Documentation/recipes.txt
index 63c4adf..03f58b1 100644
--- a/tools/memory-model/Documentation/recipes.txt
+++ b/tools/memory-model/Documentation/recipes.txt
@@ -1,7 +1,7 @@
 This document provides "recipes", that is, litmus tests for commonly
 occurring situations, as well as a few that illustrate subtly broken but
 attractive nuisances.  Many of these recipes include example code from
-v4.13 of the Linux kernel.
+v5.7 of the Linux kernel.
 
 The first section covers simple special cases, the second section
 takes off the training wheels to cover more involved examples,
@@ -278,7 +278,7 @@ is present if the value loaded determines the address of a later access
 first place (control dependency).  Note that the term "data dependency"
 is sometimes casually used to cover both address and data dependencies.
 
-In lib/prime_numbers.c, the expand_to_next_prime() function invokes
+In lib/math/prime_numbers.c, the expand_to_next_prime() function invokes
 rcu_assign_pointer(), and the next_prime_number() function invokes
 rcu_dereference().  This combination mediates access to a bit vector
 that is expanded as additional primes are needed.
-- 
2.9.5



* [PATCH kcsan 4/9] tools/memory-model: Improve litmus-test documentation
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

The current LKMM documentation says very little about litmus tests, and
worse yet directs people to the herd7 documentation for more information.
Now, the herd7 documentation is quite voluminous and educational,
but it is intended for people creating and modifying memory models,
not those attempting to use them.

This commit therefore updates README and creates a litmus-tests.txt
file that gives an overview of litmus-test format and describes ways of
modeling various special cases, illustrated with numerous examples.

[ paulmck: Add Alan Stern feedback. ]
[ paulmck: Apply Dave Chinner feedback. ]
[ paulmck: Apply Andrii Nakryiko feedback. ]
[ paulmck: Apply Johannes Weiner feedback. ]
Link: https://lwn.net/Articles/827180/
Reported-by: Dave Chinner <david@fromorbit.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/litmus-tests.txt | 1070 +++++++++++++++++++++
 tools/memory-model/README                         |  155 +--
 2 files changed, 1108 insertions(+), 117 deletions(-)
 create mode 100644 tools/memory-model/Documentation/litmus-tests.txt

diff --git a/tools/memory-model/Documentation/litmus-tests.txt b/tools/memory-model/Documentation/litmus-tests.txt
new file mode 100644
index 0000000..289a38d
--- /dev/null
+++ b/tools/memory-model/Documentation/litmus-tests.txt
@@ -0,0 +1,1070 @@
+Linux-Kernel Memory Model Litmus Tests
+======================================
+
+This file describes the LKMM litmus-test format by example, describes
+some tricks and traps, and finally outlines LKMM's limitations.  Earlier
+versions of this material appeared in a number of LWN articles, including:
+
+https://lwn.net/Articles/720550/
+	A formal kernel memory-ordering model (part 2)
+https://lwn.net/Articles/608550/
+	Axiomatic validation of memory barriers and atomic instructions
+https://lwn.net/Articles/470681/
+	Validating Memory Barriers and Atomic Instructions
+
+This document presents information in decreasing order of applicability,
+so that, where possible, the information that has proven more commonly
+useful is shown near the beginning.
+
+For information on installing LKMM, including the underlying "herd7"
+tool, please see tools/memory-model/README.
+
+
+Copy-Pasta
+==========
+
+As with other software, it is often better (if less macho) to adapt an
+existing litmus test than it is to create one from scratch.  A number
+of litmus tests may be found in the kernel source tree:
+
+	tools/memory-model/litmus-tests/
+	Documentation/litmus-tests/
+
+Several thousand more example litmus tests are available on github
+and kernel.org:
+
+	https://github.com/paulmckrcu/litmus
+	https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git/tree/CodeSamples/formal/herd
+	https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git/tree/CodeSamples/formal/litmus
+
+The -l and -L arguments to "git grep" can be quite helpful in identifying
+existing litmus tests that are similar to the one you need.  But even if
+you start with an existing litmus test, it is still helpful to have a
+good understanding of the litmus-test format.
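For example, -l lists files that contain a pattern, while -L lists files
that lack it.  Here is a toy demonstration using a hypothetical scratch
repository and invented file names (run it anywhere, not in the kernel
tree):

```shell
# Build a scratch git repository holding two tiny litmus files.
dir=$(mktemp -d)
cd "$dir"
git init -q
printf 'C MP+rel\nsmp_store_release(y, 1);\n' > mp-rel.litmus
printf 'C MP+mb\nsmp_mb();\n' > mp-mb.litmus
git add .

# -l: tracked files that DO contain the pattern.
git grep -l 'smp_store_release' -- '*.litmus'    # prints mp-rel.litmus
# -L: tracked files that do NOT contain the pattern.
git grep -L 'smp_store_release' -- '*.litmus'    # prints mp-mb.litmus
```

The same two commands, run from the root of a kernel tree, quickly
narrow several hundred litmus tests down to the handful worth reading.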
+
+
+Examples and Format
+===================
+
+This section describes the overall format of litmus tests, starting
+with a small example of the message-passing pattern and moving on to
+more complex examples that illustrate explicit initialization and LKMM's
+minimalistic set of flow-control statements.
+
+
+Message-Passing Example
+-----------------------
+
+This section gives an overview of the format of a litmus test using an
+example based on the common message-passing use case.  This use case
+appears often in the Linux kernel.  For example, a flag (modeled by "y"
+below) indicates that a buffer (modeled by "x" below) is now completely
+filled in and ready for use.  It would be very bad if the consumer saw the
+flag set, but, due to memory misordering, saw old values in the buffer.
+
+This example asks whether smp_store_release() and smp_load_acquire()
+suffice to avoid this bad outcome:
+
+ 1 C MP+pooncerelease+poacquireonce
+ 2
+ 3 {}
+ 4
+ 5 P0(int *x, int *y)
+ 6 {
+ 7   WRITE_ONCE(*x, 1);
+ 8   smp_store_release(y, 1);
+ 9 }
+10
+11 P1(int *x, int *y)
+12 {
+13   int r0;
+14   int r1;
+15
+16   r0 = smp_load_acquire(y);
+17   r1 = READ_ONCE(*x);
+18 }
+19
+20 exists (1:r0=1 /\ 1:r1=0)
+
+Line 1 starts with "C", which identifies this file as being in the
+LKMM C-language format (which, as we will see, is a small fragment
+of the full C language).  The remainder of line 1 is the name of
+the test, which by convention is the filename with the ".litmus"
+suffix stripped.  In this case, the actual test may be found in
+tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus
+in the Linux-kernel source tree.
+
+Mechanically generated litmus tests will often have an optional
+double-quoted comment string on the second line.  Such strings are ignored
+when running the test.  Yes, you can add your own comments to litmus
+tests, but this is a bit involved due to the use of multiple parsers.
+For now, you can use C-language comments in the C code, and these comments
+may be in either the "/* */" or the "//" style.  A later section will
+cover the full litmus-test commenting story.
+
+Line 3 is the initialization section.  Because the default initialization
+to zero suffices for this test, the "{}" syntax is used, which means the
+initialization section is empty.  Litmus tests requiring non-default
+initialization must have non-empty initialization sections, as in the
+example that will be presented later in this document.
+
+Lines 5-9 show the first process and lines 11-18 the second process.  Each
+process corresponds to a Linux-kernel task (or kthread, workqueue, thread,
+and so on; LKMM discussions often use these terms interchangeably).
+The name of the first process is "P0" and that of the second "P1".
+You can name your processes anything you like as long as the names consist
+of a single "P" followed by a number, and as long as the numbers are
+consecutive starting with zero.  This can actually be quite helpful:
+for example, a .litmus file matching "^P1(" but not matching "^P2("
+must contain a two-process litmus test.
+
+The arguments for each function are pointers to the global variables
+used by that function.  Unlike normal C-language function parameters, the
+names are significant.  The fact that both P0() and P1() have a formal
+parameter named "x" means that these two processes are working with the
+same global variable, also named "x".  So the "int *x, int *y" on P0()
+and P1() mean that both processes are working with two shared global
+variables, "x" and "y".  Global variables are always passed to processes
+by reference, hence "P0(int *x, int *y)", but *never* "P0(int x, int y)".
+
+P0() has no local variables, but P1() has two of them named "r0" and "r1".
+These names may be freely chosen, but for historical reasons stemming from
+other litmus-test formats, it is conventional to use names consisting of
+"r" followed by a number as shown here.  A common bug in litmus tests
+is forgetting to add a global variable to a process's parameter list.
+This will sometimes result in an error message, but can also cause the
+intended global to instead be silently treated as an undeclared local
+variable.
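
For example, here is a hypothetical fragment (not from the kernel tree)
exhibiting this bug, with "y" omitted from P1()'s parameter list:

```
P1(int *x)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(y);  /* BUG: "y" missing from parameter list */
	r1 = READ_ONCE(*x);
}
```

Depending on the rest of the test, herd7 might reject this outright, or
might instead quietly treat this "y" as a local variable that is
unrelated to the "y" used by the other processes.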
+
+Each process's code is similar to Linux-kernel C, as can be seen on lines
+7-8 and 13-17.  This code may use many of the Linux kernel's atomic
+operations, some of its exclusive-lock functions, and some of its RCU
+and SRCU functions.  An approximate list of the currently supported
+functions may be found in the linux-kernel.def file.
+
+The P0() process does "WRITE_ONCE(*x, 1)" on line 7.  Because "x" is a
+pointer in P0()'s parameter list, this does an unordered store to global
+variable "x".  Line 8 does "smp_store_release(y, 1)", and because "y"
+is also in P0()'s parameter list, this does a release store to global
+variable "y".
+
+The P1() process declares two local variables on lines 13 and 14.
+Line 16 does "r0 = smp_load_acquire(y)" which does an acquire load
+from global variable "y" into local variable "r0".  Line 17 does a
+"r1 = READ_ONCE(*x)", which does an unordered load from "*x" into local
+variable "r1".  Both "x" and "y" are in P1()'s parameter list, so both
+reference the same global variables that are used by P0().
+
+Line 20 is the "exists" assertion expression to evaluate the final state.
+This final state is evaluated after the dust has settled: both processes
+have completed and all of their memory references and memory barriers
+have propagated to all parts of the system.  The references to the local
+variables "r0" and "r1" in line 24 must be prefixed with "1:" to specify
+which process they are local to.
+
+Note that the assertion expression is written in the litmus-test
+language rather than in C.  For example, single "=" is an equality
+operator rather than an assignment.  The "/\" character combination means
+"and".  Similarly, "\/" stands for "or".  Both of these are ASCII-art
+representations of the corresponding mathematical symbols.  Finally,
+"~" stands for "logical not", which is "!" in C, and not to be confused
+with the C-language "~" operator which instead stands for "bitwise not".
+Parentheses may be used to override precedence.
+
+The "exists" assertion on line 20 is satisfied if the consumer sees the
+flag ("y") set but the buffer ("x") as not yet filled in, that is, if P1()
+loaded a value from "x" that was equal to 1 but loaded a value from "y"
+that was still equal to zero.
+
+This example can be checked by running the following command, which
+absolutely must be run from the tools/memory-model directory and from
+this directory only:
+
+herd7 -conf linux-kernel.cfg litmus-tests/MP+pooncerelease+poacquireonce.litmus
+
+The output is the result of something similar to a full state-space
+search, and is as follows:
+
+ 1 Test MP+pooncerelease+poacquireonce Allowed
+ 2 States 3
+ 3 1:r0=0; 1:r1=0;
+ 4 1:r0=0; 1:r1=1;
+ 5 1:r0=1; 1:r1=1;
+ 6 No
+ 7 Witnesses
+ 8 Positive: 0 Negative: 3
+ 9 Condition exists (1:r0=1 /\ 1:r1=0)
+10 Observation MP+pooncerelease+poacquireonce Never 0 3
+11 Time MP+pooncerelease+poacquireonce 0.00
+12 Hash=579aaa14d8c35a39429b02e698241d09
+
+The most pertinent line is line 10, which contains "Never 0 3", which
+indicates that the bad result flagged by the "exists" clause never
+happens.  This line might instead say "Sometimes" to indicate that the
+bad result happened in some but not all executions, or it might say
+"Always" to indicate that the bad result happened in all executions.
+(The herd7 tool doesn't judge, so it is only an LKMM convention that the
+"exists" clause indicates a bad result.  To see this, invert the "exists"
+clause's condition and run the test.)  The numbers ("0 3") at the end
+of this line indicate the number of end states satisfying the "exists"
+clause (0) and the number not satisfying that clause (3).
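
To try that experiment on this test, negate the condition using the "~"
operator (a hypothetical edit, not a file in the kernel tree):

```
exists (~(1:r0=1 /\ 1:r1=0))
```

The three benign states then satisfy the clause and the forbidden state
does not, so the satisfying and non-satisfying counts simply swap.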
+
+Another important part of this output is shown in lines 2-5, repeated here:
+
+ 2 States 3
+ 3 1:r0=0; 1:r1=0;
+ 4 1:r0=0; 1:r1=1;
+ 5 1:r0=1; 1:r1=1;
+
+Line 2 gives the total number of end states, and each of lines 3-5 list
+one of these states, with the first ("1:r0=0; 1:r1=0;") indicating that
+both of P1()'s loads returned the value "0".  As expected, given the
+"Never" on line 10, the state flagged by the "exists" clause is not
+listed.  This full list of states can be helpful when debugging a new
+litmus test.
+
+The rest of the output is not normally needed, either due to irrelevance
+or due to being redundant with the lines discussed above.  However, the
+following paragraph lists them for the benefit of readers possessed of
+an insatiable curiosity.  Other readers should feel free to skip ahead.
+
+Line 1 echoes the test name, along with the "Test" and "Allowed".  Line 6's
+"No" says that the "exists" clause was not satisfied by any execution,
+and as such it has the same meaning as line 10's "Never".  Line 7 is a
+lead-in to line 8's "Positive: 0 Negative: 3", which lists the number
+of end states satisfying and not satisfying the "exists" clause, just
+like the two numbers at the end of line 10.  Line 9 repeats the "exists"
+clause so that you don't have to look it up in the litmus-test file.
+The number at the end of line 11 (which begins with "Time") gives the
+time in seconds required to analyze the litmus test.  Small tests such
+as this one complete in a few milliseconds, so "0.00" is quite common.
+Line 12 gives a hash of the contents for the litmus-test file, and is used
+by tooling that manages litmus tests and their output.  This tooling is
+used by people modifying LKMM itself, and among other things lets such
+people know which of the several thousand relevant litmus tests were
+affected by a given change to LKMM.
+
+
+Initialization
+--------------
+
+The previous example relied on the default zero initialization for
+"x" and "y", but a similar litmus test could instead initialize them
+to some other value:
+
+ 1 C MP+pooncerelease+poacquireonce
+ 2
+ 3 {
+ 4   x=42;
+ 5   y=42;
+ 6 }
+ 7
+ 8 P0(int *x, int *y)
+ 9 {
+10   WRITE_ONCE(*x, 1);
+11   smp_store_release(y, 1);
+12 }
+13
+14 P1(int *x, int *y)
+15 {
+16   int r0;
+17   int r1;
+18
+19   r0 = smp_load_acquire(y);
+20   r1 = READ_ONCE(*x);
+21 }
+22
+23 exists (1:r0=1 /\ 1:r1=42)
+
+Lines 3-6 now initialize both "x" and "y" to the value 42.  This also
+means that the "exists" clause on line 23 must change "1:r1=0" to
+"1:r1=42".
+
+Running the test gives the same overall result as before, but with the
+value 42 appearing in place of the value zero:
+
+ 1 Test MP+pooncerelease+poacquireonce Allowed
+ 2 States 3
+ 3 1:r0=1; 1:r1=1;
+ 4 1:r0=42; 1:r1=1;
+ 5 1:r0=42; 1:r1=42;
+ 6 No
+ 7 Witnesses
+ 8 Positive: 0 Negative: 3
+ 9 Condition exists (1:r0=1 /\ 1:r1=42)
+10 Observation MP+pooncerelease+poacquireonce Never 0 3
+11 Time MP+pooncerelease+poacquireonce 0.02
+12 Hash=ab9a9b7940a75a792266be279a980156
+
+It is tempting to avoid the open-coded repetitions of the value "42"
+by defining another global variable "initval=42" and replacing all
+occurrences of "42" with "initval".  This will not, repeat *not*,
+initialize "x" and "y" to 42, but instead to the address of "initval"
+(try it!).  See the section below on linked lists to learn more about
+why this approach to initialization can be useful.
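
For concreteness, here is what that tempting-but-wrong initialization
would look like (a hypothetical fragment):

```
{
	initval=42;
	x=initval;
	y=initval;
}
```

Here herd7 interprets the "initval" on the right-hand side as the
address of the variable "initval", so that "x" and "y" become pointers
to it rather than copies of the value 42.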
+
+
+Control Structures
+------------------
+
+LKMM supports the C-language "if" statement, which allows modeling of
+conditional branches.  In LKMM, conditional branches can affect ordering,
+but only if you are *very* careful (compilers are surprisingly able
+to optimize away conditional branches).  The following example shows
+the "load buffering" (LB) use case that is used in the Linux kernel to
+synchronize between ring-buffer producers and consumers.  In the example
+below, P0() is one side checking to see if an operation may proceed and
+P1() is the other side completing its update.
+
+ 1 C LB+fencembonceonce+ctrlonceonce
+ 2
+ 3 {}
+ 4
+ 5 P0(int *x, int *y)
+ 6 {
+ 7   int r0;
+ 8
+ 9   r0 = READ_ONCE(*x);
+10   if (r0)
+11     WRITE_ONCE(*y, 1);
+12 }
+13
+14 P1(int *x, int *y)
+15 {
+16   int r0;
+17
+18   r0 = READ_ONCE(*y);
+19   smp_mb();
+20   WRITE_ONCE(*x, 1);
+21 }
+22
+23 exists (0:r0=1 /\ 1:r0=1)
+
+P1()'s "if" statement on line 10 works as expected, so that line 11 is
+executed only if line 9 loads a non-zero value from "x".  Because P1()'s
+write of "1" to "x" happens only after P1()'s read from "y", one would
+hope that the "exists" clause cannot be satisfied.  LKMM agrees:
+
+ 1 Test LB+fencembonceonce+ctrlonceonce Allowed
+ 2 States 2
+ 3 0:r0=0; 1:r0=0;
+ 4 0:r0=1; 1:r0=0;
+ 5 No
+ 6 Witnesses
+ 7 Positive: 0 Negative: 2
+ 8 Condition exists (0:r0=1 /\ 1:r0=1)
+ 9 Observation LB+fencembonceonce+ctrlonceonce Never 0 2
+10 Time LB+fencembonceonce+ctrlonceonce 0.00
+11 Hash=e5260556f6de495fd39b556d1b831c3b
+
+LKMM has no "while" statement because full state-space search has some
+difficulty with iteration.  However, there
+are tricks that may be used to handle some special cases, which are
+discussed below.  In addition, loop-unrolling tricks may be applied,
+albeit sparingly.
+
+
+Tricks and Traps
+================
+
+This section covers extracting debug output from herd7, emulating
+spin loops, handling trivial linked lists, adding comments to litmus tests,
+emulating call_rcu(), and finally tricks to improve herd7 performance
+in order to better handle large litmus tests.
+
+
+Debug Output
+------------
+
+By default, the herd7 state output includes all variables mentioned
+in the "exists" clause.  But sometimes debugging efforts are greatly
+aided by the values of other variables.  Consider this litmus test
+(tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus but
+slightly modified), which probes an obscure corner of hardware memory
+ordering:
+
+ 1 C SB+rfionceonce-poonceonces
+ 2
+ 3 {}
+ 4
+ 5 P0(int *x, int *y)
+ 6 {
+ 7   int r1;
+ 8   int r2;
+ 9
+10   WRITE_ONCE(*x, 1);
+11   r1 = READ_ONCE(*x);
+12   r2 = READ_ONCE(*y);
+13 }
+14
+15 P1(int *x, int *y)
+16 {
+17   int r3;
+18   int r4;
+19
+20   WRITE_ONCE(*y, 1);
+21   r3 = READ_ONCE(*y);
+22   r4 = READ_ONCE(*x);
+23 }
+24
+25 exists (0:r2=0 /\ 1:r4=0)
+
+The herd7 output is as follows:
+
+ 1 Test SB+rfionceonce-poonceonces Allowed
+ 2 States 4
+ 3 0:r2=0; 1:r4=0;
+ 4 0:r2=0; 1:r4=1;
+ 5 0:r2=1; 1:r4=0;
+ 6 0:r2=1; 1:r4=1;
+ 7 Ok
+ 8 Witnesses
+ 9 Positive: 1 Negative: 3
+10 Condition exists (0:r2=0 /\ 1:r4=0)
+11 Observation SB+rfionceonce-poonceonces Sometimes 1 3
+12 Time SB+rfionceonce-poonceonces 0.01
+13 Hash=c7f30fe0faebb7d565405d55b7318ada
+
+(This output indicates that CPUs are permitted to "snoop their own
+store buffers", which all of Linux's CPU families other than s390 will
+happily do.  Such snooping results in disagreement among CPUs on the
+order of stores from different CPUs, which is rarely an issue.)
+
+But the herd7 output shows only the two variables mentioned in the
+"exists" clause.  Someone modifying this test might wish to know the
+values of "x", "y", "0:r1", and "0:r3" as well.  The "locations"
+statement on line 25 shows how to cause herd7 to display additional
+variables:
+
+ 1 C SB+rfionceonce-poonceonces
+ 2
+ 3 {}
+ 4
+ 5 P0(int *x, int *y)
+ 6 {
+ 7   int r1;
+ 8   int r2;
+ 9
+10   WRITE_ONCE(*x, 1);
+11   r1 = READ_ONCE(*x);
+12   r2 = READ_ONCE(*y);
+13 }
+14
+15 P1(int *x, int *y)
+16 {
+17   int r3;
+18   int r4;
+19
+20   WRITE_ONCE(*y, 1);
+21   r3 = READ_ONCE(*y);
+22   r4 = READ_ONCE(*x);
+23 }
+24
+25 locations [0:r1; 1:r3; x; y]
+26 exists (0:r2=0 /\ 1:r4=0)
+
+The herd7 output then displays the values of all the variables:
+
+ 1 Test SB+rfionceonce-poonceonces Allowed
+ 2 States 4
+ 3 0:r1=1; 0:r2=0; 1:r3=1; 1:r4=0; x=1; y=1;
+ 4 0:r1=1; 0:r2=0; 1:r3=1; 1:r4=1; x=1; y=1;
+ 5 0:r1=1; 0:r2=1; 1:r3=1; 1:r4=0; x=1; y=1;
+ 6 0:r1=1; 0:r2=1; 1:r3=1; 1:r4=1; x=1; y=1;
+ 7 Ok
+ 8 Witnesses
+ 9 Positive: 1 Negative: 3
+10 Condition exists (0:r2=0 /\ 1:r4=0)
+11 Observation SB+rfionceonce-poonceonces Sometimes 1 3
+12 Time SB+rfionceonce-poonceonces 0.01
+13 Hash=40de8418c4b395388f6501cafd1ed38d
+
+What if you would like to know the value of a particular global variable
+at some particular point in a given process's execution?  One approach
+is to use a READ_ONCE() to load that global variable into a new local
+variable, then add that local variable to the "locations" clause.
+But be careful:  In some litmus tests, adding a READ_ONCE() will change
+the outcome!  For one example, please see the C-READ_ONCE.litmus and
+C-READ_ONCE-omitted.litmus tests located here:
+
+	https://github.com/paulmckrcu/litmus/blob/master/manual/kernel/
+
+
+Spin Loops
+----------
+
+The analysis carried out by herd7 explores the full state space, which
+is at best of exponential time complexity.  Adding processes or
+increasing the amount of code in a given process can greatly increase
+execution time.
+Potentially infinite loops, such as those used to wait for locks to
+become available, are clearly problematic.
+
+Fortunately, it is possible to avoid state-space explosion by specially
+modeling such loops.  For example, the following litmus test emulates
+locking using xchg_acquire(), but instead of enclosing xchg_acquire()
+in a spin loop, it excludes executions that fail to acquire the
+lock using a herd7 "filter" clause.  Note that for exclusive locking, you
+are better off using the spin_lock() and spin_unlock() that LKMM directly
+models, if for no other reason than that these are much faster.  However, the
+techniques illustrated in this section can be used for other purposes,
+such as emulating reader-writer locking, which LKMM does not yet model.
+
+ 1 C C-SB+l-o-o-u+l-o-o-u-X
+ 2
+ 3 {
+ 4 }
+ 5
+ 6 P0(int *sl, int *x0, int *x1)
+ 7 {
+ 8   int r2;
+ 9   int r1;
+10
+11   r2 = xchg_acquire(sl, 1);
+12   WRITE_ONCE(*x0, 1);
+13   r1 = READ_ONCE(*x1);
+14   smp_store_release(sl, 0);
+15 }
+16
+17 P1(int *sl, int *x0, int *x1)
+18 {
+19   int r2;
+20   int r1;
+21
+22   r2 = xchg_acquire(sl, 1);
+23   WRITE_ONCE(*x1, 1);
+24   r1 = READ_ONCE(*x0);
+25   smp_store_release(sl, 0);
+26 }
+27
+28 filter (0:r2=0 /\ 1:r2=0)
+29 exists (0:r1=0 /\ 1:r1=0)
+
+This litmus test may be found here:
+
+https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git/tree/CodeSamples/formal/herd/C-SB+l-o-o-u+l-o-o-u-X.litmus
+
+This test uses two global variables, "x0" and "x1", and also emulates a
+single global spinlock named "sl".  This spinlock is held by whichever
+process changes the value of "sl" from "0" to "1", and is released when
+that process sets "sl" back to "0".  P0()'s lock acquisition is emulated
+on line 11 using xchg_acquire(), which unconditionally stores the value
+"1" to "sl" and stores either "0" or "1" to "r2", depending on whether
+the lock acquisition was successful or unsuccessful (due to "sl" already
+having the value "1"), respectively.  P1() operates in a similar manner.
+
+Rather unconventionally, execution appears to proceed to the critical
+section on lines 12 and 13 in either case.  Line 14 then uses an
+smp_store_release() to store zero to "sl", thus emulating lock release.
+
+The case where xchg_acquire() fails to acquire the lock is handled by
+the "filter" clause on line 28, which tells herd7 to keep only those
+executions in which both "0:r2" and "1:r2" are zero, that is to pay
+attention only to those executions in which both locks are actually
+acquired.  Thus, the bogus executions that would execute the critical
+sections are discarded and any effects that they might have had are
+ignored.  Note well that the "filter" clause keeps those executions
+for which its expression is satisfied, that is, for which the expression
+evaluates to true.  In other words, the "filter" clause says what to
+keep, not what to discard.
+
+The result of running this test is as follows:
+
+ 1 Test C-SB+l-o-o-u+l-o-o-u-X Allowed
+ 2 States 2
+ 3 0:r1=0; 1:r1=1;
+ 4 0:r1=1; 1:r1=0;
+ 5 No
+ 6 Witnesses
+ 7 Positive: 0 Negative: 2
+ 8 Condition exists (0:r1=0 /\ 1:r1=0)
+ 9 Observation C-SB+l-o-o-u+l-o-o-u-X Never 0 2
+10 Time C-SB+l-o-o-u+l-o-o-u-X 0.03
+
+The "Never" on line 9 indicates that this use of xchg_acquire() and
+smp_store_release() really does correctly emulate locking.
+
+Why doesn't the litmus test take the simpler approach of using a spin loop
+to handle failed spinlock acquisitions, like the kernel does?  The key
+insight behind this litmus test is that spin loops have no effect on the
+possible "exists"-clause outcomes of program execution in the absence
+of deadlock.  In other words, given a high-quality lock-acquisition
+primitive in a deadlock-free program running on high-quality hardware,
+each lock acquisition will eventually succeed.  Because herd7 already
+explores the full state space, the length of time required to actually
+acquire the lock does not matter.  After all, herd7 already models all
+possible durations of the xchg_acquire() statements.
+
+Why not just add the "filter" clause to the "exists" clause, thus
+avoiding the "filter" clause entirely?  This does work, but is slower.
+The reason that the "filter" clause is faster is that (in the common case)
+herd7 knows to abandon an execution as soon as the "filter" expression
+fails to be satisfied.  In contrast, the "exists" clause is evaluated
+only at the end of time, thus requiring herd7 to waste time on bogus
+executions in which both critical sections proceed concurrently.  In
+addition, some LKMM users like the separation of concerns provided by
+using both the "filter" and "exists" clauses.
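As a sketch, the merged form described above would drop the "filter"
clause entirely and expand the earlier test's "exists" clause to cover
both conditions:

```
exists (0:r2=0 /\ 1:r2=0 /\ 0:r1=0 /\ 1:r1=0)
```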
+
+Readers lacking a pathological interest in odd corner cases should feel
+free to skip the remainder of this section.
+
+But what if the litmus test were to temporarily set "0:r2" to a non-zero
+value?  Wouldn't that cause herd7 to abandon the execution prematurely
+due to an early mismatch of the "filter" clause?
+
+Why not just try it?  Line 4 of the following modified litmus test
+introduces a new global variable "x2" that is initialized to "1".  Line 23
+of P1() reads that variable into "1:r2" to force an early mismatch with
+the "filter" clause.  Line 24 uses a known-true "if" condition to defeat
+any static analysis that herd7 might do.  Finally, the "exists" clause
+on line 32 is updated to a condition that is always satisfied at the end
+of the test.
+
+ 1 C C-SB+l-o-o-u+l-o-o-u-X
+ 2
+ 3 {
+ 4   x2=1;
+ 5 }
+ 6
+ 7 P0(int *sl, int *x0, int *x1)
+ 8 {
+ 9   int r2;
+10   int r1;
+11
+12   r2 = xchg_acquire(sl, 1);
+13   WRITE_ONCE(*x0, 1);
+14   r1 = READ_ONCE(*x1);
+15   smp_store_release(sl, 0);
+16 }
+17
+18 P1(int *sl, int *x0, int *x1, int *x2)
+19 {
+20   int r2;
+21   int r1;
+22
+23   r2 = READ_ONCE(*x2);
+24   if (r2)
+25     r2 = xchg_acquire(sl, 1);
+26   WRITE_ONCE(*x1, 1);
+27   r1 = READ_ONCE(*x0);
+28   smp_store_release(sl, 0);
+29 }
+30
+31 filter (0:r2=0 /\ 1:r2=0)
+32 exists (x1=1)
+
+If the "filter" clause were to check each variable at each point in the
+execution, running this litmus test would display no executions because
+all executions would be filtered out at line 23.  However, the output
+is instead as follows:
+
+ 1 Test C-SB+l-o-o-u+l-o-o-u-X Allowed
+ 2 States 1
+ 3 x1=1;
+ 4 Ok
+ 5 Witnesses
+ 6 Positive: 2 Negative: 0
+ 7 Condition exists (x1=1)
+ 8 Observation C-SB+l-o-o-u+l-o-o-u-X Always 2 0
+ 9 Time C-SB+l-o-o-u+l-o-o-u-X 0.04
+10 Hash=080bc508da7f291e122c6de76c0088e3
+
+Line 3 shows that there is one execution that did not get filtered out,
+so the "filter" clause is evaluated only on the last assignment to
+the variables that it checks.  In this case, the "filter" clause is a
+conjunction, so it might be evaluated twice, once at the final (and only)
+assignment to "0:r2" and once at the final assignment to "1:r2".
+
+
+Linked Lists
+------------
+
+LKMM can handle linked lists, but only linked lists in which each node
+contains nothing except a pointer to the next node in the list.  This is
+of course quite restrictive, but there is nevertheless quite a bit that
+can be done within these confines, as can be seen in the litmus test
+at tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus:
+
+ 1 C MP+onceassign+derefonce
+ 2
+ 3 {
+ 4 y=z;
+ 5 z=0;
+ 6 }
+ 7
+ 8 P0(int *x, int **y)
+ 9 {
+10   WRITE_ONCE(*x, 1);
+11   rcu_assign_pointer(*y, x);
+12 }
+13
+14 P1(int *x, int **y)
+15 {
+16   int *r0;
+17   int r1;
+18
+19   rcu_read_lock();
+20   r0 = rcu_dereference(*y);
+21   r1 = READ_ONCE(*r0);
+22   rcu_read_unlock();
+23 }
+24
+25 exists (1:r0=x /\ 1:r1=0)
+
+Line 4's "y=z" may seem odd, given that "z" has not yet been initialized.
+But "y=z" does not set the value of "y" to that of "z", but instead
+sets the value of "y" to the *address* of "z".  Lines 4 and 5 therefore
+create a simple linked list, with "y" pointing to "z" and "z" having a
+NULL pointer.  A much longer linked list could be created if desired,
+and circular singly linked lists can also be created and manipulated.
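For example, a hypothetical three-node list headed by "y" might be
initialized as follows, where "z" and "w" are additional global
variables and "w" supplies the NULL-terminated tail:

```
{
y=z;
z=w;
w=0;
}
```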
+
+The "exists" clause works the same way, with the "1:r0=x" comparing P1()'s
+"r0" not to the value of "x", but again to its address.  This term of the
+"exists" clause therefore tests whether line 20's load from "y" saw the
+value stored by line 11, which is in fact what is required in this case.
+
+P0()'s line 10 initializes "x" to the value 1, then line 11 links to "x"
+from "y", replacing "z".
+
+P1()'s line 20 loads a pointer from "y", and line 21 dereferences that
+pointer.  The RCU read-side critical section spanning lines 19-22 is
+just for show in this example.
+
+Running this test results in the following:
+
+ 1 Test MP+onceassign+derefonce Allowed
+ 2 States 2
+ 3 1:r0=x; 1:r1=1;
+ 4 1:r0=z; 1:r1=0;
+ 5 No
+ 6 Witnesses
+ 7 Positive: 0 Negative: 2
+ 8 Condition exists (1:r0=x /\ 1:r1=0)
+ 9 Observation MP+onceassign+derefonce Never 0 2
+10 Time MP+onceassign+derefonce 0.00
+11 Hash=49ef7a741563570102448a256a0c8568
+
+The only possible outcomes feature P1() loading a pointer to "z"
+(which contains zero) on the one hand and P1() loading a pointer to "x"
+(which contains the value one) on the other.  This should be reassuring
+because it says that RCU readers cannot see the pre-initialization
+values when accessing a newly inserted list node.  This undesirable
+scenario is flagged by the "exists" clause, and would occur if P1()
+loaded a pointer to "x", but obtained the pre-initialization value of
+zero after dereferencing that pointer.
+
+
+Comments
+--------
+
+Different portions of a litmus test are processed by different parsers,
+which has the charming effect of requiring different comment syntax in
+different portions of the litmus test.  The C-syntax portions use
+C-language comments (either "/* */" or "//"), while the other portions
+use OCaml comments "(* *)".
+
+The following litmus test illustrates the comment style corresponding
+to each syntactic unit of the test:
+
+ 1 C MP+onceassign+derefonce (* A *)
+ 2
+ 3 (* B *)
+ 4
+ 5 {
+ 6 y=z; (* C *)
+ 7 z=0;
+ 8 } // D
+ 9
+10 // E
+11
+12 P0(int *x, int **y) // F
+13 {
+14   WRITE_ONCE(*x, 1);  // G
+15   rcu_assign_pointer(*y, x);
+16 }
+17
+18 // H
+19
+20 P1(int *x, int **y)
+21 {
+22   int *r0;
+23   int r1;
+24
+25   rcu_read_lock();
+26   r0 = rcu_dereference(*y);
+27   r1 = READ_ONCE(*r0);
+28   rcu_read_unlock();
+29 }
+30
+31 // I
+32
+33 exists (* J *) (1:r0=x /\ (* K *) 1:r1=0) (* L *)
+
+In short, use C-language comments in the C code and OCaml comments in
+the rest of the litmus test.
+
+On the other hand, if you prefer C-style comments everywhere, the
+C preprocessor is your friend.
+
+
+Asynchronous RCU Grace Periods
+------------------------------
+
+The following litmus test is derived from the example shown in
+Documentation/litmus-tests/rcu/RCU+sync+free.litmus, but converted to
+emulate call_rcu():
+
+ 1 C RCU+sync+free
+ 2
+ 3 {
+ 4 int x = 1;
+ 5 int *y = &x;
+ 6 int z = 1;
+ 7 }
+ 8
+ 9 P0(int *x, int *z, int **y)
+10 {
+11   int *r0;
+12   int r1;
+13
+14   rcu_read_lock();
+15   r0 = rcu_dereference(*y);
+16   r1 = READ_ONCE(*r0);
+17   rcu_read_unlock();
+18 }
+19
+20 P1(int *z, int **y, int *c)
+21 {
+22   rcu_assign_pointer(*y, z);
+23   smp_store_release(c, 1); // Emulate call_rcu().
+24 }
+25
+26 P2(int *x, int *z, int **y, int *c)
+27 {
+28   int r0;
+29
+30   r0 = smp_load_acquire(c); // Note call_rcu() request.
+31   synchronize_rcu(); // Wait one grace period.
+32   WRITE_ONCE(*x, 0); // Emulate the RCU callback.
+33 }
+34
+35 filter (2:r0=1) (* Reject too-early starts. *)
+36 exists (0:r0=x /\ 0:r1=0)
+
+Lines 4-6 initialize a linked list headed by "y" that initially contains
+"x".  In addition, "z" is pre-initialized to prepare for P1(), which
+will replace "x" with "z" in this list.
+
+P0() on lines 9-18 enters an RCU read-side critical section, loads the
+list header "y" and dereferences it, leaving the node in "0:r0" and
+the node's value in "0:r1".
+
+P1() on lines 20-24 updates the list header to instead reference "z",
+then emulates call_rcu() by doing a release store into "c".
+
+P2() on lines 26-33 emulates the behind-the-scenes effect of doing a
+call_rcu().  Line 30 first does an acquire load from "c", then line 31
+waits for an RCU grace period to elapse, and finally line 32 emulates
+the RCU callback, which in turn emulates a call to kfree().
+
+Of course, it is possible for P2() to start too soon, so that the
+value of "2:r0" is zero rather than the required value of "1".
+The "filter" clause on line 35 handles this possibility, rejecting
+all executions in which "2:r0" is not equal to the value "1".
+
+
+Performance
+-----------
+
+LKMM's exploration of the full state-space can be extremely helpful,
+but it does not come for free.  The price is exponential computational
+complexity in terms of the number of processes, the average number
+of statements in each process, and the total number of stores in the
+litmus test.
+
+So it is best to start small and then work up.  Where possible, break
+your code down into small pieces each representing a core concurrency
+requirement.
+
+That said, herd7 is quite fast.  On an unprepossessing x86 laptop, it
+was able to analyze the following 10-process RCU litmus test in about
+six seconds.
+
+https://github.com/paulmckrcu/litmus/blob/master/auto/C-RW-R+RW-R+RW-G+RW-G+RW-G+RW-G+RW-R+RW-R+RW-R+RW-R.litmus
+
+One way to make herd7 run faster is to use the "-speedcheck true" option.
+This option prevents herd7 from generating all possible end states,
+instead causing it to focus solely on whether or not the "exists"
+clause can be satisfied.  With this option, herd7 evaluates the above
+litmus test in about 300 milliseconds, for more than an order of magnitude
+improvement in performance.
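For example, assuming a litmus test named sample.litmus in the
tools/memory-model directory, the invocation might look like this:

```
$ herd7 -speedcheck true -conf linux-kernel.cfg sample.litmus
```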
+
+Larger 16-process litmus tests that would normally consume 15 minutes
+of time complete in about 40 seconds with this option.  To be fair,
+you do get an extra 65,535 states when you leave off the "-speedcheck
+true" option.
+
+https://github.com/paulmckrcu/litmus/blob/master/auto/C-RW-R+RW-R+RW-G+RW-G+RW-G+RW-G+RW-R+RW-R+RW-R+RW-R+RW-G+RW-G+RW-G+RW-G+RW-R+RW-R.litmus
+
+Nevertheless, litmus-test analysis really is of exponential complexity,
+whether with or without "-speedcheck true".  Increasing by just three
+processes to a 19-process litmus test requires 2 hours and 40 minutes
+without, and about 8 minutes with "-speedcheck true".  Each of these
+results represents roughly an order of magnitude slowdown compared to the
+16-process litmus test.  Again, to be fair, the multi-hour run explores
+no fewer than 524,287 additional states compared to the shorter one.
+
+https://github.com/paulmckrcu/litmus/blob/master/auto/C-RW-R+RW-R+RW-G+RW-G+RW-G+RW-G+RW-R+RW-R+RW-R+RW-R+RW-R+RW-R+RW-G+RW-G+RW-G+RW-G+RW-R+RW-R+RW-R.litmus
+
+If you don't like command-line arguments, you can obtain a similar speedup
+by adding a "filter" clause with exactly the same expression as your
+"exists" clause.
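As a sketch, for the store-buffering test shown earlier in this
document, this would amount to the following pair of clauses:

```
filter (0:r1=0 /\ 1:r1=0)
exists (0:r1=0 /\ 1:r1=0)
```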
+
+However, please note that seeing the full set of states can be extremely
+helpful when developing and debugging litmus tests.
+
+
+LIMITATIONS
+===========
+
+Limitations of the Linux-kernel memory model (LKMM) include:
+
+1.	Compiler optimizations are not accurately modeled.  Of course,
+	the use of READ_ONCE() and WRITE_ONCE() limits the compiler's
+	ability to optimize, but under some circumstances it is possible
+	for the compiler to undermine the memory model.  For more
+	information, see Documentation/explanation.txt (in particular,
+	the "THE PROGRAM ORDER RELATION: po AND po-loc" and "A WARNING"
+	sections).
+
+	Note that this limitation in turn limits LKMM's ability to
+	accurately model address, control, and data dependencies.
+	For example, if the compiler can deduce the value of some variable
+	carrying a dependency, then the compiler can break that dependency
+	by substituting a constant of that value.
+
+2.	Multiple access sizes for a single variable are not supported,
+	and neither are misaligned or partially overlapping accesses.
+
+3.	Exceptions and interrupts are not modeled.  In some cases,
+	this limitation can be overcome by modeling the interrupt or
+	exception with an additional process.
+
+4.	I/O such as MMIO or DMA is not supported.
+
+5.	Self-modifying code (such as that found in the kernel's
+	alternatives mechanism, function tracer, Berkeley Packet Filter
+	JIT compiler, and module loader) is not supported.
+
+6.	Complete modeling of all variants of atomic read-modify-write
+	operations, locking primitives, and RCU is not provided.
+	For example, call_rcu() and rcu_barrier() are not supported.
+	However, a substantial amount of support is provided for these
+	operations, as shown in the linux-kernel.def file.
+
+	Here are specific limitations:
+
+	a.	When rcu_assign_pointer() is passed NULL, the Linux
+		kernel provides no ordering, but LKMM models this
+		case as a store release.
+
+	b.	The "unless" RMW operations are not currently modeled:
+		atomic_long_add_unless(), atomic_inc_unless_negative(),
+		and atomic_dec_unless_positive().  These can be emulated
+		in litmus tests, for example, by using atomic_cmpxchg().
+
		One exception to this limitation is atomic_add_unless(),
		which is provided directly by herd7 (so there is no
		corresponding definition in linux-kernel.def).  Because
		atomic_add_unless() is modeled by herd7, it can be used
		in litmus tests.
+
+	c.	The call_rcu() function is not modeled.  As was shown above,
+		it can be emulated in litmus tests by adding another
+		process that invokes synchronize_rcu() and the body of the
+		callback function, with (for example) a release-acquire
+		from the site of the emulated call_rcu() to the beginning
+		of the additional process.
+
+	d.	The rcu_barrier() function is not modeled.  It can be
+		emulated in litmus tests emulating call_rcu() via
+		(for example) a release-acquire from the end of each
+		additional call_rcu() process to the site of the
		emulated rcu_barrier().
+
+	e.	Although sleepable RCU (SRCU) is now modeled, there
+		are some subtle differences between its semantics and
+		those in the Linux kernel.  For example, the kernel
+		might interpret the following sequence as two partially
+		overlapping SRCU read-side critical sections:
+
+			 1  r1 = srcu_read_lock(&my_srcu);
+			 2  do_something_1();
+			 3  r2 = srcu_read_lock(&my_srcu);
+			 4  do_something_2();
+			 5  srcu_read_unlock(&my_srcu, r1);
+			 6  do_something_3();
+			 7  srcu_read_unlock(&my_srcu, r2);
+
+		In contrast, LKMM will interpret this as a nested pair of
+		SRCU read-side critical sections, with the outer critical
+		section spanning lines 1-7 and the inner critical section
+		spanning lines 3-5.
+
+		This difference would be more of a concern had anyone
+		identified a reasonable use case for partially overlapping
+		SRCU read-side critical sections.  For more information
+		on the trickiness of such overlapping, please see:
+		https://paulmck.livejournal.com/40593.html
+
+	f.	Reader-writer locking is not modeled.  It can be
+		emulated in litmus tests using atomic read-modify-write
+		operations.
+
+The fragment of the C language supported by these litmus tests is quite
+limited and in some ways non-standard:
+
+1.	There is no automatic C-preprocessor pass.  You can of course
+	run it manually, if you choose.
+
+2.	There is no way to create functions other than the Pn() functions
+	that model the concurrent processes.
+
+3.	The Pn() functions' formal parameters must be pointers to the
+	global shared variables.  Nothing can be passed by value into
+	these functions.
+
+4.	The only functions that can be invoked are those built directly
+	into herd7 or that are defined in the linux-kernel.def file.
+
+5.	The "switch", "do", "for", "while", and "goto" C statements are
+	not supported.	The "switch" statement can be emulated by the
+	"if" statement.  The "do", "for", and "while" statements can
+	often be emulated by manually unrolling the loop, or perhaps by
+	enlisting the aid of the C preprocessor to minimize the resulting
+	code duplication.  Some uses of "goto" can be emulated by "if",
+	and some others by unrolling.
+
+6.	Although you can use a wide variety of types in litmus-test
+	variable declarations, and especially in global-variable
+	declarations, the "herd7" tool understands only int and
+	pointer types.	There is no support for floating-point types,
+	enumerations, characters, strings, arrays, or structures.
+
+7.	Parsing of variable declarations is very loose, with almost no
+	type checking.
+
+8.	Initializers differ from their C-language counterparts.
+	When an initializer contains the name of a shared variable,
+	that name denotes a pointer to that variable, not the current
+	value of that variable.  For example, "int x = y" is interpreted
+	the way "int x = &y" would be in C.
+
+9.	Dynamic memory allocation is not supported, although this can
+	be worked around in some cases by supplying multiple statically
+	allocated variables.
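To illustrate item 5's unrolling advice, a hypothetical two-pass loop
such as "for (i = 1; i <= 2; i++) WRITE_ONCE(*x, i);" could be emulated
by writing out each iteration by hand:

```
WRITE_ONCE(*x, 1);
WRITE_ONCE(*x, 2);
```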
+
+Some of these limitations may be overcome in the future, but others are
+more likely to be addressed by incorporating the Linux-kernel memory model
+into other tools.
+
+Finally, please note that LKMM is subject to change as hardware, use cases,
+and compilers evolve.
diff --git a/tools/memory-model/README b/tools/memory-model/README
index ecb7385..d2e03c4 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -63,10 +63,32 @@ BASIC USAGE: HERD7
 ==================
 
 The memory model is used, in conjunction with "herd7", to exhaustively
-explore the state space of small litmus tests.
+explore the state space of small litmus tests.  Documentation describing
+the format, features, capabilities and limitations of these litmus
+tests is available in tools/memory-model/Documentation/litmus-tests.txt.
 
-For example, to run SB+fencembonceonces.litmus against the memory model:
+Example litmus tests may be found in the Linux-kernel source tree:
 
+	tools/memory-model/litmus-tests/
+	Documentation/litmus-tests/
+
+Several thousand more example litmus tests are available here:
+
+	https://github.com/paulmckrcu/litmus
+	https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git/tree/CodeSamples/formal/herd
+	https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git/tree/CodeSamples/formal/litmus
+
+Documentation describing litmus tests and how to use them may be found
+here:
+
+	tools/memory-model/Documentation/litmus-tests.txt
+
+The remainder of this section uses the SB+fencembonceonces.litmus test
+located in the tools/memory-model directory.
+
+To run SB+fencembonceonces.litmus against the memory model:
+
+  $ cd $LINUX_SOURCE_TREE/tools/memory-model
   $ herd7 -conf linux-kernel.cfg litmus-tests/SB+fencembonceonces.litmus
 
 Here is the corresponding output:
@@ -87,7 +109,11 @@ Here is the corresponding output:
 The "Positive: 0 Negative: 3" and the "Never 0 3" each indicate that
 this litmus test's "exists" clause can not be satisfied.
 
-See "herd7 -help" or "herdtools7/doc/" for more information.
+See "herd7 -help" or "herdtools7/doc/" for more information on running the
+tool itself, but please be aware that this documentation is intended for
+people who work on the memory model itself, that is, people making changes
+to the tools/memory-model/linux-kernel.* files.  It is not intended for
+people focusing on writing, understanding, and running LKMM litmus tests.
 
 
 =====================
@@ -124,7 +150,11 @@ that during two million trials, the state specified in this litmus
 test's "exists" clause was not reached.
 
 And, as with "herd7", please see "klitmus7 -help" or "herdtools7/doc/"
-for more information.
+for more information.  And again, please be aware that this documentation
+is intended for people who work on the memory model itself, that is,
+people making changes to the tools/memory-model/linux-kernel.* files.
+It is not intended for people focusing on writing, understanding, and
+running LKMM litmus tests.
 
 
 ====================
@@ -137,6 +167,10 @@ Documentation/cheatsheet.txt
 Documentation/explanation.txt
 	Describes the memory model in detail.
 
+Documentation/litmus-tests.txt
+	Describes the format, features, capabilities, and limitations
+	of the litmus tests that LKMM can evaluate.
+
 Documentation/recipes.txt
 	Lists common memory-ordering patterns.
 
@@ -187,116 +221,3 @@ README
 	This file.
 
 scripts	Various scripts, see scripts/README.
-
-
-===========
-LIMITATIONS
-===========
-
-The Linux-kernel memory model (LKMM) has the following limitations:
-
-1.	Compiler optimizations are not accurately modeled.  Of course,
-	the use of READ_ONCE() and WRITE_ONCE() limits the compiler's
-	ability to optimize, but under some circumstances it is possible
-	for the compiler to undermine the memory model.  For more
-	information, see Documentation/explanation.txt (in particular,
-	the "THE PROGRAM ORDER RELATION: po AND po-loc" and "A WARNING"
-	sections).
-
-	Note that this limitation in turn limits LKMM's ability to
-	accurately model address, control, and data dependencies.
-	For example, if the compiler can deduce the value of some variable
-	carrying a dependency, then the compiler can break that dependency
-	by substituting a constant of that value.
-
-2.	Multiple access sizes for a single variable are not supported,
-	and neither are misaligned or partially overlapping accesses.
-
-3.	Exceptions and interrupts are not modeled.  In some cases,
-	this limitation can be overcome by modeling the interrupt or
-	exception with an additional process.
-
-4.	I/O such as MMIO or DMA is not supported.
-
-5.	Self-modifying code (such as that found in the kernel's
-	alternatives mechanism, function tracer, Berkeley Packet Filter
-	JIT compiler, and module loader) is not supported.
-
-6.	Complete modeling of all variants of atomic read-modify-write
-	operations, locking primitives, and RCU is not provided.
-	For example, call_rcu() and rcu_barrier() are not supported.
-	However, a substantial amount of support is provided for these
-	operations, as shown in the linux-kernel.def file.
-
-	a.	When rcu_assign_pointer() is passed NULL, the Linux
-		kernel provides no ordering, but LKMM models this
-		case as a store release.
-
-	b.	The "unless" RMW operations are not currently modeled:
-		atomic_long_add_unless(), atomic_inc_unless_negative(),
-		and atomic_dec_unless_positive().  These can be emulated
-		in litmus tests, for example, by using atomic_cmpxchg().
-
-		One exception of this limitation is atomic_add_unless(),
-		which is provided directly by herd7 (so no corresponding
-		definition in linux-kernel.def).  atomic_add_unless() is
-		modeled by herd7 therefore it can be used in litmus tests.
-
-	c.	The call_rcu() function is not modeled.  It can be
-		emulated in litmus tests by adding another process that
-		invokes synchronize_rcu() and the body of the callback
-		function, with (for example) a release-acquire from
-		the site of the emulated call_rcu() to the beginning
-		of the additional process.
-
-	d.	The rcu_barrier() function is not modeled.  It can be
-		emulated in litmus tests emulating call_rcu() via
-		(for example) a release-acquire from the end of each
-		additional call_rcu() process to the site of the
-		emulated rcu-barrier().
-
-	e.	Although sleepable RCU (SRCU) is now modeled, there
-		are some subtle differences between its semantics and
-		those in the Linux kernel.  For example, the kernel
-		might interpret the following sequence as two partially
-		overlapping SRCU read-side critical sections:
-
-			 1  r1 = srcu_read_lock(&my_srcu);
-			 2  do_something_1();
-			 3  r2 = srcu_read_lock(&my_srcu);
-			 4  do_something_2();
-			 5  srcu_read_unlock(&my_srcu, r1);
-			 6  do_something_3();
-			 7  srcu_read_unlock(&my_srcu, r2);
-
-		In contrast, LKMM will interpret this as a nested pair of
-		SRCU read-side critical sections, with the outer critical
-		section spanning lines 1-7 and the inner critical section
-		spanning lines 3-5.
-
-		This difference would be more of a concern had anyone
-		identified a reasonable use case for partially overlapping
-		SRCU read-side critical sections.  For more information,
-		please see: https://paulmck.livejournal.com/40593.html
-
-	f.	Reader-writer locking is not modeled.  It can be
-		emulated in litmus tests using atomic read-modify-write
-		operations.
-
-The "herd7" tool has some additional limitations of its own, apart from
-the memory model:
-
-1.	Non-trivial data structures such as arrays or structures are
-	not supported.	However, pointers are supported, allowing trivial
-	linked lists to be constructed.
-
-2.	Dynamic memory allocation is not supported, although this can
-	be worked around in some cases by supplying multiple statically
-	allocated variables.
-
-Some of these limitations may be overcome in the future, but others are
-more likely to be addressed by incorporating the Linux-kernel memory model
-into other tools.
-
-Finally, please note that LKMM is subject to change as hardware, use cases,
-and compilers evolve.
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH kcsan 5/9] tools/memory-model: Add a simple entry point document
  2020-08-31 18:20 [PATCH memory-model 0/9] LKMM updates for v5.10 Paul E. McKenney
                   ` (3 preceding siblings ...)
  2020-08-31 18:20 ` [PATCH kcsan 4/9] tools/memory-model: Improve litmus-test documentation paulmck
@ 2020-08-31 18:20 ` paulmck
  2020-08-31 18:20 ` [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed paulmck
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney, Dave Chinner

From: "Paul E. McKenney" <paulmck@kernel.org>

Current LKMM documentation assumes that the reader already understands
concurrency in the Linux kernel, which won't necessarily always be the
case.  This commit supplies a simple.txt file that provides a starting
point for someone who is new to concurrency in the Linux kernel.
That said, this file might also be useful as a reminder to experienced
developers of simpler approaches to dealing with concurrency.

Link: https://lwn.net/Articles/827180/
[ paulmck: Apply feedback from Joel Fernandes. ]
Co-developed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Co-developed-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/litmus-tests.txt |   8 +-
 tools/memory-model/Documentation/simple.txt       | 271 ++++++++++++++++++++++
 tools/memory-model/README                         |   5 +
 3 files changed, 282 insertions(+), 2 deletions(-)
 create mode 100644 tools/memory-model/Documentation/simple.txt

diff --git a/tools/memory-model/Documentation/litmus-tests.txt b/tools/memory-model/Documentation/litmus-tests.txt
index 289a38d..2f840dc 100644
--- a/tools/memory-model/Documentation/litmus-tests.txt
+++ b/tools/memory-model/Documentation/litmus-tests.txt
@@ -726,8 +726,12 @@ P0()'s line 10 initializes "x" to the value 1 then line 11 links to "x"
 from "y", replacing "z".
 
 P1()'s line 20 loads a pointer from "y", and line 21 dereferences that
-pointer.  The RCU read-side critical section spanning lines 19-22 is
-just for show in this example.
+pointer.  The RCU read-side critical section spanning lines 19-22 is just
+for show in this example.  Note that the address used for line 21's load
+depends on (in this case, "is exactly the same as") the value loaded by
+line 20.  This is an example of what is called an "address dependency".
+This particular address dependency extends from the load on line 20 to the
+load on line 21.  Address dependencies provide a weak form of ordering.
 
 Running this test results in the following:
 
diff --git a/tools/memory-model/Documentation/simple.txt b/tools/memory-model/Documentation/simple.txt
new file mode 100644
index 0000000..81e1a0e
--- /dev/null
+++ b/tools/memory-model/Documentation/simple.txt
@@ -0,0 +1,271 @@
+This document provides options for those wishing to keep their
+memory-ordering lives simple, as is necessary for those whose domain
+is complex.  After all, there are bugs other than memory-ordering bugs,
+and the time spent gaining memory-ordering knowledge is not available
+for gaining domain knowledge.  Furthermore, the Linux-kernel memory model
+(LKMM) is quite complex, with subtle differences in code often having
+dramatic effects on correctness.
+
+The options near the beginning of this list are quite simple.  The idea
+is not that kernel hackers don't already know about them, but rather
+that they might need the occasional reminder.
+
+Please note that this is a generic guide, and that specific subsystems
+will often have special requirements or idioms.  For example, developers
+of MMIO-based device drivers will often need to use mb(), rmb(), and
+wmb(), and therefore might find smp_mb(), smp_rmb(), and smp_wmb()
+to be more natural than smp_load_acquire() and smp_store_release().
+On the other hand, those coming in from other environments will likely
+be more familiar with these last two.
+
+
+Single-threaded code
+====================
+
+In single-threaded code, there is no reordering, at least assuming
+that your toolchain and hardware are working correctly.  In addition,
+it is generally a mistake to assume that your code will only run in a
+single-threaded context, as the kernel can enter the same code path on
+multiple CPUs at the same time.  One important exception is a function
+that makes no external data references.
+
+In the general case, you will need to take explicit steps to ensure that
+your code really is executed within a single thread that does not access
+shared variables.  A simple way to achieve this is to define a global lock
+that you acquire at the beginning of your code and release at the end,
+taking care to ensure that all references to your code's shared data are
+also carried out under that same lock.  Because only one thread can hold
+this lock at a given time, your code will be executed single-threaded.
+This approach is called "code locking".
+
+Code locking can severely limit both performance and scalability, so it
+should be used with caution, and only on code paths that execute rarely.
+After all, a huge amount of effort was required to remove the Linux
+kernel's old "Big Kernel Lock", so let's please be very careful about
+adding new "little kernel locks".
+
+One of the advantages of locking is that, in happy contrast with the
+year 1981, almost all kernel developers are very familiar with locking.
+The Linux kernel's lockdep (CONFIG_PROVE_LOCKING=y) is very helpful with
+the formerly feared deadlock scenarios.
+
+Please use the standard locking primitives provided by the kernel rather
+than rolling your own.  For one thing, the standard primitives interact
+properly with lockdep.  For another thing, these primitives have been
+tuned to deal better with high contention.  And for one final thing, it is
+surprisingly hard to correctly code production-quality lock acquisition
+and release functions.  After all, even simple non-production-quality
+locking functions must carefully prevent both the CPU and the compiler
+from moving code in either direction across the locking function.
+
+Despite the scalability limitations of single-threaded code, RCU
+takes this approach for much of its grace-period processing and also
+for early-boot operation.  The reason RCU is able to scale despite
+single-threaded grace-period processing is use of batching, where all
+updates that accumulated during one grace period are handled by the
+next one.  In other words, slowing down grace-period processing makes
+it more efficient.  Nor is RCU unique:  Similar batching optimizations
+are used in many I/O operations.
+
+
+Packaged code
+=============
+
+Even if performance and scalability concerns prevent your code from
+being completely single-threaded, it is often possible to use library
+functions that handle the concurrency nearly or entirely on their own.
+This approach delegates any LKMM worries to the library maintainer.
+
+In the kernel, what is the "library"?  Quite a bit.  It includes the
+contents of the lib/ directory and much of the include/linux/ directory,
+along with many other heavily used APIs.  Notable examples include
+the list macros (for example, include/linux/{,rcu}list.h), workqueues,
+smp_call_function(), and the various hash tables and search trees.
+
+
+Data locking
+============
+
+With code locking, we use single-threaded code execution to guarantee
+serialized access to the data that the code is accessing.  However,
+we can also achieve this by instead associating the lock with specific
+instances of the data structures.  This creates a "critical section"
+in the code execution that will execute as though it is single-threaded.
+By placing all the accesses and modifications to a shared data structure
+inside a critical section, we ensure that the execution context that
+holds the lock has exclusive access to the shared data.
+
+The poster boy for this approach is the hash table, where placing a lock
+in each hash bucket allows operations on different buckets to proceed
+concurrently.  This works because the buckets do not overlap with each
+other, so that an operation on one bucket does not interfere with any
+other bucket.
+
+As the number of buckets increases, data locking scales naturally.
+In particular, if the amount of data increases with the number of CPUs,
+increasing the number of buckets as the number of CPUs increase results
+in a naturally scalable data structure.
+
+
+Per-CPU processing
+==================
+
+Partitioning processing and data over CPUs allows each CPU to take
+a single-threaded approach while providing excellent performance and
+scalability.  Of course, there is no free lunch:  The dark side of this
+excellence is substantially increased memory footprint.
+
+In addition, it is sometimes necessary to update some global view
+of this processing and data, in which case something like locking
+must be used to protect this global view.  This is the approach taken
+by the percpu_counter infrastructure.  In many cases, there are already
+generic/library variants of commonly used per-CPU constructs available.
+Please use them rather than rolling your own.
+
+RCU uses DEFINE_PER_CPU*() declarations to create a number of per-CPU
+data sets.  For example, each CPU does private quiescent-state processing
+within its instance of the per-CPU rcu_data structure, and then uses data
+locking to report quiescent states up the grace-period combining tree.
+
+
+Packaged primitives: Sequence locking
+=====================================
+
+Lockless programming is considered by many to be more difficult than
+lock-based programming, but there are a few lockless design patterns that
+have been built out into an API.  One of these APIs is sequence locking.
+Although this API can be used in extremely complex ways, there are simple
+and effective ways of using it that avoid the need to pay attention to
+memory ordering.
+
+The basic keep-things-simple rule for sequence locking is "do not write
+in read-side code".  Yes, you can do writes from within sequence-locking
+readers, but it won't be so simple.  For example, such writes will be
+lockless and should be idempotent.
+
+For more sophisticated use cases, LKMM can guide you, including use
+cases involving combining sequence locking with other synchronization
+primitives.  (LKMM does not yet know about sequence locking, so it is
+currently necessary to open-code it in your litmus tests.)
+
+Additional information may be found in include/linux/seqlock.h.
+
+Packaged primitives: RCU
+========================
+
+Another lockless design pattern that has been baked into an API
+is RCU.  The Linux kernel makes sophisticated use of RCU, but the
+keep-things-simple rules for RCU are "do not write in read-side code",
+"do not update anything that is visible to and accessed by readers",
+and "protect updates with locking".
+
+These rules are illustrated by the functions foo_update_a() and
+foo_get_a() shown in Documentation/RCU/whatisRCU.rst.  Additional
+RCU usage patterns may be found in Documentation/RCU and in the
+source code.
+
+
+Packaged primitives: Atomic operations
+======================================
+
+Back in the day, the Linux kernel had three types of atomic operations:
+
+1.	Initialization and read-out, such as atomic_set() and atomic_read().
+
+2.	Operations that did not return a value and provided no ordering,
+	such as atomic_inc() and atomic_dec().
+
+3.	Operations that returned a value and provided full ordering, such as
+	atomic_add_return() and atomic_dec_and_test().  Note that some
+	value-returning operations provide full ordering only conditionally.
+	For example, cmpxchg() provides ordering only upon success.
+
+More recent kernels have operations that return a value but do not
+provide full ordering.  These are flagged with either a _relaxed()
+suffix (providing no ordering), or an _acquire() or _release() suffix
+(providing limited ordering).
+
+Additional information may be found in these files:
+
+Documentation/atomic_t.txt
+Documentation/atomic_bitops.txt
+Documentation/core-api/atomic_ops.rst
+Documentation/core-api/refcount-vs-atomic.rst
+
+Reading code using these primitives is often also quite helpful.
+
+
+Lockless, fully ordered
+=======================
+
+When using locking, there often comes a time when it is necessary
+to access some variable or another without holding the data lock
+that serializes access to that variable.
+
+If you want to keep things simple, use the initialization and read-out
+operations from the previous section only when there are no racing
+accesses.  Otherwise, use only fully ordered operations when accessing
+or modifying the variable.  This approach guarantees that code prior
+to a given access to that variable will be seen by all CPUs as having
+happened before any code following any later access to that same variable.
+
+Please note that per-CPU functions are not atomic operations and
+hence they do not provide any ordering guarantees at all.
+
+If the lockless accesses are frequently executed reads that are used
+only for heuristics, or if they are frequently executed writes that
+are used only for statistics, please see the next section.
+
+
+Lockless statistics and heuristics
+==================================
+
+Unordered primitives such as atomic_read(), atomic_set(), READ_ONCE(), and
+WRITE_ONCE() can safely be used in some cases.  These primitives provide
+no ordering, but they do prevent the compiler from carrying out a number
+of destructive optimizations (for which please see the next section).
+One example use for these primitives is statistics, such as per-CPU
+counters exemplified by the rt_cache_stat structure's routing-cache
+statistics counters.  Another example use case is heuristics, such as
+the jiffies_till_first_fqs and jiffies_till_next_fqs kernel parameters
+controlling how often RCU scans for idle CPUs.
+
+But be careful.  "Unordered" really does mean "unordered".  It is all
+too easy to assume ordering, and this assumption must be avoided when
+using these primitives.
+
+
+Don't let the compiler trip you up
+==================================
+
+It can be quite tempting to use plain C-language accesses for lockless
+loads from and stores to shared variables.  Although this is both
+possible and quite common in the Linux kernel, it does require a
+surprising amount of analysis, care, and knowledge about the compiler.
+Yes, some decades ago it was not unfair to consider a C compiler to be
+an assembler with added syntax and better portability, but the advent of
+sophisticated optimizing compilers means that those days are long gone.
+Today's optimizing compilers can profoundly rewrite your code during the
+translation process, and have long been ready, willing, and able to do so.
+
+Therefore, if you really need to use C-language assignments instead of
+READ_ONCE(), WRITE_ONCE(), and so on, you will need to have a very good
+understanding of both the C standard and your compiler.  Here are some
+introductory references and some tooling to start you on this noble quest:
+
+Who's afraid of a big bad optimizing compiler?
+	https://lwn.net/Articles/793253/
+Calibrating your fear of big bad optimizing compilers
+	https://lwn.net/Articles/799218/
+Concurrency bugs should fear the big bad data-race detector (part 1)
+	https://lwn.net/Articles/816850/
+Concurrency bugs should fear the big bad data-race detector (part 2)
+	https://lwn.net/Articles/816854/
+
+
+More complex use cases
+======================
+
+If the alternatives above do not do what you need, please look at the
+recipes-pairs.txt file to peel off the next layer of the memory-ordering
+onion.
diff --git a/tools/memory-model/README b/tools/memory-model/README
index d2e03c4..c8144d4 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -177,6 +177,11 @@ Documentation/recipes.txt
 Documentation/references.txt
 	Provides background reading.
 
+Documentation/simple.txt
+	Starting point for someone new to Linux-kernel concurrency.
+	And also for those needing a reminder of the simpler approaches
+	to concurrency!
+
 linux-kernel.bell
 	Categorizes the relevant instructions, including memory
 	references, memory barriers, atomic read-modify-write operations,
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-08-31 18:20 [PATCH memory-model 0/9] LKMM updates for v5.10 Paul E. McKenney
                   ` (4 preceding siblings ...)
  2020-08-31 18:20 ` [PATCH kcsan 5/9] tools/memory-model: Add a simple entry point document paulmck
@ 2020-08-31 18:20 ` paulmck
  2020-09-02  3:54   ` Boqun Feng
  2020-08-31 18:20 ` [PATCH kcsan 7/9] tools/memory-model: Move Documentation description to Documentation/README paulmck
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit adds a key entry enumerating the various types of relaxed
operations.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/cheatsheet.txt | 27 ++++++++++++++-----------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
index 33ba98d..31b814d 100644
--- a/tools/memory-model/Documentation/cheatsheet.txt
+++ b/tools/memory-model/Documentation/cheatsheet.txt
@@ -5,7 +5,7 @@
 
 Store, e.g., WRITE_ONCE()            Y                                       Y
 Load, e.g., READ_ONCE()              Y                          Y   Y        Y
-Unsuccessful RMW operation           Y                          Y   Y        Y
+Relaxed operation                    Y                          Y   Y        Y
 rcu_dereference()                    Y                          Y   Y        Y
 Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
 Successful *_release()         C        Y  Y    Y     W                      Y
@@ -17,14 +17,17 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
 smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
 
 
-Key:	C:	Ordering is cumulative
-	P:	Ordering propagates
-	R:	Read, for example, READ_ONCE(), or read portion of RMW
-	W:	Write, for example, WRITE_ONCE(), or write portion of RMW
-	Y:	Provides ordering
-	a:	Provides ordering given intervening RMW atomic operation
-	DR:	Dependent read (address dependency)
-	DW:	Dependent write (address, data, or control dependency)
-	RMW:	Atomic read-modify-write operation
-	SELF:	Orders self, as opposed to accesses before and/or after
-	SV:	Orders later accesses to the same variable
+Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
+		  operation, an unsuccessful RMW operation, or one of
+		  the atomic_read() and atomic_set() family of operations.
+	C:	  Ordering is cumulative
+	P:	  Ordering propagates
+	R:	  Read, for example, READ_ONCE(), or read portion of RMW
+	W:	  Write, for example, WRITE_ONCE(), or write portion of RMW
+	Y:	  Provides ordering
+	a:	  Provides ordering given intervening RMW atomic operation
+	DR:	  Dependent read (address dependency)
+	DW:	  Dependent write (address, data, or control dependency)
+	RMW:	  Atomic read-modify-write operation
+	SELF:	  Orders self, as opposed to accesses before and/or after
+	SV:	  Orders later accesses to the same variable
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH kcsan 7/9] tools/memory-model: Move Documentation description to Documentation/README
  2020-08-31 18:20 [PATCH memory-model 0/9] LKMM updates for v5.10 Paul E. McKenney
                   ` (5 preceding siblings ...)
  2020-08-31 18:20 ` [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed paulmck
@ 2020-08-31 18:20 ` paulmck
  2020-08-31 18:20 ` [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives paulmck
  2020-08-31 18:20 ` [PATCH kcsan 9/9] tools/memory-model: Document locking corner cases paulmck
  8 siblings, 0 replies; 30+ messages in thread
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit moves the descriptions of the files residing in
tools/memory-model/Documentation to a README file in that directory,
leaving behind the description of tools/memory-model/Documentation/README
itself.  After this change, tools/memory-model/Documentation/README
provides a guide to the files in the tools/memory-model/Documentation
directory, guiding people with different skills and needs to the most
appropriate starting point.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/README | 62 +++++++++++++++++++++++++++++++++
 tools/memory-model/README               | 22 ++----------
 2 files changed, 64 insertions(+), 20 deletions(-)
 create mode 100644 tools/memory-model/Documentation/README

diff --git a/tools/memory-model/Documentation/README b/tools/memory-model/Documentation/README
new file mode 100644
index 0000000..4326603
--- /dev/null
+++ b/tools/memory-model/Documentation/README
@@ -0,0 +1,62 @@
+This file serves as the guide for the other files residing in the
+tools/memory-model/Documentation directory.  It has been said that at
+its best, communication involves identifying where the target audience
+is and then building a bridge from where they are to where they need
+to go.  Unfortunately, this time-honored approach falls short in this
+case because readers of the documents in this directory might be in any
+number of places.
+
+This document therefore describes a number of places to start reading
+the documentation in this directory, depending on what you know and what
+you would like to learn:
+
+o	You are new to Linux-kernel concurrency: simple.txt
+
+o	You are familiar with the concurrency facilities that you
+	need, and just want to get started with LKMM litmus tests:
+	litmus-tests.txt
+
+o	You are familiar with Linux-kernel concurrency, and would
+	like a detailed intuitive understanding of LKMM, including
+	situations involving more than two threads: recipes.txt
+
+o	You are familiar with Linux-kernel concurrency and the
+	use of LKMM, and would like a cheat sheet to remind you
+	of LKMM's guarantees: cheatsheet.txt
+
+o	You are familiar with Linux-kernel concurrency and the
+	use of LKMM, and would like to learn about LKMM's requirements,
+	rationale, and implementation: explanation.txt
+
+o	You are interested in the publications related to LKMM, including
+	hardware manuals, academic literature, standards-committee working
+	papers, and LWN articles: references.txt
+
+
+====================
+DESCRIPTION OF FILES
+====================
+
+Documentation/README
+	This file.
+
+Documentation/cheatsheet.txt
+	Quick-reference guide to the Linux-kernel memory model.
+
+Documentation/explanation.txt
+	Describes the memory model in detail.
+
+Documentation/litmus-tests.txt
+	Describes the format, features, capabilities, and limitations
+	of the litmus tests that LKMM can evaluate.
+
+Documentation/recipes.txt
+	Lists common memory-ordering patterns.
+
+Documentation/references.txt
+	Provides background reading.
+
+Documentation/simple.txt
+	Starting point for someone new to Linux-kernel concurrency.
+	And also for those needing a reminder of the simpler approaches
+	to concurrency!
diff --git a/tools/memory-model/README b/tools/memory-model/README
index c8144d4..39d08d1 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -161,26 +161,8 @@ running LKMM litmus tests.
 DESCRIPTION OF FILES
 ====================
 
-Documentation/cheatsheet.txt
-	Quick-reference guide to the Linux-kernel memory model.
-
-Documentation/explanation.txt
-	Describes the memory model in detail.
-
-Documentation/litmus-tests.txt
-	Describes the format, features, capabilities, and limitations
-	of the litmus tests that LKMM can evaluate.
-
-Documentation/recipes.txt
-	Lists common memory-ordering patterns.
-
-Documentation/references.txt
-	Provides background reading.
-
-Documentation/simple.txt
-	Starting point for someone new to Linux-kernel concurrency.
-	And also for those needing a reminder of the simpler approaches
-	to concurrency!
+Documentation/README
+	Guide to the other documents in the Documentation/ directory.
 
 linux-kernel.bell
 	Categorizes the relevant instructions, including memory
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives
  2020-08-31 18:20 [PATCH memory-model 0/9] LKMM updates for v5.10 Paul E. McKenney
                   ` (6 preceding siblings ...)
  2020-08-31 18:20 ` [PATCH kcsan 7/9] tools/memory-model: Move Documentation description to Documentation/README paulmck
@ 2020-08-31 18:20 ` paulmck
  2020-08-31 22:34   ` Akira Yokosawa
  2020-09-01  1:23   ` Alan Stern
  2020-08-31 18:20 ` [PATCH kcsan 9/9] tools/memory-model: Document locking corner cases paulmck
  8 siblings, 2 replies; 30+ messages in thread
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

The Linux kernel has a number of categories of ordering primitives, which
are recorded in the LKMM implementation and hinted at by cheatsheet.txt.
But there is no overview of these categories, and such an overview
is needed in order to understand multithreaded LKMM litmus tests.
This commit therefore adds an ordering.txt as well as extracting a
control-dependencies.txt from memory-barriers.txt.  It also updates the
README file.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 tools/memory-model/Documentation/README       |  24 +-
 tools/memory-model/Documentation/ordering.txt | 462 ++++++++++++++++++++++++++
 tools/memory-model/control-dependencies.txt   | 256 ++++++++++++++
 3 files changed, 740 insertions(+), 2 deletions(-)
 create mode 100644 tools/memory-model/Documentation/ordering.txt
 create mode 100644 tools/memory-model/control-dependencies.txt

diff --git a/tools/memory-model/Documentation/README b/tools/memory-model/Documentation/README
index 4326603..16177aa 100644
--- a/tools/memory-model/Documentation/README
+++ b/tools/memory-model/Documentation/README
@@ -8,10 +8,19 @@ number of places.
 
 This document therefore describes a number of places to start reading
 the documentation in this directory, depending on what you know and what
-you would like to learn:
+you would like to learn.  These are cumulative, that is, understanding
+of the documents earlier in this list is required by the documents later
+in this list.
 
 o	You are new to Linux-kernel concurrency: simple.txt
 
+o	You have some background in Linux-kernel concurrency, and would
+	like an overview of the types of low-level concurrency primitives
+	that are provided:  ordering.txt
+
+	Here, "low level" means atomic operations to single locations in
+	memory.
+
 o	You are familiar with the concurrency facilities that you
 	need, and just want to get started with LKMM litmus tests:
 	litmus-tests.txt
@@ -20,6 +29,9 @@ o	You are familiar with Linux-kernel concurrency, and would
 	like a detailed intuitive understanding of LKMM, including
 	situations involving more than two threads: recipes.txt
 
+o	You would like a detailed understanding of what your compiler can
+	and cannot do to control dependencies: control-dependencies.txt
+
 o	You are familiar with Linux-kernel concurrency and the
 	use of LKMM, and would like a cheat sheet to remind you
 	of LKMM's guarantees: cheatsheet.txt
@@ -37,12 +49,16 @@ o	You are interested in the publications related to LKMM, including
 DESCRIPTION OF FILES
 ====================
 
-Documentation/README
+README
 	This file.
 
 Documentation/cheatsheet.txt
 	Quick-reference guide to the Linux-kernel memory model.
 
+Documentation/control-dependencies.txt
+	A guide to preventing compiler optimizations from destroying
+	your control dependencies.
+
 Documentation/explanation.txt
 	Describes the memory model in detail.
 
@@ -50,6 +66,10 @@ Documentation/litmus-tests.txt
 	Describes the format, features, capabilities, and limitations
 	of the litmus tests that LKMM can evaluate.
 
+Documentation/ordering.txt
+	Describes the Linux kernel's low-level memory-ordering primitives
+	by category.
+
 Documentation/recipes.txt
 	Lists common memory-ordering patterns.
 
diff --git a/tools/memory-model/Documentation/ordering.txt b/tools/memory-model/Documentation/ordering.txt
new file mode 100644
index 0000000..4b2cc55
--- /dev/null
+++ b/tools/memory-model/Documentation/ordering.txt
@@ -0,0 +1,462 @@
+This document expands on the types of ordering that are summarized in
+cheatsheet.txt and used in various other files.
+
+
+Types of Ordering
+=================
+
+This section describes the types of ordering in roughly decreasing order
+of strength on the theory that stronger ordering is more heavily used
+and easier to understand.  Each of the following types of ordering has
+its own subsection below:
+
+1.	Barriers (also known as "fences").  A barrier orders some or all
+	of the CPU's prior operations against some or all of its subsequent
+	operations.
+
+	a.	Full memory barriers:  More famously, smp_mb(), but this
+		category also includes those non-void (value returning)
+		read-modify-write (RMW) atomic operations whose
+		names do not end in _acquire, _release, or _relaxed.
+		It also includes RCU grace-period operations such as
+		synchronize_rcu(), but at a very high cost, especially
+		in terms of latency.  These operations order all prior
+		memory accesses against all subsequent memory accesses.
+
+	b.	RMW ordering augmentation.  The smp_mb__before_atomic()
+		and smp_mb__after_atomic() primitives are the most heavily
+		used of these.  They provide smp_mb()-style full ordering
+		to a later (or earlier, respectively) non-value-returning
+		RMW atomic operation such as atomic_inc().
+
+	c.	Write memory barrier.  This is smp_wmb(), which orders
+		prior marked stores against later marked stores.
+
+	d.	Read memory barrier.  This is smp_rmb(), which orders
+		prior loads against later loads.
+
+2.	Ordered memory accesses.  These operations order themselves
+	against some or all of the CPU's prior or subsequent accesses,
+	depending on the category of operation.
+
+	a.	Release operations.  This category includes
+		smp_store_release(), atomic_set_release(),
+		rcu_assign_pointer(), and value-returning RMW operations
+		whose names end in _release.  These operations order
+		their own store against all of the CPU's subsequent
+		memory accesses.
+
+	b.	Acquire operations.  This category includes
+		smp_load_acquire(), atomic_read_acquire(), and
+		value-returning RMW operations whose names end in
+		_acquire.  These operations order their own load against
+		all of the CPU's prior memory accesses.
+
+	c.	RCU read-side ordering.  This category includes
+		rcu_dereference() and srcu_dereference().  These
+		operations order their load (which must be a pointer)
+		against any of the CPU's subsequent memory accesses
+		whose address has been calculated from the value loaded,
+		that is, against any subsequent memory access having
+		an *address dependency* on the value returned by the
+		rcu_dereference() or srcu_dereference().
+
+	d.	Control dependencies.  A control dependency extends
+		from a marked load (READ_ONCE() or stronger) through
+		an "if" condition to a marked store (WRITE_ONCE() or
+		stronger) that is executed in only one of the legs of that
+		"if" statement.  Control dependencies are fragile and
+		easily destroyed by compiler optimizers.
+
+		Control dependencies are so named because they are
+		mediated by control-flow instructions such as comparisons
+		and conditional branches.
+
+3.	Unordered accesses, as the name indicates, have no ordering
+	properties except to the extent that they interact with one of
+	the ordering mechanisms called out above.
+
+	a.	Unordered marked operations.  This category includes
+		READ_ONCE(), WRITE_ONCE(), atomic_read(), atomic_set(),
+		volatile variables (such as the "jiffies" counter),
+		value-returning RMW operations whose names end in
+		_relaxed, and non-value-returning RMW operations
+		whose names do not end in either _acquire or _release.
+		These operations provide no ordering guarantees.
+
+	b.	Unmarked C-language accesses.  This category includes
+		accesses to normal variables, that is, variables that are
+		not marked "volatile" and are not C11 atomic variables.
+		These operations provide no ordering guarantees, and
+		further do not guarantee "atomic" access.  For example,
+		the compiler might (and sometimes does) split a plain
+		C-language store into multiple smaller stores.	A load
+		from that same variable running on some other CPU while
+		such a store is executing might see a value that is a
+		mashup of the old value and the new value.
+
+Each of the above categories is covered in more detail by one of the
+following sections.
+
+Note well that the smp_*() barriers emit no memory-barrier instructions
+in kernels built with CONFIG_SMP=n, although they still restrict the
+compiler.  Therefore, if you are attempting to order accesses to a
+physical device within a device driver, please use the ordering
+primitives provided for that purpose, for example, mb() instead of
+smp_mb().  See "Linux Device Drivers" for more information.
+
+
+Full Memory Barriers
+--------------------
+
+A number of Linux-kernel primitives provide full-memory-barrier semantics.
+Suppose that a given CPU invokes such a primitive.  Then all CPUs will
+agree that any earlier action taken by that CPU happened before any
+later action taken by that same CPU.  For example, consider the following:
+
+	WRITE_ONCE(x, 1);
+	smp_mb(); // Order store to x before load from y.
+	r1 = READ_ONCE(y);
+
+All CPUs will agree that the store to "x" happened before the load from "y",
+as indicated by the comment.  And yes, please comment your memory-ordering
+primitives.  It is surprisingly hard to remember what they were for even
+a few months after the fact.
+
+Linux-kernel primitives providing full ordering include the following:
+
+o	The smp_mb() full memory barrier, as shown above.
+
+o	Value-returning read-modify-write (RMW) atomic operations
+	whose names do not end in _acquire, _release, or _relaxed.
+	Value-returning operations can be recognized by their
+	non-void return types.	Examples include atomic_add_return(),
+	atomic_dec_and_test(), cmpxchg(), and xchg().  Note that
+	conditional operations such as cmpxchg() are only guaranteed
+	to provide ordering when they succeed.
+
+	In contrast, non-value-returning RMW atomic operations, that is,
+	those with void return types, do not guarantee any ordering
+	whatsoever.  Nor do value-returning RMW atomic operations
+	whose names end in _relaxed.  Examples of the former include
+	atomic_inc() and atomic_dec(), while examples of the latter
+	include atomic_cmpxchg_relaxed() and atomic_xchg_relaxed().
+
+	Value-returning RMW atomic operations whose names end in _acquire
+	or _release provide limited ordering, and will be described
+	later in this document.
+
+o	RCU's grace-period primitives, including synchronize_rcu(),
+	synchronize_rcu_expedited(), synchronize_srcu() and so on.
+	However, these primitives have orders of magnitude greater
+	overhead than smp_mb(), atomic_xchg(), and so on.  Therefore,
+	RCU's grace-period primitives are typically instead used to
+	provide ordering against RCU read-side critical sections, as
+	documented in their comment headers.  But of course if you need a
+	synchronize_rcu() to interact with readers, it costs you nothing
+	to also rely on its additional semantics as a full memory barrier.
+	Just please carefully comment this, otherwise your future self
+	will hate you.
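C11 exposes the success-only ordering caveat for conditional RMW operations directly: atomic_compare_exchange_strong_explicit() takes separate memory orders for the success and failure cases.  The sketch below is a userspace analogy rather than kernel code, with illustrative names:

```c
#include <stdatomic.h>

static atomic_int cas_v;

/* Attempt to change cas_v from old to newval; full (seq_cst)
 * ordering applies only when the exchange succeeds, and only
 * relaxed ordering when it fails -- mirroring cmpxchg(). */
int cas_full_on_success(int old, int newval)
{
	int expected = old;

	return atomic_compare_exchange_strong_explicit(&cas_v, &expected,
			newval,
			memory_order_seq_cst,	/* ordering on success */
			memory_order_relaxed);	/* no ordering on failure */
}
```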
+
+
+RMW Ordering Augmentation
+-------------------------
+
+As noted in the previous section, non-value-returning RMW operations
+such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever.
+Nevertheless, a number of popular CPU families, including x86, provide
+full ordering for these primitives.  One way to obtain full ordering on
+the remaining, more weakly ordered CPU families is to use smp_mb(),
+like this:
+
+	WRITE_ONCE(x, 1);
+	atomic_inc(&my_counter);
+	smp_mb(); // Inefficient on x86!!!
+	r1 = READ_ONCE(y);
+
+Except that this is inefficient on x86, on which atomic_inc() provides
+full ordering all by itself.  The smp_mb__after_atomic() primitive
+can be used instead:
+
+	WRITE_ONCE(x, 1);
+	atomic_inc(&my_counter);
+	smp_mb__after_atomic(); // Order store to x before load from y.
+	r1 = READ_ONCE(y);
+
+The smp_mb__after_atomic() primitive emits code only on CPUs whose
+atomic_inc() implementations do not guarantee full ordering.  There
+are a number of variations on the smp_mb__*() theme:
+
+o	smp_mb__before_atomic(), which provides full ordering prior
+	to an unordered RMW atomic operation.
+
+o	smp_mb__after_atomic(), which, as shown above, provides full
+	ordering subsequent to an unordered RMW atomic operation.
+
+o	smp_mb__after_spinlock(), which provides full ordering subsequent
+	to a successful spinlock acquisition.  Note that spin_lock() is
+	always successful but spin_trylock() might not be.
+
+o	smp_mb__after_srcu_read_unlock(), which provides full ordering
+	subsequent to an srcu_read_unlock().
+
+Placing code between the smp_mb__*() primitive and the operation whose
+ordering it is augmenting is generally bad practice because the ordering of
+the intervening code will differ from one CPU architecture to another.
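A userspace analogy of the smp_mb__after_atomic() pattern pairs a relaxed fetch-add with an explicit full fence.  Note one difference from the kernel primitive: the C11 fence below is unconditional, whereas smp_mb__after_atomic() emits no code on CPUs such as x86 whose RMW atomics are already fully ordered.  All names are illustrative:

```c
#include <stdatomic.h>

static atomic_int aug_counter;
static atomic_int aug_x, aug_y;
static int aug_r1;

/* Models:  WRITE_ONCE(x, 1); atomic_inc(&my_counter);
 * smp_mb__after_atomic(); r1 = READ_ONCE(y);  */
void augmented_inc(void)
{
	atomic_store_explicit(&aug_x, 1, memory_order_relaxed);
	atomic_fetch_add_explicit(&aug_counter, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb__after_atomic() */
	aug_r1 = atomic_load_explicit(&aug_y, memory_order_relaxed);
}
```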
+
+
+Write Memory Barrier
+--------------------
+
+The Linux kernel's write memory barrier is smp_wmb().  If a CPU executes
+the following code:
+
+	WRITE_ONCE(x, 1);
+	smp_wmb();
+	WRITE_ONCE(y, 1);
+
+Then any given CPU will see the write to "x" as having preceded the write
+to "y".  However, you are usually better off using a release store, as
+described in the "Release Operations" section below.
+
+Note that smp_wmb() might fail to provide ordering for unmarked C-language
+stores because profile-driven optimization could determine that the value
+being overwritten is almost always the value being written.  Such a compiler
+might then reasonably decide to transform "x = 1" and "y = 1" as follows:
+
+	if (x != 1)
+		x = 1;
+	smp_wmb(); // BUG: does not order the stores!!!
+	if (y != 1)
+		y = 1;
+
+Therefore, if you need to use smp_wmb() with unmarked C-language
+writes, please make sure that your compiler will not make this sort
+of transformation.
+
+
+Read Memory Barrier
+-------------------
+
+The Linux kernel's read memory barrier is smp_rmb().  If a CPU executes
+the following code:
+
+	r0 = READ_ONCE(y);
+	smp_rmb();
+	r1 = READ_ONCE(x);
+
+Then any given CPU will see the read from "y" as having preceded the read from
+"x".  However, you are usually better off using an acquire load, as described
+in the "Acquire Operations" section below.
+
+
+Release Operations
+------------------
+
+The smp_wmb() example shown above is usually improved by instead using
+a release store:
+
+	WRITE_ONCE(x, 1);
+	smp_store_release(&y, 1);
+
+This saves a line of code and, more importantly, makes it easier to connect
+up the different pieces of the concurrent algorithm.  The variable stored
+to by the smp_store_release(), in this case "y", will normally be used
+in an acquire operation in the other piece of the concurrent algorithm.
+
+There is a wide variety of release operations:
+
+o	Store operations, including smp_store_release(),
+	atomic_set_release(), and atomic_long_set_release().
+
+o	RCU's rcu_assign_pointer() operation.  This is the same as
+	smp_store_release() except that: (1) It takes the pointer
+	to be assigned to instead of a pointer to that pointer,
+	as smp_store_release() would, (2) It is intended to be used
+	in conjunction with rcu_dereference() and similar, and
+	(3) It checks for an RCU-protected pointer.
+
+o	Value-returning RMW operations whose names end in _release,
+	such as atomic_fetch_add_release() and cmpxchg_release().
+	Note that release ordering is provided only against the
+	memory-store portion of the RMW operation.  Note also that
+	conditional operations such as cmpxchg_release() are
+	only guaranteed to provide ordering when they succeed.
+
+As mentioned earlier, release operations are often paired with
+acquire operations, which are the subject of the next section.
+
+
+Acquire Operations
+------------------
+
+The smp_rmb() example shown above is usually improved by instead using
+an acquire load:
+
+	r0 = smp_load_acquire(&y);
+	r1 = READ_ONCE(x);
+
+As with smp_store_release(), this saves a line of code and makes it easier
+to connect the different pieces of the concurrent algorithm by looking for
+the smp_store_release() that stores to "y".
+
+There are a couple of categories of acquire operations:
+
+o	Load operations, including smp_load_acquire(),
+	atomic_read_acquire(), and atomic64_read_acquire().
+
+o	Value-returning RMW operations whose names end in _acquire, such
+	as atomic_xchg_acquire() and atomic_cmpxchg_acquire().	Note that
+	acquire ordering is provided only against the memory-load portion
+	of the RMW operation.  Note also that conditional operations
+	such as atomic_cmpxchg_acquire() are only guaranteed to provide
+	ordering when they succeed.
+
+Symmetry being what it is, acquire operations are often paired with
+release operations.
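The release/acquire pairing above can be sketched in userspace C11, with a release store standing in for smp_store_release() and an acquire load for smp_load_acquire().  Once the reader observes the flag as 1, it is guaranteed to see the earlier store.  Names are illustrative only:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

static atomic_int mp_x, mp_y;

static void *mp_writer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&mp_x, 1, memory_order_relaxed); /* WRITE_ONCE(x, 1) */
	atomic_store_explicit(&mp_y, 1, memory_order_release); /* smp_store_release(&y, 1) */
	return NULL;
}

/* Spin until the acquire load of mp_y sees 1, then return mp_x,
 * which the release/acquire pairing guarantees to be 1. */
int mp_reader_result(void)
{
	pthread_t t;
	int r;

	atomic_store(&mp_x, 0);
	atomic_store(&mp_y, 0);
	pthread_create(&t, NULL, mp_writer, NULL);
	while (atomic_load_explicit(&mp_y, memory_order_acquire) == 0)
		continue;			/* smp_load_acquire(&y) */
	r = atomic_load_explicit(&mp_x, memory_order_relaxed);
	pthread_join(t, NULL);
	return r;
}
```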
+
+
+RCU Read-Side Ordering
+----------------------
+
+There are two major types of RCU read-side ordering:
+
+o	Marking of RCU read-side critical sections, for example,
+	via rcu_read_lock() and rcu_read_unlock().  These operations
+	incur very low overhead because they interact only with
+	the corresponding grace-period primitives, in this case,
+	synchronize_rcu() and friends.	The way this works is that
+	if a given call to synchronize_rcu() cannot prove that it
+	started before a given call to rcu_read_lock(), then that
+	synchronize_rcu() is not permitted to return until the matching
+	rcu_read_unlock() is reached.
+
+	For more information, please see the synchronize_rcu() docbook
+	header comment and the material in Documentation/RCU.
+
+o	Accessing RCU-protected pointers via rcu_dereference()
+	and friends.  A call to rcu_dereference() is usually paired
+	with a call to rcu_assign_pointer() in much the same way
+	that a call to smp_load_acquire() could be paired with a
+	call to smp_store_release().  Calls to rcu_dereference() and
+	rcu_assign_pointer() are often buried in other APIs, for example,
+	the RCU list API members defined in include/linux/rculist.h.
+	For more information, please see the docbook headers in that
+	file and again the material in Documentation/RCU.
+
+	If there is any significant processing of the pointer value
+	between the rcu_dereference() that returned it and a later
+	dereference of that pointer, please read
+	Documentation/RCU/rcu_dereference.txt.
+
+It can also be quite helpful to review uses in the Linux kernel.
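The rcu_assign_pointer()/rcu_dereference() pairing can be modeled in userspace as a release store of a pointer paired with an acquire load (C11 lacks a faithful consume ordering, so acquire is used here, which is stronger than what rcu_dereference() needs).  The structure and all names below are illustrative:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

struct mydata {
	int a;
	int b;
};

static struct mydata rcu_item;
static _Atomic(struct mydata *) rcu_gp;

/* Models: initialize the structure, then rcu_assign_pointer(gp, p). */
static void *rcu_publisher(void *arg)
{
	(void)arg;
	rcu_item.a = 1;
	rcu_item.b = 2;
	atomic_store_explicit(&rcu_gp, &rcu_item, memory_order_release);
	return NULL;
}

/* Spin until the pointer is published, then return a + b; the
 * acquire load guarantees the initialized values are visible. */
int rcu_reader_sum(void)
{
	pthread_t t;
	struct mydata *p;
	int sum;

	atomic_store(&rcu_gp, NULL);
	pthread_create(&t, NULL, rcu_publisher, NULL);
	while (!(p = atomic_load_explicit(&rcu_gp, memory_order_acquire)))
		continue;		/* models rcu_dereference(gp) */
	sum = p->a + p->b;
	pthread_join(t, NULL);
	return sum;
}
```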
+
+
+Control Dependencies
+--------------------
+
+A control dependency can enforce ordering between a READ_ONCE() and
+a WRITE_ONCE() when there is an "if" condition between them.  The
+classic example is as follows:
+
+	q = READ_ONCE(a);
+	if (q) {
+		WRITE_ONCE(b, 1);
+	}
+
+In this case, all CPUs would see the read from "a" as happening before
+the write to "b".
+
+However, control dependencies are easily destroyed by compiler
+optimizations.  Please see the "control-dependencies.txt" file for
+more information.
+
+
+Unordered Marked Operations
+---------------------------
+
+Unordered operations to different variables are just that, unordered.
+However, if a group of CPUs apply these operations to a single variable,
+all the CPUs will agree on the operation order.  Of course, it is also
+possible to constrain reordering of unordered operations to different
+variables using the various mechanisms described earlier in this document.
+
+These operations come in three categories:
+
+o	Marked writes, such as WRITE_ONCE() and atomic_set().  These
+	primitives prevent the compiler from a number of destructive
+	optimizations such as omitting an early write to a variable
+	in favor of a later write to that same variable.  They provide
+	no ordering guarantees, and in fact many CPUs will happily
+	reorder marked writes with each other or with other unordered
+	operations, unless these operations are on the same variable.
+
+o	Marked reads, such as READ_ONCE() and atomic_read().  These
+	primitives prevent the compiler from a number of destructive
+	optimizations such as fusing a pair of successive reads from
+	the same variable into a single read.  They provide no ordering
+	guarantees, and in fact many CPUs will happily reorder marked
+	reads with each other or with other unordered operations, unless
+	these operations are on the same variable.
+
+o	Unordered RMW atomic operations.  These are non-value-returning
+	RMW atomic operations whose names do not end in _acquire or
+	_release, and also value-returning RMW operations whose names
+	end in _relaxed.  Examples include atomic_add(), atomic_or(),
+	and atomic64_fetch_xor_relaxed().  These operations do carry
+	out the specified RMW operation atomically, for example, five
+	concurrent atomic_add() operations applied to a given variable
+	will reliably increase the value of that variable by five.
+	However, many CPUs will happily reorder these operations with
+	each other or with other unordered operations.
+
+	This category of operations can be efficiently ordered using
+	smp_mb__before_atomic() and smp_mb__after_atomic(), as was
+	discussed in the "RMW Ordering Augmentation" section above.
+
+In short, these operations can be freely reordered unless they are all
+operating on a single variable or unless they are constrained by one of
+the operations called out earlier in this document.
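The single-variable guarantee for unordered RMW operations can be checked in userspace: relaxed fetch-adds from several threads provide no ordering against other variables, yet the final counter value is still exact.  Here memory_order_relaxed plays the role of the kernel's atomic_add(); names and counts are illustrative:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

#define RLX_THREADS 4
#define RLX_ADDS 10000

static atomic_int rlx_ctr;

static void *rlx_adder(void *arg)
{
	(void)arg;
	for (int i = 0; i < RLX_ADDS; i++)
		atomic_fetch_add_explicit(&rlx_ctr, 1, memory_order_relaxed);
	return NULL;
}

/* Run RLX_THREADS concurrent adders; the relaxed RMWs provide no
 * ordering, but each is still atomic, so the total is exact. */
int rlx_counter_total(void)
{
	pthread_t t[RLX_THREADS];

	atomic_store(&rlx_ctr, 0);
	for (int i = 0; i < RLX_THREADS; i++)
		pthread_create(&t[i], NULL, rlx_adder, NULL);
	for (int i = 0; i < RLX_THREADS; i++)
		pthread_join(t[i], NULL);
	return atomic_load(&rlx_ctr);
}
```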
+
+
+Unmarked C-Language Accesses
+----------------------------
+
+Unmarked C-language accesses are unordered, and are also subject to
+any number of compiler optimizations, many of which can break your
+concurrent code.  It is possible to use unmarked C-language accesses for
+shared variables that are subject to concurrent access, but great care
+is required on an ongoing basis.  The compiler-constraining barrier()
+primitive can be helpful, as can the various ordering primitives discussed
+in this document.  It nevertheless bears repeating that use of unmarked
+C-language accesses requires careful attention to not just your code,
+but to all the compilers that might be used to build it.
+
+Here are some ways of using unmarked C-language accesses for shared
+variables without such worries:
+
+o	Guard all accesses to a given variable by a particular lock,
+	so that there are never concurrent conflicting accesses to that
+	variable.  (There are "conflicting accesses" when at least one of
+	the concurrent accesses to a variable is an unmarked C-language
+	access and when at least one of those accesses is a write.)
+
+o	As above, but using other synchronization primitives such
+	as reader-writer locks or sequence locks as designed.
+
+o	Restrict use of a given variable to statistics or heuristics
+	where the occasional bogus value can be tolerated.
+
+If you need to live more dangerously, please do take the time to
+understand the compilers.  One place to start is these two LWN
+articles:
+
+Who's afraid of a big bad optimizing compiler?
+	https://lwn.net/Articles/793253
+Calibrating your fear of big bad optimizing compilers
+	https://lwn.net/Articles/799218
+
+Used properly, unmarked C-language accesses can reduce overhead on
+fastpaths.  However, the price is great care and continual attention
+to your compiler as new versions come out and as new optimizations
+are enabled.
diff --git a/tools/memory-model/control-dependencies.txt b/tools/memory-model/control-dependencies.txt
new file mode 100644
index 0000000..366520c
--- /dev/null
+++ b/tools/memory-model/control-dependencies.txt
@@ -0,0 +1,256 @@
+CONTROL DEPENDENCIES
+====================
+
+Control dependencies can be a bit tricky because current compilers do
+not understand them.  The purpose of this section is to help you prevent
+the compiler's ignorance from breaking your code.
+
+A load-load control dependency requires a full read memory barrier, not
+simply a data dependency barrier to make it work correctly.  Consider the
+following bit of code:
+
+	q = READ_ONCE(a);
+	if (q) {
+		<data dependency barrier>  /* BUG: No data dependency!!! */
+		p = READ_ONCE(b);
+	}
+
+This will not have the desired effect because there is no actual data
+dependency, but rather a control dependency that the CPU may short-circuit
+by attempting to predict the outcome in advance, so that other CPUs see
+the load from b as having happened before the load from a.  In such a
+case what's actually required is:
+
+	q = READ_ONCE(a);
+	if (q) {
+		<read barrier>
+		p = READ_ONCE(b);
+	}
+
+However, stores are not speculated.  This means that ordering -is- provided
+for load-store control dependencies, as in the following example:
+
+	q = READ_ONCE(a);
+	if (q) {
+		WRITE_ONCE(b, 1);
+	}
+
+Control dependencies pair normally with other types of barriers.
+That said, please note that neither READ_ONCE() nor WRITE_ONCE()
+are optional! Without the READ_ONCE(), the compiler might combine the
+load from "a" with other loads from "a".  Without the WRITE_ONCE(),
+the compiler might combine the store to "b" with other stores to "b",
+or, worse yet, convert the store into a check followed by a store.
+
+Worse yet, if the compiler is able to prove (say) that the value of
+variable "a" is always non-zero, it would be well within its rights
+to optimize the original example by eliminating the "if" statement
+as follows:
+
+	q = a;
+	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */
+
+So don't leave out either the READ_ONCE() or the WRITE_ONCE().
+
+It is tempting to try to enforce ordering on identical stores on both
+branches of the "if" statement as follows:
+
+	q = READ_ONCE(a);
+	if (q) {
+		barrier();
+		WRITE_ONCE(b, 1);
+		do_something();
+	} else {
+		barrier();
+		WRITE_ONCE(b, 1);
+		do_something_else();
+	}
+
+Unfortunately, current compilers will transform this as follows at high
+optimization levels:
+
+	q = READ_ONCE(a);
+	barrier();
+	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
+	if (q) {
+		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
+		do_something();
+	} else {
+		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
+		do_something_else();
+	}
+
+Now there is no conditional between the load from "a" and the store to
+"b", which means that the CPU is within its rights to reorder them:
+The conditional is absolutely required, and must be present in the
+assembly code even after all compiler optimizations have been applied.
+Therefore, if you need ordering in this example, you need explicit
+memory barriers, for example, smp_store_release():
+
+	q = READ_ONCE(a);
+	if (q) {
+		smp_store_release(&b, 1);
+		do_something();
+	} else {
+		smp_store_release(&b, 1);
+		do_something_else();
+	}
+
+In contrast, without explicit memory barriers, two-legged-if control
+ordering is guaranteed only when the stores differ, for example:
+
+	q = READ_ONCE(a);
+	if (q) {
+		WRITE_ONCE(b, 1);
+		do_something();
+	} else {
+		WRITE_ONCE(b, 2);
+		do_something_else();
+	}
+
+The initial READ_ONCE() is still required to prevent the compiler from
+proving the value of "a".
+
+In addition, you need to be careful what you do with the local variable "q",
+otherwise the compiler might be able to guess the value and again remove
+the needed conditional.  For example:
+
+	q = READ_ONCE(a);
+	if (q % MAX) {
+		WRITE_ONCE(b, 1);
+		do_something();
+	} else {
+		WRITE_ONCE(b, 2);
+		do_something_else();
+	}
+
+If MAX is defined to be 1, then the compiler knows that (q % MAX) is
+equal to zero, in which case the compiler is within its rights to
+transform the above code into the following:
+
+	q = READ_ONCE(a);
+	WRITE_ONCE(b, 2);
+	do_something_else();
+
+Given this transformation, the CPU is not required to respect the ordering
+between the load from variable "a" and the store to variable "b".  It is
+tempting to add a barrier(), but this does not help.  The conditional
+is gone, and the barrier won't bring it back.  Therefore, if you are
+relying on this ordering, you should make sure that MAX is greater than
+one, perhaps as follows:
+
+	q = READ_ONCE(a);
+	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
+	if (q % MAX) {
+		WRITE_ONCE(b, 1);
+		do_something();
+	} else {
+		WRITE_ONCE(b, 2);
+		do_something_else();
+	}
+
+Please note once again that the stores to "b" differ.  If they were
+identical, as noted earlier, the compiler could pull this store outside
+of the 'if' statement.
+
+You must also be careful not to rely too much on boolean short-circuit
+evaluation.  Consider this example:
+
+	q = READ_ONCE(a);
+	if (q || 1 > 0)
+		WRITE_ONCE(b, 1);
+
+Because the first condition cannot fault and the second condition is
+always true, the compiler can transform this example as follows,
+defeating the control dependency:
+
+	q = READ_ONCE(a);
+	WRITE_ONCE(b, 1);
+
+This example underscores the need to ensure that the compiler cannot
+out-guess your code.  More generally, although READ_ONCE() does force
+the compiler to actually emit code for a given load, it does not force
+the compiler to use the results.
+
+In addition, control dependencies apply only to the then-clause and
+else-clause of the if-statement in question.  In particular, they do
+not necessarily apply to code following the if-statement:
+
+	q = READ_ONCE(a);
+	if (q) {
+		WRITE_ONCE(b, 1);
+	} else {
+		WRITE_ONCE(b, 2);
+	}
+	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from "a". */
+
+It is tempting to argue that there in fact is ordering because the
+compiler cannot reorder volatile accesses and also cannot reorder
+the writes to "b" with the condition.  Unfortunately for this line
+of reasoning, the compiler might compile the two writes to "b" as
+conditional-move instructions, as in this fanciful pseudo-assembly
+language:
+
+	ld r1,a
+	cmp r1,$0
+	cmov,ne r4,$1
+	cmov,eq r4,$2
+	st r4,b
+	st $1,c
+
+A weakly ordered CPU would have no dependency of any sort between the load
+from "a" and the store to "c".  The control dependencies would extend
+only to the pair of cmov instructions and the store depending on them.
+In short, control dependencies apply only to the stores in the then-clause
+and else-clause of the if-statement in question (including functions
+invoked by those two clauses), not to code following that if-statement.
+
+
+Note well that the ordering provided by a control dependency is local
+to the CPU containing it.  See the section on "Multicopy atomicity"
+for more information.
+
+
+In summary:
+
+  (*) Control dependencies can order prior loads against later stores.
+      However, they do -not- guarantee any other sort of ordering:
+      Not prior loads against later loads, nor prior stores against
+      later anything.  If you need these other forms of ordering,
+      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
+      later loads, smp_mb().
+
+  (*) If both legs of the "if" statement begin with identical stores to
+      the same variable, then those stores must be ordered, either by
+      preceding both of them with smp_mb() or by using smp_store_release()
+      to carry out the stores.  Please note that it is -not- sufficient
+      to use barrier() at the beginning of each leg of the "if" statement
+      because, as shown by the example above, optimizing compilers can
+      destroy the control dependency while respecting the letter of the
+      barrier() law.
+
+  (*) Control dependencies require at least one run-time conditional
+      between the prior load and the subsequent store, and this
+      conditional must involve the prior load.  If the compiler is able
+      to optimize the conditional away, it will have also optimized
+      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
+      can help to preserve the needed conditional.
+
+  (*) Control dependencies require that the compiler avoid reordering the
+      dependency into nonexistence.  Careful use of READ_ONCE() or
+      atomic{,64}_read() can help to preserve your control dependency.
+      Please see the COMPILER BARRIER section for more information.
+
+  (*) Control dependencies apply only to the then-clause and else-clause
+      of the if-statement containing the control dependency, including
+      any functions that these two clauses call.  Control dependencies
+      do -not- apply to code following the if-statement containing the
+      control dependency.
+
+  (*) Control dependencies pair normally with other types of barriers.
+
+  (*) Control dependencies do -not- provide multicopy atomicity.  If you
+      need all the CPUs to see a given store at the same time, use smp_mb().
+
+  (*) Compilers do not understand control dependencies.  It is therefore
+      your job to ensure that they do not break your code.
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-08-31 18:20 [PATCH memory-model 0/9] LKMM updates for v5.10 Paul E. McKenney
                   ` (7 preceding siblings ...)
  2020-08-31 18:20 ` [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives paulmck
@ 2020-08-31 18:20 ` paulmck
  2020-08-31 20:17   ` Alan Stern
  8 siblings, 1 reply; 30+ messages in thread
From: paulmck @ 2020-08-31 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

Most Linux-kernel uses of locking are straightforward, but there are
corner-case uses that rely on less well-known aspects of the lock and
unlock primitives.  This commit therefore adds a locking.txt and litmus
tests in Documentation/litmus-tests/locking to explain these corner-case
uses.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 .../litmus-tests/locking/DCL-broken.litmus         |  55 ++++
 .../litmus-tests/locking/DCL-fixed.litmus          |  56 ++++
 .../litmus-tests/locking/RM-broken.litmus          |  42 +++
 Documentation/litmus-tests/locking/RM-fixed.litmus |  42 +++
 tools/memory-model/Documentation/locking.txt       | 320 +++++++++++++++++++++
 5 files changed, 515 insertions(+)
 create mode 100644 Documentation/litmus-tests/locking/DCL-broken.litmus
 create mode 100644 Documentation/litmus-tests/locking/DCL-fixed.litmus
 create mode 100644 Documentation/litmus-tests/locking/RM-broken.litmus
 create mode 100644 Documentation/litmus-tests/locking/RM-fixed.litmus
 create mode 100644 tools/memory-model/Documentation/locking.txt

diff --git a/Documentation/litmus-tests/locking/DCL-broken.litmus b/Documentation/litmus-tests/locking/DCL-broken.litmus
new file mode 100644
index 0000000..cfaa25f
--- /dev/null
+++ b/Documentation/litmus-tests/locking/DCL-broken.litmus
@@ -0,0 +1,55 @@
+C DCL-broken
+
+(*
+ * Result: Sometimes
+ *
+ * This litmus test demonstrates that more than just locking is required to
+ * correctly implement double-checked locking.
+ *)
+
+{
+	int flag;
+	int data;
+	int lck;
+}
+
+P0(int *flag, int *data, int *lck)
+{
+	int r0;
+	int r1;
+	int r2;
+
+	r0 = READ_ONCE(*flag);
+	if (r0 == 0) {
+		spin_lock(lck);
+		r1 = READ_ONCE(*flag);
+		if (r1 == 0) {
+			WRITE_ONCE(*data, 1);
+			WRITE_ONCE(*flag, 1);
+		}
+		spin_unlock(lck);
+	}
+	r2 = READ_ONCE(*data);
+}
+
+P1(int *flag, int *data, int *lck)
+{
+	int r0;
+	int r1;
+	int r2;
+
+	r0 = READ_ONCE(*flag);
+	if (r0 == 0) {
+		spin_lock(lck);
+		r1 = READ_ONCE(*flag);
+		if (r1 == 0) {
+			WRITE_ONCE(*data, 1);
+			WRITE_ONCE(*flag, 1);
+		}
+		spin_unlock(lck);
+	}
+	r2 = READ_ONCE(*data);
+}
+
+locations [flag;data;lck;0:r0;0:r1;1:r0;1:r1]
+exists (0:r2=0 \/ 1:r2=0)
diff --git a/Documentation/litmus-tests/locking/DCL-fixed.litmus b/Documentation/litmus-tests/locking/DCL-fixed.litmus
new file mode 100644
index 0000000..579d6c2
--- /dev/null
+++ b/Documentation/litmus-tests/locking/DCL-fixed.litmus
@@ -0,0 +1,56 @@
+C DCL-fixed
+
+(*
+ * Result: Never
+ *
+ * This litmus test demonstrates that double-checked locking can be
+ * reliable given proper use of smp_load_acquire() and smp_store_release()
+ * in addition to the locking.
+ *)
+
+{
+	int flag;
+	int data;
+	int lck;
+}
+
+P0(int *flag, int *data, int *lck)
+{
+	int r0;
+	int r1;
+	int r2;
+
+	r0 = smp_load_acquire(flag);
+	if (r0 == 0) {
+		spin_lock(lck);
+		r1 = READ_ONCE(*flag);
+		if (r1 == 0) {
+			WRITE_ONCE(*data, 1);
+			smp_store_release(flag, 1);
+		}
+		spin_unlock(lck);
+	}
+	r2 = READ_ONCE(*data);
+}
+
+P1(int *flag, int *data, int *lck)
+{
+	int r0;
+	int r1;
+	int r2;
+
+	r0 = smp_load_acquire(flag);
+	if (r0 == 0) {
+		spin_lock(lck);
+		r1 = READ_ONCE(*flag);
+		if (r1 == 0) {
+			WRITE_ONCE(*data, 1);
+			smp_store_release(flag, 1);
+		}
+		spin_unlock(lck);
+	}
+	r2 = READ_ONCE(*data);
+}
+
+locations [flag;data;lck;0:r0;0:r1;1:r0;1:r1]
+exists (0:r2=0 \/ 1:r2=0)
diff --git a/Documentation/litmus-tests/locking/RM-broken.litmus b/Documentation/litmus-tests/locking/RM-broken.litmus
new file mode 100644
index 0000000..c586ae4
--- /dev/null
+++ b/Documentation/litmus-tests/locking/RM-broken.litmus
@@ -0,0 +1,42 @@
+C RM-broken
+
+(*
+ * Result: DEADLOCK
+ *
+ * This litmus test demonstrates that the old "roach motel" approach
+ * to locking, where code can be freely moved into critical sections,
+ * cannot be used in the Linux kernel.
+ *)
+
+{
+	int lck;
+	int x;
+	int y;
+}
+
+P0(int *x, int *y, int *lck)
+{
+	int r2;
+
+	spin_lock(lck);
+	r2 = atomic_inc_return(y);
+	WRITE_ONCE(*x, 1);
+	spin_unlock(lck);
+}
+
+P1(int *x, int *y, int *lck)
+{
+	int r0;
+	int r1;
+	int r2;
+
+	spin_lock(lck);
+	r0 = READ_ONCE(*x);
+	r1 = READ_ONCE(*x);
+	r2 = atomic_inc_return(y);
+	spin_unlock(lck);
+}
+
+locations [x;lck;0:r2;1:r0;1:r1;1:r2]
+filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
+exists (1:r2=1)
diff --git a/Documentation/litmus-tests/locking/RM-fixed.litmus b/Documentation/litmus-tests/locking/RM-fixed.litmus
new file mode 100644
index 0000000..6728567
--- /dev/null
+++ b/Documentation/litmus-tests/locking/RM-fixed.litmus
@@ -0,0 +1,42 @@
+C RM-fixed
+
+(*
+ * Result: Never
+ *
+ * This litmus test demonstrates that the old "roach motel" approach
+ * to locking, where code can be freely moved into critical sections,
+ * cannot be used in the Linux kernel.
+ *)
+
+{
+	int lck;
+	int x;
+	int y;
+}
+
+P0(int *x, int *y, int *lck)
+{
+	int r2;
+
+	spin_lock(lck);
+	r2 = atomic_inc_return(y);
+	WRITE_ONCE(*x, 1);
+	spin_unlock(lck);
+}
+
+P1(int *x, int *y, int *lck)
+{
+	int r0;
+	int r1;
+	int r2;
+
+	r0 = READ_ONCE(*x);
+	r1 = READ_ONCE(*x);
+	spin_lock(lck);
+	r2 = atomic_inc_return(y);
+	spin_unlock(lck);
+}
+
+locations [x;lck;0:r2;1:r0;1:r1;1:r2]
+filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
+exists (1:r2=1)
diff --git a/tools/memory-model/Documentation/locking.txt b/tools/memory-model/Documentation/locking.txt
new file mode 100644
index 0000000..a6ad6aa
--- /dev/null
+++ b/tools/memory-model/Documentation/locking.txt
@@ -0,0 +1,320 @@
+Locking
+=======
+
+Locking is well known and the common use cases are straightforward: Any
+CPU holding a given lock sees any changes previously seen or made by any
+CPU before it previously released that same lock.  This last sentence
+is the only part of this document that most developers will need to read.
+
+However, developers who would like to also access lock-protected shared
+variables outside of their corresponding locks should continue reading.
+
+
+Locking and Prior Accesses
+--------------------------
+
+The basic rule of locking is worth repeating:
+
+	Any CPU holding a given lock sees any changes previously seen
+	or made by any CPU before it previously released that same lock.
+
+Note that this statement is a bit stronger than "Any CPU holding a
+given lock sees all changes made by any CPU during the time that CPU was
+previously holding this same lock".  For example, consider the following
+pair of code fragments:
+
+	/* See MP+polocks.litmus. */
+	void CPU0(void)
+	{
+		WRITE_ONCE(x, 1);
+		spin_lock(&mylock);
+		WRITE_ONCE(y, 1);
+		spin_unlock(&mylock);
+	}
+
+	void CPU1(void)
+	{
+		spin_lock(&mylock);
+		r0 = READ_ONCE(y);
+		spin_unlock(&mylock);
+		r1 = READ_ONCE(x);
+	}
+
+The basic rule guarantees that if CPU0() acquires mylock before CPU1(),
+then both r0 and r1 must be set to the value 1.  This also has the
+consequence that if the final value of r0 is equal to 1, then the final
+value of r1 must also be equal to 1.  In contrast, the weaker rule would
+say nothing about the final value of r1.
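This basic rule can be exercised in userspace with a pthread mutex standing in for mylock; it then forbids the outcome r0 == 1 && r1 == 0.  One caveat: plain int accesses outside the lock would be data races in C, so this sketch uses relaxed atomics to mirror WRITE_ONCE()/READ_ONCE().  All names are illustrative:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t lk_mylock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int lk_x, lk_y;

static void *lk_cpu0(void *arg)
{
	(void)arg;
	atomic_store_explicit(&lk_x, 1, memory_order_relaxed); /* WRITE_ONCE(x, 1) */
	pthread_mutex_lock(&lk_mylock);
	atomic_store_explicit(&lk_y, 1, memory_order_relaxed); /* WRITE_ONCE(y, 1) */
	pthread_mutex_unlock(&lk_mylock);
	return NULL;
}

/* One trial of MP+polocks; returns zero only on the forbidden
 * outcome r0 == 1 && r1 == 0. */
int polocks_trial_ok(void)
{
	pthread_t t;
	int r0, r1;

	atomic_store(&lk_x, 0);
	atomic_store(&lk_y, 0);
	pthread_create(&t, NULL, lk_cpu0, NULL);
	pthread_mutex_lock(&lk_mylock);
	r0 = atomic_load_explicit(&lk_y, memory_order_relaxed); /* READ_ONCE(y) */
	pthread_mutex_unlock(&lk_mylock);
	r1 = atomic_load_explicit(&lk_x, memory_order_relaxed); /* READ_ONCE(x) */
	pthread_join(t, NULL);
	return !(r0 == 1 && r1 == 0);
}
```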
+
+
+Locking and Subsequent Accesses
+-------------------------------
+
+The converse to the basic rule also holds:  Any CPU holding a given
+lock will not see any changes that will be made by any CPU after it
+subsequently acquires this same lock.  This converse statement is
+illustrated by the following litmus test:
+
+	/* See MP+porevlocks.litmus. */
+	void CPU0(void)
+	{
+		r0 = READ_ONCE(y);
+		spin_lock(&mylock);
+		r1 = READ_ONCE(x);
+		spin_unlock(&mylock);
+	}
+
+	void CPU1(void)
+	{
+		spin_lock(&mylock);
+		WRITE_ONCE(x, 1);
+		spin_unlock(&mylock);
+		WRITE_ONCE(y, 1);
+	}
+
+This converse to the basic rule guarantees that if CPU0() acquires
+mylock before CPU1(), then both r0 and r1 must be set to the value 0.
+This also has the consequence that if the final value of r1 is equal
+to 0, then the final value of r0 must also be equal to 0.  In contrast,
+the weaker rule would say nothing about the final value of r0.
+
+These examples show only a single pair of CPUs, but the effects of the
+locking basic rule extend across multiple acquisitions of a given lock
+across multiple CPUs.
+
+
+Double-Checked Locking
+----------------------
+
+It is well known that more than just a lock is required to make
+double-checked locking work correctly.  This litmus test illustrates
+one incorrect approach:
+
+	/* See Documentation/litmus-tests/locking/DCL-broken.litmus. */
+	P0(int *flag, int *data, int *lck)
+	{
+		int r0;
+		int r1;
+		int r2;
+
+		r0 = READ_ONCE(*flag);
+		if (r0 == 0) {
+			spin_lock(lck);
+			r1 = READ_ONCE(*flag);
+			if (r1 == 0) {
+				WRITE_ONCE(*data, 1);
+				WRITE_ONCE(*flag, 1);
+			}
+			spin_unlock(lck);
+		}
+		r2 = READ_ONCE(*data);
+	}
+	/* P1() is exactly the same as P0(). */
+
+There are two problems.  First, there is no ordering between the first
+READ_ONCE() of "flag" and the READ_ONCE() of "data".  Second, there is
+no ordering between the two WRITE_ONCE() calls.  It should therefore be
+no surprise that "r2" can be zero, and a quick herd7 run confirms this.
+
+One way to fix this is to use smp_load_acquire() and smp_store_release()
+as shown in this corrected version:
+
+	/* See Documentation/litmus-tests/locking/DCL-fixed.litmus. */
+	P0(int *flag, int *data, int *lck)
+	{
+		int r0;
+		int r1;
+		int r2;
+
+		r0 = smp_load_acquire(flag);
+		if (r0 == 0) {
+			spin_lock(lck);
+			r1 = READ_ONCE(*flag);
+			if (r1 == 0) {
+				WRITE_ONCE(*data, 1);
+				smp_store_release(flag, 1);
+			}
+			spin_unlock(lck);
+		}
+		r2 = READ_ONCE(*data);
+	}
+	/* P1() is exactly the same as P0(). */
+
+The smp_load_acquire() guarantees that its load from "flags" will
+be ordered before the READ_ONCE() from data, thus solving the first
+problem.  The smp_store_release() guarantees that its store will be
+ordered after the WRITE_ONCE() to "data", solving the second problem.
+The smp_store_release() pairs with the smp_load_acquire(), thus ensuring
+that the ordering provided by each actually takes effect.  Again, a
+quick herd7 run confirms this.
+
+In short, if you access a lock-protected variable without holding the
+corresponding lock, you will need to provide additional ordering, in
+this case, via the smp_load_acquire() and the smp_store_release().
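For readers who prefer ordinary threaded C, the fixed pattern can be
sketched with C11 atomics and pthreads.  This is an illustrative analog
only: get_data(), init_count, and the mapping of smp_load_acquire() and
smp_store_release() onto memory_order_acquire and memory_order_release
are this sketch's own assumptions, not part of the litmus test.

```c
#include <pthread.h>
#include <stdatomic.h>

/* Hypothetical lazily-initialized state; names are made up. */
static int data;
static atomic_int flag;
static pthread_mutex_t lck = PTHREAD_MUTEX_INITIALIZER;
static int init_count;	/* counts how many times we initialized */

/* Double-checked locking, C11 analog of the fixed litmus test.
 * The acquire load of "flag" pairs with the release store, so the
 * plain read of "data" at the end is ordered after the initializing
 * write, even on the fast path that never takes the lock. */
int get_data(void)
{
	if (atomic_load_explicit(&flag, memory_order_acquire) == 0) {
		pthread_mutex_lock(&lck);
		/* Re-check under the lock; relaxed suffices here
		 * because the lock already orders this access. */
		if (atomic_load_explicit(&flag, memory_order_relaxed) == 0) {
			data = 1;	/* initialize under the lock */
			init_count++;
			atomic_store_explicit(&flag, 1, memory_order_release);
		}
		pthread_mutex_unlock(&lck);
	}
	return data;
}
```

Subsequent calls take only the fast path: the acquire load sees "flag"
equal to 1 and the lock is never contended again.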
+
+
+Ordering Provided by a Lock to CPUs Not Holding That Lock
+---------------------------------------------------------
+
+It is not necessarily the case that accesses ordered by locking will be
+seen as ordered by CPUs not holding that lock.  Consider this example:
+
+	/* See Z6.0+pooncelock+pooncelock+pombonce.litmus. */
+	void CPU0(void)
+	{
+		spin_lock(&mylock);
+		WRITE_ONCE(x, 1);
+		WRITE_ONCE(y, 1);
+		spin_unlock(&mylock);
+	}
+
+	void CPU1(void)
+	{
+		spin_lock(&mylock);
+		r0 = READ_ONCE(y);
+		WRITE_ONCE(z, 1);
+		spin_unlock(&mylock);
+	}
+
+	void CPU2(void)
+	{
+		WRITE_ONCE(z, 2);
+		smp_mb();
+		r1 = READ_ONCE(x);
+	}
+
+Counter-intuitive though it might be, it is quite possible to have
+the final value of r0 be 1, the final value of z be 2, and the final
+value of r1 be 0.  The reason for this surprising outcome is that CPU2()
+never acquired the lock, and thus did not fully benefit from the lock's
+ordering properties.
+
+Ordering can be extended to CPUs not holding the lock by careful use
+of smp_mb__after_spinlock():
+
+	/* See Z6.0+pooncelock+poonceLock+pombonce.litmus. */
+	void CPU0(void)
+	{
+		spin_lock(&mylock);
+		WRITE_ONCE(x, 1);
+		WRITE_ONCE(y, 1);
+		spin_unlock(&mylock);
+	}
+
+	void CPU1(void)
+	{
+		spin_lock(&mylock);
+		smp_mb__after_spinlock();
+		r0 = READ_ONCE(y);
+		WRITE_ONCE(z, 1);
+		spin_unlock(&mylock);
+	}
+
+	void CPU2(void)
+	{
+		WRITE_ONCE(z, 2);
+		smp_mb();
+		r1 = READ_ONCE(x);
+	}
+
+This addition of smp_mb__after_spinlock() strengthens the lock
+acquisition sufficiently to rule out the counter-intuitive outcome.
+In other words, the addition of the smp_mb__after_spinlock() prohibits
+the counter-intuitive result where the final value of r0 is 1, the final
+value of z is 2, and the final value of r1 is 0.
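A rough C11 analog of CPU1() may help: pthread_mutex_lock() provides
only acquire ordering, so a seq_cst fence placed immediately after it
plays the role that smp_mb__after_spinlock() plays in the kernel.  This
is a sketch under those assumptions, not a definitive mapping of the
kernel primitive.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int x, y, z;
static pthread_mutex_t mylock = PTHREAD_MUTEX_INITIALIZER;

/* C11 analog of CPU1() above: the full fence right after the lock
 * is taken upgrades the acquire-only lock operation, extending its
 * ordering to threads that never acquire the lock. */
int cpu1(void)
{
	int r0;

	pthread_mutex_lock(&mylock);
	atomic_thread_fence(memory_order_seq_cst); /* ~ smp_mb__after_spinlock() */
	r0 = atomic_load_explicit(&y, memory_order_relaxed);
	atomic_store_explicit(&z, 1, memory_order_relaxed);
	pthread_mutex_unlock(&mylock);
	return r0;
}
```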
+
+
+No Roach-Motel Locking!
+-----------------------
+
+This example requires familiarity with the herd7 "filter" clause, so
+please read up on that topic in litmus-tests.txt.
+
+It is tempting to allow memory-reference instructions to be pulled
+into a critical section, but this cannot be allowed in the general case.
+For example, consider a spin loop preceding a lock-based critical section.
+Now, herd7 does not model spin loops, but we can emulate one with two
+loads, with a "filter" clause to constrain the first to return the
+initial value and the second to return the updated value, as shown below:
+
+	/* See Documentation/litmus-tests/locking/RM-fixed.litmus. */
+	P0(int *x, int *y, int *lck)
+	{
+		int r2;
+
+		spin_lock(lck);
+		r2 = atomic_inc_return(y);
+		WRITE_ONCE(*x, 1);
+		spin_unlock(lck);
+	}
+
+	P1(int *x, int *y, int *lck)
+	{
+		int r0;
+		int r1;
+		int r2;
+
+		r0 = READ_ONCE(*x);
+		r1 = READ_ONCE(*x);
+		spin_lock(lck);
+		r2 = atomic_inc_return(y);
+		spin_unlock(lck);
+	}
+
+	filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
+	exists (1:r2=1)
+
+The variable "x" is the control variable for the emulated spin loop.
+P0() sets it to "1" while holding the lock, and P1() emulates the
+spin loop by reading it twice, first into "1:r0" (which should get the
+initial value "0") and then into "1:r1" (which should get the updated
+value "1").
+
+The purpose of the variable "y" is to reject deadlocked executions.
+Only those executions in which the final value of "y" is "2" have
+avoided deadlock.
+
+The "filter" clause takes all this into account, constraining "y" to
+equal "2", "1:r0" to equal "0", and "1:r1" to equal "1".
+
+Then the "exists" clause checks to see if P1() acquired its lock first,
+which should not happen given the filter clause because P0() updates
+"x" while holding the lock.  And herd7 confirms this.
+
+But suppose that the compiler was permitted to reorder the spin loop
+into P1()'s critical section, like this:
+
+	/* See Documentation/litmus-tests/locking/RM-broken.litmus. */
+	P0(int *x, int *y, int *lck)
+	{
+		int r2;
+
+		spin_lock(lck);
+		r2 = atomic_inc_return(y);
+		WRITE_ONCE(*x, 1);
+		spin_unlock(lck);
+	}
+
+	P1(int *x, int *y, int *lck)
+	{
+		int r0;
+		int r1;
+		int r2;
+
+		spin_lock(lck);
+		r0 = READ_ONCE(*x);
+		r1 = READ_ONCE(*x);
+		r2 = atomic_inc_return(y);
+		spin_unlock(lck);
+	}
+
+	locations [x;lck;0:r2;1:r0;1:r1;1:r2]
+	filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
+	exists (1:r2=1)
+
+If "1:r0" is equal to "0", "1:r1" can never equal "1" because P0()
+cannot update "x" while P1() holds the lock.  And herd7 confirms this,
+showing zero executions matching the "filter" criteria.
+
+And this is why Linux-kernel lock and unlock primitives must prevent
+code from entering critical sections.  It is not sufficient to only
+prevent code from leaving them.
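The emulated pattern can also be written as ordinary pthreads code, with
a real spin loop in place of the two-load emulation.  The function names
and the run_demo() harness below are this sketch's own inventions; the
point it demonstrates is the same as the litmus tests above.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int x;
static atomic_int y;
static pthread_mutex_t lck = PTHREAD_MUTEX_INITIALIZER;

static void *p0(void *arg)
{
	pthread_mutex_lock(&lck);
	atomic_fetch_add(&y, 1);
	atomic_store_explicit(&x, 1, memory_order_release);
	pthread_mutex_unlock(&lck);
	return NULL;
}

static void *p1(void *arg)
{
	/* Spin BEFORE taking the lock.  If this loop were pulled
	 * into the critical section below, any execution in which
	 * p1 acquired the lock first would deadlock: p0 could then
	 * never store to x, and p1 would spin forever. */
	while (atomic_load_explicit(&x, memory_order_acquire) == 0)
		;
	pthread_mutex_lock(&lck);
	atomic_fetch_add(&y, 1);
	pthread_mutex_unlock(&lck);
	return NULL;
}

/* Run both threads to completion and return the final value of y. */
int run_demo(void)
{
	pthread_t t0, t1;

	pthread_create(&t1, NULL, p1, NULL);
	pthread_create(&t0, NULL, p0, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return atomic_load(&y);
}
```

Because the spin loop stays outside p1()'s critical section, run_demo()
always terminates with both increments of "y" having taken place.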
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-08-31 18:20 ` [PATCH kcsan 9/9] tools/memory-model: Document locking corner cases paulmck
@ 2020-08-31 20:17   ` Alan Stern
  2020-08-31 21:47     ` Paul E. McKenney
  0 siblings, 1 reply; 30+ messages in thread
From: Alan Stern @ 2020-08-31 20:17 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Mon, Aug 31, 2020 at 11:20:37AM -0700, paulmck@kernel.org wrote:
> +No Roach-Motel Locking!
> +-----------------------
> +
> +This example requires familiarity with the herd7 "filter" clause, so
> +please read up on that topic in litmus-tests.txt.
> +
> +It is tempting to allow memory-reference instructions to be pulled
> +into a critical section, but this cannot be allowed in the general case.
> +For example, consider a spin loop preceding a lock-based critical section.
> +Now, herd7 does not model spin loops, but we can emulate one with two
> +loads, with a "filter" clause to constrain the first to return the
> +initial value and the second to return the updated value, as shown below:
> +
> +	/* See Documentation/litmus-tests/locking/RM-fixed.litmus. */
> +	P0(int *x, int *y, int *lck)
> +	{
> +		int r2;
> +
> +		spin_lock(lck);
> +		r2 = atomic_inc_return(y);
> +		WRITE_ONCE(*x, 1);
> +		spin_unlock(lck);
> +	}
> +
> +	P1(int *x, int *y, int *lck)
> +	{
> +		int r0;
> +		int r1;
> +		int r2;
> +
> +		r0 = READ_ONCE(*x);
> +		r1 = READ_ONCE(*x);
> +		spin_lock(lck);
> +		r2 = atomic_inc_return(y);
> +		spin_unlock(lck);
> +	}
> +
> +	filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
> +	exists (1:r2=1)
> +
> +The variable "x" is the control variable for the emulated spin loop.
> +P0() sets it to "1" while holding the lock, and P1() emulates the
> +spin loop by reading it twice, first into "1:r0" (which should get the
> +initial value "0") and then into "1:r1" (which should get the updated
> +value "1").
> +
> +The purpose of the variable "y" is to reject deadlocked executions.
> +Only those executions in which the final value of "y" is "2" have
> +avoided deadlock.
> +
> +The "filter" clause takes all this into account, constraining "y" to
> +equal "2", "1:r0" to equal "0", and "1:r1" to equal 1.
> +
> +Then the "exists" clause checks to see if P1() acquired its lock first,
> +which should not happen given the filter clause because P0() updates
> +"x" while holding the lock.  And herd7 confirms this.
> +
> +But suppose that the compiler was permitted to reorder the spin loop
> +into P1()'s critical section, like this:
> +
> +	/* See Documentation/litmus-tests/locking/RM-broken.litmus. */
> +	P0(int *x, int *y, int *lck)
> +	{
> +		int r2;
> +
> +		spin_lock(lck);
> +		r2 = atomic_inc_return(y);
> +		WRITE_ONCE(*x, 1);
> +		spin_unlock(lck);
> +	}
> +
> +	P1(int *x, int *y, int *lck)
> +	{
> +		int r0;
> +		int r1;
> +		int r2;
> +
> +		spin_lock(lck);
> +		r0 = READ_ONCE(*x);
> +		r1 = READ_ONCE(*x);
> +		r2 = atomic_inc_return(y);
> +		spin_unlock(lck);
> +	}
> +
> +	locations [x;lck;0:r2;1:r0;1:r1;1:r2]
> +	filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
> +	exists (1:r2=1)
> +
> +If "1:r0" is equal to "0", "1:r1" can never equal "1" because P0()
> +cannot update "x" while P1() holds the lock.  And herd7 confirms this,
> +showing zero executions matching the "filter" criteria.
> +
> +And this is why Linux-kernel lock and unlock primitives must prevent
> +code from entering critical sections.  It is not sufficient to only
> +prevent code from leaving them.

Is this discussion perhaps overkill?

Let's put it this way: Suppose we have the following code:

	P0(int *x, int *lck)
	{
		spin_lock(lck);
		WRITE_ONCE(*x, 1);
		do_something();
		spin_unlock(lck);
	}

	P1(int *x, int *lck)
	{
		while (READ_ONCE(*x) == 0)
			;
		spin_lock(lck);
		do_something_else();
		spin_unlock(lck);
	}

It's obvious that this test won't deadlock.  But if P1 is changed to:

	P1(int *x, int *lck)
	{
		spin_lock(lck);
		while (READ_ONCE(*x) == 0)
			;
		do_something_else();
		spin_unlock(lck);
	}

then it's equally obvious that the test can deadlock.  No need for
fancy memory models or litmus tests or anything else.

Alan

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-08-31 20:17   ` Alan Stern
@ 2020-08-31 21:47     ` Paul E. McKenney
  2020-09-01  1:45       ` Alan Stern
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2020-08-31 21:47 UTC (permalink / raw)
  To: Alan Stern
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Mon, Aug 31, 2020 at 04:17:01PM -0400, Alan Stern wrote:
> On Mon, Aug 31, 2020 at 11:20:37AM -0700, paulmck@kernel.org wrote:
> > +No Roach-Motel Locking!
> > +-----------------------
> > +
> > +This example requires familiarity with the herd7 "filter" clause, so
> > +please read up on that topic in litmus-tests.txt.
> > +
> > +It is tempting to allow memory-reference instructions to be pulled
> > +into a critical section, but this cannot be allowed in the general case.
> > +For example, consider a spin loop preceding a lock-based critical section.
> > +Now, herd7 does not model spin loops, but we can emulate one with two
> > +loads, with a "filter" clause to constrain the first to return the
> > +initial value and the second to return the updated value, as shown below:
> > +
> > +	/* See Documentation/litmus-tests/locking/RM-fixed.litmus. */
> > +	P0(int *x, int *y, int *lck)
> > +	{
> > +		int r2;
> > +
> > +		spin_lock(lck);
> > +		r2 = atomic_inc_return(y);
> > +		WRITE_ONCE(*x, 1);
> > +		spin_unlock(lck);
> > +	}
> > +
> > +	P1(int *x, int *y, int *lck)
> > +	{
> > +		int r0;
> > +		int r1;
> > +		int r2;
> > +
> > +		r0 = READ_ONCE(*x);
> > +		r1 = READ_ONCE(*x);
> > +		spin_lock(lck);
> > +		r2 = atomic_inc_return(y);
> > +		spin_unlock(lck);
> > +	}
> > +
> > +	filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
> > +	exists (1:r2=1)
> > +
> > +The variable "x" is the control variable for the emulated spin loop.
> > +P0() sets it to "1" while holding the lock, and P1() emulates the
> > +spin loop by reading it twice, first into "1:r0" (which should get the
> > +initial value "0") and then into "1:r1" (which should get the updated
> > +value "1").
> > +
> > +The purpose of the variable "y" is to reject deadlocked executions.
> > +Only those executions in which the final value of "y" is "2" have
> > +avoided deadlock.
> > +
> > +The "filter" clause takes all this into account, constraining "y" to
> > +equal "2", "1:r0" to equal "0", and "1:r1" to equal 1.
> > +
> > +Then the "exists" clause checks to see if P1() acquired its lock first,
> > +which should not happen given the filter clause because P0() updates
> > +"x" while holding the lock.  And herd7 confirms this.
> > +
> > +But suppose that the compiler was permitted to reorder the spin loop
> > +into P1()'s critical section, like this:
> > +
> > +	/* See Documentation/litmus-tests/locking/RM-broken.litmus. */
> > +	P0(int *x, int *y, int *lck)
> > +	{
> > +		int r2;
> > +
> > +		spin_lock(lck);
> > +		r2 = atomic_inc_return(y);
> > +		WRITE_ONCE(*x, 1);
> > +		spin_unlock(lck);
> > +	}
> > +
> > +	P1(int *x, int *y, int *lck)
> > +	{
> > +		int r0;
> > +		int r1;
> > +		int r2;
> > +
> > +		spin_lock(lck);
> > +		r0 = READ_ONCE(*x);
> > +		r1 = READ_ONCE(*x);
> > +		r2 = atomic_inc_return(y);
> > +		spin_unlock(lck);
> > +	}
> > +
> > +	locations [x;lck;0:r2;1:r0;1:r1;1:r2]
> > +	filter (y=2 /\ 1:r0=0 /\ 1:r1=1)
> > +	exists (1:r2=1)
> > +
> > +If "1:r0" is equal to "0", "1:r1" can never equal "1" because P0()
> > +cannot update "x" while P1() holds the lock.  And herd7 confirms this,
> > +showing zero executions matching the "filter" criteria.
> > +
> > +And this is why Linux-kernel lock and unlock primitives must prevent
> > +code from entering critical sections.  It is not sufficient to only
> > +prevent code from leaving them.
> 
> Is this discussion perhaps overkill?
> 
> Let's put it this way: Suppose we have the following code:
> 
> 	P0(int *x, int *lck)
> 	{
> 		spin_lock(lck);
> 		WRITE_ONCE(*x, 1);
> 		do_something();
> 		spin_unlock(lck);
> 	}
> 
> 	P1(int *x, int *lck)
> 	{
> 		while (READ_ONCE(*x) == 0)
> 			;
> 		spin_lock(lck);
> 		do_something_else();
> 		spin_unlock(lck);
> 	}
> 
> It's obvious that this test won't deadlock.  But if P1 is changed to:
> 
> 	P1(int *x, int *lck)
> 	{
> 		spin_lock(lck);
> 		while (READ_ONCE(*x) == 0)
> 			;
> 		do_something_else();
> 		spin_unlock(lck);
> 	}
> 
> then it's equally obvious that the test can deadlock.  No need for
> fancy memory models or litmus tests or anything else.

For people like you and me, who have been thinking about memory ordering
for longer than either of us care to admit, this level of exposition is
most definitely -way- overkill!!!

But I have had people be very happy and grateful that I explained this to
them at this level of detail.  Yes, I started parallel programming before
some of them were born, but they are definitely within our target audience
for this particular document.  And it is not just Linux kernel hackers
who need this level of detail.  A roughly similar transactional-memory
scenario proved to be so non-obvious to any number of noted researchers
that Blundell, Lewis, and Martin needed to feature it in this paper:
https://ieeexplore.ieee.org/abstract/document/4069174
(Alternative source: https://repository.upenn.edu/cgi/viewcontent.cgi?article=1344&context=cis_papers)

Please note that I am -not- advocating making (say) explanation.txt or
recipes.txt more newbie-accessible than they already are.  After all,
the point of the README file in that same directory is to direct people
to the documentation files that are the best fit for them, and both
explanation.txt and recipes.txt contain advanced material, and thus
require similarly advanced prerequisites.

Seem reasonable, or am I missing your point?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives
  2020-08-31 18:20 ` [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives paulmck
@ 2020-08-31 22:34   ` Akira Yokosawa
  2020-08-31 23:12     ` Paul E. McKenney
  2020-09-01  1:23   ` Alan Stern
  1 sibling, 1 reply; 30+ messages in thread
From: Akira Yokosawa @ 2020-08-31 22:34 UTC (permalink / raw)
  To: paulmck, linux-kernel, linux-arch, kernel-team, mingo
  Cc: stern, parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget, akiyks

On Mon, 31 Aug 2020 11:20:36 -0700, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> The Linux kernel has a number of categories of ordering primitives, which
> are recorded in the LKMM implementation and hinted at by cheatsheet.txt.
> But there is no overview of these categories, and such an overview
> is needed in order to understand multithreaded LKMM litmus tests.
> This commit therefore adds an ordering.txt as well as extracting a
> control-dependencies.txt from memory-barriers.txt.  It also updates the
> README file.
> 
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>  tools/memory-model/Documentation/README       |  24 +-
>  tools/memory-model/Documentation/ordering.txt | 462 ++++++++++++++++++++++++++
>  tools/memory-model/control-dependencies.txt   | 256 ++++++++++++++
>  3 files changed, 740 insertions(+), 2 deletions(-)
>  create mode 100644 tools/memory-model/Documentation/ordering.txt
>  create mode 100644 tools/memory-model/control-dependencies.txt

Hi Paul,

Didn't you mean to put control-dependencies.txt under tools/memory-model/Documentation/ ?

        Thanks, Akira

> 
> diff --git a/tools/memory-model/Documentation/README b/tools/memory-model/Documentation/README
> index 4326603..16177aa 100644
> --- a/tools/memory-model/Documentation/README
> +++ b/tools/memory-model/Documentation/README
> @@ -8,10 +8,19 @@ number of places.
>  
>  This document therefore describes a number of places to start reading
>  the documentation in this directory, depending on what you know and what
> -you would like to learn:
> +you would like to learn.  These are cumulative, that is, understanding
> +of the documents earlier in this list is required by the documents later
> +in this list.
>  
>  o	You are new to Linux-kernel concurrency: simple.txt
>  
> +o	You have some background in Linux-kernel concurrency, and would
> +	like an overview of the types of low-level concurrency primitives
> +	that are provided:  ordering.txt
> +
> +	Here, "low level" means atomic operations to single locations in
> +	memory.
> +
>  o	You are familiar with the concurrency facilities that you
>  	need, and just want to get started with LKMM litmus tests:
>  	litmus-tests.txt
> @@ -20,6 +29,9 @@ o	You are familiar with Linux-kernel concurrency, and would
>  	like a detailed intuitive understanding of LKMM, including
>  	situations involving more than two threads: recipes.txt
>  
> +o	You would like a detailed understanding of what your compiler can
> +	and cannot do to control dependencies: control-dependencies.txt
> +
>  o	You are familiar with Linux-kernel concurrency and the
>  	use of LKMM, and would like a cheat sheet to remind you
>  	of LKMM's guarantees: cheatsheet.txt
> @@ -37,12 +49,16 @@ o	You are interested in the publications related to LKMM, including
>  DESCRIPTION OF FILES
>  ====================
>  
> -Documentation/README
> +README
>  	This file.
>  
>  Documentation/cheatsheet.txt
>  	Quick-reference guide to the Linux-kernel memory model.
>  
> +Documentation/control-dependencies.txt
> +	A guide to preventing compiler optimizations from destroying
> +	your control dependencies.
> +
>  Documentation/explanation.txt
>  	Describes the memory model in detail.
[...]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives
  2020-08-31 22:34   ` Akira Yokosawa
@ 2020-08-31 23:12     ` Paul E. McKenney
  0 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2020-08-31 23:12 UTC (permalink / raw)
  To: Akira Yokosawa
  Cc: linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, peterz, boqun.feng, npiggin, dhowells,
	j.alglave, luc.maranget

On Tue, Sep 01, 2020 at 07:34:20AM +0900, Akira Yokosawa wrote:
> On Mon, 31 Aug 2020 11:20:36 -0700, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > The Linux kernel has a number of categories of ordering primitives, which
> > are recorded in the LKMM implementation and hinted at by cheatsheet.txt.
> > But there is no overview of these categories, and such an overview
> > is needed in order to understand multithreaded LKMM litmus tests.
> > This commit therefore adds an ordering.txt as well as extracting a
> > control-dependencies.txt from memory-barriers.txt.  It also updates the
> > README file.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >  tools/memory-model/Documentation/README       |  24 +-
> >  tools/memory-model/Documentation/ordering.txt | 462 ++++++++++++++++++++++++++
> >  tools/memory-model/control-dependencies.txt   | 256 ++++++++++++++
> >  3 files changed, 740 insertions(+), 2 deletions(-)
> >  create mode 100644 tools/memory-model/Documentation/ordering.txt
> >  create mode 100644 tools/memory-model/control-dependencies.txt
> 
> Hi Paul,
> 
> Didn't you mean to put control-dependencies.txt under tools/memory-model/Documentation/ ?

Indeed I did, good catch, thank you!

							Thanx, Paul

>         Thanks, Akira
> 
> > 
> > diff --git a/tools/memory-model/Documentation/README b/tools/memory-model/Documentation/README
> > index 4326603..16177aa 100644
> > --- a/tools/memory-model/Documentation/README
> > +++ b/tools/memory-model/Documentation/README
> > @@ -8,10 +8,19 @@ number of places.
> >  
> >  This document therefore describes a number of places to start reading
> >  the documentation in this directory, depending on what you know and what
> > -you would like to learn:
> > +you would like to learn.  These are cumulative, that is, understanding
> > +of the documents earlier in this list is required by the documents later
> > +in this list.
> >  
> >  o	You are new to Linux-kernel concurrency: simple.txt
> >  
> > +o	You have some background in Linux-kernel concurrency, and would
> > +	like an overview of the types of low-level concurrency primitives
> > +	that are provided:  ordering.txt
> > +
> > +	Here, "low level" means atomic operations to single locations in
> > +	memory.
> > +
> >  o	You are familiar with the concurrency facilities that you
> >  	need, and just want to get started with LKMM litmus tests:
> >  	litmus-tests.txt
> > @@ -20,6 +29,9 @@ o	You are familiar with Linux-kernel concurrency, and would
> >  	like a detailed intuitive understanding of LKMM, including
> >  	situations involving more than two threads: recipes.txt
> >  
> > +o	You would like a detailed understanding of what your compiler can
> > +	and cannot do to control dependencies: control-dependencies.txt
> > +
> >  o	You are familiar with Linux-kernel concurrency and the
> >  	use of LKMM, and would like a cheat sheet to remind you
> >  	of LKMM's guarantees: cheatsheet.txt
> > @@ -37,12 +49,16 @@ o	You are interested in the publications related to LKMM, including
> >  DESCRIPTION OF FILES
> >  ====================
> >  
> > -Documentation/README
> > +README
> >  	This file.
> >  
> >  Documentation/cheatsheet.txt
> >  	Quick-reference guide to the Linux-kernel memory model.
> >  
> > +Documentation/control-dependencies.txt
> > +	A guide to preventing compiler optimizations from destroying
> > +	your control dependencies.
> > +
> >  Documentation/explanation.txt
> >  	Describes the memory model in detail.
> [...]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives
  2020-08-31 18:20 ` [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives paulmck
  2020-08-31 22:34   ` Akira Yokosawa
@ 2020-09-01  1:23   ` Alan Stern
  2020-09-01  2:58     ` Paul E. McKenney
  1 sibling, 1 reply; 30+ messages in thread
From: Alan Stern @ 2020-09-01  1:23 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Mon, Aug 31, 2020 at 11:20:36AM -0700, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> The Linux kernel has a number of categories of ordering primitives, which
> are recorded in the LKMM implementation and hinted at by cheatsheet.txt.
> But there is no overview of these categories, and such an overview
> is needed in order to understand multithreaded LKMM litmus tests.
> This commit therefore adds an ordering.txt as well as extracting a
> control-dependencies.txt from memory-barriers.txt.  It also updates the
> README file.
> 
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---

This document could use some careful editing.  But one pair of errors
stands out in particular:

> diff --git a/tools/memory-model/Documentation/ordering.txt b/tools/memory-model/Documentation/ordering.txt
> new file mode 100644
> index 0000000..4b2cc55
> --- /dev/null
> +++ b/tools/memory-model/Documentation/ordering.txt

> +2.	Ordered memory accesses.  These operations order themselves
> +	against some or all of the CPUs prior or subsequent accesses,
> +	depending on the category of operation.
> +
> +	a.	Release operations.  This category includes
> +		smp_store_release(), atomic_set_release(),
> +		rcu_assign_pointer(), and value-returning RMW operations
> +		whose names end in _release.  These operations order
> +		their own store against all of the CPU's subsequent
---------------------------------------------------------^^^^^^^^^^
> +		memory accesses.
> +
> +	b.	Acquire operations.  This category includes
> +		smp_load_acquire(), atomic_read_acquire(), and
> +		value-returning RMW operations whose names end in
> +		_acquire.  These operations order their own load against
> +		all of the CPU's prior memory accesses.
---------------------------------^^^^^

Double-oops!

Alan

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-08-31 21:47     ` Paul E. McKenney
@ 2020-09-01  1:45       ` Alan Stern
  2020-09-01 17:04         ` Paul E. McKenney
  0 siblings, 1 reply; 30+ messages in thread
From: Alan Stern @ 2020-09-01  1:45 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Mon, Aug 31, 2020 at 02:47:38PM -0700, Paul E. McKenney wrote:
> On Mon, Aug 31, 2020 at 04:17:01PM -0400, Alan Stern wrote:

> > Is this discussion perhaps overkill?
> > 
> > Let's put it this way: Suppose we have the following code:
> > 
> > 	P0(int *x, int *lck)
> > 	{
> > 		spin_lock(lck);
> > 		WRITE_ONCE(*x, 1);
> > 		do_something();
> > 		spin_unlock(lck);
> > 	}
> > 
> > 	P1(int *x, int *lck)
> > 	{
> > 		while (READ_ONCE(*x) == 0)
> > 			;
> > 		spin_lock(lck);
> > 		do_something_else();
> > 		spin_unlock(lck);
> > 	}
> > 
> > It's obvious that this test won't deadlock.  But if P1 is changed to:
> > 
> > 	P1(int *x, int *lck)
> > 	{
> > 		spin_lock(lck);
> > 		while (READ_ONCE(*x) == 0)
> > 			;
> > 		do_something_else();
> > 		spin_unlock(lck);
> > 	}
> > 
> > then it's equally obvious that the test can deadlock.  No need for
> > fancy memory models or litmus tests or anything else.
> 
> For people like you and me, who have been thinking about memory ordering
> for longer than either of us care to admit, this level of exposition is
> most definitely -way- overkill!!!
> 
> But I have had people be very happy and grateful that I explained this to
> them at this level of detail.  Yes, I started parallel programming before
> some of them were born, but they are definitely within our target audience
> for this particular document.  And it is not just Linux kernel hackers
> who need this level of detail.  A roughly similar transactional-memory
> scenario proved to be so non-obvious to any number of noted researchers
> that Blundell, Lewis, and Martin needed to feature it in this paper:
> https://ieeexplore.ieee.org/abstract/document/4069174
> (Alternative source: https://repository.upenn.edu/cgi/viewcontent.cgi?article=1344&context=cis_papers)
> 
> Please note that I am -not- advocating making (say) explanation.txt or
> recipes.txt more newbie-accessible than they already are.  After all,
> the point of the README file in that same directory is to direct people
> to the documentation files that are the best fit for them, and both
> explanation.txt and recipes.txt contain advanced material, and thus
> require similarly advanced prerequisites.
> 
> Seem reasonable, or am I missing your point?

The question is, what are you trying to accomplish in this section?  Are 
you trying to demonstrate that it isn't safe to allow arbitrary code to 
leak into a critical section?  If so then you don't need to present an 
LKMM litmus test to make the point; the example I gave here will do 
quite as well.  Perhaps even better, since it doesn't drag in all sorts 
of extraneous concepts like limitations of litmus tests or how to 
emulate a spin loop.

On the other hand, if your goal is to show how to construct a litmus 
test that will model a particular C language test case (such as the one 
I gave), then the text does a reasonable job -- although I do think it 
could be clarified somewhat.  For instance, it wouldn't hurt to include 
the real C code before giving the corresponding litmus test, so that the 
reader will have a clear idea of what you're trying to model.

Just what you want to achieve here is not clear from the context.

Besides, the example is in any case a straw man.  The text starts out 
saying "It is tempting to allow memory-reference instructions to be 
pulled into a critical section", but then the example pulls an entire 
spin loop inside -- not just the memory references but also the 
conditional branch instruction at the bottom of the loop!  I can't 
imagine anyone would think it was safe to allow branches to leak into a 
critical section, particularly when doing so would break a control 
dependency (as it does here).

Alan

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives
  2020-09-01  1:23   ` Alan Stern
@ 2020-09-01  2:58     ` Paul E. McKenney
  0 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2020-09-01  2:58 UTC (permalink / raw)
  To: Alan Stern
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Mon, Aug 31, 2020 at 09:23:09PM -0400, Alan Stern wrote:
> On Mon, Aug 31, 2020 at 11:20:36AM -0700, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > The Linux kernel has a number of categories of ordering primitives, which
> > are recorded in the LKMM implementation and hinted at by cheatsheet.txt.
> > But there is no overview of these categories, and such an overview
> > is needed in order to understand multithreaded LKMM litmus tests.
> > This commit therefore adds an ordering.txt as well as extracting a
> > control-dependencies.txt from memory-barriers.txt.  It also updates the
> > README file.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> 
> This document could use some careful editing.  But one pair of errors
> stands out in particular:
> 
> > diff --git a/tools/memory-model/Documentation/ordering.txt b/tools/memory-model/Documentation/ordering.txt
> > new file mode 100644
> > index 0000000..4b2cc55
> > --- /dev/null
> > +++ b/tools/memory-model/Documentation/ordering.txt
> 
> > +2.	Ordered memory accesses.  These operations order themselves
> > +	against some or all of the CPUs prior or subsequent accesses,
> > +	depending on the category of operation.
> > +
> > +	a.	Release operations.  This category includes
> > +		smp_store_release(), atomic_set_release(),
> > +		rcu_assign_pointer(), and value-returning RMW operations
> > +		whose names end in _release.  These operations order
> > +		their own store against all of the CPU's subsequent
> ---------------------------------------------------------^^^^^^^^^^
> > +		memory accesses.
> > +
> > +	b.	Acquire operations.  This category includes
> > +		smp_load_acquire(), atomic_read_acquire(), and
> > +		value-returning RMW operations whose names end in
> > +		_acquire.  These operations order their own load against
> > +		all of the CPU's prior memory accesses.
> ---------------------------------^^^^^
> 
> Double-oops!

Hey, at least I am consistently wrong!  ;-)

Fixed, thank you!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-09-01  1:45       ` Alan Stern
@ 2020-09-01 17:04         ` Paul E. McKenney
  2020-09-01 20:11           ` Alan Stern
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2020-09-01 17:04 UTC (permalink / raw)
  To: Alan Stern
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Mon, Aug 31, 2020 at 09:45:04PM -0400, Alan Stern wrote:
> On Mon, Aug 31, 2020 at 02:47:38PM -0700, Paul E. McKenney wrote:
> > On Mon, Aug 31, 2020 at 04:17:01PM -0400, Alan Stern wrote:
> 
> > > Is this discussion perhaps overkill?
> > > 
> > > Let's put it this way: Suppose we have the following code:
> > > 
> > > 	P0(int *x, int *lck)
> > > 	{
> > > 		spin_lock(lck);
> > > 		WRITE_ONCE(*x, 1);
> > > 		do_something();
> > > 		spin_unlock(lck);
> > > 	}
> > > 
> > > 	P1(int *x, int *lck)
> > > 	{
> > > 		while (READ_ONCE(*x) == 0)
> > > 			;
> > > 		spin_lock(lck);
> > > 		do_something_else();
> > > 		spin_unlock(lck);
> > > 	}
> > > 
> > > It's obvious that this test won't deadlock.  But if P1 is changed to:
> > > 
> > > 	P1(int *x, int *lck)
> > > 	{
> > > 		spin_lock(lck);
> > > 		while (READ_ONCE(*x) == 0)
> > > 			;
> > > 		do_something_else();
> > > 		spin_unlock(lck);
> > > 	}
> > > 
> > > then it's equally obvious that the test can deadlock.  No need for
> > > fancy memory models or litmus tests or anything else.
> > 
> > For people like you and me, who have been thinking about memory ordering
> > for longer than either of us care to admit, this level of exposition is
> > most definitely -way- overkill!!!
> > 
> > But I have had people be very happy and grateful that I explained this to
> > them at this level of detail.  Yes, I started parallel programming before
> > some of them were born, but they are definitely within our target audience
> > for this particular document.  And it is not just Linux kernel hackers
> > who need this level of detail.  A roughly similar transactional-memory
> > scenario proved to be so non-obvious to any number of noted researchers
> > that Blundell, Lewis, and Martin needed to feature it in this paper:
> > https://ieeexplore.ieee.org/abstract/document/4069174
> > (Alternative source: https://repository.upenn.edu/cgi/viewcontent.cgi?article=1344&context=cis_papers)
> > 
> > Please note that I am -not- advocating making (say) explanation.txt or
> > recipes.txt more newbie-accessible than they already are.  After all,
> > the point of the README file in that same directory is to direct people
> > to the documentation files that are the best fit for them, and both
> > explanation.txt and recipes.txt contain advanced material, and thus
> > require similarly advanced prerequisites.
> > 
> > Seem reasonable, or am I missing your point?
> 
> The question is, what are you trying to accomplish in this section?  Are 
> you trying to demonstrate that it isn't safe to allow arbitrary code to 
> leak into a critical section?  If so then you don't need to present an 
> LKMM litmus test to make the point; the example I gave here will do 
> quite as well.  Perhaps even better, since it doesn't drag in all sorts 
> of extraneous concepts like limitations of litmus tests or how to 
> emulate a spin loop.
> 
> On the other hand, if your goal is to show how to construct a litmus 
> test that will model a particular C language test case (such as the one 
> I gave), then the text does a reasonable job -- although I do think it 
> could be clarified somewhat.  For instance, it wouldn't hurt to include 
> the real C code before giving the corresponding litmus test, so that the 
> reader will have a clear idea of what you're trying to model.

Makes sense.  I can apply this at some point, but I would welcome a patch
from you, which I would be happy to fold in with your Codeveloped-by.

> Just what you want to achieve here is not clear from the context.

People who have internalized the "roach motel" model of locking
(https://www.cs.umd.edu/~pugh/java/memoryModel/BidirectionalMemoryBarrier.html)
need their internalization adjusted.

> Besides, the example is in any case a straw man.  The text starts out 
> saying "It is tempting to allow memory-reference instructions to be 
> pulled into a critical section", but then the example pulls an entire 
> spin loop inside -- not just the memory references but also the 
> conditional branch instruction at the bottom of the loop!  I can't 
> imagine anyone would think it was safe to allow branches to leak into a 
> critical section, particularly when doing so would break a control 
> dependency (as it does here).

Most people outside of a few within the Linux kernel community and within
the various hardware memory-ordering communities don't know that control
dependencies even exist, so could not be expected to see any danger
in rather thoroughly folding, spindling, or otherwise mutilating them,
let alone pulling them into a lock-based critical section.  And many in
the various toolchain communities see dependencies of any sort as an
impediment to performance that should be broken wherever and whenever
possible.

That said, a less prejudicial introduction to this example might be good.
What did you have in mind?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-09-01 17:04         ` Paul E. McKenney
@ 2020-09-01 20:11           ` Alan Stern
  2020-09-03 23:45             ` Paul E. McKenney
  0 siblings, 1 reply; 30+ messages in thread
From: Alan Stern @ 2020-09-01 20:11 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Tue, Sep 01, 2020 at 10:04:21AM -0700, Paul E. McKenney wrote:
> On Mon, Aug 31, 2020 at 09:45:04PM -0400, Alan Stern wrote:

> > The question is, what are you trying to accomplish in this section?  Are 
> > you trying to demonstrate that it isn't safe to allow arbitrary code to 
> > leak into a critical section?  If so then you don't need to present an 
> > LKMM litmus test to make the point; the example I gave here will do 
> > quite as well.  Perhaps even better, since it doesn't drag in all sorts 
> > of extraneous concepts like limitations of litmus tests or how to 
> > emulate a spin loop.
> > 
> > On the other hand, if your goal is to show how to construct a litmus 
> > test that will model a particular C language test case (such as the one 
> > I gave), then the text does a reasonable job -- although I do think it 
> > could be clarified somewhat.  For instance, it wouldn't hurt to include 
> > the real C code before giving the corresponding litmus test, so that the 
> > reader will have a clear idea of what you're trying to model.
> 
> Makes sense.  I can apply this at some point, but I would welcome a patch
> from you, which I would be happy to fold in with your Codeveloped-by.

I don't have time to work on these documents now.  Maybe later on.  They 
do need some serious editing.  (You could try reading through them 
carefully yourself; I'm sure you'd find a lot of things to fix up.)

Incidentally, your patch bomb from yesterday was the first time I had 
seen these things (except for the litmus-test format document).

> > Just what you want to achieve here is not clear from the context.
> 
> People who have internalized the "roach motel" model of locking
> (https://www.cs.umd.edu/~pugh/java/memoryModel/BidirectionalMemoryBarrier.html)
> need their internalization adjusted.

Shucks, if you only want to show that letting arbitrary code (i.e., 
branches) migrate into a critical section is unsafe, all you need is 
this uniprocessor example:

	P0(int *sl)
	{
		goto Skip;
		spin_lock(sl);
		spin_unlock(sl);
	Skip:
		spin_lock(sl);
		spin_unlock(sl);
	}

This does nothing, but it runs fine.  Letting the branch move into the first
critical section gives:

	P0(int *sl)
	{
		spin_lock(sl);
		goto Skip;
		spin_unlock(sl);
	Skip:
		spin_lock(sl);
		spin_unlock(sl);
	}

which self-deadlocks 100% of the time.  You don't need to know anything 
about memory models or concurrency to understand this.

On the other hand, if you want to show that letting memory accesses leak 
into a critical section is unsafe then you need a different example: 
spin loops won't do it.

> > Besides, the example is in any case a straw man.  The text starts out 
> > saying "It is tempting to allow memory-reference instructions to be 
> > pulled into a critical section", but then the example pulls an entire 
> > spin loop inside -- not just the memory references but also the 
> > conditional branch instruction at the bottom of the loop!  I can't 
> > imagine anyone would think it was safe to allow branches to leak into a 
> > critical section, particularly when doing so would break a control 
> > dependency (as it does here).
> 
> Most people outside of a few within the Linux kernel community and within
> the various hardware memory-ordering communities don't know that control
> dependencies even exist, so could not be expected to see any danger
> in rather thoroughly folding, spindling, or otherwise mutilating them,
> let alone pulling them into a lock-based critical section.  And many in
> the various toolchain communities see dependencies of any sort as an
> impediment to performance that should be broken wherever and whenever
> possible.
> 
> That said, a less prejudicial introduction to this example might be good.
> What did you have in mind?

Again, it depends on what example is intended to accomplish (which you 
still haven't said explicitly).  Whatever it is, I don't think the 
current text is a good way to do it.

Alan

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-08-31 18:20 ` [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed paulmck
@ 2020-09-02  3:54   ` Boqun Feng
  2020-09-02 10:14     ` peterz
  0 siblings, 1 reply; 30+ messages in thread
From: Boqun Feng @ 2020-09-02  3:54 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, peterz, npiggin, dhowells, j.alglave,
	luc.maranget, akiyks

On Mon, Aug 31, 2020 at 11:20:34AM -0700, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> This commit adds a key entry enumerating the various types of relaxed
> operations.
> 
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>  tools/memory-model/Documentation/cheatsheet.txt | 27 ++++++++++++++-----------
>  1 file changed, 15 insertions(+), 12 deletions(-)
> 
> diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> index 33ba98d..31b814d 100644
> --- a/tools/memory-model/Documentation/cheatsheet.txt
> +++ b/tools/memory-model/Documentation/cheatsheet.txt
> @@ -5,7 +5,7 @@
>  
>  Store, e.g., WRITE_ONCE()            Y                                       Y
>  Load, e.g., READ_ONCE()              Y                          Y   Y        Y
> -Unsuccessful RMW operation           Y                          Y   Y        Y
> +Relaxed operation                    Y                          Y   Y        Y
>  rcu_dereference()                    Y                          Y   Y        Y
>  Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
>  Successful *_release()         C        Y  Y    Y     W                      Y
> @@ -17,14 +17,17 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
>  smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
>  
>  
> -Key:	C:	Ordering is cumulative
> -	P:	Ordering propagates
> -	R:	Read, for example, READ_ONCE(), or read portion of RMW
> -	W:	Write, for example, WRITE_ONCE(), or write portion of RMW
> -	Y:	Provides ordering
> -	a:	Provides ordering given intervening RMW atomic operation
> -	DR:	Dependent read (address dependency)
> -	DW:	Dependent write (address, data, or control dependency)
> -	RMW:	Atomic read-modify-write operation
> -	SELF:	Orders self, as opposed to accesses before and/or after
> -	SV:	Orders later accesses to the same variable
> +Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> +		  operation, an unsuccessful RMW operation, or one of
> +		  the atomic_read() and atomic_set() family of operations.

To be accurate, atomic_set() doesn't return any value, so it cannot be
ordered against DR and DW ;-)

I think we can split the Relaxed family into two groups:

void Relaxed: atomic_set() or atomic RMW operations that don't return
              any value (e.g atomic_inc())

non-void Relaxed: a *_relaxed() RMW operation, an unsuccessful RMW
                  operation, or atomic_read().

And "void Relaxed" is similar to WRITE_ONCE(): only its "SELF" and "SV"
entries are "Y", while "non-void Relaxed" plays the same role as
"Relaxed" does in this patch.

Thoughts?

Regards,
Boqun


> +	C:	  Ordering is cumulative
> +	P:	  Ordering propagates
> +	R:	  Read, for example, READ_ONCE(), or read portion of RMW
> +	W:	  Write, for example, WRITE_ONCE(), or write portion of RMW
> +	Y:	  Provides ordering
> +	a:	  Provides ordering given intervening RMW atomic operation
> +	DR:	  Dependent read (address dependency)
> +	DW:	  Dependent write (address, data, or control dependency)
> +	RMW:	  Atomic read-modify-write operation
> +	SELF:	  Orders self, as opposed to accesses before and/or after
> +	SV:	  Orders later accesses to the same variable
> -- 
> 2.9.5
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-02  3:54   ` Boqun Feng
@ 2020-09-02 10:14     ` peterz
  2020-09-02 12:37       ` Boqun Feng
  0 siblings, 1 reply; 30+ messages in thread
From: peterz @ 2020-09-02 10:14 UTC (permalink / raw)
  To: Boqun Feng
  Cc: paulmck, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Wed, Sep 02, 2020 at 11:54:48AM +0800, Boqun Feng wrote:
> On Mon, Aug 31, 2020 at 11:20:34AM -0700, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > This commit adds a key entry enumerating the various types of relaxed
> > operations.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >  tools/memory-model/Documentation/cheatsheet.txt | 27 ++++++++++++++-----------
> >  1 file changed, 15 insertions(+), 12 deletions(-)
> > 
> > diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> > index 33ba98d..31b814d 100644
> > --- a/tools/memory-model/Documentation/cheatsheet.txt
> > +++ b/tools/memory-model/Documentation/cheatsheet.txt
> > @@ -5,7 +5,7 @@
> >  
> >  Store, e.g., WRITE_ONCE()            Y                                       Y
> >  Load, e.g., READ_ONCE()              Y                          Y   Y        Y
> > -Unsuccessful RMW operation           Y                          Y   Y        Y
> > +Relaxed operation                    Y                          Y   Y        Y
> >  rcu_dereference()                    Y                          Y   Y        Y
> >  Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
> >  Successful *_release()         C        Y  Y    Y     W                      Y
> > @@ -17,14 +17,17 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> >  smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> >  
> >  
> > -Key:	C:	Ordering is cumulative
> > -	P:	Ordering propagates
> > -	R:	Read, for example, READ_ONCE(), or read portion of RMW
> > -	W:	Write, for example, WRITE_ONCE(), or write portion of RMW
> > -	Y:	Provides ordering
> > -	a:	Provides ordering given intervening RMW atomic operation
> > -	DR:	Dependent read (address dependency)
> > -	DW:	Dependent write (address, data, or control dependency)
> > -	RMW:	Atomic read-modify-write operation
> > -	SELF:	Orders self, as opposed to accesses before and/or after
> > -	SV:	Orders later accesses to the same variable
> > +Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > +		  operation, an unsuccessful RMW operation, or one of
> > +		  the atomic_read() and atomic_set() family of operations.
> 
> To be accurate, atomic_set() doesn't return any value, so it cannot be
> ordered against DR and DW ;-)

Surely DW is valid for any store.

> I think we can split the Relaxed family into two groups:
> 
> void Relaxed: atomic_set() or atomic RMW operations that don't return
>               any value (e.g atomic_inc())
> 
> non-void Relaxed: a *_relaxed() RMW operation, an unsuccessful RMW
>                   operation, or atomic_read().
> 
> And "void Relaxed" is similar to WRITE_ONCE(): only its "SELF" and "SV"
> entries are "Y", while "non-void Relaxed" plays the same role as
> "Relaxed" does in this patch.
> 
> Thoughts?

I get confused by the mention of all this atomic_read() and atomic_set()
crud in the first place; why are they called out separately from any
other regular load/store?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-02 10:14     ` peterz
@ 2020-09-02 12:37       ` Boqun Feng
  2020-09-02 12:47         ` peterz
  2020-09-03 23:30         ` Paul E. McKenney
  0 siblings, 2 replies; 30+ messages in thread
From: Boqun Feng @ 2020-09-02 12:37 UTC (permalink / raw)
  To: peterz
  Cc: paulmck, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Wed, Sep 02, 2020 at 12:14:12PM +0200, peterz@infradead.org wrote:
> On Wed, Sep 02, 2020 at 11:54:48AM +0800, Boqun Feng wrote:
> > On Mon, Aug 31, 2020 at 11:20:34AM -0700, paulmck@kernel.org wrote:
> > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > > 
> > > This commit adds a key entry enumerating the various types of relaxed
> > > operations.
> > > 
> > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > ---
> > >  tools/memory-model/Documentation/cheatsheet.txt | 27 ++++++++++++++-----------
> > >  1 file changed, 15 insertions(+), 12 deletions(-)
> > > 
> > > diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> > > index 33ba98d..31b814d 100644
> > > --- a/tools/memory-model/Documentation/cheatsheet.txt
> > > +++ b/tools/memory-model/Documentation/cheatsheet.txt
> > > @@ -5,7 +5,7 @@
> > >  
> > >  Store, e.g., WRITE_ONCE()            Y                                       Y
> > >  Load, e.g., READ_ONCE()              Y                          Y   Y        Y
> > > -Unsuccessful RMW operation           Y                          Y   Y        Y
> > > +Relaxed operation                    Y                          Y   Y        Y
> > >  rcu_dereference()                    Y                          Y   Y        Y
> > >  Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
> > >  Successful *_release()         C        Y  Y    Y     W                      Y
> > > @@ -17,14 +17,17 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> > >  smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> > >  
> > >  
> > > -Key:	C:	Ordering is cumulative
> > > -	P:	Ordering propagates
> > > -	R:	Read, for example, READ_ONCE(), or read portion of RMW
> > > -	W:	Write, for example, WRITE_ONCE(), or write portion of RMW
> > > -	Y:	Provides ordering
> > > -	a:	Provides ordering given intervening RMW atomic operation
> > > -	DR:	Dependent read (address dependency)
> > > -	DW:	Dependent write (address, data, or control dependency)
> > > -	RMW:	Atomic read-modify-write operation
> > > -	SELF:	Orders self, as opposed to accesses before and/or after
> > > -	SV:	Orders later accesses to the same variable
> > > +Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > > +		  operation, an unsuccessful RMW operation, or one of
> > > +		  the atomic_read() and atomic_set() family of operations.
> > 
> > To be accurate, atomic_set() doesn't return any value, so it cannot be
> > ordered against DR and DW ;-)
> 
> Surely DW is valid for any store.
> 

IIUC, the DW column indicates whether the corresponding operation (in
this case, atomic_set()) is ordered against any write that depends on
it.  I don't think there is any such thing as a write->write dependency,
so DW for atomic_set() should not be Y, just as it is not for
WRITE_ONCE().

> > I think we can split the Relaxed family into two groups:
> > 
> > void Relaxed: atomic_set() or atomic RMW operations that don't return
> >               any value (e.g atomic_inc())
> > 
> > non-void Relaxed: a *_relaxed() RMW operation, an unsuccessful RMW
> >                   operation, or atomic_read().
> > 
> > And "void Relaxed" is similar to WRITE_ONCE(): only its "SELF" and "SV"
> > entries are "Y", while "non-void Relaxed" plays the same role as
> > "Relaxed" does in this patch.
> > 
> > Thoughts?
> 
> > I get confused by the mention of all this atomic_read() and atomic_set()
> > crud in the first place; why are they called out separately from any
> > other regular load/store?

Agreed. Probably we should fold those two operations into "Load" and
"Store" cases.

Regards,
Boqun

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-02 12:37       ` Boqun Feng
@ 2020-09-02 12:47         ` peterz
  2020-09-03 23:30         ` Paul E. McKenney
  1 sibling, 0 replies; 30+ messages in thread
From: peterz @ 2020-09-02 12:47 UTC (permalink / raw)
  To: Boqun Feng
  Cc: paulmck, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Wed, Sep 02, 2020 at 08:37:15PM +0800, Boqun Feng wrote:
> On Wed, Sep 02, 2020 at 12:14:12PM +0200, peterz@infradead.org wrote:

> > > To be accurate, atomic_set() doesn't return any value, so it cannot be
> > > ordered against DR and DW ;-)
> > 
> > Surely DW is valid for any store.
> > 
> 
> IIUC, the DW column indicates whether the corresponding operation (in
> this case, atomic_set()) is ordered against any write that depends on
> it.  I don't think there is any such thing as a write->write dependency,
> so DW for atomic_set() should not be Y, just as it is not for
> WRITE_ONCE().

Ah, that just shows I can't read, I suppose ;-) I thought we were talking
about the other side of the dependency.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-02 12:37       ` Boqun Feng
  2020-09-02 12:47         ` peterz
@ 2020-09-03 23:30         ` Paul E. McKenney
  2020-09-04  0:59           ` Boqun Feng
  1 sibling, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2020-09-03 23:30 UTC (permalink / raw)
  To: Boqun Feng
  Cc: peterz, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Wed, Sep 02, 2020 at 08:37:15PM +0800, Boqun Feng wrote:
> On Wed, Sep 02, 2020 at 12:14:12PM +0200, peterz@infradead.org wrote:
> > On Wed, Sep 02, 2020 at 11:54:48AM +0800, Boqun Feng wrote:
> > > On Mon, Aug 31, 2020 at 11:20:34AM -0700, paulmck@kernel.org wrote:
> > > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > > > 
> > > > This commit adds a key entry enumerating the various types of relaxed
> > > > operations.
> > > > 
> > > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > > ---
> > > >  tools/memory-model/Documentation/cheatsheet.txt | 27 ++++++++++++++-----------
> > > >  1 file changed, 15 insertions(+), 12 deletions(-)
> > > > 
> > > > diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> > > > index 33ba98d..31b814d 100644
> > > > --- a/tools/memory-model/Documentation/cheatsheet.txt
> > > > +++ b/tools/memory-model/Documentation/cheatsheet.txt
> > > > @@ -5,7 +5,7 @@
> > > >  
> > > >  Store, e.g., WRITE_ONCE()            Y                                       Y
> > > >  Load, e.g., READ_ONCE()              Y                          Y   Y        Y
> > > > -Unsuccessful RMW operation           Y                          Y   Y        Y
> > > > +Relaxed operation                    Y                          Y   Y        Y
> > > >  rcu_dereference()                    Y                          Y   Y        Y
> > > >  Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
> > > >  Successful *_release()         C        Y  Y    Y     W                      Y
> > > > @@ -17,14 +17,17 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> > > >  smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> > > >  
> > > >  
> > > > -Key:	C:	Ordering is cumulative
> > > > -	P:	Ordering propagates
> > > > -	R:	Read, for example, READ_ONCE(), or read portion of RMW
> > > > -	W:	Write, for example, WRITE_ONCE(), or write portion of RMW
> > > > -	Y:	Provides ordering
> > > > -	a:	Provides ordering given intervening RMW atomic operation
> > > > -	DR:	Dependent read (address dependency)
> > > > -	DW:	Dependent write (address, data, or control dependency)
> > > > -	RMW:	Atomic read-modify-write operation
> > > > -	SELF:	Orders self, as opposed to accesses before and/or after
> > > > -	SV:	Orders later accesses to the same variable
> > > > +Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > > > +		  operation, an unsuccessful RMW operation, or one of
> > > > +		  the atomic_read() and atomic_set() family of operations.
> > > 
> > > To be accurate, atomic_set() doesn't return any value, so it cannot be
> > > ordered against DR and DW ;-)
> > 
> > Surely DW is valid for any store.
> > 
> 
> IIUC, the DW column indicates whether the corresponding operation (in
> this case, atomic_set()) is ordered against any write that depends on
> it.  I don't think there is any such thing as a write->write dependency,
> so DW for atomic_set() should not be Y, just as it is not for
> WRITE_ONCE().
> 
> > > I think we can split the Relaxed family into two groups:
> > > 
> > > void Relaxed: atomic_set() or atomic RMW operations that don't return
> > >               any value (e.g atomic_inc())
> > > 
> > > non-void Relaxed: a *_relaxed() RMW operation, an unsuccessful RMW
> > >                   operation, or atomic_read().
> > > 
> > > And "void Relaxed" is similar to WRITE_ONCE(): only its "SELF" and "SV"
> > > entries are "Y", while "non-void Relaxed" plays the same role as
> > > "Relaxed" does in this patch.
> > > 
> > > Thoughts?
> > 
> > I get confused by the mention of all this atomic_read() and atomic_set()
> > crud in the first place; why are they called out separately from any
> > other regular load/store?
> 
> Agreed. Probably we should fold those two operations into "Load" and
> "Store" cases.

All good points.

How about like this, adding "Relaxed" to the WRITE_ONCE() and READ_ONCE()
rows and "RMW" to the "Relaxed operation" row?

The file contents are followed by a diff against the previous version.

							Thanx, Paul

------------------------------------------------------------------------

                                  Prior Operation     Subsequent Operation
                                  ---------------  ---------------------------
                               C  Self  R  W  RMW  Self  R  W  DR  DW  RMW  SV
                              --  ----  -  -  ---  ----  -  -  --  --  ---  --

Relaxed store                        Y                                       Y
Relaxed load                         Y                          Y   Y        Y
Relaxed RMW operation                Y                          Y   Y        Y
rcu_dereference()                    Y                          Y   Y        Y
Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
Successful *_release()         C        Y  Y    Y     W                      Y
smp_rmb()                               Y       R        Y      Y        R
smp_wmb()                                  Y    W           Y       Y    W
smp_mb() & synchronize_rcu()  CP        Y  Y    Y        Y  Y   Y   Y    Y
Successful full non-void RMW  CP     Y  Y  Y    Y     Y  Y  Y   Y   Y    Y   Y
smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y


Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
		  operation, an unsuccessful RMW operation, READ_ONCE(),
		  WRITE_ONCE(), or one of the atomic_read() and
		  atomic_set() family of operations.
	C:	  Ordering is cumulative
	P:	  Ordering propagates
	R:	  Read, for example, READ_ONCE(), or read portion of RMW
	W:	  Write, for example, WRITE_ONCE(), or write portion of RMW
	Y:	  Provides ordering
	a:	  Provides ordering given intervening RMW atomic operation
	DR:	  Dependent read (address dependency)
	DW:	  Dependent write (address, data, or control dependency)
	RMW:	  Atomic read-modify-write operation
	SELF:	  Orders self, as opposed to accesses before and/or after
	SV:	  Orders later accesses to the same variable

------------------------------------------------------------------------

diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
index 31b814d..4146b8d 100644
--- a/tools/memory-model/Documentation/cheatsheet.txt
+++ b/tools/memory-model/Documentation/cheatsheet.txt
@@ -3,9 +3,9 @@
                                C  Self  R  W  RMW  Self  R  W  DR  DW  RMW  SV
                               --  ----  -  -  ---  ----  -  -  --  --  ---  --
 
-Store, e.g., WRITE_ONCE()            Y                                       Y
-Load, e.g., READ_ONCE()              Y                          Y   Y        Y
-Relaxed operation                    Y                          Y   Y        Y
+Relaxed store                        Y                                       Y
+Relaxed load                         Y                          Y   Y        Y
+Relaxed RMW operation                Y                          Y   Y        Y
 rcu_dereference()                    Y                          Y   Y        Y
 Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
 Successful *_release()         C        Y  Y    Y     W                      Y
@@ -18,8 +18,9 @@ smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
 
 
 Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
-		  operation, an unsuccessful RMW operation, or one of
-		  the atomic_read() and atomic_set() family of operations.
+		  operation, an unsuccessful RMW operation, READ_ONCE(),
+		  WRITE_ONCE(), or one of the atomic_read() and
+		  atomic_set() family of operations.
 	C:	  Ordering is cumulative
 	P:	  Ordering propagates
 	R:	  Read, for example, READ_ONCE(), or read portion of RMW

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-09-01 20:11           ` Alan Stern
@ 2020-09-03 23:45             ` Paul E. McKenney
  2020-09-04 19:52               ` Alan Stern
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2020-09-03 23:45 UTC (permalink / raw)
  To: Alan Stern
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Tue, Sep 01, 2020 at 04:11:10PM -0400, Alan Stern wrote:
> On Tue, Sep 01, 2020 at 10:04:21AM -0700, Paul E. McKenney wrote:
> > On Mon, Aug 31, 2020 at 09:45:04PM -0400, Alan Stern wrote:
> 
> > > The question is, what are you trying to accomplish in this section?  Are 
> > > you trying to demonstrate that it isn't safe to allow arbitrary code to 
> > > leak into a critical section?  If so then you don't need to present an 
> > > LKMM litmus test to make the point; the example I gave here will do 
> > > quite as well.  Perhaps even better, since it doesn't drag in all sorts 
> > > of extraneous concepts like limitations of litmus tests or how to 
> > > emulate a spin loop.
> > > 
> > > On the other hand, if your goal is to show how to construct a litmus 
> > > test that will model a particular C language test case (such as the one 
> > > I gave), then the text does a reasonable job -- although I do think it 
> > > could be clarified somewhat.  For instance, it wouldn't hurt to include 
> > > the real C code before giving the corresponding litmus test, so that the 
> > > reader will have a clear idea of what you're trying to model.
> > 
> > Makes sense.  I can apply this at some point, but I would welcome a patch
> > from you, which I would be happy to fold in with your Codeveloped-by.
> 
> I don't have time to work on these documents now.  Maybe later on.  They 
> do need some serious editing.  (You could try reading through them 
> carefully yourself; I'm sure you'd find a lot of things to fix up.)
> 
> Incidentally, your patch bomb from yesterday was the first time I had 
> seen these things (except for the litmus-test format document).

The hope was to have a good version of them completed some weeks ago,
but life intervened.

My current thought is to move these three patches out of my queue for
v5.10 to try again in v5.11:

0b8c06b75ea1 ("tools/memory-model: Add a simple entry point document")
dc372dc0dc89 ("tools/memory-model: Move Documentation description to Documentation/README")
0d9aaf8df7cb ("tools/memory-model: Document categories of ordering primitives")
35dd5f6d17a0 ("tools/memory-model:  Document locking corner cases")

These would remain in my v5.10 queue:

1e44e6e82e7b ("Replace HTTP links with HTTPS ones: LKMM")
cc9628b45c9f ("tools/memory-model: Update recipes.txt prime_numbers.c path")
984f272be9d7 ("tools/memory-model: Improve litmus-test documentation")
7c22cf3b731f ("tools/memory-model: Expand the cheatsheet.txt notion of relaxed")
	(But with the updates from the other thread.)

Does that work?  If not, what would?

> > > Just what you want to achieve here is not clear from the context.
> > 
> > People who have internalized the "roach motel" model of locking
> > (https://www.cs.umd.edu/~pugh/java/memoryModel/BidirectionalMemoryBarrier.html)
> > need their internalization adjusted.
> 
> Shucks, if you only want to show that letting arbitrary code (i.e., 
> branches) migrate into a critical section is unsafe, all you need is 
> this uniprocessor example:
> 
> 	P0(int *sl)
> 	{
> 		goto Skip;
> 		spin_lock(sl);
> 		spin_unlock(sl);
> 	Skip:
> 		spin_lock(sl);
> 		spin_unlock(sl);
> 	}
> 
> This does nothing but runs fine.  Letting the branch move into the first 
> critical section gives:
> 
> 	P0(int *sl)
> 	{
> 		spin_lock(sl);
> 		goto Skip;
> 		spin_unlock(sl);
> 	Skip:
> 		spin_lock(sl);
> 		spin_unlock(sl);
> 	}
> 
> which self-deadlocks 100% of the time.  You don't need to know anything 
> about memory models or concurrency to understand this.

Although your example does an excellent job of illustrating the general
point about branches, I am not convinced that it would be seen as
demonstrating the dangers of moving an entire loop into a critical
section.

> On the other hand, if you want to show that letting memory accesses leak 
> into a critical section is unsafe then you need a different example: 
> spin loops won't do it.

I am not immediately coming up with an example that is broken by leaking
isolated memory accesses into a critical section.  I will give it some
more thought.

> > > Besides, the example is in any case a straw man.  The text starts out 
> > > saying "It is tempting to allow memory-reference instructions to be 
> > > pulled into a critical section", but then the example pulls an entire 
> > > spin loop inside -- not just the memory references but also the 
> > > conditional branch instruction at the bottom of the loop!  I can't 
> > > imagine anyone would think it was safe to allow branches to leak into a 
> > > critical section, particularly when doing so would break a control 
> > > dependency (as it does here).
> > 
> > Most people outside of a few within the Linux kernel community and within
> > the various hardware memory-ordering communities don't know that control
> > dependencies even exist, so could not be expected to see any danger
> > in rather thoroughly folding, spindling, or otherwise mutilating them,
> > let alone pulling them into a lock-based critical section.  And many in
> > the various toolchain communities see dependencies of any sort as an
> > impediment to performance that should be broken wherever and whenever
> > possible.
> > 
> > That said, a less prejudicial introduction to this example might be good.
> > What did you have in mind?
> 
> Again, it depends on what example is intended to accomplish (which you 
> still haven't said explicitly).  Whatever it is, I don't think the 
> current text is a good way to do it.

OK, as noted above, I will move this one out of the v5.10 queue into the
v5.11 queue, which should provide time to refine it, one way or another.

							Thanx, Paul


* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-03 23:30         ` Paul E. McKenney
@ 2020-09-04  0:59           ` Boqun Feng
  2020-09-04  2:39             ` Paul E. McKenney
  0 siblings, 1 reply; 30+ messages in thread
From: Boqun Feng @ 2020-09-04  0:59 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: peterz, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Thu, Sep 03, 2020 at 04:30:37PM -0700, Paul E. McKenney wrote:
> On Wed, Sep 02, 2020 at 08:37:15PM +0800, Boqun Feng wrote:
> > On Wed, Sep 02, 2020 at 12:14:12PM +0200, peterz@infradead.org wrote:
> > > On Wed, Sep 02, 2020 at 11:54:48AM +0800, Boqun Feng wrote:
> > > > On Mon, Aug 31, 2020 at 11:20:34AM -0700, paulmck@kernel.org wrote:
> > > > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > > > > 
> > > > > This commit adds a key entry enumerating the various types of relaxed
> > > > > operations.
> > > > > 
> > > > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > > > ---
> > > > >  tools/memory-model/Documentation/cheatsheet.txt | 27 ++++++++++++++-----------
> > > > >  1 file changed, 15 insertions(+), 12 deletions(-)
> > > > > 
> > > > > diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> > > > > index 33ba98d..31b814d 100644
> > > > > --- a/tools/memory-model/Documentation/cheatsheet.txt
> > > > > +++ b/tools/memory-model/Documentation/cheatsheet.txt
> > > > > @@ -5,7 +5,7 @@
> > > > >  
> > > > >  Store, e.g., WRITE_ONCE()            Y                                       Y
> > > > >  Load, e.g., READ_ONCE()              Y                          Y   Y        Y
> > > > > -Unsuccessful RMW operation           Y                          Y   Y        Y
> > > > > +Relaxed operation                    Y                          Y   Y        Y
> > > > >  rcu_dereference()                    Y                          Y   Y        Y
> > > > >  Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
> > > > >  Successful *_release()         C        Y  Y    Y     W                      Y
> > > > > @@ -17,14 +17,17 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> > > > >  smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> > > > >  
> > > > >  
> > > > > -Key:	C:	Ordering is cumulative
> > > > > -	P:	Ordering propagates
> > > > > -	R:	Read, for example, READ_ONCE(), or read portion of RMW
> > > > > -	W:	Write, for example, WRITE_ONCE(), or write portion of RMW
> > > > > -	Y:	Provides ordering
> > > > > -	a:	Provides ordering given intervening RMW atomic operation
> > > > > -	DR:	Dependent read (address dependency)
> > > > > -	DW:	Dependent write (address, data, or control dependency)
> > > > > -	RMW:	Atomic read-modify-write operation
> > > > > -	SELF:	Orders self, as opposed to accesses before and/or after
> > > > > -	SV:	Orders later accesses to the same variable
> > > > > +Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > > > > +		  operation, an unsuccessful RMW operation, or one of
> > > > > +		  the atomic_read() and atomic_set() family of operations.
> > > > 
> > > > To be accurate, atomic_set() doesn't return any value, so it cannot be
> > > > ordered against DR and DW ;-)
> > > 
> > > Surely DW is valid for any store.
> > > 
> > 
> > IIUC, the DW column stands for whether the corresponding operation (in
> > this case, it's atomic_set()) is ordered against any write that depends
> > on this operation. I don't think there is a write->write dependency, so
> > DW for atomic_set() should not be Y, just as the DW for WRITE_ONCE().
> > 
> > > > I think we can split the Relaxed family into two groups:
> > > > 
> > > > void Relaxed: atomic_set() or atomic RMW operations that don't return
> > > >               any value (e.g atomic_inc())
> > > > 
> > > > non-void Relaxed: a *_relaxed() RMW operation, an unsuccessful RMW
> > > >                   operation, or atomic_read().
> > > > 
> > > > And "void Relaxed" is similar to WRITE_ONCE(), only has "Self" and "SV"
> > > > equal "Y", while "non-void Relaxed" plays the same role as "Relaxed"
> > > > in this patch.
> > > > 
> > > > Thoughts?
> > > 
> > > I get confused by the mention of all this atomic_read() atomic_set()
> > > crud in the first place, why are they called out specifically from any
> > > other regular load/store ?
> > 
> > Agreed. Probably we should fold those two operations into "Load" and
> > "Store" cases.
> 
> All good points.
> 
> How about like this, adding "Relaxed" to the WRITE_ONCE() and READ_ONCE()
> rows and "RMW" to the "Relaxed operation" row?
> 

Much better now, thanks! However ...

> The file contents are followed by a diff against the previous version.
> 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
>                                   Prior Operation     Subsequent Operation
>                                   ---------------  ---------------------------
>                                C  Self  R  W  RMW  Self  R  W  DR  DW  RMW  SV
>                               --  ----  -  -  ---  ----  -  -  --  --  ---  --
> 
> Relaxed store                        Y                                       Y
> Relaxed load                         Y                          Y   Y        Y
> Relaxed RMW operation                Y                          Y   Y        Y

void Relaxed RMW operation is still missing ;-) Maybe:

  void Relaxed RMW operation           Y                                       Y

> rcu_dereference()                    Y                          Y   Y        Y
> Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
> Successful *_release()         C        Y  Y    Y     W                      Y
> smp_rmb()                               Y       R        Y      Y        R
> smp_wmb()                                  Y    W           Y       Y    W
> smp_mb() & synchronize_rcu()  CP        Y  Y    Y        Y  Y   Y   Y    Y
> Successful full non-void RMW  CP     Y  Y  Y    Y     Y  Y  Y   Y   Y    Y   Y
> smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> 
> 
> Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> 		  operation, an unsuccessful RMW operation, READ_ONCE(),
> 		  WRITE_ONCE(), or one of the atomic_read() and
> 		  atomic_set() family of operations.

And:
		  an RMW operation that doesn't return any value (e.g.,
		  atomic_inc()), IOW it's a void Relaxed operation.

Thoughts?

Regards,
Boqun

> 	C:	  Ordering is cumulative
> 	P:	  Ordering propagates
> 	R:	  Read, for example, READ_ONCE(), or read portion of RMW
> 	W:	  Write, for example, WRITE_ONCE(), or write portion of RMW
> 	Y:	  Provides ordering
> 	a:	  Provides ordering given intervening RMW atomic operation
> 	DR:	  Dependent read (address dependency)
> 	DW:	  Dependent write (address, data, or control dependency)
> 	RMW:	  Atomic read-modify-write operation
> 	SELF:	  Orders self, as opposed to accesses before and/or after
> 	SV:	  Orders later accesses to the same variable
> 
> ------------------------------------------------------------------------
> 
> diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> index 31b814d..4146b8d 100644
> --- a/tools/memory-model/Documentation/cheatsheet.txt
> +++ b/tools/memory-model/Documentation/cheatsheet.txt
> @@ -3,9 +3,9 @@
>                                 C  Self  R  W  RMW  Self  R  W  DR  DW  RMW  SV
>                                --  ----  -  -  ---  ----  -  -  --  --  ---  --
>  
> -Store, e.g., WRITE_ONCE()            Y                                       Y
> -Load, e.g., READ_ONCE()              Y                          Y   Y        Y
> -Relaxed operation                    Y                          Y   Y        Y
> +Relaxed store                        Y                                       Y
> +Relaxed load                         Y                          Y   Y        Y
> +Relaxed RMW operation                Y                          Y   Y        Y
>  rcu_dereference()                    Y                          Y   Y        Y
>  Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
>  Successful *_release()         C        Y  Y    Y     W                      Y
> @@ -18,8 +18,9 @@ smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
>  
>  
>  Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> -		  operation, an unsuccessful RMW operation, or one of
> -		  the atomic_read() and atomic_set() family of operations.
> +		  operation, an unsuccessful RMW operation, READ_ONCE(),
> +		  WRITE_ONCE(), or one of the atomic_read() and
> +		  atomic_set() family of operations.
>  	C:	  Ordering is cumulative
>  	P:	  Ordering propagates
>  	R:	  Read, for example, READ_ONCE(), or read portion of RMW


* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-04  0:59           ` Boqun Feng
@ 2020-09-04  2:39             ` Paul E. McKenney
  2020-09-04  2:47               ` Boqun Feng
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2020-09-04  2:39 UTC (permalink / raw)
  To: Boqun Feng
  Cc: peterz, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Fri, Sep 04, 2020 at 08:59:21AM +0800, Boqun Feng wrote:
> On Thu, Sep 03, 2020 at 04:30:37PM -0700, Paul E. McKenney wrote:

[ . . . ]

> > How about like this, adding "Relaxed" to the WRITE_ONCE() and READ_ONCE()
> > rows and "RMW" to the "Relaxed operation" row?
> > 
> 
> Much better now, thanks! However ...
> 
> > The file contents are followed by a diff against the previous version.
> > 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------

[ . . . ]

> > Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > 		  operation, an unsuccessful RMW operation, READ_ONCE(),
> > 		  WRITE_ONCE(), or one of the atomic_read() and
> > 		  atomic_set() family of operations.
> 
> And:
> 		  an RMW operation that doesn't return any value (e.g.,
> 		  atomic_inc()), IOW it's a void Relaxed operation.

Good point!  Please see below.

							Thanx, Paul

------------------------------------------------------------------------

                                  Prior Operation     Subsequent Operation
                                  ---------------  ---------------------------
                               C  Self  R  W  RMW  Self  R  W  DR  DW  RMW  SV
                              --  ----  -  -  ---  ----  -  -  --  --  ---  --

Relaxed store                        Y                                       Y
Relaxed load                         Y                          Y   Y        Y
Relaxed RMW operation                Y                          Y   Y        Y
rcu_dereference()                    Y                          Y   Y        Y
Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
Successful *_release()         C        Y  Y    Y     W                      Y
smp_rmb()                               Y       R        Y      Y        R
smp_wmb()                                  Y    W           Y       Y    W
smp_mb() & synchronize_rcu()  CP        Y  Y    Y        Y  Y   Y   Y    Y
Successful full non-void RMW  CP     Y  Y  Y    Y     Y  Y  Y   Y   Y    Y   Y
smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y


Key:	Relaxed:  A relaxed operation is either READ_ONCE(), WRITE_ONCE(),
		  a *_relaxed() RMW operation, an unsuccessful RMW
		  operation, a non-value-returning RMW operation such
		  as atomic_inc(), or one of the atomic*_read() and
		  atomic*_set() family of operations.
	C:	  Ordering is cumulative
	P:	  Ordering propagates
	R:	  Read, for example, READ_ONCE(), or read portion of RMW
	W:	  Write, for example, WRITE_ONCE(), or write portion of RMW
	Y:	  Provides ordering
	a:	  Provides ordering given intervening RMW atomic operation
	DR:	  Dependent read (address dependency)
	DW:	  Dependent write (address, data, or control dependency)
	RMW:	  Atomic read-modify-write operation
	SELF:	  Orders self, as opposed to accesses before and/or after
	SV:	  Orders later accesses to the same variable

------------------------------------------------------------------------

diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
index 4146b8d..99d0087 100644
--- a/tools/memory-model/Documentation/cheatsheet.txt
+++ b/tools/memory-model/Documentation/cheatsheet.txt
@@ -17,10 +17,11 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
 smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
 
 
-Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
-		  operation, an unsuccessful RMW operation, READ_ONCE(),
-		  WRITE_ONCE(), or one of the atomic_read() and
-		  atomic_set() family of operations.
+Key:	Relaxed:  A relaxed operation is either READ_ONCE(), WRITE_ONCE(),
+		  a *_relaxed() RMW operation, an unsuccessful RMW
+		  operation, a non-value-returning RMW operation such
+		  as atomic_inc(), or one of the atomic*_read() and
+		  atomic*_set() family of operations.
 	C:	  Ordering is cumulative
 	P:	  Ordering propagates
 	R:	  Read, for example, READ_ONCE(), or read portion of RMW


* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-04  2:39             ` Paul E. McKenney
@ 2020-09-04  2:47               ` Boqun Feng
  2020-09-04 19:56                 ` Paul E. McKenney
  0 siblings, 1 reply; 30+ messages in thread
From: Boqun Feng @ 2020-09-04  2:47 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: peterz, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Thu, Sep 03, 2020 at 07:39:55PM -0700, Paul E. McKenney wrote:
> On Fri, Sep 04, 2020 at 08:59:21AM +0800, Boqun Feng wrote:
> > On Thu, Sep 03, 2020 at 04:30:37PM -0700, Paul E. McKenney wrote:
> 
> [ . . . ]
> 
> > > How about like this, adding "Relaxed" to the WRITE_ONCE() and READ_ONCE()
> > > rows and "RMW" to the "Relaxed operation" row?
> > > 
> > 
> > Much better now, thanks! However ...
> > 
> > > The file contents are followed by a diff against the previous version.
> > > 
> > > 							Thanx, Paul
> > > 
> > > ------------------------------------------------------------------------
> 
> [ . . . ]
> 
> > > Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > > 		  operation, an unsuccessful RMW operation, READ_ONCE(),
> > > 		  WRITE_ONCE(), or one of the atomic_read() and
> > > 		  atomic_set() family of operations.
> > 
> > And:
> > 		  an RMW operation that doesn't return any value (e.g.,
> > 		  atomic_inc()), IOW it's a void Relaxed operation.
> 
> Good point!  Please see below.
> 

Looks good to me ;-)


Acked-by: Boqun Feng <boqun.feng@gmail.com>


Regards,
Boqun

> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
>                                   Prior Operation     Subsequent Operation
>                                   ---------------  ---------------------------
>                                C  Self  R  W  RMW  Self  R  W  DR  DW  RMW  SV
>                               --  ----  -  -  ---  ----  -  -  --  --  ---  --
> 
> Relaxed store                        Y                                       Y
> Relaxed load                         Y                          Y   Y        Y
> Relaxed RMW operation                Y                          Y   Y        Y
> rcu_dereference()                    Y                          Y   Y        Y
> Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
> Successful *_release()         C        Y  Y    Y     W                      Y
> smp_rmb()                               Y       R        Y      Y        R
> smp_wmb()                                  Y    W           Y       Y    W
> smp_mb() & synchronize_rcu()  CP        Y  Y    Y        Y  Y   Y   Y    Y
> Successful full non-void RMW  CP     Y  Y  Y    Y     Y  Y  Y   Y   Y    Y   Y
> smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> 
> 
> Key:	Relaxed:  A relaxed operation is either READ_ONCE(), WRITE_ONCE(),
> 		  a *_relaxed() RMW operation, an unsuccessful RMW
> 		  operation, a non-value-returning RMW operation such
> 		  as atomic_inc(), or one of the atomic*_read() and
> 		  atomic*_set() family of operations.
> 	C:	  Ordering is cumulative
> 	P:	  Ordering propagates
> 	R:	  Read, for example, READ_ONCE(), or read portion of RMW
> 	W:	  Write, for example, WRITE_ONCE(), or write portion of RMW
> 	Y:	  Provides ordering
> 	a:	  Provides ordering given intervening RMW atomic operation
> 	DR:	  Dependent read (address dependency)
> 	DW:	  Dependent write (address, data, or control dependency)
> 	RMW:	  Atomic read-modify-write operation
> 	SELF:	  Orders self, as opposed to accesses before and/or after
> 	SV:	  Orders later accesses to the same variable
> 
> ------------------------------------------------------------------------
> 
> diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> index 4146b8d..99d0087 100644
> --- a/tools/memory-model/Documentation/cheatsheet.txt
> +++ b/tools/memory-model/Documentation/cheatsheet.txt
> @@ -17,10 +17,11 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
>  smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
>  
>  
> -Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> -		  operation, an unsuccessful RMW operation, READ_ONCE(),
> -		  WRITE_ONCE(), or one of the atomic_read() and
> -		  atomic_set() family of operations.
> +Key:	Relaxed:  A relaxed operation is either READ_ONCE(), WRITE_ONCE(),
> +		  a *_relaxed() RMW operation, an unsuccessful RMW
> +		  operation, a non-value-returning RMW operation such
> +		  as atomic_inc(), or one of the atomic*_read() and
> +		  atomic*_set() family of operations.
>  	C:	  Ordering is cumulative
>  	P:	  Ordering propagates
>  	R:	  Read, for example, READ_ONCE(), or read portion of RMW


* Re: [PATCH kcsan 9/9] tools/memory-model:  Document locking corner cases
  2020-09-03 23:45             ` Paul E. McKenney
@ 2020-09-04 19:52               ` Alan Stern
  0 siblings, 0 replies; 30+ messages in thread
From: Alan Stern @ 2020-09-04 19:52 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, linux-arch, kernel-team, mingo, parri.andrea, will,
	peterz, boqun.feng, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Thu, Sep 03, 2020 at 04:45:07PM -0700, Paul E. McKenney wrote:

> The hope was to have a good version of them completed some weeks ago,
> but life intervened.
> 
> My current thought is to move these three patches out of my queue for
> v5.10 to try again in v5.11:
> 
> 0b8c06b75ea1 ("tools/memory-model: Add a simple entry point document")
> dc372dc0dc89 ("tools/memory-model: Move Documentation description to Documentation/README")
> 0d9aaf8df7cb ("tools/memory-model: Document categories of ordering primitives")
> 35dd5f6d17a0 ("tools/memory-model:  Document locking corner cases")
> 
> These would remain in my v5.10 queue:
> 
> 1e44e6e82e7b ("Replace HTTP links with HTTPS ones: LKMM")
> cc9628b45c9f ("tools/memory-model: Update recipes.txt prime_numbers.c path")
> 984f272be9d7 ("tools/memory-model: Improve litmus-test documentation")
> 7c22cf3b731f ("tools/memory-model: Expand the cheatsheet.txt notion of relaxed")
> 	(But with the updates from the other thread.)
> 
> Does that work?  If not, what would?

That sounds reasonable.

> > > > Just what you want to achieve here is not clear from the context.
> > > 
> > > People who have internalized the "roach motel" model of locking
> > > (https://www.cs.umd.edu/~pugh/java/memoryModel/BidirectionalMemoryBarrier.html)
> > > need their internalization adjusted.
> > 
> > Shucks, if you only want to show that letting arbitrary code (i.e., 
> > branches) migrate into a critical section is unsafe, all you need is 
> > this uniprocessor example:
> > 
> > 	P0(int *sl)
> > 	{
> > 		goto Skip;
> > 		spin_lock(sl);
> > 		spin_unlock(sl);
> > 	Skip:
> > 		spin_lock(sl);
> > 		spin_unlock(sl);
> > 	}
> > 
> > This does nothing but runs fine.  Letting the branch move into the first 
> > critical section gives:
> > 
> > 	P0(int *sl)
> > 	{
> > 		spin_lock(sl);
> > 		goto Skip;
> > 		spin_unlock(sl);
> > 	Skip:
> > 		spin_lock(sl);
> > 		spin_unlock(sl);
> > 	}
> > 
> > which self-deadlocks 100% of the time.  You don't need to know anything 
> > about memory models or concurrency to understand this.
> 
> Although your example does an excellent job of illustrating the general
> point about branches, I am not convinced that it would be seen as
> demonstrating the dangers of moving an entire loop into a critical
> section.

All right, how about this?

	P0(int *sl)
	{
		while (spin_is_locked(sl))
			cpu_relax();
		spin_lock(sl);
		spin_unlock(sl);
	}

Runs normally, even if other threads are doing unknown locking and 
unlocking at the same time.  But:

	P0(int *sl)
	{
		spin_lock(sl);
		while (spin_is_locked(sl))
			cpu_relax();
		spin_unlock(sl);
	}

always goes into an infinite loop.

> > On the other hand, if you want to show that letting memory accesses leak 
> > into a critical section is unsafe then you need a different example: 
> > spin loops won't do it.
> 
> I am not immediately coming up with an example that is broken by leaking
> isolated memory accesses into a critical section.  I will give it some
> more thought.

It may turn out to be a hard challenge.  As far as I know, there are no 
such examples, unless you want to count something like this:

	spin_lock(sl);
	spin_unlock(sl);
	spin_lock(sl);
	spin_unlock(sl);

transformed to:

	spin_lock(sl);
	spin_lock(sl);
	spin_unlock(sl);
	spin_unlock(sl);

You could view this transformation as moving the second spin_lock up 
into the first critical section (obviously dangerous since spin_lock 
involves a loop), or as moving the first spin_unlock down into the 
second critical section (not so obvious since spin_unlock is just a 
memory access).

Okay, so let's restrict ourselves to memory accesses and loops that 
don't touch the spinlock variable itself.  Then we would need something 
more similar to the original example, like this:

	P0(int *sl, int *x)
	{
		while (READ_ONCE(*x) == 0)
			cpu_relax();
		spin_lock(sl);
		spin_unlock(sl);
	}

	P1(int *sl, int *x)
	{
		spin_lock(sl);
		WRITE_ONCE(*x, 1);
		spin_unlock(sl);
	}

This will always run to completion.  But if the loop in P0 is moved into 
the critical section, the test may never end.  Again, you don't need 
fancy memory models to understand this; you just need to know that 
critical sections are mutually exclusive.

But if this example didn't have a loop, allowing the memory access to 
leak into the critical section would be fine.

Alan


* Re: [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed
  2020-09-04  2:47               ` Boqun Feng
@ 2020-09-04 19:56                 ` Paul E. McKenney
  0 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2020-09-04 19:56 UTC (permalink / raw)
  To: Boqun Feng
  Cc: peterz, linux-kernel, linux-arch, kernel-team, mingo, stern,
	parri.andrea, will, npiggin, dhowells, j.alglave, luc.maranget,
	akiyks

On Fri, Sep 04, 2020 at 10:47:17AM +0800, Boqun Feng wrote:
> On Thu, Sep 03, 2020 at 07:39:55PM -0700, Paul E. McKenney wrote:
> > On Fri, Sep 04, 2020 at 08:59:21AM +0800, Boqun Feng wrote:
> > > On Thu, Sep 03, 2020 at 04:30:37PM -0700, Paul E. McKenney wrote:
> > 
> > [ . . . ]
> > 
> > > > How about like this, adding "Relaxed" to the WRITE_ONCE() and READ_ONCE()
> > > > rows and "RMW" to the "Relaxed operation" row?
> > > > 
> > > 
> > > Much better now, thanks! However ...
> > > 
> > > > The file contents are followed by a diff against the previous version.
> > > > 
> > > > 							Thanx, Paul
> > > > 
> > > > ------------------------------------------------------------------------
> > 
> > [ . . . ]
> > 
> > > > Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > > > 		  operation, an unsuccessful RMW operation, READ_ONCE(),
> > > > 		  WRITE_ONCE(), or one of the atomic_read() and
> > > > 		  atomic_set() family of operations.
> > > 
> > > And:
> > > 		  an RMW operation that doesn't return any value (e.g.,
> > > 		  atomic_inc()), IOW it's a void Relaxed operation.
> > 
> > Good point!  Please see below.
> > 
> 
> Looks good to me ;-)
> 
> 
> Acked-by: Boqun Feng <boqun.feng@gmail.com>

Applied, thank you!

							Thanx, Paul

> Regards,
> Boqun
> 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------
> > 
> >                                   Prior Operation     Subsequent Operation
> >                                   ---------------  ---------------------------
> >                                C  Self  R  W  RMW  Self  R  W  DR  DW  RMW  SV
> >                               --  ----  -  -  ---  ----  -  -  --  --  ---  --
> > 
> > Relaxed store                        Y                                       Y
> > Relaxed load                         Y                          Y   Y        Y
> > Relaxed RMW operation                Y                          Y   Y        Y
> > rcu_dereference()                    Y                          Y   Y        Y
> > Successful *_acquire()               R                   Y  Y   Y   Y    Y   Y
> > Successful *_release()         C        Y  Y    Y     W                      Y
> > smp_rmb()                               Y       R        Y      Y        R
> > smp_wmb()                                  Y    W           Y       Y    W
> > smp_mb() & synchronize_rcu()  CP        Y  Y    Y        Y  Y   Y   Y    Y
> > Successful full non-void RMW  CP     Y  Y  Y    Y     Y  Y  Y   Y   Y    Y   Y
> > smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> > smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> > 
> > 
> > Key:	Relaxed:  A relaxed operation is either READ_ONCE(), WRITE_ONCE(),
> > 		  a *_relaxed() RMW operation, an unsuccessful RMW
> > 		  operation, a non-value-returning RMW operation such
> > 		  as atomic_inc(), or one of the atomic*_read() and
> > 		  atomic*_set() family of operations.
> > 	C:	  Ordering is cumulative
> > 	P:	  Ordering propagates
> > 	R:	  Read, for example, READ_ONCE(), or read portion of RMW
> > 	W:	  Write, for example, WRITE_ONCE(), or write portion of RMW
> > 	Y:	  Provides ordering
> > 	a:	  Provides ordering given intervening RMW atomic operation
> > 	DR:	  Dependent read (address dependency)
> > 	DW:	  Dependent write (address, data, or control dependency)
> > 	RMW:	  Atomic read-modify-write operation
> > 	SELF:	  Orders self, as opposed to accesses before and/or after
> > 	SV:	  Orders later accesses to the same variable
> > 
> > ------------------------------------------------------------------------
> > 
> > diff --git a/tools/memory-model/Documentation/cheatsheet.txt b/tools/memory-model/Documentation/cheatsheet.txt
> > index 4146b8d..99d0087 100644
> > --- a/tools/memory-model/Documentation/cheatsheet.txt
> > +++ b/tools/memory-model/Documentation/cheatsheet.txt
> > @@ -17,10 +17,11 @@ smp_mb__before_atomic()       CP        Y  Y    Y        a  a   a   a    Y
> >  smp_mb__after_atomic()        CP        a  a    Y        Y  Y   Y   Y    Y
> >  
> >  
> > -Key:	Relaxed:  A relaxed operation is either a *_relaxed() RMW
> > -		  operation, an unsuccessful RMW operation, READ_ONCE(),
> > -		  WRITE_ONCE(), or one of the atomic_read() and
> > -		  atomic_set() family of operations.
> > +Key:	Relaxed:  A relaxed operation is either READ_ONCE(), WRITE_ONCE(),
> > +		  a *_relaxed() RMW operation, an unsuccessful RMW
> > +		  operation, a non-value-returning RMW operation such
> > +		  as atomic_inc(), or one of the atomic*_read() and
> > +		  atomic*_set() family of operations.
> >  	C:	  Ordering is cumulative
> >  	P:	  Ordering propagates
> >  	R:	  Read, for example, READ_ONCE(), or read portion of RMW
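
The acquire/release rows in the table above correspond directly to litmus tests in the kernel tree; tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus is the canonical message-passing example. A simplified sketch (variable names shortened here; herd7 reports the exists clause as never satisfied):

```
C MP+pooncerelease+poacquireonce

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_store_release(y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(y);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
```

Replacing smp_load_acquire() with READ_ONCE() turns both of P1's accesses into Relaxed operations, and the exists clause then becomes reachable, matching the empty DR/DW cells in the Relaxed rows of the table.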


Thread overview: 30+ messages
2020-08-31 18:20 [PATCH memory-model 0/9] LKMM updates for v5.10 Paul E. McKenney
2020-08-31 18:20 ` [PATCH kcsan 1/9] docs: fix references for DMA*.txt files paulmck
2020-08-31 18:20 ` [PATCH kcsan 2/9] Replace HTTP links with HTTPS ones: LKMM paulmck
2020-08-31 18:20 ` [PATCH kcsan 3/9] tools/memory-model: Update recipes.txt prime_numbers.c path paulmck
2020-08-31 18:20 ` [PATCH kcsan 4/9] tools/memory-model: Improve litmus-test documentation paulmck
2020-08-31 18:20 ` [PATCH kcsan 5/9] tools/memory-model: Add a simple entry point document paulmck
2020-08-31 18:20 ` [PATCH kcsan 6/9] tools/memory-model: Expand the cheatsheet.txt notion of relaxed paulmck
2020-09-02  3:54   ` Boqun Feng
2020-09-02 10:14     ` peterz
2020-09-02 12:37       ` Boqun Feng
2020-09-02 12:47         ` peterz
2020-09-03 23:30         ` Paul E. McKenney
2020-09-04  0:59           ` Boqun Feng
2020-09-04  2:39             ` Paul E. McKenney
2020-09-04  2:47               ` Boqun Feng
2020-09-04 19:56                 ` Paul E. McKenney
2020-08-31 18:20 ` [PATCH kcsan 7/9] tools/memory-model: Move Documentation description to Documentation/README paulmck
2020-08-31 18:20 ` [PATCH kcsan 8/9] tools/memory-model: Document categories of ordering primitives paulmck
2020-08-31 22:34   ` Akira Yokosawa
2020-08-31 23:12     ` Paul E. McKenney
2020-09-01  1:23   ` Alan Stern
2020-09-01  2:58     ` Paul E. McKenney
2020-08-31 18:20 ` [PATCH kcsan 9/9] tools/memory-model: Document locking corner cases paulmck
2020-08-31 20:17   ` Alan Stern
2020-08-31 21:47     ` Paul E. McKenney
2020-09-01  1:45       ` Alan Stern
2020-09-01 17:04         ` Paul E. McKenney
2020-09-01 20:11           ` Alan Stern
2020-09-03 23:45             ` Paul E. McKenney
2020-09-04 19:52               ` Alan Stern
