* [PATCH 0/6] defer: Employ new scheme for code snippets (cont.)
@ 2018-12-03 15:33 Akira Yokosawa
  2018-12-03 15:35 ` [PATCH 1/6] defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and Erasure' Akira Yokosawa
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Akira Yokosawa @ 2018-12-03 15:33 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: perfbook, Akira Yokosawa

Hi Paul,

This is a follow-up patch set updating the remaining code snippets
in the "defer" chapter.
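
For reference, the new scheme keeps the code only under CodeSamples/,
annotated with LaTeX labels in trailing comments; the .tex files then
\input the extracted snippets and refer to lines symbolically instead
of by hard-coded numbers.  A minimal sketch of the source-side markup
(the labelbase and the function are made up for illustration):

//\begin{snippet}[labelbase=ln:defer:example:foo,commandchars=\\\[\]]
int foo(int x)					//\lnlbl{b}
{
	return x + 1;				//\lnlbl{ret}
}						//\lnlbl{e}
//\end{snippet}

The .tex side pulls in the extracted snippet (presumably
CodeSamples/defer/example@foo.fcv) with \input{}, and refers to, e.g.,
line~\lnref{ret} inside a \begin{lineref}[ln:defer:example:foo] ...
\end{lineref} environment.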

My first thought was that patches #5 and #6 might conflict with
upcoming updates to reflect consolidation of RCU flavors, but it
looks like these code snippets are unlikely to be affected.

Patch #1 assumes that you intentionally presented a hazptr API
different from the one used in hazptr.h.

        Thanks, Akira
--
Akira Yokosawa (6):
  defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and
    Erasure'
  defer: Employ new scheme for snippets of route_hazptr.c
  defer: Employ new scheme for snippet of seqlock.h
  defer: Employ new scheme for snippets of route_seqlock.c
  defer: Employ new scheme for snippets in rcuintro and rcufundamental
  defer: Employ new scheme for snippets of route_rcu.c

 CodeSamples/defer/route_hazptr.c  |  50 ++--
 CodeSamples/defer/route_rcu.c     |  75 +++---
 CodeSamples/defer/route_seqlock.c |  56 ++--
 CodeSamples/defer/seqlock.h       |  55 ++--
 defer/hazptr.tex                  | 168 +++---------
 defer/rcufundamental.tex          | 381 +++++++++++++--------------
 defer/rcuintro.tex                |  13 +-
 defer/rcuusage.tex                | 523 ++++++++++++++++----------------------
 defer/seqlock.tex                 | 259 ++++++-------------
 9 files changed, 637 insertions(+), 943 deletions(-)

-- 
2.7.4



* [PATCH 1/6] defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and Erasure'
  2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
@ 2018-12-03 15:35 ` Akira Yokosawa
  2018-12-03 15:37 ` [PATCH 2/6] defer: Employ new scheme for snippets of route_hazptr.c Akira Yokosawa
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Akira Yokosawa @ 2018-12-03 15:35 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: perfbook, Akira Yokosawa

From a526b98e4fe052223c82dbd3cf961d40bc27f294 Mon Sep 17 00:00:00 2001
From: Akira Yokosawa <akiyks@gmail.com>
Date: Tue, 4 Dec 2018 00:13:47 +0900
Subject: [PATCH 1/6] defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and Erasure'

Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
---
 defer/hazptr.tex | 58 ++++++++++++++++++++++++++++----------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/defer/hazptr.tex b/defer/hazptr.tex
index 1aca813..588f603 100644
--- a/defer/hazptr.tex
+++ b/defer/hazptr.tex
@@ -22,33 +22,30 @@ and there are no longer any hazard pointers referencing it, that element
 may safely be freed.
 
 \begin{listing}[btp]
-{ \scriptsize
-\begin{verbbox}
- 1 int hp_store(void **p, void **hp)
- 2 {
- 3   void *tmp;
- 4 
- 5   tmp = READ_ONCE(*p);
- 6   WRITE_ONCE(*hp, tmp);
- 7   smp_mb();
- 8   if (tmp != READ_ONCE(*p) ||
- 9       tmp == HAZPTR_POISON) {
-10     WRITE_ONCE(*hp, NULL);
-11     return 0;
-12   }
-13   return 1;
-14 }
-15 
-16 void hp_erase(void **hp)
-17 {
-18   smp_mb();
-19   WRITE_ONCE(*hp, NULL);
-20   hp_free(hp);
-21 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\begin{linelabel}[ln:defer:Hazard-Pointer Storage and Erasure]
+\begin{VerbatimL}[commandchars=\\\[\]]
+int hp_store(void **p, void **hp)	\lnlbl[store:b]
+{
+	void *tmp;
+
+	tmp = READ_ONCE(*p);
+	WRITE_ONCE(*hp, tmp);
+	smp_mb();
+	if (tmp != READ_ONCE(*p) || tmp == HAZPTR_POISON) {
+		WRITE_ONCE(*hp, NULL);
+		return 0;
+	}
+	return 1;
+}					\lnlbl[store:e]
+
+void hp_erase(void **hp)		\lnlbl[erase:b]
+{
+	smp_mb();
+	WRITE_ONCE(*hp, NULL);
+	hp_free(hp);
+}					\lnlbl[erase:e]
+\end{VerbatimL}
+\end{linelabel}
 \caption{Hazard-Pointer Storage and Erasure}
 \label{lst:defer:Hazard-Pointer Storage and Erasure}
 \end{listing}
@@ -56,13 +53,16 @@ may safely be freed.
 Of course, this means that hazard-pointer acquisition must be carried
 out quite carefully in order to avoid destructive races with concurrent
 deletion.
+\begin{lineref}[ln:defer:Hazard-Pointer Storage and Erasure]
 One implementation is shown in
 Listing~\ref{lst:defer:Hazard-Pointer Storage and Erasure},
-which shows \co{hp_store()} on lines~1-14 and \co{hp_erase()} on
-lines~16-21.
+which shows \co{hp_store()} on
+lines~\lnref{store:b}-\lnref{store:e} and \co{hp_erase()} on
+lines~\lnref{erase:b}-\lnref{erase:e}.
 The \co{smp_mb()} primitive will be described in detail in
 Chapter~\ref{chp:Advanced Synchronization: Memory Ordering}, but may be ignored for
 the purposes of this brief overview.
+\end{lineref}
 
 The \co{hp_store()} function records a hazard pointer at \co{hp} for the data
 element whose pointer is referenced by \co{p}, while checking for
-- 
2.7.4
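
For context, the hp_store()/hp_erase() API in the listing above would
be used by a reader roughly as follows.  This is only a sketch: gp,
struct foo, do_something_with(), and the handling of the hazard-pointer
slot are illustrative, and the book's route_hazptr.c uses a per-thread
array of hazard_pointer structures rather than a bare void * slot.

struct foo;
void do_something_with(struct foo *p);	/* illustrative */
struct foo *gp;				/* shared pointer to protect (illustrative) */
__thread void *my_hp;			/* this thread's hazard-pointer slot (simplified) */

void reader(void)
{
retry:
	if (!hp_store((void **)&gp, &my_hp))
		goto retry;		/* raced with a concurrent deletion; try again */
	do_something_with((struct foo *)my_hp);	/* safe to dereference ...     */
	hp_erase(&my_hp);		/* ... until the hazard pointer is erased      */
}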




* [PATCH 2/6] defer: Employ new scheme for snippets of route_hazptr.c
  2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
  2018-12-03 15:35 ` [PATCH 1/6] defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and Erasure' Akira Yokosawa
@ 2018-12-03 15:37 ` Akira Yokosawa
  2018-12-03 15:39 ` [PATCH 3/6] defer: Employ new scheme for snippet of seqlock.h Akira Yokosawa
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Akira Yokosawa @ 2018-12-03 15:37 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: perfbook, Akira Yokosawa

From be603d83f1555cea7c2e1e6d1306c613ceb79b21 Mon Sep 17 00:00:00 2001
From: Akira Yokosawa <akiyks@gmail.com>
Date: Tue, 4 Dec 2018 00:15:18 +0900
Subject: [PATCH 2/6] defer: Employ new scheme for snippets of route_hazptr.c

Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
---
 CodeSamples/defer/route_hazptr.c |  50 ++++++++++--------
 defer/hazptr.tex                 | 110 +++++----------------------------------
 2 files changed, 40 insertions(+), 120 deletions(-)

diff --git a/CodeSamples/defer/route_hazptr.c b/CodeSamples/defer/route_hazptr.c
index 49e6e4a..c9285c6 100644
--- a/CodeSamples/defer/route_hazptr.c
+++ b/CodeSamples/defer/route_hazptr.c
@@ -23,58 +23,61 @@
 #include "hazptr.h"
 
 /* Route-table entry to be included in the routing list. */
+//\begin{snippet}[labelbase=ln:defer:route_hazptr:lookup,commandchars=\\\[\]]
 struct route_entry {
-	struct hazptr_head hh;
+	struct hazptr_head hh;				//\lnlbl{hh}
 	struct route_entry *re_next;
 	unsigned long addr;
 	unsigned long iface;
-	int re_freed;
+	int re_freed;					//\lnlbl{re_freed}
 };
-
+								//\fcvexclude
 struct route_entry route_list;
 DEFINE_SPINLOCK(routelock);
-
-/* This thread's fixed-sized set of hazard pointers. */
+								//\fcvexclude
+/* This thread's fixed-sized set of hazard pointers. */		//\fcvexclude
 hazard_pointer __thread *my_hazptr;
 
-/*
- * Look up a route entry, return the corresponding interface. 
- */
+/*								  \fcvexclude
+ * Look up a route entry, return the corresponding interface. 	  \fcvexclude
+ */								//\fcvexclude
 unsigned long route_lookup(unsigned long addr)
 {
 	int offset = 0;
 	struct route_entry *rep;
 	struct route_entry **repp;
 
-retry:
+retry:							//\lnlbl{retry}
 	repp = &route_list.re_next;
 	do {
 		rep = READ_ONCE(*repp);
 		if (rep == NULL)
 			return ULONG_MAX;
-		if (rep == (struct route_entry *)HAZPTR_POISON)
+		if (rep == (struct route_entry *)HAZPTR_POISON)	//\lnlbl{acq:b}
 			goto retry; /* element deleted. */
-
-		/* Store a hazard pointer. */
+								//\fcvexclude
+		/* Store a hazard pointer. */			//\fcvexclude
 		my_hazptr[offset].p = &rep->hh;
 		offset = !offset;
 		smp_mb(); /* Force pointer loads in order. */
-
-		/* Recheck the hazard pointer against the original. */
+								//\fcvexclude
+		/* Recheck the hazard pointer against the original. */ //\fcvexclude
 		if (READ_ONCE(*repp) != rep)
-			goto retry;
-
-		/* Advance to next. */
+			goto retry;			//\lnlbl{acq:e}
+								//\fcvexclude
+		/* Advance to next. */				//\fcvexclude
 		repp = &rep->re_next;
 	} while (rep->addr != addr);
 	if (READ_ONCE(rep->re_freed))
 		abort();
 	return rep->iface;
 }
+//\end{snippet}
 
 /*
  * Add an element to the route table.
  */
+//\begin{snippet}[labelbase=ln:defer:route_hazptr:add_del,commandchars=\\\[\]]
 int route_add(unsigned long addr, unsigned long interface)
 {
 	struct route_entry *rep;
@@ -84,7 +87,7 @@ int route_add(unsigned long addr, unsigned long interface)
 		return -ENOMEM;
 	rep->addr = addr;
 	rep->iface = interface;
-	rep->re_freed = 0;
+	rep->re_freed = 0;				//\lnlbl{init_freed}
 	spin_lock(&routelock);
 	rep->re_next = route_list.re_next;
 	route_list.re_next = rep;
@@ -92,9 +95,9 @@ int route_add(unsigned long addr, unsigned long interface)
 	return 0;
 }
 
-/*
- * Remove the specified element from the route table.
- */
+/*								  \fcvexclude
+ * Remove the specified element from the route table.		  \fcvexclude
+ */								//\fcvexclude
 int route_del(unsigned long addr)
 {
 	struct route_entry *rep;
@@ -108,9 +111,9 @@ int route_del(unsigned long addr)
 			break;
 		if (rep->addr == addr) {
 			*repp = rep->re_next;
-			rep->re_next = (struct route_entry *)HAZPTR_POISON;
+			rep->re_next = (struct route_entry *)HAZPTR_POISON; //\lnlbl{poison}
 			spin_unlock(&routelock);
-			hazptr_free_later(&rep->hh);
+			hazptr_free_later(&rep->hh);	//\lnlbl{free_later}
 			return 0;
 		}
 		repp = &rep->re_next;
@@ -118,6 +121,7 @@ int route_del(unsigned long addr)
 	spin_unlock(&routelock);
 	return -ENOENT;
 }
+//\end{snippet}
 
 /*
  * Clear all elements from the route table.
diff --git a/defer/hazptr.tex b/defer/hazptr.tex
index 588f603..98ba002 100644
--- a/defer/hazptr.tex
+++ b/defer/hazptr.tex
@@ -181,101 +181,13 @@ Chapter~\ref{chp:Data Structures}
 and in other publications~\cite{ThomasEHart2007a,McKenney:2013:SDS:2483852.2483867,MagedMichael04a}.
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
- 1 struct route_entry {
- 2   struct hazptr_head hh;
- 3   struct route_entry *re_next;
- 4   unsigned long addr;
- 5   unsigned long iface;
- 6   int re_freed;
- 7 };
- 8 struct route_entry route_list;
- 9 DEFINE_SPINLOCK(routelock);
-10 hazard_pointer __thread *my_hazptr;
-11
-12 unsigned long route_lookup(unsigned long addr)
-13 {
-14   int offset = 0;
-15   struct route_entry *rep;
-16   struct route_entry **repp;
-17
-18 retry:
-19   repp = &route_list.re_next;
-20   do {
-21     rep = READ_ONCE(*repp);
-22     if (rep == NULL)
-23       return ULONG_MAX;
-24     if (rep == (struct route_entry *)HAZPTR_POISON)
-25       goto retry;
-26     my_hazptr[offset].p = &rep->hh;
-27     offset = !offset;
-28     smp_mb();
-29     if (READ_ONCE(*repp) != rep)
-30       goto retry;
-31     repp = &rep->re_next;
-32   } while (rep->addr != addr);
-33   if (READ_ONCE(rep->re_freed))
-34     abort();
-35   return rep->iface;
-36 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\input{CodeSamples/defer/route_hazptr@lookup.fcv}
 \caption{Hazard-Pointer Pre-BSD Routing Table Lookup}
 \label{lst:defer:Hazard-Pointer Pre-BSD Routing Table Lookup}
 \end{listing}
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
- 1 int route_add(unsigned long addr,
- 2               unsigned long interface)
- 3 {
- 4   struct route_entry *rep;
- 5
- 6   rep = malloc(sizeof(*rep));
- 7   if (!rep)
- 8     return -ENOMEM;
- 9   rep->addr = addr;
-10   rep->iface = interface;
-11   rep->re_freed = 0;
-12   spin_lock(&routelock);
-13   rep->re_next = route_list.re_next;
-14   route_list.re_next = rep;
-15   spin_unlock(&routelock);
-16   return 0;
-17 }
-18
-19 int route_del(unsigned long addr)
-20 {
-21   struct route_entry *rep;
-22   struct route_entry **repp;
-23
-24   spin_lock(&routelock);
-25   repp = &route_list.re_next;
-26   for (;;) {
-27     rep = *repp;
-28     if (rep == NULL)
-29       break;
-30     if (rep->addr == addr) {
-31       *repp = rep->re_next;
-32       rep->re_next =
-33           (struct route_entry *)HAZPTR_POISON;
-34       spin_unlock(&routelock);
-35       hazptr_free_later(&rep->hh);
-36       return 0;
-37     }
-38     repp = &rep->re_next;
-39   }
-40   spin_unlock(&routelock);
-41   return -ENOENT;
-42 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\input{CodeSamples/defer/route_hazptr@add_del.fcv}
 \caption{Hazard-Pointer Pre-BSD Routing Table Add/Delete}
 \label{lst:defer:Hazard-Pointer Pre-BSD Routing Table Add/Delete}
 \end{listing}
@@ -293,24 +205,28 @@ on
 page~\pageref{lst:defer:Sequential Pre-BSD Routing Table},
 so only differences will be discussed.
 
+\begin{lineref}[ln:defer:route_hazptr:lookup]
 Starting with
 Listing~\ref{lst:defer:Hazard-Pointer Pre-BSD Routing Table Lookup},
-line~2 shows the \co{->hh} field used to queue objects pending
+line~\lnref{hh} shows the \co{->hh} field used to queue objects pending
 hazard-pointer free,
-line~6 shows the \co{->re_freed} field used to detect use-after-free bugs,
-and lines~24-30 attempt to acquire a hazard pointer, branching
-to line~18's \co{retry} label on failure.
+line~\lnref{re_freed} shows the \co{->re_freed} field used to detect use-after-free bugs,
+and lines~\lnref{acq:b}-\lnref{acq:e} attempt to acquire a hazard pointer, branching
+to line~\lnref{retry}'s \co{retry} label on failure.
+\end{lineref}
 
+\begin{lineref}[ln:defer:route_hazptr:add_del]
 In
 Listing~\ref{lst:defer:Hazard-Pointer Pre-BSD Routing Table Add/Delete},
-line~11 initializes \co{->re_freed},
-lines~32 and~33 poison the \co{->re_next} field of the newly removed
+line~\lnref{init_freed} initializes \co{->re_freed},
+line~\lnref{poison} poisons the \co{->re_next} field of the newly removed
 object, and
-line~35 passes that object to the hazard pointers's
+line~\lnref{free_later} passes that object to the hazard pointers's
 \co{hazptr_free_later()} function, which will free that object once it
 is safe to do so.
 The spinlocks work the same as in
 Listing~\ref{lst:defer:Reference-Counted Pre-BSD Routing Table Add/Delete}.
+\end{lineref}
 
 \begin{figure}[tb]
 \centering
-- 
2.7.4
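
For review purposes, the extracted route_hazptr@lookup.fcv should come
out roughly as below, assuming the \fcvexclude-marked lines and the
label comments are dropped by the extraction script; this is
essentially the same code as the verbbox listing removed above, with
the line numbers now generated at build time:

struct route_entry {
	struct hazptr_head hh;
	struct route_entry *re_next;
	unsigned long addr;
	unsigned long iface;
	int re_freed;
};
struct route_entry route_list;
DEFINE_SPINLOCK(routelock);
hazard_pointer __thread *my_hazptr;

unsigned long route_lookup(unsigned long addr)
{
	int offset = 0;
	struct route_entry *rep;
	struct route_entry **repp;

retry:
	repp = &route_list.re_next;
	do {
		rep = READ_ONCE(*repp);
		if (rep == NULL)
			return ULONG_MAX;
		if (rep == (struct route_entry *)HAZPTR_POISON)
			goto retry; /* element deleted. */
		my_hazptr[offset].p = &rep->hh;
		offset = !offset;
		smp_mb(); /* Force pointer loads in order. */
		if (READ_ONCE(*repp) != rep)
			goto retry;
		repp = &rep->re_next;
	} while (rep->addr != addr);
	if (READ_ONCE(rep->re_freed))
		abort();
	return rep->iface;
}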




* [PATCH 3/6] defer: Employ new scheme for snippet of seqlock.h
  2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
  2018-12-03 15:35 ` [PATCH 1/6] defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and Erasure' Akira Yokosawa
  2018-12-03 15:37 ` [PATCH 2/6] defer: Employ new scheme for snippets of route_hazptr.c Akira Yokosawa
@ 2018-12-03 15:39 ` Akira Yokosawa
  2018-12-03 15:41 ` [PATCH 4/6] defer: Employ new scheme for snippets of route_seqlock.c Akira Yokosawa
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Akira Yokosawa @ 2018-12-03 15:39 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: perfbook, Akira Yokosawa

From 6a5f865a117ceea5b924506b9bd55f622b95365f Mon Sep 17 00:00:00 2001
From: Akira Yokosawa <akiyks@gmail.com>
Date: Tue, 4 Dec 2018 00:15:45 +0900
Subject: [PATCH 3/6] defer: Employ new scheme for snippet of seqlock.h

Also convert inline snippets in seqlock.tex.

Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
---
 CodeSamples/defer/seqlock.h |  55 +++++++++--------
 defer/seqlock.tex           | 146 +++++++++++++++++---------------------------
 2 files changed, 85 insertions(+), 116 deletions(-)

diff --git a/CodeSamples/defer/seqlock.h b/CodeSamples/defer/seqlock.h
index c994285..dece371 100644
--- a/CodeSamples/defer/seqlock.h
+++ b/CodeSamples/defer/seqlock.h
@@ -18,50 +18,53 @@
  * Copyright (c) 2008 Paul E. McKenney, IBM Corporation.
  */
 
-typedef struct {
-	unsigned long seq;
+//\begin{snippet}[labelbase=ln:defer:seqlock:impl,commandchars=\\\[\]]
+typedef struct {				//\lnlbl{typedef:b}
+	unsigned long seq;			//\lnlbl{typedef:seq}
 	spinlock_t lock;
-} seqlock_t;
+} seqlock_t;					//\lnlbl{typedef:e}
 
-#define DEFINE_SEQ_LOCK(name) seqlock_t name = { \
-	.seq = 0, \
-	.lock = __SPIN_LOCK_UNLOCKED(name.lock), \
-};
-
-static inline void seqlock_init(seqlock_t *slp)
+#define DEFINE_SEQ_LOCK(name) seqlock_t name = { 	/* \fcvexclude */ \
+	.seq = 0,					/* \fcvexclude */ \
+	.lock = __SPIN_LOCK_UNLOCKED(name.lock),	/* \fcvexclude */ \
+};							/* \fcvexclude */
+							/* \fcvexclude */
+static inline void seqlock_init(seqlock_t *slp)		//\lnlbl{init:b}
 {
 	slp->seq = 0;
 	spin_lock_init(&slp->lock);
-}
+}							//\lnlbl{init:e}
 
-static inline unsigned long read_seqbegin(seqlock_t *slp)
+static inline unsigned long read_seqbegin(seqlock_t *slp) //\lnlbl{read_seqbegin:b}
 {
 	unsigned long s;
 
-	s = READ_ONCE(slp->seq);
-	smp_mb();
-	return s & ~0x1UL;
-}
+	s = READ_ONCE(slp->seq);			//\lnlbl{read_seqbegin:fetch}
+	smp_mb();					//\lnlbl{read_seqbegin:mb}
+	return s & ~0x1UL;				//\lnlbl{read_seqbegin:ret}
+}							//\lnlbl{read_seqbegin:e}
 
-static inline int read_seqretry(seqlock_t *slp, unsigned long oldseq)
+static inline int read_seqretry(seqlock_t *slp,		//\lnlbl{read_seqretry:b}
+                                unsigned long oldseq)
 {
 	unsigned long s;
 
-	smp_mb();
-	s = READ_ONCE(slp->seq);
-	return s != oldseq;
-}
+	smp_mb();					//\lnlbl{read_seqretry:mb}
+	s = READ_ONCE(slp->seq);			//\lnlbl{read_seqretry:fetch}
+	return s != oldseq;				//\lnlbl{read_seqretry:ret}
+}							//\lnlbl{read_seqretry:e}
 
-static inline void write_seqlock(seqlock_t *slp)
+static inline void write_seqlock(seqlock_t *slp)	//\lnlbl{write_seqlock:b}
 {
 	spin_lock(&slp->lock);
 	++slp->seq;
 	smp_mb();
-}
+}							//\lnlbl{write_seqlock:e}
 
-static inline void write_sequnlock(seqlock_t *slp)
+static inline void write_sequnlock(seqlock_t *slp)	//\lnlbl{write_sequnlock:b}
 {
-	smp_mb();
-	++slp->seq;
+	smp_mb();					//\lnlbl{write_sequnlock:mb}
+	++slp->seq;					//\lnlbl{write_sequnlock:inc}
 	spin_unlock(&slp->lock);
-}
+}							//\lnlbl{write_sequnlock:e}
+//\end{snippet}
diff --git a/defer/seqlock.tex b/defer/seqlock.tex
index 84da3c0..7fabc35 100644
--- a/defer/seqlock.tex
+++ b/defer/seqlock.tex
@@ -50,30 +50,22 @@ very rarely need to retry.
 } \QuickQuizEnd
 
 \begin{listing}[bp]
-{ \scriptsize
-\begin{verbbox}
-  1 do {
-  2   seq = read_seqbegin(&test_seqlock);
-  3   /* read-side access. */
-  4 } while (read_seqretry(&test_seqlock, seq));
-\end{verbbox}
-}
-\centering
-\theverbbox
+\begin{VerbatimL}
+do {
+	seq = read_seqbegin(&test_seqlock);
+	/* read-side access. */
+} while (read_seqretry(&test_seqlock, seq));
+\end{VerbatimL}
 \caption{Sequence-Locking Reader}
 \label{lst:defer:Sequence-Locking Reader}
 \end{listing}
 
 \begin{listing}[bp]
-{ \scriptsize
-\begin{verbbox}
-  1 write_seqlock(&test_seqlock);
-  2 /* Update */
-  3 write_sequnlock(&test_seqlock);
-\end{verbbox}
-}
-\centering
-\theverbbox
+\begin{VerbatimL}
+write_seqlock(&test_seqlock);
+/* Update */
+write_sequnlock(&test_seqlock);
+\end{VerbatimL}
 \caption{Sequence-Locking Writer}
 \label{lst:defer:Sequence-Locking Writer}
 \end{listing}
@@ -101,55 +93,7 @@ quantities used for timekeeping.
 It is also used in pathname traversal to detect concurrent rename operations.
 
 \begin{listing}[tb]
-{ \scriptsize
-\begin{verbbox}
- 1  typedef struct {
- 2    unsigned long seq;
- 3    spinlock_t lock;
- 4  } seqlock_t;
- 5
- 6  static void seqlock_init(seqlock_t *slp)
- 7  {
- 8    slp->seq = 0;
- 9    spin_lock_init(&slp->lock);
-10  }
-11
-12  static unsigned long read_seqbegin(seqlock_t *slp)
-13  {
-14    unsigned long s;
-15
-16    s = READ_ONCE(slp->seq);
-17    smp_mb();
-18    return s & ~0x1UL;
-19  }
-20
-21  static int read_seqretry(seqlock_t *slp,
-22                           unsigned long oldseq)
-23  {
-24    unsigned long s;
-25
-26    smp_mb();
-27    s = READ_ONCE(slp->seq);
-28    return s != oldseq;
-29  }
-30
-31  static void write_seqlock(seqlock_t *slp)
-32  {
-33    spin_lock(&slp->lock);
-34    ++slp->seq;
-35    smp_mb();
-36  }
-37
-38  static void write_sequnlock(seqlock_t *slp)
-39  {
-40    smp_mb();
-41    ++slp->seq;
-42    spin_unlock(&slp->lock);
-43  }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\input{CodeSamples/defer/seqlock@impl.fcv}
 \caption{Sequence-Locking Implementation}
 \label{lst:defer:Sequence-Locking Implementation}
 \end{listing}
@@ -157,18 +101,26 @@ It is also used in pathname traversal to detect concurrent rename operations.
 A simple implementation of sequence locks is shown in
 Listing~\ref{lst:defer:Sequence-Locking Implementation}
 (\path{seqlock.h}).
-The \co{seqlock_t} data structure is shown on lines~1-4, and contains
+\begin{lineref}[ln:defer:seqlock:impl:typedef]
+The \co{seqlock_t} data structure is shown on
+lines~\lnref{b}-\lnref{e}, and contains
 the sequence number along with a lock to serialize writers.
-Lines~6-10 show \co{seqlock_init()}, which, as the name indicates,
+\end{lineref}
+\begin{lineref}[ln:defer:seqlock:impl:init]
+Lines~\lnref{b}-\lnref{e} show \co{seqlock_init()}, which, as the name indicates,
 initializes a \co{seqlock_t}.
+\end{lineref}
 
-Lines~12-19 show \co{read_seqbegin()}, which begins a sequence-lock
+\begin{lineref}[ln:defer:seqlock:impl:read_seqbegin]
+Lines~\lnref{b}-\lnref{e} show \co{read_seqbegin()}, which begins a sequence-lock
 read-side critical section.
-Line~16 takes a snapshot of the sequence counter, and line~17 orders
+Line~\lnref{fetch} takes a snapshot of the sequence counter, and
+line~\lnref{mb} orders
 this snapshot operation before the caller's critical section.
-Finally, line~18 returns the value of the snapshot (with the least-significant
+Finally, line~\lnref{ret} returns the value of the snapshot (with the least-significant
 bit cleared), which the caller
 will pass to a later call to \co{read_seqretry()}.
+\end{lineref}
 
 \QuickQuiz{}
 	Why not have \co{read_seqbegin()} in
@@ -185,17 +137,20 @@ will pass to a later call to \co{read_seqretry()}.
 	check internal to \co{read_seqbegin()} might be preferable.
 } \QuickQuizEnd
 
-Lines~21-29 show \co{read_seqretry()}, which returns true if there
+\begin{lineref}[ln:defer:seqlock:impl:read_seqretry]
+Lines~\lnref{b}-\lnref{e} show \co{read_seqretry()}, which returns true if there
 was at least one writer since the time of the corresponding
 call to \co{read_seqbegin()}.
-Line~26 orders the caller's prior critical section before line~27's
+Line~\lnref{mb} orders the caller's prior critical section before line~\lnref{fetch}'s
 fetch of the new snapshot of the sequence counter.
-Finally, line~28 checks whether the sequence counter has changed,
+Finally, line~\lnref{ret} checks whether the sequence counter has changed,
 in other words, whether there has been at least one writer, and returns
 true if so.
+\end{lineref}
 
 \QuickQuiz{}
-	Why is the \co{smp_mb()} on line~26 of
+	Why is the \co{smp_mb()} on
+	line~\ref{ln:defer:seqlock:impl:read_seqretry:mb} of
 	Listing~\ref{lst:defer:Sequence-Locking Implementation}
 	needed?
 \QuickQuizAnswer{
@@ -213,19 +168,23 @@ true if so.
 \QuickQuizAnswer{
 	In older versions of the Linux kernel, no.
 
-	In very new versions of the Linux kernel, line~16 could use
+	\begin{lineref}[ln:defer:seqlock:impl]
+	In very new versions of the Linux kernel,
+        line~\lnref{read_seqbegin:fetch} could use
 	\co{smp_load_acquire()} instead of \co{READ_ONCE()}, which
-	in turn would allow the \co{smp_mb()} on line~17 to be dropped.
-	Similarly, line~41 could use an \co{smp_store_release()}, for
+	in turn would allow the \co{smp_mb()} on
+        line~\lnref{read_seqbegin:mb} to be dropped.
+	Similarly, line~\lnref{write_sequnlock:inc} could use an
+        \co{smp_store_release()}, for
 	example, as follows:
 
-\begin{minipage}[c][5ex][c]{\columnwidth}\scriptsize
-\begin{verbatim}
+\begin{VerbatimU}
 smp_store_release(&slp->seq, READ_ONCE(slp->seq) + 1);
-\end{verbatim}
-\end{minipage}
+\end{VerbatimU}
 
-	This would allow the \co{smp_mb()} on line~40 to be dropped.
+	This would allow the \co{smp_mb()} on
+	line~\lnref{write_sequnlock:mb} to be dropped.
+	\end{lineref}
 } \QuickQuizEnd
 
 \QuickQuiz{}
@@ -239,12 +198,16 @@ smp_store_release(&slp->seq, READ_ONCE(slp->seq) + 1);
 	situation, in which case, go wild with the sequence-locking updates!
 } \QuickQuizEnd
 
-Lines~31-36 show \co{write_seqlock()}, which simply acquires the lock,
+\begin{lineref}[ln:defer:seqlock:impl:write_seqlock]
+Lines~\lnref{b}-\lnref{e} show \co{write_seqlock()}, which simply acquires the lock,
 increments the sequence number, and executes a memory barrier to ensure
 that this increment is ordered before the caller's critical section.
-Lines~38-43 show \co{write_sequnlock()}, which executes a memory barrier
+\end{lineref}
+\begin{lineref}[ln:defer:seqlock:impl:write_sequnlock]
+Lines~\lnref{b}-\lnref{e} show \co{write_sequnlock()}, which executes a memory barrier
 to ensure that the caller's critical section is ordered before the
-increment of the sequence number on line~44, then releases the lock.
+increment of the sequence number on line~\lnref{inc}, then releases the lock.
+\end{lineref}
 
 \QuickQuiz{}
 	What if something else serializes writers, so that the lock
@@ -255,7 +218,8 @@ increment of the sequence number on line~44, then releases the lock.
 } \QuickQuizEnd
 
 \QuickQuiz{}
-	Why isn't \co{seq} on line~2 of
+	Why isn't \co{seq} on
+	line~\ref{ln:defer:seqlock:impl:typedef:seq} of
 	Listing~\ref{lst:defer:Sequence-Locking Implementation}
 	\co{unsigned} rather than \co{unsigned long}?
 	After all, if \co{unsigned} is good enough for the Linux
@@ -266,7 +230,9 @@ increment of the sequence number on line~44, then releases the lock.
 	it to ignore the following sequence of events:
 	\begin{enumerate}
 	\item	Thread~0 executes \co{read_seqbegin()}, picking up
-		\co{->seq} in line~16, noting that the value is even,
+		\co{->seq} in
+		line~\ref{ln:defer:seqlock:impl:read_seqbegin:fetch},
+		noting that the value is even,
 		and thus returning to the caller.
 	\item	Thread~0 starts executing its read-side critical section,
 		but is then preempted for a long time.
-- 
2.7.4
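
To see the pieces working together, a toy reader/writer pair using the
seqlock.h API converted above might look like this (the protected
variables and the function names are made up for illustration):

#include "seqlock.h"

DEFINE_SEQ_LOCK(test_seqlock);
unsigned long a, b;			/* protected data (illustrative) */

void writer_update(void)
{
	write_seqlock(&test_seqlock);
	a++;				/* updates happen under the lock ...      */
	b = 2 * a;			/* ... so readers see a consistent (a, b) */
	write_sequnlock(&test_seqlock);
}

unsigned long reader_sum(void)
{
	unsigned long seq, x, y;

	do {
		seq = read_seqbegin(&test_seqlock);
		x = a;			/* read-side accesses are retried ...     */
		y = b;			/* ... if a writer ran concurrently       */
	} while (read_seqretry(&test_seqlock, seq));
	return x + y;
}

As in the Sequence-Locking Reader/Writer listings, the reader never
blocks the writer; it simply retries until it observes an update-free
interval.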




* [PATCH 4/6] defer: Employ new scheme for snippets of route_seqlock.c
  2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
                   ` (2 preceding siblings ...)
  2018-12-03 15:39 ` [PATCH 3/6] defer: Employ new scheme for snippet of seqlock.h Akira Yokosawa
@ 2018-12-03 15:41 ` Akira Yokosawa
  2018-12-03 15:42 ` [PATCH 5/6] defer: Employ new scheme for snippets in rcuintro and rcufundamental Akira Yokosawa
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Akira Yokosawa @ 2018-12-03 15:41 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: perfbook, Akira Yokosawa

From 8232875eee86903a48912cacdd5d2268e5ff8d64 Mon Sep 17 00:00:00 2001
From: Akira Yokosawa <akiyks@gmail.com>
Date: Tue, 4 Dec 2018 00:16:17 +0900
Subject: [PATCH 4/6] defer: Employ new scheme for snippets of route_seqlock.c

Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
---
 CodeSamples/defer/route_seqlock.c |  56 ++++++++++---------
 defer/seqlock.tex                 | 113 ++++++--------------------------------
 2 files changed, 48 insertions(+), 121 deletions(-)

diff --git a/CodeSamples/defer/route_seqlock.c b/CodeSamples/defer/route_seqlock.c
index 8224682..969704b 100644
--- a/CodeSamples/defer/route_seqlock.c
+++ b/CodeSamples/defer/route_seqlock.c
@@ -23,19 +23,20 @@
 #include "seqlock.h"
 
 /* Route-table entry to be included in the routing list. */
+//\begin{snippet}[labelbase=ln:defer:route_seqlock:lookup,commandchars=\\\[\]]
 struct route_entry {
 	struct route_entry *re_next;
 	unsigned long addr;
 	unsigned long iface;
-	int re_freed;
+	int re_freed;					//\lnlbl{struct:re_freed}
 };
-
+								//\fcvexclude
 struct route_entry route_list;
-DEFINE_SEQ_LOCK(sl);
+DEFINE_SEQ_LOCK(sl);					//\lnlbl{struct:sl}
 
-/*
- * Look up a route entry, return the corresponding interface. 
- */
+/*								  \fcvexclude
+ * Look up a route entry, return the corresponding interface. 	  \fcvexclude
+ */								//\fcvexclude
 unsigned long route_lookup(unsigned long addr)
 {
 	struct route_entry *rep;
@@ -43,31 +44,33 @@ unsigned long route_lookup(unsigned long addr)
 	unsigned long ret;
 	unsigned long s;
 
-retry:
-	s = read_seqbegin(&sl);
+retry:							//\lnlbl{lookup:retry}
+	s = read_seqbegin(&sl);				//\lnlbl{lookup:r_sqbegin}
 	repp = &route_list.re_next;
 	do {
 		rep = READ_ONCE(*repp);
 		if (rep == NULL) {
-			if (read_seqretry(&sl, s))
-				goto retry;
+			if (read_seqretry(&sl, s))	//\lnlbl{lookup:r_sqretry1}
+				goto retry;		//\lnlbl{lookup:goto_retry1}
 			return ULONG_MAX;
 		}
-
-		/* Advance to next. */
+								//\fcvexclude
+		/* Advance to next. */				//\fcvexclude
 		repp = &rep->re_next;
 	} while (rep->addr != addr);
-	if (READ_ONCE(rep->re_freed))
-		abort();
+	if (READ_ONCE(rep->re_freed))			//\lnlbl{lookup:chk_freed}
+		abort();				//\lnlbl{lookup:abort}
 	ret = rep->iface;
-	if (read_seqretry(&sl, s))
-		goto retry;
+	if (read_seqretry(&sl, s))			//\lnlbl{lookup:r_sqretry2}
+		goto retry;				//\lnlbl{lookup:goto_retry2}
 	return ret;
 }
+//\end{snippet}
 
 /*
  * Add an element to the route table.
  */
+//\begin{snippet}[labelbase=ln:defer:route_seqlock:add_del,commandchars=\\\[\]]
 int route_add(unsigned long addr, unsigned long interface)
 {
 	struct route_entry *rep;
@@ -77,23 +80,23 @@ int route_add(unsigned long addr, unsigned long interface)
 		return -ENOMEM;
 	rep->addr = addr;
 	rep->iface = interface;
-	rep->re_freed = 0;
-	write_seqlock(&sl);
+	rep->re_freed = 0;			//\lnlbl{add:clr_freed}
+	write_seqlock(&sl);			//\lnlbl{add:w_sqlock}
 	rep->re_next = route_list.re_next;
 	route_list.re_next = rep;
-	write_sequnlock(&sl);
+	write_sequnlock(&sl);			//\lnlbl{add:w_squnlock}
 	return 0;
 }
 
-/*
- * Remove the specified element from the route table.
- */
+/*								  \fcvexclude
+ * Remove the specified element from the route table.		  \fcvexclude
+ */								//\fcvexclude
 int route_del(unsigned long addr)
 {
 	struct route_entry *rep;
 	struct route_entry **repp;
 
-	write_seqlock(&sl);
+	write_seqlock(&sl);				//\lnlbl{del:w_sqlock}
 	repp = &route_list.re_next;
 	for (;;) {
 		rep = *repp;
@@ -101,17 +104,18 @@ int route_del(unsigned long addr)
 			break;
 		if (rep->addr == addr) {
 			*repp = rep->re_next;
-			write_sequnlock(&sl);
+			write_sequnlock(&sl);		//\lnlbl{del:w_squnlock1}
 			smp_mb();
-			rep->re_freed = 1;
+			rep->re_freed = 1;		//\lnlbl{del:set_freed}
 			free(rep);
 			return 0;
 		}
 		repp = &rep->re_next;
 	}
-	write_sequnlock(&sl);
+	write_sequnlock(&sl);				//\lnlbl{del:w_squnlock2}
 	return -ENOENT;
 }
+//\end{snippet}
 
 /*
  * Clear all elements from the route table.
diff --git a/defer/seqlock.tex b/defer/seqlock.tex
index 7fabc35..3eabf31 100644
--- a/defer/seqlock.tex
+++ b/defer/seqlock.tex
@@ -263,100 +263,13 @@ increment of the sequence number on line~\lnref{inc}, then releases the lock.
 } \QuickQuizEnd
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
- 1 struct route_entry {
- 2   struct route_entry *re_next;
- 3   unsigned long addr;
- 4   unsigned long iface;
- 5   int re_freed;
- 6 };
- 7 struct route_entry route_list;
- 8 DEFINE_SEQ_LOCK(sl);
- 9
-10 unsigned long route_lookup(unsigned long addr)
-11 {
-12   struct route_entry *rep;
-13   struct route_entry **repp;
-14   unsigned long ret;
-15   unsigned long s;
-16
-17 retry:
-18   s = read_seqbegin(&sl);
-19   repp = &route_list.re_next;
-20   do {
-21     rep = READ_ONCE(*repp);
-22     if (rep == NULL) {
-23       if (read_seqretry(&sl, s))
-24         goto retry;
-25       return ULONG_MAX;
-26     }
-27     repp = &rep->re_next;
-28   } while (rep->addr != addr);
-29   if (READ_ONCE(rep->re_freed))
-30     abort();
-31   ret = rep->iface;
-32   if (read_seqretry(&sl, s))
-33     goto retry;
-34   return ret;
-35 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\input{CodeSamples/defer/route_seqlock@lookup.fcv}
 \caption{Sequence-Locked Pre-BSD Routing Table Lookup (BUGGY!!!)}
 \label{lst:defer:Sequence-Locked Pre-BSD Routing Table Lookup}
 \end{listing}
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
- 1 int route_add(unsigned long addr,
- 2               unsigned long interface)
- 3 {
- 4   struct route_entry *rep;
- 5
- 6   rep = malloc(sizeof(*rep));
- 7   if (!rep)
- 8     return -ENOMEM;
- 9   rep->addr = addr;
-10   rep->iface = interface;
-11   rep->re_freed = 0;
-12   write_seqlock(&sl);
-13   rep->re_next = route_list.re_next;
-14   route_list.re_next = rep;
-15   write_sequnlock(&sl);
-16   return 0;
-17 }
-18
-19 int route_del(unsigned long addr)
-20 {
-21   struct route_entry *rep;
-22   struct route_entry **repp;
-23
-24   write_seqlock(&sl);
-25   repp = &route_list.re_next;
-26   for (;;) {
-27     rep = *repp;
-28     if (rep == NULL)
-29       break;
-30     if (rep->addr == addr) {
-31       *repp = rep->re_next;
-32       write_sequnlock(&sl);
-33       smp_mb();
-34       rep->re_freed = 1;
-35       free(rep);
-36       return 0;
-37     }
-38     repp = &rep->re_next;
-39   }
-40   write_sequnlock(&sl);
-41   return -ENOENT;
-42 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\input{CodeSamples/defer/route_seqlock@add_del.fcv}
 \caption{Sequence-Locked Pre-BSD Routing Table Add/Delete (BUGGY!!!)}
 \label{lst:defer:Sequence-Locked Pre-BSD Routing Table Add/Delete}
 \end{listing}
@@ -370,19 +283,29 @@ shows \co{route_add()} and \co{route_del()} (\path{route_seqlock.c}).
 This implementation is once again similar to its counterparts in earlier
 sections, so only the differences will be highlighted.
 
+\begin{lineref}[ln:defer:route_seqlock:lookup]
 In
 Listing~\ref{lst:defer:Sequence-Locked Pre-BSD Routing Table Lookup},
-line~5 adds \co{->re_freed}, which is checked on lines~29 and~30.
-Line~8 adds a sequence lock, which is used by \co{route_lookup()}
-on lines~18, 23, and~32, with lines~24 and~33 branching back to
-the \co{retry} label on line~17.
+line~\lnref{struct:re_freed} adds \co{->re_freed}, which is checked on
+lines~\lnref{lookup:chk_freed} and~\lnref{lookup:abort}.
+Line~\lnref{struct:sl} adds a sequence lock, which is used by \co{route_lookup()}
+\end{lineref}
+\begin{lineref}[ln:defer:route_seqlock:lookup:lookup]
+on lines~\lnref{r_sqbegin}, \lnref{r_sqretry1}, and~\lnref{r_sqretry2},
+with lines~\lnref{goto_retry1} and~\lnref{goto_retry2} branching back to
+the \co{retry} label on line~\lnref{retry}.
 The effect is to retry any lookup that runs concurrently with an update.
+\end{lineref}
 
+\begin{lineref}[ln:defer:route_seqlock:add_del]
 In
 Listing~\ref{lst:defer:Sequence-Locked Pre-BSD Routing Table Add/Delete},
-lines~12, 15, 24, and~40 acquire and release the sequence lock,
-while lines~11, 33, and~44 handle \co{->re_freed}.
+lines~\lnref{add:w_sqlock}, \lnref{add:w_squnlock}, \lnref{del:w_sqlock},
+\lnref{del:w_squnlock1}, and~\lnref{del:w_squnlock2}
+acquire and release the sequence lock,
+while lines~\lnref{add:clr_freed} and~\lnref{del:set_freed} handle \co{->re_freed}.
 This implementation is therefore quite straightforward.
+\end{lineref}
 
 \begin{figure}[tb]
 \centering
-- 
2.7.4
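
A note on the "BUGGY!!!" captions carried over above: the sequence lock
makes the reader retry after a concurrent update, but it does nothing
to keep an element alive while the reader is still dereferencing it.
Annotated fragment of the lookup path (a sketch of the window, not code
from the book):

	/*
	 * Reader, after the do-while loop has found rep->addr == addr.
	 * If route_del() ran during that traversal (unlink,
	 * write_sequnlock(), rep->re_freed = 1, free(rep)), then rep may
	 * already point to freed memory here; read_seqretry() below will
	 * force a retry, but only after the freed element was accessed.
	 */
	if (READ_ONCE(rep->re_freed))
		abort();		/* catches the use-after-free during testing */
	ret = rep->iface;
	if (read_seqretry(&sl, s))
		goto retry;
	return ret;

As I read it, the ->re_freed flag and the abort() are there precisely
to make this window visible when the test program hits it.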




* [PATCH 5/6] defer: Employ new scheme for snippets in rcuintro and rcufundamental
  2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
                   ` (3 preceding siblings ...)
  2018-12-03 15:41 ` [PATCH 4/6] defer: Employ new scheme for snippets of route_seqlock.c Akira Yokosawa
@ 2018-12-03 15:42 ` Akira Yokosawa
  2018-12-03 15:44 ` [PATCH 6/6] defer: Employ new scheme for snippets of route_rcu.c Akira Yokosawa
  2018-12-03 17:23 ` [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Paul E. McKenney
  6 siblings, 0 replies; 8+ messages in thread
From: Akira Yokosawa @ 2018-12-03 15:42 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: perfbook, Akira Yokosawa

From 7bd805e63060278c9222b39c8c238227835f04a4 Mon Sep 17 00:00:00 2001
From: Akira Yokosawa <akiyks@gmail.com>
Date: Tue, 4 Dec 2018 00:16:42 +0900
Subject: [PATCH 5/6] defer: Employ new scheme for snippets in rcuintro and rcufundamental

Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
---
 defer/rcufundamental.tex | 381 ++++++++++++++++++++++-------------------------
 defer/rcuintro.tex       |  13 +-
 2 files changed, 179 insertions(+), 215 deletions(-)

diff --git a/defer/rcufundamental.tex b/defer/rcufundamental.tex
index 2660e7d..3a5fad2 100644
--- a/defer/rcufundamental.tex
+++ b/defer/rcufundamental.tex
@@ -69,26 +69,22 @@ summarizes RCU fundamentals.
 \label{sec:defer:Publish-Subscribe Mechanism}
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
-  1 struct foo {
-  2   int a;
-  3   int b;
-  4   int c;
-  5 };
-  6 struct foo *gp = NULL;
-  7
-  8 /* . . . */
-  9
- 10 p = kmalloc(sizeof(*p), GFP_KERNEL);
- 11 p->a = 1;
- 12 p->b = 2;
- 13 p->c = 3;
- 14 gp = p;
-\end{verbbox}
-}
-\centering
-\theverbbox
+\begin{VerbatimL}
+struct foo {
+	int a;
+	int b;
+	int c;
+};
+struct foo *gp = NULL;
+
+/* . . . */
+
+p = kmalloc(sizeof(*p), GFP_KERNEL);
+p->a = 1;
+p->b = 2;
+p->c = 3;
+gp = p;
+\end{VerbatimL}
 \caption{Data Structure Publication (Unsafe)}
 \label{lst:defer:Data Structure Publication (Unsafe)}
 \end{listing}
@@ -116,17 +112,12 @@ We therefore encapsulate them into a primitive
 \co{rcu_assign_pointer()} that has publication semantics.
 The last four lines would then be as follows:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 p->a = 1;
-  2 p->b = 2;
-  3 p->c = 3;
-  4 rcu_assign_pointer(gp, p);
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+p->a = 1;
+p->b = 2;
+p->c = 3;
+rcu_assign_pointer(gp, p);
+\end{VerbatimN}
 
 The \co{rcu_assign_pointer()}
 would \emph{publish} the new structure, forcing both the compiler
@@ -137,17 +128,12 @@ However, it is not sufficient to only enforce ordering at the
 updater, as the reader must enforce proper ordering as well.
 Consider for example the following code fragment:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 p = gp;
-  2 if (p != NULL) {
-  3   do_something_with(p->a, p->b, p->c);
-  4 }
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+p = gp;
+if (p != NULL) {
+	do_something_with(p->a, p->b, p->c);
+}
+\end{VerbatimN}
 
 Although this code fragment might well seem immune to misordering,
 unfortunately, the
@@ -177,19 +163,14 @@ directives are required for this purpose:\footnote{
 	\co{memory_order_acquire}, thus emitting a needless memory-barrier
 	instruction on weakly ordered systems.)}
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 rcu_read_lock();
-  2 p = rcu_dereference(gp);
-  3 if (p != NULL) {
-  4   do_something_with(p->a, p->b, p->c);
-  5 }
-  6 rcu_read_unlock();
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+rcu_read_lock();
+p = rcu_dereference(gp);
+if (p != NULL) {
+	do_something_with(p->a, p->b, p->c);
+}
+rcu_read_unlock();
+\end{VerbatimN}
 
 The \co{rcu_dereference()} primitive can thus be thought of
 as \emph{subscribing} to a given value of the specified pointer,
@@ -240,27 +221,25 @@ Figure~\ref{fig:defer:Linux Linked List Abbreviated},
 which shows only the non-header (blue) elements.
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
-  1 struct foo {
-  2   struct list_head *list;
-  3   int a;
-  4   int b;
-  5   int c;
-  6 };
-  7 LIST_HEAD(head);
-  8
-  9 /* . . . */
- 10
- 11 p = kmalloc(sizeof(*p), GFP_KERNEL);
- 12 p->a = 1;
- 13 p->b = 2;
- 14 p->c = 3;
- 15 list_add_rcu(&p->list, &head);
-\end{verbbox}
-}
-\centering
-\theverbbox
+\begin{linelabel}[ln:defer:RCU Data Structure Publication]
+\begin{VerbatimL}[commandchars=\\\[\]]
+struct foo {
+	struct list_head *list;
+	int a;
+	int b;
+	int c;
+};
+LIST_HEAD(head);
+
+/* . . . */
+
+p = kmalloc(sizeof(*p), GFP_KERNEL);
+p->a = 1;
+p->b = 2;
+p->c = 3;
+list_add_rcu(&p->list, &head);		\lnlbl[add_rcu]
+\end{VerbatimL}
+\end{linelabel}
 \caption{RCU Data Structure Publication}
 \label{lst:defer:RCU Data Structure Publication}
 \end{listing}
@@ -269,7 +248,8 @@ Adapting the pointer-publish example for the linked list results in
 the code shown in
 Listing~\ref{lst:defer:RCU Data Structure Publication}.
 
-Line~15 must be protected by some synchronization mechanism (most
+Line~\ref{ln:defer:RCU Data Structure Publication:add_rcu}
+must be protected by some synchronization mechanism (most
 commonly some sort of lock) to prevent multiple \co{list_add_rcu()}
 instances from executing concurrently.
 However, such synchronization does not prevent this \co{list_add()}
@@ -277,18 +257,13 @@ instance from executing concurrently with RCU readers.
 
 Subscribing to an RCU-protected list is straightforward:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 rcu_read_lock();
-  2 list_for_each_entry_rcu(p, head, list) {
-  3   do_something_with(p->a, p->b, p->c);
-  4 }
-  5 rcu_read_unlock();
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+rcu_read_lock();
+list_for_each_entry_rcu(p, head, list) {
+	do_something_with(p->a, p->b, p->c);
+}
+rcu_read_unlock();
+\end{VerbatimN}
 
 The \co{list_add_rcu()} primitive publishes an entry, inserting it at
 the head of the specified list, guaranteeing that the corresponding
@@ -331,27 +306,25 @@ in the same way lists are, as shown in
 Figure~\ref{fig:defer:Linux Linked List Abbreviated}.
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
-  1 struct foo {
-  2   struct hlist_node *list;
-  3   int a;
-  4   int b;
-  5   int c;
-  6 };
-  7 HLIST_HEAD(head);
-  8
-  9 /* . . . */
- 10
- 11 p = kmalloc(sizeof(*p), GFP_KERNEL);
- 12 p->a = 1;
- 13 p->b = 2;
- 14 p->c = 3;
- 15 hlist_add_head_rcu(&p->list, &head);
-\end{verbbox}
-}
-\centering
-\theverbbox
+\begin{linelabel}[ln:defer:RCU hlist Publication]
+\begin{VerbatimL}[commandchars=\\\[\]]
+struct foo {
+	struct hlist_node *list;
+	int a;
+	int b;
+	int c;
+};
+HLIST_HEAD(head);
+
+/* . . . */
+
+p = kmalloc(sizeof(*p), GFP_KERNEL);
+p->a = 1;
+p->b = 2;
+p->c = 3;
+hlist_add_head_rcu(&p->list, &head);	\lnlbl[add_head]
+\end{VerbatimL}
+\end{linelabel}
 \caption{RCU {\tt hlist} Publication}
 \label{lst:defer:RCU hlist Publication}
 \end{listing}
@@ -360,24 +333,20 @@ Publishing a new element to an RCU-protected hlist is quite similar
 to doing so for the circular list,
 as shown in Listing~\ref{lst:defer:RCU hlist Publication}.
 
-As before, line~15 must be protected by some sort of synchronization
+As before, line~\ref{ln:defer:RCU hlist Publication:add_head}
+must be protected by some sort of synchronization
 mechanism, for example, a lock.
 
 Subscribing to an RCU-protected hlist is also similar to the
 circular list:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 rcu_read_lock();
-  2 hlist_for_each_entry_rcu(p, head, list) {
-  3   do_something_with(p->a, p->b, p->c);
-  4 }
-  5 rcu_read_unlock();
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+rcu_read_lock();
+hlist_for_each_entry_rcu(p, head, list) {
+	do_something_with(p->a, p->b, p->c);
+}
+rcu_read_unlock();
+\end{VerbatimN}
 
 \begin{table*}[tb]
 \renewcommand*{\arraystretch}{1.2}
@@ -488,33 +457,31 @@ RCU to wait for readers:
 \end{enumerate}
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
-  1 struct foo {
-  2   struct list_head *list;
-  3   int a;
-  4   int b;
-  5   int c;
-  6 };
-  7 LIST_HEAD(head);
-  8
-  9 /* . . . */
- 10
- 11 p = search(head, key);
- 12 if (p == NULL) {
- 13   /* Take appropriate action, unlock, & return. */
- 14 }
- 15 q = kmalloc(sizeof(*p), GFP_KERNEL);
- 16 *q = *p;
- 17 q->b = 2;
- 18 q->c = 3;
- 19 list_replace_rcu(&p->list, &q->list);
- 20 synchronize_rcu();
- 21 kfree(p);
-\end{verbbox}
+\begin{linelabel}[ln:defer:Canonical RCU Replacement Example]
+\begin{VerbatimL}[commandchars=\\\[\]]
+struct foo {
+	struct list_head *list;
+	int a;
+	int b;
+	int c;
+};
+LIST_HEAD(head);
+
+/* . . . */
+
+p = search(head, key);			\lnlbl[search]
+if (p == NULL) {
+	/* Take appropriate action, unlock, & return. */
 }
-\centering
-\theverbbox
+q = kmalloc(sizeof(*p), GFP_KERNEL);
+*q = *p;				\lnlbl[copy]
+q->b = 2;				\lnlbl[update]
+q->c = 3;
+list_replace_rcu(&p->list, &q->list);	\lnlbl[replace]
+synchronize_rcu();			\lnlbl[sync_rcu]
+kfree(p);				\lnlbl[kfree]
+\end{VerbatimL}
+\end{linelabel}
 \caption{Canonical RCU Replacement Example}
 \label{lst:defer:Canonical RCU Replacement Example}
 \end{listing}
@@ -524,10 +491,15 @@ Listing~\ref{lst:defer:Canonical RCU Replacement Example},
 adapted from those in Section~\ref{sec:defer:Publish-Subscribe Mechanism},
 demonstrates this process, with field \co{a} being the search key.
 
-Lines~19, 20, and~21 implement the three steps called out above.
-Lines~16-19 gives RCU (``read-copy update'') its name: while permitting
-concurrent \emph{reads}, line~16 \emph{copies} and lines~17-19
+\begin{lineref}[ln:defer:Canonical RCU Replacement Example]
+Lines~\lnref{replace}, \lnref{sync_rcu}, and~\lnref{kfree}
+implement the three steps called out above.
+Lines~\lnref{copy}-\lnref{replace}
+gives RCU (``read-copy update'') its name: while permitting
+concurrent \emph{reads}, line~\lnref{copy} \emph{copies} and
+lines~\lnref{update}-\lnref{replace}
 do an \emph{update}.
+\end{lineref}
 
 As discussed in Section~\ref{sec:defer:Introduction to RCU},
 the \co{synchronize_rcu()} primitive can be quite simple
@@ -563,24 +535,24 @@ We can now revisit the deletion example from
 Section~\ref{sec:defer:Introduction to RCU},
 but now with the benefit of a firm understanding of the fundamental
 concepts underlying RCU.
+\begin{lineref}[ln:defer:Canonical RCU Replacement Example]
 To begin this new version of the deletion example,
-we will modify lines~11-21 in
+we will modify
+lines~\lnref{search}-\lnref{kfree} in
 Listing~\ref{lst:defer:Canonical RCU Replacement Example}
 to read as follows:
-
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 p = search(head, key);
-  2 if (p != NULL) {
-  3   list_del_rcu(&p->list);
-  4   synchronize_rcu();
-  5   kfree(p);
-  6 }
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\end{lineref}
+
+\begin{linelabel}[ln:defer:RCU Deletion From Linked List]
+\begin{VerbatimN}[commandchars=\\\[\]]
+p = search(head, key);
+if (p != NULL) {
+	list_del_rcu(&p->list);		\lnlbl[del_rcu]
+	synchronize_rcu();		\lnlbl[sync_rcu]
+	kfree(p);
+}
+\end{VerbatimN}
+\end{linelabel}
 
 \begin{figure}[tb]
 \centering
@@ -601,8 +573,9 @@ Please note that
 we have omitted the backwards pointers and the link from the tail
 of the list to the head for clarity.
 
+\begin{lineref}[ln:defer:RCU Deletion From Linked List]
 After the \co{list_del_rcu()} on
-line~3 has completed, the \co{5,6,7}~element
+line~\lnref{del_rcu} has completed, the \co{5,6,7}~element
 has been removed from the list, as shown in the second row of
 Figure~\ref{fig:defer:RCU Deletion From Linked List}.
 Since readers do not synchronize directly with updaters,
@@ -626,12 +599,13 @@ element~\co{5,6,7} after exiting from their RCU read-side
 critical sections.
 Therefore,
 once the \co{synchronize_rcu()} on
-line~4 completes, so that all pre-existing readers are
+line~\lnref{sync_rcu} completes, so that all pre-existing readers are
 guaranteed to have completed,
 there can be no more readers referencing this
 element, as indicated by its green shading on the third row of
 Figure~\ref{fig:defer:RCU Deletion From Linked List}.
 We are thus back to a single version of the list.
+\end{lineref}
 
 At this point, the \co{5,6,7}~element may safely be
 freed, as shown on the final row of
@@ -648,20 +622,17 @@ here are the last few lines of the
 example shown in
 Listing~\ref{lst:defer:Canonical RCU Replacement Example}:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 q = kmalloc(sizeof(*p), GFP_KERNEL);
-  2 *q = *p;
-  3 q->b = 2;
-  4 q->c = 3;
-  5 list_replace_rcu(&p->list, &q->list);
-  6 synchronize_rcu();
-  7 kfree(p);
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{linelabel}[ln:defer:Canonical RCU Replacement Example (2nd)]
+\begin{VerbatimN}[commandchars=\\\[\],firstnumber=15]
+q = kmalloc(sizeof(*p), GFP_KERNEL);	\lnlbl[kmalloc]
+*q = *p;				\lnlbl[copy]
+q->b = 2;				\lnlbl[update1]
+q->c = 3;				\lnlbl[update2]
+list_replace_rcu(&p->list, &q->list);	\lnlbl[replace]
+synchronize_rcu();			\lnlbl[sync_rcu]
+kfree(p);				\lnlbl[kfree]
+\end{VerbatimN}
+\end{linelabel}
 
 \begin{figure}[tbp]
 \centering
@@ -689,24 +660,26 @@ The following text describes how to replace the \co{5,6,7} element
 with \co{5,2,3} in such a way that any given reader sees one of these
 two values.
 
-Line~1 \co{kmalloc()}s a replacement element, as follows,
+\begin{lineref}[ln:defer:Canonical RCU Replacement Example (2nd)]
+Line~\lnref{kmalloc} \co{kmalloc()}s a replacement element, as follows,
 resulting in the state as shown in the second row of
 Figure~\ref{fig:defer:RCU Replacement in Linked List}.
 At this point, no reader can hold a reference to the newly allocated
 element (as indicated by its green shading), and it is uninitialized
 (as indicated by the question marks).
 
-Line~2 copies the old element to the new one, resulting in the
+Line~\lnref{copy} copies the old element to the new one, resulting in the
 state as shown in the third row of
 Figure~\ref{fig:defer:RCU Replacement in Linked List}.
 The newly allocated element still cannot be referenced by readers, but
 it is now initialized.
 
-Line~3 updates \co{q->b} to the value ``2'', and
-line~4 updates \co{q->c} to the value ``3'', as shown on the fourth row of
+Line~\lnref{update1} updates \co{q->b} to the value ``2'', and
+line~\lnref{update2} updates \co{q->c} to the value ``3'',
+as shown on the fourth row of
 Figure~\ref{fig:defer:RCU Replacement in Linked List}.
 
-Now, line~5 does the replacement, so that the new element is
+Now, line~\lnref{replace} does the replacement, so that the new element is
 finally visible to readers, and hence is shaded red, as shown on
 the fifth row of
 Figure~\ref{fig:defer:RCU Replacement in Linked List}.
@@ -716,7 +689,7 @@ therefore now shaded yellow), but
 new readers will instead see the \co{5,2,3} element.
 But any given reader is guaranteed to see some well-defined list.
 
-After the \co{synchronize_rcu()} on line~6 returns,
+After the \co{synchronize_rcu()} on line~\lnref{sync_rcu} returns,
 a grace period will have elapsed, and so all reads that started before the
 \co{list_replace_rcu()} will have completed.
 In particular, any readers that might have been holding references
@@ -729,9 +702,10 @@ Figure~\ref{fig:defer:RCU Replacement in Linked List}.
 As far as the readers are concerned, we are back to having a single version
 of the list, but with the new element in place of the old.
 
-After the \co{kfree()} on line~7 completes, the list will
+After the \co{kfree()} on line~\lnref{kfree} completes, the list will
 appear as shown on the final row of
 Figure~\ref{fig:defer:RCU Replacement in Linked List}.
+\end{lineref}
 
 Despite the fact that RCU was named after the replacement case,
 the vast majority of RCU usage within the Linux kernel relies on
@@ -753,23 +727,18 @@ versions of the list active at a given time.
 	Listing~\ref{lst:defer:Concurrent RCU Deletion}.
 
 \begin{listing}[htbp]
-\scriptsize
-{
-\begin{verbbox}
-  1 spin_lock(&mylock);
-  2 p = search(head, key);
-  3 if (p == NULL)
-  4   spin_unlock(&mylock);
-  5 else {
-  6   list_del_rcu(&p->list);
-  7   spin_unlock(&mylock);
-  8   synchronize_rcu();
-  9   kfree(p);
- 10 }
-\end{verbbox}
+\begin{VerbatimL}
+spin_lock(&mylock);
+p = search(head, key);
+if (p == NULL)
+	spin_unlock(&mylock);
+else {
+	list_del_rcu(&p->list);
+	spin_unlock(&mylock);
+	synchronize_rcu();
+	kfree(p);
 }
-\centering
-\theverbbox
+\end{VerbatimL}
 \caption{Concurrent RCU Deletion}
 \label{lst:defer:Concurrent RCU Deletion}
 \end{listing}
diff --git a/defer/rcuintro.tex b/defer/rcuintro.tex
index 6e909c9..3259a19 100644
--- a/defer/rcuintro.tex
+++ b/defer/rcuintro.tex
@@ -146,15 +146,10 @@ with time advancing from the top of the figure to the bottom.
 Although production-quality implementations of this approach can be
 quite complex, a toy implementation is exceedingly simple:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 for_each_online_cpu(cpu)
-  2   run_on(cpu);
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+for_each_online_cpu(cpu)
+	run_on(cpu);
+\end{VerbatimN}
 
 The \co{for_each_online_cpu()} primitive iterates over all CPUs, and
 the \co{run_on()} function causes the current thread to execute on the
-- 
2.7.4
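
Since the publish and subscribe fragments end up in separate hunks
above, here they are side by side as one minimal sketch, reusing the
book's illustrative names (gp, struct foo, do_something_with()); the
updater()/reader() wrappers are mine:

struct foo {
	int a;
	int b;
	int c;
};
struct foo *gp = NULL;

void updater(void)
{
	struct foo *p;

	p = kmalloc(sizeof(*p), GFP_KERNEL);	/* error checking omitted, as in the book's fragment */
	p->a = 1;
	p->b = 2;
	p->c = 3;
	rcu_assign_pointer(gp, p);	/* orders initialization before publication */
}

void reader(void)
{
	struct foo *p;

	rcu_read_lock();
	p = rcu_dereference(gp);	/* subscribes; pairs with rcu_assign_pointer() */
	if (p != NULL)
		do_something_with(p->a, p->b, p->c);
	rcu_read_unlock();
}

The plain "gp = p" and "p = gp" versions shown earlier in the hunks are
exactly what these two primitives replace.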




* [PATCH 6/6] defer: Employ new scheme for snippets of route_rcu.c
  2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
                   ` (4 preceding siblings ...)
  2018-12-03 15:42 ` [PATCH 5/6] defer: Employ new scheme for snippets in rcuintro and rcufundamental Akira Yokosawa
@ 2018-12-03 15:44 ` Akira Yokosawa
  2018-12-03 17:23 ` [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Paul E. McKenney
  6 siblings, 0 replies; 8+ messages in thread
From: Akira Yokosawa @ 2018-12-03 15:44 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: perfbook, Akira Yokosawa

From 955dcb167cc6d8136c693ad193e0cd711a495150 Mon Sep 17 00:00:00 2001
From: Akira Yokosawa <akiyks@gmail.com>
Date: Tue, 4 Dec 2018 00:17:12 +0900
Subject: [PATCH 6/6] defer: Employ new scheme for snippets of route_rcu.c

Also convert inline snippets in rcuusage.tex.

Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
---
 CodeSamples/defer/route_rcu.c |  75 +++---
 defer/rcuusage.tex            | 523 +++++++++++++++++-------------------------
 2 files changed, 256 insertions(+), 342 deletions(-)

diff --git a/CodeSamples/defer/route_rcu.c b/CodeSamples/defer/route_rcu.c
index 1fd69ea..168eead 100644
--- a/CodeSamples/defer/route_rcu.c
+++ b/CodeSamples/defer/route_rcu.c
@@ -31,48 +31,51 @@
 #include "../api.h"
 
 /* Route-table entry to be included in the routing list. */
+//\begin{snippet}[labelbase=ln:defer:route_rcu:lookup,commandchars=\\\[\]]
 struct route_entry {
-	struct rcu_head rh;
+	struct rcu_head rh;			//\lnlbl{rh}
 	struct cds_list_head re_next;
 	unsigned long addr;
 	unsigned long iface;
-	int re_freed;
+	int re_freed;				//\lnlbl{re_freed}
 };
-
+								//\fcvexclude
 CDS_LIST_HEAD(route_list);
 DEFINE_SPINLOCK(routelock);
 
-static void re_free(struct route_entry *rep)
-{
-	WRITE_ONCE(rep->re_freed, 1);
-	free(rep);
-}
-
-/*
- * Look up a route entry, return the corresponding interface. 
- */
+static void re_free(struct route_entry *rep)			//\fcvexclude
+{								//\fcvexclude
+	WRITE_ONCE(rep->re_freed, 1);				//\fcvexclude
+	free(rep);						//\fcvexclude
+}								//\fcvexclude
+								//\fcvexclude
+/*								  \fcvexclude
+ * Look up a route entry, return the corresponding interface.	  \fcvexclude
+ */								//\fcvexclude
 unsigned long route_lookup(unsigned long addr)
 {
 	struct route_entry *rep;
 	unsigned long ret;
 
-	rcu_read_lock();
+	rcu_read_lock();				//\lnlbl{lock}
 	cds_list_for_each_entry_rcu(rep, &route_list, re_next) {
 		if (rep->addr == addr) {
 			ret = rep->iface;
-			if (READ_ONCE(rep->re_freed))
-				abort();
-			rcu_read_unlock();
+			if (READ_ONCE(rep->re_freed))	//\lnlbl{chk_freed}
+				abort();		//\lnlbl{abort}
+			rcu_read_unlock();		//\lnlbl{unlock1}
 			return ret;
 		}
 	}
-	rcu_read_unlock();
+	rcu_read_unlock();				//\lnlbl{unlock2}
 	return ULONG_MAX;
 }
+//\end{snippet}
 
 /*
  * Add an element to the route table.
  */
+//\begin{snippet}[labelbase=ln:defer:route_rcu:add_del,commandchars=\\\[\]]
 int route_add(unsigned long addr, unsigned long interface)
 {
 	struct route_entry *rep;
@@ -83,38 +86,46 @@ int route_add(unsigned long addr, unsigned long interface)
 	rep->addr = addr;
 	rep->iface = interface;
 	rep->re_freed = 0;
-	spin_lock(&routelock);
-	cds_list_add_rcu(&rep->re_next, &route_list);
-	spin_unlock(&routelock);
+	spin_lock(&routelock);				//\lnlbl{add:lock}
+	cds_list_add_rcu(&rep->re_next, &route_list);	//\lnlbl{add:add_rcu}
+	spin_unlock(&routelock);			//\lnlbl{add:unlock}
 	return 0;
 }
 
-static void route_cb(struct rcu_head *rhp)
+static void route_cb(struct rcu_head *rhp)		//\lnlbl{cb:b}
 {
-	struct route_entry *rep = container_of(rhp, struct route_entry, rh);
+	struct route_entry *rep = container_of(rhp, struct route_entry, rh); //\fcvexclude
+								//\fcvexclude
+	re_free(rep);						//\fcvexclude
+/* --- Alternative code for code snippet: begin ---		  \fcvexclude
+	struct route_entry *rep;
 
-	re_free(rep);
-}
+	rep = container_of(rhp, struct route_entry, rh);
+	WRITE_ONCE(rep->re_freed, 1);
+	free(rep);
+   --- Alternative code for code snippet: end --- */		//\fcvexclude
+}							//\lnlbl{cb:e}
 
-/*
- * Remove the specified element from the route table.
- */
+/*								  \fcvexclude
+ * Remove the specified element from the route table.		  \fcvexclude
+ */								//\fcvexclude
 int route_del(unsigned long addr)
 {
 	struct route_entry *rep;
 
-	spin_lock(&routelock);
+	spin_lock(&routelock);				//\lnlbl{del:lock}
 	cds_list_for_each_entry(rep, &route_list, re_next) {
 		if (rep->addr == addr) {
-			cds_list_del_rcu(&rep->re_next);
-			spin_unlock(&routelock);
-			call_rcu(&rep->rh, route_cb);
+			cds_list_del_rcu(&rep->re_next);//\lnlbl{del:del_rcu}
+			spin_unlock(&routelock);	//\lnlbl{del:unlock1}
+			call_rcu(&rep->rh, route_cb);	//\lnlbl{del:call_rcu}
 			return 0;
 		}
 	}
-	spin_unlock(&routelock);
+	spin_unlock(&routelock);			//\lnlbl{del:unlock2}
 	return -ENOENT;
 }
+//\end{snippet}
 
 /*
  * Clear all elements from the route table.
diff --git a/defer/rcuusage.tex b/defer/rcuusage.tex
index ed0c8be..a6772a4 100644
--- a/defer/rcuusage.tex
+++ b/defer/rcuusage.tex
@@ -44,95 +44,13 @@ Section~\ref{sec:defer:RCU Usage Summary} provides a summary.
 \label{sec:defer:RCU for Pre-BSD Routing}
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
- 1 struct route_entry {
- 2   struct rcu_head rh;
- 3   struct cds_list_head re_next;
- 4   unsigned long addr;
- 5   unsigned long iface;
- 6   int re_freed;
- 7 };
- 8 CDS_LIST_HEAD(route_list);
- 9 DEFINE_SPINLOCK(routelock);
-10
-11 unsigned long route_lookup(unsigned long addr)
-12 {
-13   struct route_entry *rep;
-14   unsigned long ret;
-15
-16   rcu_read_lock();
-17   cds_list_for_each_entry_rcu(rep, &route_list,
-18                               re_next) {
-19     if (rep->addr == addr) {
-20       ret = rep->iface;
-21       if (READ_ONCE(rep->re_freed))
-22         abort();
-23       rcu_read_unlock();
-24       return ret;
-25     }
-26   }
-27   rcu_read_unlock();
-28   return ULONG_MAX;
-29 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\input{CodeSamples/defer/route_rcu@lookup.fcv}
 \caption{RCU Pre-BSD Routing Table Lookup}
 \label{lst:defer:RCU Pre-BSD Routing Table Lookup}
 \end{listing}
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
- 1 int route_add(unsigned long addr,
- 2               unsigned long interface)
- 3 {
- 4   struct route_entry *rep;
- 5
- 6   rep = malloc(sizeof(*rep));
- 7   if (!rep)
- 8     return -ENOMEM;
- 9   rep->addr = addr;
-10   rep->iface = interface;
-11   rep->re_freed = 0;
-12   spin_lock(&routelock);
-13   cds_list_add_rcu(&rep->re_next, &route_list);
-14   spin_unlock(&routelock);
-15   return 0;
-16 }
-17
-18 static void route_cb(struct rcu_head *rhp)
-19 {
-20   struct route_entry *rep;
-21
-22   rep = container_of(rhp, struct route_entry, rh);
-23   WRITE_ONCE(rep->re_freed, 1);
-24   free(rep);
-25 }
-26
-27 int route_del(unsigned long addr)
-28 {
-29   struct route_entry *rep;
-30
-31   spin_lock(&routelock);
-32   cds_list_for_each_entry(rep, &route_list,
-33                           re_next) {
-34     if (rep->addr == addr) {
-35       cds_list_del_rcu(&rep->re_next);
-36       spin_unlock(&routelock);
-37       call_rcu(&rep->rh, route_cb);
-38       return 0;
-39     }
-40   }
-41   spin_unlock(&routelock);
-42   return -ENOENT;
-43 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\input{CodeSamples/defer/route_rcu@add_del.fcv}
 \caption{RCU Pre-BSD Routing Table Add/Delete}
 \label{lst:defer:RCU Pre-BSD Routing Table Add/Delete}
 \end{listing}
@@ -144,17 +62,24 @@ show code for an RCU-protected Pre-BSD routing table
 The former shows data structures and \co{route_lookup()},
 and the latter shows \co{route_add()} and \co{route_del()}.
 
+\begin{lineref}[ln:defer:route_rcu:lookup]
 In Listing~\ref{lst:defer:RCU Pre-BSD Routing Table Lookup},
-line~2 adds the \co{->rh} field used by RCU reclamation,
-line~6 adds the \co{->re_freed} use-after-free-check field,
-lines~16, 17, 23, and~27 add RCU read-side protection,
-and lines~21 and~22 add the use-after-free check.
+line~\lnref{rh} adds the \co{->rh} field used by RCU reclamation,
+line~\lnref{re_freed} adds the \co{->re_freed} use-after-free-check field,
+lines~\lnref{lock}, \lnref{unlock1}, and~\lnref{unlock2}
+add RCU read-side protection,
+and lines~\lnref{chk_freed} and~\lnref{abort} add the use-after-free check.
+\end{lineref}
+\begin{lineref}[ln:defer:route_rcu:add_del]
 In Listing~\ref{lst:defer:RCU Pre-BSD Routing Table Add/Delete},
-lines~12, 14, 31, 36, and~41 add update-side locking,
-lines~13 and~35 add RCU update-side protection,
-line~37 causes \co{route_cb()} to be invoked after a grace period elapses,
-and lines~18-25 define \co{route_cb()}.
+lines~\lnref{add:lock}, \lnref{add:unlock}, \lnref{del:lock},
+\lnref{del:unlock1}, and~\lnref{del:unlock2} add update-side locking,
+lines~\lnref{add:add_rcu} and~\lnref{del:del_rcu} add RCU update-side protection,
+line~\lnref{del:call_rcu} causes \co{route_cb()} to be invoked after
+a grace period elapses,
+and lines~\lnref{cb:b}-\lnref{cb:e} define \co{route_cb()}.
 This is minimal added code for a working concurrent implementation.
+\end{lineref}
 
 \begin{figure}[tb]
 \centering
@@ -279,42 +204,27 @@ Figure~\ref{fig:defer:Performance Advantage of RCU Over Reader-Writer Locking}.
 	First, consider that the inner loop used to
 	take this measurement is as follows:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 for (i = 0; i < CSCOUNT_SCALE; i++) {
-  2   rcu_read_lock();
-  3   rcu_read_unlock();
-  4 }
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+for (i = 0; i < CSCOUNT_SCALE; i++) {
+	rcu_read_lock();
+	rcu_read_unlock();
+}
+\end{VerbatimN}
 
 	Next, consider the effective definitions of \co{rcu_read_lock()}
 	and \co{rcu_read_unlock()}:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 #define rcu_read_lock()   do { } while (0)
-  2 #define rcu_read_unlock() do { } while (0)
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+#define rcu_read_lock()   do { } while (0)
+#define rcu_read_unlock() do { } while (0)
+\end{VerbatimN}
 
 	Consider also that the compiler does simple optimizations,
 	allowing it to replace the loop with:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
+\begin{VerbatimN}
 i = CSCOUNT_SCALE;
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\end{VerbatimN}
 
 	So the ``measurement'' of 100 femtoseconds is simply the fixed
 	overhead of the timing measurements divided by the number of
@@ -407,16 +317,11 @@ cycle.
 	RCU read-side primitives is via the following (illegal) sequence
 	of statements:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\small
-\begin{verbatim}
+\begin{VerbatimU}
 rcu_read_lock();
 synchronize_rcu();
 rcu_read_unlock();
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\end{VerbatimU}
 
 	The \co{synchronize_rcu()} cannot return until all
 	pre-existing RCU read-side critical sections complete, but
@@ -445,23 +350,18 @@ Attempting to do such an upgrade with reader-writer locking results
 in deadlock.
 A sample code fragment that does an RCU read-to-update upgrade follows:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 rcu_read_lock();
-  2 list_for_each_entry_rcu(p, &head, list_field) {
-  3   do_something_with(p);
-  4   if (need_update(p)) {
-  5     spin_lock(my_lock);
-  6     do_update(p);
-  7     spin_unlock(&my_lock);
-  8   }
-  9 }
- 10 rcu_read_unlock();
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+rcu_read_lock();
+list_for_each_entry_rcu(p, &head, list_field) {
+	do_something_with(p);
+	if (need_update(p)) {
+		spin_lock(my_lock);
+		do_update(p);
+		spin_unlock(&my_lock);
+	}
+}
+rcu_read_unlock();
+\end{VerbatimN}
 
 Note that \co{do_update()} is executed under
 the protection of the lock \emph{and} under RCU read-side protection.
@@ -691,17 +591,12 @@ the RCU read-side primitives may be used as a restricted
 reference-counting mechanism.
 For example, consider the following code fragment:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 rcu_read_lock();  /* acquire reference. */
-  2 p = rcu_dereference(head);
-  3 /* do something with p. */
-  4 rcu_read_unlock();  /* release reference. */
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+rcu_read_lock();  /* acquire reference. */
+p = rcu_dereference(head);
+/* do something with p. */
+rcu_read_unlock();  /* release reference. */
+\end{VerbatimN}
 
 The \co{rcu_read_lock()} primitive can be thought of as
 acquiring a reference to \co{p}, because a grace period
@@ -716,20 +611,15 @@ from one task to another.
 Regardless of these restrictions,
 the following code can safely delete \co{p}:
 
-\vspace{5pt}
-\begin{minipage}[t]{\columnwidth}
-\scriptsize
-\begin{verbatim}
-  1 spin_lock(&mylock);
-  2 p = head;
-  3 rcu_assign_pointer(head, NULL);
-  4 spin_unlock(&mylock);
-  5 /* Wait for all references to be released. */
-  6 synchronize_rcu();
-  7 kfree(p);
-\end{verbatim}
-\end{minipage}
-\vspace{5pt}
+\begin{VerbatimN}
+spin_lock(&mylock);
+p = head;
+rcu_assign_pointer(head, NULL);
+spin_unlock(&mylock);
+/* Wait for all references to be released. */
+synchronize_rcu();
+kfree(p);
+\end{VerbatimN}
 
 The assignment to \co{head} prevents any future references
 to \co{p} from being acquired, and the \co{synchronize_rcu()}
@@ -877,55 +767,57 @@ guaranteed to remain in existence for the duration of that RCU
 read-side critical section.
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
-  1 int delete(int key)
-  2 {
-  3   struct element *p;
-  4   int b;
-  5 
-  6   b = hashfunction(key);
-  7   rcu_read_lock();
-  8   p = rcu_dereference(hashtable[b]);
-  9   if (p == NULL || p->key != key) {
- 10     rcu_read_unlock();
- 11     return 0;
- 12   }
- 13   spin_lock(&p->lock);
- 14   if (hashtable[b] == p && p->key == key) {
- 15     rcu_read_unlock();
- 16     rcu_assign_pointer(hashtable[b], NULL);
- 17     spin_unlock(&p->lock);
- 18     synchronize_rcu();
- 19     kfree(p);
- 20     return 1;
- 21   }
- 22   spin_unlock(&p->lock);
- 23   rcu_read_unlock();
- 24   return 0;
- 25 }
-\end{verbbox}
+\begin{linelabel}[ln:defer:Existence Guarantees Enable Per-Element Locking]
+\begin{VerbatimL}[commandchars=\\\@\$]
+int delete(int key)
+{
+	struct element *p;
+	int b;
+
+	b = hashfunction(key);			\lnlbl@hash$
+	rcu_read_lock();			\lnlbl@rdlock$
+	p = rcu_dereference(hashtable[b]);
+	if (p == NULL || p->key != key) {	\lnlbl@chkkey$
+		rcu_read_unlock();		\lnlbl@rdunlock1$
+		return 0;			\lnlbl@ret_0:a$
+	}
+	spin_lock(&p->lock);			\lnlbl@acq$
+	if (hashtable[b] == p && p->key == key) {\lnlbl@chkkey2$
+		rcu_read_unlock();		\lnlbl@rdunlock2$
+		rcu_assign_pointer(hashtable[b], NULL);\lnlbl@remove$
+		spin_unlock(&p->lock);		\lnlbl@rel1$
+		synchronize_rcu();		\lnlbl@sync_rcu$
+		kfree(p);			\lnlbl@kfree$
+		return 1;			\lnlbl@ret_1$
+	}
+	spin_unlock(&p->lock);			\lnlbl@rel2$
+	rcu_read_unlock();			\lnlbl@rdunlock3$
+	return 0;				\lnlbl@ret_0:b$
 }
-\centering
-\theverbbox
+\end{VerbatimL}
+\end{linelabel}
 \caption{Existence Guarantees Enable Per-Element Locking}
 \label{lst:defer:Existence Guarantees Enable Per-Element Locking}
 \end{listing}
 
+\begin{lineref}[ln:defer:Existence Guarantees Enable Per-Element Locking]
 Listing~\ref{lst:defer:Existence Guarantees Enable Per-Element Locking}
 demonstrates how RCU-based existence guarantees can enable
 per-element locking via a function that deletes an element from
 a hash table.
-Line~6 computes a hash function, and line~7 enters an RCU
+Line~\lnref{hash} computes a hash function, and line~\lnref{rdlock} enters an RCU
 read-side critical section.
-If line~9 finds that the corresponding bucket of the hash table is
+If line~\lnref{chkkey} finds that the corresponding bucket of the hash table is
 empty or that the element present is not the one we wish to delete,
-then line~10 exits the RCU read-side critical section and line~11
+then line~\lnref{rdunlock1} exits the RCU read-side critical section and
+line~\lnref{ret_0:a}
 indicates failure.
+\end{lineref}
 
 \QuickQuiz{}
 	What if the element we need to delete is not the first element
-	of the list on line~9 of
+	of the list on
+        line~\ref{ln:defer:Existence Guarantees Enable Per-Element Locking:chkkey} of
 	Listing~\ref{lst:defer:Existence Guarantees Enable Per-Element Locking}?
 \QuickQuizAnswer{
 	As with
@@ -936,24 +828,29 @@ indicates failure.
 	full chaining.
 } \QuickQuizEnd
 
-Otherwise, line~13 acquires the update-side spinlock, and
-line~14 then checks that the element is still the one that we want.
-If so, line~15 leaves the RCU read-side critical section,
-line~16 removes it from the table, line~17 releases
-the lock, line~18 waits for all pre-existing RCU read-side critical
-sections to complete, line~19 frees the newly removed element,
-and line~20 indicates success.
-If the element is no longer the one we want, line~22 releases
-the lock, line~23 leaves the RCU read-side critical section,
-and line~24 indicates failure to delete the specified key.
+\begin{lineref}[ln:defer:Existence Guarantees Enable Per-Element Locking]
+Otherwise, line~\lnref{acq} acquires the update-side spinlock, and
+line~\lnref{chkkey2} then checks that the element is still the one that we want.
+If so, line~\lnref{rdunlock2} leaves the RCU read-side critical section,
+line~\lnref{remove} removes it from the table, line~\lnref{rel1} releases
+the lock, line~\lnref{sync_rcu} waits for all pre-existing RCU read-side critical
+sections to complete, line~\lnref{kfree} frees the newly removed element,
+and line~\lnref{ret_1} indicates success.
+If the element is no longer the one we want, line~\lnref{rel2} releases
+the lock, line~\lnref{rdunlock3} leaves the RCU read-side critical section,
+and line~\lnref{ret_0:b} indicates failure to delete the specified key.
+\end{lineref}
 
 \QuickQuiz{}
+	\begin{lineref}[ln:defer:Existence Guarantees Enable Per-Element Locking]
 	Why is it OK to exit the RCU read-side critical section on
-	line~15 of
+	line~\lnref{rdunlock2} of
 	Listing~\ref{lst:defer:Existence Guarantees Enable Per-Element Locking}
-	before releasing the lock on line~17?
+	before releasing the lock on line~\lnref{rel1}?
+	\end{lineref}
 \QuickQuizAnswer{
-	First, please note that the second check on line~14 is
+	\begin{lineref}[ln:defer:Existence Guarantees Enable Per-Element Locking]
+	First, please note that the second check on line~\lnref{chkkey2} is
 	necessary because some other
 	CPU might have removed this element while we were waiting
 	to acquire the lock.
@@ -970,42 +867,48 @@ and line~24 indicates failure to delete the specified key.
 	% A re-check is necessary if the key can mutate or if it is
 	% necessary to reject deleted entries (in cases where deletion
 	% is recorded by mutating the key.
+	\end{lineref}
 } \QuickQuizEnd
 
 \QuickQuiz{}
+	\begin{lineref}[ln:defer:Existence Guarantees Enable Per-Element Locking]
 	Why not exit the RCU read-side critical section on
-	line~23 of
+	line~\lnref{rdunlock3} of
 	Listing~\ref{lst:defer:Existence Guarantees Enable Per-Element Locking}
-	before releasing the lock on line~22?
+	before releasing the lock on line~\lnref{rel2}?
+	\end{lineref}
 \QuickQuizAnswer{
+	\begin{lineref}[ln:defer:Existence Guarantees Enable Per-Element Locking]
 	Suppose we reverse the order of these two lines.
 	Then this code is vulnerable to the following sequence of
 	events:
 	\begin{enumerate}
 	\item	CPU~0 invokes \co{delete()}, and finds the element
-		to be deleted, executing through line~15.
+		to be deleted, executing through line~\lnref{rdunlock2}.
 		It has not yet actually deleted the element, but
 		is about to do so.
 	\item	CPU~1 concurrently invokes \co{delete()}, attempting
 		to delete this same element.
 		However, CPU~0 still holds the lock, so CPU~1 waits
-		for it at line~13.
-	\item	CPU~0 executes lines~16 and 17, and blocks at
-		line~18 waiting for CPU~1 to exit its RCU read-side
-		critical section.
-	\item	CPU~1 now acquires the lock, but the test on line~14
+		for it at line~\lnref{acq}.
+	\item	CPU~0 executes lines~\lnref{remove} and~\lnref{rel1},
+		and blocks at line~\lnref{sync_rcu} waiting for CPU~1
+		to exit its RCU read-side critical section.
+	\item	CPU~1 now acquires the lock, but the test on line~\lnref{chkkey2}
 		fails because CPU~0 has already removed the element.
-		CPU~1 now executes line~22 (which we switched with line~23
+		CPU~1 now executes line~\lnref{rel2}
+                (which we switched with line~\lnref{rdunlock3}
 		for the purposes of this Quick Quiz)
 		and exits its RCU read-side critical section.
 	\item	CPU~0 can now return from \co{synchronize_rcu()},
-		and thus executes line~19, sending the element to
+		and thus executes line~\lnref{kfree}, sending the element to
 		the freelist.
 	\item	CPU~1 now attempts to release a lock for an element
 		that has been freed, and, worse yet, possibly
 		reallocated as some other type of data structure.
 		This is a fatal memory-corruption error.
 	\end{enumerate}
+	\end{lineref}
 } \QuickQuizEnd
 
 Alert readers will recognize this as only a slight variation on
@@ -1127,84 +1030,88 @@ A simplified version of this code is shown
 Listing~\ref{lst:defer:Using RCU to Wait for NMIs to Finish}.
 
 \begin{listing}[tbp]
-{ \scriptsize
-\begin{verbbox}
-  1 struct profile_buffer {
-  2   long size;
-  3   atomic_t entry[0];
-  4 };
-  5 static struct profile_buffer *buf = NULL;
-  6
-  7 void nmi_profile(unsigned long pcvalue)
-  8 {
-  9   struct profile_buffer *p = rcu_dereference(buf);
- 10
- 11   if (p == NULL)
- 12     return;
- 13   if (pcvalue >= p->size)
- 14     return;
- 15   atomic_inc(&p->entry[pcvalue]);
- 16 }
- 17
- 18 void nmi_stop(void)
- 19 {
- 20   struct profile_buffer *p = buf;
- 21
- 22   if (p == NULL)
- 23     return;
- 24   rcu_assign_pointer(buf, NULL);
- 25   synchronize_sched();
- 26   kfree(p);
- 27 }
-\end{verbbox}
-}
-\centering
-\theverbbox
+\begin{linelabel}[ln:defer:Using RCU to Wait for NMIs to Finish]
+\begin{VerbatimL}[commandchars=\\\@\$]
+struct profile_buffer {				\lnlbl@struct:b$
+	long size;
+	atomic_t entry[0];
+};						\lnlbl@struct:e$
+static struct profile_buffer *buf = NULL;	\lnlbl@struct:buf$
+
+void nmi_profile(unsigned long pcvalue)		\lnlbl@nmi_profile:b$
+{
+	struct profile_buffer *p = rcu_dereference(buf);\lnlbl@nmi_profile:rcu_deref$
+
+	if (p == NULL)				\lnlbl@nmi_profile:if_NULL$
+		return;				\lnlbl@nmi_profile:ret:a$
+	if (pcvalue >= p->size)			\lnlbl@nmi_profile:if_oor$
+		return;				\lnlbl@nmi_profile:ret:b$
+	atomic_inc(&p->entry[pcvalue]);		\lnlbl@nmi_profile:inc$
+}						\lnlbl@nmi_profile:e$
+
+void nmi_stop(void)				\lnlbl@nmi_stop:b$
+{
+	struct profile_buffer *p = buf;		\lnlbl@nmi_stop:fetch$
+
+	if (p == NULL)				\lnlbl@nmi_stop:if_NULL$
+		return;				\lnlbl@nmi_stop:ret$
+	rcu_assign_pointer(buf, NULL);		\lnlbl@nmi_stop:NULL$
+	synchronize_sched();			\lnlbl@nmi_stop:sync_sched$
+	kfree(p);				\lnlbl@nmi_stop:kfree$
+}						\lnlbl@nmi_stop:e$
+\end{VerbatimL}
+\end{linelabel}
 \caption{Using RCU to Wait for NMIs to Finish}
 \label{lst:defer:Using RCU to Wait for NMIs to Finish}
 \end{listing}
 
-Lines~1-4 define a \co{profile_buffer} structure, containing a
+\begin{lineref}[ln:defer:Using RCU to Wait for NMIs to Finish:struct]
+Lines~\lnref{b}-\lnref{e} define a \co{profile_buffer} structure, containing a
 size and an indefinite array of entries.
-Line~5 defines a pointer to a profile buffer, which is
+Line~\lnref{buf} defines a pointer to a profile buffer, which is
 presumably initialized elsewhere to point to a dynamically allocated
 region of memory.
+\end{lineref}
 
-Lines~7-16 define the \co{nmi_profile()} function,
+\begin{lineref}[ln:defer:Using RCU to Wait for NMIs to Finish:nmi_profile]
+Lines~\lnref{b}-\lnref{e} define the \co{nmi_profile()} function,
 which is called from within an NMI handler.
 As such, it cannot be preempted, nor can it be interrupted by a normal
 interrupts handler, however, it is still subject to delays due to cache misses,
 ECC errors, and cycle stealing by other hardware threads within the same
 core.
-Line~9 gets a local pointer to the profile buffer using the
+Line~\lnref{rcu_deref} gets a local pointer to the profile buffer using the
 \co{rcu_dereference()} primitive to ensure memory ordering on
 DEC Alpha, and
-lines~11 and~12 exit from this function if there is no
-profile buffer currently allocated, while lines~13 and~14
+lines~\lnref{if_NULL} and~\lnref{ret:a} exit from this function if there is no
+profile buffer currently allocated, while lines~\lnref{if_oor} and~\lnref{ret:b}
 exit from this function if the \co{pcvalue} argument
 is out of range.
-Otherwise, line~15 increments the profile-buffer entry indexed
+Otherwise, line~\lnref{inc} increments the profile-buffer entry indexed
 by the \co{pcvalue} argument.
 Note that storing the size with the buffer guarantees that the
 range check matches the buffer, even if a large buffer is suddenly
 replaced by a smaller one.
+\end{lineref}
 
-Lines~18-27 define the \co{nmi_stop()} function,
+\begin{lineref}[ln:defer:Using RCU to Wait for NMIs to Finish:nmi_stop]
+Lines~\lnref{b}-\lnref{e} define the \co{nmi_stop()} function,
 where the caller is responsible for mutual exclusion (for example,
 holding the correct lock).
-Line~20 fetches a pointer to the profile buffer, and
-lines~22 and~23 exit the function if there is no buffer.
-Otherwise, line~24 \co{NULL}s out the profile-buffer pointer
+Line~\lnref{fetch} fetches a pointer to the profile buffer, and
+lines~\lnref{if_NULL} and~\lnref{ret} exit the function if there is no buffer.
+Otherwise, line~\lnref{NULL} \co{NULL}s out the profile-buffer pointer
 (using the \co{rcu_assign_pointer()} primitive to maintain
 memory ordering on weakly ordered machines),
-and line~25 waits for an RCU Sched grace period to elapse,
+and line~\lnref{sync_sched} waits for an RCU Sched grace period to elapse,
 in particular, waiting for all non-preemptible regions of code,
 including NMI handlers, to complete.
-Once execution continues at line~26, we are guaranteed that
+Once execution continues at line~\lnref{kfree}, we are guaranteed that
 any instance of \co{nmi_profile()} that obtained a
 pointer to the old buffer has returned.
 It is therefore safe to free the buffer, in this case using the
 \co{kfree()} primitive.
+\end{lineref}
 
 \QuickQuiz{}
 	Suppose that the \co{nmi_profile()} function was preemptible.
@@ -1218,46 +1125,42 @@ It is therefore safe to free the buffer, in this case using the
 	Listing~\ref{lst:defer:Using RCU to Wait for Mythical Preemptible NMIs to Finish}.
 %
 \begin{listing}[tb]
-{\scriptsize
-\begin{verbbox}
-  1 struct profile_buffer {
-  2   long size;
-  3   atomic_t entry[0];
-  4 };
-  5 static struct profile_buffer *buf = NULL;
-  6
-  7 void nmi_profile(unsigned long pcvalue)
-  8 {
-  9   struct profile_buffer *p;
- 10
- 11   rcu_read_lock();
- 12   p = rcu_dereference(buf);
- 13   if (p == NULL) {
- 14     rcu_read_unlock();
- 15     return;
- 16   }
- 17   if (pcvalue >= p->size) {
- 18     rcu_read_unlock();
- 19     return;
- 20   }
- 21   atomic_inc(&p->entry[pcvalue]);
- 22   rcu_read_unlock();
- 23 }
- 24
- 25 void nmi_stop(void)
- 26 {
- 27   struct profile_buffer *p = buf;
- 28
- 29   if (p == NULL)
- 30     return;
- 31   rcu_assign_pointer(buf, NULL);
- 32   synchronize_rcu();
- 33   kfree(p);
- 34 }
-\end{verbbox}
+\begin{VerbatimL}
+struct profile_buffer {
+	long size;
+	atomic_t entry[0];
+};
+static struct profile_buffer *buf = NULL;
+
+void nmi_profile(unsigned long pcvalue)
+{
+	struct profile_buffer *p;
+
+	rcu_read_lock();
+	p = rcu_dereference(buf);
+	if (p == NULL) {
+		rcu_read_unlock();
+		return;
+	}
+	if (pcvalue >= p->size) {
+		rcu_read_unlock();
+		return;
+	}
+	atomic_inc(&p->entry[pcvalue]);
+	rcu_read_unlock();
 }
-\centering
-\theverbbox
+
+void nmi_stop(void)
+{
+	struct profile_buffer *p = buf;
+
+	if (p == NULL)
+		return;
+	rcu_assign_pointer(buf, NULL);
+	synchronize_rcu();
+	kfree(p);
+}
+\end{VerbatimL}
 \caption{Using RCU to Wait for Mythical Preemptible NMIs to Finish}
 \label{lst:defer:Using RCU to Wait for Mythical Preemptible NMIs to Finish}
 \end{listing}
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH 0/6] defer: Employ new scheme for code snippets (cont.)
  2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
                   ` (5 preceding siblings ...)
  2018-12-03 15:44 ` [PATCH 6/6] defer: Employ new scheme for snippets of route_rcu.c Akira Yokosawa
@ 2018-12-03 17:23 ` Paul E. McKenney
  6 siblings, 0 replies; 8+ messages in thread
From: Paul E. McKenney @ 2018-12-03 17:23 UTC (permalink / raw)
  To: Akira Yokosawa; +Cc: perfbook

On Tue, Dec 04, 2018 at 12:33:32AM +0900, Akira Yokosawa wrote:
> Hi Paul,
> 
> This is a followup patch set to update remaining code snippets
> in chapter "defer".
> 
> My first thought was that patches #5 and #6 might conflict with
> upcoming updates to reflect consolidation of RCU flavors, but it
> looks like these code snippets are unlikely to be affected.
> 
> Patch #1 assumes that you intentionally presented a different
> set of hazptr API from the one used in hazptr.h.
> 
>         Thanks, Akira

Applied and pushed, thank you!

							Thanx, Paul

> --
> Akira Yokosawa (6):
>   defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and
>     Erasure'
>   defer: Employ new scheme for snippets of route_hazptr.c
>   defer: Employ new scheme for snippet of seqlock.h
>   defer: Employ new scheme for snippets of route_seqlock.c
>   defer: Employ new scheme for snippets in rcuintro and rcufundamental
>   defer: Employ new scheme for snippets of route_rcu.c
> 
>  CodeSamples/defer/route_hazptr.c  |  50 ++--
>  CodeSamples/defer/route_rcu.c     |  75 +++---
>  CodeSamples/defer/route_seqlock.c |  56 ++--
>  CodeSamples/defer/seqlock.h       |  55 ++--
>  defer/hazptr.tex                  | 168 +++---------
>  defer/rcufundamental.tex          | 381 +++++++++++++--------------
>  defer/rcuintro.tex                |  13 +-
>  defer/rcuusage.tex                | 523 ++++++++++++++++----------------------
>  defer/seqlock.tex                 | 259 ++++++-------------
>  9 files changed, 637 insertions(+), 943 deletions(-)
> 
> -- 
> 2.7.4
> 


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2018-12-03 17:23 UTC | newest]

Thread overview: 8+ messages
2018-12-03 15:33 [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Akira Yokosawa
2018-12-03 15:35 ` [PATCH 1/6] defer: Employ new scheme for 'lst:defer:Hazard-Pointer Storage and Erasure' Akira Yokosawa
2018-12-03 15:37 ` [PATCH 2/6] defer: Employ new scheme for snippets of route_hazptr.c Akira Yokosawa
2018-12-03 15:39 ` [PATCH 3/6] defer: Employ new scheme for snippet of seqlock.h Akira Yokosawa
2018-12-03 15:41 ` [PATCH 4/6] defer: Employ new scheme for snippets of route_seqlock.c Akira Yokosawa
2018-12-03 15:42 ` [PATCH 5/6] defer: Employ new scheme for snippets in rcuintro and rcufundamental Akira Yokosawa
2018-12-03 15:44 ` [PATCH 6/6] defer: Employ new scheme for snippets of route_rcu.c Akira Yokosawa
2018-12-03 17:23 ` [PATCH 0/6] defer: Employ new scheme for code snippets (cont.) Paul E. McKenney
