From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Lance Roy <ldr709@gmail.com>
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org,
	jiangshanlai@gmail.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
	josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org,
	rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
	dvhart@linux.intel.com, fweisbec@gmail.com, oleg@redhat.com,
	bobby.prani@gmail.com
Subject: Re: [PATCH] srcu: Implement more-efficient reader counts
Date: Mon, 23 Jan 2017 17:57:15 -0800
Message-ID: <20170124015715.GD28085@linux.vnet.ibm.com>
In-Reply-To: <20170123165334.6873e47f@gmail.com>

On Mon, Jan 23, 2017 at 04:53:34PM -0800, Lance Roy wrote:
> On Mon, 23 Jan 2017 16:42:52 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > On Mon, Jan 23, 2017 at 01:35:18PM -0800, Lance Roy wrote:
> > > SRCU uses two per-cpu counters: a nesting counter to count the number of
> > > active critical sections, and a sequence counter to ensure that the nesting
> > > counters don't change while they are being added together in
> > > srcu_readers_active_idx_check().
> > > 
> > > This patch instead uses per-cpu lock and unlock counters. Because both
> > > counters only increase and srcu_readers_active_idx_check() reads the unlock
> > > counter before the lock counter, this achieves the same end without having
> > > to increment two different counters in srcu_read_lock(). This also saves a
> > > smp_mb() in srcu_readers_active_idx_check().
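> > > 
> > > With that scheme, the reader-side fast paths reduce to one per-CPU
> > > increment each. A sketch along the lines of the patch (lockdep
> > > annotations and the srcu_read_lock()/srcu_read_unlock() wrappers
> > > omitted):
> > > 
> > > int __srcu_read_lock(struct srcu_struct *sp)
> > > {
> > > 	int idx;
> > > 
> > > 	idx = READ_ONCE(sp->completed) & 0x1;
> > > 	__this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
> > > 	smp_mb(); /* B */  /* Keep the critical section after the increment. */
> > > 	return idx;
> > > }
> > > 
> > > void __srcu_read_unlock(struct srcu_struct *sp, int idx)
> > > {
> > > 	smp_mb(); /* C */  /* Keep the critical section before the increment. */
> > > 	this_cpu_inc(sp->per_cpu_ref->unlock_count[idx]);
> > > }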
> > > 
> > > Possible bug: There is no guarantee that the lock counter won't overflow
> > > during srcu_readers_active_idx_check(), as there are no memory barriers
> > > around srcu_flip() (see comment in srcu_readers_active_idx_check() for
> > > details). However, this problem was already present before this patch.
> > > 
> > > Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> > > Signed-off-by: Lance Roy <ldr709@gmail.com>
> > > Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > > Cc: Lai Jiangshan <jiangshanlai@gmail.com>
> > > Cc: Peter Zijlstra <peterz@infradead.org>  
> > 
> > OK, this differs only in the comments, so I can reasonably
> > substitute it, even this near the next merge window.
> > 
> > But let's talk about the potential overflow.  If I understand correctly,
> > for this to happen, there needs to be ULONG_MAX/2 or thereabouts
> > srcu_read_lock() calls without matching srcu_read_unlock() calls.
> > Let's focus on 32-bit systems for the moment.
> > 
> > Lockdep allows at most 48 locks held at a given time by any single task,
> > and RCU does not pass in a non-NULL nest_lock -- if you acquire more than
> > that, a lockdep kernel will splat.  Each task has at least one 4K page of
> > kernel stack, so there can be at most 1,048,576 tasks (actually quite a
> > bit fewer due to the size of the task_struct and so on).  This allows
> > at most 50,331,648 unmatched srcu_read_lock() calls in the system,
> > which is not sufficient to cause overflow.
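> > 
> > (Spelling out that arithmetic: a 32-bit address space holds at most
> > 4GB / 4KB = 1,048,576 one-page stacks, and 48 * 1,048,576 = 50,331,648,
> > or roughly 2^25.6 -- far short of the ~2^31 unmatched srcu_read_lock()
> > calls needed to wrap a 32-bit counter halfway around.)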
> > 
> > Or am I missing something here?
> > 
> > 								Thanx, Paul
> 
> It isn't the nesting that is the problem. Here is a previous email I wrote
> describing the problem:
> 
> 
> The trouble is that disabling preemption is not enough to ensure that there
> is at most one srcu_read_lock() call per CPU that missed the srcu_flip().
> 
> Define a reader to be an SRCU lock+unlock pair. A reader is called active if it
> has incremented ->lock_count[] but hasn't incremented ->unlock_count[] yet, and
> completed if it has incremented ->unlock_count[]. I think that we only want to
> limit the number of active readers and the number of CPUs. In particular, I
> don't think there should be a limit on the rate of completion of read-side
> critical sections.
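> 
> (Equivalently, the number of active readers on index i at any instant is
> the infinite-precision sum over all CPUs of ->lock_count[i] minus
> ->unlock_count[i].)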
> 
> The goal of srcu_readers_active_idx_check() is to verify that there were zero
> active readers on the inactive index at some time during its execution. To do
> this, it totals the unlock counts, executes a memory barrier, totals the lock
> counts, and takes the difference. This difference counts the readers that are
> active when srcu_readers_lock_idx() gets to their CPU, plus the readers that
> completed after srcu_readers_unlock_idx() and before srcu_readers_lock_idx().
> If the true (infinite precision) value of the difference is zero, then there
> were no active readers at some point while srcu_readers_lock_idx() was running.
> However, the difference is only stored in a long, so there is a potential for
> overflow if too many readers complete during srcu_readers_active_idx_check().
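> 
> A sketch of that check, assuming the ->lock_count[]/->unlock_count[]
> layout above (simplified; the real function also feeds into the
> grace-period machinery):
> 
> static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
> {
> 	unsigned long unlocks = 0, locks = 0;
> 	int cpu;
> 
> 	/* Total the unlock counts first... */
> 	for_each_possible_cpu(cpu)
> 		unlocks += READ_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->unlock_count[idx]);
> 
> 	smp_mb();  /* ...order those reads before the lock-count reads... */
> 
> 	/* ...then total the lock counts and compare. */
> 	for_each_possible_cpu(cpu)
> 		locks += READ_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->lock_count[idx]);
> 
> 	/* Equal totals imply a moment with zero active readers (modulo overflow). */
> 	return locks == unlocks;
> }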
> 
> For example, let's say there are three threads, each running on its own CPU:
> 
> int data, flag;
> struct srcu_struct *sp = /* ... */;
> 
> Thread 0:
> 	data = 1;
> 	synchronize_srcu(sp);
> 	flag = 1;
> 
> Thread 1:
> 	int data_r, flag_r;
> 	int idx = srcu_read_lock(sp);
> 	data_r = data;
> 	flag_r = flag;
> 	srcu_read_unlock(sp, idx);
> 	BUG_ON(flag_r == 1 && data_r == 0);
> 
> Thread 2:
> 	while (true) {
> 		int idx = srcu_read_lock(sp);
> 		srcu_read_unlock(sp, idx);
> 	}
> 
> Let's take the following execution order. Thread 1 increments
> the CPU 1 version of sp->lock_count[0], sets idx to zero, and loads data (0)
> into data_r. Thread 0 then sets data to be 1, verifies that there are no
> readers on index 1, and increments sp->completed, but the CPU actually doesn't
> perform the last operation, putting it off until the next memory barrier. Thread
> 0 then calls srcu_readers_active_idx_check() on index 0, which runs
> srcu_readers_unlock_idx() (returning 0). Right after srcu_readers_unlock_idx()
> completes, thread 2 starts running. Since Thread 0 hasn't actually incremented
> sp->completed in any way that is visible to thread 2, srcu_read_lock() will
> still return 0. Thread 2 can then run for ULONG_MAX iterations, setting
> the CPU 2 versions of both sp->lock_count[0] and sp->unlock_count[0] to
> ULONG_MAX. CPU 0 then finally gets
> around to incrementing sp->completed, runs its memory barrier, and then reads
> the lock counts: 1, 0, and ULONG_MAX. The total of ULONG_MAX+1 will overflow
> to 0 and compare equal to the earlier unlock count. Thread 0 will then think
> that the
> grace period is over and set flag to one. Thread 1 can then read flag (1) into
> flag_r and run srcu_read_unlock(). The BUG_ON statement will then fail.
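> 
> Laid out as a timeline of the index-0 counter totals (a simplified
> illustration of the scenario above, not a trace of real code):
> 
> 	event                                        unlocks      locks
> 	thread 1: srcu_read_lock()                         0          1
> 	thread 0: totals ->unlock_count[0], sum = 0        0          1
> 	thread 2: ULONG_MAX lock/unlock pairs      ULONG_MAX  ULONG_MAX+1
> 	thread 0: totals ->lock_count[0]: 1 + ULONG_MAX wraps to sum = 0
> 
> Both totals compare equal at 0, so thread 0 wrongly concludes that index
> 0 has no active readers even though thread 1's critical section is still
> in flight.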
> 
> Although ULONG_MAX readers completed during srcu_readers_active_idx_check(),
> there were at most 2 active readers at any time, so this program doesn't run
> into any limit.
> 
> I hope that was clear enough.

Yeah, we did have this same conversation a while back, didn't we?

Back then, did I think to ask if this could be minimized or even prevented
by adding memory barriers appropriately?  ;-)
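One candidate placement (a sketch of the idea only; whether it suffices,
and what the kernel actually ended up doing, is a separate question)
would bracket the index flip with full barriers:

static void srcu_flip(struct srcu_struct *sp)
{
	smp_mb();  /* Order prior count reads before the flip... */
	WRITE_ONCE(sp->completed, sp->completed + 1);
	smp_mb();  /* ...and the flip before subsequent count reads. */
}

The intent is that any count read executed after the flip is ordered
after it, so readers can keep sampling the stale ->completed value only
for a bounded window rather than indefinitely.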

							Thanx, Paul

