From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: ltt-dev@lists.casi.polymtl.ca, linux-kernel@vger.kernel.org,
	Robert Wisniewski <bob@watson.ibm.com>
Subject: Re: [RFC git tree] Userspace RCU (urcu) for Linux (repost)
Date: Sat, 7 Feb 2009 07:10:28 -0800	[thread overview]
Message-ID: <20090207151028.GA11150@linux.vnet.ibm.com> (raw)
In-Reply-To: <20090206163432.GF10918@linux.vnet.ibm.com>

On Fri, Feb 06, 2009 at 08:34:32AM -0800, Paul E. McKenney wrote:
> On Fri, Feb 06, 2009 at 05:06:40AM -0800, Paul E. McKenney wrote:
> > On Thu, Feb 05, 2009 at 11:58:41PM -0500, Mathieu Desnoyers wrote:
> > > (sorry for repost, I got the ltt-dev email wrong in the previous one)
> > > 
> > > Hi Paul,
> > > 
> > > I figured out I needed some userspace RCU for the userspace tracing part
> > > of LTTng (for quick read access to the control variables) to trace
> > > userspace pthread applications. So I've done a quick-and-dirty userspace
> > > RCU implementation.
> > > 
> > > It works so far, but I have not gone through any formal verification
> > > phase. It seems to work on paper, and the tests are also OK (so far),
> > > but I offer no guarantee for this 300-lines-ish 1-day hack. :-) If you
> > > want to comment on it, it would be welcome. It's a userland-only
> > > library. It's also currently x86-only, but only a few basic definitions
> > > must be adapted in urcu.h to port it.
> > > 
> > > Here is the link to my git tree :
> > > 
> > > git://lttng.org/userspace-rcu.git
> > > 
> > > http://lttng.org/cgi-bin/gitweb.cgi?p=userspace-rcu.git;a=summary
> > 
> > Very cool!!!  I will take a look!
> > 
> > I will also point you at a few that I have put together:
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git
> > 
> > (In the CodeSamples/defer directory.)
> 
> Interesting approach, using the signal to force memory-barrier execution!
> 
> o	One possible optimization would be to avoid sending a signal to
> 	a blocked thread, as the context switch leading to blocking
> 	will have implied a memory barrier -- otherwise it would not
> 	be safe to resume the thread on some other CPU.  That said,
> 	not sure whether checking to see whether a thread is blocked is
> 	any faster than sending it a signal and forcing it to wake up.
> 
> 	Of course, this approach does require that the enclosing
> 	application be willing to give up a signal.  I suspect that most
> 	applications would be OK with this, though some might not.
> 
> 	Of course, I cannot resist pointing to an old LKML thread:
> 
> 		http://lkml.org/lkml/2001/10/8/189
> 
> 	But I think that the time is now right.  ;-)
> 
> o	I don't understand the purpose of rcu_write_lock() and
> 	rcu_write_unlock().  I am concerned that it will lead people
> 	to decide that a single global lock must protect RCU updates,
> 	which is of course absolutely not the case.  I strongly
> 	suggest making these internal to the urcu.c file.  Yes,
> 	uses of urcu_publish_content() would then hit two locks (the
> 	internal-to-urcu.c one and whatever they are using to protect
> 	their data structure), but let's face it, if you are sending a
> 	signal to each and every thread, the additional overhead of the
> 	extra lock is the least of your worries.
> 
> 	If you really want to heavily optimize this, I would suggest
> 	setting up a state machine that permits multiple concurrent
> 	calls to urcu_publish_content() to share the same set of signal
> 	invocations.  That way, if the caller has partitioned the
> 	data structure, global locking might be avoided completely
> 	(or at least greatly restricted in scope).
> 
> 	Of course, if updates are rare, the optimization would not
> 	help, but in that case, acquiring two locks would be even less
> 	of a problem.
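
For what it's worth, a rough sketch of what the internal-lock version could
look like (the helper name below is assumed, not taken from the urcu tree):

	/* Sketch: keep the update-side lock inside urcu.c instead of
	 * exporting rcu_write_lock()/rcu_write_unlock() from urcu.h. */
	#include <pthread.h>

	static pthread_mutex_t urcu_mutex = PTHREAD_MUTEX_INITIALIZER;

	extern void wait_for_rcu_readers(void);	/* assumed helper: signals
						 * readers, flips parity, waits */

	void *urcu_publish_content(void **ptr, void *new)
	{
		void *oldptr;

		pthread_mutex_lock(&urcu_mutex);	/* internal, not exported */
		oldptr = *ptr;
		*ptr = new;				/* publish the new version */
		wait_for_rcu_readers();			/* grace-period machinery elided */
		pthread_mutex_unlock(&urcu_mutex);
		return oldptr;				/* caller may now free oldptr */
	}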
> 
> o	Is urcu_qparity relying on initialization to zero?  Or on the
> 	fact that, for all x, 1-x!=x mod 2^32?  Ah, given that this is
> 	used to index urcu_active_readers[], you must be relying on
> 	initialization to zero.
> 
> o	In rcu_read_lock(), why is a non-atomic increment of the
> 	urcu_active_readers[urcu_parity] element safe?  Are you
> 	relying on the compiler generating an x86 add-to-memory
> 	instruction?
> 
> 	Ditto for rcu_read_unlock().
> 
> 	Ah, never mind!!!  I now see the __thread specification,
> 	and the keeping of references to it in the reader_data list.
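
For readers following along, a simplified sketch of the arrangement described
above (paraphrased rather than copied from urcu.h, ignoring nesting; the real
code also puts each thread's counters on the reader_data list so that the
updater can find them):

	#define barrier()	__asm__ __volatile__("" ::: "memory")

	extern int urcu_qparity;			/* global parity bit */
	static __thread int urcu_active_readers[2];	/* per-thread, zero-initialized */

	static inline int rcu_read_lock(void)
	{
		int urcu_parity = urcu_qparity;		/* snapshot the current parity */

		urcu_active_readers[urcu_parity]++;	/* plain ++ is safe: only this
							 * thread ever writes this counter */
		barrier();				/* keep the increment ahead of the
							 * critical section (x86: compiler
							 * barrier suffices) */
		return urcu_parity;
	}

	static inline void rcu_read_unlock(int urcu_parity)
	{
		barrier();
		urcu_active_readers[urcu_parity]--;
	}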
> 
> o	Combining the equivalent of rcu_assign_pointer() and
> 	synchronize_rcu() into urcu_publish_content() is an interesting
> 	approach.  Not yet sure whether or not it is a good idea.  I
> 	guess trying it out on several applications would be the way
> 	to find out.  ;-)
> 
> 	That said, I suspect that it would be very convenient in a
> 	number of situations.
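
For comparison with the usual two-step kernel idiom, a sketch of what an
updater looks like with the combined primitive (the data structure here is
made up):

	#include <stdlib.h>

	extern void *urcu_publish_content(void **ptr, void *new);

	struct config {
		int verbose;
	};

	static struct config *global_config;	/* readers dereference this inside
						 * rcu_read_lock()/rcu_read_unlock() */

	void update_config(int verbose)
	{
		struct config *newp = malloc(sizeof(*newp));
		struct config *oldp;

		if (newp == NULL)
			abort();		/* sketch: no real error handling */
		newp->verbose = verbose;

		/* One call stands in for rcu_assign_pointer() followed
		 * by synchronize_rcu(). */
		oldp = urcu_publish_content((void **)&global_config, newp);

		free(oldp);			/* no reader still references it */
	}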
> 
> o	It would be good to avoid having to pass the return value
> 	of rcu_read_lock() into rcu_read_unlock().  It should be
> 	possible to avoid this via counter value tricks, though this
> 	would add a bit more code in rcu_read_lock() on 32-bit machines.
> 	(64-bit machines don't have to worry about counter overflow.)
> 
> 	See the recently updated version of CodeSamples/defer/rcu_nest.[ch]
> 	in the aforementioned git archive for a way to do this.
> 	(And perhaps I should apply this change to SRCU...)
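
Roughly, the trick is to fold the nesting depth into the low-order bits of a
single per-thread counter, so that rcu_read_unlock() needs no argument.  A
simplified sketch (constants illustrative, not copied from rcu_nest.[ch]):

	#define barrier()		__asm__ __volatile__("" ::: "memory")

	#define RCU_GP_CTR_BOTTOM_BIT	(1UL << 8)	/* low bits: nesting depth */
	#define RCU_GP_CTR_NEST_MASK	(RCU_GP_CTR_BOTTOM_BIT - 1)

	extern unsigned long rcu_gp_ctr;		/* advanced by updaters in units
							 * of RCU_GP_CTR_BOTTOM_BIT */
	static __thread unsigned long rcu_reader_gp;	/* gp snapshot + nesting depth */

	static inline void rcu_read_lock(void)
	{
		if ((rcu_reader_gp & RCU_GP_CTR_NEST_MASK) == 0)
			rcu_reader_gp = rcu_gp_ctr + 1;	/* outermost: snapshot the
							 * global counter, depth = 1 */
		else
			rcu_reader_gp++;		/* nested: bump the depth only */
		barrier();
	}

	static inline void rcu_read_unlock(void)
	{
		barrier();
		rcu_reader_gp--;			/* drop one nesting level */
	}

The updater then waits, for each thread, until that thread either shows zero
nesting or has snapshotted a sufficiently recent value of rcu_gp_ctr.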
> 
> o	Your test looks a bit strange; I am not sure why you test all the
> 	different variables.  It would be nice to take a test duration
> 	as an argument and run the test for that time.
> 
> 	I killed the test after the better part of an hour on my laptop,
> 	and will retry on a larger machine (after noting the 18 threads
> 	created!).  (And yes, I first tried Power, which objected
> 	strenuously to the "mfence" and "lock; incl" instructions,
> 	so I am getting an x86 machine to try it on.)
> 
> Again, looks interesting!  Looks plausible, although I have not 100%
> convinced myself that it is perfectly bug-free.  But I do maintain
> a healthy skepticism of purported RCU algorithms, especially ones that
> I have written.  ;-)

OK, here is one sequence of concern...

o	Thread 0 starts rcu_read_lock(), picking up the current parity
	from get_urcu_qparity() into the local variable urcu_parity.
	Assume that the value returned is zero.

o	Thread 0 is now preempted.

o	Thread 1 invokes urcu_publish_content():

	o	It substitutes the pointer.

	o	It forces all threads to execute a memory barrier
		(thread 0 runs just long enough to process its signal
		and then is immediately preempted again).

	o	It switches the parity, which is now one.

	o	It waits for all readers on parity zero, and there are
		none, because thread 0 has not yet registered itself.

	o	It therefore returns the old pointer.  So far, so good.

o	Thread 0 now resumes:

	o	It increments its urcu_active_readers[0].

	o	It forces a compiler barrier.

	o	It returns zero (why not store this in thread-local
		storage rather than returning?).

	o	It enters its critical section, obtaining a reference
		to the new pointer that thread 1 just published.

o	Thread 1 now again invokes urcu_publish_content():
 
	o	It substitutes the pointer.

	o	It forces all threads to execute a memory barrier,
		including thread 0.

	o	It switches the parity, which is now zero.

	o	It waits for all readers on parity one, and there are
		none, because thread 0 has registered itself on parity
		zero!!!

	o	Thread 1 therefore returns the old pointer.

	o	Thread 1 frees the old pointer, which thread 0 is still
		using!!!

So, how to fix?  Here are some approaches:

o	Make urcu_publish_content() do two parity flips rather than one.
	I use this approach in my rcu_rcpg, rcu_rcpl, and rcu_rcpls
	algorithms in CodeSamples/defer.  (A rough sketch follows this
	list.)

o	Use a single free-running counter, in a manner similar to rcu_nest,
	as suggested earlier.  This one is interesting, as I rely on a
	read-side memory barrier to handle the long-preemption case.
	However, if you believe that any thread that waits several minutes
	between executing adjacent instructions must have been preempted
	(and the context switch supplies the memory barriers that are
	required), then a compiler barrier suffices.  ;-)
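
To make the first option above concrete, a rough sketch of a two-flip grace
period (helper names assumed; the details of the signal-based barriers and of
the per-parity wait are elided):

	#include <pthread.h>

	extern pthread_mutex_t urcu_mutex;	/* internal update-side lock */
	extern void force_mb_all_threads(void);	/* assumed helpers */
	extern void switch_qparity(void);
	extern void wait_for_quiescent_state(void);

	/* Run the flip-and-wait cycle twice: a reader that registered itself
	 * using a stale parity value (as thread 0 does in the scenario above)
	 * is then waited for no matter which of the two counters it used. */
	void synchronize_rcu_two_flips(void)
	{
		pthread_mutex_lock(&urcu_mutex);
		force_mb_all_threads();

		switch_qparity();
		wait_for_quiescent_state();	/* wait out readers on the old parity */

		switch_qparity();
		wait_for_quiescent_state();	/* catch any late registrant */

		force_mb_all_threads();
		pthread_mutex_unlock(&urcu_mutex);
	}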

Of course, the probability of seeing this failure during test is quite
low, since it is unlikely that thread 0 would run just long enough to
execute its signal handler.  However, it could happen.  And if you were
to adapt this algorithm for use in a real-time application, then priority
boosting could cause this to happen naturally.

							Thanx, Paul
