linux-kernel.vger.kernel.org archive mirror
* RE: [Linux-ia64] Linux kernel deadlock caused by spinlock bug
@ 2002-07-29 21:05 Van Maren, Kevin
  2002-07-29 21:18 ` Matthew Wilcox
  2002-07-30 15:58 ` Russell Lewis
  0 siblings, 2 replies; 16+ messages in thread
From: Van Maren, Kevin @ 2002-07-29 21:05 UTC (permalink / raw)
  To: 'Matthew Wilcox'
  Cc: 'linux-kernel@vger.kernel.org',
	'linux-ia64@linuxia64.org'

> On Mon, Jul 29, 2002 at 03:37:17PM -0500, Van Maren, Kevin wrote:
> > I changed the code to allow the writer bit to remain set even if
> > there is a reader.  By only allowing a single processor to set
> > the writer bit, I don't have to worry about pending writers starving
> out readers.  The potential writer that was able to set the writer bit
> > gains ownership of the lock when the current readers finish.  Since
> > the retry for read_lock does not keep trying to increment the reader
> > count, there are no other required changes.
> 
> however, this is broken.  linux relies on being able to do
> 
> read_lock(x);
> func()
>   -> func()
>        -> func()
>             -> read_lock(x);
> 
> if a writer comes between those two read locks, you're toast.
> 
> i suspect the right answer for the contention you're seeing is an
> improved gettimeofday which is lockless.

Recursive read locks certainly make it more difficult to fix the
problem.  Placing a band-aid on gettimeofday will fix the symptom
in one location, but will not fix the general problem, which is
writer starvation with heavy read lock load.  The only way to fix
that is to make writer locks fair or to eliminate them (make them
_all_ stateless).

Recursive read locks also imply that you can't replace them with
a "normal" spinlock, which would also solve the problem (although
they do _not_ scale under contention -- something like O(N^2)
cache-cache transfers for N processors to acquire once).

There are ways of fixing the writer starvation and allowing recursive
read locks, but that is more work (and heavier-weight than desirable).
How pervasive are recursive reader locks?  Should they be a special
type of reader lock?

Kevin

^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [Linux-ia64] Linux kernel deadlock caused by spinlock bug
@ 2002-07-30 21:15 Van Maren, Kevin
  0 siblings, 0 replies; 16+ messages in thread
From: Van Maren, Kevin @ 2002-07-30 21:15 UTC (permalink / raw)
  To: 'root@chaos.analogic.com'; +Cc: linux-kernel

> > Check out the title of the thread... Somebody has a real, reproducible 
> > deadlock on a rw_lock where many readers are starving out a writer, and 
> > the system hangs. 

> They have, as you say, "real reproducible" deadlocks because they are 
> not using straight spin-locks. Somebody tried to use cute queued locks. 
> This invention is the cause of the problem. The solution is to not 
> try to play tricks on "Mother Nature". 
> Cheers, 
> Dick Johnson 

Not quite.

The stock kernel hangs using regular reader/writer locks.  The problem
where a series of readers can continue passing a pending writer and
prevent the writer from _ever_ acquiring the lock affects at least i386
and ia64, and probably others, for both 2.4.x AND 2.5.x.

The problem would be fixed (but run very slow) by using normal spinlocks,
EXCEPT for the problem that reader locks are acquired recursively, which
is the same reason my writer-preference patch could deadlock.

So we have the situation where the current code can deadlock, and the
only patch submitted can also lead to deadlock under a different situation.

It was suggested that a modified lock queue would be able to avoid
the eternal starvation problem, and it was also suggested that having
readers "spin" before acquiring the lock could reduce the problem.

Kevin


* RE: [Linux-ia64] Linux kernel deadlock caused by spinlock bug
@ 2002-07-30 17:06 Van Maren, Kevin
  2002-07-30 17:44 ` William Lee Irwin III
  0 siblings, 1 reply; 16+ messages in thread
From: Van Maren, Kevin @ 2002-07-30 17:06 UTC (permalink / raw)
  To: 'Andi Kleen'; +Cc: linux-kernel

> > There are ways of fixing the writer starvation and allowing recursive
> > read locks, but that is more work (and heavier-weight than desirable).
> 
> One such way would be a variant of queued locks, like John Stultz's
> http://oss.software.ibm.com/developer/opensource/linux/patches/?patch_id=218
> These are usually needed for fairness even with plain spinlocks on NUMA
> boxes in any case (so if your box is NUMA then you will need it anyways)
> They only exist for plain spinlocks yet, but I guess they could be extended
> to readlocks.

This ES7000 system is not NUMA.  All memory is equidistant from all
processors, with a full non-blocking crossbar interconnect, and the
hardware guarantees fairness under cacheline contention.  So the
processors aren't being starved or treated unfairly by the hardware,
just by the reader-preference locking code.

It isn't obvious to me how to extend those queued locks to reader/writer
locks if you have to allow recursive readers without incurring the same
overhead of tracking which processors already have a reader lock.

If you do want to flush out recursive rw_locks, simply change the header
file to make them normal spinlocks.  Then whenever the kernel hangs, see
where it is.  Of course, this approach only finds all of them if you
execute every code path.

Does anyone want to chip in on why we need recursive r/w locks?  Or why it
is hard to remove them?  It doesn't sound like they are used much.

Kevin

* RE: [Linux-ia64] Linux kernel deadlock caused by spinlock bug
@ 2002-07-29 21:29 Van Maren, Kevin
  2002-07-29 21:48 ` David Mosberger
  0 siblings, 1 reply; 16+ messages in thread
From: Van Maren, Kevin @ 2002-07-29 21:29 UTC (permalink / raw)
  To: 'Matthew Wilcox'
  Cc: 'linux-kernel@vger.kernel.org',
	'linux-ia64@linuxia64.org'

> On Mon, Jul 29, 2002 at 04:05:35PM -0500, Van Maren, Kevin wrote:
> > Recursive read locks certainly make it more difficult to fix the
> > problem.  Placing a band-aid on gettimeofday will fix the symptom
> > in one location, but will not fix the general problem, which is
> > writer starvation with heavy read lock load.  The only way to fix
> > that is to make writer locks fair or to eliminate them (make them
> > _all_ stateless).
> 
> The basic principle is that if you see contention on a spinlock, you
> should eliminate the spinlock somehow.  making spinlocks 
> `fair' doesn't
> help that you're spending lots of time spinning on a lock.

Yes, but that isn't the point: unless you eliminate all rw locks, it
is conceptually possible to cause a kernel deadlock by forcing
contention on the locks you didn't remove, if the user can force the
kernel to acquire a reader lock and if something else needs to acquire
the writer lock.  Correctness is the issue, not performance.  You have
locks because there _could_ be contention, and locks handle that
contention _correctly_.  If you can eliminate the contention, you can
eliminate the locks; but if there is any chance of contention, the
locks have to remain, _and_ they have to handle contention _correctly_.
That does not happen with the current reader/writer lock code, which
can hang the kernel just as dead as a writer between recursive reader
lock calls would with my code.

Kevin

* Linux kernel deadlock caused by spinlock bug
@ 2002-07-29 20:37 Van Maren, Kevin
  2002-07-29 20:46 ` [Linux-ia64] " Matthew Wilcox
  0 siblings, 1 reply; 16+ messages in thread
From: Van Maren, Kevin @ 2002-07-29 20:37 UTC (permalink / raw)
  To: 'linux-kernel@vger.kernel.org',
	'linux-ia64@linuxia64.org'
  Cc: 'hpl@cs.utk.edu'

Hi all,

I have hit a problem with the Linux reader/writer spinlock
implementation that is causing a kernel deadlock (technically
livelock) which causes the system to hang (hard) when running
certain user applications.  This problem could be exploited as
a DoS attack on medium-to-large SMP machines.

Using some spiffy logic analyzers and debugging tools, I was able
to piece together the following scenario.

With "several" processors acquiring and releasing read locks, it is
possible for a processor to _never_ succeed in acquiring a write lock.
Even though the read lock is held for a very short period, with much
contention for the cache line the processor would often lose ownership
before it could release the read lock.  [Even if it had it longer,
because it was looping, there would still be a good chance that it
would lose the cache line while holding the reader lock.]  By the time
the reader got the cache line back to release the lock, another processor
had acquired the read lock.  This behavior resulted in a processor not
being able to acquire the write lock, which it was attempting to do in
an interrupt handler.  So the interrupt handler was _never_ able to
complete and other interrupts were blocked by that processor (in my
case, network and keyboard interrupts).

The specific case I tracked down consisted of several processes in
a tight gettimeofday() loop, which resulted in the reader count never
getting to zero because there was always an outstanding reader.  While
I will stipulate that it is not a good thing for several processes to
be looping in gettimeofday(), I will assert that it is a very bad thing
for a few processes calling such a benign system call to hang the system.

Unfortunately, this was not even a contrived test case, as I hit it
experimenting with HPL Linpack on a 32-processor Unisys ES7000 Orion 200,
although it does not take nearly 32 processors to reproduce.
See http://www.unisys.com/products/es7000__servers/hardware/index.htm

The HPL code polls for an incoming message in HPL_bcast, called by
HPL_pdupdateTT.  However, in the same while loop, it also calls
gettimeofday via HPL_ptimer.  The system hung when processor 0
received a timer interrupt before the user process it was running
sent the message, and after several processes were waiting for the
message.  So the other processes spun calling gettimeofday waiting
for the message that would not come until they stopped calling
gettimeofday.  Classic deadlock.  It normally took less than 5
minutes to reproduce (at which time I had to hard-reboot the system).

The logic analyzer proved it was not a hardware issue, since the
cache line was being fairly shared by the processors and the
reader count was being updated correctly.

I have included a new version of the write_lock code for IA64 at
the end of this email.  I'm not making any claims about it being
optimal, just that it appears to work.

I changed the code to allow the writer bit to remain set even if
there is a reader.  By only allowing a single processor to set
the writer bit, I don't have to worry about pending writers starving
out readers.  The potential writer that was able to set the writer bit
gains ownership of the lock when the current readers finish.  Since
the retry for read_lock does not keep trying to increment the reader
count, there are no other required changes.


A similar patch is needed for IA32 and any other SMP platform that
supports more than ~4 processors and does not already guarantee fairness
to writers.

The "fix" for IA32 would probably be very similar to the IA64 code,
but retaining the RW_LOCK_BIAS.  The only code that needs to change
is __write_lock_failed, which would need to keep RW_LOCK_BIAS subtracted
if the result is > 0xFF000000UL (0x0 - RW_LOCK_BIAS) [would not get
in __write_lock_failed if the result was 0].  Implementing that change
may require trashing another register.

Actually, the IA32 code should also have a "pause" instruction inserted
(especially for Foster processors) in all the retry code loops...
I've left the actual IA32 fix as an exercise for the reader, but I can
fix it and send out a patch if needed.

Kevin Van Maren



Here is the new IA64 write_lock code (include/asm-ia64/spinlock.h):


/*
 * write_lock pseudo-code:
 * Assume lock is unlocked, and try to acquire it.
 * If failed, wait until there isn't a writer, and then set the writer bit.
 * Once have writer bit, wait until there are no more readers.
 */
#define write_lock(rw)                                                   \
do {                                                                     \
        __asm__ __volatile__ (                                           \
                "mov ar.ccv = r0\n"                                      \
                "dep r29 = -1, r0, 31, 1\n"                              \
                ";;\n"                                                   \
                "cmpxchg4.acq r2 = [%0], r29, ar.ccv\n"                  \
                ";;\n"                                                   \
                "cmp4.eq p0,p7 = r0, r2\n"                               \
                "(p7) br.cond.spnt.few 2f\n"                             \
                ";;\n"                                                   \
                "1:\n"                                                   \
                ".section .text.lock,\"ax\"\n"                           \
                "2:\tld4 r30 = [%0]\n"                                   \
                ";;\n"                                                   \
                "tbit.nz p7,p0 = r30, 31\n"                              \
                "(p7) br.cond.spnt.few 2b\n"                             \
                ";;\n"                                                   \
                "mov ar.ccv = r30\n"                                     \
                "dep r29 = -1, r0, 31, 1\n"                              \
                ";;\n"                                                   \
                "or r29 = r29, r30\n"                                    \
                ";;\n"                                                   \
                "cmpxchg4.acq r2 = [%0], r29, ar.ccv\n"                  \
                ";;\n"                                                   \
                "cmp4.eq p0,p7 = r30, r2\n"                              \
                "(p7) br.cond.spnt.few 2b\n"                             \
                ";;\n"                                                   \
                "3:\n"                                                   \
                "ld4 r2 = [%0]\n"                                        \
                ";;\n"                                                   \
                "extr.u r30 = r2, 0, 31\n"                               \
                ";;\n"                                                   \
                "cmp4.eq p0,p7 = r0, r30\n"                              \
                "(p7) br.cond.spnt.few 3b\n"                             \
                "br.cond.sptk.few 1b\n"                                  \
                ".previous\n"                                            \
                :: "r"(rw) : "ar.ccv", "p7", "r2", "r29", "r30", "memory"); \
} while(0)

/*
 * clear_bit() has "acq" semantics; we really need "rel" semantics,
 * but for simplicity, we simply do a fence for now...
 */
#define write_unlock(x)                         ({mb(); clear_bit(31, (x));})



The old IA64 code (for comparison) was:

#define write_lock(rw)                                                   \
do {                                                                     \
        __asm__ __volatile__ (                                           \
                "mov ar.ccv = r0\n"                                      \
                "dep r29 = -1, r0, 31, 1\n"                              \
                ";;\n"                                                   \
                "1:\n"                                                   \
                "ld4 r2 = [%0]\n"                                        \
                ";;\n"                                                   \
                "cmp4.eq p0,p7 = r0,r2\n"                                \
                "(p7) br.cond.spnt.few 1b\n"                             \
                "cmpxchg4.acq r2 = [%0], r29, ar.ccv\n"                  \
                ";;\n"                                                   \
                "cmp4.eq p0,p7 = r0, r2\n"                               \
                "(p7) br.cond.spnt.few 1b\n"                             \
                ";;\n"                                                   \
                :: "r"(rw) : "ar.ccv", "p7", "r2", "r29", "memory");     \
} while(0)


end of thread, other threads:[~2002-07-31 17:35 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
2002-07-29 21:05 [Linux-ia64] Linux kernel deadlock caused by spinlock bug Van Maren, Kevin
2002-07-29 21:18 ` Matthew Wilcox
2002-07-30 15:58 ` Russell Lewis
2002-07-30 16:56   ` Richard B. Johnson
2002-07-30 17:02     ` Russell Lewis
2002-07-30 17:14       ` Richard B. Johnson
2002-07-30 22:48     ` Sean Griffin
2002-07-31 17:37       ` Russell Lewis
  -- strict thread matches above, loose matches on Subject: below --
2002-07-30 21:15 Van Maren, Kevin
2002-07-30 17:06 Van Maren, Kevin
2002-07-30 17:44 ` William Lee Irwin III
     [not found] <3FAD1088D4556046AEC48D80B47B478C0101F3AE@usslc-exch-4.slc.unisys.com.suse.lists.linux.kernel>
2002-07-30 13:32 ` Andi Kleen
2002-07-30 16:27   ` William Lee Irwin III
2002-07-29 21:29 Van Maren, Kevin
2002-07-29 21:48 ` David Mosberger
2002-07-29 20:37 Van Maren, Kevin
2002-07-29 20:46 ` [Linux-ia64] " Matthew Wilcox
