From: Esben Nielsen <simlo@phys.au.dk>
To: david singleton <dsingleton@mvista.com>
Cc: linux-kernel@vger.kernel.org, <dino@in.ibm.com>,
	<robustmutexes@lists.osdl.org>, Ingo Molnar <mingo@elte.hu>
Subject: Re: Recursion bug in -rt
Date: Thu, 5 Jan 2006 18:47:59 +0100 (MET)
Message-ID: <Pine.LNX.4.44L0.0601051820000.3110-100000@lifa02.phys.au.dk>
In-Reply-To: <53A54388-7E0E-11DA-B7D6-000A959BB91E@mvista.com>

On Thu, 5 Jan 2006, david singleton wrote:

>
> On Jan 5, 2006, at 1:43 AM, Esben Nielsen wrote:
>
> >
> >
> > On Wed, 4 Jan 2006, david singleton wrote:
> >
> >> Dinakar,
> >> 	I've got a more complete patch for deadlock detection for robust
> >> futexes.
> >>
> >> 	The patch is at
> >> http://source.mvista.com/~dsingleton/patch-2.6.15-rc7-rt3-rf2
> >>
> >> 	This patch handles pthread_mutex deadlocks in two ways:
> >>
> >> 1) POSIX states that non-recursive pthread_mutexes hang if locked
> >> twice.
> >> This patch hangs the deadlocking thread on a waitqueue.  It releases
> >> all other kernel locks, like the mmap_sem and robust_sem before
> >> waiting
> >> on the waitqueue so as not to hang the kernel.
> >>
> >> 2) pthread_mutexes that are only robust, not PRIO_INHERIT,  do not
> >> hang.  They have a new error code returned to them,  -EWOULDDEADLOCK.
> >> Since there is no POSIX specification for robust pthread_mutexes yet,
> >> returning EWOULDDEADLOCK to a robust mutex is more in the spirit
> >> of robustness.  For robust pthread_mutexes we clean up if the holding
> >> thread dies and we return EWOULDDEADLOCK to inform the
> >> application that it is trying to deadlock itself.
> >>
> >> 	The patch handles both simple and circular deadlocks.  This is
> >> something I've been wanting to do for a while, export deadlock
> >> detection to all flavors of kernels.   The patch provides the correct
> >> deadlock behavior while not hanging the system.
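
[ A minimal userspace sketch of the double-lock case described above, for
illustration only.  EWOULDDEADLOCK is a value introduced by the patch, not
a standard errno, so the comments merely restate the behavior claimed
above. ]

#include <pthread.h>
#include <stdio.h>

int main(void)
{
	pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;	/* non-recursive */
	int err;

	pthread_mutex_lock(&m);

	/*
	 * Second lock of the same non-recursive mutex by the same thread.
	 * Per 1) above, a PRIO_INHERIT mutex leaves the thread blocked on
	 * a waitqueue (after dropping mmap_sem and robust_sem); per 2)
	 * above, a robust-only mutex instead gets EWOULDDEADLOCK back.
	 */
	err = pthread_mutex_lock(&m);
	if (err)
		fprintf(stderr, "second lock returned %d\n", err);

	return 0;
}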
> >
> >
> > Have you tested this on SMP - without CONFIG_DEBUG_DEADLOCKS? As far
> > as I can see, check_futex_deadlock() can easily crash the system, as
> > there are _no_ spinlocks protecting access to task->blocked_on,
> > task->blocked_on->lock etc., and the task and lock structures can be
> > freed at any point while you are in check_futex_deadlock().
> >
>
> You are right.  I didn't extract enough of the deadlock check routines
> to get
> the locking.  I'll fix that.
>
> > Furthermore, check_futex_deadlock is recursive. From userspace you can
> > easily trick the system into a kernel stack overflow by creating many
> > nested futexes.
>
> The rt deadlock check is also recursive, but it stops at a depth of 20,
> deciding something must be corrupt to have a task blocked on more
> than 20 locks.
>
> We should also set a limit.  We can either just hang an ill-behaved
> app on the waitqueue or return an error code to the application.
> Any suggestions on which would be best: hang an ill-behaved app,
> or just return it an error?
>
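
[ A sketch of the depth-limited check being discussed, using simplified
stand-in structures rather than the kernel's real ones.  It walks the
chain iteratively, so nested futexes cannot overflow the kernel stack;
real code would additionally have to hold each owner's pi_lock to keep
the pointers stable, as noted above. ]

#include <errno.h>

#define MAX_LOCK_DEPTH	20	/* same cap the rt deadlock check uses */

struct lock;
struct task {
	struct lock *blocked_on;	/* lock this task waits for, or NULL */
};
struct lock {
	struct task *owner;		/* current holder, or NULL */
};

/*
 * Follow the owner -> blocked_on chain.  Returns -EDEADLK if blocking
 * 'self' on 'lock' would close a cycle, -ELOOP if the chain is deeper
 * than MAX_LOCK_DEPTH, 0 otherwise.
 */
static int check_deadlock(struct task *self, struct lock *lock)
{
	int depth;

	for (depth = 0; lock && lock->owner; depth++) {
		if (depth >= MAX_LOCK_DEPTH)
			return -ELOOP;
		if (lock->owner == self)
			return -EDEADLK;
		lock = lock->owner->blocked_on;
	}
	return 0;
}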

By default, make the tasks block. You can always make the return code an
option on each futex, but it has to be off by default. Here is why I think
so:

Having a return code would require you to do the deadlock detection "up
front" in the down() operation. That takes a lot of CPU cycles. Of course,
if you return an error code, the application could use the information for
something constructive, but I am afraid most applications won't do anything
constructive with it anyway (i.e. the application just keeps running) -
such error handling code would be hard to write.

What is needed in most applications is that the application simply
deadlocks, with the tasks blocked on the various locks. Then you can go in
and trace the locks "postmortem".
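
[ A sketch of what such a postmortem trace can look like from userspace
without any new kernel interface: scan /proc/<pid>/wchan and flag tasks
whose wait channel is the futex deadlock wait.  The 'futex_deadlock' name
is taken from the patch description quoted further down; adjust it to
whatever the kernel actually reports. ]

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	char path[64], wchan[128];
	FILE *f;

	if (!proc)
		return 1;

	while ((de = readdir(proc)) != NULL) {
		if (!isdigit((unsigned char)de->d_name[0]))
			continue;	/* only numeric pid directories */
		snprintf(path, sizeof(path), "/proc/%s/wchan", de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(wchan, sizeof(wchan), f) &&
		    strstr(wchan, "futex_deadlock"))
			printf("pid %s blocked in %s\n", de->d_name, wchan);
		fclose(f);
	}
	closedir(proc);
	return 0;
}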

With the current setup, where your deadlock detection has to be performed
"up front" because the rt_mutex can create spinlock deadlocks, the error
code is a natural thing. But once rt_mutex is "fixed" or replaced with
something else, this feature will force the kernel to run deadlock
detection "up front" even though it isn't used for anything useful.

Esben

> David
>
> >
> > I am working on a new way to do priority inheritance for nested locks
> > in rt_mutex such that you do not risk deadlocking the system on
> > raw spinlocks when you have an rt_mutex deadlock. But it won't have
> > deadlock detection without CONFIG_DEBUG_DEADLOCKS. On the other hand,
> > it would be possible to make a deadlock scanner that finds deadlocks
> > in the system after they have happened. With a suitable /proc
> > interface it could even be done in userspace.
> >
> > My patch to the rt_mutex is far from finished. I haven't even compiled
> > a kernel with it yet. I spend the little time I have between when my
> > family goes to bed and when I simply have to go to bed myself writing
> > a unit-test framework for the rt_mutex, and I have both the original
> > and the patched rt_mutex passing all my tests. But I need more tests
> > to hammer out the details about task->state, for instance. If anyone
> > is interested I would be happy to send what I have right now.
> >
> > Esben
> >
> >>
> >> 	It's also easier to see if a POSIX-compliant app has deadlocked
> >> itself: the 'ps' command will show that the wait channel of a
> >> deadlocked application is 'futex_deadlock'.
> >>
> >> 	Let me know if it passes all your tests.
> >>
> >> David
> >>
> >>
> >>
> >>
> >> On Dec 20, 2005, at 7:50 AM, Dinakar Guniguntala wrote:
> >>
> >>> On Tue, Dec 20, 2005 at 02:19:56PM +0100, Ingo Molnar wrote:
> >>>>
> >>>> hm, i'm looking at -rf4 - these changes look fishy:
> >>>>
> >>>> -       _raw_spin_lock(&lock_owner(lock)->task->pi_lock);
> >>>> +       if (current != lock_owner(lock)->task)
> >>>> +               _raw_spin_lock(&lock_owner(lock)->task->pi_lock);
> >>>>
> >>>> why is this done?
> >>>>
> >>>
> >>> Ingo, this is to prevent a kernel hang due to application error.
> >>>
> >>> Basically, when an application does a pthread_mutex_lock twice on a
> >>> _nonrecursive_ mutex with robust/PI attributes, the whole system
> >>> hangs. Of course the application clearly should not be doing
> >>> anything like that, but it should not end up hanging the system
> >>> either.
> >>>
> >>> 	-Dinakar
> >>>
> >>
> >
>

