From: Daniel Vetter <daniel@ffwll.ch>
To: Peter Zijlstra <peterz@infradead.org>
Cc: "Nicolai Hähnle" <nhaehnle@gmail.com>,
	"Nicolai Hähnle" <Nicolai.Haehnle@amd.com>,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	"Ingo Molnar" <mingo@redhat.com>,
	stable@vger.kernel.org,
	"Maarten Lankhorst" <maarten.lankhorst@canonical.com>
Subject: Re: [PATCH 1/4] locking/ww_mutex: Fix a deadlock affecting ww_mutexes
Date: Wed, 23 Nov 2016 14:08:48 +0100	[thread overview]
Message-ID: <20161123130848.q6yw73fjdhttmbqh@phenom.ffwll.local> (raw)
In-Reply-To: <20161123130046.GS3092@twins.programming.kicks-ass.net>

On Wed, Nov 23, 2016 at 02:00:46PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 23, 2016 at 12:25:22PM +0100, Nicolai Hähnle wrote:
> > From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
> > 
> > Fix a race condition involving 4 threads and 2 ww_mutexes as indicated in
> > the following example. Acquire context stamps are ordered like the thread
> > numbers, i.e. thread #1 should back off when it encounters a mutex locked
> > by thread #0 etc.
> > 
> > Thread #0    Thread #1    Thread #2    Thread #3
> > ---------    ---------    ---------    ---------
> >                                        lock(ww)
> >                                        success
> >              lock(ww')
> >              success
> >                           lock(ww)
> >              lock(ww)        .
> >                 .            .         unlock(ww) part 1
> > lock(ww)        .            .            .
> > success         .            .            .
> >                 .            .         unlock(ww) part 2
> >                 .         back off
> > lock(ww')       .
> >    .            .
> > (stuck)      (stuck)
> > 
> > Here, unlock(ww) part 1 is the part that sets lock->base.count to 1
> > (without being protected by lock->base.wait_lock), meaning that thread #0
> > can acquire ww in the fast path or, much more likely, the medium path
> > in mutex_optimistic_spin. Since lock->base.count == 0, thread #0 then
> > won't wake up any of the waiters in ww_mutex_set_context_fastpath.
> > 
> > Then, unlock(ww) part 2 wakes up _only_the_first_ waiter of ww. This is
> > thread #2, since waiters are added at the tail. Thread #2 wakes up and
> > backs off since it sees ww owned by a context with a lower stamp.
> > 
> > Meanwhile, thread #1 is never woken up, and so it won't back off its lock
> > on ww'. So thread #0 gets stuck waiting for ww' to be released.
> > 
> > This patch fixes the deadlock by waking up all waiters in the slow path
> > of ww_mutex_unlock.
> > 
> > We have an internal test case for amdgpu which continuously submits
> > command streams from tens of threads, where all command streams reference
> > hundreds of GPU buffer objects with a lot of overlap in the buffer lists
> > between command streams. This test reliably caused a deadlock, and while I
> > haven't completely confirmed that it is exactly the scenario outlined
> > above, this patch does fix the test case.
> > 
> > v2:
> > - use wake_q_add
> > - add additional explanations
> > 
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: stable@vger.kernel.org
> > Reviewed-by: Christian König <christian.koenig@amd.com> (v1)
> > Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
> 
> Completely and utterly fails to apply; I think this patch is based on
> code prior to the mutex rewrite.
> 
> Please rebase on tip/locking/core.
> 
> Also, is this a regression, or has this been a 'feature' of the ww_mutex
> code from early on?

Sorry, forgot to mention that, but I checked. It seems to have been broken
since day 1; at least, looking at the original code, the wake-single-waiter
stuff is as old as the mutex code added in 2006.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
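
[For reference, a minimal sketch of the wake-all-waiters approach the
commit message above describes, written against the 4.8/4.9-era
(pre-rewrite) mutex internals. The struct layout (base.count,
base.wait_lock, base.wait_list, struct mutex_waiter) and the wake_q
helpers (WAKE_Q, wake_q_add, wake_up_q) match that era's kernel; the
function below is a hypothetical stand-in, not the actual patch, and the
lock-release ordering is simplified.]

/*
 * Illustrative sketch only -- not the actual patch.  Struct layout and
 * the wake_q API follow the 4.8/4.9-era kernel; the release ordering
 * is simplified.
 */
#include <linux/mutex.h>
#include <linux/ww_mutex.h>
#include <linux/sched.h>

static void ww_mutex_unlock_wake_all_sketch(struct ww_mutex *lock)
{
	struct mutex_waiter *cur;
	WAKE_Q(wake_q);

	spin_lock(&lock->base.wait_lock);

	/*
	 * Queue a wakeup for *every* waiter, not just the head of
	 * wait_list: the first waiter may back off (it sees an owner
	 * with a lower stamp) without passing the wakeup on, leaving
	 * other waiters -- and the locks they already hold -- stuck.
	 */
	list_for_each_entry(cur, &lock->base.wait_list, list)
		wake_q_add(&wake_q, cur->task);

	/* Release the lock; in the old code, count == 1 means unlocked. */
	atomic_set(&lock->base.count, 1);

	spin_unlock(&lock->base.wait_lock);

	/* The deferred wakeups are issued outside of wait_lock. */
	wake_up_q(&wake_q);
}

[Deferring the wakeups through a wake_q, as the v2 note mentions, is
what allows the wake_up calls to happen after wait_lock is dropped.]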

Thread overview: 37+ messages
2016-11-23 11:25 [PATCH 1/4] locking/ww_mutex: Fix a deadlock affecting ww_mutexes Nicolai Hähnle
2016-11-23 11:25 ` [PATCH 2/4] locking/ww_mutex: Remove redundant wakeups in ww_mutex_set_context_slowpath Nicolai Hähnle
2016-11-23 11:25 ` [PATCH 3/4] locking/Documentation: fix a typo Nicolai Hähnle
2016-11-23 11:25 ` [PATCH 4/4] locking/ww_mutex: Fix a comment typo Nicolai Hähnle
2016-11-23 12:50 ` [PATCH 1/4] locking/ww_mutex: Fix a deadlock affecting ww_mutexes Daniel Vetter
2016-11-23 13:00 ` Peter Zijlstra
2016-11-23 13:08   ` Daniel Vetter [this message]
2016-11-23 13:11     ` Daniel Vetter
2016-11-23 13:33       ` Maarten Lankhorst
2016-11-23 14:03 ` Peter Zijlstra
2016-11-23 14:25   ` Daniel Vetter
2016-11-23 14:32     ` Peter Zijlstra
2016-11-24 11:26     ` Nicolai Hähnle
2016-11-24 11:40       ` Peter Zijlstra
2016-11-24 11:52         ` Daniel Vetter
2016-11-24 11:56           ` Peter Zijlstra
2016-11-24 12:05             ` Nicolai Hähnle