Date: Wed, 11 May 2016 11:02:07 +0200
From: Michal Hocko
To: Peter Zijlstra
Cc: Tetsuo Handa, LKML, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin",
	"David S. Miller", Tony Luck, Andrew Morton, Chris Zankel,
	Max Filippov, Davidlohr Bueso, Waiman Long
Subject: Re: [PATCH 03/11] locking, rwsem: introduce basis for down_write_killable
Message-ID: <20160511090206.GG16677@dhcp22.suse.cz>
References: <1459508695-14915-1-git-send-email-mhocko@kernel.org>
	<1459508695-14915-4-git-send-email-mhocko@kernel.org>
	<8bd03bdc-0373-a3bb-da12-045322efb797@I-love.SAKURA.ne.jp>
	<20160510115320.GJ23576@dhcp22.suse.cz>
	<20160510123806.GB3193@twins.programming.kicks-ass.net>
	<20160511072357.GC16677@dhcp22.suse.cz>
	<20160511083512.GG3193@twins.programming.kicks-ass.net>
In-Reply-To: <20160511083512.GG3193@twins.programming.kicks-ass.net>

On Wed 11-05-16 10:35:12, Peter Zijlstra wrote:
> On Wed, May 11, 2016 at 09:23:57AM +0200, Michal Hocko wrote:
> > On Tue 10-05-16 14:38:06, Peter Zijlstra wrote:
> > > > Also, looking at it again; I think we're forgetting to re-adjust the
> > > > BIAS for the cancelled writer.
> >
> > Hmm, __rwsem_down_write_failed_common does
> >
> > 	/* undo write bias from down_write operation, stop active locking */
> > 	count = rwsem_atomic_update(-RWSEM_ACTIVE_WRITE_BIAS, sem);
> >
> > which should remove the bias AFAIU.
>
> Right; at this point we're neutral wrt bias.
>
> > Later we do
> >
> > 	if (waiting) {
> > 		count = READ_ONCE(sem->count);
> >
> > 		/*
> > 		 * If there were already threads queued before us and there are
> > 		 * no active writers, the lock must be read owned; so we try to
> > 		 * wake any read locks that were queued ahead of us.
> > 		 */
> > 		if (count > RWSEM_WAITING_BIAS)
> > 			sem = __rwsem_do_wake(sem, RWSEM_WAKE_READERS);
> >
> > 	} else
> > 		count = rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
> >
> > and that might set RWSEM_WAITING_BIAS but the current holder of the lock
> > should handle that correctly and wake the waiting tasks IIUC. I will go
> > and check the code closer. It is quite easy to get this subtle code
> > wrong..
>
> Subtle; yes.
>
> So if you look at rwsem_try_write_lock() -- traditionally the only way
> to exit this wait loop -- you see it does:
>
> 	if (count == RWSEM_WAITING_BIAS &&
> 	    cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
> 			    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
> 		if (!list_is_singular(&sem->wait_list))
> 			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
> 		rwsem_set_owner(sem);
> 		return true;
> 	}
>
> Which ends up clearing RWSEM_WAITING_BIAS if we were the only waiter --
> or rather, it always clears WAITING, but then tests the list and re-sets
> it if there's more than one waiter on.
>
> Now, the signal break doesn't clear WAITING if we were the only waiter
> on the list; which means any further down_read() will block (I didn't
> look at what a subsequent down_write() would do).
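For reference, here is a minimal user-space replay of the count
transitions being discussed. The constants mirror the 64-bit rwsem-xadd
layout; everything else is a simplification for illustration, not the
kernel implementation:

#include <stdio.h>

#define RWSEM_ACTIVE_MASK	0xffffffffL
#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK - 1)
#define RWSEM_ACTIVE_BIAS	1L
#define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

static void show(const char *step, long count)
{
	printf("%-40s count=%#lx\n", step, (unsigned long)count);
}

int main(void)
{
	long count = 0;				/* unlocked rwsem */

	count += RWSEM_ACTIVE_READ_BIAS;	/* a reader holds the sem */
	show("down_read()", count);

	count += RWSEM_ACTIVE_WRITE_BIAS;	/* writer's fast path fails */
	show("down_write() fast path", count);

	count -= RWSEM_ACTIVE_WRITE_BIAS;	/* slow path undoes the write bias */
	count += RWSEM_WAITING_BIAS;		/* wait_list was empty: mark waiters */
	show("writer queued (killable)", count);

	/*
	 * The writer gets a fatal signal and leaves the list, but nothing
	 * takes RWSEM_WAITING_BIAS back out even though the wait list is
	 * empty again.
	 */
	show("writer cancelled, wait_list empty", count);

	count -= RWSEM_ACTIVE_READ_BIAS;	/* the reader drops the lock */
	show("up_read()", count);

	return 0;
}

The final count is RWSEM_WAITING_BIAS with an empty wait list, which is
the state under discussion here.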
I was staring at this part as well, but then I convinced myself that
this is OK because rwsem_down_read_failed does:

	if (list_empty(&sem->wait_list))
		adjustment += RWSEM_WAITING_BIAS;
	list_add_tail(&waiter.list, &sem->wait_list);

	/* we're now waiting on the lock, but no longer actively locking */
	count = rwsem_atomic_update(adjustment, sem);

	/* If there are no active locks, wake the front queued process(es).
	 *
	 * If there are no writers and we are first in the queue,
	 * wake our own waiter to join the existing active readers !
	 */
	if (count == RWSEM_WAITING_BIAS ||
	    (count > RWSEM_WAITING_BIAS &&
	     adjustment != -RWSEM_ACTIVE_READ_BIAS))
		sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);

__rwsem_do_wake should then see all the readers (including the current
one), wake them up, and set waiter->task to NULL so that the current one
breaks out of the loop as well.

> So I think we need something like this, to clear WAITING if we leave
> the list empty.

But maybe this explicit clearing is the correct way to go.

> Does that make sense?

I do not see an immediate problem with this. Maybe Tetsuo can try this
out and see if it really makes a difference.

> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index df4dcb883b50..7011dd1c286c 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -489,6 +489,8 @@ __rwsem_down_write_failed_common(struct rw_semaphore *sem, int state)
>  	do {
>  		if (signal_pending_state(state, current)) {
>  			raw_spin_lock_irq(&sem->wait_lock);
> +			if (list_is_singular(&sem->wait_list))
> +				rwsem_atomic_update(-RWSEM_WAITING_BIAS, sem);
>  			ret = ERR_PTR(-EINTR);
>  			goto out;
>  		}

-- 
Michal Hocko
SUSE Labs
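For completeness, the same toy arithmetic with the quoted hunk applied;
a plain flag stands in for list_is_singular(), and this is again only an
illustration of the count bookkeeping, not the kernel code:

#include <stdio.h>

#define RWSEM_ACTIVE_MASK	0xffffffffL
#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK - 1)
#define RWSEM_ACTIVE_READ_BIAS	1L

int main(void)
{
	/* one reader holds the lock, one killable writer is queued */
	long count = RWSEM_ACTIVE_READ_BIAS + RWSEM_WAITING_BIAS;
	int only_waiter = 1;			/* models list_is_singular() */

	/* the signal_pending_state() exit path with the hunk applied */
	if (only_waiter)
		count -= RWSEM_WAITING_BIAS;
	printf("after cancelled down_write_killable(): %#lx\n",
	       (unsigned long)count);		/* just the reader bias again */

	count -= RWSEM_ACTIVE_READ_BIAS;	/* up_read() */
	printf("after up_read():                       %#lx\n",
	       (unsigned long)count);		/* 0: later lockers see a free sem */

	return 0;
}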