Date: Thu, 12 Nov 2015 16:00:58 +0100
From: Oleg Nesterov
To: Boqun Feng
Cc: Peter Zijlstra, mingo@kernel.org, linux-kernel@vger.kernel.org,
	paulmck@linux.vnet.ibm.com, corbet@lwn.net, mhocko@kernel.org,
	dhowells@redhat.com, torvalds@linux-foundation.org, will.deacon@arm.com,
	Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
Message-ID: <20151112150058.GA30321@redhat.com>
References: <20151102132901.157178466@infradead.org>
	<20151102134941.005198372@infradead.org>
	<20151103175958.GA4800@redhat.com>
	<20151111093939.GA6314@fixme-laptop.cn.ibm.com>
	<20151111121232.GN17308@twins.programming.kicks-ass.net>
	<20151111193953.GA23515@redhat.com>
	<20151112070915.GC6314@fixme-laptop.cn.ibm.com>
In-Reply-To: <20151112070915.GC6314@fixme-laptop.cn.ibm.com>

On 11/12, Boqun Feng wrote:
>
> On Wed, Nov 11, 2015 at 08:39:53PM +0100, Oleg Nesterov wrote:
> >
> > 	object_t *object;
> > 	spinlock_t lock;
> >
> > 	void update(void)
> > 	{
> > 		object_t *o;
> >
> > 		spin_lock(&lock);
> > 		o = READ_ONCE(object);
> > 		if (o) {
> > 			BUG_ON(o->dead);
> > 			do_something(o);
> > 		}
> > 		spin_unlock(&lock);
> > 	}
> >
> > 	void destroy(void) // can be called only once, can't race with itself
> > 	{
> > 		object_t *o;
> >
> > 		o = object;
> > 		object = NULL;
> >
> > 		/*
> > 		 * pairs with lock/ACQUIRE. The next update() must see
> > 		 * object == NULL after spin_lock();
> > 		 */
> > 		smp_mb();
> >
> > 		spin_unlock_wait(&lock);
> >
> > 		/*
> > 		 * pairs with unlock/RELEASE. The previous update() has
> > 		 * already passed BUG_ON(o->dead).
> > 		 *
> > 		 * (Yes, yes, in this particular case it is not needed,
> > 		 * we can rely on the control dependency.)
> > 		 */
> > 		smp_mb();
> >
> > 		o->dead = true;
> > 	}
> >
> > I believe the code above is correct and it needs the barriers on both sides.
> >
>
> Hmm.. probably incorrect.. because the ACQUIRE semantics of spin_lock()
> only guarantee that the memory operations following spin_lock() can't
> be reordered before the *LOAD* part of spin_lock(), not the *STORE* part,
> i.e. the case below can happen (assuming spin_lock() is implemented as
> an ll/sc loop):
>
> 	spin_lock(&lock):
> 	  r1 = *lock;		// LL, r1 == 0
>
> 	o = READ_ONCE(object);	// could be reordered here
>
> 	  *lock = 1;		// SC
>
> This could happen because of the ACQUIRE semantics of spin_lock(), and
> the current implementation of spin_lock() on PPC allows this to happen.
>
> (Cc PPC maintainers for their opinions on this one)

In this case the code above is obviously wrong. And I do not understand
how we can rely on spin_unlock_wait() then.

And afaics do_exit() is buggy too then, see below.
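Before that, just to spell out how I read the reordering you describe when
it is applied to my example above (this is only a sketch which assumes the
ll/sc based spin_lock() you mention, not any particular architecture's code):

	CPU 1 (update)				CPU 2 (destroy)

	spin_lock(&lock):
	  r1 = *lock; // LL, r1 == 0

	o = READ_ONCE(object);
	// reordered before the SC,
	// reads the old pointer
						o = object;
						object = NULL;

						smp_mb();

						spin_unlock_wait(&lock);
						// reads *lock == 0, returns

						o->dead = true;

	  *lock = 1; // SC

	if (o)
		BUG_ON(o->dead); // fires

destroy() passes spin_unlock_wait() before the STORE part of update()'s
spin_lock() becomes visible, so the smp_mb()'s in destroy() can't help and
update() hits BUG_ON() while holding the lock.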
> I think it's OK for it as an ACQUIRE (with a proper barrier) or even just
> a control dependency to pair with spin_unlock(), for example, the
> following snippet in do_exit() is OK, except the smp_mb() is redundant,
> unless I'm missing something subtle:
>
> 	/*
> 	 * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
> 	 * when the following two conditions become true.
> 	 *   - There is race condition of mmap_sem (It is acquired by
> 	 *     exit_mm()), and
> 	 *   - SMI occurs before setting TASK_RUNNING.
> 	 *     (or hypervisor of virtual machine switches to other guest)
> 	 *  As a result, we may become TASK_RUNNING after becoming TASK_DEAD
> 	 *
> 	 * To avoid it, we have to wait for releasing tsk->pi_lock which
> 	 * is held by try_to_wake_up()
> 	 */
> 	smp_mb();
> 	raw_spin_unlock_wait(&tsk->pi_lock);

Perhaps it is me who missed something. But I don't think we can remove
this mb(). And at the same time it can't help on PPC if I understand
your explanation above correctly.

To simplify, let's ignore exit_mm/down_read/etc. The exiting task does

	current->state = TASK_UNINTERRUPTIBLE;
	// without schedule() in between
	current->state = TASK_RUNNING;

	smp_mb();
	spin_unlock_wait(pi_lock);

	current->state = TASK_DEAD;
	schedule();

and we need to ensure that if we race with try_to_wake_up(TASK_UNINTERRUPTIBLE)
it can't change TASK_DEAD back to TASK_RUNNING.

Without smp_mb() this can be reordered: spin_unlock_wait(pi_lock) can
read the old "unlocked" state of pi_lock before we set TASK_UNINTERRUPTIBLE,
so in fact we could have

	current->state = TASK_UNINTERRUPTIBLE;

	spin_unlock_wait(pi_lock);

	current->state = TASK_RUNNING;

	current->state = TASK_DEAD;

and this can obviously race with ttwu(), which can take pi_lock and see
state == TASK_UNINTERRUPTIBLE after spin_unlock_wait().

And, if I understand you correctly, this smp_mb() can't help on PPC.
try_to_wake_up() can read task->state before it writes to *pi_lock. To
me this doesn't really differ from the code above:

	CPU 1 (do_exit)				CPU 2 (ttwu)

						spin_lock(pi_lock):
						  r1 = *pi_lock; // r1 == 0

	p->state = TASK_UNINTERRUPTIBLE;	state = p->state;

	p->state = TASK_RUNNING;

	mb();
	spin_unlock_wait(pi_lock);

						  *pi_lock = 1;

	p->state = TASK_DEAD;			if (state & TASK_UNINTERRUPTIBLE) // true
							p->state = TASK_RUNNING;

No?

And smp_mb__before_spinlock() looks wrong too then.

Oleg.
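P.S. For context, the smp_mb__before_spinlock() definition and the ttwu()
path I mean are roughly the following (quoting from memory, so treat it as
a sketch rather than the exact source):

	/* include/linux/spinlock.h */
	#ifndef smp_mb__before_spinlock
	#define smp_mb__before_spinlock()	smp_wmb()
	#endif

	/* try_to_wake_up() */
	/*
	 * Ensure the CONDITION = 1 store done by the caller can not be
	 * reordered with the p->state check below; pairs with the mb()
	 * in set_current_state() done by the waiting thread.
	 */
	smp_mb__before_spinlock();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	if (!(p->state & state))
		goto out;

It is only smp_wmb(), so the intended ordering of the caller's STORE against
the p->state LOAD inside the critical section relies entirely on the STORE
part of spin_lock(). If the loads inside the critical section can be
reordered before that STORE, as you explain above, this pairing can't work
either.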