From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 12 Nov 2015 23:18:39 +0800
From: Boqun Feng
To: Oleg Nesterov
Cc: Peter Zijlstra, mingo@kernel.org, linux-kernel@vger.kernel.org,
	paulmck@linux.vnet.ibm.com, corbet@lwn.net, mhocko@kernel.org,
	dhowells@redhat.com, torvalds@linux-foundation.org, will.deacon@arm.com,
	Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
Message-ID: <20151112151839.GE6314@fixme-laptop.cn.ibm.com>
In-Reply-To: <20151112150058.GA30321@redhat.com>

On Thu, Nov 12, 2015 at 04:00:58PM +0100, Oleg Nesterov wrote:
> On 11/12, Boqun Feng wrote:
[snip]
> >
> > Hmm.. probably incorrect..
> > because the ACQUIRE semantics of spin_lock()
> > only guarantee that the memory operations following spin_lock() can't
> > be reordered before the *LOAD* part of spin_lock(), not the *STORE*
> > part, i.e. the case below can happen (assuming the spin_lock() is
> > implemented as an ll/sc loop):
> >
> > 	spin_lock(&lock):
> > 	  r1 = *lock; // LL, r1 == 0
> > 	  o = READ_ONCE(object); // could be reordered here.
> > 	  *lock = 1; // SC
> >
> > This could happen because of the ACQUIRE semantics of spin_lock(), and
> > the current implementation of spin_lock() on PPC allows this to happen.
> >
> > (Cc'ing PPC maintainers for their opinions on this one)
>
> In this case the code above is obviously wrong. And I do not understand
> how we can rely on spin_unlock_wait() then.
>
> And afaics do_exit() is buggy too then, see below.
>
> > I think it's OK for it as an ACQUIRE (with a proper barrier) or even
> > just a control dependency to pair with spin_unlock(); for example, the
> > following snippet in do_exit() is OK, except the smp_mb() is redundant,
> > unless I'm missing something subtle:
> >
> > 	/*
> > 	 * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
> > 	 * when the following two conditions become true.
> > 	 *   - There is race condition of mmap_sem (It is acquired by
> > 	 *     exit_mm()), and
> > 	 *   - SMI occurs before setting TASK_RUNNING.
> > 	 *     (or hypervisor of virtual machine switches to other guest)
> > 	 * As a result, we may become TASK_RUNNING after becoming TASK_DEAD
> > 	 *
> > 	 * To avoid it, we have to wait for releasing tsk->pi_lock which
> > 	 * is held by try_to_wake_up()
> > 	 */
> > 	smp_mb();
> > 	raw_spin_unlock_wait(&tsk->pi_lock);
>
> Perhaps it is me who missed something. But I don't think we can remove
> this mb(). And at the same time it can't help on PPC if I understand

You are right, we need this smp_mb() to order the previous STORE of
->state with the LOAD of ->pi_lock.
I missed that part because I saw
that all the explicit STOREs of ->state in do_exit() are done via
set_current_state(), which has an smp_mb() following the STOREs.

> your explanation above correctly.
>
> To simplify, let's ignore exit_mm/down_read/etc. The exiting task does
>
> 	current->state = TASK_UNINTERRUPTIBLE;
> 	// without schedule() in between
> 	current->state = TASK_RUNNING;
>
> 	smp_mb();
> 	spin_unlock_wait(pi_lock);
>
> 	current->state = TASK_DEAD;
> 	schedule();
>
> and we need to ensure that if we race with try_to_wake_up(TASK_UNINTERRUPTIBLE)
> it can't change TASK_DEAD back to RUNNING.
>
> Without smp_mb() this can be reordered: spin_unlock_wait(pi_lock) can
> read the old "unlocked" state of pi_lock before we set UNINTERRUPTIBLE,
> so in fact we could have
>
> 	current->state = TASK_UNINTERRUPTIBLE;
>
> 	spin_unlock_wait(pi_lock);
>
> 	current->state = TASK_RUNNING;
>
> 	current->state = TASK_DEAD;
>
> and this can obviously race with ttwu() which can take pi_lock and see
> state == TASK_UNINTERRUPTIBLE after spin_unlock_wait().

Yep, my mistake ;-)

> And, if I understand you correctly, this smp_mb() can't help on PPC.
> try_to_wake_up() can read task->state before it writes to *pi_lock.
> To me this doesn't really differ from the code above:
>
> 	CPU 1 (do_exit)				CPU 2 (ttwu)
>
> 						spin_lock(pi_lock):
> 						  r1 = *pi_lock; // r1 == 0
> 	p->state = TASK_UNINTERRUPTIBLE;
> 						state = p->state;
> 	p->state = TASK_RUNNING;
> 	mb();
> 	spin_unlock_wait();
> 						*pi_lock = 1;
>
> 	p->state = TASK_DEAD;
> 						if (state & TASK_UNINTERRUPTIBLE) // true
> 						  p->state = RUNNING;
>
> No?

do_exit() is surely buggy if spin_lock() could work this way.

> And smp_mb__before_spinlock() looks wrong too then.

Maybe not? smp_mb__before_spinlock() is used before a LOCK operation,
which has both a LOAD part and a STORE part, unlike spin_unlock_wait().

> Oleg.