From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 1 Jul 2017 19:00:45 -0700
From: "Paul E. McKenney"
To: Manfred Spraul
Cc: linux-kernel@vger.kernel.org, netfilter-devel@vger.kernel.org,
	netdev@vger.kernel.org, oleg@redhat.com, akpm@linux-foundation.org,
	mingo@redhat.com, dave@stgolabs.net, tj@kernel.org, arnd@arndb.de,
	linux-arch@vger.kernel.org, will.deacon@arm.com, peterz@infradead.org,
	stern@rowland.harvard.edu, parri.andrea@gmail.com,
	torvalds@linux-foundation.org, Pablo Neira Ayuso,
	Jozsef Kadlecsik, Florian Westphal, "David S. Miller",
	coreteam@netfilter.org
Subject: Re: [PATCH RFC 01/26] netfilter: Replace spin_unlock_wait() with lock/unlock pair
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170629235918.GA6445@linux.vnet.ibm.com> <1498780894-8253-1-git-send-email-paulmck@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
User-Agent: Mutt/1.5.21 (2010-09-15)
Message-Id: <20170702020045.GR2393@linux.vnet.ibm.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Jul 01, 2017 at 09:44:12PM +0200, Manfred Spraul wrote:
> Hi Paul,
> 
> On 06/30/2017 02:01 AM, Paul E. McKenney wrote:
> >There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> >and it appears that all callers could do just as well with a lock/unlock
> >pair.  This commit therefore replaces the spin_unlock_wait() calls
> >in nf_conntrack_lock() and nf_conntrack_all_lock() with spin_lock()
> >followed immediately by spin_unlock().  These functions do not appear
> >to be invoked on any fastpaths.
> >
> >Signed-off-by: Paul E. McKenney
> >Cc: Pablo Neira Ayuso
> >Cc: Jozsef Kadlecsik
> >Cc: Florian Westphal
> >Cc: "David S. Miller"
> >Cc: 
> >Cc: 
> >Cc: 
> >Cc: Will Deacon
> >Cc: Peter Zijlstra
> >Cc: Alan Stern
> >Cc: Andrea Parri
> >Cc: Linus Torvalds
> >---
> >  net/netfilter/nf_conntrack_core.c | 26 ++++++++------------------
> >  1 file changed, 8 insertions(+), 18 deletions(-)
> >
> >diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
> >index e847dbaa0c6b..9f997859d160 100644
> >--- a/net/netfilter/nf_conntrack_core.c
> >+++ b/net/netfilter/nf_conntrack_core.c
> >@@ -99,15 +99,11 @@ void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
> >  	spin_lock(lock);
> >  	while (unlikely(nf_conntrack_locks_all)) {
> I think here an ACQUIRE is missing.
> >  		spin_unlock(lock);
> >-
> >-		/*
> >-		 * Order the 'nf_conntrack_locks_all' load vs. the
> >-		 * spin_unlock_wait() loads below, to ensure
> >-		 * that 'nf_conntrack_locks_all_lock' is indeed held:
> >-		 */
> >-		smp_rmb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
> >-		spin_unlock_wait(&nf_conntrack_locks_all_lock);
> >+		/* Wait for nf_conntrack_locks_all_lock holder to release ... */
> >+		spin_lock(&nf_conntrack_locks_all_lock);
> >+		spin_unlock(&nf_conntrack_locks_all_lock);
> >  		spin_lock(lock);
> >+		/* ... and retry. */
> >  	}
> >  }
> As far as I see, nf_conntrack_locks[] nests inside
> nf_conntrack_locks_all_lock.
> So
>     spin_lock(&nf_conntrack_locks_all_lock);
>     spin_lock(lock);
>     spin_unlock(&nf_conntrack_locks_all_lock);
> 
> can replace the retry logic.
> 
> Correct? Then what about the attached patch?

At first glance, it looks correct to me, thank you!  I have replaced my
patch with this one for testing and further review.

							Thanx, Paul

> -- 
>     Manfred

> >From 453e7a77f3756d939c754031b092cbdfbd149559 Mon Sep 17 00:00:00 2001
> From: Manfred Spraul
> Date: Sun, 21 Aug 2016 07:17:55 +0200
> Subject: [PATCH] net/netfilter/nf_conntrack_core: Fix net_conntrack_lock()
> 
> As we want to remove spin_unlock_wait() and replace it with explicit
> spin_lock()/spin_unlock() calls, we can use this to simplify the
> locking.
> 
> In addition:
> - Reading nf_conntrack_locks_all needs ACQUIRE memory ordering.
> - The new code avoids the backwards loop.
> 
> Only slightly tested, I did not manage to trigger calls to
> nf_conntrack_all_lock().
> 
> Fixes: b16c29191dc8
> Signed-off-by: Manfred Spraul
> Cc: 
> Cc: Sasha Levin
> Cc: Pablo Neira Ayuso
> Cc: netfilter-devel@vger.kernel.org
> ---
>  net/netfilter/nf_conntrack_core.c | 44 +++++++++++++++++++++------------------
>  1 file changed, 24 insertions(+), 20 deletions(-)
> 
> diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
> index e847dba..1193565 100644
> --- a/net/netfilter/nf_conntrack_core.c
> +++ b/net/netfilter/nf_conntrack_core.c
> @@ -96,19 +96,24 @@ static struct conntrack_gc_work conntrack_gc_work;
>  
>  void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
>  {
> +	/* 1) Acquire the lock */
>  	spin_lock(lock);
> -	while (unlikely(nf_conntrack_locks_all)) {
> -		spin_unlock(lock);
> -
> -		/*
> -		 * Order the 'nf_conntrack_locks_all' load vs. the
> -		 * spin_unlock_wait() loads below, to ensure
> -		 * that 'nf_conntrack_locks_all_lock' is indeed held:
> -		 */
> -		smp_rmb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
> -		spin_unlock_wait(&nf_conntrack_locks_all_lock);
> -		spin_lock(lock);
> -	}
> +	/* 2) read nf_conntrack_locks_all, with ACQUIRE semantics */
> +	if (likely(smp_load_acquire(&nf_conntrack_locks_all) == false))
> +		return;
> +
> +	/* fast path failed, unlock */
> +	spin_unlock(lock);
> +
> +	/* Slow path 1) get global lock */
> +	spin_lock(&nf_conntrack_locks_all_lock);
> +
> +	/* Slow path 2) get the lock we want */
> +	spin_lock(lock);
> +
> +	/* Slow path 3) release the global lock */
> +	spin_unlock(&nf_conntrack_locks_all_lock);
>  }
>  EXPORT_SYMBOL_GPL(nf_conntrack_lock);
>  
> @@ -149,18 +154,17 @@ static void nf_conntrack_all_lock(void)
>  	int i;
>  
>  	spin_lock(&nf_conntrack_locks_all_lock);
> -	nf_conntrack_locks_all = true;
>  
> -	/*
> -	 * Order the above store of 'nf_conntrack_locks_all' against
> -	 * the spin_unlock_wait() loads below, such that if
> -	 * nf_conntrack_lock() observes 'nf_conntrack_locks_all'
> -	 * we must observe nf_conntrack_locks[] held:
> -	 */
> -	smp_mb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
> +	nf_conntrack_locks_all = true;
>  
>  	for (i = 0; i < CONNTRACK_LOCKS; i++) {
> -		spin_unlock_wait(&nf_conntrack_locks[i]);
> +		spin_lock(&nf_conntrack_locks[i]);
> +
> +		/* This spin_unlock provides the "release" to ensure that
> +		 * nf_conntrack_locks_all==true is visible to everyone that
> +		 * acquired spin_lock(&nf_conntrack_locks[]).
> +		 */
> +		spin_unlock(&nf_conntrack_locks[i]);
>  	}
> }
> 
> -- 
> 2.9.4
> 