Date: Fri, 30 Jun 2017 21:21:23 +0200
From: Oleg Nesterov <oleg@redhat.com>
To: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org, netfilter-devel@vger.kernel.org,
	netdev@vger.kernel.org, akpm@linux-foundation.org, mingo@redhat.com,
	dave@stgolabs.net, manfred@colorfullife.com, tj@kernel.org,
	arnd@arndb.de, linux-arch@vger.kernel.org, will.deacon@arm.com,
	peterz@infradead.org, stern@rowland.harvard.edu,
	parri.andrea@gmail.com, torvalds@linux-foundation.org
Subject: Re: [PATCH RFC 02/26] task_work: Replace spin_unlock_wait() with lock/unlock pair
Message-ID: <20170630192123.GA8471@redhat.com>
In-Reply-To: <20170630161607.GX2393@linux.vnet.ibm.com>

On 06/30, Paul E. McKenney wrote:
>
> On Fri, Jun 30, 2017 at 05:20:10PM +0200, Oleg Nesterov wrote:
> >
> > I do not think the overhead will be noticeable in this particular case.
> >
> > But I am not sure I understand why do we want to unlock_wait. Yes I agree,
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

if it was not clear, I tried to say "why do we want to _remove_ unlock_wait".

> > it has some problems, but still...
> >
> > The code above looks strange to me. If we are going to repeat this pattern
> > then perhaps we should add a helper for lock+unlock and name it unlock_wait2 ;)
> >
> > If not, we should probably change this code more:
>
> This looks -much- better than my patch!  May I have your Signed-off-by?

Only if you promise to replace all RCU flavors with a single simple
implementation based on rwlock ;)

Seriously, of course I won't argue, and it seems that nobody except me
likes this primitive, but to me spin_unlock_wait() looks like
synchronize_rcu() and sometimes it makes sense. Including this
particular case: task_work_run() is going to flush/destroy the
->task_works list, so it needs to wait until all currently executing
"readers" (task_work_cancel() calls which started before ->task_works
was updated) have completed.

Oleg.
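
P.S. To make the joke concrete: the unlock_wait2() helper would be
nothing more than the pair below. The name is made up, obviously; this
is not a real kernel API, just the shape of the pattern:

/* Hypothetical helper; only illustrates the lock+unlock pattern. */
static inline void spin_unlock_wait2(spinlock_t *lock)
{
	/*
	 * Acquire and immediately release. Once this returns, every
	 * critical section that held @lock when we were called has
	 * finished. Unlike spin_unlock_wait(), the lock/unlock pair
	 * also gives full acquire/release ordering.
	 */
	spin_lock(lock);
	spin_unlock(lock);
}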
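
P.P.S. And a userspace sketch of why this is enough for task_work_run(),
with a pthread mutex standing in for ->pi_lock and all names invented;
it is only an analogy, not the kernel code. Compile with "gcc -pthread":

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic int list_detached;

/* Plays the role of task_work_cancel(): works on the list under the lock. */
static void *canceller(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	if (!atomic_load(&list_detached))
		puts("canceller: list still live, scanning it");
	usleep(1000);			/* pretend to walk the entries */
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Plays the role of task_work_run(): detaches the list, then waits. */
static void *runner(void *arg)
{
	(void)arg;
	atomic_store(&list_detached, 1);	/* stands in for the cmpxchg() */
	/*
	 * The spin_unlock_wait() replacement: an empty critical section.
	 * Once it completes, every canceller that took the lock before
	 * the list was detached has dropped it, so the detached list can
	 * be flushed/destroyed safely.
	 */
	pthread_mutex_lock(&lock);
	pthread_mutex_unlock(&lock);
	puts("runner: safe to flush/destroy the detached list");
	return NULL;
}

int main(void)
{
	pthread_t c, r;

	pthread_create(&c, NULL, canceller, NULL);
	usleep(100);	/* give the canceller a chance to take the lock first */
	pthread_create(&r, NULL, runner, NULL);
	pthread_join(c, NULL);
	pthread_join(r, NULL);
	return 0;
}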