Date: Fri, 30 Jun 2017 15:50:33 -0400 (EDT)
From: Alan Stern
X-X-Sender: stern@iolanthe.rowland.org
To: Oleg Nesterov
cc: "Paul E. McKenney"
Subject: Re: [PATCH RFC 02/26] task_work: Replace spin_unlock_wait() with lock/unlock pair
In-Reply-To: <20170630192123.GA8471@redhat.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

On Fri, 30 Jun 2017, Oleg Nesterov wrote:

> On 06/30, Paul E. McKenney wrote:
> >
> > On Fri, Jun 30, 2017 at 05:20:10PM +0200, Oleg Nesterov wrote:
> > >
> > > I do not think the overhead will be noticeable in this particular case.
> > >
> > > But I am not sure I understand why do we want to unlock_wait. Yes I agree,
>                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> if it was not clear, I tried to say "why do we want to _remove_ unlock_wait".
>
> > > it has some problems, but still...
> > >
> > > The code above looks strange for me. If we are going to repeat this pattern
> > > the perhaps we should add a helper for lock+unlock and name it unlock_wait2 ;)
> > >
> > > If not, we should probably change this code more:
> >
> > This looks -much- better than my patch! May I have your Signed-off-by?
>
> Only if you promise to replace all RCU flavors with a single simple implementation
> based on rwlock ;)
>
> Seriously, of course I won't argue, and it seems that nobody except me likes
> this primitive, but to me spin_unlock_wait() looks like synchronize_rcu() and
> sometimes it makes sense.

If it looks like synchronize_rcu(), why not actually use synchronize_rcu()?

Alan Stern

> Including this particular case. task_work_run() is going to flush/destroy the
> ->task_works list, so it needs to wait until all currently executing "readers"
> (task_work_cancel()'s which have started before ->task_works was updated) have
> completed.
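
For readers coming to the thread cold, the idiom under discussion is roughly this:
instead of calling spin_unlock_wait(&lock), the flushing path takes and immediately
drops the lock, which guarantees that every critical section that began before the
preceding list update has finished. Below is a minimal userspace sketch of that
shape, not the actual kernel/task_work.c code; it uses pthread spinlocks, and the
names run_path, cancel_path, list_lock and the work_pending flag are invented for
illustration.

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t list_lock;   /* stands in for task->pi_lock  */
static int work_pending = 1;           /* stands in for ->task_works   */

/* "Reader" side: the cancel path inspects/modifies the list under the lock. */
static void *cancel_path(void *arg)
{
	(void)arg;
	pthread_spin_lock(&list_lock);
	/* ... walk or modify the pending-work list here ... */
	pthread_spin_unlock(&list_lock);
	return NULL;
}

/*
 * "Writer" side: the run path detaches the list and then must wait for
 * any cancel_path() that took the lock before the detach to finish.
 * The empty lock/unlock pair plays the role of spin_unlock_wait():
 * once we have acquired and released the lock ourselves, every earlier
 * critical section is known to have completed.
 */
static void run_path(void)
{
	work_pending = 0;               /* "detach" the list */

	pthread_spin_lock(&list_lock);  /* was: spin_unlock_wait(&lock) */
	pthread_spin_unlock(&list_lock);

	/* now it is safe to free/destroy the detached entries */
}

int main(void)
{
	pthread_t t;

	pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t, NULL, cancel_path, NULL);
	run_path();
	pthread_join(&t, NULL);
	pthread_spin_destroy(&list_lock);
	printf("done, work_pending=%d\n", work_pending);
	return 0;
}

This deliberately glosses over the memory-ordering questions the thread is really
about, and over the details of the actual change (which, if I recall the patch
correctly, takes and drops task->pi_lock with raw_spin_lock_irq()/raw_spin_unlock_irq()
in task_work_run()); it only shows why the lock/unlock pair can be read as a
synchronize_rcu()-like "wait for all current readers to finish" operation.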