Date: Wed, 17 Feb 2016 12:36:18 +0100
From: Peter Zijlstra
To: Dave Chinner
Cc: Waiman Long, Alexander Viro, Jan Kara, Jeff Layton,
	"J. Bruce Fields", Tejun Heo, Christoph Lameter,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar, Andi Kleen, Dave Chinner, Scott J Norton,
	Douglas Hatch
Subject: Re: [RFC PATCH 1/2] lib/percpu-list: Per-cpu list with associated per-cpu locks
Message-ID: <20160217113618.GO6375@twins.programming.kicks-ass.net>
References: <1455672680-7153-1-git-send-email-Waiman.Long@hpe.com>
	<1455672680-7153-2-git-send-email-Waiman.Long@hpe.com>
	<20160217095318.GO14668@dastard>
	<20160217110040.GB6357@twins.programming.kicks-ass.net>
	<20160217111002.GQ14668@dastard>
	<20160217112654.GC6357@twins.programming.kicks-ass.net>
In-Reply-To: <20160217112654.GC6357@twins.programming.kicks-ass.net>

On Wed, Feb 17, 2016 at 12:26:54PM +0100, Peter Zijlstra wrote:
> On Wed, Feb 17, 2016 at 10:10:02PM +1100, Dave Chinner wrote:
> > On Wed, Feb 17, 2016 at 12:00:40PM +0100, Peter Zijlstra wrote:
> > > Yeah, that is pretty terrible. Maybe a visitor interface is advisable?
> > >
> > > void visit_percpu_list_entries(struct percpu_list *head,
> > > 			       void (*visitor)(struct list_head *pos, void *data),
> > > 			       void *data)
> > > {
> > > 	int cpu;
> > >
> > > 	for_each_possible_cpu(cpu) {
> > > 		spinlock_t *lock = per_cpu_ptr(&head->lock, cpu);
> > > 		struct list_head *list = per_cpu_ptr(&head->list, cpu);
> > > 		struct list_head *pos, *tmp;
> > >
> > > 		spin_lock(lock);
> > > 		/* 'tmp' lets the visitor remove 'pos' from the list */
> > > 		for (pos = list->next, tmp = pos->next; pos != list;
> > > 		     pos = tmp, tmp = pos->next)
> > > 			visitor(pos, data);
> > > 		spin_unlock(lock);
> > > 	}
> > > }
> >
> > I thought about this - it's the same problem as the list_lru walking
> > functions. That is, the visitor has to be able to drop the list lock
> > to do blocking operations, so the lock has to be passed to the
> > visitor/internal loop context somehow, and the way the callers can
> > use it needs to be documented.
>
> But you cannot drop the lock and guarantee forward progress. The moment
> you drop the lock, you have to restart the iteration from the head,
> since any iterator you had might now be pointing into space.

Ah, I see what iterate_bdevs() does. Yes, that's somewhat 'special'.

Not sure it makes sense to craft a generic 'interface' for that.
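
For illustration, the restart-from-head walk discussed above can be
sketched in plain userspace C. This is a minimal model under stated
assumptions, not the kernel's API: the names (pcpu_list, visit_pcpu_list,
visit_ret, NR_BUCKETS) are invented for this sketch, pthread mutexes
stand in for the per-cpu spinlocks, and a fixed array of buckets stands
in for real per-cpu data.

	#include <pthread.h>
	#include <stdio.h>

	#define NR_BUCKETS 4			/* stand-in for nr_cpu_ids */

	struct node {
		struct node *next, *prev;	/* circular list; head is a sentinel */
		int val;
	};

	struct pcpu_list {			/* hypothetical, for this sketch only */
		pthread_mutex_t lock[NR_BUCKETS];
		struct node head[NR_BUCKETS];
	};

	enum visit_ret {
		VISIT_CONTINUE,			/* lock held throughout; keep walking */
		VISIT_RESTART,			/* visitor dropped the lock; iterator is stale */
	};

	typedef enum visit_ret (*visitor_t)(struct node *pos,
					    pthread_mutex_t *lock, void *data);

	static void pcpu_list_init(struct pcpu_list *pl)
	{
		for (int i = 0; i < NR_BUCKETS; i++) {
			pthread_mutex_init(&pl->lock[i], NULL);
			pl->head[i].next = pl->head[i].prev = &pl->head[i];
		}
	}

	static void pcpu_list_add(struct pcpu_list *pl, int bucket, struct node *n)
	{
		struct node *head = &pl->head[bucket];

		pthread_mutex_lock(&pl->lock[bucket]);
		n->next = head->next;		/* insert right after the sentinel */
		n->prev = head;
		head->next->prev = n;
		head->next = n;
		pthread_mutex_unlock(&pl->lock[bucket]);
	}

	/*
	 * Walk every bucket. A visitor that needs to block drops the bucket
	 * lock, retakes it, and returns VISIT_RESTART; the walk must then
	 * restart from the head because 'pos' may no longer be on the list.
	 */
	static void visit_pcpu_list(struct pcpu_list *pl, visitor_t visitor, void *data)
	{
		for (int i = 0; i < NR_BUCKETS; i++) {
			struct node *head = &pl->head[i];
			struct node *pos;

			pthread_mutex_lock(&pl->lock[i]);
	restart:
			for (pos = head->next; pos != head; pos = pos->next) {
				if (visitor(pos, &pl->lock[i], data) == VISIT_RESTART)
					goto restart;
			}
			pthread_mutex_unlock(&pl->lock[i]);
		}
	}

	static enum visit_ret print_visitor(struct node *pos,
					    pthread_mutex_t *lock, void *data)
	{
		(void)lock; (void)data;		/* this visitor never blocks */
		printf("%d\n", pos->val);
		return VISIT_CONTINUE;
	}

	int main(void)
	{
		static struct pcpu_list pl;
		struct node nodes[8];

		pcpu_list_init(&pl);
		for (int i = 0; i < 8; i++) {
			nodes[i].val = i;
			pcpu_list_add(&pl, i % NR_BUCKETS, &nodes[i]);
		}
		visit_pcpu_list(&pl, print_visitor, NULL);
		return 0;
	}

Note that if a visitor returns VISIT_RESTART without removing or marking
the entry it just visited, the walk can revisit it forever. That is
exactly the forward-progress problem raised in the thread, and it is why
list_lru-style walkers have the callback isolate entries before the lock
is dropped.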