From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1161222AbcBQLA5 (ORCPT ); Wed, 17 Feb 2016 06:00:57 -0500
Received: from bombadil.infradead.org ([198.137.202.9]:47898 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S934013AbcBQLAz (ORCPT ); Wed, 17 Feb 2016 06:00:55 -0500
Date: Wed, 17 Feb 2016 12:00:40 +0100
From: Peter Zijlstra
To: Dave Chinner
Cc: Waiman Long, Alexander Viro, Jan Kara, Jeff Layton, "J. Bruce Fields", Tejun Heo, Christoph Lameter, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Ingo Molnar, Andi Kleen, Dave Chinner, Scott J Norton, Douglas Hatch
Subject: Re: [RFC PATCH 1/2] lib/percpu-list: Per-cpu list with associated per-cpu locks
Message-ID: <20160217110040.GB6357@twins.programming.kicks-ass.net>
References: <1455672680-7153-1-git-send-email-Waiman.Long@hpe.com> <1455672680-7153-2-git-send-email-Waiman.Long@hpe.com> <20160217095318.GO14668@dastard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160217095318.GO14668@dastard>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 17, 2016 at 08:53:18PM +1100, Dave Chinner wrote:
> > +/**
> > + * for_all_percpu_list_entries - iterate over all the per-cpu lists with locking
> > + * @pos:	the type * to use as a loop cursor for the current entry
> > + * @next:	an internal type * variable pointing to the next entry
> > + * @pchead:	an internal struct list * of percpu list head
> > + * @pclock:	an internal variable for the current per-cpu spinlock
> > + * @head:	the head of the per-cpu list
> > + * @member:	the name of the per-cpu list within the struct
> > + */
> > +#define for_all_percpu_list_entries(pos, next, pchead, pclock, head, member)\
> > +	{ \
> > +	int cpu; \
> > +	for_each_possible_cpu (cpu) { \
> > +		typeof(*pos) *next; \
> > +		spinlock_t *pclock = per_cpu_ptr(&(head)->lock, cpu); \
> > +		struct list_head *pchead = &per_cpu_ptr(head, cpu)->list;\
> > +		spin_lock(pclock); \
> > +		list_for_each_entry_safe(pos, next, pchead, member.list)
> > +
> > +#define end_all_percpu_list_entries(pclock)	spin_unlock(pclock); } }
>
> This is a bit of a landmine

Yeah, that is pretty terrible. Maybe a visitor interface is advisable?

void visit_percpu_list_entries(struct percpu_list *head,
			       void (*visitor)(struct list_head *pos, void *data),
			       void *data)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		spinlock_t *lock = per_cpu_ptr(&head->lock, cpu);
		/* a name other than 'head' avoids shadowing the argument */
		struct list_head *list = &per_cpu_ptr(head, cpu)->list;
		struct list_head *pos, *tmp;

		spin_lock(lock);
		/* keep tmp one step ahead so the visitor may unlink pos */
		for (pos = list->next, tmp = pos->next; pos != list;
		     pos = tmp, tmp = pos->next)
			visitor(pos, data);
		spin_unlock(lock);
	}
}