From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751402AbcGNOfy (ORCPT );
	Thu, 14 Jul 2016 10:35:54 -0400
Received: from mx2.suse.de ([195.135.220.15]:42686 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750897AbcGNOfv (ORCPT );
	Thu, 14 Jul 2016 10:35:51 -0400
Date: Thu, 14 Jul 2016 16:35:47 +0200
From: Jan Kara
To: Tejun Heo
Cc: Waiman Long, Alexander Viro, Jan Kara, Jeff Layton,
	"J. Bruce Fields", Christoph Lameter, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar, Peter Zijlstra,
	Andi Kleen, Dave Chinner, Boqun Feng, Scott J Norton, Douglas Hatch
Subject: Re: [PATCH v2 1/7] lib/dlock-list: Distributed and lock-protected lists
Message-ID: <20160714143547.GE13151@quack2.suse.cz>
References: <1468258332-61537-1-git-send-email-Waiman.Long@hpe.com>
	<1468258332-61537-2-git-send-email-Waiman.Long@hpe.com>
	<20160713160823.GD4065@mtj.duckdns.org>
	<5786FEDB.9080107@hpe.com>
	<20160714115043.GD15005@htj.duckdns.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160714115043.GD15005@htj.duckdns.org>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu 14-07-16 07:50:43, Tejun Heo wrote:
> > > > +void dlock_list_add(struct dlock_list_node *node, struct dlock_list_head *head)
> > > > +{
> > > > +	struct dlock_list_head *myhead;
> > > > +
> > > > +	/*
> > > > +	 * Disable preemption to make sure that the CPU won't get changed.
> > > > +	 */
> > > > +	myhead = get_cpu_ptr(head);
> > > > +	spin_lock(&myhead->lock);
> > > > +	node->lockptr = &myhead->lock;
> > > > +	list_add(&node->list, &myhead->list);
> > > > +	spin_unlock(&myhead->lock);
> > > > +	put_cpu_ptr(head);
> > > > +}
> > >
> > > I wonder whether it'd be better to use irqsafe operations. lists tend
> > > to be often used from irq contexts.
> >
> > The current use case only needs the regular lock functions. You are
> > right that future use cases may require an irqsafe version of the locks.
> > I can either modify the code now to allow lock type selection at init
> > time, for example, or defer it as a future enhancement when the need
> > arises. What do you think?
>
> The bulk of the performance gain of dlist would come from being per-cpu
> and I don't think it's likely that we'd see any noticeable difference
> between irq and preempt safe operations. Given that what's being
> implemented is really a low level operation, I'd suggest going with
> irqsafe from the get-go.

I'm not sure here. i_sb_list, for which the percpu lists will be used, is
bashed pretty heavily under some workloads, and the cost of the additional
interrupt disabling & enabling may be visible under those loads. Probably
not in the cases where you get a boost from percpu lists, but if the
workload is mostly single-threaded, the additional CPU cost may be
measurable. So IMO we should check that a load which creates tons of empty
inodes in tmpfs from a single process doesn't regress with this change.

								Honza
-- 
Jan Kara
SUSE Labs, CR