Date: Wed, 22 Sep 2021 12:25:28 -0700
From: Davidlohr Bueso
To: Alex Kogan
Cc: linux@armlinux.org.uk, peterz@infradead.org, mingo@redhat.com,
    will.deacon@arm.com, arnd@arndb.de, longman@redhat.com,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, tglx@linutronix.de, bp@alien8.de,
    hpa@zytor.com, x86@kernel.org, guohanjun@huawei.com,
    jglauber@marvell.com, steven.sistare@oracle.com,
    daniel.m.jordan@oracle.com, dave.dice@oracle.com
Subject: Re: [PATCH v15 3/6] locking/qspinlock: Introduce CNA into the slow path of qspinlock
Message-ID: <20210922192528.ob22pu54oeqsoeno@offworld>
References: <20210514200743.3026725-1-alex.kogan@oracle.com>
 <20210514200743.3026725-4-alex.kogan@oracle.com>
In-Reply-To: <20210514200743.3026725-4-alex.kogan@oracle.com>

On Fri, 14 May 2021, Alex Kogan wrote:

>diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>index a816935d23d4..94d35507560c 100644
>--- a/Documentation/admin-guide/kernel-parameters.txt
>+++ b/Documentation/admin-guide/kernel-parameters.txt
>@@ -3515,6 +3515,16 @@
> 			NUMA balancing.
> 			Allowed values are enable and disable
>
>+	numa_spinlock=	[NUMA, PV_OPS] Select the NUMA-aware variant
>+			of spinlock. The options are:
>+			auto - Enable this variant if running on a multi-node
>+			machine in native environment.
>+			on  - Unconditionally enable this variant.

Is there any reason why the user would explicitly pass 'on' when 'auto'
already does the multi-node check? Perhaps strange NUMA topologies?
Otherwise I would say it is not needed - the fewer options we give the
user for low-level locking, the better.

>+			off - Unconditionally disable this variant.
>+
>+			Not specifying this option is equivalent to
>+			numa_spinlock=auto.
>+
> 	numa_zonelist_order= [KNL, BOOT] Select zonelist order for NUMA.
> 			'node', 'default' can be specified
> 			This can be set from sysctl after boot.
>diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>index 0045e1b44190..819c3dad8afc 100644
>--- a/arch/x86/Kconfig
>+++ b/arch/x86/Kconfig
>@@ -1564,6 +1564,26 @@ config NUMA
>
> 	  Otherwise, you should say N.
>
>+config NUMA_AWARE_SPINLOCKS
>+	bool "Numa-aware spinlocks"
>+	depends on NUMA
>+	depends on QUEUED_SPINLOCKS
>+	depends on 64BIT
>+	# For now, we depend on PARAVIRT_SPINLOCKS to make the patching work.
>+	# This is awkward, but hopefully would be resolved once static_call()
>+	# is available.
>+	depends on PARAVIRT_SPINLOCKS

We now have static_call() - see commit 9183c3f9ed7.
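
Something along these lines should now be doable. This is only a sketch
on my end, not code from this series: DEFINE_STATIC_CALL(), static_call()
and static_call_update() are the real API added by that commit, while the
cna_* symbols and the use_cna flag below are placeholders for whatever
the series actually defines.

#include <linux/init.h>
#include <linux/nodemask.h>
#include <linux/printk.h>
#include <linux/static_call.h>
#include <asm/qspinlock.h>

/* Placeholders standing in for the series' actual symbols. */
extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
extern void __cna_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
static bool use_cna __initdata;	/* set by a numa_spinlock= early param */

/* Default to the regular native slow path. */
DEFINE_STATIC_CALL(qspinlock_slowpath, native_queued_spin_lock_slowpath);

void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	/* Becomes a direct call once static_call_update() below runs. */
	static_call(qspinlock_slowpath)(lock, val);
}

void __init cna_configure_spin_lock_slowpath(void)
{
	/* Only bother on multi-node machines, as numa_spinlock=auto does. */
	if (!use_cna || nr_node_ids <= 1)
		return;

	static_call_update(qspinlock_slowpath, __cna_queued_spin_lock_slowpath);
	pr_info("Enabling CNA spinlock\n");
}

With that, the Kconfig dependency on PARAVIRT_SPINLOCKS would no longer
be needed for the patching itself.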
>+	default y
>+	help
>+	  Introduce NUMA (Non Uniform Memory Access) awareness into
>+	  the slow path of spinlocks.
>+
>+	  In this variant of qspinlock, the kernel will try to keep the lock
>+	  on the same node, thus reducing the number of remote cache misses,
>+	  while trading some of the short term fairness for better performance.
>+
>+	  Say N if you want absolute first come first serve fairness.

This would also need a depends on !PREEMPT_RT, no? Raw spinlocks really
want the determinism.

Thanks,
Davidlohr