From: Alex Kogan <alex.kogan@oracle.com>
To: linux@armlinux.org.uk, peterz@infradead.org, mingo@redhat.com,
    will.deacon@arm.com, arnd@arndb.de, longman@redhat.com,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, tglx@linutronix.de, bp@alien8.de,
    hpa@zytor.com, x86@kernel.org, guohanjun@huawei.com, jglauber@marvell.com
Cc: steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
    alex.kogan@oracle.com, dave.dice@oracle.com
Subject: [PATCH v14 5/6] locking/qspinlock: Avoid moving certain threads between waiting queues in CNA
Date: Thu, 1 Apr 2021 11:31:55 -0400
Message-Id: <20210401153156.1165900-6-alex.kogan@oracle.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210401153156.1165900-1-alex.kogan@oracle.com>
References: <20210401153156.1165900-1-alex.kogan@oracle.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Prohibit moving certain threads (e.g., in irq and nmi contexts) to the
secondary queue. Those prioritized threads will always stay in the
primary queue, and so will have a shorter wait time for the lock.

Signed-off-by: Alex Kogan <alex.kogan@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Waiman Long <longman@redhat.com>
---
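As a rough user-space sketch of the classification rule (not kernel code; the
struct fields and the classify() helper are stand-ins for the kernel's
in_task(), irqs_disabled() and rt_task(current) checks used in cna_init_node()
in the diff below):

/*
 * Standalone illustration of the waiter-classification rule added in
 * cna_init_node(): a waiter is "prioritized" when it is not in ordinary
 * task context, has interrupts disabled, or belongs to a real-time task.
 * The boolean fields are hypothetical stand-ins for in_task(),
 * irqs_disabled() and rt_task(current).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CNA_PRIORITY_NODE 0xffff

struct waiter {
	bool in_task;		/* stand-in for in_task() */
	bool irqs_disabled;	/* stand-in for irqs_disabled() */
	bool rt_task;		/* stand-in for rt_task(current) */
	uint16_t real_numa_node;
	uint16_t numa_node;	/* what cna_init_node() would publish */
};

/* Mirrors: cn->numa_node = priority ? CNA_PRIORITY_NODE : cn->real_numa_node; */
static void classify(struct waiter *w)
{
	bool priority = !w->in_task || w->irqs_disabled || w->rt_task;

	w->numa_node = priority ? CNA_PRIORITY_NODE : w->real_numa_node;
}

int main(void)
{
	struct waiter irq_waiter  = { .in_task = false, .real_numa_node = 1 };
	struct waiter task_waiter = { .in_task = true,  .real_numa_node = 1 };

	classify(&irq_waiter);
	classify(&task_waiter);

	/* Prints 0xffff for the irq-context waiter, 0x1 for the plain task. */
	printf("irq context  -> numa_node = 0x%x\n", irq_waiter.numa_node);
	printf("task context -> numa_node = 0x%x\n", task_waiter.numa_node);
	return 0;
}

A waiter tagged this way keeps the fake node id until it reaches the head of
the main queue, at which point cna_wait_head_or_lock() in the diff below
restores its real node id.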
 kernel/locking/qspinlock_cna.h | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/qspinlock_cna.h b/kernel/locking/qspinlock_cna.h
index 0513360c11fe..29c3abbd3d94 100644
--- a/kernel/locking/qspinlock_cna.h
+++ b/kernel/locking/qspinlock_cna.h
@@ -4,6 +4,7 @@
 #endif
 
 #include <linux/topology.h>
+#include <linux/sched/rt.h>
 
 /*
  * Implement a NUMA-aware version of MCS (aka CNA, or compact NUMA-aware lock).
@@ -35,7 +36,8 @@
  * running on the same NUMA node. If it is not, that waiter is detached from the
  * main queue and moved into the tail of the secondary queue. This way, we
  * gradually filter the primary queue, leaving only waiters running on the same
- * preferred NUMA node.
+ * preferred NUMA node. Note that certain prioritized waiters (e.g., in
+ * irq and nmi contexts) are excluded from being moved to the secondary queue.
  *
  * We change the NUMA node preference after a waiter at the head of the
  * secondary queue spins for a certain amount of time (10ms, by default).
@@ -49,6 +51,8 @@
  *          Dave Dice <dave.dice@oracle.com>
  */
 
+#define CNA_PRIORITY_NODE	0xffff
+
 struct cna_node {
 	struct mcs_spinlock	mcs;
 	u16			numa_node;
@@ -121,9 +125,10 @@ static int __init cna_init_nodes(void)
 
 static __always_inline void cna_init_node(struct mcs_spinlock *node)
 {
+	bool priority = !in_task() || irqs_disabled() || rt_task(current);
 	struct cna_node *cn = (struct cna_node *)node;
 
-	cn->numa_node = cn->real_numa_node;
+	cn->numa_node = priority ? CNA_PRIORITY_NODE : cn->real_numa_node;
 	cn->partial_order = LOCAL_WAITER_FOUND;
 	cn->start_time = 0;
 }
@@ -266,11 +271,13 @@ static void cna_order_queue(struct mcs_spinlock *node)
 	next_numa_node = ((struct cna_node *)next)->numa_node;
 
 	if (next_numa_node != numa_node) {
-		struct mcs_spinlock *nnext = READ_ONCE(next->next);
+		if (next_numa_node != CNA_PRIORITY_NODE) {
+			struct mcs_spinlock *nnext = READ_ONCE(next->next);
 
-		if (nnext) {
-			cna_splice_next(node, next, nnext);
-			next = nnext;
+			if (nnext) {
+				cna_splice_next(node, next, nnext);
+				next = nnext;
+			}
 		}
 		/*
 		 * Inherit NUMA node id of primary queue, to maintain the
@@ -287,6 +294,13 @@ static __always_inline u32 cna_wait_head_or_lock(struct qspinlock *lock,
 	struct cna_node *cn = (struct cna_node *)node;
 
 	if (!cn->start_time || !intra_node_threshold_reached(cn)) {
+		/*
+		 * We are at the head of the wait queue, no need to use
+		 * the fake NUMA node ID.
+		 */
+		if (cn->numa_node == CNA_PRIORITY_NODE)
+			cn->numa_node = cn->real_numa_node;
+
 		/*
 		 * Try and put the time otherwise spent spin waiting on
 		 * _Q_LOCKED_PENDING_MASK to use by sorting our lists.
-- 
2.24.3 (Apple Git-128)
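As a rough model of what the cna_order_queue() change above does with the
waiter that follows the lock holder (user-space sketch, not kernel code;
keeps_main_queue_slot() is a made-up helper, and the real code only splices a
remote waiter out when it already has a successor):

/*
 * Standalone model of the per-waiter decision in the patched
 * cna_order_queue(): local and priority waiters keep their place in the
 * main queue (a priority waiter additionally inherits the preferred node
 * id), while an ordinary remote waiter is moved to the secondary queue.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CNA_PRIORITY_NODE 0xffff

struct waiter {
	const char *name;
	uint16_t numa_node;	/* as published by cna_init_node() */
};

static bool keeps_main_queue_slot(uint16_t preferred, struct waiter *next)
{
	if (next->numa_node == preferred)
		return true;			/* local waiter: keep */
	if (next->numa_node == CNA_PRIORITY_NODE) {
		next->numa_node = preferred;	/* inherit the preference... */
		return true;			/* ...and never splice it out */
	}
	return false;				/* remote waiter: move to secondary queue */
}

int main(void)
{
	struct waiter local  = { "local waiter",          0 };
	struct waiter remote = { "remote waiter",         1 };
	struct waiter prio   = { "priority (irq) waiter", CNA_PRIORITY_NODE };
	struct waiter *next[] = { &local, &remote, &prio };
	uint16_t preferred = 0;		/* NUMA node of the current lock holder */

	for (int i = 0; i < 3; i++)
		printf("%-22s -> %s\n", next[i]->name,
		       keeps_main_queue_slot(preferred, next[i]) ?
		       "stays in main queue" : "moved to secondary queue");
	return 0;
}

The point of the extra CNA_PRIORITY_NODE branch is that a prioritized waiter is
never detached from the main queue; it simply inherits the preferred node id so
the filtering of later waiters continues as before.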