From mboxrd@z Thu Jan 1 00:00:00 1970
From: Barry Song
Cc: Barry Song, Yongjia Xie
Subject: [PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt
Date: Tue, 27 Apr 2021 14:37:58 +1200
Message-ID: <20210427023758.4048-1-song.bao.hua@hisilicon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

A severe
qperf performance decrease was reported in the below use case:

On hardware with 2 NUMA nodes, node0 has cpu0-31 and node1 has
cpu32-63. The Ethernet device is located in node1. Run the below
commands:

 $ taskset -c 32-63 stress -c 32 &
 $ qperf 192.168.50.166 tcp_lat
 tcp_lat:
     latency = 2.95 ms

Normally the latency should be less than 20 us, but in the above test
it increased dramatically to 2.95 ms. This is caused by qperf
ping-ponging between node0 and node1: since this is a sync wake-up and
the waker's nr_running == 1, WAKE_AFFINE pulls qperf to node1, but
load balancing soon migrates qperf back to node0.

Unlike a normal sync wake-up coming from a task, the waker in the
above test is an interrupt, and nr_running happens to be 1 because
stress starts 32 threads on node1, which has 32 cpus. Testing also
shows that the performance of qperf doesn't drop if the number of
threads is increased to 64, 96 or larger values:

 $ taskset -c 32-63 stress -c 96 &
 $ qperf 192.168.50.166 tcp_lat
 tcp_lat:
     latency = 14.7 us

Obviously "-c 96" makes "cpu_rq(this_cpu)->nr_running == 1" false in
wake_affine_idle(), so WAKE_AFFINE won't pull qperf to node1.

To fix this issue, this patch checks that the waker of a sync wake-up
is a task and not an interrupt. Only in that case will the waker
schedule out and give its CPU to the wakee.

Reported-by: Yongjia Xie
Signed-off-by: Barry Song
---
 kernel/sched/fair.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d73bdbb2d40..8ad2d732033d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5829,7 +5829,12 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
 		return available_idle_cpu(prev_cpu) ?
			prev_cpu : this_cpu;

-	if (sync && cpu_rq(this_cpu)->nr_running == 1)
+	/*
+	 * If this is a sync wake-up and the only running thread on this
+	 * CPU is the waker itself, i.e. the waker is a task and not an
+	 * interrupt, assume the wakee will get the waker's CPU soon.
+	 */
+	if (sync && cpu_rq(this_cpu)->nr_running == 1 && in_task())
 		return this_cpu;

 	if (available_idle_cpu(prev_cpu))
--
2.25.1