From: Julien Desfossez
To: Peter Zijlstra, Vineeth Pillai, Joel Fernandes, Tim Chen, Aaron Lu,
	Aubrey Li, Dhaval Giani, Chris Hyser, Nishanth Aravamudan
Cc: Aubrey Li, mingo@kernel.org, tglx@linutronix.de, pjt@google.com,
	torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
	fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com,
	Phil Auld, Valentin Schneider, Mel Gorman, Pawan Gupta,
	Paolo Bonzini, joel@joelfernandes.org, vineeth@bitbyteword.org,
	Chen Yu, Christian Brauner, Agata Gruza, Antonio Gomez Iglesias,
	graf@amazon.com, konrad.wilk@oracle.com, dfaggioli@suse.com,
	rostedt@goodmis.org, derkling@google.com, benbjiang@tencent.com,
	Aubrey Li, Vineeth Remanan Pillai
Subject: [RFC PATCH v7 13/23] sched: migration changes for core scheduling
Date: Fri, 28 Aug 2020 15:51:14 -0400
From: Aubrey Li

- Don't migrate if there is a cookie mismatch

  Load balancing tries to move tasks from the busiest CPU to the
  destination CPU. When core scheduling is enabled, a task whose cookie
  does not match the destination CPU's core cookie is skipped by that
  CPU. This mitigates forced-idle time on the destination CPU.

- Select a cookie-matched idle CPU

  In the fast path of task wakeup, select the first cookie-matched idle
  CPU instead of the first idle CPU.

- Find the cookie-matched idlest CPU

  In the slow path of task wakeup, find the idlest CPU whose core
  cookie matches the task's cookie.

- Don't migrate a task if its cookie does not match

  For NUMA load balancing, don't migrate a task to a CPU whose core
  cookie does not match the task's cookie.

Signed-off-by: Aubrey Li
Signed-off-by: Tim Chen
Signed-off-by: Vineeth Remanan Pillai
---
 kernel/sched/fair.c  | 64 ++++++++++++++++++++++++++++++++++++++++----
 kernel/sched/sched.h | 29 ++++++++++++++++++++
 2 files changed, 88 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e54a35612efa..ae11a18644b6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2048,6 +2048,15 @@ static void task_numa_find_cpu(struct task_numa_env *env,
 		if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
 			continue;
 
+#ifdef CONFIG_SCHED_CORE
+		/*
+		 * Skip this cpu if source task's cookie does not match
+		 * with CPU's core cookie.
+		 */
+		if (!sched_core_cookie_match(cpu_rq(cpu), env->p))
+			continue;
+#endif
+
 		env->dst_cpu = cpu;
 		if (task_numa_compare(env, taskimp, groupimp, maymove))
 			break;
@@ -5983,11 +5992,17 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 
 	/* Traverse only the allowed CPUs */
 	for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
+		struct rq *rq = cpu_rq(i);
+
+#ifdef CONFIG_SCHED_CORE
+		if (!sched_core_cookie_match(rq, p))
+			continue;
+#endif
+
 		if (sched_idle_cpu(i))
 			return i;
 
 		if (available_idle_cpu(i)) {
-			struct rq *rq = cpu_rq(i);
 			struct cpuidle_state *idle = idle_get_state(rq);
 			if (idle && idle->exit_latency < min_exit_latency) {
 				/*
@@ -6244,8 +6259,18 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	for_each_cpu_wrap(cpu, cpus, target) {
 		if (!--nr)
 			return -1;
-		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
-			break;
+
+		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu)) {
+#ifdef CONFIG_SCHED_CORE
+			/*
+			 * If Core Scheduling is enabled, select this cpu
+			 * only if the process cookie matches core cookie.
+			 */
+			if (sched_core_enabled(cpu_rq(cpu)) &&
+			    p->core_cookie == cpu_rq(cpu)->core->core_cookie)
+#endif
+				break;
+		}
 	}
 
 	time = cpu_clock(this) - time;
@@ -7626,8 +7651,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 	 * We do not migrate tasks that are:
 	 * 1) throttled_lb_pair, or
 	 * 2) cannot be migrated to this CPU due to cpus_ptr, or
-	 * 3) running (obviously), or
-	 * 4) are cache-hot on their current CPU.
+	 * 3) task's cookie does not match with this CPU's core cookie
+	 * 4) running (obviously), or
+	 * 5) are cache-hot on their current CPU.
 	 */
 	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
 		return 0;
@@ -7662,6 +7688,15 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 		return 0;
 	}
 
+#ifdef CONFIG_SCHED_CORE
+	/*
+	 * Don't migrate task if the task's cookie does not match
+	 * with the destination CPU's core cookie.
+	 */
+	if (!sched_core_cookie_match(cpu_rq(env->dst_cpu), p))
+		return 0;
+#endif
+
 	/* Record that we found atleast one task that could run on dst_cpu */
 	env->flags &= ~LBF_ALL_PINNED;
 
@@ -8886,6 +8921,25 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 				p->cpus_ptr))
 			continue;
 
+#ifdef CONFIG_SCHED_CORE
+		if (sched_core_enabled(cpu_rq(this_cpu))) {
+			int i = 0;
+			bool cookie_match = false;
+
+			for_each_cpu(i, sched_group_span(group)) {
+				struct rq *rq = cpu_rq(i);
+
+				if (sched_core_cookie_match(rq, p)) {
+					cookie_match = true;
+					break;
+				}
+			}
+			/* Skip over this group if no cookie matched */
+			if (!cookie_match)
+				continue;
+		}
+#endif
+
 		local_group = cpumask_test_cpu(this_cpu,
 					       sched_group_span(group));
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index be6db45276e7..1c94b2862069 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1109,6 +1109,35 @@ static inline raw_spinlock_t *rq_lockp(struct rq *rq)
 bool cfs_prio_less(struct task_struct *a, struct task_struct *b);
 void sched_core_adjust_sibling_vruntime(int cpu, bool coresched_enabled);
 
+/*
+ * Helper to check if the CPU's core cookie matches with the task's cookie
+ * when core scheduling is enabled.
+ * A special case is that the task's cookie always matches with CPU's core
+ * cookie if the CPU is in an idle core.
+ */
+static inline bool sched_core_cookie_match(struct rq *rq, struct task_struct *p)
+{
+	bool idle_core = true;
+	int cpu;
+
+	/* Ignore cookie match if core scheduler is not enabled on the CPU. */
+	if (!sched_core_enabled(rq))
+		return true;
+
+	for_each_cpu(cpu, cpu_smt_mask(cpu_of(rq))) {
+		if (!available_idle_cpu(cpu)) {
+			idle_core = false;
+			break;
+		}
+	}
+
+	/*
+	 * A CPU in an idle core is always the best choice for tasks with
+	 * cookies.
+	 */
+	return idle_core || rq->core->core_cookie == p->core_cookie;
+}
+
 extern void queue_core_balance(struct rq *rq);
 
 #else /* !CONFIG_SCHED_CORE */
-- 
2.17.1