From: Julien Desfossez
To: Peter Zijlstra, Vineeth Pillai, Joel Fernandes, Tim Chen, Aaron Lu,
	Aubrey Li, Dhaval Giani, Chris Hyser, Nishanth Aravamudan
Cc: mingo@kernel.org, tglx@linutronix.de, pjt@google.com,
	torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
	fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com,
	Phil Auld, Valentin Schneider, Mel Gorman, Pawan Gupta,
	Paolo Bonzini, joel@joelfernandes.org, vineeth@bitbyteword.org,
	Chen Yu, Christian Brauner, Agata Gruza, Antonio Gomez Iglesias,
	graf@amazon.com, konrad.wilk@oracle.com, dfaggioli@suse.com,
	rostedt@goodmis.org, derkling@google.com, benbjiang@tencent.com,
	Julien Desfossez, Vineeth Remanan Pillai
Subject: [RFC PATCH v7 21/23] sched: cgroup tagging interface for core scheduling
Date: Fri, 28 Aug 2020 15:51:22 -0400
Message-Id: <21d4fb65591c039d485a5eda306dfa83e766f91b.1598643276.git.jdesfossez@digitalocean.com>

From: Peter Zijlstra

Marks all tasks in a cgroup as matching for core-scheduling.

A task will need to be moved into the core scheduler queue when the
cgroup it belongs to is tagged to run with core scheduling. Similarly,
the task will need to be moved out of the core scheduler queue when the
cgroup is untagged. Also, after a task is forked, its presence in the
core scheduler queue needs to be updated according to the status of its
new cgroup.

Use the stop machine mechanism to update all tasks in a cgroup, to
prevent a new task from sneaking into the cgroup and missing the update
while we iterate through all the tasks in the cgroup. A more
complicated scheme could probably avoid the stop machine; such a scheme
would also need to resolve inconsistencies between a task's cgroup core
scheduling tag and its residency in the core scheduler queue. For now
we opt for the simple stop machine mechanism, which avoids such
complications.

The core scheduler has extra overhead. Enable it only for cores with
more than one SMT hardware thread.
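From userspace, the tag is an ordinary cgroup control file, so tagging a
group amounts to writing "1" into its cpu.tag file. A minimal sketch in C
follows, assuming a legacy cpu-controller hierarchy mounted at
/sys/fs/cgroup/cpu and an already-populated group named "mygroup" (both
are illustrative assumptions, not something this patch creates):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical path; adjust to the actual hierarchy. */
		const char *tag_file = "/sys/fs/cgroup/cpu/mygroup/cpu.tag";
		int fd = open(tag_file, O_WRONLY);

		if (fd < 0) {
			perror("open cpu.tag");
			return 1;
		}

		/*
		 * Writing "1" tags every task in the group for core
		 * scheduling; writing "0" untags the group again.
		 */
		if (write(fd, "1", 1) != 1)
			perror("write cpu.tag");

		close(fd);
		return 0;
	}

Untagging goes through the same stop_machine() path, which also clears
the cookie on every task in the group.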
Signed-off-by: Tim Chen
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Julien Desfossez
Signed-off-by: Vineeth Remanan Pillai
---
 kernel/sched/core.c  | 183 +++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h |   4 +
 2 files changed, 180 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 34238fd67f31..5f77e575bbac 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -153,6 +153,37 @@ static inline bool __sched_core_less(struct task_struct *a, struct task_struct *
 	return false;
 }
 
+static bool sched_core_empty(struct rq *rq)
+{
+	return RB_EMPTY_ROOT(&rq->core_tree);
+}
+
+static bool sched_core_enqueued(struct task_struct *task)
+{
+	return !RB_EMPTY_NODE(&task->core_node);
+}
+
+static struct task_struct *sched_core_first(struct rq *rq)
+{
+	struct task_struct *task;
+
+	task = container_of(rb_first(&rq->core_tree), struct task_struct, core_node);
+	return task;
+}
+
+static void sched_core_flush(int cpu)
+{
+	struct rq *rq = cpu_rq(cpu);
+	struct task_struct *task;
+
+	while (!sched_core_empty(rq)) {
+		task = sched_core_first(rq);
+		rb_erase(&task->core_node, &rq->core_tree);
+		RB_CLEAR_NODE(&task->core_node);
+	}
+	rq->core->core_task_seq++;
+}
+
 static void sched_core_enqueue(struct rq *rq, struct task_struct *p)
 {
 	struct rb_node *parent, **node;
@@ -184,10 +215,11 @@ static void sched_core_dequeue(struct rq *rq, struct task_struct *p)
 {
 	rq->core->core_task_seq++;
 
-	if (!p->core_cookie)
+	if (!sched_core_enqueued(p))
 		return;
 
 	rb_erase(&p->core_node, &rq->core_tree);
+	RB_CLEAR_NODE(&p->core_node);
 }
 
 /*
@@ -253,9 +285,23 @@ static int __sched_core_stopper(void *data)
 
 	for_each_possible_cpu(cpu) {
 		struct rq *rq = cpu_rq(cpu);
-		rq->core_enabled = enabled;
-		if (cpu_online(cpu) && rq->core != rq)
-			sched_core_adjust_sibling_vruntime(cpu, enabled);
+
+		WARN_ON_ONCE(enabled == rq->core_enabled);
+
+		if (!enabled || (enabled && cpumask_weight(cpu_smt_mask(cpu)) >= 2)) {
+			/*
+			 * All active and migrating tasks will have already
+			 * been removed from core queue when we clear the
+			 * cgroup tags. However, dying tasks could still be
+			 * left in core queue. Flush them here.
+			 */
+			if (!enabled)
+				sched_core_flush(cpu);
+
+			rq->core_enabled = enabled;
+			if (cpu_online(cpu) && rq->core != rq)
+				sched_core_adjust_sibling_vruntime(cpu, enabled);
+		}
 	}
 
 	return 0;
@@ -266,7 +312,11 @@ static int sched_core_count;
 
 static void __sched_core_enable(void)
 {
-	// XXX verify there are no cookie tasks (yet)
+	int cpu;
+
+	/* verify there are no cookie tasks (yet) */
+	for_each_online_cpu(cpu)
+		BUG_ON(!sched_core_empty(cpu_rq(cpu)));
 
 	static_branch_enable(&__sched_core_enabled);
 	stop_machine(__sched_core_stopper, (void *)true, NULL);
@@ -274,8 +324,6 @@ static void __sched_core_enable(void)
 
 static void __sched_core_disable(void)
 {
-	// XXX verify there are no cookie tasks (left)
-
 	stop_machine(__sched_core_stopper, (void *)false, NULL);
 	static_branch_disable(&__sched_core_enabled);
 }
@@ -300,6 +348,7 @@ void sched_core_put(void)
 
 static inline void sched_core_enqueue(struct rq *rq, struct task_struct *p) { }
 static inline void sched_core_dequeue(struct rq *rq, struct task_struct *p) { }
+static bool sched_core_enqueued(struct task_struct *task) { return false; }
 
 #endif /* CONFIG_SCHED_CORE */
 
@@ -3534,6 +3583,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 #ifdef CONFIG_SMP
 	plist_node_init(&p->pushable_tasks, MAX_PRIO);
 	RB_CLEAR_NODE(&p->pushable_dl_tasks);
+#endif
+#ifdef CONFIG_SCHED_CORE
+	RB_CLEAR_NODE(&p->core_node);
 #endif
 	return 0;
 }
@@ -7431,6 +7483,9 @@ void init_idle(struct task_struct *idle, int cpu)
 #ifdef CONFIG_SMP
 	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
 #endif
+#ifdef CONFIG_SCHED_CORE
+	RB_CLEAR_NODE(&idle->core_node);
+#endif
 }
 
 #ifdef CONFIG_SMP
@@ -8416,6 +8471,15 @@ static void sched_change_group(struct task_struct *tsk, int type)
 	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
 			  struct task_group, css);
 	tg = autogroup_task_group(tsk, tg);
+
+#ifdef CONFIG_SCHED_CORE
+	if ((unsigned long)tsk->sched_task_group == tsk->core_cookie)
+		tsk->core_cookie = 0UL;
+
+	if (tg->tagged /* && !tsk->core_cookie ? */)
+		tsk->core_cookie = (unsigned long)tg;
+#endif
+
 	tsk->sched_task_group = tg;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
@@ -8508,6 +8572,18 @@ static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
 	return 0;
 }
 
+static void cpu_cgroup_css_offline(struct cgroup_subsys_state *css)
+{
+#ifdef CONFIG_SCHED_CORE
+	struct task_group *tg = css_tg(css);
+
+	if (tg->tagged) {
+		sched_core_put();
+		tg->tagged = 0;
+	}
+#endif
+}
+
 static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
 {
 	struct task_group *tg = css_tg(css);
@@ -9073,6 +9149,82 @@ static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
 }
 #endif /* CONFIG_RT_GROUP_SCHED */
 
+#ifdef CONFIG_SCHED_CORE
+static u64 cpu_core_tag_read_u64(struct cgroup_subsys_state *css, struct cftype *cft)
+{
+	struct task_group *tg = css_tg(css);
+
+	return !!tg->tagged;
+}
+
+struct write_core_tag {
+	struct cgroup_subsys_state *css;
+	int val;
+};
+
+static int __sched_write_tag(void *data)
+{
+	struct write_core_tag *tag = (struct write_core_tag *) data;
+	struct cgroup_subsys_state *css = tag->css;
+	int val = tag->val;
+	struct task_group *tg = css_tg(tag->css);
+	struct css_task_iter it;
+	struct task_struct *p;
+
+	tg->tagged = !!val;
+
+	css_task_iter_start(css, 0, &it);
+	/*
+	 * Note: css_task_iter_next will skip dying tasks.
+	 * There could still be dying tasks left in the core queue
+	 * when we set cgroup tag to 0 when the loop is done below.
+	 */
+	while ((p = css_task_iter_next(&it))) {
+		p->core_cookie = !!val ?
+			(unsigned long)tg : 0UL;
+
+		if (sched_core_enqueued(p)) {
+			sched_core_dequeue(task_rq(p), p);
+			if (!p->core_cookie)
+				continue;
+		}
+
+		if (sched_core_enabled(task_rq(p)) &&
+		    p->core_cookie && task_on_rq_queued(p))
+			sched_core_enqueue(task_rq(p), p);
+
+	}
+	css_task_iter_end(&it);
+
+	return 0;
+}
+
+static int cpu_core_tag_write_u64(struct cgroup_subsys_state *css, struct cftype *cft, u64 val)
+{
+	struct task_group *tg = css_tg(css);
+	struct write_core_tag wtag;
+
+	if (val > 1)
+		return -ERANGE;
+
+	if (!static_branch_likely(&sched_smt_present))
+		return -EINVAL;
+
+	if (tg->tagged == !!val)
+		return 0;
+
+	if (!!val)
+		sched_core_get();
+
+	wtag.css = css;
+	wtag.val = val;
+	stop_machine(__sched_write_tag, (void *) &wtag, NULL);
+	if (!val)
+		sched_core_put();
+
+	return 0;
+}
+#endif
 
 static struct cftype cpu_legacy_files[] = {
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	{
@@ -9109,6 +9261,14 @@ static struct cftype cpu_legacy_files[] = {
 		.write_u64 = cpu_rt_period_write_uint,
 	},
 #endif
+#ifdef CONFIG_SCHED_CORE
+	{
+		.name = "tag",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_u64 = cpu_core_tag_read_u64,
+		.write_u64 = cpu_core_tag_write_u64,
+	},
+#endif
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 	{
 		.name = "uclamp.min",
@@ -9282,6 +9442,14 @@ static struct cftype cpu_files[] = {
 		.write_s64 = cpu_weight_nice_write_s64,
 	},
 #endif
+#ifdef CONFIG_SCHED_CORE
+	{
+		.name = "tag",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_u64 = cpu_core_tag_read_u64,
+		.write_u64 = cpu_core_tag_write_u64,
+	},
+#endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
 		.name = "max",
@@ -9310,6 +9478,7 @@ static struct cftype cpu_files[] = {
 struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_alloc	= cpu_cgroup_css_alloc,
 	.css_online	= cpu_cgroup_css_online,
+	.css_offline	= cpu_cgroup_css_offline,
 	.css_released	= cpu_cgroup_css_released,
 	.css_free	= cpu_cgroup_css_free,
 	.css_extra_stat_show = cpu_extra_stat_show,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 676818bdb9df..8ef7ab3061ed 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -384,6 +384,10 @@ struct cfs_bandwidth {
 struct task_group {
 	struct cgroup_subsys_state css;
 
+#ifdef CONFIG_SCHED_CORE
+	int			tagged;
+#endif
+
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* schedulable entities of this group on each CPU */
 	struct sched_entity	**se;
-- 
2.17.1