Subject: Re: [PATCH v5 2/6] sched/topology: Introduce sched_group::flags
From: Vincent Guittot
Date: Fri, 17 Sep 2021 17:26:01 +0200
To: Ricardo Neri
Cc: "Peter Zijlstra (Intel)", Ingo Molnar, Juri Lelli, Srikar Dronamraju,
 Nicholas Piggin, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Len Brown, Srinivas Pandruvada, Tim Chen, Aubrey Li, "Ravi V. Shankar",
 Ricardo Neri, Quentin Perret, "Joel Fernandes (Google)",
 linuxppc-dev@lists.ozlabs.org, linux-kernel, Aubrey Li,
 Daniel Bristot de Oliveira, "Rafael J. Wysocki"
In-Reply-To: <20210911011819.12184-3-ricardo.neri-calderon@linux.intel.com>
References: <20210911011819.12184-1-ricardo.neri-calderon@linux.intel.com>
 <20210911011819.12184-3-ricardo.neri-calderon@linux.intel.com>
List-ID: linux-kernel@vger.kernel.org

On Sat, 11 Sept 2021 at 03:19, Ricardo Neri wrote:
>
> There exist situations in which the load balancer needs to know the
> properties of the CPUs in a scheduling group. When using asymmetric
> packing, for instance, the load balancer needs to know not only the
> state of dst_cpu but also of its SMT siblings, if any.
>
> Use the flags of the child scheduling domains to initialize scheduling
> group flags. This will reflect the properties of the CPUs in the
> group.
>
> A subsequent changeset will make use of these new flags. No functional
> changes are introduced.
>
> Cc: Aubrey Li
> Cc: Ben Segall
> Cc: Daniel Bristot de Oliveira
> Cc: Dietmar Eggemann
> Cc: Mel Gorman
> Cc: Quentin Perret
> Cc: Rafael J. Wysocki
> Cc: Srinivas Pandruvada
> Cc: Steven Rostedt
> Cc: Tim Chen
> Reviewed-by: Joel Fernandes (Google)
> Reviewed-by: Len Brown
> Originally-by: Peter Zijlstra (Intel)
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Ricardo Neri

Reviewed-by: Vincent Guittot

> ---
> Changes since v4:
>  * None
>
> Changes since v3:
>  * Clear the flags of the scheduling groups of a domain if its child is
>    destroyed.
>  * Minor rewording of the commit message.
>
> Changes since v2:
>  * Introduced this patch.
>
> Changes since v1:
>  * N/A
> ---
>  kernel/sched/sched.h    |  1 +
>  kernel/sched/topology.c | 21 ++++++++++++++++++---
>  2 files changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3d3e5793e117..86ab33ce529d 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1809,6 +1809,7 @@ struct sched_group {
>  	unsigned int		group_weight;
>  	struct sched_group_capacity *sgc;
>  	int			asym_prefer_cpu;	/* CPU of highest priority in group */
> +	int			flags;
>
>  	/*
>  	 * The CPUs this group covers.
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 4e8698e62f07..c56faae461d9 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -716,8 +716,20 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
>  			tmp = sd;
>  			sd = sd->parent;
>  			destroy_sched_domain(tmp);
> -			if (sd)
> +			if (sd) {
> +				struct sched_group *sg = sd->groups;
> +
> +				/*
> +				 * sched groups hold the flags of the child sched
> +				 * domain for convenience. Clear such flags since
> +				 * the child is being destroyed.
> +				 */
> +				do {
> +					sg->flags = 0;
> +				} while (sg != sd->groups);
> +
>  				sd->child = NULL;
> +			}
>  		}
>
>  		for (tmp = sd; tmp; tmp = tmp->parent)
> @@ -916,10 +928,12 @@ build_group_from_child_sched_domain(struct sched_domain *sd, int cpu)
>  		return NULL;
>
>  	sg_span = sched_group_span(sg);
> -	if (sd->child)
> +	if (sd->child) {
>  		cpumask_copy(sg_span, sched_domain_span(sd->child));
> -	else
> +		sg->flags = sd->child->flags;
> +	} else {
>  		cpumask_copy(sg_span, sched_domain_span(sd));
> +	}
>
>  	atomic_inc(&sg->ref);
>  	return sg;
> @@ -1169,6 +1183,7 @@ static struct sched_group *get_group(int cpu, struct sd_data *sdd)
>  	if (child) {
>  		cpumask_copy(sched_group_span(sg), sched_domain_span(child));
>  		cpumask_copy(group_balance_mask(sg), sched_group_span(sg));
> +		sg->flags = child->flags;
>  	} else {
>  		cpumask_set_cpu(cpu, sched_group_span(sg));
>  		cpumask_set_cpu(cpu, group_balance_mask(sg));
> --
> 2.17.1
>