From: Ricardo Neri
To: "Peter Zijlstra (Intel)", Ingo Molnar, Juri Lelli, Vincent Guittot
Cc: Srikar Dronamraju, Nicholas Piggin, Dietmar Eggemann, Steven Rostedt,
    Ben Segall, Mel Gorman, Len Brown, Srinivas Pandruvada, Tim Chen,
    Aubrey Li, "Ravi V. Shankar", Ricardo Neri, "Rafael J. Wysocki",
    Quentin Perret, "Joel Fernandes (Google)", linuxppc-dev@lists.ozlabs.org,
    linux-kernel@vger.kernel.org, Ricardo Neri, Aubrey Li,
    Daniel Bristot de Oliveira
Subject: [PATCH v4 5/6] sched/fair: Carve out logic to mark a group for asymmetric packing
Date: Tue, 10 Aug 2021 07:41:44 -0700
Message-Id: <20210810144145.18776-6-ricardo.neri-calderon@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210810144145.18776-1-ricardo.neri-calderon@linux.intel.com>
References: <20210810144145.18776-1-ricardo.neri-calderon@linux.intel.com>

Create a separate function, sched_asym(). A subsequent changeset will
introduce logic to deal with SMT in conjunction with asymmetric packing.
Such logic will need the statistics of the scheduling group provided as
argument. Update them before calling sched_asym().

Cc: Aubrey Li
Cc: Ben Segall
Cc: Daniel Bristot de Oliveira
Cc: Dietmar Eggemann
Cc: Mel Gorman
Cc: Quentin Perret
Cc: Rafael J. Wysocki
Cc: Srinivas Pandruvada
Cc: Steven Rostedt
Cc: Tim Chen
Reviewed-by: Joel Fernandes (Google)
Reviewed-by: Len Brown
Co-developed-by: Peter Zijlstra (Intel)
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ricardo Neri
---
Changes since v3:
 * Removed a redundant check for the local group in sched_asym(). (Dietmar)
 * Reworded commit message for clarity. (Len)

Changes since v2:
 * Introduced this patch.
Changes since v1:
 * N/A
---
 kernel/sched/fair.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ae3d2bc59d8d..dd411cefb63f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8531,6 +8531,13 @@ group_type group_classify(unsigned int imbalance_pct,
 	return group_has_spare;
 }
 
+static inline bool
+sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
+	   struct sched_group *group)
+{
+	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
+}
+
 /**
  * update_sg_lb_stats - Update sched_group's statistics for load balancing.
  * @env: The load balancing environment.
@@ -8591,18 +8598,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		}
 	}
 
+	sgs->group_capacity = group->sgc->capacity;
+
+	sgs->group_weight = group->group_weight;
+
 	/* Check if dst CPU is idle and preferred to this group */
 	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
-	    env->idle != CPU_NOT_IDLE &&
-	    sgs->sum_h_nr_running &&
-	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
+	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
+	    sched_asym(env, sds, sgs, group)) {
 		sgs->group_asym_packing = 1;
 	}
 
-	sgs->group_capacity = group->sgc->capacity;
-
-	sgs->group_weight = group->group_weight;
-
 	sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
 
 	/* Computing avg_load makes sense only when group is overloaded */
-- 
2.17.1
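
[Editor's note, not part of the patch: the carved-out sched_asym() helper still reduces to the sched_asym_prefer() priority comparison that the open-coded check used before; the sds and sgs arguments are deliberately unused here so the SMT-aware logic the subsequent changeset introduces can consult the freshly updated group statistics. For context, a minimal sketch of that comparison follows, assuming the kernel/sched/sched.h definition from this era of the tree; verify against your checkout.]

/*
 * Sketch for context only (based on kernel/sched/sched.h, not introduced
 * by this patch): a CPU is "preferred" for asymmetric packing when its
 * architecture-defined asym_packing priority is higher.
 */
static inline bool sched_asym_prefer(int a, int b)
{
	return arch_asym_cpu_priority(a) > arch_asym_cpu_priority(b);
}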