From: Vincent Guittot
Date: Mon, 7 Jun 2021 15:29:40 +0200
Subject: Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle
To: Odin Ugedal
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Steven Rostedt,
 Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
 "open list:CONTROL GROUP (CGROUP)", linux-kernel
In-Reply-To: <20210604102314.697749-1-odin@uged.al>

On Fri, 4 Jun 2021 at 12:26, Odin Ugedal wrote:
>
> This fixes an issue where fairness is decreased since cfs_rq's can
> end up not being decayed properly. For two sibling control groups with
> the same priority, this can often lead to a load ratio of 99/1 (!!).
>
> This happens because when a cfs_rq is throttled, all the descendant
> cfs_rq's will be removed from the leaf list. When the initial cfs_rq is
> unthrottled, it will currently only re-add descendant cfs_rq's if they
> have one or more entities enqueued. This is not a perfect heuristic.
>
> Instead, we insert all cfs_rq's that contain one or more enqueued
> entities, or whose load is not completely decayed.
>
> The old behaviour can often lead to situations like this for equally
> weighted control groups:
>
> $ ps u -C stress
> USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root       10009 88.8  0.0   3676   100 pts/1    R+   11:04   0:13 stress --cpu 1
> root       10023  3.0  0.0   3676   104 pts/1    R+   11:04   0:00 stress --cpu 1
>
> Fixes: 31bc6aeaab1d ("sched/fair: Optimize update_blocked_averages()")
> Signed-off-by: Odin Ugedal
> ---
> Changes since v1:
>  - Replaced the cfs_rq field with tg_load_avg_contrib
>  - Went from 3 to 1 patches; one is merged and one is replaced
>    by a new patchset.
> Changes since v2:
>  - Use !cfs_rq_is_decayed() instead of tg_load_avg_contrib
>  - Moved cfs_rq_is_decayed to above its new use
> Changes since v3:
>  - (hopefully) Fix config for !CONFIG_SMP
>
>  kernel/sched/fair.c | 40 +++++++++++++++++++++-------------------
>  1 file changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 794c2cb945f8..eec32f214ff8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -712,6 +712,25 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>          return calc_delta_fair(sched_slice(cfs_rq, se), se);
>  }
>
> +static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)

It's not the best place for this function:
- the pelt.h header file is included below, but cfs_rq_is_decayed() uses PELT
- CONFIG_SMP is already defined a few lines below
- cfs_rq_is_decayed() is only used with CONFIG_FAIR_GROUP_SCHED, and now with
  CONFIG_CFS_BANDWIDTH, which depends on the former

So moving cfs_rq_is_decayed() just above update_tg_load_avg(), next to the
other functions used for propagating and updating the tg load, seems a
better place (a rough sketch of that layout follows after the quoted patch).

> +{
> +        if (cfs_rq->load.weight)
> +                return false;
> +
> +#ifdef CONFIG_SMP
> +        if (cfs_rq->avg.load_sum)
> +                return false;
> +
> +        if (cfs_rq->avg.util_sum)
> +                return false;
> +
> +        if (cfs_rq->avg.runnable_sum)
> +                return false;
> +#endif
> +
> +        return true;
> +}
> +
>  #include "pelt.h"
>  #ifdef CONFIG_SMP
>
> @@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
>                  cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
>                                               cfs_rq->throttled_clock_task;
>
> -                /* Add cfs_rq with already running entity in the list */
> -                if (cfs_rq->nr_running >= 1)
> +                /* Add cfs_rq with load or one or more already running entities to the list */
> +                if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
>                          list_add_leaf_cfs_rq(cfs_rq);
>          }
>
> @@ -7895,23 +7914,6 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>
>  #ifdef CONFIG_FAIR_GROUP_SCHED
>
> -static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> -{
> -        if (cfs_rq->load.weight)
> -                return false;
> -
> -        if (cfs_rq->avg.load_sum)
> -                return false;
> -
> -        if (cfs_rq->avg.util_sum)
> -                return false;
> -
> -        if (cfs_rq->avg.runnable_sum)
> -                return false;
> -
> -        return true;
> -}
> -
>  static bool __update_blocked_fair(struct rq *rq, bool *done)
>  {
>          struct cfs_rq *cfs_rq, *pos;
> --
> 2.31.1
>
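For illustration only, here is a rough sketch of the layout suggested above:
cfs_rq_is_decayed() defined next to update_tg_load_avg(), inside the existing
CONFIG_SMP section of kernel/sched/fair.c. The !CONFIG_SMP stub is an
assumption of mine (it keeps the v4 behaviour of checking only load.weight
there); this is not the actual follow-up patch, just one possible shape of it:

        #ifdef CONFIG_SMP
        /* Same checks as in the v4 patch, grouped with the other tg load helpers. */
        static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
        {
                if (cfs_rq->load.weight)
                        return false;

                if (cfs_rq->avg.load_sum)
                        return false;

                if (cfs_rq->avg.util_sum)
                        return false;

                if (cfs_rq->avg.runnable_sum)
                        return false;

                return true;
        }

        /* ... update_tg_load_avg() and the rest of the tg load code follow here ... */

        #else /* !CONFIG_SMP */

        /*
         * Hypothetical stub: without CONFIG_SMP there are no cfs_rq PELT sums to
         * check, so only the load weight matters (matching what the v4 #ifdef does).
         */
        static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
        {
                return !cfs_rq->load.weight;
        }

        #endif /* CONFIG_SMP */

With that layout the local #ifdef CONFIG_SMP inside the helper goes away, and
the new tg_unthrottle_up() condition builds on both configurations.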