From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vincent Guittot, Mel Gorman,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Valentin Schneider, Phil Auld,
	Hillf Danton, Sasha Levin
Subject: [PATCH 5.4 109/111] sched/fair: Reorder enqueue/dequeue_task_fair path
Date: Tue, 26 May 2020 20:54:07 +0200
Message-Id: <20200526183943.226370333@linuxfoundation.org>
In-Reply-To: <20200526183932.245016380@linuxfoundation.org>
References: <20200526183932.245016380@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Vincent Guittot

[ Upstream commit 6d4d22468dae3d8757af9f8b81b848a76ef4409d ]

The walk through the cgroup hierarchy during the enqueue/dequeue of a
task is split into two distinct parts for a throttled cfs_rq, which adds
no value and only makes the code less readable.

Change the code ordering so that everything related to a cfs_rq
(throttled or not) is done in the same loop.

In addition, the same ordering of steps is used when updating a cfs_rq:

 - update_load_avg
 - update_cfs_group
 - update *h_nr_running

This reordering enables the use of h_nr_running in the PELT algorithm.
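For readers less familiar with the fair-class code, here is a minimal,
self-contained C sketch of the control flow described above. It is
illustrative only: struct cfs_rq_stub, enqueue_sketch(),
update_load_avg_stub() and update_cfs_group_stub(), as well as the
path/nr_enqueue/nr_total parameters, are hypothetical stand-ins rather
than the kernel's real types and helpers, and the cgroup hierarchy is
modelled as a flat array; the authoritative change is the diff below.

  /* Illustrative sketch only -- hypothetical stand-ins, not kernel code. */
  struct cfs_rq_stub {
          int throttled;
          unsigned int h_nr_running;
  };

  static void update_load_avg_stub(struct cfs_rq_stub *cfs_rq) { (void)cfs_rq; }
  static void update_cfs_group_stub(struct cfs_rq_stub *cfs_rq) { (void)cfs_rq; }

  /*
   * Walk a (flattened) cfs_rq hierarchy the way the reordered enqueue
   * path does: the first loop covers the cfs_rqs whose entity actually
   * needs enqueueing, the second refreshes the remaining ancestors.
   */
  static void enqueue_sketch(struct cfs_rq_stub **path, int nr_enqueue, int nr_total)
  {
          int i;

          for (i = 0; i < nr_enqueue; i++) {
                  struct cfs_rq_stub *cfs_rq = path[i];

                  /* *h_nr_running is updated for every cfs_rq in this loop... */
                  cfs_rq->h_nr_running++;

                  /* ...and a throttled cfs_rq ends the walk with a goto, not a break. */
                  if (cfs_rq->throttled)
                          goto enqueue_throttle;
          }

          for (; i < nr_total; i++) {
                  struct cfs_rq_stub *cfs_rq = path[i];

                  if (cfs_rq->throttled)
                          goto enqueue_throttle;

                  /* Same step ordering everywhere: load avg, group, then h_nr_running. */
                  update_load_avg_stub(cfs_rq);
                  update_cfs_group_stub(cfs_rq);
                  cfs_rq->h_nr_running++;
          }

  enqueue_throttle:
          return;
  }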
No functional or performance changes are expected, and none were
observed during testing.

Signed-off-by: Vincent Guittot
Signed-off-by: Mel Gorman
Signed-off-by: Ingo Molnar
Reviewed-by: Dietmar Eggemann
Acked-by: Peter Zijlstra
Cc: Juri Lelli
Cc: Valentin Schneider
Cc: Phil Auld
Cc: Hillf Danton
Link: https://lore.kernel.org/r/20200224095223.13361-5-mgorman@techsingularity.net
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 42 ++++++++++++++++++++----------------------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eeaf34d65742..0e042e847ed3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5232,32 +5232,31 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		cfs_rq = cfs_rq_of(se);
 		enqueue_entity(cfs_rq, se, flags);
 
-		/*
-		 * end evaluation on encountering a throttled cfs_rq
-		 *
-		 * note: in the case of encountering a throttled cfs_rq we will
-		 * post the final h_nr_running increment below.
-		 */
-		if (cfs_rq_throttled(cfs_rq))
-			break;
 		cfs_rq->h_nr_running++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto enqueue_throttle;
+
 		flags = ENQUEUE_WAKEUP;
 	}
 
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
-		cfs_rq->h_nr_running++;
-		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 
+		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
-			break;
+			goto enqueue_throttle;
 
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 		update_cfs_group(se);
+
+		cfs_rq->h_nr_running++;
+		cfs_rq->idle_h_nr_running += idle_h_nr_running;
 	}
 
+enqueue_throttle:
 	if (!se) {
 		add_nr_running(rq, 1);
 		/*
@@ -5317,17 +5316,13 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		cfs_rq = cfs_rq_of(se);
 		dequeue_entity(cfs_rq, se, flags);
 
-		/*
-		 * end evaluation on encountering a throttled cfs_rq
-		 *
-		 * note: in the case of encountering a throttled cfs_rq we will
-		 * post the final h_nr_running decrement below.
-		 */
-		if (cfs_rq_throttled(cfs_rq))
-			break;
 		cfs_rq->h_nr_running--;
 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto dequeue_throttle;
+
 		/* Don't dequeue parent if it has other entities besides us */
 		if (cfs_rq->load.weight) {
 			/* Avoid re-evaluating load for this entity: */
@@ -5345,16 +5340,19 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
-		cfs_rq->h_nr_running--;
-		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 
+		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
-			break;
+			goto dequeue_throttle;
 
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 		update_cfs_group(se);
+
+		cfs_rq->h_nr_running--;
+		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
 	}
 
+dequeue_throttle:
 	if (!se)
 		sub_nr_running(rq, 1);
 
-- 
2.25.1