From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Huaixin Chang, Peter Zijlstra, Ben Segall, Sasha Levin
Subject: [PATCH AUTOSEL 5.4 132/175] sched: Defend cfs and rt bandwidth quota against overflow
Date: Mon, 8 Jun 2020 19:18:05 -0400
Message-Id: <20200608231848.3366970-132-sashal@kernel.org>
In-Reply-To: <20200608231848.3366970-1-sashal@kernel.org>
References: <20200608231848.3366970-1-sashal@kernel.org>

From: Huaixin Chang <changhuaixin@linux.alibaba.com>

[ Upstream commit d505b8af58912ae1e1a211fabc9995b19bd40828 ]

When users write a huge number into cpu.cfs_quota_us or
cpu.rt_runtime_us, overflow can happen during the to_ratio() shifts
performed by the schedulable checks.

to_ratio() could be altered to avoid the unnecessary internal overflow,
but min_cfs_quota_period is less than 1 << BW_SHIFT, so a cutoff would
still be needed. Instead, set a cap, MAX_BW, on cfs_quota_us and
rt_runtime_us to prevent the overflow.
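
[ Note: a minimal userspace sketch, not part of the patch, illustrating
  the overflow this cap defends against. BW_SHIFT and the shift inside
  to_ratio() mirror the kernel; the simplified division and the sample
  period are illustrative only (the kernel uses div64_u64()). ]

#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT 20
#define MAX_BW   ((UINT64_C(1) << (64 - BW_SHIFT)) - 1)	/* 2^44 - 1 */

/* Simplified stand-in for the kernel's to_ratio(). */
static uint64_t to_ratio(uint64_t period, uint64_t runtime)
{
	/* runtime << BW_SHIFT wraps silently once runtime exceeds 2^44 - 1 */
	return (runtime << BW_SHIFT) / period;
}

int main(void)
{
	uint64_t period = 100000000;	/* 100 ms, in ns */

	/* At the cap the ratio is large but correct... */
	printf("at cap:   %llu\n",
	       (unsigned long long)to_ratio(period, MAX_BW));
	/* ...one past it, the shift wraps to 0, so an enormous quota
	 * would wrongly pass the schedulable check. */
	printf("past cap: %llu\n",
	       (unsigned long long)to_ratio(period, MAX_BW + 1));
	return 0;
}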
Signed-off-by: Huaixin Chang <changhuaixin@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Ben Segall
Link: https://lkml.kernel.org/r/20200425105248.60093-1-changhuaixin@linux.alibaba.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/sched/core.c  |  8 ++++++++
 kernel/sched/rt.c    | 12 +++++++++++-
 kernel/sched/sched.h |  2 ++
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4874e1468279..361cbc2dc966 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7374,6 +7374,8 @@ static DEFINE_MUTEX(cfs_constraints_mutex);
 
 const u64 max_cfs_quota_period = 1 * NSEC_PER_SEC; /* 1s */
 static const u64 min_cfs_quota_period = 1 * NSEC_PER_MSEC; /* 1ms */
+/* More than 203 days if BW_SHIFT equals 20. */
+static const u64 max_cfs_runtime = MAX_BW * NSEC_PER_USEC;
 
 static int __cfs_schedulable(struct task_group *tg, u64 period, u64 runtime);
 
@@ -7401,6 +7403,12 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
 	if (period > max_cfs_quota_period)
 		return -EINVAL;
 
+	/*
+	 * Bound quota to defend quota against overflow during bandwidth shift.
+	 */
+	if (quota != RUNTIME_INF && quota > max_cfs_runtime)
+		return -EINVAL;
+
 	/*
 	 * Prevent race between setting of cfs_rq->runtime_enabled and
 	 * unthrottle_offline_cfs_rqs().
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 7bf917e4d63a..5b04bba4500d 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -9,6 +9,8 @@
 
 int sched_rr_timeslice = RR_TIMESLICE;
 int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
+/* More than 4 hours if BW_SHIFT equals 20. */
+static const u64 max_rt_runtime = MAX_BW;
 
 static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);
 
@@ -2513,6 +2515,12 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
 	if (rt_period == 0)
 		return -EINVAL;
 
+	/*
+	 * Bound quota to defend quota against overflow during bandwidth shift.
+	 */
+	if (rt_runtime != RUNTIME_INF && rt_runtime > max_rt_runtime)
+		return -EINVAL;
+
 	mutex_lock(&rt_constraints_mutex);
 	read_lock(&tasklist_lock);
 	err = __rt_schedulable(tg, rt_period, rt_runtime);
@@ -2634,7 +2642,9 @@ static int sched_rt_global_validate(void)
 		return -EINVAL;
 
 	if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
-		(sysctl_sched_rt_runtime > sysctl_sched_rt_period))
+		((sysctl_sched_rt_runtime > sysctl_sched_rt_period) ||
+		 ((u64)sysctl_sched_rt_runtime *
+			NSEC_PER_USEC > max_rt_runtime)))
 		return -EINVAL;
 
 	return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c7e7481968bf..570659f1c6e2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1889,6 +1889,8 @@ extern void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);
 #define BW_SHIFT		20
 #define BW_UNIT			(1 << BW_SHIFT)
 #define RATIO_SHIFT		8
+#define MAX_BW_BITS		(64 - BW_SHIFT)
+#define MAX_BW			((1ULL << MAX_BW_BITS) - 1)
 unsigned long to_ratio(u64 period, u64 runtime);
 
 extern void init_entity_runnable_average(struct sched_entity *se);
-- 
2.25.1
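
[ Note: a hypothetical usage sketch, not from the original posting. The
  cgroup-v1 mount point and group name are assumptions; it shows the new
  EINVAL behavior when a quota beyond MAX_BW microseconds is written. ]

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Assumed cgroup-v1 cpu controller path; adjust for your system. */
	int fd = open("/sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us", O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* 2^44 us is one past MAX_BW (2^44 - 1) us, so with this patch
	 * applied the write should fail with EINVAL. */
	const char *huge = "17592186044416";
	if (write(fd, huge, strlen(huge)) < 0)
		printf("rejected as expected: %s\n", strerror(errno));

	close(fd);
	return 0;
}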