Subject: Re: [PATCH 4/6] sched/deadline: Block DL tasks on non-exclusive cpuset if bandwidth control is enabled
From: Dietmar Eggemann
To: Daniel Bristot de Oliveira, linux-kernel@vger.kernel.org
Cc: Marco Perronet, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt, Ben Segall, Mel Gorman, Li Zefan, Tejun Heo, Johannes Weiner, Valentin Schneider, cgroups@vger.kernel.org
Date: Tue, 19 Jan 2021 16:37:46 +0100

On 19/01/2021 10:41, Daniel Bristot de Oliveira wrote:
> On 1/14/21 4:51 PM, Dietmar Eggemann wrote:
>> On 12/01/2021 16:53, Daniel Bristot de Oliveira wrote:

[...]

>> with this patch:
>>
>> cgroupv1:
>>
>> root@juno:/sys/fs/cgroup/cpuset# chrt -d --sched-period 1000000000
>> --sched-runtime 100000000 0 sleep 500 &
>> [1] 1668
>> root@juno:/sys/fs/cgroup/cpuset# PID1=$!
>>
>> root@juno:/sys/fs/cgroup/cpuset# chrt -d --sched-period 1000000000
>> --sched-runtime 100000000 0 sleep 500 &
>> [2] 1669
>> root@juno:/sys/fs/cgroup/cpuset# PID2=$!
>>
>> root@juno:/sys/fs/cgroup/cpuset# mkdir A
>>
>> root@juno:/sys/fs/cgroup/cpuset# echo 0 > ./A/cpuset.mems
>> root@juno:/sys/fs/cgroup/cpuset# echo 0 > ./A/cpuset.cpus
>>
>> root@juno:/sys/fs/cgroup/cpuset# echo $PID2 > ./A/cgroup.procs
>> -bash: echo: write error: Device or resource busy
>>
>> root@juno:/sys/fs/cgroup/cpuset# echo 1 > ./A/cpuset.cpu_exclusive
>>
>> root@juno:/sys/fs/cgroup/cpuset# echo $PID2 > ./A/cgroup.procs
>>
>> root@juno:/sys/fs/cgroup/cpuset# cat /proc/$PID1/status | grep
>> Cpus_allowed_list | awk '{print $2}'
>> 0-5
>> root@juno:/sys/fs/cgroup/cpuset# cat /proc/$PID2/status | grep
>> Cpus_allowed_list | awk '{print $2}'
>> 0
>
> On cgroup v1 we also need to disable load balancing to create a root
> domain, right?

IMHO, that's not necessary for this example. But yes, if we create two
exclusive cpusets A and B, we want to turn off load balancing at the
root level. It also doesn't hurt to do it in this example; we just end
up with no sched domain, since load balancing is disabled at the root
and A only contains CPU0:

root@juno:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance

ls /proc/sys/kernel/sched_domain/cpu*/ doesn't show any (sched) domains.
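Something like this (an untested sketch; the CPU split across juno's
six CPUs is just an example) would create the two exclusive cpusets and
hence two separate root domains:

root@juno:/sys/fs/cgroup/cpuset# mkdir A B
root@juno:/sys/fs/cgroup/cpuset# echo 0 > A/cpuset.mems
root@juno:/sys/fs/cgroup/cpuset# echo 0 > B/cpuset.mems
root@juno:/sys/fs/cgroup/cpuset# echo 0-2 > A/cpuset.cpus
root@juno:/sys/fs/cgroup/cpuset# echo 3-5 > B/cpuset.cpus
root@juno:/sys/fs/cgroup/cpuset# echo 1 > A/cpuset.cpu_exclusive
root@juno:/sys/fs/cgroup/cpuset# echo 1 > B/cpuset.cpu_exclusive
root@juno:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance

With root-level load balancing off, A and B should then each show up
with their own sched domain hierarchy under
/proc/sys/kernel/sched_domain/ (given CONFIG_SCHED_DEBUG).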
>> cgroupv2:
>
> Yeah, I see your point. I was seeing a different output because of
> Fedora's default behavior of adding the tasks to
> system.slice/user.slice...
>
> doing:
>
>> root@juno:/sys/fs/cgroup# echo +cpuset > cgroup.subtree_control
>
> # echo $$ > cgroup.procs

The current shell should already be in the root cgroup?

root@juno:/sys/fs/cgroup# echo $$
1644
root@juno:/sys/fs/cgroup# cat cgroup.procs | grep $$
1644

[...]
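FWIW, the cgroup v2 counterpart of the v1 test above would look roughly
like this (a sketch, not tested here; it assumes a kernel whose v2
cpuset controller provides the cpuset.cpus.partition file):

root@juno:/sys/fs/cgroup# echo +cpuset > cgroup.subtree_control
root@juno:/sys/fs/cgroup# mkdir A
root@juno:/sys/fs/cgroup# echo 0 > A/cpuset.cpus
root@juno:/sys/fs/cgroup# echo 0 > A/cpuset.mems
root@juno:/sys/fs/cgroup# echo root > A/cpuset.cpus.partition
root@juno:/sys/fs/cgroup# echo $PID2 > A/cgroup.procs

Writing "root" to cpuset.cpus.partition takes over roughly the role
that cpuset.cpu_exclusive plus the root-level cpuset.sched_load_balance
toggle play in v1.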