From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932736AbcFGN3o (ORCPT );
	Tue, 7 Jun 2016 09:29:44 -0400
Received: from foss.arm.com ([217.140.101.70]:44355 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751973AbcFGN3n (ORCPT );
	Tue, 7 Jun 2016 09:29:43 -0400
Date: Tue, 7 Jun 2016 14:30:09 +0100
From: Juri Lelli
To: Daniel Bristot de Oliveira
Cc: linux-kernel@vger.kernel.org, Rik van Riel,
	"Luis Claudio R. Goncalves", Tejun Heo, Li Zefan,
	Johannes Weiner, cgroups@vger.kernel.org
Subject: Re: [PATCH] cgroup: disable irqs while holding css_set_lock
Message-ID: <20160607133009.GS9340@e106622-lin>
References: <20160607101402.GP9340@e106622-lin>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On 07/06/16 09:39, Daniel Bristot de Oliveira wrote:
> Ciao Juri,

Ciao, :-)

> On 06/07/2016 07:14 AM, Juri Lelli wrote:
> > Interesting. And your test is using the cpuset controller to partition
> > DEADLINE tasks and then modify groups concurrently?
> 
> Yes. I was studying the partitioning/admission control of the
> deadline scheduler, to document it.
> 
> I was using the minimal task from sched deadline's documentation
> as the load (the ./m in the script below).
> 
> Here is the script I was using in the test:

Thanks for sharing it. It is somewhat similar to some of my test
scripts, but I've got a question below.

> -----------%<------------------------------------------------------------
> #!/bin/sh
> 
> # I am running on an 8-cpu box; you need to adjust the
> # cpu masks to match your cpu topology.
> 
> cd /sys/fs/cgroup/cpuset
> 
> # global settings
> # echo 1 > cpuset.cpu_exclusive
> echo 0 > cpuset.sched_load_balance
> 
> # a cpuset to run ordinary load:
> if [ ! -d ordinary ]; then
> 	mkdir ordinary
> 	echo 0-3 > ordinary/cpuset.cpus
> 	echo 0 > ordinary/cpuset.mems
> 	echo 0 > ordinary/cpuset.cpu_exclusive
> 	# the load balance can be enabled on this cpuset.
> 	echo 1 > ordinary/cpuset.sched_load_balance
> fi
> 
> # move all threads to the ordinary cpuset
> ps -eL -o lwp | while read tid; do
> 	echo $tid >> ordinary/tasks 2> /dev/null || echo "thread $tid is pinned or died"
> done
> 
> echo $$ > ordinary/tasks
> cat /proc/self/cpuset
> ~/m &
> 
> # a single-cpu cpuset (partitioned)
> if [ ! -d partitioned ]; then
> 	mkdir partitioned
> 	echo 4 > partitioned/cpuset.cpus
> 	echo 0 > partitioned/cpuset.mems
> 	echo 0 > partitioned/cpuset.cpu_exclusive
> fi
> 
> echo $$ > partitioned/tasks
> cat /proc/self/cpuset
> ~/m &
> 
> # a set of cpus (clustered)
> if [ ! -d clustered ]; then
> 	mkdir clustered
> 	echo 5-7 > clustered/cpuset.cpus
> 	echo 0 > clustered/cpuset.mems
> 	echo 0 > clustered/cpuset.cpu_exclusive

So, this one and the partitioned one could actually overlap, since we
don't set cpu_exclusive. Is that right?

I guess the affinity mask of both m processes gets set correctly, but
I'm not sure whether we are missing a check in the admission control.
Can you actually create two overlapping sets and get DEADLINE tasks
running in them? For example, what happens if partitioned is [4] and
clustered is [4-7]? Does setattr() fail?

It is not really related to this patch; I'm just wondering if there is
another problem lying around.

Thanks,

- Juri

> 	# the load balance can be enabled on this cpuset.
> 	echo 1 > clustered/cpuset.sched_load_balance
> fi
> 
> echo $$ > clustered/tasks
> cat /proc/self/cpuset
> ~/m
> ----------->%------------------------------------------------------------
> 
> The problem rarely reproduces.
> 
> -- Daniel
> 