From: Steven Rostedt <rostedt@goodmis.org>
To: Juri Lelli <juri.lelli@redhat.com>
Cc: peterz@infradead.org, mingo@redhat.com,
	linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
	claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
	bristot@redhat.com, mathieu.poirier@linaro.org,
	lizefan@huawei.com, cgroups@vger.kernel.org
Subject: Re: [PATCH v5 4/5] sched/core: Prevent race condition between cpuset and __sched_setscheduler()
Date: Wed, 3 Oct 2018 15:42:30 -0400
Message-ID: <20181003154230.4b8792fb@gandalf.local.home>
In-Reply-To: <20180903142801.20046-5-juri.lelli@redhat.com>

On Mon,  3 Sep 2018 16:28:00 +0200
Juri Lelli <juri.lelli@redhat.com> wrote:


> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 5b43f482fa0f..8dc26005bb1e 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -2410,6 +2410,24 @@ void __init cpuset_init_smp(void)
>  	BUG_ON(!cpuset_migrate_mm_wq);
>  }
>  
> +/**
> + * cpuset_read_only_lock - Grab the callback_lock from another subsystem
> + *
> + * Description: Gives the holder read-only access to cpusets.
> + */
> +void cpuset_read_only_lock(void)
> +{
> +	raw_spin_lock(&callback_lock);

I was confused about why grabbing a spinlock gives read-only access,
so I read the long comment above the definition of callback_lock.
A couple of notes.

1) The above description needs to go into more detail as to why
grabbing a spinlock is "read only"; something like the sketch below.

2) The comment above callback_lock needs to mention these new
accessors, as reading that comment alone gives no hint that they
exist.
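
Just a rough sketch of the kind of detail I mean (my wording, going
from my reading of the cpuset locking comment, so double check it):

	/**
	 * cpuset_read_only_lock - Grab the callback_lock from another subsystem
	 *
	 * Description: Gives the holder read-only access to cpusets. Updating
	 * cpusets requires holding both cpuset_mutex and callback_lock, so
	 * taking callback_lock alone guarantees that no cpuset (and hence no
	 * root domain) changes underneath us. The holder must not modify any
	 * cpuset state itself, since it does not hold cpuset_mutex.
	 */

That would make it clear at a glance why a plain spin lock acts as a
read-side lock here.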

Other than that, I don't see any issue with this patch.

-- Steve


> +}
> +
> +/**
> + * cpuset_read_only_unlock - Release the callback_lock from another subsystem
> + */
> +void cpuset_read_only_unlock(void)
> +{
> +	raw_spin_unlock(&callback_lock);
> +}
> +
>  /**
>   * cpuset_cpus_allowed - return cpus_allowed mask from a tasks cpuset.
>   * @tsk: pointer to task_struct from which to obtain cpuset->cpus_allowed.
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 22f5622cba69..ac11ee599968 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4228,6 +4228,13 @@ static int __sched_setscheduler(struct task_struct *p,
>  	rq = task_rq_lock(p, &rf);
>  	update_rq_clock(rq);
>  
> +	/*
> +	 * Make sure we don't race with the cpuset subsystem where root
> +	 * domains can be rebuilt or modified while operations like DL
> +	 * admission checks are carried out.
> +	 */
> +	cpuset_read_only_lock();
> +
>  	/*
>  	 * Changing the policy of the stop threads it's a very bad idea:
>  	 */
> @@ -4289,6 +4296,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  	/* Re-check policy now with rq lock held: */
>  	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
>  		policy = oldpolicy = -1;
> +		cpuset_read_only_unlock();
>  		task_rq_unlock(rq, p, &rf);
>  		goto recheck;
>  	}
> @@ -4346,6 +4354,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  
>  	/* Avoid rq from going away on us: */
>  	preempt_disable();
> +	cpuset_read_only_unlock();
>  	task_rq_unlock(rq, p, &rf);
>  
>  	if (pi)
> @@ -4358,6 +4367,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  	return 0;
>  
>  unlock:
> +	cpuset_read_only_unlock();
>  	task_rq_unlock(rq, p, &rf);
>  	return retval;
>  }


