From: Thomas Gleixner <tglx@linutronix.de>
To: peterz@infradead.org
Cc: LKML <linux-kernel@vger.kernel.org>,
	Sebastian Siewior <bigeasy@linutronix.de>,
	Qais Yousef <qais.yousef@arm.com>, Scott Wood <swood@redhat.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Ingo Molnar <mingo@kernel.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Vincent Donnefort <vincent.donnefort@arm.com>
Subject: Re: [patch 09/10] sched/core: Add migrate_disable/enable()
Date: Fri, 18 Sep 2020 09:00:03 +0200
Message-ID: <87bli3y3gs.fsf@nanos.tec.linutronix.de>
In-Reply-To: <20200917142438.GH1362448@hirez.programming.kicks-ass.net>

On Thu, Sep 17 2020 at 16:24, peterz wrote:
> On Thu, Sep 17, 2020 at 11:42:11AM +0200, Thomas Gleixner wrote:
>
>> +static inline void update_nr_migratory(struct task_struct *p, long delta)
>> +{
>> +	if (p->nr_cpus_allowed > 1 && p->sched_class->update_migratory)
>> +		p->sched_class->update_migratory(p, delta);
>> +}
>
> Right, so as you know, I totally hate this thing :-) It adds a second
> (and radically different) version of changing affinity. I'm working on a
> version that uses the normal *set_cpus_allowed*() interface.

Tried that back and forth and either ended up in locking hell or with
race conditions of one sort or another, but my scheduler foo is rusty.
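
FWIW, the update_migratory() callback just forwards the delta into the
class internal migratory accounting. For RT that's something along
these lines (sketch, assuming patch 08 mirrors the existing
rt_nr_migratory bookkeeping in kernel/sched/rt.c):

	static void update_migratory_rt(struct task_struct *p, long delta)
	{
		struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);

		/*
		 * Keep the count of migratable RT tasks in sync and
		 * re-evaluate the runqueue overload state.
		 */
		rt_rq->rt_nr_migratory += delta;
		update_rt_migration(rt_rq);
	}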

>> +static inline void sched_migration_ctrl(struct task_struct *prev, int cpu)
>> +{
>> +	if (!prev->migration_ctrl.disable_cnt ||
>> +	    prev->cpus_ptr != &prev->cpus_mask)
>> +		return;
>> +
>> +	prev->cpus_ptr = cpumask_of(cpu);
>> +	update_nr_migratory(prev, -1);
>> +	prev->nr_cpus_allowed = 1;
>> +}
>
> So this thing is called from schedule(), with only rq->lock held, and
> that violates the locking rules for changing the affinity.
>
> I have a comment that explains how it's broken and why it's sort-of
> working.

Yeah :(
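
The rule being: p->cpus_ptr and p->nr_cpus_allowed may only be modified
with both p->pi_lock and rq->lock held, because readers like
try_to_wake_up() inspect them with only p->pi_lock held. The regular
affinity change path does (simplified):

	struct rq_flags rf;
	struct rq *rq;

	rq = task_rq_lock(p, &rf);	/* takes p->pi_lock AND rq->lock */
	/* ... update p->cpus_ptr and p->nr_cpus_allowed ... */
	task_rq_unlock(rq, p, &rf);

while sched_migration_ctrl() writes to them from deep in __schedule()
with only rq->lock held.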

>> +void migrate_disable(void)
>> +{
>> +	unsigned long flags;
>> +
>> +	if (!current->migration_ctrl.disable_cnt) {
>> +		raw_spin_lock_irqsave(&current->pi_lock, flags);
>> +		current->migration_ctrl.disable_cnt++;
>> +		raw_spin_unlock_irqrestore(&current->pi_lock, flags);
>> +	} else {
>> +		current->migration_ctrl.disable_cnt++;
>> +	}
>> +}
>
> That pi_lock seems unfortunate, and it isn't obvious what the point of
> it is.

Indeed. That obviously lacks a big fat comment.

current->migration_ctrl.disable_cnt++ is obviously an RMW operation. So
you end up with the following:

CPU0                                            CPU1
migrate_disable()
   R = current->migration_ctrl.disable_cnt;
                                                set_cpus_allowed_ptr()
                                                  task_rq_lock();
                                                  if (!p->migration_ctrl.disable_cnt) {
   current->migration_ctrl.disable_cnt = R + 1;
                                                    stop_one_cpu();
---> stopper_thread()
        BUG_ON(task->migration_ctrl.disable_cnt);

I tried to back out from that instead of BUG(), but that ended up being
even more of a hacky trainwreck than just biting the bullet and taking
pi_lock.
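
IOW, the big fat comment should say something like this (the code from
above, annotated):

	void migrate_disable(void)
	{
		unsigned long flags;

		if (!current->migration_ctrl.disable_cnt) {
			/*
			 * Serialize the 0 -> 1 transition against a
			 * concurrent set_cpus_allowed_ptr(), which
			 * evaluates p->migration_ctrl.disable_cnt under
			 * task_rq_lock(), i.e. with p->pi_lock held,
			 * before deciding to invoke stop_one_cpu().
			 * Taking pi_lock here makes that evaluation and
			 * this increment mutually exclusive, which
			 * prevents the BUG_ON() scenario above.
			 */
			raw_spin_lock_irqsave(&current->pi_lock, flags);
			current->migration_ctrl.disable_cnt++;
			raw_spin_unlock_irqrestore(&current->pi_lock, flags);
		} else {
			/* Nested disable. CPU local, no serialization needed. */
			current->migration_ctrl.disable_cnt++;
		}
	}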

>
> So, what I'm missing with all this are the design constraints for this
> trainwreck. Because the 'sane' solution was having migrate_disable()
> imply cpus_read_lock(). But that didn't fly because we can't have
> migrate_disable() / migrate_enable() schedule for raisins.

Yeah. The original code had some magic

      if (preemptible())
            cpus_read_lock();
      else
            p->atomic_migrate_disable++;

but that caused another set of horrors with asymmetric code like the
below and stuff like try_lock().
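
The fundamental problem: migrate_enable() cannot tell which variant the
matching migrate_disable() picked, and the two can run in different
contexts. Contrived illustration (some_lock is made up):

	migrate_disable();		/* preemptible: takes cpus_read_lock() */
	raw_spin_lock(&some_lock);	/* now atomic */
	...
	migrate_enable();		/* !preemptible: can only touch the
					 * counter, but the hotplug read lock
					 * is still held and cannot be dropped
					 * from atomic context */
	raw_spin_unlock(&some_lock);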

> And if I'm not mistaken, the above migrate_enable() *does* require being
> able to schedule, and our favourite piece of futex:
>
> 	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
> 	spin_unlock(q.lock_ptr);
>
> is broken. Consider that spin_unlock() doing a migrate_enable() with a
> pending sched_setaffinity().

Yes, we have the extra migrate_disable()/enable() pair around that.
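
I.e. roughly (sketch, from memory of the RT tree):

	migrate_disable();
	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
	spin_unlock(q.lock_ptr);	/* inner migrate_enable() is nested
					 * and only decrements the counter */
	...
	raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);
	migrate_enable();		/* outer enable runs preemptible and
					 * can schedule if a migration is
					 * pending */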

The other way I solved that was to have a spin_[un]lock() variant which
does not do the migrate disable/enable dance. That works in that code
because there is no per-CPUness requirement there. Not pretty either...
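
Something like (hypothetical names; the actual spelling in the RT tree
may have differed):

	spin_lock_no_mg(q.lock_ptr);	/* lock without migrate_disable() */
	...
	spin_unlock_no_mg(q.lock_ptr);	/* unlock without migrate_enable() */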

Thanks,

        tglx

