From: Valentin Schneider <valentin.schneider@arm.com>
To: Will Deacon <will@kernel.org>, linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	Will Deacon <will@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Marc Zyngier <maz@kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Qais Yousef <qais.yousef@arm.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Quentin Perret <qperret@google.com>, Tejun Heo <tj@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	kernel-team@android.com
Subject: Re: [PATCH v8 11/19] sched: Allow task CPU affinity to be restricted on asymmetric systems
Date: Fri, 04 Jun 2021 18:12:32 +0100
Message-ID: <87zgw5d05b.mognet@arm.com>
In-Reply-To: <20210602164719.31777-12-will@kernel.org>

On 02/06/21 17:47, Will Deacon wrote:
> +static int restrict_cpus_allowed_ptr(struct task_struct *p,
> +				     struct cpumask *new_mask,
> +				     const struct cpumask *subset_mask)
> +{
> +	struct rq_flags rf;
> +	struct rq *rq;
> +	int err;
> +	struct cpumask *user_mask = NULL;
> +
> +	if (!p->user_cpus_ptr) {
> +		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
> +
> +		if (!user_mask)
> +			return -ENOMEM;
> +	}
> +
> +	rq = task_rq_lock(p, &rf);
> +
> +	/*
> +	 * Forcefully restricting the affinity of a deadline task is
> +	 * likely to cause problems, so fail and noisily override the
> +	 * mask entirely.
> +	 */
> +	if (task_has_dl_policy(p) && dl_bandwidth_enabled()) {
> +		err = -EPERM;
> +		goto err_unlock;
> +	}
> +
> +	if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
> +		err = -EINVAL;
> +		goto err_unlock;
> +	}
> +
> +	/*
> +	 * We're about to butcher the task affinity, so keep track of what
> +	 * the user asked for in case we're able to restore it later on.
> +	 */
> +	if (user_mask) {
> +		cpumask_copy(user_mask, p->cpus_ptr);
> +		p->user_cpus_ptr = user_mask;
> +	}
> +

Shouldn't that be done before any of the bailouts above, so we can
potentially restore the mask even if we end up forcefully expanding the
affinity?
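
Something like the below is roughly what I have in mind - a completely
untested sketch. Note that user_mask has to be NULLed once it is handed
over to @p, otherwise the error path would free a mask the task now
owns:

  rq = task_rq_lock(p, &rf);

  /*
   * Snapshot the user-requested affinity *before* any of the bailouts,
   * so it can be restored later even if we end up forcefully overriding
   * the affinity.
   */
  if (user_mask) {
          cpumask_copy(user_mask, p->cpus_ptr);
          p->user_cpus_ptr = user_mask;
          user_mask = NULL;  /* now owned by @p; kfree(NULL) below is a no-op */
  }

  if (task_has_dl_policy(p) && dl_bandwidth_enabled()) {
          err = -EPERM;
          goto err_unlock;
  }

  if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
          err = -EINVAL;
          goto err_unlock;
  }

  return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);

  err_unlock:
          task_rq_unlock(rq, p, &rf);
          kfree(user_mask);
          return err;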

> +	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
> +
> +err_unlock:
> +	task_rq_unlock(rq, p, &rf);
> +	kfree(user_mask);
> +	return err;
> +}
> +
> +/*
> + * Restrict the CPU affinity of task @p so that it is a subset of
> + * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
> + * old affinity mask. If the resulting mask is empty, we warn and walk
> + * up the cpuset hierarchy until we find a suitable mask.
> + */
> +void force_compatible_cpus_allowed_ptr(struct task_struct *p)
> +{
> +	cpumask_var_t new_mask;
> +	const struct cpumask *override_mask = task_cpu_possible_mask(p);
> +
> +	alloc_cpumask_var(&new_mask, GFP_KERNEL);
> +
> +	/*
> +	 * __migrate_task() can fail silently in the face of concurrent
> +	 * offlining of the chosen destination CPU, so take the hotplug
> +	 * lock to ensure that the migration succeeds.
> +	 */
> +	cpus_read_lock();

I'm thinking this might not be required with:

  http://lore.kernel.org/r/20210526205751.842360-3-valentin.schneider@arm.com

but then again this isn't merged yet :-)

> +	if (!cpumask_available(new_mask))
> +		goto out_set_mask;
> +
> +	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
> +		goto out_free_mask;
> +
> +	/*
> +	 * We failed to find a valid subset of the affinity mask for the
> +	 * task, so override it based on its cpuset hierarchy.
> +	 */
> +	cpuset_cpus_allowed(p, new_mask);
> +	override_mask = new_mask;
> +
> +out_set_mask:
> +	if (printk_ratelimit()) {
> +		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
> +				task_pid_nr(p), p->comm,
> +				cpumask_pr_args(override_mask));
> +	}
> +
> +	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
> +out_free_mask:
> +	cpus_read_unlock();
> +	free_cpumask_var(new_mask);
> +}
> +
> +static int
> +__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
> +
> +/*
> + * Restore the affinity of a task @p which was previously restricted by a
> + * call to force_compatible_cpus_allowed_ptr(). This will clear (and free)
> + * @p->user_cpus_ptr.
> + */
> +void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
> +{
> +	unsigned long flags;
> +	struct cpumask *mask = p->user_cpus_ptr;
> +
> +	/*
> +	 * Try to restore the old affinity mask. If this fails, then
> +	 * we free the mask explicitly to avoid it being inherited across
> +	 * a subsequent fork().
> +	 */
> +	if (!mask || !__sched_setaffinity(p, mask))
> +		return;
> +
> +	raw_spin_lock_irqsave(&p->pi_lock, flags);
> +	release_user_cpus_ptr(p);
> +	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

AFAICT an affinity change can happen between __sched_setaffinity() and
reacquiring the ->pi_lock. Right now this can't be another
force_compatible_cpus_allowed_ptr() because this is only driven by
arch_setup_new_exec() against current, so we should be fine, but here be
dragons.
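
If we ever wanted to be defensive about it, something along these lines
(again, untested) would make the free conditional on the pointer not
having been swapped under us - probably overkill given the single
caller today:

  raw_spin_lock_irqsave(&p->pi_lock, flags);
  /* Only free the mask if it is still the one we restored from above */
  if (p->user_cpus_ptr == mask)
          release_user_cpus_ptr(p);
  raw_spin_unlock_irqrestore(&p->pi_lock, flags);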
