From: Thomas Gleixner <tglx@linutronix.de>
To: Ming Lei <ming.lei@redhat.com>, linux-kernel@vger.kernel.org
Cc: Ming Lei <ming.lei@redhat.com>, Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Peter Xu <peterx@redhat.com>, Juri Lelli <juri.lelli@redhat.com>
Subject: Re: [PATCH V3] sched/isolation: isolate from handling managed interrupt
Date: Sun, 19 Jan 2020 17:50:17 +0100
Message-ID: <87ftgb4chi.fsf@nanos.tec.linutronix.de>
In-Reply-To: <20200118125354.15796-1-ming.lei@redhat.com>

Ming,

Ming Lei <ming.lei@redhat.com> writes:
>  
> +static bool hk_should_isolate(struct irq_data *data,
> +		const struct cpumask *affinity, unsigned int cpu)

Please align the first argument on the second line with the first
argument on the first line.
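
I.e., a whitespace-only change:

	static bool hk_should_isolate(struct irq_data *data,
				      const struct cpumask *affinity,
				      unsigned int cpu)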

> +{
> +	const struct cpumask *hk_mask;
> +
> +	if (!housekeeping_enabled(HK_FLAG_MANAGED_IRQ))
> +		return false;
> +
> +	if (!irqd_affinity_is_managed(data))
> +		return false;

Pointless. That's already checked at the beginning of the calling function.

> +
> +	if (!cpumask_test_cpu(cpu, affinity))
> +		return false;

Ditto.
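
Both conditions are already established by the caller. From memory (please
double check against kernel/irq/cpuhotplug.c), irq_restore_affinity_of_irq()
starts with:

	if (!irqd_affinity_is_managed(data) || !desc->action ||
	    !irq_data_get_irq_chip(data) || !cpumask_test_cpu(cpu, affinity))
		return;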

> +	hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
> +	if (cpumask_subset(affinity, hk_mask))
> +		return false;
> +
> +	if (cpumask_intersects(irq_data_get_effective_affinity_mask(data),
> +				hk_mask))

I really had to think twice about why this is correct. The example I gave
you is far more intuitive; it's just missing the check below.

> +		return false;
> +
> +	return cpumask_test_cpu(cpu, hk_mask);
> +}
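
With the two redundant checks removed, the whole function boils down to
something like this (untested sketch):

	static bool hk_should_isolate(struct irq_data *data,
				      const struct cpumask *affinity,
				      unsigned int cpu)
	{
		const struct cpumask *hk_mask;

		if (!housekeeping_enabled(HK_FLAG_MANAGED_IRQ))
			return false;

		hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
		if (cpumask_subset(affinity, hk_mask))
			return false;

		if (cpumask_intersects(irq_data_get_effective_affinity_mask(data),
				       hk_mask))
			return false;

		return cpumask_test_cpu(cpu, hk_mask);
	}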
> +
>  static void irq_restore_affinity_of_irq(struct irq_desc *desc, unsigned int cpu)
>  {
>  	struct irq_data *data = irq_desc_get_irq_data(desc);
> @@ -190,7 +216,8 @@ static void irq_restore_affinity_of_irq(struct irq_desc *desc, unsigned int cpu)
>  	 * CPU then it is already assigned to a CPU in the affinity
>  	 * mask. No point in trying to move it around.
>  	 */
> -	if (!irqd_is_single_target(data))
> +	if (!irqd_is_single_target(data) ||
> +			hk_should_isolate(data, affinity, cpu))

	if (!irqd_is_single_target(data) ||
	    hk_should_isolate(data, affinity, cpu))

Please.

>  		irq_set_affinity_locked(data, affinity, false);
>  }
>  
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 1753486b440c..a8af2ca806e2 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -18,6 +18,7 @@
>  #include <linux/sched.h>
>  #include <linux/sched/rt.h>
>  #include <linux/sched/task.h>
> +#include <linux/sched/isolation.h>
>  #include <uapi/linux/sched/types.h>
>  #include <linux/task_work.h>
>  
> @@ -217,7 +218,40 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
>  	if (!chip || !chip->irq_set_affinity)
>  		return -EINVAL;
>  
> -	ret = chip->irq_set_affinity(data, mask, force);
> +	/*
> +	 * If this is a managed interrupt and housekeeping is enabled on
> +	 * it check whether the requested affinity mask intersects with
> +	 * a housekeeping CPU. If so, then remove the isolated CPUs from
> +	 * the mask and just keep the housekeeping CPU(s). This prevents
> +	 * the affinity setter from routing the interrupt to an isolated
> +	 * CPU, so that I/O submitted from a housekeeping CPU does not
> +	 * cause interrupts on an isolated one.
> +	 *
> +	 * If the masks do not intersect, or the intersection contains
> +	 * no online CPUs, keep the requested mask. The isolated target
> +	 * CPUs only receive interrupts when the I/O operation is
> +	 * submitted directly from them.
> +	 *
> +	 * If all housekeeping CPUs in the affinity mask are offline,
> +	 * the interrupt is migrated away from the isolated CPU when
> +	 * any housekeeping CPU in the mask comes online.
> +	 */
> +	if (irqd_affinity_is_managed(data) &&
> +			housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {

Same here.
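
I.e., again whitespace only:

	if (irqd_affinity_is_managed(data) &&
	    housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {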

> +		static DEFINE_RAW_SPINLOCK(prog_mask_lock);
> +		static struct cpumask prog_mask;
> +		const struct cpumask *hk_mask =
> +			housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
> +

What's wrong with writing:

		const struct cpumask *hk_mask;

		hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);

Hmm?

> +		raw_spin_lock(&prog_mask_lock);
> +		cpumask_and(&prog_mask, mask, hk_mask);
> +		if (!cpumask_intersects(&prog_mask, cpu_online_mask))
> +			cpumask_copy(&prog_mask, mask);

Why copy?

		static struct cpumask tmp_mask;
		const struct cpumask *hk_mask, *mp;

		raw_spin_lock(&mask_lock);
		cpumask_and(&tmp_mask, mask, hk_mask);
		if (cpumask_intersects(&tmp_mask, cpu_online_mask))
			mp = &tmp_mask;
		else
			mp = mask;
		....
Hmm?
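
Putting the pieces together, the whole hunk might end up looking like this
(untested sketch; the lock and mask names are obviously up to you):

	if (irqd_affinity_is_managed(data) &&
	    housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {
		static DEFINE_RAW_SPINLOCK(tmp_mask_lock);
		static struct cpumask tmp_mask;
		const struct cpumask *hk_mask, *mp;

		hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);

		raw_spin_lock(&tmp_mask_lock);
		cpumask_and(&tmp_mask, mask, hk_mask);
		if (cpumask_intersects(&tmp_mask, cpu_online_mask))
			mp = &tmp_mask;
		else
			mp = mask;
		ret = chip->irq_set_affinity(data, mp, force);
		raw_spin_unlock(&tmp_mask_lock);
	} else {
		ret = chip->irq_set_affinity(data, mask, force);
	}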

Thanks,

        tglx
