* [PATCH] sched/isolation: isolate from handling managed interrupt
@ 2020-01-16  9:48 Ming Lei
  2020-01-16 12:08 ` Thomas Gleixner
  2020-01-17 20:52 ` kbuild test robot
From: Ming Lei @ 2020-01-16  9:48 UTC
  To: linux-kernel
  Cc: Ming Lei, Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Peter Xu,
	Juri Lelli

Userspace can't change a managed interrupt's affinity via the /proc
interface; however, applications often require that the specified
isolated CPUs not be disturbed by interrupts.

Add the sub-parameter 'managed_irq' to 'isolcpus' so that CPUs can be
isolated from handling managed interrupts.

Do not select an isolated CPU as the effective CPU if the interrupt's
affinity includes at least one housekeeping CPU. This guarantees that
isolated CPUs won't be interrupted by a managed irq as long as no IO is
submitted from an isolated CPU.
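
For example (hypothetical CPU layout), booting an 8-CPU machine with

    isolcpus=managed_irq,2-7

keeps CPUs 2-7 from being selected as the effective CPU of a managed
interrupt whenever that interrupt's affinity mask also covers CPU 0 or
CPU 1.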

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 .../admin-guide/kernel-parameters.txt         |  9 ++++++++
 include/linux/sched/isolation.h               |  1 +
 kernel/irq/manage.c                           | 22 ++++++++++++++++++-
 kernel/sched/isolation.c                      |  6 +++++
 4 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index ade4e6ec23e0..e0f18ac866d4 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1933,6 +1933,15 @@
 			  <cpu number> begins at 0 and the maximum value is
 			  "number of CPUs in system - 1".
 
+			managed_irq
+			  Isolate from handling managed interrupts. Userspace
+			  can't change a managed interrupt's affinity via the
+			  /proc interface, but applications often require that
+			  the specified isolated CPUs not be disturbed by
+			  interrupts. This guarantees that an isolated CPU won't
+			  be interrupted by a managed interrupt unless IO is
+			  submitted from an isolated CPU.
+
 			The format of <cpu-list> is described above.
 
 
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index 6c8512d3be88..0fbcbacd1b29 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -13,6 +13,7 @@ enum hk_flags {
 	HK_FLAG_TICK		= (1 << 4),
 	HK_FLAG_DOMAIN		= (1 << 5),
 	HK_FLAG_WQ		= (1 << 6),
+	HK_FLAG_MANAGED_IRQ	= (1 << 7),
 };
 
 #ifdef CONFIG_CPU_ISOLATION
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1753486b440c..9cc972d28d3c 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -20,6 +20,7 @@
 #include <linux/sched/task.h>
 #include <uapi/linux/sched/types.h>
 #include <linux/task_work.h>
+#include <linux/sched/isolation.h>
 
 #include "internals.h"
 
@@ -212,12 +213,29 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
 {
 	struct irq_desc *desc = irq_data_to_desc(data);
 	struct irq_chip *chip = irq_data_get_irq_chip(data);
+	const struct cpumask *housekeeping_mask =
+		housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
 	int ret;
+	cpumask_var_t tmp_mask = (struct cpumask *)mask;
 
 	if (!chip || !chip->irq_set_affinity)
 		return -EINVAL;
 
-	ret = chip->irq_set_affinity(data, mask, force);
+	zalloc_cpumask_var(&tmp_mask, GFP_ATOMIC);
+
+	/*
+	 * Userspace can't change managed irq's affinity, make sure that
+	 * isolated CPU won't be selected as the effective CPU if this
+	 * irq's affinity includes at least one housekeeping CPU.
+	 *
+	 * This way guarantees that isolated CPU won't be interrupted if
+	 * IO isn't submitted from isolated CPU.
+	 */
+	if (irqd_affinity_is_managed(data) && tmp_mask &&
+			cpumask_intersects(mask, housekeeping_mask))
+		cpumask_and(tmp_mask, mask, housekeeping_mask);
+
+	ret = chip->irq_set_affinity(data, tmp_mask, force);
 	switch (ret) {
 	case IRQ_SET_MASK_OK:
 	case IRQ_SET_MASK_OK_DONE:
@@ -229,6 +247,8 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
 		ret = 0;
 	}
 
+	free_cpumask_var(tmp_mask);
+
 	return ret;
 }
 
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 9fcb2a695a41..008d6ac2342b 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -163,6 +163,12 @@ static int __init housekeeping_isolcpus_setup(char *str)
 			continue;
 		}
 
+		if (!strncmp(str, "managed_irq,", 12)) {
+			str += 12;
+			flags |= HK_FLAG_MANAGED_IRQ;
+			continue;
+		}
+
 		pr_warn("isolcpus: Error, unknown flag\n");
 		return 0;
 	}
-- 
2.20.1



* Re: [PATCH] sched/isolation: isolate from handling managed interrupt
  2020-01-16  9:48 [PATCH] sched/isolation: isolate from handling managed interrupt Ming Lei
@ 2020-01-16 12:08 ` Thomas Gleixner
  2020-01-16 21:58   ` Ming Lei
  2020-01-17 20:52 ` kbuild test robot
From: Thomas Gleixner @ 2020-01-16 12:08 UTC
  To: Ming Lei, linux-kernel
  Cc: Ming Lei, Ingo Molnar, Peter Zijlstra, Peter Xu, Juri Lelli

Ming,

Ming Lei <ming.lei@redhat.com> writes:

> @@ -212,12 +213,29 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
>  {
>  	struct irq_desc *desc = irq_data_to_desc(data);
>  	struct irq_chip *chip = irq_data_get_irq_chip(data);
> +	const struct cpumask *housekeeping_mask =
> +		housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
>  	int ret;
> +	cpumask_var_t tmp_mask = (struct cpumask *)mask;
>  
>  	if (!chip || !chip->irq_set_affinity)
>  		return -EINVAL;
>  
> -	ret = chip->irq_set_affinity(data, mask, force);
> +	zalloc_cpumask_var(&tmp_mask, GFP_ATOMIC);

I clearly told you:

    "That's wrong. This code is called with interrupts disabled, so
     GFP_KERNEL is wrong. And NO, we won't do a GFP_ATOMIC allocation
     here."

Is that last sentence unclear in any way?

Thanks,

        tglx



* Re: [PATCH] sched/isolation: isolate from handling managed interrupt
  2020-01-16 12:08 ` Thomas Gleixner
@ 2020-01-16 21:58   ` Ming Lei
  2020-01-17  0:23     ` Thomas Gleixner
From: Ming Lei @ 2020-01-16 21:58 UTC
  To: Thomas Gleixner
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Peter Xu, Juri Lelli

Hi Thomas,

On Thu, Jan 16, 2020 at 01:08:17PM +0100, Thomas Gleixner wrote:
> Ming,
> 
> Ming Lei <ming.lei@redhat.com> writes:
> 
> > @@ -212,12 +213,29 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
> >  {
> >  	struct irq_desc *desc = irq_data_to_desc(data);
> >  	struct irq_chip *chip = irq_data_get_irq_chip(data);
> > +	const struct cpumask *housekeeping_mask =
> > +		housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
> >  	int ret;
> > +	cpumask_var_t tmp_mask = (struct cpumask *)mask;
> >  
> >  	if (!chip || !chip->irq_set_affinity)
> >  		return -EINVAL;
> >  
> > -	ret = chip->irq_set_affinity(data, mask, force);
> > +	zalloc_cpumask_var(&tmp_mask, GFP_ATOMIC);
> 
> I clearly told you:
> 
>     "That's wrong. This code is called with interrupts disabled, so
>      GFP_KERNEL is wrong. And NO, we won't do a GFP_ATOMIC allocation
>      here."
> 
> Is that last sentence unclear in any way?

Yeah, it is clear.

But GFP_ATOMIC is usually allowed in atomic context; could you
explain a bit why it can't be done in this case?

We can still fall back to the current behavior if the allocation
fails.

Or could you suggest another way to solve the issue if GFP_ATOMIC
can't be used?


Thanks,
Ming



* Re: [PATCH] sched/isolation: isolate from handling managed interrupt
  2020-01-16 21:58   ` Ming Lei
@ 2020-01-17  0:23     ` Thomas Gleixner
  2020-01-17  1:51       ` Ming Lei
From: Thomas Gleixner @ 2020-01-17  0:23 UTC
  To: Ming Lei; +Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Peter Xu, Juri Lelli

Ming,

Ming Lei <ming.lei@redhat.com> writes:
> On Thu, Jan 16, 2020 at 01:08:17PM +0100, Thomas Gleixner wrote:
>> Ming Lei <ming.lei@redhat.com> writes:
>> > -	ret = chip->irq_set_affinity(data, mask, force);
>> > +	zalloc_cpumask_var(&tmp_mask, GFP_ATOMIC);
>> 
>> I clearly told you:
>> 
>>     "That's wrong. This code is called with interrupts disabled, so
>>      GFP_KERNEL is wrong. And NO, we won't do a GFP_ATOMIC allocation
>>      here."
>> 
>> Is that last sentence unclear in any way?
>
> Yeah, it is clear.
>
> But GFP_ATOMIC is usually allowed in atomic context; could you
> explain a bit why it can't be done in this case?

You could have asked that question before sending this patch :)

> We can still fall back to the current behavior if the allocation fails.

Allocation failure is not the main concern here. In general we avoid
atomic allocations wherever we can, but even more so in contexts which
are fully atomic even on PREEMPT_RT, simply because PREEMPT_RT cannot
support that.

Regular spin_lock(); GFP_ATOMIC; spin_unlock(); sections are not a
problem because spinlocks turn into sleepable 'spinlocks' on RT and are
preemptible.

irq_desc::lock is a raw spinlock which is a true spinlock on RT for
obvious reasons and there any form of memory allocation is a NONO.
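
To make that concrete, a simplified sketch of the call chain (function
names as in kernel/irq/manage.c, signatures abbreviated; the locking is
the point):

	irq_set_affinity(irq, mask)
		raw_spin_lock_irqsave(&desc->lock, flags);  /* raw lock: spins even on RT */
		irq_set_affinity_locked(data, mask, false);
			irq_do_set_affinity(data, mask, false);
				/* any allocation here, GFP_ATOMIC included,
				 * may take sleeping locks inside the
				 * allocator on RT */
		raw_spin_unlock_irqrestore(&desc->lock, flags);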

> Or could you suggest another way to solve the issue if GFP_ATOMIC
> can't be used?

Just use the same mechanism as irq_setup_affinity() for now. Not pretty,
but it does the job. I have a plan for how to avoid that, but that's a
larger surgery.
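
A minimal sketch of that mechanism, i.e. a static scratch cpumask
serialized by its own raw spinlock instead of a runtime allocation
(untested, details to be adapted):

	static DEFINE_RAW_SPINLOCK(tmp_mask_lock);
	static struct cpumask tmp_mask;

	raw_spin_lock(&tmp_mask_lock);
	cpumask_and(&tmp_mask, mask, housekeeping_mask);
	ret = chip->irq_set_affinity(data, &tmp_mask, force);
	raw_spin_unlock(&tmp_mask_lock);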

Let me give you a few comments on the rest of the patch while at it.

> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -20,6 +20,7 @@
>  #include <linux/sched/task.h>
>  #include <uapi/linux/sched/types.h>
>  #include <linux/task_work.h>
> +#include <linux/sched/isolation.h>

Can you please move this include next to the other linux/sched/ one?

> @@ -212,12 +213,29 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
>  {
>  	struct irq_desc *desc = irq_data_to_desc(data);
>  	struct irq_chip *chip = irq_data_get_irq_chip(data);
> +	const struct cpumask *housekeeping_mask =
> +		housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);

Bah. This is unreadable garbage. What's wrong with defining the variable
and retrieving the pointer later on? Especially as this is only required
when there is an actual managed interrupt.

> +	/*
> +	 * Userspace can't change managed irq's affinity, make sure that
> +	 * isolated CPU won't be selected as the effective CPU if this
> +	 * irq's affinity includes at least one housekeeping CPU.
> +	 *
> +	 * This way guarantees that isolated CPU won't be interrupted if
> +	 * IO isn't submitted from isolated CPU.

This comment is more confusing than helpful. What about:

	/*
         * If this is a managed interrupt check whether the requested
         * affinity mask intersects with a housekeeping CPU. If so, then
         * remove the isolated CPUs from the mask and just keep the
         * housekeeping CPU(s). This prevents the affinity setter from
         * routing the interrupt to an isolated CPU to avoid that I/O
         * submitted from a housekeeping CPU causes interrupts on an
         * isolated one.
         *
         * If the masks do not intersect then keep the requested mask.
         * The isolated target CPUs are only receiving interrupts
         * when the I/O operation was submitted directly from them.
         */

Or something to that effect. Hmm?

> +	 */
> +	if (irqd_affinity_is_managed(data) && tmp_mask &&
> +			cpumask_intersects(mask, housekeeping_mask))
> +		cpumask_and(tmp_mask, mask, housekeeping_mask);

Now while writing the above comment the following interesting scenario
came to my mind:

Housekeeping CPUs 0-2, Isolated CPUs 3-7

Device has 4 queues. So the spreading results in:

 q0:   CPU 0/1, q1:   CPU 2/3, q2:   CPU 4/5, q3:   CPU 6/7

q1 is the interesting one. It's the one which gets the housekeeping mask
applied and CPU3 is removed.

So if CPU2 is offline when the mask is applied, then the result of the
affinity setting operation is going to be an error code because the
resulting mask is empty.

Admittedly this is a weird corner case, but it's bound to happen and the
resulting bug report is going to be hard to decode unless the reporter
can provide a 100% reproducer or a very accurate description of the
scenario.

Thanks,

        tglx


* Re: [PATCH] sched/isolation: isolate from handling managed interrupt
  2020-01-17  0:23     ` Thomas Gleixner
@ 2020-01-17  1:51       ` Ming Lei
From: Ming Lei @ 2020-01-17  1:51 UTC
  To: Thomas Gleixner
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Peter Xu, Juri Lelli

Hi Thomas,

On Fri, Jan 17, 2020 at 01:23:03AM +0100, Thomas Gleixner wrote:
> Ming,
> 
> Ming Lei <ming.lei@redhat.com> writes:
> > On Thu, Jan 16, 2020 at 01:08:17PM +0100, Thomas Gleixner wrote:
> >> Ming Lei <ming.lei@redhat.com> writes:
> >> > -	ret = chip->irq_set_affinity(data, mask, force);
> >> > +	zalloc_cpumask_var(&tmp_mask, GFP_ATOMIC);
> >> 
> >> I clearly told you:
> >> 
> >>     "That's wrong. This code is called with interrupts disabled, so
> >>      GFP_KERNEL is wrong. And NO, we won't do a GFP_ATOMIC allocation
> >>      here."
> >> 
> >> Is that last sentence unclear in any way?
> >
> > Yeah, it is clear.
> >
> > But GFP_ATOMIC is usually allowed in atomic context; could you
> > explain a bit why it can't be done in this case?
> 
> You could have asked that question before sending this patch :)
> 
> > We can still fall back to the current behavior if the allocation fails.
> 
> Allocation failure is not the main concern here. In general we avoid
> atomic allocations wherever we can, but even more so in contexts which
> are fully atomic even on PREEMPT_RT, simply because PREEMPT_RT cannot
> support that.
> 
> Regular spin_lock(); GFP_ATOMIC; spin_unlock(); sections are not a
> problem because spinlocks turn into sleepable 'spinlocks' on RT and are
> preemptible.
> 
> irq_desc::lock is a raw spinlock which is a true spinlock on RT for
> obvious reasons and there any form of memory allocation is a NONO.

Got it now, thanks for the great explanation.

> 
> > Or could you suggest another way to solve the issue if GFP_ATOMIC
> > can't be used?
> 
> Just use the same mechanism as irq_setup_affinity() for now. Not pretty,
> but it does the job. I have a plan for how to avoid that, but that's a
> larger surgery.

Good point.

> 
> Let me give you a few comments on the rest of the patch while at it.
> 
> > --- a/kernel/irq/manage.c
> > +++ b/kernel/irq/manage.c
> > @@ -20,6 +20,7 @@
> >  #include <linux/sched/task.h>
> >  #include <uapi/linux/sched/types.h>
> >  #include <linux/task_work.h>
> > +#include <linux/sched/isolation.h>
> 
> Can you please move this include next to the other linux/sched/ one?

OK.

> 
> > @@ -212,12 +213,29 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
> >  {
> >  	struct irq_desc *desc = irq_data_to_desc(data);
> >  	struct irq_chip *chip = irq_data_get_irq_chip(data);
> > +	const struct cpumask *housekeeping_mask =
> > +		housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
> 
> Bah. This is unreadable garbage. What's wrong with defining the variable
> and retrieving the pointer later on? Especially as this is only required
> when there is an actual managed interrupt.

OK. I did it that way only because the default housekeeping_mask is
cpu_possible_mask.

> 
> > +	/*
> > +	 * Userspace can't change managed irq's affinity, make sure that
> > +	 * isolated CPU won't be selected as the effective CPU if this
> > +	 * irq's affinity includes at least one housekeeping CPU.
> > +	 *
> > +	 * This way guarantees that isolated CPU won't be interrupted if
> > +	 * IO isn't submitted from isolated CPU.
> 
> This comment is more confusing than helpful. What about:
> 
> 	/*
>          * If this is a managed interrupt check whether the requested
>          * affinity mask intersects with a housekeeping CPU. If so, then
>          * remove the isolated CPUs from the mask and just keep the
>          * housekeeping CPU(s). This prevents the affinity setter from
>          * routing the interrupt to an isolated CPU to avoid that I/O
>          * submitted from a housekeeping CPU causes interrupts on an
>          * isolated one.
>          *
>          * If the masks do not intersect then keep the requested mask.
>          * The isolated target CPUs are only receiving interrupts
>          * when the I/O operation was submitted directly from them.
>          */
> 
> Or something to that effect. Hmm?

That is much better.

> 
> > +	 */
> > +	if (irqd_affinity_is_managed(data) && tmp_mask &&
> > +			cpumask_intersects(mask, housekeeping_mask))
> > +		cpumask_and(tmp_mask, mask, housekeeping_mask);
> 
> Now while writing the above comment the following interesting scenario
> came to my mind:
> 
> Housekeeping CPUs 0-2, Isolated CPUs 3-7
> 
> Device has 4 queues. So the spreading results in:
> 
>  q0:   CPU 0/1, q1:   CPU 2/3, q2:   CPU 4/5, q3:   CPU 6/7
> 
> q1 is the interesting one. It's the one which gets the housekeeping mask
> applied and CPU3 is removed.
> 
> So if CPU2 is offline when the mask is applied, then the result of the
> affinity setting operation is going to be an error code because the
> resulting mask is empty.
> 
> Admittedly this is a weird corner case, but it's bound to happen and the
> resulting bug report is going to be hard to decode unless the reporter
> can provide a 100% reproducer or a very accurate description of the
> scenario.

Good catch. We can fall back to the normal affinity mask when all CPUs
in the generated mask are offline.
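
Something like the following untested sketch, combining your two
suggestions (the static scratch mask from irq_setup_affinity() replaces
the allocation):

	if (irqd_affinity_is_managed(data) &&
	    housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {
		static DEFINE_RAW_SPINLOCK(tmp_mask_lock);
		static struct cpumask tmp_mask;
		const struct cpumask *hk_mask, *prog_mask;

		hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);

		raw_spin_lock(&tmp_mask_lock);
		cpumask_and(&tmp_mask, mask, hk_mask);
		/* Fall back to the full mask if no housekeeping CPU
		 * in the result is online */
		if (!cpumask_intersects(&tmp_mask, cpu_online_mask))
			prog_mask = mask;
		else
			prog_mask = &tmp_mask;
		ret = chip->irq_set_affinity(data, prog_mask, force);
		raw_spin_unlock(&tmp_mask_lock);
	} else {
		ret = chip->irq_set_affinity(data, mask, force);
	}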


Thanks,
Ming



* Re: [PATCH] sched/isolation: isolate from handling managed interrupt
  2020-01-16  9:48 [PATCH] sched/isolation: isolate from handling managed interrupt Ming Lei
  2020-01-16 12:08 ` Thomas Gleixner
@ 2020-01-17 20:52 ` kbuild test robot
From: kbuild test robot @ 2020-01-17 20:52 UTC
  To: Ming Lei
  Cc: kbuild-all, linux-kernel, Ming Lei, Thomas Gleixner, Ingo Molnar,
	Peter Zijlstra, Peter Xu, Juri Lelli


Hi Ming,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tip/auto-latest]
[also build test ERROR on linux/master tip/irq/core tip/sched/core linus/master v5.5-rc6 next-20200117]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Ming-Lei/sched-isolation-isolate-from-handling-managed-interrupt/20200117-150600
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 9be5556beac21234216feb91225e4a09a7cf6a98
config: s390-randconfig-a001-20200117 (attached as .config)
compiler: s390-linux-gcc (GCC) 7.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.5.0 make.cross ARCH=s390 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All error/warnings (new ones prefixed by >>):

   kernel//irq/manage.c: In function 'irq_do_set_affinity':
>> kernel//irq/manage.c:219:27: error: invalid initializer
     cpumask_var_t tmp_mask = (struct cpumask *)mask;
                              ^
>> kernel//irq/manage.c:234:37: warning: the address of 'tmp_mask' will always evaluate as 'true' [-Waddress]
     if (irqd_affinity_is_managed(data) && tmp_mask &&
                                        ^~
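
For reference, the error comes from the dual definition of
cpumask_var_t; include/linux/cpumask.h roughly reads:

	#ifdef CONFIG_CPUMASK_OFFSTACK
	typedef struct cpumask	*cpumask_var_t;		/* pointer: the cast
							   initializer compiles */
	#else
	typedef struct cpumask	cpumask_var_t[1];	/* array: can't be
							   initialized from a
							   pointer; its address
							   always evaluates true */
	#endif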

vim +234 kernel//irq/manage.c

   210	
   211	int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
   212				bool force)
   213	{
   214		struct irq_desc *desc = irq_data_to_desc(data);
   215		struct irq_chip *chip = irq_data_get_irq_chip(data);
   216		const struct cpumask *housekeeping_mask =
   217			housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
   218		int ret;
 > 219		cpumask_var_t tmp_mask = (struct cpumask *)mask;
   220	
   221		if (!chip || !chip->irq_set_affinity)
   222			return -EINVAL;
   223	
   224		zalloc_cpumask_var(&tmp_mask, GFP_ATOMIC);
   225	
   226		/*
   227		 * Userspace can't change managed irq's affinity, make sure that
   228		 * isolated CPU won't be selected as the effective CPU if this
   229		 * irq's affinity includes at least one housekeeping CPU.
   230		 *
   231		 * This way guarantees that isolated CPU won't be interrupted if
   232		 * IO isn't submitted from isolated CPU.
   233		 */
 > 234		if (irqd_affinity_is_managed(data) && tmp_mask &&
   235				cpumask_intersects(mask, housekeeping_mask))
   236			cpumask_and(tmp_mask, mask, housekeeping_mask);
   237	
   238		ret = chip->irq_set_affinity(data, tmp_mask, force);
   239		switch (ret) {
   240		case IRQ_SET_MASK_OK:
   241		case IRQ_SET_MASK_OK_DONE:
   242			cpumask_copy(desc->irq_common_data.affinity, mask);
   243			/* fall through */
   244		case IRQ_SET_MASK_OK_NOCOPY:
   245			irq_validate_effective_affinity(data);
   246			irq_set_thread_affinity(desc);
   247			ret = 0;
   248		}
   249	
   250		free_cpumask_var(tmp_mask);
   251	
   252		return ret;
   253	}
   254	

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

