linux-kernel.vger.kernel.org archive mirror
* [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption
@ 2009-04-08 21:07 Gary Hade
  2009-04-08 22:30 ` Yinghai Lu
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Gary Hade @ 2009-04-08 21:07 UTC (permalink / raw)
  To: mingo, mingo, tglx, hpa, x86; +Cc: linux-kernel, garyhade, lcm

Impact: Eliminates a race that can leave the system in an
        unusable state

During rapid offlining of multiple CPUs there is a chance
that an IRQ affinity move destination CPU will be offlined
before the IRQ affinity move initiated during the offlining
of a previous CPU completes.  This can happen when the device
is not very active and thus fails to generate the IRQ that is
needed to complete the IRQ affinity move before the move
destination CPU is offlined.  When this happens,
__assign_irq_vector() returns -EBUSY during the offlining
of the IRQ move destination CPU, which prevents initiation
of a new IRQ affinity move operation to an online CPU.  This
leaves the IRQ affinity set to an offlined CPU.

I have been able to reproduce the problem on some of our
systems using the following script.  When the system is idle
the problem often reproduces during the first CPU offlining
sequence.

#!/bin/sh

SYS_CPU_DIR=/sys/devices/system/cpu
VICTIM_IRQ=25
IRQ_MASK=f0

iteration=0
while true; do
  echo $iteration
  echo $IRQ_MASK > /proc/irq/$VICTIM_IRQ/smp_affinity
  for cpudir in $SYS_CPU_DIR/cpu[1-9] $SYS_CPU_DIR/cpu??; do
    echo 0 > $cpudir/online
  done
  for cpudir in $SYS_CPU_DIR/cpu[1-9] $SYS_CPU_DIR/cpu??; do
    echo 1 > $cpudir/online
  done
  iteration=`expr $iteration + 1`
done

The proposed fix takes advantage of the fact that when all
CPUs in the old domain are offline there is nothing to be done
by send_cleanup_vector() during the affinity move completion.
So, we simply avoid setting cfg->move_in_progress, which
prevents the above-mentioned -EBUSY return from __assign_irq_vector().
This allows initiation of a new IRQ affinity move to a CPU
that is not going offline.

Signed-off-by: Gary Hade <garyhade@us.ibm.com>

---
 arch/x86/kernel/apic/io_apic.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

Index: linux-2.6.30-rc1/arch/x86/kernel/apic/io_apic.c
===================================================================
--- linux-2.6.30-rc1.orig/arch/x86/kernel/apic/io_apic.c	2009-04-08 09:23:00.000000000 -0700
+++ linux-2.6.30-rc1/arch/x86/kernel/apic/io_apic.c	2009-04-08 09:23:16.000000000 -0700
@@ -363,7 +363,8 @@ set_extra_move_desc(struct irq_desc *des
 	struct irq_cfg *cfg = desc->chip_data;
 
 	if (!cfg->move_in_progress) {
-		/* it means that domain is not changed */
+		/* it means that domain has not changed or all CPUs
+		 * in old domain are offline */
 		if (!cpumask_intersects(desc->affinity, mask))
 			cfg->move_desc_pending = 1;
 	}
@@ -1262,8 +1263,11 @@ next:
 		current_vector = vector;
 		current_offset = offset;
 		if (old_vector) {
-			cfg->move_in_progress = 1;
 			cpumask_copy(cfg->old_domain, cfg->domain);
+			if (cpumask_intersects(cfg->old_domain,
+					       cpu_online_mask)) {
+				cfg->move_in_progress = 1;
+			}
 		}
 		for_each_cpu_and(new_cpu, tmp_mask, cpu_online_mask)
 			per_cpu(vector_irq, new_cpu)[vector] = irq;
@@ -2492,7 +2496,8 @@ static void irq_complete_move(struct irq
 		if (likely(!cfg->move_desc_pending))
 			return;
 
-		/* domain has not changed, but affinity did */
+		/* domain has not changed or all CPUs in old domain
+		 * are offline, but affinity changed */
 		me = smp_processor_id();
 		if (cpumask_test_cpu(me, desc->affinity)) {
 			*descp = desc = move_irq_desc(desc, me);

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption
  2009-04-08 21:07 [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption Gary Hade
@ 2009-04-08 22:30 ` Yinghai Lu
  2009-04-08 23:37   ` Gary Hade
  2009-04-10  1:29 ` Eric W. Biederman
  2009-04-12 19:32 ` [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption Eric W. Biederman
  2 siblings, 1 reply; 21+ messages in thread
From: Yinghai Lu @ 2009-04-08 22:30 UTC (permalink / raw)
  To: Gary Hade; +Cc: mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Wed, Apr 8, 2009 at 2:07 PM, Gary Hade <garyhade@us.ibm.com> wrote:
>                < snip >

so you mean that during __assign_irq_vector(), cpu_online_mask gets updated?
with your patch, what if that happens right after your second check?

it seems we are missing some lock_vector_lock() around the removal of the
cpu from the online mask.

YH


* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption
  2009-04-08 22:30 ` Yinghai Lu
@ 2009-04-08 23:37   ` Gary Hade
  2009-04-08 23:58     ` Yinghai Lu
  0 siblings, 1 reply; 21+ messages in thread
From: Gary Hade @ 2009-04-08 23:37 UTC (permalink / raw)
  To: Yinghai Lu; +Cc: Gary Hade, mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Wed, Apr 08, 2009 at 03:30:15PM -0700, Yinghai Lu wrote:
> On Wed, Apr 8, 2009 at 2:07 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> >                < snip >
> 
> so you mean during __assign_irq_vector(), cpu_online_mask get updated?

No, the CPU being offlined is removed from cpu_online_mask 
earlier via a call to remove_cpu_from_maps() from
cpu_disable_common().  This happens just before fixup_irqs()
is called.

> with your patch, how about that it just happen right after you check
> that second time.
> 
> it seems we are missing some lock_vector_lock() on the remove cpu from
> online mask.

The remove_cpu_from_maps() call in cpu_disable_common() is vector
lock protected:
void cpu_disable_common(void)
{
               < snip >
	/* It's now safe to remove this processor from the online map */
	lock_vector_lock();
	remove_cpu_from_maps(cpu);
	unlock_vector_lock();
	fixup_irqs();
}

Is this what you meant?

Gary

-- 
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503  IBM T/L: 775-4503
garyhade@us.ibm.com
http://www.ibm.com/linux/ltc



* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption
  2009-04-08 23:37   ` Gary Hade
@ 2009-04-08 23:58     ` Yinghai Lu
  2009-04-08 23:59       ` Yinghai Lu
  0 siblings, 1 reply; 21+ messages in thread
From: Yinghai Lu @ 2009-04-08 23:58 UTC (permalink / raw)
  To: Gary Hade; +Cc: mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Wed, Apr 8, 2009 at 4:37 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> On Wed, Apr 08, 2009 at 03:30:15PM -0700, Yinghai Lu wrote:
>> On Wed, Apr 8, 2009 at 2:07 PM, Gary Hade <garyhade@us.ibm.com> wrote:
>> >                < snip >
>>
>> so you mean during __assign_irq_vector(), cpu_online_mask get updated?
>
> No, the CPU being offlined is removed from cpu_online_mask
> earlier via a call to remove_cpu_from_maps() from
> cpu_disable_common().  This happens just before fixup_irqs()
> is called.
>
>> with your patch, how about that it just happen right after you check
>> that second time.
>>
>> it seems we are missing some lock_vector_lock() on the remove cpu from
>> online mask.
>
> The remove_cpu_from_maps() call in cpu_disable_common() is vector
> lock protected:
> void cpu_disable_common(void)
> {
>               < snip >
>        /* It's now safe to remove this processor from the online map */
>        lock_vector_lock();
>        remove_cpu_from_maps(cpu);
>        unlock_vector_lock();
>        fixup_irqs();
> }


__assign_irq_vector() always has vector_lock locked...
so cpu_online_mask will not change while it runs; why do you need to
check it again in __assign_irq_vector()?

YH


* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption
  2009-04-08 23:58     ` Yinghai Lu
@ 2009-04-08 23:59       ` Yinghai Lu
  2009-04-09 19:17         ` Gary Hade
  0 siblings, 1 reply; 21+ messages in thread
From: Yinghai Lu @ 2009-04-08 23:59 UTC (permalink / raw)
  To: Gary Hade; +Cc: mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Wed, Apr 8, 2009 at 4:58 PM, Yinghai Lu <yhlu.kernel@gmail.com> wrote:
> On Wed, Apr 8, 2009 at 4:37 PM, Gary Hade <garyhade@us.ibm.com> wrote:
>> On Wed, Apr 08, 2009 at 03:30:15PM -0700, Yinghai Lu wrote:
>>> On Wed, Apr 8, 2009 at 2:07 PM, Gary Hade <garyhade@us.ibm.com> wrote:
>>> >                < snip >
>>>
>>> so you mean during __assign_irq_vector(), cpu_online_mask get updated?
>>
>> No, the CPU being offlined is removed from cpu_online_mask
>> earlier via a call to remove_cpu_from_maps() from
>> cpu_disable_common().  This happens just before fixup_irqs()
>> is called.
>>
>>> with your patch, how about that it just happen right after you check
>>> that second time.
>>>
>>> it seems we are missing some lock_vector_lock() on the remove cpu from
>>> online mask.
>>
>> The remove_cpu_from_maps() call in cpu_disable_common() is vector
>> lock protected:
>> void cpu_disable_common(void)
>> {
>>               < snip >
>>        /* It's now safe to remove this processor from the online map */
>>        lock_vector_lock();
>>        remove_cpu_from_maps(cpu);
>>        unlock_vector_lock();
>>        fixup_irqs();
>> }
>
>
> __assign_irq_vector always has vector_lock locked...
> so cpu_online_mask will not changed during, why do you need to check
> that again in __assign_irq_vector ?
>
looks like you need to clear move_in_progress in fixup_irqs()

YH


* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption
  2009-04-08 23:59       ` Yinghai Lu
@ 2009-04-09 19:17         ` Gary Hade
  2009-04-09 22:38           ` Yinghai Lu
  0 siblings, 1 reply; 21+ messages in thread
From: Gary Hade @ 2009-04-09 19:17 UTC (permalink / raw)
  To: Yinghai Lu; +Cc: Gary Hade, mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Wed, Apr 08, 2009 at 04:59:35PM -0700, Yinghai Lu wrote:
> On Wed, Apr 8, 2009 at 4:58 PM, Yinghai Lu <yhlu.kernel@gmail.com> wrote:
> > On Wed, Apr 8, 2009 at 4:37 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> >> On Wed, Apr 08, 2009 at 03:30:15PM -0700, Yinghai Lu wrote:
> >>> On Wed, Apr 8, 2009 at 2:07 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> >>> >                < snip >
> >>>
> >>> so you mean during __assign_irq_vector(), cpu_online_mask get updated?
> >>
> >> No, the CPU being offlined is removed from cpu_online_mask
> >> earlier via a call to remove_cpu_from_maps() from
> >> cpu_disable_common().  This happens just before fixup_irqs()
> >> is called.
> >>
> >>> with your patch, how about that it just happen right after you check
> >>> that second time.
> >>>
> >>> it seems we are missing some lock_vector_lock() on the remove cpu from
> >>> online mask.
> >>
> >> The remove_cpu_from_maps() call in cpu_disable_common() is vector
> >> lock protected:
> >> void cpu_disable_common(void)
> >> {
> >>               < snip >
> >>        /* It's now safe to remove this processor from the online map */
> >>        lock_vector_lock();
> >>        remove_cpu_from_maps(cpu);
> >>        unlock_vector_lock();
> >>        fixup_irqs();
> >> }
> >
> >
> > __assign_irq_vector always has vector_lock locked...

OK, I see the 'vector_lock' spin_lock_irqsave/spin_unlock_irqrestore
surrounding the __assign_irq_vector call in assign_irq_vector.

> > so cpu_online_mask will not changed during,

I understand that this 'vector_lock' acquisition prevents
multiple simultaneous executions of __assign_irq_vector but
does that really prevent another thread executing outside 
__assign_irq_vector (or outside other 'vector_lock' serialized
code) from modifying cpu_online_mask?

Isn't it really 'cpu_add_remove_lock' (also held when 
__assign_irq_vector() is called in the context of a CPU add
or remove) that is used for this purpose?

> > why do you need to check that again in __assign_irq_vector ?

Because that is where the cfg->move_in_progress flag was
being set.

Is there some reason that the content of cpu_online_mask
cannot be trusted at this location?

If all the CPUs in the old domain are offline doesn't
that imply that we got to that location in response to
a CPU offline request?

> >
> looks like you need to clear move_in_progress in fixup_irqs()

This would be difficult since I believe the code is
currently partitioned in a manner that prevents access to
irq_cfg records from functions defined in arch/x86/kernel/irq_32.c
and arch/x86/kernel/irq_64.c.  It also doesn't feel right to
allow cfg->move_in_progress to be set in __assign_irq_vector
and then clear it in fixup_irqs().

Gary

-- 
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503  IBM T/L: 775-4503
garyhade@us.ibm.com
http://www.ibm.com/linux/ltc



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered  inactive device IRQ interrruption
  2009-04-09 19:17         ` Gary Hade
@ 2009-04-09 22:38           ` Yinghai Lu
  2009-04-10  0:53             ` Gary Hade
  0 siblings, 1 reply; 21+ messages in thread
From: Yinghai Lu @ 2009-04-09 22:38 UTC (permalink / raw)
  To: Gary Hade, Eric W. Biederman
  Cc: mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Thu, Apr 9, 2009 at 12:17 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> On Wed, Apr 08, 2009 at 04:59:35PM -0700, Yinghai Lu wrote:
>> On Wed, Apr 8, 2009 at 4:58 PM, Yinghai Lu <yhlu.kernel@gmail.com> wrote:
>> > On Wed, Apr 8, 2009 at 4:37 PM, Gary Hade <garyhade@us.ibm.com> wrote:
>> >> On Wed, Apr 08, 2009 at 03:30:15PM -0700, Yinghai Lu wrote:
>> >>> On Wed, Apr 8, 2009 at 2:07 PM, Gary Hade <garyhade@us.ibm.com> wrote:
>> >>> > Impact: Eliminates a race that can leave the system in an
>> >>> >        unusable state
>> >>> >
>> >>> > During rapid offlining of multiple CPUs there is a chance
>> >>> > that an IRQ affinity move destination CPU will be offlined
>> >>> > before the IRQ affinity move initiated during the offlining
>> >>> > of a previous CPU completes.  This can happen when the device
>> >>> > is not very active and thus fails to generate the IRQ that is
>> >>> > needed to complete the IRQ affinity move before the move
>> >>> > destination CPU is offlined.  When this happens there is an
>> >>> > -EBUSY return from __assign_irq_vector() during the offlining
>> >>> > of the IRQ move destination CPU which prevents initiation of
>> >>> > a new IRQ affinity move operation to an online CPU.  This
>> >>> > leaves the IRQ affinity set to an offlined CPU.
>> >>> >
>> >>> > I have been able to reproduce the problem on some of our
>> >>> > systems using the following script.  When the system is idle
>> >>> > the problem often reproduces during the first CPU offlining
>> >>> > sequence.
>> >>> >
>> >>> > #!/bin/sh
>> >>> >
>> >>> > SYS_CPU_DIR=/sys/devices/system/cpu
>> >>> > VICTIM_IRQ=25
>> >>> > IRQ_MASK=f0
>> >>> >
>> >>> > iteration=0
>> >>> > while true; do
>> >>> >  echo $iteration
>> >>> >  echo $IRQ_MASK > /proc/irq/$VICTIM_IRQ/smp_affinity
>> >>> >  for cpudir in $SYS_CPU_DIR/cpu[1-9] $SYS_CPU_DIR/cpu??; do
>> >>> >    echo 0 > $cpudir/online
>> >>> >  done
>> >>> >  for cpudir in $SYS_CPU_DIR/cpu[1-9] $SYS_CPU_DIR/cpu??; do
>> >>> >    echo 1 > $cpudir/online
>> >>> >  done
>> >>> >  iteration=`expr $iteration + 1`
>> >>> > done
>> >>> >
>> >>> > The proposed fix takes advantage of the fact that when all
>> >>> > CPUs in the old domain are offline there is nothing to be done
>> >>> > by send_cleanup_vector() during the affinity move completion.
>> >>> > So, we simply avoid setting cfg->move_in_progress preventing
>> >>> > the above mentioned -EBUSY return from __assign_irq_vector().
>> >>> > This allows initiation of a new IRQ affinity move to a CPU
>> >>> > that is not going offline.
>> >>> >
>> >>> > Signed-off-by: Gary Hade <garyhade@us.ibm.com>
>> >>> >
>> >>> > ---
>> >>> >  arch/x86/kernel/apic/io_apic.c |   11 ++++++++---
>> >>> >  1 file changed, 8 insertions(+), 3 deletions(-)
>> >>> >
>> >>> > Index: linux-2.6.30-rc1/arch/x86/kernel/apic/io_apic.c
>> >>> > ===================================================================
>> >>> > --- linux-2.6.30-rc1.orig/arch/x86/kernel/apic/io_apic.c        2009-04-08 09:23:00.000000000 -0700
>> >>> > +++ linux-2.6.30-rc1/arch/x86/kernel/apic/io_apic.c     2009-04-08 09:23:16.000000000 -0700
>> >>> > @@ -363,7 +363,8 @@ set_extra_move_desc(struct irq_desc *des
>> >>> >        struct irq_cfg *cfg = desc->chip_data;
>> >>> >
>> >>> >        if (!cfg->move_in_progress) {
>> >>> > -               /* it means that domain is not changed */
>> >>> > +               /* it means that domain has not changed or all CPUs
>> >>> > +                * in old domain are offline */
>> >>> >                if (!cpumask_intersects(desc->affinity, mask))
>> >>> >                        cfg->move_desc_pending = 1;
>> >>> >        }
>> >>> > @@ -1262,8 +1263,11 @@ next:
>> >>> >                current_vector = vector;
>> >>> >                current_offset = offset;
>> >>> >                if (old_vector) {
>> >>> > -                       cfg->move_in_progress = 1;
>> >>> >                        cpumask_copy(cfg->old_domain, cfg->domain);
>> >>> > +                       if (cpumask_intersects(cfg->old_domain,
>> >>> > +                                              cpu_online_mask)) {
>> >>> > +                               cfg->move_in_progress = 1;
>> >>> > +                       }
>> >>> >                }
>> >>> >                for_each_cpu_and(new_cpu, tmp_mask, cpu_online_mask)
>> >>> >                        per_cpu(vector_irq, new_cpu)[vector] = irq;
>> >>> > @@ -2492,7 +2496,8 @@ static void irq_complete_move(struct irq
>> >>> >                if (likely(!cfg->move_desc_pending))
>> >>> >                        return;
>> >>> >
>> >>> > -               /* domain has not changed, but affinity did */
>> >>> > +               /* domain has not changed or all CPUs in old domain
>> >>> > +                * are offline, but affinity changed */
>> >>> >                me = smp_processor_id();
>> >>> >                if (cpumask_test_cpu(me, desc->affinity)) {
>> >>> >                        *descp = desc = move_irq_desc(desc, me);
>> >>> > --
>> >>>
>> >>> so you mean during __assign_irq_vector(), cpu_online_mask get updated?
>> >>
>> >> No, the CPU being offlined is removed from cpu_online_mask
>> >> earlier via a call to remove_cpu_from_maps() from
>> >> cpu_disable_common().  This happens just before fixup_irqs()
>> >> is called.
>> >>
>> >>> with your patch, how about that it just happen right after you check
>> >>> that second time.
>> >>>
>> >>> it seems we are missing some lock_vector_lock() on the remove cpu from
>> >>> online mask.
>> >>
>> >> The remove_cpu_from_maps() call in cpu_disable_common() is vector
>> >> lock protected:
>> >> void cpu_disable_common(void)
>> >> {
>> >>               < snip >
>> >>        /* It's now safe to remove this processor from the online map */
>> >>        lock_vector_lock();
>> >>        remove_cpu_from_maps(cpu);
>> >>        unlock_vector_lock();
>> >>        fixup_irqs();
>> >> }
>> >
>> >
>> > __assign_irq_vector always has vector_lock locked...
>
> OK, I see the 'vector_lock' spin_lock_irqsave/spin_unlock_irqrestore
> surrounding the __assign_irq_vector call in assign_irq_vector.
>
>> > so cpu_online_mask will not changed during,
>
> I understand that this 'vector_lock' acquisition prevents
> multiple simultaneous executions of __assign_irq_vector but
> does that really prevent another thread executing outside
> __assign_irq_vector (or outside other 'vector_lock' serialized
> code) from modifying cpu_online_mask?
>
> Isn't it really 'cpu_add_remove_lock' (also held when
> __assign_irq_vector() is called in the context of a CPU add
> or remove) that is used for this purpose?
>
>> > why do you need to check that again in __assign_irq_vector ?
>
> Because that is where the cfg->move_in_progress flag was
> being set.
>
> Is there some reason that the content of cpu_online_mask
> cannot be trusted at this location?
>
> If all the CPUs in the old domain are offline doesn't
> that imply that we got to that location in response to
> a CPU offline request?
>
>> >
>> looks like you need to clear move_in_progress in fixup_irqs()
>
> This would be difficult since I believe the code is
> currently partitioned in a manner that prevents access to
> irq_cfg records from functions defined in arch/x86/kernel/irq_32.c
> and arch/x86/kernel/irq_64.c.  It also doesn't feel right to
> allow cfg->move_in_progress to be set in __assign_irq_vector
> and then clear it in fixup_irqs().

it looks like before fixup_irqs() cpu_online_mask gets updated, and before
irq_complete_move() gets called.

so we could use fixup_irqs() to clear move_in_progress and clean up the
per-cpu vector_irq ...

YH


* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-09 22:38           ` Yinghai Lu
@ 2009-04-10  0:53             ` Gary Hade
  0 siblings, 0 replies; 21+ messages in thread
From: Gary Hade @ 2009-04-10  0:53 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Gary Hade, Eric W. Biederman, mingo, mingo, tglx, hpa, x86,
	linux-kernel, lcm

On Thu, Apr 09, 2009 at 03:38:04PM -0700, Yinghai Lu wrote:
> On Thu, Apr 9, 2009 at 12:17 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> > On Wed, Apr 08, 2009 at 04:59:35PM -0700, Yinghai Lu wrote:
> >> On Wed, Apr 8, 2009 at 4:58 PM, Yinghai Lu <yhlu.kernel@gmail.com> wrote:
> >> > On Wed, Apr 8, 2009 at 4:37 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> >> >> On Wed, Apr 08, 2009 at 03:30:15PM -0700, Yinghai Lu wrote:
> >> >>> On Wed, Apr 8, 2009 at 2:07 PM, Gary Hade <garyhade@us.ibm.com> wrote:
> >> >>> > Impact: Eliminates a race that can leave the system in an
> >> >>> >        unusable state
> >> >>> >
> >> >>> > During rapid offlining of multiple CPUs there is a chance
> >> >>> > that an IRQ affinity move destination CPU will be offlined
> >> >>> > before the IRQ affinity move initiated during the offlining
> >> >>> > of a previous CPU completes.  This can happen when the device
> >> >>> > is not very active and thus fails to generate the IRQ that is
> >> >>> > needed to complete the IRQ affinity move before the move
> >> >>> > destination CPU is offlined.  When this happens there is an
> >> >>> > -EBUSY return from __assign_irq_vector() during the offlining
> >> >>> > of the IRQ move destination CPU which prevents initiation of
> >> >>> > a new IRQ affinity move operation to an online CPU.  This
> >> >>> > leaves the IRQ affinity set to an offlined CPU.
> >> >>> >
> >> >>> > I have been able to reproduce the problem on some of our
> >> >>> > systems using the following script.  When the system is idle
> >> >>> > the problem often reproduces during the first CPU offlining
> >> >>> > sequence.
> >> >>> >
> >> >>> > #!/bin/sh
> >> >>> >
> >> >>> > SYS_CPU_DIR=/sys/devices/system/cpu
> >> >>> > VICTIM_IRQ=25
> >> >>> > IRQ_MASK=f0
> >> >>> >
> >> >>> > iteration=0
> >> >>> > while true; do
> >> >>> >  echo $iteration
> >> >>> >  echo $IRQ_MASK > /proc/irq/$VICTIM_IRQ/smp_affinity
> >> >>> >  for cpudir in $SYS_CPU_DIR/cpu[1-9] $SYS_CPU_DIR/cpu??; do
> >> >>> >    echo 0 > $cpudir/online
> >> >>> >  done
> >> >>> >  for cpudir in $SYS_CPU_DIR/cpu[1-9] $SYS_CPU_DIR/cpu??; do
> >> >>> >    echo 1 > $cpudir/online
> >> >>> >  done
> >> >>> >  iteration=`expr $iteration + 1`
> >> >>> > done
> >> >>> >
> >> >>> > The proposed fix takes advantage of the fact that when all
> >> >>> > CPUs in the old domain are offline there is nothing to be done
> >> >>> > by send_cleanup_vector() during the affinity move completion.
> >> >>> > So, we simply avoid setting cfg->move_in_progress preventing
> >> >>> > the above mentioned -EBUSY return from __assign_irq_vector().
> >> >>> > This allows initiation of a new IRQ affinity move to a CPU
> >> >>> > that is not going offline.
> >> >>> >
> >> >>> > Signed-off-by: Gary Hade <garyhade@us.ibm.com>
> >> >>> >
> >> >>> > ---
> >> >>> >  arch/x86/kernel/apic/io_apic.c |   11 ++++++++---
> >> >>> >  1 file changed, 8 insertions(+), 3 deletions(-)
> >> >>> >
> >> >>> > Index: linux-2.6.30-rc1/arch/x86/kernel/apic/io_apic.c
> >> >>> > ===================================================================
> >> >>> > --- linux-2.6.30-rc1.orig/arch/x86/kernel/apic/io_apic.c        2009-04-08 09:23:00.000000000 -0700
> >> >>> > +++ linux-2.6.30-rc1/arch/x86/kernel/apic/io_apic.c     2009-04-08 09:23:16.000000000 -0700
> >> >>> > @@ -363,7 +363,8 @@ set_extra_move_desc(struct irq_desc *des
> >> >>> >        struct irq_cfg *cfg = desc->chip_data;
> >> >>> >
> >> >>> >        if (!cfg->move_in_progress) {
> >> >>> > -               /* it means that domain is not changed */
> >> >>> > +               /* it means that domain has not changed or all CPUs
> >> >>> > +                * in old domain are offline */
> >> >>> >                if (!cpumask_intersects(desc->affinity, mask))
> >> >>> >                        cfg->move_desc_pending = 1;
> >> >>> >        }
> >> >>> > @@ -1262,8 +1263,11 @@ next:
> >> >>> >                current_vector = vector;
> >> >>> >                current_offset = offset;
> >> >>> >                if (old_vector) {
> >> >>> > -                       cfg->move_in_progress = 1;
> >> >>> >                        cpumask_copy(cfg->old_domain, cfg->domain);
> >> >>> > +                       if (cpumask_intersects(cfg->old_domain,
> >> >>> > +                                              cpu_online_mask)) {
> >> >>> > +                               cfg->move_in_progress = 1;
> >> >>> > +                       }
> >> >>> >                }
> >> >>> >                for_each_cpu_and(new_cpu, tmp_mask, cpu_online_mask)
> >> >>> >                        per_cpu(vector_irq, new_cpu)[vector] = irq;
> >> >>> > @@ -2492,7 +2496,8 @@ static void irq_complete_move(struct irq
> >> >>> >                if (likely(!cfg->move_desc_pending))
> >> >>> >                        return;
> >> >>> >
> >> >>> > -               /* domain has not changed, but affinity did */
> >> >>> > +               /* domain has not changed or all CPUs in old domain
> >> >>> > +                * are offline, but affinity changed */
> >> >>> >                me = smp_processor_id();
> >> >>> >                if (cpumask_test_cpu(me, desc->affinity)) {
> >> >>> >                        *descp = desc = move_irq_desc(desc, me);
> >> >>> > --
> >> >>>
> >> >>> so you mean during __assign_irq_vector(), cpu_online_mask get updated?
> >> >>
> >> >> No, the CPU being offlined is removed from cpu_online_mask
> >> >> earlier via a call to remove_cpu_from_maps() from
> >> >> cpu_disable_common().  This happens just before fixup_irqs()
> >> >> is called.
> >> >>
> >> >>> with your patch, how about that it just happen right after you check
> >> >>> that second time.
> >> >>>
> >> >>> it seems we are missing some lock_vector_lock() on the remove cpu from
> >> >>> online mask.
> >> >>
> >> >> The remove_cpu_from_maps() call in cpu_disable_common() is vector
> >> >> lock protected:
> >> >> void cpu_disable_common(void)
> >> >> {
> >> >>               < snip >
> >> >>        /* It's now safe to remove this processor from the online map */
> >> >>        lock_vector_lock();
> >> >>        remove_cpu_from_maps(cpu);
> >> >>        unlock_vector_lock();
> >> >>        fixup_irqs();
> >> >> }
> >> >
> >> >
> >> > __assign_irq_vector always has vector_lock locked...
> >
> > OK, I see the 'vector_lock' spin_lock_irqsave/spin_unlock_irqrestore
> > surrounding the __assign_irq_vector call in assign_irq_vector.
> >
> >> > so cpu_online_mask will not changed during,
> >
> > I understand that this 'vector_lock' acquisition prevents
> > multiple simultaneous executions of __assign_irq_vector but
> > does that really prevent another thread executing outside
> > __assign_irq_vector (or outside other 'vector_lock' serialized
> > code) from modifying cpu_online_mask?
> >
> > Isn't it really 'cpu_add_remove_lock' (also held when
> > __assign_irq_vector() is called in the context of a CPU add
> > or remove) that is used for this purpose?
> >
> >> > why do you need to check that again in __assign_irq_vector ?
> >
> > Because that is where the cfg->move_in_progress flag was
> > being set.
> >
> > Is there some reason that the content of cpu_online_mask
> > cannot be trusted at this location?
> >
> > If all the CPUs in the old domain are offline doesn't
> > that imply that we got to that location in response to
> > a CPU offline request?
> >
> >> >
> >> looks like you need to clear move_in_progress in fixup_irqs()
> >
> > This would be difficult since I believe the code is
> > currently partitioned in a manner that prevents access to
> > irq_cfg records from functions defined in arch/x86/kernel/irq_32.c
> > and arch/x86/kernel/irq_64.c.  It also doesn't feel right to
> > allow cfg->move_in_progress to be set in __assign_irq_vector
> > and then clear it in fixup_irqs().
> 
> it looks like before fixup_irqs() cpu_online_mask gets updated,

Correct.  It is updated during the remove_cpu_from_maps()
call from cpu_disable_common() that precedes the
fixup_irqs() call.

> and before irq_complete_move() gets called.

Also correct, but it should be noted that irq_complete_move()
executes on an IRQ affinity move destination CPU and, depending
on how quickly the device generates the next IRQ, may not run
before the IRQ affinity move destination CPU(s) are offlined.

> 
> so we could use fixup_irqs() to clear move_in_progress and clean up the
> per-cpu vector_irq ...

If you are suggesting that the send_cleanup_vector() work
deferred to an IRQ affinity move destination CPU be performed
instead by code running on the CPU being removed, I considered
that before discovering the justification for the current design.
   http://lkml.org/lkml/2007/2/23/92

I also played around with using send_IPI_mask() to send
cfg->vector to the IRQ affinity move destination CPU(s) to
assure that cfg->move_in_progress would get cleared before
those CPU(s) were offlined.  This appeared to work, however,
it seemed like a hack and I wasn't sure how to deal with the
unhandled interrupts.

I then discovered that send_cleanup_vector() wasn't sending
IRQ_MOVE_CLEANUP_VECTOR to any CPU when all CPUs in the old
domain were offline.  This led me to the proposed fix.

Gary

-- 
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503  IBM T/L: 775-4503
garyhade@us.ibm.com
http://www.ibm.com/linux/ltc



* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-08 21:07 [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption Gary Hade
  2009-04-08 22:30 ` Yinghai Lu
@ 2009-04-10  1:29 ` Eric W. Biederman
  2009-04-10 20:09   ` Gary Hade
  2009-04-12 19:32 ` [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption Eric W. Biederman
  2 siblings, 1 reply; 21+ messages in thread
From: Eric W. Biederman @ 2009-04-10  1:29 UTC (permalink / raw)
  To: Gary Hade; +Cc: mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

Gary Hade <garyhade@us.ibm.com> writes:

> Impact: Eliminates a race that can leave the system in an
>         unusable state
>
> During rapid offlining of multiple CPUs there is a chance
> that an IRQ affinity move destination CPU will be offlined
> before the IRQ affinity move initiated during the offlining
> of a previous CPU completes.  This can happen when the device
> is not very active and thus fails to generate the IRQ that is
> needed to complete the IRQ affinity move before the move
> destination CPU is offlined.  When this happens there is an
> -EBUSY return from __assign_irq_vector() during the offlining
> of the IRQ move destination CPU which prevents initiation of
> a new IRQ affinity move operation to an online CPU.  This
> leaves the IRQ affinity set to an offlined CPU.
>
> I have been able to reproduce the problem on some of our
> systems using the following script.  When the system is idle
> the problem often reproduces during the first CPU offlining
> sequence.

You appear to be focusing on the IBM x460 and x3850.  Can you describe
what kind of interrupt setup you are running?

You may be the first person to actually hit the problems with cpu offlining
and irq migration that have theoretically been present for a long time.

Eric


* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-10  1:29 ` Eric W. Biederman
@ 2009-04-10 20:09   ` Gary Hade
  2009-04-10 22:02     ` Eric W. Biederman
  0 siblings, 1 reply; 21+ messages in thread
From: Gary Hade @ 2009-04-10 20:09 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Gary Hade, mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Thu, Apr 09, 2009 at 06:29:10PM -0700, Eric W. Biederman wrote:
> Gary Hade <garyhade@us.ibm.com> writes:
> 
> > Impact: Eliminates a race that can leave the system in an
> >         unusable state
> >
> > During rapid offlining of multiple CPUs there is a chance
> > that an IRQ affinity move destination CPU will be offlined
> > before the IRQ affinity move initiated during the offlining
> > of a previous CPU completes.  This can happen when the device
> > is not very active and thus fails to generate the IRQ that is
> > needed to complete the IRQ affinity move before the move
> > destination CPU is offlined.  When this happens there is an
> > -EBUSY return from __assign_irq_vector() during the offlining
> > of the IRQ move destination CPU which prevents initiation of
> > a new IRQ affinity move operation to an online CPU.  This
> > leaves the IRQ affinity set to an offlined CPU.
> >
> > I have been able to reproduce the problem on some of our
> > systems using the following script.  When the system is idle
> > the problem often reproduces during the first CPU offlining
> > sequence.
> 
> You appear to be focusing on the IBM x460 and x3850.

True.  I have also observed IRQ interruptions on an IBM x3950 M2
which I believe, but am not certain, were due to the other
"I/O redirection table register write with Remote IRR bit set"
caused problem.

I intend to do more testing on the x3950 M2 and other
IBM System x servers but I unfortunately do not currently
have access to any Intel based non-IBM MP servers.  I was
hoping that my testing request might at least get some
others interested in running the simple test script on their
systems and reporting their results.  Have you perhaps tried
the test on any of the Intel based MP systems that you have
access to?

> Can you describe
> what kind of interrupt setup you are running.

Being somewhat of an ioapic neophyte I am not exactly sure
what you are asking for here.  This is ioapic information
logged during boot if that helps at all.
x3850:
    ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
    IOAPIC[0]: apic_id 15, version 0, address 0xfec00000, GSI 0-35
    ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
    IOAPIC[1]: apic_id 14, version 0, address 0xfec01000, GSI 36-71
x460:
    ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
    IOAPIC[0]: apic_id 15, version 17, address 0xfec00000, GSI 0-35
    ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
    IOAPIC[1]: apic_id 14, version 17, address 0xfec01000, GSI 36-71
    ACPI: IOAPIC (id[0x0d] address[0xfec02000] gsi_base[72])
    IOAPIC[2]: apic_id 13, version 17, address 0xfec02000, GSI 72-107
    ACPI: IOAPIC (id[0x0c] address[0xfec03000] gsi_base[108])
    IOAPIC[3]: apic_id 12, version 17, address 0xfec03000, GSI 108-143

> 
> You may be the first person to actually hit the problems with cpu offlining
> and irq migration that have theoretically been present for a long time.

Your "Safely cleanup an irq after moving it" changes have been
present in mainline for quite some time so I have been thinking
about this as well.

I can certainly understand why it may not be very likely
for users to see the "I/O redirection table register write
with Remote IRR bit set" caused problem.  It has actually
been fairly difficult to reproduce.  I very much doubt that
there are many users out there that would be continuously offlining
and onlining all the offlineable CPUs from a script or program on
a heavily loaded system.  IMO, this would _not_ be a very common
usage scenario.  The test script that I provided usually performs
many CPU offline/online iterations before the problem is triggered.
A much more likely usage scenario, for which there is already code
in ack_apic_level() to avoid the problem, would be IRQ affinity
adjustments requested from user level (e.g. by the irqbalance daemon)
on a very active system.

It is less clear to me why users have not been reporting the
idle system race but I suspect that
  - script or program driven offlining of multiple CPUs
    may not be very common
  - the actual affinity on an idle system is usually set to
    cpu0 which is always online

I am glad you are looking at this since I know it involves code
that you should be quite familiar with.  Thanks!

Gary

-- 
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503  IBM T/L: 775-4503
garyhade@us.ibm.com
http://www.ibm.com/linux/ltc



* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-10 20:09   ` Gary Hade
@ 2009-04-10 22:02     ` Eric W. Biederman
  2009-04-11  7:44       ` Yinghai Lu
  2009-04-11  7:51       ` Yinghai Lu
  0 siblings, 2 replies; 21+ messages in thread
From: Eric W. Biederman @ 2009-04-10 22:02 UTC (permalink / raw)
  To: Gary Hade; +Cc: mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

Gary Hade <garyhade@us.ibm.com> writes:

> On Thu, Apr 09, 2009 at 06:29:10PM -0700, Eric W. Biederman wrote:
>> Gary Hade <garyhade@us.ibm.com> writes:
>> 
>> > Impact: Eliminates a race that can leave the system in an
>> >         unusable state
>> >
>> > During rapid offlining of multiple CPUs there is a chance
>> > that an IRQ affinity move destination CPU will be offlined
>> > before the IRQ affinity move initiated during the offlining
>> > of a previous CPU completes.  This can happen when the device
>> > is not very active and thus fails to generate the IRQ that is
>> > needed to complete the IRQ affinity move before the move
>> > destination CPU is offlined.  When this happens there is an
>> > -EBUSY return from __assign_irq_vector() during the offlining
>> > of the IRQ move destination CPU which prevents initiation of
>> > a new IRQ affinity move operation to an online CPU.  This
>> > leaves the IRQ affinity set to an offlined CPU.
>> >
>> > I have been able to reproduce the problem on some of our
>> > systems using the following script.  When the system is idle
>> > the problem often reproduces during the first CPU offlining
>> > sequence.
>> 
>> You appear to be focusing on the IBM x460 and x3850.
>
> True.  I have also observed IRQ interruptions on an IBM x3950 M2
> which I believe, but am not certain, were due to the other
> "I/O redirection table register write with Remote IRR bit set"
> caused problem.
>
> I intend to do more testing on the x3950 M2 and other
> IBM System x servers but I unfortunately do not currently
> have access to any Intel based non-IBM MP servers.  I was
> hoping that my testing request might at least get some
> others interested in running the simple test script on their
> systems and reporting their results.  Have you perhaps tried
> the test on any of the Intel based MP systems that you have
> access to?
>
>> Can you describe
>> what kind of interrupt setup you are running.
>
> Being somewhat of an ioapic neophyte I am not exactly sure
> what you are asking for here.  This is ioapic information
> logged during boot if that helps at all.
> x3850:
>     ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
>     IOAPIC[0]: apic_id 15, version 0, address 0xfec00000, GSI 0-35
>     ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
>     IOAPIC[1]: apic_id 14, version 0, address 0xfec01000, GSI 36-71
> x460:
>     ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
>     IOAPIC[0]: apic_id 15, version 17, address 0xfec00000, GSI 0-35
>     ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
>     IOAPIC[1]: apic_id 14, version 17, address 0xfec01000, GSI 36-71
>     ACPI: IOAPIC (id[0x0d] address[0xfec02000] gsi_base[72])
>     IOAPIC[2]: apic_id 13, version 17, address 0xfec02000, GSI 72-107
>     ACPI: IOAPIC (id[0x0c] address[0xfec03000] gsi_base[108])
>     IOAPIC[3]: apic_id 12, version 17, address 0xfec03000, GSI 108-143

Sorry.  My real question is which mode you are running the ioapics in.


>> You may be the first person to actually hit the problems with cpu offlining
>> and irq migration that have theoretically been present for a long time.
>
> Your "Safely cleanup an irq after moving it" changes have been
> present in mainline for quite some time so I have been thinking
> about this as well.
>
> I can certainly understand why it may not be very likely
> for users to see the "I/O redirection table register write
> with Remote IRR bit set" caused problem.  It has actually
> been fairly difficult to reproduce.  I very much doubt that
> there are many users out there that would be continuously offlining
> and onlining all the offlineable CPUs from a script or program on
> a heavily loaded system.  IMO, this would _not_ be a very common
> usage scenario.  The test script that I provided usually performs
> many CPU offline/online iterations before the problem is triggered.
> A much more likely usage scenario, for which there is already code
> in ack_apic_level() to avoid the problem, would be IRQ affinity
> adjustments requested from user level (e.g. by the irqbalance daemon)
> on a very active system.
>
> It is less clear to me why users have not been reporting the
> idle system race but I suspect that
>   - script or program driven offlining of multiple CPUs
>     may not be very common
>   - the actual affinity on an idle system is usually set to
>     cpu0 which is always online
>
> I am glad you are looking at this since I know it involves code
> that you should be quite familiar with.  Thanks!

The class of irq and the mode in which we run it make a big difference.

MSIs and machines with iommus that remap irqs we should be able to
migrate irqs safely at any time.

Also, in lowest priority delivery mode, when all cpus are set up on the
same vector to receive an irq, changing its affinity should be safe.
This works for up to 8 cpus, and I expect it is the common case.

For the remaining cases, where we change the vector number, I do not
know of a safe way to migrate the irq short of turning off the devices
generating the interrupts (and thus stopping them at the source).

The only reason I did not make the irq migration at cpu shutdown time
depend on CONFIG_BROKEN was that it is used by the suspend code on laptops.

Alternatively, if you have ioapics that support having their irqs
acknowledged with a register write, like ia64 has, I think there are
some additional options.

So depending on what hardware you have it might be possible to implement
this safely.

It looks like some additional bugs have slipped in since last I looked.

set_irq_affinity does this:
#ifdef CONFIG_GENERIC_PENDING_IRQ
	if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
		cpumask_copy(desc->affinity, cpumask);
		desc->chip->set_affinity(irq, cpumask);
	} else {
		desc->status |= IRQ_MOVE_PENDING;
		cpumask_copy(desc->pending_mask, cpumask);
	}
#else

That IRQ_DISABLED case is a software state and as such it has nothing to
do with how safe it is to move an irq in process context.

Do any of your device drivers call irq_disable?

Eric

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered  inactive device IRQ interrruption
  2009-04-10 22:02     ` Eric W. Biederman
@ 2009-04-11  7:44       ` Yinghai Lu
  2009-04-11  7:51       ` Yinghai Lu
  1 sibling, 0 replies; 21+ messages in thread
From: Yinghai Lu @ 2009-04-11  7:44 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Gary Hade, mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Fri, Apr 10, 2009 at 3:02 PM, Eric W. Biederman
<ebiederm@xmission.com> wrote:
> Gary Hade <garyhade@us.ibm.com> writes:
>
>> On Thu, Apr 09, 2009 at 06:29:10PM -0700, Eric W. Biederman wrote:
>>> Gary Hade <garyhade@us.ibm.com> writes:
>>>
>>> > Impact: Eliminates a race that can leave the system in an
>>> >         unusable state
>>> >
>>> > During rapid offlining of multiple CPUs there is a chance
>>> > that an IRQ affinity move destination CPU will be offlined
>>> > before the IRQ affinity move initiated during the offlining
>>> > of a previous CPU completes.  This can happen when the device
>>> > is not very active and thus fails to generate the IRQ that is
>>> > needed to complete the IRQ affinity move before the move
>>> > destination CPU is offlined.  When this happens there is an
>>> > -EBUSY return from __assign_irq_vector() during the offlining
>>> > of the IRQ move destination CPU which prevents initiation of
>>> > a new IRQ affinity move operation to an online CPU.  This
>>> > leaves the IRQ affinity set to an offlined CPU.
>>> >
>>> > I have been able to reproduce the problem on some of our
>>> > systems using the following script.  When the system is idle
>>> > the problem often reproduces during the first CPU offlining
>>> > sequence.
>>>
>>> You appear to be focusing on the IBM x460 and x3835.
>>
>> True.  I have also observed IRQ interruptions on an IBM x3950 M2
>> which I believe, but am not certain, were due to the other
>> "I/O redirection table register write with Remote IRR bit set"
>> caused problem.
>>
>> I intend to do more testing on the x3950 M2 and other
>> IBM System x servers but I unfortunately do not currently
>> have access to any Intel based non-IBM MP servers.  I was
>> hoping that my testing request might at least get some
>> others interested in running the simple test script on their
>> systems and reporting their results.  Have you perhaps tried
>> the test on any of the Intel based MP systems that you have
>> access to?
>>
>>> Can you describe
>>> what kind of interrupt setup you are running.
>>
>> Being somewhat of a ioapic neophyte I am not exactly sure
>> what you are asking for here.  This is ioapic information
>> logged during boot if that helps at all.
>> x3850:
>>     ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
>>     IOAPIC[0]: apic_id 15, version 0, address 0xfec00000, GSI 0-35
>>     ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
>>     IOAPIC[1]: apic_id 14, version 0, address 0xfec01000, GSI 36-71
>> x460:
>>     ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
>>     IOAPIC[0]: apic_id 15, version 17, address 0xfec00000, GSI 0-35
>>     ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
>>     IOAPIC[1]: apic_id 14, version 17, address 0xfec01000, GSI 36-71
>>     ACPI: IOAPIC (id[0x0d] address[0xfec02000] gsi_base[72])
>>     IOAPIC[2]: apic_id 13, version 17, address 0xfec02000, GSI 72-107
>>     ACPI: IOAPIC (id[0x0c] address[0xfec03000] gsi_base[108])
>>     IOAPIC[3]: apic_id 12, version 17, address 0xfec03000, GSI 108-143
>
> Sorry.  My real question is which mode you are running the ioapics in.
>

looks like ack_level_irq.

YH

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered  inactive device IRQ interrruption
  2009-04-10 22:02     ` Eric W. Biederman
  2009-04-11  7:44       ` Yinghai Lu
@ 2009-04-11  7:51       ` Yinghai Lu
  2009-04-11 11:01         ` Eric W. Biederman
  1 sibling, 1 reply; 21+ messages in thread
From: Yinghai Lu @ 2009-04-11  7:51 UTC (permalink / raw)
  To: Eric W. Biederman, Pallipadi, Venkatesh, Shaohua Li
  Cc: Gary Hade, mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Fri, Apr 10, 2009 at 3:02 PM, Eric W. Biederman
<ebiederm@xmission.com> wrote:
> Gary Hade <garyhade@us.ibm.com> writes:
>
> It looks like some additional bugs have slipped in since last I looked.
>
> set_irq_affinity does this:
> ifdef CONFIG_GENERIC_PENDING_IRQ
>        if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
>                cpumask_copy(desc->affinity, cpumask);
>                desc->chip->set_affinity(irq, cpumask);
>        } else {
>                desc->status |= IRQ_MOVE_PENDING;
>                cpumask_copy(desc->pending_mask, cpumask);
>        }
> #else
>
> That IRQ_DISABLED case is a software state and as such it has nothing to
> do with how safe it is to move an irq in process context.
>

commit 932775a4ab622e3c99bd59f14cc7d96722f79501
Author: venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>
Date:   Fri Sep 5 18:02:15 2008 -0700

    x86: HPET_MSI change IRQ affinity in process context when it is disabled

    Change the IRQ affinity in the process context when the IRQ is disabled.

    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index ddc9568..ad2ce72 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -87,10 +87,11 @@ int irq_set_affinity(unsigned int irq, cpumask_t cpumask)
                return -EINVAL;

 #ifdef CONFIG_GENERIC_PENDING_IRQ
-       if (desc->status & IRQ_MOVE_PCNTXT) {
+       if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
                unsigned long flags;

                spin_lock_irqsave(&desc->lock, flags);
+               desc->affinity = cpumask;
                desc->chip->set_affinity(irq, cpumask);
                spin_unlock_irqrestore(&desc->lock, flags);
        } else

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered  inactive device IRQ interrruption
  2009-04-11  7:51       ` Yinghai Lu
@ 2009-04-11 11:01         ` Eric W. Biederman
  2009-04-13 17:41           ` Pallipadi, Venkatesh
  0 siblings, 1 reply; 21+ messages in thread
From: Eric W. Biederman @ 2009-04-11 11:01 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Pallipadi, Venkatesh, Shaohua Li, Gary Hade, mingo, mingo, tglx,
	hpa, x86, linux-kernel, lcm

Yinghai Lu <yhlu.kernel@gmail.com> writes:

> commit 932775a4ab622e3c99bd59f14cc7d96722f79501
> Author: venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>
> Date:   Fri Sep 5 18:02:15 2008 -0700
>
>     x86: HPET_MSI change IRQ affinity in process context when it is disabled
>
>     Change the IRQ affinity in the process context when the IRQ is disabled.
>
>     Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
>     Signed-off-by: Shaohua Li <shaohua.li@intel.com>
>     Signed-off-by: Ingo Molnar <mingo@elte.hu>
>
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index ddc9568..ad2ce72 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -87,10 +87,11 @@ int irq_set_affinity(unsigned int irq, cpumask_t cpumask)
>                 return -EINVAL;
>
>  #ifdef CONFIG_GENERIC_PENDING_IRQ
> -       if (desc->status & IRQ_MOVE_PCNTXT) {
> +       if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
>                 unsigned long flags;
>
>                 spin_lock_irqsave(&desc->lock, flags);
> +               desc->affinity = cpumask;
>                 desc->chip->set_affinity(irq, cpumask);
>                 spin_unlock_irqrestore(&desc->lock, flags);
>         } else

If the goal is moving MSIs, we should modify the msi code to be safe
in process context and to set IRQ_MOVE_PCNTXT.

The only reason we migrate MSIs in interrupt context today is that there
wasn't infrastructure for support migration both in interrupt context
and outside of it.

Eric


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-08 21:07 [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption Gary Hade
  2009-04-08 22:30 ` Yinghai Lu
  2009-04-10  1:29 ` Eric W. Biederman
@ 2009-04-12 19:32 ` Eric W. Biederman
  2009-04-13 21:09   ` Gary Hade
  2 siblings, 1 reply; 21+ messages in thread
From: Eric W. Biederman @ 2009-04-12 19:32 UTC (permalink / raw)
  To: Gary Hade; +Cc: mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

Gary Hade <garyhade@us.ibm.com> writes:

> Impact: Eliminates a race that can leave the system in an
>         unusable state
>
> During rapid offlining of multiple CPUs there is a chance
> that an IRQ affinity move destination CPU will be offlined
> before the IRQ affinity move initiated during the offlining
> of a previous CPU completes.  This can happen when the device
> is not very active and thus fails to generate the IRQ that is
> needed to complete the IRQ affinity move before the move
> destination CPU is offlined.  When this happens there is an
> -EBUSY return from __assign_irq_vector() during the offlining
> of the IRQ move destination CPU which prevents initiation of
> a new IRQ affinity move operation to an online CPU.  This
> leaves the IRQ affinity set to an offlined CPU.
>
> I have been able to reproduce the problem on some of our
> systems using the following script.  When the system is idle
> the problem often reproduces during the first CPU offlining
> sequence.

Ok.  I have had a chance to think through what your patches
are doing, and they assume the broken logic in cpu_down is correct
while patching over some but not all of the problems.

First, the problem is not only migrating irqs when IRR is set.  The general
problem is that the state machines in most ioapics are fragile and
can get confused if you reprogram them at any point when an irq can
come in.  In the middle of an interrupt handler is the one time we
know interrupts can not come in.

To really fix this problem we need to do two things.
1) Track when irqs that can not be migrated from process context are
   on a cpu, and deny cpu hot-unplug.
2) Modify every interrupt that can be safely migrated in interrupt context
   to migrate irqs in interrupt context so no one encounters this problem
   in practice.

We can update MSIs and do a pci read to know when the update has made it
to a device.  Multi MSI is a disaster but I won't go there.

Lowest priority delivery mode is safe when the irq is not changing
domain but just changing the set of possible cpus the interrupt can be
delivered to.

And then of course all of the fun iommus that remap irqs.

Eric

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-11 11:01         ` Eric W. Biederman
@ 2009-04-13 17:41           ` Pallipadi, Venkatesh
  2009-04-13 18:50             ` Eric W. Biederman
  0 siblings, 1 reply; 21+ messages in thread
From: Pallipadi, Venkatesh @ 2009-04-13 17:41 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Yinghai Lu, Li, Shaohua, Gary Hade, mingo, mingo, tglx, hpa, x86,
	linux-kernel, lcm

On Sat, 2009-04-11 at 04:01 -0700, Eric W. Biederman wrote:
> Yinghai Lu <yhlu.kernel@gmail.com> writes:
> 
> > commit 932775a4ab622e3c99bd59f14cc7d96722f79501
> > Author: venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>
> > Date:   Fri Sep 5 18:02:15 2008 -0700
> >
> >     x86: HPET_MSI change IRQ affinity in process context when it is disabled
> >
> >     Change the IRQ affinity in the process context when the IRQ is disabled.
> >
> >     Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
> >     Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> >     Signed-off-by: Ingo Molnar <mingo@elte.hu>
> >
> > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> > index ddc9568..ad2ce72 100644
> > --- a/kernel/irq/manage.c
> > +++ b/kernel/irq/manage.c
> > @@ -87,10 +87,11 @@ int irq_set_affinity(unsigned int irq, cpumask_t cpumask)
> >                 return -EINVAL;
> >
> >  #ifdef CONFIG_GENERIC_PENDING_IRQ
> > -       if (desc->status & IRQ_MOVE_PCNTXT) {
> > +       if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
> >                 unsigned long flags;
> >
> >                 spin_lock_irqsave(&desc->lock, flags);
> > +               desc->affinity = cpumask;
> >                 desc->chip->set_affinity(irq, cpumask);
> >                 spin_unlock_irqrestore(&desc->lock, flags);
> >         } else
> 
> If the goal is moving MSIs, we should modify the msi code to be safe
> in process context and to set IRQ_MOVE_PCNTXT.
> 
> The only reason we migrate MSIs in interrupt context today is that there
> wasn't infrastructure for support migration both in interrupt context
> and outside of it.
> 

Yes. The idea here was to force the MSI migration to happen in process
context. One of the patches in the series did

	disable_irq(dev->irq);
	irq_set_affinity(dev->irq, cpumask_of(dev->cpu));
	enable_irq(dev->irq);

with the above patch adding irq/manage code check for interrupt disabled
and moving the interrupt in process context.

IIRC, there was no IRQ_MOVE_PCNTXT when we were developing this HPET
code and we ended up having this ugly hack. IRQ_MOVE_PCNTXT was there
when we eventually submitted the patch upstream. But, looks like I did a
blind rebasing instead of using IRQ_MOVE_PCNTXT in hpet MSI code. That
was my fault. Will send a patch to fix this ugliness.

Thanks,
Venki


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-13 17:41           ` Pallipadi, Venkatesh
@ 2009-04-13 18:50             ` Eric W. Biederman
  2009-04-13 22:20               ` [PATCH] irq, x86: Remove IRQ_DISABLED check in process context IRQ move Pallipadi, Venkatesh
  0 siblings, 1 reply; 21+ messages in thread
From: Eric W. Biederman @ 2009-04-13 18:50 UTC (permalink / raw)
  To: Pallipadi, Venkatesh
  Cc: Yinghai Lu, Li, Shaohua, Gary Hade, mingo, mingo, tglx, hpa, x86,
	linux-kernel, lcm

"Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com> writes:

> On Sat, 2009-04-11 at 04:01 -0700, Eric W. Biederman wrote:
>> Yinghai Lu <yhlu.kernel@gmail.com> writes:
>> 
>> > commit 932775a4ab622e3c99bd59f14cc7d96722f79501
>> > Author: venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>
>> > Date:   Fri Sep 5 18:02:15 2008 -0700
>> >
>> >     x86: HPET_MSI change IRQ affinity in process context when it is disabled
>> >
>> >     Change the IRQ affinity in the process context when the IRQ is disabled.
>> >
>> >     Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
>> >     Signed-off-by: Shaohua Li <shaohua.li@intel.com>
>> >     Signed-off-by: Ingo Molnar <mingo@elte.hu>
>> >
>> > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
>> > index ddc9568..ad2ce72 100644
>> > --- a/kernel/irq/manage.c
>> > +++ b/kernel/irq/manage.c
>> > @@ -87,10 +87,11 @@ int irq_set_affinity(unsigned int irq, cpumask_t cpumask)
>> >                 return -EINVAL;
>> >
>> >  #ifdef CONFIG_GENERIC_PENDING_IRQ
>> > -       if (desc->status & IRQ_MOVE_PCNTXT) {
>> > +       if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
>> >                 unsigned long flags;
>> >
>> >                 spin_lock_irqsave(&desc->lock, flags);
>> > +               desc->affinity = cpumask;
>> >                 desc->chip->set_affinity(irq, cpumask);
>> >                 spin_unlock_irqrestore(&desc->lock, flags);
>> >         } else
>> 
>> If the goal is moving MSIs, we should modify the msi code to be safe
>> in process context and to set IRQ_MOVE_PCNTXT.
>> 
>> The only reason we migrate MSIs in interrupt context today is that there
>> wasn't infrastructure for support migration both in interrupt context
>> and outside of it.
>> 
>
> Yes. The idea here was to force the MSI migration to happen in process
> context. One of the patches in the series did
>
> 	disable_irq(dev->irq);
> 	irq_set_affinity(dev->irq, cpumask_of(dev->cpu));
> 	enable_irq(dev->irq);
>
> with the above patch adding irq/manage code check for interrupt disabled
> and moving the interrupt in process context.
>
> IIRC, there was no IRQ_MOVE_PCNTXT when we were developing this HPET
> code and we ended up having this ugly hack. IRQ_MOVE_PCNTXT was there
> when we eventually submitted the patch upstream. But, looks like I did a
> blind rebasing instead of using IRQ_MOVE_PCNTXT in hpet MSI code. That
> was my fault. Will send a patch to fix this ugliness.

Thanks.

Eric

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption
  2009-04-12 19:32 ` [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption Eric W. Biederman
@ 2009-04-13 21:09   ` Gary Hade
  0 siblings, 0 replies; 21+ messages in thread
From: Gary Hade @ 2009-04-13 21:09 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Gary Hade, mingo, mingo, tglx, hpa, x86, linux-kernel, lcm

On Sun, Apr 12, 2009 at 12:32:11PM -0700, Eric W. Biederman wrote:
> Gary Hade <garyhade@us.ibm.com> writes:
> 
> > Impact: Eliminates a race that can leave the system in an
> >         unusable state
> >
> > During rapid offlining of multiple CPUs there is a chance
> > that an IRQ affinity move destination CPU will be offlined
> > before the IRQ affinity move initiated during the offlining
> > of a previous CPU completes.  This can happen when the device
> > is not very active and thus fails to generate the IRQ that is
> > needed to complete the IRQ affinity move before the move
> > destination CPU is offlined.  When this happens there is an
> > -EBUSY return from __assign_irq_vector() during the offlining
> > of the IRQ move destination CPU which prevents initiation of
> > a new IRQ affinity move operation to an online CPU.  This
> > leaves the IRQ affinity set to an offlined CPU.
> >
> > I have been able to reproduce the problem on some of our
> > systems using the following script.  When the system is idle
> > the problem often reproduces during the first CPU offlining
> > sequence.
> 
> Ok.  I have had a chance to think through what you your patches
> are doing and it is assuming the broken logic in cpu_down is correct
> and patching over some but not all of the problems.
> 
> First the problem is not migrating irqs when IRR is set.

When the device is very active, a printk in __target_IO_APIC_irq()
immediately prior to
    io_apic_modify(apic, 0x10 + pin*2, reg);
intermittently displays 'reg' values indicating that the
Remote IRR bit is set.

With PATCH 3/3 the same printk displays no 'reg' values
indicating that the Remote IRR bit is set _and_ the IRQ
interruption problem disappears.

This is what led me to very strongly believe that the
problem was caused by writing the I/O redirection table
register while the Remote IRR bit was set.

> The general
> problem is that the state machines in most ioapics are fragile and
> can get confused if you reprogram them at any point when an irq can
> come in.

IRQs are masked [from fixup_irqs() when offlining a CPU, from
ack_apic_level() when not offlining a CPU] during the reprogramming.
Does this not help avoid the issue?  Sorry if this is a nieve
question.

>  In the middle of an interrupt handler is the one time we
> know interrupts can not come in.
> 
> To really fix this problem we need to do two things.
> 1) Track when irqs that can not be migrated from process context are
>    on a cpu, and deny cpu hot-unplug.
> 2) Modify every interrupt that can be safely migrated in interrupt context
>    to migrate irqs in interrupt context so no one encounters this problem
>    in practice.
> 
> We can update MSIs and do a pci read to know when the update has made it
> to a device.  Multi MSI is a disaster but I won't go there.
> 
> In lowest priority delivery mode when the irq is not changing domain but
> just changing the set of possible cpus the interrupt can be delivered to.
> 
> And then of course all of the fun iommus that remap irqs.

Sounds non-trivial.

Gary

-- 
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503  IBM T/L: 775-4503
garyhade@us.ibm.com
http://www.ibm.com/linux/ltc


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH] irq, x86: Remove IRQ_DISABLED check in process context IRQ move
  2009-04-13 18:50             ` Eric W. Biederman
@ 2009-04-13 22:20               ` Pallipadi, Venkatesh
  2009-04-14  1:40                 ` Eric W. Biederman
  2009-04-14 14:06                 ` [tip:irq/urgent] x86, irq: " tip-bot for Pallipadi, Venkatesh
  0 siblings, 2 replies; 21+ messages in thread
From: Pallipadi, Venkatesh @ 2009-04-13 22:20 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Pallipadi, Venkatesh, Yinghai Lu, Li, Shaohua, Gary Hade, mingo,
	mingo, tglx, hpa, x86, linux-kernel, lcm, suresh.b.siddha


As discussed in the thread here
http://marc.info/?l=linux-kernel&m=123964468521142&w=2

On Fri, Apr 10, 2009 at 3:02 PM, Eric W. Biederman
<ebiederm@xmission.com> wrote:
>
> It looks like some additional bugs have slipped in since last I looked.
>
> set_irq_affinity does this:
> ifdef CONFIG_GENERIC_PENDING_IRQ
>        if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
>                cpumask_copy(desc->affinity, cpumask);
>                desc->chip->set_affinity(irq, cpumask);
>        } else {
>                desc->status |= IRQ_MOVE_PENDING;
>                cpumask_copy(desc->pending_mask, cpumask);
>        }
> #else
>
> That IRQ_DISABLED case is a software state and as such it has nothing to
> do with how safe it is to move an irq in process context.
>

"Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com> writes:
> On Sat, 2009-04-11 at 04:01 -0700, Eric W. Biederman wrote:
> >
> > If the goal is moving MSIs, we should modify the msi code to be safe
> > in process context and to set IRQ_MOVE_PCNTXT.
> >
> > The only reason we migrate MSIs in interrupt context today is that there
> > wasn't infrastructure for support migration both in interrupt context
> > and outside of it.
>
> Yes. The idea here was to force the MSI migration to happen in process
> context. One of the patches in the series did
>
>         disable_irq(dev->irq);
>         irq_set_affinity(dev->irq, cpumask_of(dev->cpu));
>         enable_irq(dev->irq);
>
> with the above patch adding irq/manage code check for interrupt disabled
> and moving the interrupt in process context.
>
> IIRC, there was no IRQ_MOVE_PCNTXT when we were developing this HPET
> code and we ended up having this ugly hack. IRQ_MOVE_PCNTXT was there
> when we eventually submitted the patch upstream. But, looks like I did a
> blind rebasing instead of using IRQ_MOVE_PCNTXT in hpet MSI code. That
> was my fault. Will send a patch to fix this ugliness.

Below patch fixes this. i.e., revert
commit 932775a4ab622e3c99bd59f14cc7d96722f79501
and add PCNTXT to HPET MSI setup. Also removes copying of desc->affinity
in generic code as set_affinity routines are doing it internally.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
---
 arch/x86/kernel/apic/io_apic.c |    2 ++
 kernel/irq/manage.c            |    5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 767fe7e..aaf8212 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -3667,12 +3667,14 @@ int arch_setup_hpet_msi(unsigned int irq)
 {
 	int ret;
 	struct msi_msg msg;
+	struct irq_desc *desc = irq_to_desc(irq);
 
 	ret = msi_compose_msg(NULL, irq, &msg);
 	if (ret < 0)
 		return ret;
 
 	hpet_msi_write(irq, &msg);
+	desc->status |= IRQ_MOVE_PCNTXT;
 	set_irq_chip_and_handler_name(irq, &hpet_msi_type, handle_edge_irq,
 		"edge");
 
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 7e2e7dd..2734eca 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -109,10 +109,9 @@ int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask)
 	spin_lock_irqsave(&desc->lock, flags);
 
 #ifdef CONFIG_GENERIC_PENDING_IRQ
-	if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
-		cpumask_copy(desc->affinity, cpumask);
+	if (desc->status & IRQ_MOVE_PCNTXT)
 		desc->chip->set_affinity(irq, cpumask);
-	} else {
+	else {
 		desc->status |= IRQ_MOVE_PENDING;
 		cpumask_copy(desc->pending_mask, cpumask);
 	}
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH] irq, x86: Remove IRQ_DISABLED check in process context IRQ move
  2009-04-13 22:20               ` [PATCH] irq, x86: Remove IRQ_DISABLED check in process context IRQ move Pallipadi, Venkatesh
@ 2009-04-14  1:40                 ` Eric W. Biederman
  2009-04-14 14:06                 ` [tip:irq/urgent] x86, irq: " tip-bot for Pallipadi, Venkatesh
  1 sibling, 0 replies; 21+ messages in thread
From: Eric W. Biederman @ 2009-04-14  1:40 UTC (permalink / raw)
  To: Pallipadi, Venkatesh
  Cc: Yinghai Lu, Li, Shaohua, Gary Hade, mingo, mingo, tglx, hpa, x86,
	linux-kernel, lcm, suresh.b.siddha

"Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com> writes:

> As discussed in the thread here
> http://marc.info/?l=linux-kernel&m=123964468521142&w=2
>
> On Fri, Apr 10, 2009 at 3:02 PM, Eric W. Biederman
> <ebiederm@xmission.com> wrote:
>>
>> It looks like some additional bugs have slipped in since last I looked.
>>
>> set_irq_affinity does this:
>> ifdef CONFIG_GENERIC_PENDING_IRQ
>>        if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
>>                cpumask_copy(desc->affinity, cpumask);
>>                desc->chip->set_affinity(irq, cpumask);
>>        } else {
>>                desc->status |= IRQ_MOVE_PENDING;
>>                cpumask_copy(desc->pending_mask, cpumask);
>>        }
>> #else
>>
>> That IRQ_DISABLED case is a software state and as such it has nothing to
>> do with how safe it is to move an irq in process context.
>>
>
> "Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com> writes:
>> On Sat, 2009-04-11 at 04:01 -0700, Eric W. Biederman wrote:
>> >
>> > If the goal is moving MSIs, we should modify the msi code to be safe
>> > in process context and to set IRQ_MOVE_PCNTXT.
>> >
>> > The only reason we migrate MSIs in interrupt context today is that there
>> > wasn't infrastructure for support migration both in interrupt context
>> > and outside of it.
>>
>> Yes. The idea here was to force the MSI migration to happen in process
>> context. One of the patches in the series did
>>
>>         disable_irq(dev->irq);
>>         irq_set_affinity(dev->irq, cpumask_of(dev->cpu));
>>         enable_irq(dev->irq);
>>
>> with the above patch adding irq/manage code check for interrupt disabled
>> and moving the interrupt in process context.
>>
>> IIRC, there was no IRQ_MOVE_PCNTXT when we were developing this HPET
>> code and we ended up having this ugly hack. IRQ_MOVE_PCNTXT was there
>> when we eventually submitted the patch upstream. But, looks like I did a
>> blind rebasing instead of using IRQ_MOVE_PCNTXT in hpet MSI code. That
>> was my fault. Will send a patch to fix this ugliness.
>
> Below patch fixes this. i.e., revert
> commit 932775a4ab622e3c99bd59f14cc7d96722f79501
> and add PCNTXT to HPET MSI setup. Also removes copying of desc->affinity
> in generic code as set_affinity routines are doing it internally.

Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>

This looks good.

Do you think you could take this one step farther, place a read
after the hpet_msi_write to flush the write to the interrupt source, 
and then finish up the work to change the irq reception setup?

Roughly like ir_set_msi_irq_affinity?

That way we really do get everything done in process context.

> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
> ---
>  arch/x86/kernel/apic/io_apic.c |    2 ++
>  kernel/irq/manage.c            |    5 ++---
>  2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
> index 767fe7e..aaf8212 100644
> --- a/arch/x86/kernel/apic/io_apic.c
> +++ b/arch/x86/kernel/apic/io_apic.c
> @@ -3667,12 +3667,14 @@ int arch_setup_hpet_msi(unsigned int irq)
>  {
>  	int ret;
>  	struct msi_msg msg;
> +	struct irq_desc *desc = irq_to_desc(irq);
>  
>  	ret = msi_compose_msg(NULL, irq, &msg);
>  	if (ret < 0)
>  		return ret;
>  
>  	hpet_msi_write(irq, &msg);
> +	desc->status |= IRQ_MOVE_PCNTXT;
>  	set_irq_chip_and_handler_name(irq, &hpet_msi_type, handle_edge_irq,
>  		"edge");
>  
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 7e2e7dd..2734eca 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -109,10 +109,9 @@ int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask)
>  	spin_lock_irqsave(&desc->lock, flags);
>  
>  #ifdef CONFIG_GENERIC_PENDING_IRQ
> -	if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
> -		cpumask_copy(desc->affinity, cpumask);
> +	if (desc->status & IRQ_MOVE_PCNTXT)
>  		desc->chip->set_affinity(irq, cpumask);
> -	} else {
> +	else {
>  		desc->status |= IRQ_MOVE_PENDING;
>  		cpumask_copy(desc->pending_mask, cpumask);
>  	}
> -- 
> 1.6.0.6

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [tip:irq/urgent] x86, irq: Remove IRQ_DISABLED check in process context IRQ move
  2009-04-13 22:20               ` [PATCH] irq, x86: Remove IRQ_DISABLED check in process context IRQ move Pallipadi, Venkatesh
  2009-04-14  1:40                 ` Eric W. Biederman
@ 2009-04-14 14:06                 ` tip-bot for Pallipadi, Venkatesh
  1 sibling, 0 replies; 21+ messages in thread
From: tip-bot for Pallipadi, Venkatesh @ 2009-04-14 14:06 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, venkatesh.pallipadi, lcm, shaohua.li,
	ebiederm, garyhade, tglx, mingo

Commit-ID:  6ec3cfeca04622e3d80c9270191cd7f5f88214af
Gitweb:     http://git.kernel.org/tip/6ec3cfeca04622e3d80c9270191cd7f5f88214af
Author:     Pallipadi, Venkatesh <venkatesh.pallipadi@intel.com>
AuthorDate: Mon, 13 Apr 2009 15:20:58 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Tue, 14 Apr 2009 15:21:13 +0200

x86, irq: Remove IRQ_DISABLED check in process context IRQ move

As discussed in the thread here:

  http://marc.info/?l=linux-kernel&m=123964468521142&w=2

Eric W. Biederman observed:

> It looks like some additional bugs have slipped in since last I looked.
>
> set_irq_affinity does this:
> #ifdef CONFIG_GENERIC_PENDING_IRQ
>        if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
>                cpumask_copy(desc->affinity, cpumask);
>                desc->chip->set_affinity(irq, cpumask);
>        } else {
>                desc->status |= IRQ_MOVE_PENDING;
>                cpumask_copy(desc->pending_mask, cpumask);
>        }
> #else
>
> That IRQ_DISABLED case is a software state and as such it has nothing to
> do with how safe it is to move an irq in process context.

[...]

>
> The only reason we migrate MSIs in interrupt context today is that there
> wasn't infrastructure to support migration both in interrupt context
> and outside of it.

Yes. The idea here was to force the MSI migration to happen in process
context. One of the patches in the series did

        disable_irq(dev->irq);
        irq_set_affinity(dev->irq, cpumask_of(dev->cpu));
        enable_irq(dev->irq);

with the patch above adding a check in the irq/manage code for disabled
interrupts, so that the interrupt would be moved in process context.

IIRC, IRQ_MOVE_PCNTXT did not exist when we were developing this HPET
code, so we ended up with this ugly hack. IRQ_MOVE_PCNTXT was available
by the time we eventually submitted the patch upstream, but it looks like
I did a blind rebase instead of using IRQ_MOVE_PCNTXT in the HPET MSI code.

The patch below fixes this: it reverts commit 932775a4ab622e3c99bd59f14cc
and adds IRQ_MOVE_PCNTXT to the HPET MSI setup. It also removes the copying
of desc->affinity in generic code, since the set_affinity routines already
do that internally.

Reported-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: "Li Shaohua" <shaohua.li@intel.com>
Cc: Gary Hade <garyhade@us.ibm.com>
Cc: "lcm@us.ibm.com" <lcm@us.ibm.com>
Cc: suresh.b.siddha@intel.com
LKML-Reference: <20090413222058.GB8211@linux-os.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 arch/x86/kernel/apic/io_apic.c |    2 ++
 kernel/irq/manage.c            |    5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index a2789e4..30da617 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -3670,12 +3670,14 @@ int arch_setup_hpet_msi(unsigned int irq)
 {
 	int ret;
 	struct msi_msg msg;
+	struct irq_desc *desc = irq_to_desc(irq);
 
 	ret = msi_compose_msg(NULL, irq, &msg);
 	if (ret < 0)
 		return ret;
 
 	hpet_msi_write(irq, &msg);
+	desc->status |= IRQ_MOVE_PCNTXT;
 	set_irq_chip_and_handler_name(irq, &hpet_msi_type, handle_edge_irq,
 		"edge");
 
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 7e2e7dd..2734eca 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -109,10 +109,9 @@ int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask)
 	spin_lock_irqsave(&desc->lock, flags);
 
 #ifdef CONFIG_GENERIC_PENDING_IRQ
-	if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
-		cpumask_copy(desc->affinity, cpumask);
+	if (desc->status & IRQ_MOVE_PCNTXT)
 		desc->chip->set_affinity(irq, cpumask);
-	} else {
+	else {
 		desc->status |= IRQ_MOVE_PENDING;
 		cpumask_copy(desc->pending_mask, cpumask);
 	}


Thread overview: 21+ messages
2009-04-08 21:07 [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption Gary Hade
2009-04-08 22:30 ` Yinghai Lu
2009-04-08 23:37   ` Gary Hade
2009-04-08 23:58     ` Yinghai Lu
2009-04-08 23:59       ` Yinghai Lu
2009-04-09 19:17         ` Gary Hade
2009-04-09 22:38           ` Yinghai Lu
2009-04-10  0:53             ` Gary Hade
2009-04-10  1:29 ` Eric W. Biederman
2009-04-10 20:09   ` Gary Hade
2009-04-10 22:02     ` Eric W. Biederman
2009-04-11  7:44       ` Yinghai Lu
2009-04-11  7:51       ` Yinghai Lu
2009-04-11 11:01         ` Eric W. Biederman
2009-04-13 17:41           ` Pallipadi, Venkatesh
2009-04-13 18:50             ` Eric W. Biederman
2009-04-13 22:20               ` [PATCH] irq, x86: Remove IRQ_DISABLED check in process context IRQ move Pallipadi, Venkatesh
2009-04-14  1:40                 ` Eric W. Biederman
2009-04-14 14:06                 ` [tip:irq/urgent] x86, irq: " tip-bot for Pallipadi, Venkatesh
2009-04-12 19:32 ` [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interrruption Eric W. Biederman
2009-04-13 21:09   ` Gary Hade
