linux-kernel.vger.kernel.org archive mirror
From: Gary Hade <garyhade@us.ibm.com>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Gary Hade <garyhade@us.ibm.com>,
	mingo@elte.hu, mingo@redhat.com, tglx@linutronix.de,
	hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org,
	lcm@us.ibm.com
Subject: Re: [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption
Date: Fri, 10 Apr 2009 13:09:19 -0700	[thread overview]
Message-ID: <20090410200919.GA7242@us.ibm.com> (raw)
In-Reply-To: <m1r601fg9l.fsf@fess.ebiederm.org>

On Thu, Apr 09, 2009 at 06:29:10PM -0700, Eric W. Biederman wrote:
> Gary Hade <garyhade@us.ibm.com> writes:
> 
> > Impact: Eliminates a race that can leave the system in an
> >         unusable state
> >
> > During rapid offlining of multiple CPUs there is a chance
> > that an IRQ affinity move destination CPU will be offlined
> > before the IRQ affinity move initiated during the offlining
> > of a previous CPU completes.  This can happen when the device
> > is not very active and thus fails to generate the IRQ that is
> > needed to complete the IRQ affinity move before the move
> > destination CPU is offlined.  When this happens there is an
> > -EBUSY return from __assign_irq_vector() during the offlining
> > of the IRQ move destination CPU which prevents initiation of
> > a new IRQ affinity move operation to an online CPU.  This
> > leaves the IRQ affinity set to an offlined CPU.
> >
> > I have been able to reproduce the problem on some of our
> > systems using the following script.  When the system is idle
> > the problem often reproduces during the first CPU offlining
> > sequence.
> 
> You appear to be focusing on the IBM x460 and x3850.

True.  I have also observed IRQ interruptions on an IBM x3950 M2
which I believe, but am not certain, were due to the other problem,
the one caused by an "I/O redirection table register write with
Remote IRR bit set".

I intend to do more testing on the x3950 M2 and other
IBM System x servers but, unfortunately, I do not currently
have access to any non-IBM Intel-based MP servers.  I was
hoping that my testing request might at least get some
others interested in running the simple test script on their
systems and reporting their results.  Have you perhaps tried
the test on any of the Intel-based MP systems that you have
access to?
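
(The test script itself is not reproduced in this reply.  Purely as
an illustration of the kind of offline/online stress loop being
discussed, and not the actual script, a minimal C sketch using the
standard /sys/devices/system/cpu/cpuN/online interface might look
like this:)

    /*
     * Illustrative sketch only: repeatedly offline and online every
     * CPU except cpu0 through sysfs.  Assumes the standard
     * /sys/devices/system/cpu/cpuN/online attribute and at most 64 CPUs.
     */
    #include <stdio.h>
    #include <unistd.h>

    static void set_online(int cpu, const char *val)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/online", cpu);
        f = fopen(path, "w");
        if (!f)
            return;            /* CPU absent or not offlineable */
        fputs(val, f);
        fclose(f);
    }

    int main(void)
    {
        int iter, cpu;

        for (iter = 0; iter < 100; iter++) {
            for (cpu = 1; cpu < 64; cpu++)
                set_online(cpu, "0");    /* offline */
            sleep(1);
            for (cpu = 1; cpu < 64; cpu++)
                set_online(cpu, "1");    /* online */
            sleep(1);
        }
        return 0;
    }

Running such a loop on an otherwise idle system matches the failure
mode described above: the device stays quiet, so the interrupt needed
to complete a pending affinity move may never arrive before the move
destination CPU is itself offlined.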

> Can you describe
> what kind of interrupt setup you are running.

Being somewhat of an ioapic neophyte, I am not exactly sure
what you are asking for here.  This is the ioapic information
logged during boot, if that helps at all.
x3850:
    ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
    IOAPIC[0]: apic_id 15, version 0, address 0xfec00000, GSI 0-35
    ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
    IOAPIC[1]: apic_id 14, version 0, address 0xfec01000, GSI 36-71
x460:
    ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[0])
    IOAPIC[0]: apic_id 15, version 17, address 0xfec00000, GSI 0-35
    ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[36])
    IOAPIC[1]: apic_id 14, version 17, address 0xfec01000, GSI 36-71
    ACPI: IOAPIC (id[0x0d] address[0xfec02000] gsi_base[72])
    IOAPIC[2]: apic_id 13, version 17, address 0xfec02000, GSI 72-107
    ACPI: IOAPIC (id[0x0c] address[0xfec03000] gsi_base[108])
    IOAPIC[3]: apic_id 12, version 17, address 0xfec03000, GSI 108-143

> 
> You may be the first person to actually hit the problems with cpu offlining
> and irq migration that have theoretically been present for a long time.

Your "Safely cleanup an irq after moving it" changes have been
present in mainline for quite some time so I have been thinking
about this as well.

I can certainly understand why it may not be very likely
for users to see the problem caused by an "I/O redirection
table register write with Remote IRR bit set".  It has actually
been fairly difficult to reproduce.  I very much doubt that
there are many users out there who would be continuously offlining
and onlining all the offlineable CPUs from a script or program on
a heavily loaded system.  IMO, this would _not_ be a very common
usage scenario.  The test script that I provided usually performs
many CPU offline/online iterations before the problem is triggered.
A much more likely usage scenario, for which there is already code
in ack_apic_level() to avoid the problem, would be IRQ affinity
adjustments requested from user level (e.g. by the irqbalance daemon)
on a very active system.
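
(For illustration, those user-level adjustments go through
/proc/irq/<N>/smp_affinity, the same interface the irqbalance daemon
uses.  A minimal sketch, with a hypothetical IRQ number and CPU mask,
might be:)

    /*
     * Illustrative sketch only: set the affinity of one IRQ from user
     * level by writing a hex CPU mask to /proc/irq/<N>/smp_affinity.
     * The IRQ number and mask below are arbitrary examples.
     */
    #include <stdio.h>

    int main(void)
    {
        const int irq = 28;          /* hypothetical device IRQ */
        const char *mask = "2";      /* hex mask: CPU 1 only */
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return 1;
        }
        fputs(mask, f);
        fclose(f);
        return 0;
    }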

It is less clear to me why users have not been reporting the
idle-system race, but I suspect that
  - script- or program-driven offlining of multiple CPUs
    may not be very common
  - the actual affinity on an idle system is usually set to
    cpu0, which is always online (see the sketch below)
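
(The sketch referred to above: compare each /proc/irq/<N>/smp_affinity
mask against the set of online CPUs and flag any IRQ whose mask no
longer intersects it.  This is only an illustration; it assumes at
most 64 CPUs and a single-word hex mask, and a real check would also
handle comma-separated mask words.)

    /*
     * Illustrative sketch only: report IRQs whose configured affinity
     * mask contains no currently online CPU.
     */
    #include <stdio.h>
    #include <dirent.h>
    #include <ctype.h>

    static unsigned long long online_mask(void)
    {
        unsigned long long mask = 1;   /* cpu0 has no 'online' file */
        char path[64];
        int cpu, val;
        FILE *f;

        for (cpu = 1; cpu < 64; cpu++) {
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/online", cpu);
            f = fopen(path, "r");
            if (!f)
                continue;
            if (fscanf(f, "%d", &val) == 1 && val)
                mask |= 1ULL << cpu;
            fclose(f);
        }
        return mask;
    }

    int main(void)
    {
        unsigned long long online = online_mask(), aff;
        struct dirent *de;
        DIR *d = opendir("/proc/irq");
        char path[80];
        FILE *f;

        if (!d)
            return 1;
        while ((de = readdir(d)) != NULL) {
            if (!isdigit((unsigned char)de->d_name[0]))
                continue;
            snprintf(path, sizeof(path),
                     "/proc/irq/%s/smp_affinity", de->d_name);
            f = fopen(path, "r");
            if (!f)
                continue;
            if (fscanf(f, "%llx", &aff) == 1 && (aff & online) == 0)
                printf("IRQ %s: affinity %llx points only at offline CPUs\n",
                       de->d_name, aff);
            fclose(f);
        }
        closedir(d);
        return 0;
    }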

I am glad you are looking at this since I know it involves code
that you should be quite familiar with.  Thanks!

Gary

-- 
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503  IBM T/L: 775-4503
garyhade@us.ibm.com
http://www.ibm.com/linux/ltc



Thread overview: 21+ messages
2009-04-08 21:07 [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption Gary Hade
2009-04-08 22:30 ` Yinghai Lu
2009-04-08 23:37   ` Gary Hade
2009-04-08 23:58     ` Yinghai Lu
2009-04-08 23:59       ` Yinghai Lu
2009-04-09 19:17         ` Gary Hade
2009-04-09 22:38           ` Yinghai Lu
2009-04-10  0:53             ` Gary Hade
2009-04-10  1:29 ` Eric W. Biederman
2009-04-10 20:09   ` Gary Hade [this message]
2009-04-10 22:02     ` Eric W. Biederman
2009-04-11  7:44       ` Yinghai Lu
2009-04-11  7:51       ` Yinghai Lu
2009-04-11 11:01         ` Eric W. Biederman
2009-04-13 17:41           ` Pallipadi, Venkatesh
2009-04-13 18:50             ` Eric W. Biederman
2009-04-13 22:20               ` [PATCH] irq, x86: Remove IRQ_DISABLED check in process context IRQ move Pallipadi, Venkatesh
2009-04-14  1:40                 ` Eric W. Biederman
2009-04-14 14:06                 ` [tip:irq/urgent] x86, irq: " tip-bot for Pallipadi, Venkatesh
2009-04-12 19:32 ` [PATCH 2/3] [BUGFIX] x86/x86_64: fix CPU offlining triggered inactive device IRQ interruption Eric W. Biederman
2009-04-13 21:09   ` Gary Hade
