From: Mathias Nyman <mathias.nyman@linux.intel.com>
To: Thomas Gleixner <tglx@linutronix.de>, x86@kernel.org
Cc: linux-pci <linux-pci@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Evan Green <evgreen@chromium.org>,
"Ghorai, Sukumar" <sukumar.ghorai@intel.com>,
"Amara, Madhusudanarao" <madhusudanarao.amara@intel.com>,
"Nandamuri, Srikanth" <srikanth.nandamuri@intel.com>
Subject: Re: MSI interrupt for xhci still lost on 5.6-rc6 after cpu hotplug
Date: Mon, 23 Mar 2020 11:42:29 +0200 [thread overview]
Message-ID: <b9fbd55a-7f97-088d-2cc2-4e4ea86d9440@linux.intel.com> (raw)
In-Reply-To: <87d0974akk.fsf@nanos.tec.linutronix.de>
On 20.3.2020 11.52, Thomas Gleixner wrote:
> Mathias,
>
> Mathias Nyman <mathias.nyman@linux.intel.com> writes:
>> I can reproduce the lost MSI interrupt issue on 5.6-rc6 which includes
>> the "Plug non-maskable MSI affinity race" patch.
>>
>> I can see this on a couple platforms, I'm running a script that first generates
>> a lot of usb traffic, and then in a busyloop sets irq affinity and turns off
>> and on cpus:
>>
>> for i in 1 3 5 7; do
>>     echo "1" > /sys/devices/system/cpu/cpu$i/online
>> done
>> echo "A" > "/proc/irq/*/smp_affinity"
>> echo "A" > "/proc/irq/*/smp_affinity"
>> echo "F" > "/proc/irq/*/smp_affinity"
>> for i in 1 3 5 7; do
>>     echo "0" > /sys/devices/system/cpu/cpu$i/online
>> done
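[Editor's note: as quoted, the affinity writes put the glob inside double quotes, which the shell will not expand; presumably the real script iterated over /proc/irq/*/smp_affinity. A minimal sketch of an equivalent loop follows. The SYSROOT/PROCROOT parameters are additions for illustration (they let the functions be dry-run against a fake tree) and are not part of the original script.]

```shell
# Sketch of the repro loop the quoted script appears to intend.
# SYSROOT/PROCROOT are hypothetical parameters added here so the functions
# can be exercised against a fake tree; on a real system they default to
# /sys and /proc, and the writes need root.
SYSROOT="${SYSROOT:-/sys}"
PROCROOT="${PROCROOT:-/proc}"

toggle_cpus() {                       # $1 = 0 or 1
    for i in 1 3 5 7; do
        echo "$1" > "$SYSROOT/devices/system/cpu/cpu$i/online"
    done
}

set_affinity_all() {                  # $1 = hex cpumask, e.g. A or F
    # The glob must be unquoted so it expands to every IRQ's file.
    for f in "$PROCROOT"/irq/*/smp_affinity; do
        echo "$1" > "$f"
    done
}

# Busyloop driver (run as root on a real system):
# while :; do toggle_cpus 1; set_affinity_all A; set_affinity_all F; toggle_cpus 0; done
```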
>>
>> trace snippet:
>> <idle>-0 [001] d.h. 129.676900: xhci_irq: xhci irq
>> <idle>-0 [001] d.h. 129.677507: xhci_irq: xhci irq
>> <idle>-0 [001] d.h. 129.677556: xhci_irq: xhci irq
>> <idle>-0 [001] d.h. 129.677647: xhci_irq: xhci irq
>> <...>-14 [001] d..1 129.679802: msi_set_affinity: direct update msi 122, vector 33 -> 33, apicid: 2 -> 6
>
> Looks like a regular affinity setting in interrupt context, but I can't
> make sense of the time stamps
I think so; everything still worked normally after this one.
>
>> <idle>-0 [003] d.h. 129.682639: xhci_irq: xhci irq
>> <idle>-0 [003] d.h. 129.702380: xhci_irq: xhci irq
>> <idle>-0 [003] d.h. 129.702493: xhci_irq: xhci irq
>> migration/3-24 [003] d..1 129.703150: msi_set_affinity: direct update msi 122, vector 33 -> 33, apicid: 6 -> 0
>
> So this is a CPU offline operation and after that irq 122 is silent, right?
Yes, after this irq 122 was silent.
>
>> kworker/0:0-5 [000] d.h. 131.328790: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>> kworker/0:0-5 [000] d.h. 133.312704: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>> kworker/0:0-5 [000] d.h. 135.360786: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>> <idle>-0 [000] d.h. 137.344694: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>> kworker/0:0-5 [000] d.h. 139.128679: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>> kworker/0:0-5 [000] d.h. 141.312686: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>> kworker/0:0-5 [000] d.h. 143.360703: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>> kworker/0:0-5 [000] d.h. 145.344791: msi_set_affinity: direct update msi 121, vector 34 -> 34, apicid: 0 -> 0
>
> That kworker context looks fishy. Can you please enable stacktraces in
> the tracer so I can see the call chains leading to this? OTOH that's irq
> 121 not 122. Anyway moar information is always useful.
>
> And please add the patch below.
>
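[Editor's note: the stacktrace request above maps to the ftrace "stacktrace" trace option. A minimal sketch, assuming the usual tracefs mount point; the TRACEFS parameter is an addition for illustration, not from the thread.]

```shell
# Sketch: record a stack dump with every trace entry, as requested above.
# TRACEFS is parameterized here for illustration; on a real system it is
# /sys/kernel/tracing (or /sys/kernel/debug/tracing on older kernels).
TRACEFS="${TRACEFS:-/sys/kernel/tracing}"

enable_stacktraces() {
    echo stacktrace > "$TRACEFS/trace_options"   # append a stack trace to each entry
    echo 1 > "$TRACEFS/tracing_on"               # make sure tracing is running
}
```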
The full function trace with the patch applied is huge; it can be found compressed at
https://drive.google.com/drive/folders/19AFZe32DYk4Kzxi8VYv-OWmNOCyIY6M5?usp=sharing

xhci_traces.tgz contains:
trace_full: the full function trace
trace: timestamps ~48.29 to ~48.93 of the trace above, the section with the last xhci irq
trace_prink_only: only the trace_printk() lines of "trace" above
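[Editor's note: a sketch of how the smaller files could have been cut from trace_full. The awk field position and the grep patterns are assumptions based on the trace lines quoted in this thread, not the actual commands used.]

```shell
# Cut a timestamp window out of an ftrace dump. In the quoted lines the
# timestamp is the 4th whitespace-separated field with a trailing colon
# ("129.676900:"); that layout is assumed here.
extract_window() {                    # $1=infile $2=start $3=end (seconds)
    awk -v s="$2" -v e="$3" '{
        t = $4; sub(/:$/, "", t)
        if (t + 0 >= s && t + 0 <= e) print
    }' "$1"
}

# Keep only the trace_printk() lines; the markers seen in this thread
# are msi_set_affinity and xhci_irq.
printk_only() {                       # $1=infile
    grep -E 'msi_set_affinity|xhci_irq' "$1"
}
```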
This time xhci interrupts stopped after:
migration/3-24 [003] d..1 48.530271: msi_set_affinity: twostep update msi, irq 122, vector 33 -> 34, apicid: 6 -> 4
Thanks
-Mathias
Thread overview: 23+ messages
2020-03-18 19:25 MSI interrupt for xhci still lost on 5.6-rc6 after cpu hotplug Mathias Nyman
2020-03-19 20:24 ` Evan Green
2020-03-20 8:07 ` Mathias Nyman
2020-03-20 9:52 ` Thomas Gleixner
2020-03-23 9:42 ` Mathias Nyman [this message]
2020-03-23 14:10 ` Thomas Gleixner
2020-03-23 20:32 ` Mathias Nyman
2020-03-24 0:24 ` Thomas Gleixner
2020-03-24 16:17 ` Evan Green
2020-03-24 19:03 ` Thomas Gleixner
2020-05-01 18:43 ` Raj, Ashok
2020-05-05 19:36 ` Thomas Gleixner
2020-05-05 20:16 ` Raj, Ashok
2020-05-05 21:47 ` Thomas Gleixner
2020-05-07 12:18 ` Raj, Ashok
2020-05-07 12:53 ` Thomas Gleixner
[not found] ` <20200507175715.GA22426@otc-nc-03>
2020-05-07 19:41 ` Thomas Gleixner
2020-03-25 17:12 ` Mathias Nyman
[not found] <20200508005528.GB61703@otc-nc-03>
2020-05-08 11:04 ` Thomas Gleixner
2020-05-08 16:09 ` Raj, Ashok
2020-05-08 16:49 ` Thomas Gleixner
2020-05-11 19:03 ` Raj, Ashok
2020-05-11 20:14 ` Thomas Gleixner