linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
       [not found] <CAA85sZuuS=UHzhk0DabN45jCu-GYD-DxMOY8dd68Znnk5wsXVg@mail.gmail.com>
@ 2020-12-14  5:44 ` Bjorn Helgaas
  2020-12-14  9:14   ` Ian Kumlien
  0 siblings, 1 reply; 12+ messages in thread
From: Bjorn Helgaas @ 2020-12-14  5:44 UTC (permalink / raw)
  To: Ian Kumlien
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan, netdev,
	linux-kernel

[+cc Jesse, Tony, David, Jakub, Heiner, lists in case there's an ASPM
issue with I211 or Realtek NICs.  Beginning of thread:
https://lore.kernel.org/r/20201024205548.1837770-1-ian.kumlien@gmail.com

Short story: Ian has:

  Root Port --- Switch --- I211 NIC
                       \-- multifunction Realtek NIC, etc

and the I211 performance is poor with ASPM L1 enabled on both links
in the path to it.  The patch here disables ASPM on the upstream link
and fixes the performance, but AFAICT the devices in that path give us
no reason to disable L1.  If I understand the spec correctly, the
Realtek device should not be relevant to the I211 path.]

On Sun, Dec 13, 2020 at 10:39:53PM +0100, Ian Kumlien wrote:
> On Sun, Dec 13, 2020 at 12:47 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Sat, Oct 24, 2020 at 10:55:46PM +0200, Ian Kumlien wrote:
> > > Make pcie_aspm_check_latency comply with the PCIe spec, specifically:
> > > "5.4.1.2.2. Exit from the L1 State"
> > >
> > > Which makes it clear that each switch is required to initiate a
> > > transition within 1μs from receiving it, accumulating this latency and
> > > then we have to wait for the slowest link along the path before
> > > entering L0 state from L1.
> > > ...
> >
> > > On my specific system:
> > > 03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
> > > 04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 816e (rev 1a)
> > >
> > >             Exit latency       Acceptable latency
> > > Tree:       L1       L0s       L1       L0s
> > > ----------  -------  -----     -------  ------
> > > 00:01.2     <32 us   -
> > > | 01:00.0   <32 us   -
> > > |- 02:03.0  <32 us   -
> > > | \03:00.0  <16 us   <2us      <64 us   <512ns
> > > |
> > > \- 02:04.0  <32 us   -
> > >   \04:00.0  <64 us   unlimited <64 us   <512ns
> > >
> > > 04:00.0's latency is the same as the maximum it allows so as we walk the path
> > > the first switchs startup latency will pass the acceptable latency limit
> > > for the link, and as a side-effect it fixes my issues with 03:00.0.
> > >
> > > Without this patch, 03:00.0 misbehaves and only gives me ~40 mbit/s over
> > > links with 6 or more hops. With this patch I'm back to a maximum of ~933
> > > mbit/s.
> >
> > There are two paths here that share a Link:
> >
> >   00:01.2 --- 01:00.0 -- 02:03.0 --- 03:00.0 I211 NIC
> >   00:01.2 --- 01:00.0 -- 02:04.0 --- 04:00.x multifunction Realtek
> >
> > 1) The path to the I211 NIC includes four Ports and two Links (the
> >    connection between 01:00.0 and 02:03.0 is internal Switch routing,
> >    not a Link).
> 
> >    The Ports advertise L1 exit latencies of <32us, <32us, <32us,
> >    <16us.  If both Links are in L1 and 03:00.0 initiates L1 exit at T,
> >    01:00.0 initiates L1 exit at T + 1.  A TLP from 03:00.0 may see up
> >    to 1 + 32 = 33us of L1 exit latency.
> >
> >    The NIC can tolerate up to 64us of L1 exit latency, so it is safe
> >    to enable L1 for both Links.
> >
> > 2) The path to the Realtek device is similar except that the Realtek
> >    L1 exit latency is <64us.  If both Links are in L1 and 04:00.x
> >    initiates L1 exit at T, 01:00.0 again initiates L1 exit at T + 1,
> >    but a TLP from 04:00.x may see up to 1 + 64 = 65us of L1 exit
> >    latency.
> >
> >    The Realtek device can only tolerate 64us of latency, so it is not
> >    safe to enable L1 for both Links.  It should be safe to enable L1
> >    on the shared link because the exit latency for that link would be
> >    <32us.
> 
> 04:00.0:
> DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
> LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s
> unlimited, L1 <64us
> 
> So maximum latency for the entire link has to be <64 us
> For the device to leave L1 ASPM takes <64us
> 
> So the device itself is the slowest entry along the link, which
> means that nothing else along that path can have ASPM enabled

Yes.  That's what I said above: "it is not safe to enable L1 for both
Links."  Unless I'm missing something, we agree on that.

I also said that it should be safe to enable L1 on the shared Link
(from 00:01.2 to 01:00.0) because if the downstream Link is always in
L0, the exit latency of the shared Link should be <32us, and 04:00.x
can tolerate 64us.

> > > The original code path did:
> > > 04:00:0-02:04.0 max latency 64    -> ok
> > > 02:04.0-01:00.0 max latency 32 +1 -> ok
> > > 01:00.0-00:01.2 max latency 32 +2 -> ok
> > >
> > > And thus didn't see any L1 ASPM latency issues.
> > >
> > > The new code does:
> > > 04:00:0-02:04.0 max latency 64    -> ok
> > > 02:04.0-01:00.0 max latency 64 +1 -> latency exceeded
> > > 01:00.0-00:01.2 max latency 64 +2 -> latency exceeded
> >
> > [Nit: I don't think we should add 1 for the 02:04.0 -- 01:00.0 piece
> > because that's internal Switch routing, not a Link.  But even without
> > that extra microsecond, this path does exceed the acceptable latency
> > since 1 + 64 = 65us, and 04:00.0 can only tolerate 64us.]
> 
> It does report L1 ASPM on both ends, so the links will be counted as
> such in the code.

This is a bit of a tangent and we shouldn't get too wrapped up in it.
This is a confusing aspect of PCIe.  We're talking about this path:

  00:01.2 --- [01:00.0 -- 02:04.0] --- 04:00.x multifunction Realtek

This path only contains two Links.  The first one is 
00:01.2 --- 01:00.0, and the second one is 02:04.0 --- 04:00.x.

01:00.0 is a Switch Upstream Port and 02:04.0 is a Switch Downstream
Port.  The connection between them is not a Link; it is some internal
wiring of the Switch that is completely opaque to software.

The ASPM information and knobs in 01:00.0 apply to the Link on its
upstream side, and the ASPM info and knobs in 02:04.0 apply to the
Link on its downstream side.

The example in sec 5.4.1.2.2 contains three Links.  The L1 exit latency
for the Link is the max of the exit latencies at each end:

  Link 1: max(32, 8) = 32us
  Link 2: max(8, 32) = 32us
  Link 3: max(32, 8) = 32us

The total delay for a TLP starting at the downstream end of Link 3
is 32 + 2 = 32us.

In the path to your 04:00.x Realtek device:

  Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
  Link 2 (from 02:04.0 to 04:00.x): max(32, 64) = 64us

If L1 were enabled on both Links, the exit latency would be 64 + 1 =
65us.

> I also assume that it can power down individual ports... and enter
> rest state if no links are up.

I don't think this is quite true -- a Link can't enter L1 unless the
Ports on both ends have L1 enabled, so I don't think it makes sense to
talk about an individual Port being in L1.

> > > It correctly identifies the issue.
> > >
> > > For reference, pcie information:
> > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> >
> > The "lspci without my patches" [1] shows L1 enabled for the shared
> > Link from 00:01.2 --- 01:00.0 and for the Link to 03:00.0 (I211), but
> > not for the Link to 04:00.x (Realtek).
> >
> > Per my analysis above, that looks like it *should* be a safe
> > configuration.  03:00.0 can tolerate 64us, actual is <33us.  04:00.0
> > can tolerate 64us, actual should be <32us since only the shared Link
> > is in L1.
> 
> See above.

As I said above, if we enabled L1 only on the shared Link from 00:01.2
to 01:00.0, the exit latency should be acceptable.  In that case, a
TLP from 04:00.x would see only 32us of latency:

  Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us

and 04:00.x can tolerate 64us.

> > However, the commit log at [2] shows L1 *enabled* for both the shared
> > Link from 00:01.2 --- 01:00.0 and the 02:04.0 --- 04:00.x Link, and
> > that would definitely be a problem.
> >
> > Can you explain the differences between [1] and [2]?
> 
> I don't understand which sections you're referring to.

[1] is the "lspci without my patches" attachment of bugzilla #209725,
which is supposed to show the problem this patch solves.  We're
talking about the path to 04:00.x, and [1] show this:

  01:00.2 L1+
  01:00.0 L1+
  02:04.0 L1-
  04:00.0 L1-

AFAICT, that should be a legal configuration as far as 04:00.0 is
concerned, so it's not a reason for this patch.

[2] is a previous posting of this same patch, and its commit log
includes information about the same path to 04:00.x, but the "LnkCtl
Before" column shows:

  01:00.2 L1+
  01:00.0 L1+
  02:04.0 L1+
  04:00.0 L1+

I don't know why [1] shows L1 disabled on the downstream Link, while
[2] shows L1 *enabled* on the same Link.

> > > Kai-Heng Feng has a machine that will not boot with ASPM without
> > > this patch, information is documented here:
> > > https://bugzilla.kernel.org/show_bug.cgi?id=209671
> >
> > I started working through this info, too, but there's not enough
> > information to tell what difference this patch makes.  The attachments
> > compare:
> >
> >   1) CONFIG_PCIEASPM_DEFAULT=y without the patch [3] and
> >   2) CONFIG_PCIEASPM_POWERSAVE=y *with* the patch [4]
> >
> > Obviously CONFIG_PCIEASPM_POWERSAVE=y will configure things
> > differently than CONFIG_PCIEASPM_DEFAULT=y, so we can't tell what
> > changes are due to the config change and what are due to the patch.
> >
> > The lspci *with* the patch ([4]) shows L0s and L1 enabled at almost
> > every possible place.  Here are the Links, how they're configured, and
> > my analysis of the exit latencies vs acceptable latencies:
> >
> >   00:01.1 --- 01:00.0      L1+ (                  L1 <64us vs unl)
> >   00:01.2 --- 02:00.0      L1+ (                  L1 <64us vs 64us)
> >   00:01.3 --- 03:00.0      L1+ (                  L1 <64us vs 64us)
> >   00:01.4 --- 04:00.0      L1+ (                  L1 <64us vs unl)
> >   00:08.1 --- 05:00.x L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> >   00:08.2 --- 06:00.0 L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> >
> > So I can't tell what change prevents the freeze.  I would expect the
> > patch would cause us to *disable* L0s or L1 somewhere.
> >
> > The only place [4] shows ASPM disabled is for 05:00.1.  The spec says
> > we should program the same value in all functions of a multi-function
> > device.  This is a non-ARI device, so "only capabilities enabled in
> > all functions are enabled for the component as a whole."  That would
> > mean that L0s and L1 are effectively disabled for 05:00.x even though
> > 05:00.0 claims they're enabled.  But the latencies say ASPM L0s and L1
> > should be safe to be enabled.  This looks like another bug that's
> > probably unrelated.
> 
> I don't think it's unrelated, i suspect it's how PCIe works with
> multiple links...  a device can cause some kind of head of queue
> stalling - i don't know how but it really looks like it.

The text in quotes above is straight out of the spec (PCIe r5.0, sec
7.5.3.7).  Either the device works that way or it's not compliant.

The OS configures ASPM based on the requirements and capabilities
advertised by the device.  If a device has any head of queue stalling
or similar issues, those must be comprehended in the numbers
advertised by the device.  It's not up to the OS to speculate about
issues like that.

> > The patch might be correct; I haven't actually analyzed the code.  But
> > the commit log doesn't make sense to me yet.
> 
> I personally don't think that all this PCI information is required,
> the linux kernel is currently doing it wrong according to the spec.

We're trying to establish exactly *what* Linux is doing wrong.  So far
we don't have a good explanation of that.

Based on [1], in the path to 03:00.0, both Links have L1 enabled, with
an exit latency of <33us, and 03:00.0 can tolerate 64us.  That should
work fine.

Also based on [1], in the path to 04:00.x, the upstream Link has L1
enabled and the downstream Link has L1 disabled, for an exit latency
of <32us, and 04:00.0 can tolerate 64us.  That should also work fine.

(Alternately, disabling L1 on the upstream Link and enabling it on the
downstream Link should have an exit latency of <64us and 04:00.0 can
tolerate 64us, so that should work fine, too.)

> Also, since it's clearly doing the wrong thing, I'm worried that
> dists will take a kernel enable aspm and there will be alot of
> bugreports of non-booting systems or other weird issues... And the
> culprit was known all along.

There's clearly a problem on your system, but I don't know yet whether
Linux is doing something wrong, a device in your system is designed
incorrectly, or a device is designed correctly but the instance in
your system is defective.

> It's been five months...

I apologize for the delay.  ASPM is a subtle area of PCIe, the Linux
code is complicated, and we have a long history of issues with it.  I
want to fix the problem, but I want to make sure we do it in a way
that matches the spec so the fix applies to all systems.  I don't want
a magic fix that fixes your system in a way I don't quite understand.

Obviously *you* understand this, so hopefully it's just a matter of
pounding it through my thick skull :)

> > [1] https://bugzilla.kernel.org/attachment.cgi?id=293047
> > [2] https://lore.kernel.org/linux-pci/20201007132808.647589-1-ian.kumlien@gmail.com/
> > [3] https://bugzilla.kernel.org/attachment.cgi?id=292955
> > [4] https://bugzilla.kernel.org/attachment.cgi?id=292957
> >
> > > Signed-off-by: Ian Kumlien <ian.kumlien@gmail.com>
> > > Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > > ---
> > >  drivers/pci/pcie/aspm.c | 22 ++++++++++++++--------
> > >  1 file changed, 14 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> > > index 253c30cc1967..c03ead0f1013 100644
> > > --- a/drivers/pci/pcie/aspm.c
> > > +++ b/drivers/pci/pcie/aspm.c
> > > @@ -434,7 +434,7 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
> > >
> > >  static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > >  {
> > > -     u32 latency, l1_switch_latency = 0;
> > > +     u32 latency, l1_max_latency = 0, l1_switch_latency = 0;
> > >       struct aspm_latency *acceptable;
> > >       struct pcie_link_state *link;
> > >
> > > @@ -456,10 +456,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > >               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
> > >                   (link->latency_dw.l0s > acceptable->l0s))
> > >                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> > > +
> > >               /*
> > >                * Check L1 latency.
> > > -              * Every switch on the path to root complex need 1
> > > -              * more microsecond for L1. Spec doesn't mention L0s.
> > > +              *
> > > +              * PCIe r5.0, sec 5.4.1.2.2 states:
> > > +              * A Switch is required to initiate an L1 exit transition on its
> > > +              * Upstream Port Link after no more than 1 μs from the beginning of an
> > > +              * L1 exit transition on any of its Downstream Port Links.
> > >                *
> > >                * The exit latencies for L1 substates are not advertised
> > >                * by a device.  Since the spec also doesn't mention a way
> > > @@ -469,11 +473,13 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > >                * L1 exit latencies advertised by a device include L1
> > >                * substate latencies (and hence do not do any check).
> > >                */
> > > -             latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > -             if ((link->aspm_capable & ASPM_STATE_L1) &&
> > > -                 (latency + l1_switch_latency > acceptable->l1))
> > > -                     link->aspm_capable &= ~ASPM_STATE_L1;
> > > -             l1_switch_latency += 1000;
> > > +             if (link->aspm_capable & ASPM_STATE_L1) {
> > > +                     latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > +                     l1_max_latency = max_t(u32, latency, l1_max_latency);
> > > +                     if (l1_max_latency + l1_switch_latency > acceptable->l1)
> > > +                             link->aspm_capable &= ~ASPM_STATE_L1;
> > > +                     l1_switch_latency += 1000;
> > > +             }
> > >
> > >               link = link->parent;
> > >       }
> > > --
> > > 2.29.1
> > >

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-14  5:44 ` [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check Bjorn Helgaas
@ 2020-12-14  9:14   ` Ian Kumlien
  2020-12-14 14:02     ` Bjorn Helgaas
  0 siblings, 1 reply; 12+ messages in thread
From: Ian Kumlien @ 2020-12-14  9:14 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Mon, Dec 14, 2020 at 6:44 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Jesse, Tony, David, Jakub, Heiner, lists in case there's an ASPM
> issue with I211 or Realtek NICs.  Beginning of thread:
> https://lore.kernel.org/r/20201024205548.1837770-1-ian.kumlien@gmail.com
>
> Short story: Ian has:
>
>   Root Port --- Switch --- I211 NIC
>                        \-- multifunction Realtek NIC, etc
>
> and the I211 performance is poor with ASPM L1 enabled on both links
> in the path to it.  The patch here disables ASPM on the upstream link
> and fixes the performance, but AFAICT the devices in that path give us
> no reason to disable L1.  If I understand the spec correctly, the
> Realtek device should not be relevant to the I211 path.]
>
> On Sun, Dec 13, 2020 at 10:39:53PM +0100, Ian Kumlien wrote:
> > On Sun, Dec 13, 2020 at 12:47 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Sat, Oct 24, 2020 at 10:55:46PM +0200, Ian Kumlien wrote:
> > > > Make pcie_aspm_check_latency comply with the PCIe spec, specifically:
> > > > "5.4.1.2.2. Exit from the L1 State"
> > > >
> > > > Which makes it clear that each switch is required to initiate a
> > > > transition within 1μs from receiving it, accumulating this latency and
> > > > then we have to wait for the slowest link along the path before
> > > > entering L0 state from L1.
> > > > ...
> > >
> > > > On my specific system:
> > > > 03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
> > > > 04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 816e (rev 1a)
> > > >
> > > >             Exit latency       Acceptable latency
> > > > Tree:       L1       L0s       L1       L0s
> > > > ----------  -------  -----     -------  ------
> > > > 00:01.2     <32 us   -
> > > > | 01:00.0   <32 us   -
> > > > |- 02:03.0  <32 us   -
> > > > | \03:00.0  <16 us   <2us      <64 us   <512ns
> > > > |
> > > > \- 02:04.0  <32 us   -
> > > >   \04:00.0  <64 us   unlimited <64 us   <512ns
> > > >
> > > > 04:00.0's latency is the same as the maximum it allows so as we walk the path
> > > > the first switchs startup latency will pass the acceptable latency limit
> > > > for the link, and as a side-effect it fixes my issues with 03:00.0.
> > > >
> > > > Without this patch, 03:00.0 misbehaves and only gives me ~40 mbit/s over
> > > > links with 6 or more hops. With this patch I'm back to a maximum of ~933
> > > > mbit/s.
> > >
> > > There are two paths here that share a Link:
> > >
> > >   00:01.2 --- 01:00.0 -- 02:03.0 --- 03:00.0 I211 NIC
> > >   00:01.2 --- 01:00.0 -- 02:04.0 --- 04:00.x multifunction Realtek
> > >
> > > 1) The path to the I211 NIC includes four Ports and two Links (the
> > >    connection between 01:00.0 and 02:03.0 is internal Switch routing,
> > >    not a Link).
> >
> > >    The Ports advertise L1 exit latencies of <32us, <32us, <32us,
> > >    <16us.  If both Links are in L1 and 03:00.0 initiates L1 exit at T,
> > >    01:00.0 initiates L1 exit at T + 1.  A TLP from 03:00.0 may see up
> > >    to 1 + 32 = 33us of L1 exit latency.
> > >
> > >    The NIC can tolerate up to 64us of L1 exit latency, so it is safe
> > >    to enable L1 for both Links.
> > >
> > > 2) The path to the Realtek device is similar except that the Realtek
> > >    L1 exit latency is <64us.  If both Links are in L1 and 04:00.x
> > >    initiates L1 exit at T, 01:00.0 again initiates L1 exit at T + 1,
> > >    but a TLP from 04:00.x may see up to 1 + 64 = 65us of L1 exit
> > >    latency.
> > >
> > >    The Realtek device can only tolerate 64us of latency, so it is not
> > >    safe to enable L1 for both Links.  It should be safe to enable L1
> > >    on the shared link because the exit latency for that link would be
> > >    <32us.
> >
> > 04:00.0:
> > DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
> > LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s
> > unlimited, L1 <64us
> >
> > So maximum latency for the entire link has to be <64 us
> > For the device to leave L1 ASPM takes <64us
> >
> > So the device itself is the slowest entry along the link, which
> > means that nothing else along that path can have ASPM enabled
>
> Yes.  That's what I said above: "it is not safe to enable L1 for both
> Links."  Unless I'm missing something, we agree on that.
>
> I also said that it should be safe to enable L1 on the shared Link
> (from 00:01.2 to 01:00.0) because if the downstream Link is always in
> L0, the exit latency of the shared Link should be <32us, and 04:00.x
> can tolerate 64us.

Exit latency of shared link would be max of link, ie 64 + L1-hops, not 32

> > > > The original code path did:
> > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > 02:04.0-01:00.0 max latency 32 +1 -> ok
> > > > 01:00.0-00:01.2 max latency 32 +2 -> ok
> > > >
> > > > And thus didn't see any L1 ASPM latency issues.
> > > >
> > > > The new code does:
> > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > 02:04.0-01:00.0 max latency 64 +1 -> latency exceeded
> > > > 01:00.0-00:01.2 max latency 64 +2 -> latency exceeded
> > >
> > > [Nit: I don't think we should add 1 for the 02:04.0 -- 01:00.0 piece
> > > because that's internal Switch routing, not a Link.  But even without
> > > that extra microsecond, this path does exceed the acceptable latency
> > > since 1 + 64 = 65us, and 04:00.0 can only tolerate 64us.]
> >
> > It does report L1 ASPM on both ends, so the links will be counted as
> > such in the code.
>
> This is a bit of a tangent and we shouldn't get too wrapped up in it.
> This is a confusing aspect of PCIe.  We're talking about this path:
>
>   00:01.2 --- [01:00.0 -- 02:04.0] --- 04:00.x multifunction Realtek
>
> This path only contains two Links.  The first one is
> 00:01.2 --- 01:00.0, and the second one is 02:04.0 --- 04:00.x.
>
> 01:00.0 is a Switch Upstream Port and 02:04.0 is a Switch Downstream
> Port.  The connection between them is not a Link; it is some internal
> wiring of the Switch that is completely opaque to software.
>
> The ASPM information and knobs in 01:00.0 apply to the Link on its
> upstream side, and the ASPM info and knobs in 02:04.0 apply to the
> Link on its downstream side.
>
> The example in sec 5.4.1.2.2 contains three Links.  The L1 exit latency
> for the Link is the max of the exit latencies at each end:
>
>   Link 1: max(32, 8) = 32us
>   Link 2: max(8, 32) = 32us
>   Link 3: max(32, 8) = 32us
>
> The total delay for a TLP starting at the downstream end of Link 3
> is 32 + 2 = 32us.
>
> In the path to your 04:00.x Realtek device:
>
>   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
>   Link 2 (from 02:04.0 to 04:00.x): max(32, 64) = 64us
>
> If L1 were enabled on both Links, the exit latency would be 64 + 1 =
> 65us.

So one line to be removed from the changelog, i assume... And yes, the
code handles that - first disable is 01:00.0 <-> 00:01.2

> > I also assume that it can power down individual ports... and enter
> > rest state if no links are up.
>
> I don't think this is quite true -- a Link can't enter L1 unless the
> Ports on both ends have L1 enabled, so I don't think it makes sense to
> talk about an individual Port being in L1.
>
> > > > It correctly identifies the issue.
> > > >
> > > > For reference, pcie information:
> > > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > >
> > > The "lspci without my patches" [1] shows L1 enabled for the shared
> > > Link from 00:01.2 --- 01:00.0 and for the Link to 03:00.0 (I211), but
> > > not for the Link to 04:00.x (Realtek).
> > >
> > > Per my analysis above, that looks like it *should* be a safe
> > > configuration.  03:00.0 can tolerate 64us, actual is <33us.  04:00.0
> > > can tolerate 64us, actual should be <32us since only the shared Link
> > > is in L1.
> >
> > See above.
>
> As I said above, if we enabled L1 only on the shared Link from 00:01.2
> to 01:00.0, the exit latency should be acceptable.  In that case, a
> TLP from 04:00.x would see only 32us of latency:
>
>   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
>
> and 04:00.x can tolerate 64us.

But, again, you're completely ignoring the full link, ie 04:00.x would
also have to power on.

> > > However, the commit log at [2] shows L1 *enabled* for both the shared
> > > Link from 00:01.2 --- 01:00.0 and the 02:04.0 --- 04:00.x Link, and
> > > that would definitely be a problem.
> > >
> > > Can you explain the differences between [1] and [2]?
> >
> > I don't understand which sections you're referring to.
>
> [1] is the "lspci without my patches" attachment of bugzilla #209725,
> which is supposed to show the problem this patch solves.  We're
> talking about the path to 04:00.x, and [1] show this:
>
>   01:00.2 L1+
>   01:00.0 L1+
>   02:04.0 L1-
>   04:00.0 L1-
>
> AFAICT, that should be a legal configuration as far as 04:00.0 is
> concerned, so it's not a reason for this patch.

Actually, no, maximum path latency 64us

04:00.0 wakeup latency == 64us

Again, as stated, it can't be behind any sleeping L1 links

> [2] is a previous posting of this same patch, and its commit log
> includes information about the same path to 04:00.x, but the "LnkCtl
> Before" column shows:
>
>   01:00.2 L1+
>   01:00.0 L1+
>   02:04.0 L1+
>   04:00.0 L1+
>
> I don't know why [1] shows L1 disabled on the downstream Link, while
> [2] shows L1 *enabled* on the same Link.

From the data they look switched.

> > > > Kai-Heng Feng has a machine that will not boot with ASPM without
> > > > this patch, information is documented here:
> > > > https://bugzilla.kernel.org/show_bug.cgi?id=209671
> > >
> > > I started working through this info, too, but there's not enough
> > > information to tell what difference this patch makes.  The attachments
> > > compare:
> > >
> > >   1) CONFIG_PCIEASPM_DEFAULT=y without the patch [3] and
> > >   2) CONFIG_PCIEASPM_POWERSAVE=y *with* the patch [4]
> > >
> > > Obviously CONFIG_PCIEASPM_POWERSAVE=y will configure things
> > > differently than CONFIG_PCIEASPM_DEFAULT=y, so we can't tell what
> > > changes are due to the config change and what are due to the patch.
> > >
> > > The lspci *with* the patch ([4]) shows L0s and L1 enabled at almost
> > > every possible place.  Here are the Links, how they're configured, and
> > > my analysis of the exit latencies vs acceptable latencies:
> > >
> > >   00:01.1 --- 01:00.0      L1+ (                  L1 <64us vs unl)
> > >   00:01.2 --- 02:00.0      L1+ (                  L1 <64us vs 64us)
> > >   00:01.3 --- 03:00.0      L1+ (                  L1 <64us vs 64us)
> > >   00:01.4 --- 04:00.0      L1+ (                  L1 <64us vs unl)
> > >   00:08.1 --- 05:00.x L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > >   00:08.2 --- 06:00.0 L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > >
> > > So I can't tell what change prevents the freeze.  I would expect the
> > > patch would cause us to *disable* L0s or L1 somewhere.
> > >
> > > The only place [4] shows ASPM disabled is for 05:00.1.  The spec says
> > > we should program the same value in all functions of a multi-function
> > > device.  This is a non-ARI device, so "only capabilities enabled in
> > > all functions are enabled for the component as a whole."  That would
> > > mean that L0s and L1 are effectively disabled for 05:00.x even though
> > > 05:00.0 claims they're enabled.  But the latencies say ASPM L0s and L1
> > > should be safe to be enabled.  This looks like another bug that's
> > > probably unrelated.
> >
> > I don't think it's unrelated, i suspect it's how PCIe works with
> > multiple links...  a device can cause some kind of head of queue
> > stalling - i don't know how but it really looks like it.
>
> The text in quotes above is straight out of the spec (PCIe r5.0, sec
> 7.5.3.7).  Either the device works that way or it's not compliant.
>
> The OS configures ASPM based on the requirements and capabilities
> advertised by the device.  If a device has any head of queue stalling
> or similar issues, those must be comprehended in the numbers
> advertised by the device.  It's not up to the OS to speculate about
> issues like that.
>
> > > The patch might be correct; I haven't actually analyzed the code.  But
> > > the commit log doesn't make sense to me yet.
> >
> > I personally don't think that all this PCI information is required,
> > the linux kernel is currently doing it wrong according to the spec.
>
> We're trying to establish exactly *what* Linux is doing wrong.  So far
> we don't have a good explanation of that.

Yes we do, linux counts hops + max for "link" while what should be done is
counting hops + max for path

> Based on [1], in the path to 03:00.0, both Links have L1 enabled, with
> an exit latency of <33us, and 03:00.0 can tolerate 64us.  That should
> work fine.
>
> Also based on [1], in the path to 04:00.x, the upstream Link has L1
> enabled and the downstream Link has L1 disabled, for an exit latency
> of <32us, and 04:00.0 can tolerate 64us.  That should also work fine.

Again, ignoring the exit latendy for 04:00.0

> (Alternately, disabling L1 on the upstream Link and enabling it on the
> downstream Link should have an exit latency of <64us and 04:00.0 can
> tolerate 64us, so that should work fine, too.)

Then nothing else can have L1 aspm enabled

> > Also, since it's clearly doing the wrong thing, I'm worried that
> > dists will take a kernel enable aspm and there will be alot of
> > bugreports of non-booting systems or other weird issues... And the
> > culprit was known all along.
>
> There's clearly a problem on your system, but I don't know yet whether
> Linux is doing something wrong, a device in your system is designed
> incorrectly, or a device is designed correctly but the instance in
> your system is defective.

According to the spec it is, there is a explanation of how to
calculate the exit latency
and when you implement that, which i did (before knowing the actual
spec) then it works...

> > It's been five months...
>
> I apologize for the delay.  ASPM is a subtle area of PCIe, the Linux
> code is complicated, and we have a long history of issues with it.  I
> want to fix the problem, but I want to make sure we do it in a way
> that matches the spec so the fix applies to all systems.  I don't want
> a magic fix that fixes your system in a way I don't quite understand.

> Obviously *you* understand this, so hopefully it's just a matter of
> pounding it through my thick skull :)

I only understand what I've been forced to understand - and I do
leverage the existing code without
knowing what it does underneath, I only look at the links maximum
latency and make sure that I keep
the maximum latency along the path and not just link for link

once you realise that the max allowed latency is buffer dependent -
then this becomes obviously correct,
and then the pcie spec showed it as being correct as well... so...


> > > [1] https://bugzilla.kernel.org/attachment.cgi?id=293047
> > > [2] https://lore.kernel.org/linux-pci/20201007132808.647589-1-ian.kumlien@gmail.com/
> > > [3] https://bugzilla.kernel.org/attachment.cgi?id=292955
> > > [4] https://bugzilla.kernel.org/attachment.cgi?id=292957
> > >
> > > > Signed-off-by: Ian Kumlien <ian.kumlien@gmail.com>
> > > > Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > > > ---
> > > >  drivers/pci/pcie/aspm.c | 22 ++++++++++++++--------
> > > >  1 file changed, 14 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> > > > index 253c30cc1967..c03ead0f1013 100644
> > > > --- a/drivers/pci/pcie/aspm.c
> > > > +++ b/drivers/pci/pcie/aspm.c
> > > > @@ -434,7 +434,7 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
> > > >
> > > >  static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > >  {
> > > > -     u32 latency, l1_switch_latency = 0;
> > > > +     u32 latency, l1_max_latency = 0, l1_switch_latency = 0;
> > > >       struct aspm_latency *acceptable;
> > > >       struct pcie_link_state *link;
> > > >
> > > > @@ -456,10 +456,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > >               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
> > > >                   (link->latency_dw.l0s > acceptable->l0s))
> > > >                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> > > > +
> > > >               /*
> > > >                * Check L1 latency.
> > > > -              * Every switch on the path to root complex need 1
> > > > -              * more microsecond for L1. Spec doesn't mention L0s.
> > > > +              *
> > > > +              * PCIe r5.0, sec 5.4.1.2.2 states:
> > > > +              * A Switch is required to initiate an L1 exit transition on its
> > > > +              * Upstream Port Link after no more than 1 μs from the beginning of an
> > > > +              * L1 exit transition on any of its Downstream Port Links.
> > > >                *
> > > >                * The exit latencies for L1 substates are not advertised
> > > >                * by a device.  Since the spec also doesn't mention a way
> > > > @@ -469,11 +473,13 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > >                * L1 exit latencies advertised by a device include L1
> > > >                * substate latencies (and hence do not do any check).
> > > >                */
> > > > -             latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > -             if ((link->aspm_capable & ASPM_STATE_L1) &&
> > > > -                 (latency + l1_switch_latency > acceptable->l1))
> > > > -                     link->aspm_capable &= ~ASPM_STATE_L1;
> > > > -             l1_switch_latency += 1000;
> > > > +             if (link->aspm_capable & ASPM_STATE_L1) {
> > > > +                     latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > +                     l1_max_latency = max_t(u32, latency, l1_max_latency);
> > > > +                     if (l1_max_latency + l1_switch_latency > acceptable->l1)
> > > > +                             link->aspm_capable &= ~ASPM_STATE_L1;
> > > > +                     l1_switch_latency += 1000;
> > > > +             }
> > > >
> > > >               link = link->parent;
> > > >       }
> > > > --
> > > > 2.29.1
> > > >

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-14  9:14   ` Ian Kumlien
@ 2020-12-14 14:02     ` Bjorn Helgaas
  2020-12-14 15:47       ` Ian Kumlien
  0 siblings, 1 reply; 12+ messages in thread
From: Bjorn Helgaas @ 2020-12-14 14:02 UTC (permalink / raw)
  To: Ian Kumlien
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Mon, Dec 14, 2020 at 10:14:18AM +0100, Ian Kumlien wrote:
> On Mon, Dec 14, 2020 at 6:44 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > [+cc Jesse, Tony, David, Jakub, Heiner, lists in case there's an ASPM
> > issue with I211 or Realtek NICs.  Beginning of thread:
> > https://lore.kernel.org/r/20201024205548.1837770-1-ian.kumlien@gmail.com
> >
> > Short story: Ian has:
> >
> >   Root Port --- Switch --- I211 NIC
> >                        \-- multifunction Realtek NIC, etc
> >
> > and the I211 performance is poor with ASPM L1 enabled on both links
> > in the path to it.  The patch here disables ASPM on the upstream link
> > and fixes the performance, but AFAICT the devices in that path give us
> > no reason to disable L1.  If I understand the spec correctly, the
> > Realtek device should not be relevant to the I211 path.]
> >
> > On Sun, Dec 13, 2020 at 10:39:53PM +0100, Ian Kumlien wrote:
> > > On Sun, Dec 13, 2020 at 12:47 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > On Sat, Oct 24, 2020 at 10:55:46PM +0200, Ian Kumlien wrote:
> > > > > Make pcie_aspm_check_latency comply with the PCIe spec, specifically:
> > > > > "5.4.1.2.2. Exit from the L1 State"
> > > > >
> > > > > Which makes it clear that each switch is required to
> > > > > initiate a transition within 1μs from receiving it,
> > > > > accumulating this latency and then we have to wait for the
> > > > > slowest link along the path before entering L0 state from
> > > > > L1.
> > > > > ...
> > > >
> > > > > On my specific system:
> > > > > 03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
> > > > > 04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 816e (rev 1a)
> > > > >
> > > > >             Exit latency       Acceptable latency
> > > > > Tree:       L1       L0s       L1       L0s
> > > > > ----------  -------  -----     -------  ------
> > > > > 00:01.2     <32 us   -
> > > > > | 01:00.0   <32 us   -
> > > > > |- 02:03.0  <32 us   -
> > > > > | \03:00.0  <16 us   <2us      <64 us   <512ns
> > > > > |
> > > > > \- 02:04.0  <32 us   -
> > > > >   \04:00.0  <64 us   unlimited <64 us   <512ns
> > > > >
> > > > > 04:00.0's latency is the same as the maximum it allows so as
> > > > > we walk the path the first switchs startup latency will pass
> > > > > the acceptable latency limit for the link, and as a
> > > > > side-effect it fixes my issues with 03:00.0.
> > > > >
> > > > > Without this patch, 03:00.0 misbehaves and only gives me ~40
> > > > > mbit/s over links with 6 or more hops. With this patch I'm
> > > > > back to a maximum of ~933 mbit/s.
> > > >
> > > > There are two paths here that share a Link:
> > > >
> > > >   00:01.2 --- 01:00.0 -- 02:03.0 --- 03:00.0 I211 NIC
> > > >   00:01.2 --- 01:00.0 -- 02:04.0 --- 04:00.x multifunction Realtek
> > > >
> > > > 1) The path to the I211 NIC includes four Ports and two Links (the
> > > >    connection between 01:00.0 and 02:03.0 is internal Switch routing,
> > > >    not a Link).
> > >
> > > >    The Ports advertise L1 exit latencies of <32us, <32us, <32us,
> > > >    <16us.  If both Links are in L1 and 03:00.0 initiates L1 exit at T,
> > > >    01:00.0 initiates L1 exit at T + 1.  A TLP from 03:00.0 may see up
> > > >    to 1 + 32 = 33us of L1 exit latency.
> > > >
> > > >    The NIC can tolerate up to 64us of L1 exit latency, so it is safe
> > > >    to enable L1 for both Links.
> > > >
> > > > 2) The path to the Realtek device is similar except that the Realtek
> > > >    L1 exit latency is <64us.  If both Links are in L1 and 04:00.x
> > > >    initiates L1 exit at T, 01:00.0 again initiates L1 exit at T + 1,
> > > >    but a TLP from 04:00.x may see up to 1 + 64 = 65us of L1 exit
> > > >    latency.
> > > >
> > > >    The Realtek device can only tolerate 64us of latency, so it is not
> > > >    safe to enable L1 for both Links.  It should be safe to enable L1
> > > >    on the shared link because the exit latency for that link would be
> > > >    <32us.
> > >
> > > 04:00.0:
> > > DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
> > > LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s
> > > unlimited, L1 <64us
> > >
> > > So maximum latency for the entire link has to be <64 us
> > > For the device to leave L1 ASPM takes <64us
> > >
> > > So the device itself is the slowest entry along the link, which
> > > means that nothing else along that path can have ASPM enabled
> >
> > Yes.  That's what I said above: "it is not safe to enable L1 for both
> > Links."  Unless I'm missing something, we agree on that.
> >
> > I also said that it should be safe to enable L1 on the shared Link
> > (from 00:01.2 to 01:00.0) because if the downstream Link is always in
> > L0, the exit latency of the shared Link should be <32us, and 04:00.x
> > can tolerate 64us.
> 
> Exit latency of shared link would be max of link, ie 64 + L1-hops, not 32

I don't think this is true.  The path from 00:01.2 to 04:00.x includes
two Links, and they are independent.  The exit latency for each Link
depends only on the Port at each end:

  Link 1 (depends on 00:01.2 and 01:00.0): max(32, 32) = 32us
  Link 2 (depends on 02:04.0 and 04:00.x): max(32, 64) = 64us

If L1 is enabled for Link 1 and disabled for Link 2, Link 2 will
remain in L0 so it has no L1 exit latency, and the exit latency of
the entire path should be 32us.  

> > > > > The original code path did:
> > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > 02:04.0-01:00.0 max latency 32 +1 -> ok
> > > > > 01:00.0-00:01.2 max latency 32 +2 -> ok
> > > > >
> > > > > And thus didn't see any L1 ASPM latency issues.
> > > > >
> > > > > The new code does:
> > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > 02:04.0-01:00.0 max latency 64 +1 -> latency exceeded
> > > > > 01:00.0-00:01.2 max latency 64 +2 -> latency exceeded
> > > >
> > > > [Nit: I don't think we should add 1 for the 02:04.0 -- 01:00.0 piece
> > > > because that's internal Switch routing, not a Link.  But even without
> > > > that extra microsecond, this path does exceed the acceptable latency
> > > > since 1 + 64 = 65us, and 04:00.0 can only tolerate 64us.]
> > >
> > > It does report L1 ASPM on both ends, so the links will be counted as
> > > such in the code.
> >
> > This is a bit of a tangent and we shouldn't get too wrapped up in it.
> > This is a confusing aspect of PCIe.  We're talking about this path:
> >
> >   00:01.2 --- [01:00.0 -- 02:04.0] --- 04:00.x multifunction Realtek
> >
> > This path only contains two Links.  The first one is
> > 00:01.2 --- 01:00.0, and the second one is 02:04.0 --- 04:00.x.
> >
> > 01:00.0 is a Switch Upstream Port and 02:04.0 is a Switch Downstream
> > Port.  The connection between them is not a Link; it is some internal
> > wiring of the Switch that is completely opaque to software.
> >
> > The ASPM information and knobs in 01:00.0 apply to the Link on its
> > upstream side, and the ASPM info and knobs in 02:04.0 apply to the
> > Link on its downstream side.
> >
> > The example in sec 5.4.1.2.2 contains three Links.  The L1 exit latency
> > for the Link is the max of the exit latencies at each end:
> >
> >   Link 1: max(32, 8) = 32us
> >   Link 2: max(8, 32) = 32us
> >   Link 3: max(32, 8) = 32us
> >
> > The total delay for a TLP starting at the downstream end of Link 3
> > is 32 + 2 = 32us.
> >
> > In the path to your 04:00.x Realtek device:
> >
> >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> >   Link 2 (from 02:04.0 to 04:00.x): max(32, 64) = 64us
> >
> > If L1 were enabled on both Links, the exit latency would be 64 + 1 =
> > 65us.
> 
> So one line to be removed from the changelog, i assume... And yes, the
> code handles that - first disable is 01:00.0 <-> 00:01.2
> 
> > > I also assume that it can power down individual ports... and enter
> > > rest state if no links are up.
> >
> > I don't think this is quite true -- a Link can't enter L1 unless the
> > Ports on both ends have L1 enabled, so I don't think it makes sense to
> > talk about an individual Port being in L1.
> >
> > > > > It correctly identifies the issue.
> > > > >
> > > > > For reference, pcie information:
> > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > > >
> > > > The "lspci without my patches" [1] shows L1 enabled for the shared
> > > > Link from 00:01.2 --- 01:00.0 and for the Link to 03:00.0 (I211), but
> > > > not for the Link to 04:00.x (Realtek).
> > > >
> > > > Per my analysis above, that looks like it *should* be a safe
> > > > configuration.  03:00.0 can tolerate 64us, actual is <33us.  04:00.0
> > > > can tolerate 64us, actual should be <32us since only the shared Link
> > > > is in L1.
> > >
> > > See above.
> >
> > As I said above, if we enabled L1 only on the shared Link from 00:01.2
> > to 01:00.0, the exit latency should be acceptable.  In that case, a
> > TLP from 04:00.x would see only 32us of latency:
> >
> >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> >
> > and 04:00.x can tolerate 64us.
> 
> But, again, you're completely ignoring the full link, ie 04:00.x would
> also have to power on.

I think you're using "the full link" to refer to the entire path from
00:01.2 to 04:00.x.  In PCIe, a "Link" directly connects two Ports.
It doesn't refer to the entire path.

No, if L1 is disabled on 02:04.0 and 04:00.x (as Linux apparently does
by default), the Link between them never enters L1, so there is no
power-on for this Link.

> > > > However, the commit log at [2] shows L1 *enabled* for both the shared
> > > > Link from 00:01.2 --- 01:00.0 and the 02:04.0 --- 04:00.x Link, and
> > > > that would definitely be a problem.
> > > >
> > > > Can you explain the differences between [1] and [2]?
> > >
> > > I don't understand which sections you're referring to.
> >
> > [1] is the "lspci without my patches" attachment of bugzilla #209725,
> > which is supposed to show the problem this patch solves.  We're
> > talking about the path to 04:00.x, and [1] show this:
> >
> >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> >   01:00.0 L1+
> >   02:04.0 L1-
> >   04:00.0 L1-
> >
> > AFAICT, that should be a legal configuration as far as 04:00.0 is
> > concerned, so it's not a reason for this patch.
> 
> Actually, no, maximum path latency 64us
> 
> 04:00.0 wakeup latency == 64us
> 
> Again, as stated, it can't be behind any sleeping L1 links

It would be pointless for a device to advertise L1 support if it could
never be used.  04:00.0 advertises that it can tolerate L1 latency of
64us and that it can exit L1 in 64us or less.  So it *can* be behind a
Link in L1 as long as nothing else in the path adds more latency.

> > [2] is a previous posting of this same patch, and its commit log
> > includes information about the same path to 04:00.x, but the "LnkCtl
> > Before" column shows:
> >
> >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> >   01:00.0 L1+
> >   02:04.0 L1+
> >   04:00.0 L1+
> >
> > I don't know why [1] shows L1 disabled on the downstream Link, while
> > [2] shows L1 *enabled* on the same Link.
> 
> From the data they look switched.
> 
> > > > > Kai-Heng Feng has a machine that will not boot with ASPM without
> > > > > this patch, information is documented here:
> > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209671
> > > >
> > > > I started working through this info, too, but there's not enough
> > > > information to tell what difference this patch makes.  The attachments
> > > > compare:
> > > >
> > > >   1) CONFIG_PCIEASPM_DEFAULT=y without the patch [3] and
> > > >   2) CONFIG_PCIEASPM_POWERSAVE=y *with* the patch [4]
> > > >
> > > > Obviously CONFIG_PCIEASPM_POWERSAVE=y will configure things
> > > > differently than CONFIG_PCIEASPM_DEFAULT=y, so we can't tell what
> > > > changes are due to the config change and what are due to the patch.
> > > >
> > > > The lspci *with* the patch ([4]) shows L0s and L1 enabled at almost
> > > > every possible place.  Here are the Links, how they're configured, and
> > > > my analysis of the exit latencies vs acceptable latencies:
> > > >
> > > >   00:01.1 --- 01:00.0      L1+ (                  L1 <64us vs unl)
> > > >   00:01.2 --- 02:00.0      L1+ (                  L1 <64us vs 64us)
> > > >   00:01.3 --- 03:00.0      L1+ (                  L1 <64us vs 64us)
> > > >   00:01.4 --- 04:00.0      L1+ (                  L1 <64us vs unl)
> > > >   00:08.1 --- 05:00.x L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > >   00:08.2 --- 06:00.0 L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > >
> > > > So I can't tell what change prevents the freeze.  I would expect the
> > > > patch would cause us to *disable* L0s or L1 somewhere.
> > > >
> > > > The only place [4] shows ASPM disabled is for 05:00.1.  The spec says
> > > > we should program the same value in all functions of a multi-function
> > > > device.  This is a non-ARI device, so "only capabilities enabled in
> > > > all functions are enabled for the component as a whole."  That would
> > > > mean that L0s and L1 are effectively disabled for 05:00.x even though
> > > > 05:00.0 claims they're enabled.  But the latencies say ASPM L0s and L1
> > > > should be safe to be enabled.  This looks like another bug that's
> > > > probably unrelated.
> > >
> > > I don't think it's unrelated, i suspect it's how PCIe works with
> > > multiple links...  a device can cause some kind of head of queue
> > > stalling - i don't know how but it really looks like it.
> >
> > The text in quotes above is straight out of the spec (PCIe r5.0, sec
> > 7.5.3.7).  Either the device works that way or it's not compliant.
> >
> > The OS configures ASPM based on the requirements and capabilities
> > advertised by the device.  If a device has any head of queue stalling
> > or similar issues, those must be comprehended in the numbers
> > advertised by the device.  It's not up to the OS to speculate about
> > issues like that.
> >
> > > > The patch might be correct; I haven't actually analyzed the code.  But
> > > > the commit log doesn't make sense to me yet.
> > >
> > > I personally don't think that all this PCI information is required,
> > > the linux kernel is currently doing it wrong according to the spec.
> >
> > We're trying to establish exactly *what* Linux is doing wrong.  So far
> > we don't have a good explanation of that.
> 
> Yes we do, linux counts hops + max for "link" while what should be done is
> counting hops + max for path

I think you're saying we need to include L1 exit latency even for
Links where L1 is disabled.  I don't think we should include those.

> > Based on [1], in the path to 03:00.0, both Links have L1 enabled, with
> > an exit latency of <33us, and 03:00.0 can tolerate 64us.  That should
> > work fine.
> >
> > Also based on [1], in the path to 04:00.x, the upstream Link has L1
> > enabled and the downstream Link has L1 disabled, for an exit latency
> > of <32us, and 04:00.0 can tolerate 64us.  That should also work fine.
> 
> Again, ignoring the exit latendy for 04:00.0
> 
> > (Alternately, disabling L1 on the upstream Link and enabling it on the
> > downstream Link should have an exit latency of <64us and 04:00.0 can
> > tolerate 64us, so that should work fine, too.)
> 
> Then nothing else can have L1 aspm enabled

Yes, as I said, we should be able to enable L1 on either of the Links
in the path to 04:00.x, but not both.

The original problem here is not with the Realtek device at 04:00.x
but with the I211 NIC at 03:00.0.  So we also need to figure out what
the connection is.  Does the same I211 performance problem occur if
you remove the Realtek device from the system?

03:00.0 can tolerate 64us of latency, so even if L1 is enabled on both
Links leading to it, the path exit latency would be <33us, which
should be fine.

> > > Also, since it's clearly doing the wrong thing, I'm worried that
> > > dists will take a kernel enable aspm and there will be alot of
> > > bugreports of non-booting systems or other weird issues... And the
> > > culprit was known all along.
> >
> > There's clearly a problem on your system, but I don't know yet whether
> > Linux is doing something wrong, a device in your system is designed
> > incorrectly, or a device is designed correctly but the instance in
> > your system is defective.
> 
> According to the spec it is, there is a explanation of how to
> calculate the exit latency
> and when you implement that, which i did (before knowing the actual
> spec) then it works...
> 
> > > It's been five months...
> >
> > I apologize for the delay.  ASPM is a subtle area of PCIe, the Linux
> > code is complicated, and we have a long history of issues with it.  I
> > want to fix the problem, but I want to make sure we do it in a way
> > that matches the spec so the fix applies to all systems.  I don't want
> > a magic fix that fixes your system in a way I don't quite understand.
> 
> > Obviously *you* understand this, so hopefully it's just a matter of
> > pounding it through my thick skull :)
> 
> I only understand what I've been forced to understand - and I do
> leverage the existing code without
> knowing what it does underneath, I only look at the links maximum
> latency and make sure that I keep
> the maximum latency along the path and not just link for link
> 
> once you realise that the max allowed latency is buffer dependent -
> then this becomes obviously correct,
> and then the pcie spec showed it as being correct as well... so...
> 
> 
> > > > [1] https://bugzilla.kernel.org/attachment.cgi?id=293047
> > > > [2] https://lore.kernel.org/linux-pci/20201007132808.647589-1-ian.kumlien@gmail.com/
> > > > [3] https://bugzilla.kernel.org/attachment.cgi?id=292955
> > > > [4] https://bugzilla.kernel.org/attachment.cgi?id=292957
> > > >
> > > > > Signed-off-by: Ian Kumlien <ian.kumlien@gmail.com>
> > > > > Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > > > > ---
> > > > >  drivers/pci/pcie/aspm.c | 22 ++++++++++++++--------
> > > > >  1 file changed, 14 insertions(+), 8 deletions(-)
> > > > >
> > > > > diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> > > > > index 253c30cc1967..c03ead0f1013 100644
> > > > > --- a/drivers/pci/pcie/aspm.c
> > > > > +++ b/drivers/pci/pcie/aspm.c
> > > > > @@ -434,7 +434,7 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
> > > > >
> > > > >  static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > >  {
> > > > > -     u32 latency, l1_switch_latency = 0;
> > > > > +     u32 latency, l1_max_latency = 0, l1_switch_latency = 0;
> > > > >       struct aspm_latency *acceptable;
> > > > >       struct pcie_link_state *link;
> > > > >
> > > > > @@ -456,10 +456,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > >               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
> > > > >                   (link->latency_dw.l0s > acceptable->l0s))
> > > > >                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> > > > > +
> > > > >               /*
> > > > >                * Check L1 latency.
> > > > > -              * Every switch on the path to root complex need 1
> > > > > -              * more microsecond for L1. Spec doesn't mention L0s.
> > > > > +              *
> > > > > +              * PCIe r5.0, sec 5.4.1.2.2 states:
> > > > > +              * A Switch is required to initiate an L1 exit transition on its
> > > > > +              * Upstream Port Link after no more than 1 μs from the beginning of an
> > > > > +              * L1 exit transition on any of its Downstream Port Links.
> > > > >                *
> > > > >                * The exit latencies for L1 substates are not advertised
> > > > >                * by a device.  Since the spec also doesn't mention a way
> > > > > @@ -469,11 +473,13 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > >                * L1 exit latencies advertised by a device include L1
> > > > >                * substate latencies (and hence do not do any check).
> > > > >                */
> > > > > -             latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > -             if ((link->aspm_capable & ASPM_STATE_L1) &&
> > > > > -                 (latency + l1_switch_latency > acceptable->l1))
> > > > > -                     link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > -             l1_switch_latency += 1000;
> > > > > +             if (link->aspm_capable & ASPM_STATE_L1) {
> > > > > +                     latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > +                     l1_max_latency = max_t(u32, latency, l1_max_latency);
> > > > > +                     if (l1_max_latency + l1_switch_latency > acceptable->l1)
> > > > > +                             link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > +                     l1_switch_latency += 1000;
> > > > > +             }
> > > > >
> > > > >               link = link->parent;
> > > > >       }
> > > > > --
> > > > > 2.29.1
> > > > >

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-14 14:02     ` Bjorn Helgaas
@ 2020-12-14 15:47       ` Ian Kumlien
  2020-12-14 19:19         ` Bjorn Helgaas
  0 siblings, 1 reply; 12+ messages in thread
From: Ian Kumlien @ 2020-12-14 15:47 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Mon, Dec 14, 2020 at 3:02 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Mon, Dec 14, 2020 at 10:14:18AM +0100, Ian Kumlien wrote:
> > On Mon, Dec 14, 2020 at 6:44 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > [+cc Jesse, Tony, David, Jakub, Heiner, lists in case there's an ASPM
> > > issue with I211 or Realtek NICs.  Beginning of thread:
> > > https://lore.kernel.org/r/20201024205548.1837770-1-ian.kumlien@gmail.com
> > >
> > > Short story: Ian has:
> > >
> > >   Root Port --- Switch --- I211 NIC
> > >                        \-- multifunction Realtek NIC, etc
> > >
> > > and the I211 performance is poor with ASPM L1 enabled on both links
> > > in the path to it.  The patch here disables ASPM on the upstream link
> > > and fixes the performance, but AFAICT the devices in that path give us
> > > no reason to disable L1.  If I understand the spec correctly, the
> > > Realtek device should not be relevant to the I211 path.]
> > >
> > > On Sun, Dec 13, 2020 at 10:39:53PM +0100, Ian Kumlien wrote:
> > > > On Sun, Dec 13, 2020 at 12:47 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > On Sat, Oct 24, 2020 at 10:55:46PM +0200, Ian Kumlien wrote:
> > > > > > Make pcie_aspm_check_latency comply with the PCIe spec, specifically:
> > > > > > "5.4.1.2.2. Exit from the L1 State"
> > > > > >
> > > > > > Which makes it clear that each switch is required to
> > > > > > initiate a transition within 1μs from receiving it,
> > > > > > accumulating this latency and then we have to wait for the
> > > > > > slowest link along the path before entering L0 state from
> > > > > > L1.
> > > > > > ...
> > > > >
> > > > > > On my specific system:
> > > > > > 03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
> > > > > > 04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 816e (rev 1a)
> > > > > >
> > > > > >             Exit latency       Acceptable latency
> > > > > > Tree:       L1       L0s       L1       L0s
> > > > > > ----------  -------  -----     -------  ------
> > > > > > 00:01.2     <32 us   -
> > > > > > | 01:00.0   <32 us   -
> > > > > > |- 02:03.0  <32 us   -
> > > > > > | \03:00.0  <16 us   <2us      <64 us   <512ns
> > > > > > |
> > > > > > \- 02:04.0  <32 us   -
> > > > > >   \04:00.0  <64 us   unlimited <64 us   <512ns
> > > > > >
> > > > > > 04:00.0's latency is the same as the maximum it allows so as
> > > > > > we walk the path the first switchs startup latency will pass
> > > > > > the acceptable latency limit for the link, and as a
> > > > > > side-effect it fixes my issues with 03:00.0.
> > > > > >
> > > > > > Without this patch, 03:00.0 misbehaves and only gives me ~40
> > > > > > mbit/s over links with 6 or more hops. With this patch I'm
> > > > > > back to a maximum of ~933 mbit/s.
> > > > >
> > > > > There are two paths here that share a Link:
> > > > >
> > > > >   00:01.2 --- 01:00.0 -- 02:03.0 --- 03:00.0 I211 NIC
> > > > >   00:01.2 --- 01:00.0 -- 02:04.0 --- 04:00.x multifunction Realtek
> > > > >
> > > > > 1) The path to the I211 NIC includes four Ports and two Links (the
> > > > >    connection between 01:00.0 and 02:03.0 is internal Switch routing,
> > > > >    not a Link).
> > > >
> > > > >    The Ports advertise L1 exit latencies of <32us, <32us, <32us,
> > > > >    <16us.  If both Links are in L1 and 03:00.0 initiates L1 exit at T,
> > > > >    01:00.0 initiates L1 exit at T + 1.  A TLP from 03:00.0 may see up
> > > > >    to 1 + 32 = 33us of L1 exit latency.
> > > > >
> > > > >    The NIC can tolerate up to 64us of L1 exit latency, so it is safe
> > > > >    to enable L1 for both Links.
> > > > >
> > > > > 2) The path to the Realtek device is similar except that the Realtek
> > > > >    L1 exit latency is <64us.  If both Links are in L1 and 04:00.x
> > > > >    initiates L1 exit at T, 01:00.0 again initiates L1 exit at T + 1,
> > > > >    but a TLP from 04:00.x may see up to 1 + 64 = 65us of L1 exit
> > > > >    latency.
> > > > >
> > > > >    The Realtek device can only tolerate 64us of latency, so it is not
> > > > >    safe to enable L1 for both Links.  It should be safe to enable L1
> > > > >    on the shared link because the exit latency for that link would be
> > > > >    <32us.
> > > >
> > > > 04:00.0:
> > > > DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
> > > > LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s
> > > > unlimited, L1 <64us
> > > >
> > > > So maximum latency for the entire link has to be <64 us
> > > > For the device to leave L1 ASPM takes <64us
> > > >
> > > > So the device itself is the slowest entry along the link, which
> > > > means that nothing else along that path can have ASPM enabled
> > >
> > > Yes.  That's what I said above: "it is not safe to enable L1 for both
> > > Links."  Unless I'm missing something, we agree on that.
> > >
> > > I also said that it should be safe to enable L1 on the shared Link
> > > (from 00:01.2 to 01:00.0) because if the downstream Link is always in
> > > L0, the exit latency of the shared Link should be <32us, and 04:00.x
> > > can tolerate 64us.
> >
> > Exit latency of shared link would be max of link, ie 64 + L1-hops, not 32
>
> I don't think this is true.  The path from 00:01.2 to 04:00.x includes
> two Links, and they are independent.  The exit latency for each Link
> depends only on the Port at each end:

The full path is what is important, because that is the actual latency
(which the current linux code doesn't do)

>   Link 1 (depends on 00:01.2 and 01:00.0): max(32, 32) = 32us
>   Link 2 (depends on 02:04.0 and 04:00.x): max(32, 64) = 64us
>
> If L1 is enabled for Link 1 and disabled for Link 2, Link 2 will
> remain in L0 so it has no L1 exit latency, and the exit latency of
> the entire path should be 32us.

My patch disables this so yes.

> > > > > > The original code path did:
> > > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > > 02:04.0-01:00.0 max latency 32 +1 -> ok
> > > > > > 01:00.0-00:01.2 max latency 32 +2 -> ok
> > > > > >
> > > > > > And thus didn't see any L1 ASPM latency issues.
> > > > > >
> > > > > > The new code does:
> > > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > > 02:04.0-01:00.0 max latency 64 +1 -> latency exceeded
> > > > > > 01:00.0-00:01.2 max latency 64 +2 -> latency exceeded
> > > > >
> > > > > [Nit: I don't think we should add 1 for the 02:04.0 -- 01:00.0 piece
> > > > > because that's internal Switch routing, not a Link.  But even without
> > > > > that extra microsecond, this path does exceed the acceptable latency
> > > > > since 1 + 64 = 65us, and 04:00.0 can only tolerate 64us.]
> > > >
> > > > It does report L1 ASPM on both ends, so the links will be counted as
> > > > such in the code.
> > >
> > > This is a bit of a tangent and we shouldn't get too wrapped up in it.
> > > This is a confusing aspect of PCIe.  We're talking about this path:
> > >
> > >   00:01.2 --- [01:00.0 -- 02:04.0] --- 04:00.x multifunction Realtek
> > >
> > > This path only contains two Links.  The first one is
> > > 00:01.2 --- 01:00.0, and the second one is 02:04.0 --- 04:00.x.
> > >
> > > 01:00.0 is a Switch Upstream Port and 02:04.0 is a Switch Downstream
> > > Port.  The connection between them is not a Link; it is some internal
> > > wiring of the Switch that is completely opaque to software.
> > >
> > > The ASPM information and knobs in 01:00.0 apply to the Link on its
> > > upstream side, and the ASPM info and knobs in 02:04.0 apply to the
> > > Link on its downstream side.
> > >
> > > The example in sec 5.4.1.2.2 contains three Links.  The L1 exit latency
> > > for the Link is the max of the exit latencies at each end:
> > >
> > >   Link 1: max(32, 8) = 32us
> > >   Link 2: max(8, 32) = 32us
> > >   Link 3: max(32, 8) = 32us
> > >
> > > The total delay for a TLP starting at the downstream end of Link 3
> > > is 32 + 2 = 32us.
> > >
> > > In the path to your 04:00.x Realtek device:
> > >
> > >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> > >   Link 2 (from 02:04.0 to 04:00.x): max(32, 64) = 64us
> > >
> > > If L1 were enabled on both Links, the exit latency would be 64 + 1 =
> > > 65us.
> >
> > So one line to be removed from the changelog, i assume... And yes, the
> > code handles that - first disable is 01:00.0 <-> 00:01.2
> >
> > > > I also assume that it can power down individual ports... and enter
> > > > rest state if no links are up.
> > >
> > > I don't think this is quite true -- a Link can't enter L1 unless the
> > > Ports on both ends have L1 enabled, so I don't think it makes sense to
> > > talk about an individual Port being in L1.
> > >
> > > > > > It correctly identifies the issue.
> > > > > >
> > > > > > For reference, pcie information:
> > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > > > >
> > > > > The "lspci without my patches" [1] shows L1 enabled for the shared
> > > > > Link from 00:01.2 --- 01:00.0 and for the Link to 03:00.0 (I211), but
> > > > > not for the Link to 04:00.x (Realtek).
> > > > >
> > > > > Per my analysis above, that looks like it *should* be a safe
> > > > > configuration.  03:00.0 can tolerate 64us, actual is <33us.  04:00.0
> > > > > can tolerate 64us, actual should be <32us since only the shared Link
> > > > > is in L1.
> > > >
> > > > See above.
> > >
> > > As I said above, if we enabled L1 only on the shared Link from 00:01.2
> > > to 01:00.0, the exit latency should be acceptable.  In that case, a
> > > TLP from 04:00.x would see only 32us of latency:
> > >
> > >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> > >
> > > and 04:00.x can tolerate 64us.
> >
> > But, again, you're completely ignoring the full link, ie 04:00.x would
> > also have to power on.
>
> I think you're using "the full link" to refer to the entire path from
> 00:01.2 to 04:00.x.  In PCIe, a "Link" directly connects two Ports.
> It doesn't refer to the entire path.
>
> No, if L1 is disabled on 02:04.0 and 04:00.x (as Linux apparently does
> by default), the Link between them never enters L1, so there is no
> power-on for this Link.

It doesn't do it by default, my patch does

> > > > > However, the commit log at [2] shows L1 *enabled* for both the shared
> > > > > Link from 00:01.2 --- 01:00.0 and the 02:04.0 --- 04:00.x Link, and
> > > > > that would definitely be a problem.
> > > > >
> > > > > Can you explain the differences between [1] and [2]?
> > > >
> > > > I don't understand which sections you're referring to.
> > >
> > > [1] is the "lspci without my patches" attachment of bugzilla #209725,
> > > which is supposed to show the problem this patch solves.  We're
> > > talking about the path to 04:00.x, and [1] show this:
> > >
> > >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> > >   01:00.0 L1+
> > >   02:04.0 L1-
> > >   04:00.0 L1-
> > >
> > > AFAICT, that should be a legal configuration as far as 04:00.0 is
> > > concerned, so it's not a reason for this patch.
> >
> > Actually, no, maximum path latency 64us
> >
> > 04:00.0 wakeup latency == 64us
> >
> > Again, as stated, it can't be behind any sleeping L1 links
>
> It would be pointless for a device to advertise L1 support if it could
> never be used.  04:00.0 advertises that it can tolerate L1 latency of
> 64us and that it can exit L1 in 64us or less.  So it *can* be behind a
> Link in L1 as long as nothing else in the path adds more latency.

Yes, as long as nothing along the entire path adds latency - and I
didn't make the component
I can only say what it states, and we have to handle it.

> > > [2] is a previous posting of this same patch, and its commit log
> > > includes information about the same path to 04:00.x, but the "LnkCtl
> > > Before" column shows:
> > >
> > >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> > >   01:00.0 L1+
> > >   02:04.0 L1+
> > >   04:00.0 L1+
> > >
> > > I don't know why [1] shows L1 disabled on the downstream Link, while
> > > [2] shows L1 *enabled* on the same Link.
> >
> > From the data they look switched.
> >
> > > > > > Kai-Heng Feng has a machine that will not boot with ASPM without
> > > > > > this patch, information is documented here:
> > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209671
> > > > >
> > > > > I started working through this info, too, but there's not enough
> > > > > information to tell what difference this patch makes.  The attachments
> > > > > compare:
> > > > >
> > > > >   1) CONFIG_PCIEASPM_DEFAULT=y without the patch [3] and
> > > > >   2) CONFIG_PCIEASPM_POWERSAVE=y *with* the patch [4]
> > > > >
> > > > > Obviously CONFIG_PCIEASPM_POWERSAVE=y will configure things
> > > > > differently than CONFIG_PCIEASPM_DEFAULT=y, so we can't tell what
> > > > > changes are due to the config change and what are due to the patch.
> > > > >
> > > > > The lspci *with* the patch ([4]) shows L0s and L1 enabled at almost
> > > > > every possible place.  Here are the Links, how they're configured, and
> > > > > my analysis of the exit latencies vs acceptable latencies:
> > > > >
> > > > >   00:01.1 --- 01:00.0      L1+ (                  L1 <64us vs unl)
> > > > >   00:01.2 --- 02:00.0      L1+ (                  L1 <64us vs 64us)
> > > > >   00:01.3 --- 03:00.0      L1+ (                  L1 <64us vs 64us)
> > > > >   00:01.4 --- 04:00.0      L1+ (                  L1 <64us vs unl)
> > > > >   00:08.1 --- 05:00.x L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > > >   00:08.2 --- 06:00.0 L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > > >
> > > > > So I can't tell what change prevents the freeze.  I would expect the
> > > > > patch would cause us to *disable* L0s or L1 somewhere.
> > > > >
> > > > > The only place [4] shows ASPM disabled is for 05:00.1.  The spec says
> > > > > we should program the same value in all functions of a multi-function
> > > > > device.  This is a non-ARI device, so "only capabilities enabled in
> > > > > all functions are enabled for the component as a whole."  That would
> > > > > mean that L0s and L1 are effectively disabled for 05:00.x even though
> > > > > 05:00.0 claims they're enabled.  But the latencies say ASPM L0s and L1
> > > > > should be safe to be enabled.  This looks like another bug that's
> > > > > probably unrelated.
> > > >
> > > > I don't think it's unrelated, i suspect it's how PCIe works with
> > > > multiple links...  a device can cause some kind of head of queue
> > > > stalling - i don't know how but it really looks like it.
> > >
> > > The text in quotes above is straight out of the spec (PCIe r5.0, sec
> > > 7.5.3.7).  Either the device works that way or it's not compliant.
> > >
> > > The OS configures ASPM based on the requirements and capabilities
> > > advertised by the device.  If a device has any head of queue stalling
> > > or similar issues, those must be comprehended in the numbers
> > > advertised by the device.  It's not up to the OS to speculate about
> > > issues like that.
> > >
> > > > > The patch might be correct; I haven't actually analyzed the code.  But
> > > > > the commit log doesn't make sense to me yet.
> > > >
> > > > I personally don't think that all this PCI information is required,
> > > > the linux kernel is currently doing it wrong according to the spec.
> > >
> > > We're trying to establish exactly *what* Linux is doing wrong.  So far
> > > we don't have a good explanation of that.
> >
> > Yes we do, linux counts hops + max for "link" while what should be done is
> > counting hops + max for path
>
> I think you're saying we need to include L1 exit latency even for
> Links where L1 is disabled.  I don't think we should include those.

Nope, the code does not do that, it only adds the l1 latency on L1 enabled hops

> > > Based on [1], in the path to 03:00.0, both Links have L1 enabled, with
> > > an exit latency of <33us, and 03:00.0 can tolerate 64us.  That should
> > > work fine.
> > >
> > > Also based on [1], in the path to 04:00.x, the upstream Link has L1
> > > enabled and the downstream Link has L1 disabled, for an exit latency
> > > of <32us, and 04:00.0 can tolerate 64us.  That should also work fine.
> >
> > Again, ignoring the exit latendy for 04:00.0
> >
> > > (Alternately, disabling L1 on the upstream Link and enabling it on the
> > > downstream Link should have an exit latency of <64us and 04:00.0 can
> > > tolerate 64us, so that should work fine, too.)
> >
> > Then nothing else can have L1 aspm enabled
>
> Yes, as I said, we should be able to enable L1 on either of the Links
> in the path to 04:00.x, but not both.

The code works backwards and disables the first hop that exceeds the
latency requirements -
we could argue that it should try to be smarter about it and try to
disable a minimum amount of links
while still retaining the minimum latency but... It is what it is and
it works when patched.

> The original problem here is not with the Realtek device at 04:00.x
> but with the I211 NIC at 03:00.0.  So we also need to figure out what
> the connection is.  Does the same I211 performance problem occur if
> you remove the Realtek device from the system?

It's mounted on the motherboard, so no I can't remove it.

> 03:00.0 can tolerate 64us of latency, so even if L1 is enabled on both
> Links leading to it, the path exit latency would be <33us, which
> should be fine.

Yes, it "should be" but due to broken ASPM latency calculations we
have some kind of
side effect that triggers a racecondition/sideeffect/bug that causes
it to misbehave.

Since fixing the latency calculation fixes it, I'll leave the rest to
someone with a logic
analyzer and a die-hard-fetish for pcie links - I can't debug it.

> > > > Also, since it's clearly doing the wrong thing, I'm worried that
> > > > dists will take a kernel enable aspm and there will be alot of
> > > > bugreports of non-booting systems or other weird issues... And the
> > > > culprit was known all along.
> > >
> > > There's clearly a problem on your system, but I don't know yet whether
> > > Linux is doing something wrong, a device in your system is designed
> > > incorrectly, or a device is designed correctly but the instance in
> > > your system is defective.
> >
> > According to the spec it is, there is a explanation of how to
> > calculate the exit latency
> > and when you implement that, which i did (before knowing the actual
> > spec) then it works...
> >
> > > > It's been five months...
> > >
> > > I apologize for the delay.  ASPM is a subtle area of PCIe, the Linux
> > > code is complicated, and we have a long history of issues with it.  I
> > > want to fix the problem, but I want to make sure we do it in a way
> > > that matches the spec so the fix applies to all systems.  I don't want
> > > a magic fix that fixes your system in a way I don't quite understand.
> >
> > > Obviously *you* understand this, so hopefully it's just a matter of
> > > pounding it through my thick skull :)
> >
> > I only understand what I've been forced to understand - and I do
> > leverage the existing code without
> > knowing what it does underneath, I only look at the links maximum
> > latency and make sure that I keep
> > the maximum latency along the path and not just link for link
> >
> > once you realise that the max allowed latency is buffer dependent -
> > then this becomes obviously correct,
> > and then the pcie spec showed it as being correct as well... so...
> >
> >
> > > > > [1] https://bugzilla.kernel.org/attachment.cgi?id=293047
> > > > > [2] https://lore.kernel.org/linux-pci/20201007132808.647589-1-ian.kumlien@gmail.com/
> > > > > [3] https://bugzilla.kernel.org/attachment.cgi?id=292955
> > > > > [4] https://bugzilla.kernel.org/attachment.cgi?id=292957
> > > > >
> > > > > > Signed-off-by: Ian Kumlien <ian.kumlien@gmail.com>
> > > > > > Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > > > > > ---
> > > > > >  drivers/pci/pcie/aspm.c | 22 ++++++++++++++--------
> > > > > >  1 file changed, 14 insertions(+), 8 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> > > > > > index 253c30cc1967..c03ead0f1013 100644
> > > > > > --- a/drivers/pci/pcie/aspm.c
> > > > > > +++ b/drivers/pci/pcie/aspm.c
> > > > > > @@ -434,7 +434,7 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
> > > > > >
> > > > > >  static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > >  {
> > > > > > -     u32 latency, l1_switch_latency = 0;
> > > > > > +     u32 latency, l1_max_latency = 0, l1_switch_latency = 0;
> > > > > >       struct aspm_latency *acceptable;
> > > > > >       struct pcie_link_state *link;
> > > > > >
> > > > > > @@ -456,10 +456,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > >               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
> > > > > >                   (link->latency_dw.l0s > acceptable->l0s))
> > > > > >                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> > > > > > +
> > > > > >               /*
> > > > > >                * Check L1 latency.
> > > > > > -              * Every switch on the path to root complex need 1
> > > > > > -              * more microsecond for L1. Spec doesn't mention L0s.
> > > > > > +              *
> > > > > > +              * PCIe r5.0, sec 5.4.1.2.2 states:
> > > > > > +              * A Switch is required to initiate an L1 exit transition on its
> > > > > > +              * Upstream Port Link after no more than 1 μs from the beginning of an
> > > > > > +              * L1 exit transition on any of its Downstream Port Links.
> > > > > >                *
> > > > > >                * The exit latencies for L1 substates are not advertised
> > > > > >                * by a device.  Since the spec also doesn't mention a way
> > > > > > @@ -469,11 +473,13 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > >                * L1 exit latencies advertised by a device include L1
> > > > > >                * substate latencies (and hence do not do any check).
> > > > > >                */
> > > > > > -             latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > > -             if ((link->aspm_capable & ASPM_STATE_L1) &&
> > > > > > -                 (latency + l1_switch_latency > acceptable->l1))
> > > > > > -                     link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > > -             l1_switch_latency += 1000;
> > > > > > +             if (link->aspm_capable & ASPM_STATE_L1) {
> > > > > > +                     latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > > +                     l1_max_latency = max_t(u32, latency, l1_max_latency);
> > > > > > +                     if (l1_max_latency + l1_switch_latency > acceptable->l1)
> > > > > > +                             link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > > +                     l1_switch_latency += 1000;
> > > > > > +             }
> > > > > >
> > > > > >               link = link->parent;
> > > > > >       }
> > > > > > --
> > > > > > 2.29.1
> > > > > >

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-14 15:47       ` Ian Kumlien
@ 2020-12-14 19:19         ` Bjorn Helgaas
  2020-12-14 22:56           ` Ian Kumlien
  0 siblings, 1 reply; 12+ messages in thread
From: Bjorn Helgaas @ 2020-12-14 19:19 UTC (permalink / raw)
  To: Ian Kumlien
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Mon, Dec 14, 2020 at 04:47:32PM +0100, Ian Kumlien wrote:
> On Mon, Dec 14, 2020 at 3:02 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Mon, Dec 14, 2020 at 10:14:18AM +0100, Ian Kumlien wrote:
> > > On Mon, Dec 14, 2020 at 6:44 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > >
> > > > [+cc Jesse, Tony, David, Jakub, Heiner, lists in case there's an ASPM
> > > > issue with I211 or Realtek NICs.  Beginning of thread:
> > > > https://lore.kernel.org/r/20201024205548.1837770-1-ian.kumlien@gmail.com
> > > >
> > > > Short story: Ian has:
> > > >
> > > >   Root Port --- Switch --- I211 NIC
> > > >                        \-- multifunction Realtek NIC, etc
> > > >
> > > > and the I211 performance is poor with ASPM L1 enabled on both links
> > > > in the path to it.  The patch here disables ASPM on the upstream link
> > > > and fixes the performance, but AFAICT the devices in that path give us
> > > > no reason to disable L1.  If I understand the spec correctly, the
> > > > Realtek device should not be relevant to the I211 path.]
> > > >
> > > > On Sun, Dec 13, 2020 at 10:39:53PM +0100, Ian Kumlien wrote:
> > > > > On Sun, Dec 13, 2020 at 12:47 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > > On Sat, Oct 24, 2020 at 10:55:46PM +0200, Ian Kumlien wrote:
> > > > > > > Make pcie_aspm_check_latency comply with the PCIe spec, specifically:
> > > > > > > "5.4.1.2.2. Exit from the L1 State"
> > > > > > >
> > > > > > > Which makes it clear that each switch is required to
> > > > > > > initiate a transition within 1μs from receiving it,
> > > > > > > accumulating this latency and then we have to wait for the
> > > > > > > slowest link along the path before entering L0 state from
> > > > > > > L1.
> > > > > > > ...
> > > > > >
> > > > > > > On my specific system:
> > > > > > > 03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
> > > > > > > 04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 816e (rev 1a)
> > > > > > >
> > > > > > >             Exit latency       Acceptable latency
> > > > > > > Tree:       L1       L0s       L1       L0s
> > > > > > > ----------  -------  -----     -------  ------
> > > > > > > 00:01.2     <32 us   -
> > > > > > > | 01:00.0   <32 us   -
> > > > > > > |- 02:03.0  <32 us   -
> > > > > > > | \03:00.0  <16 us   <2us      <64 us   <512ns
> > > > > > > |
> > > > > > > \- 02:04.0  <32 us   -
> > > > > > >   \04:00.0  <64 us   unlimited <64 us   <512ns
> > > > > > >
> > > > > > > 04:00.0's latency is the same as the maximum it allows so as
> > > > > > > we walk the path the first switchs startup latency will pass
> > > > > > > the acceptable latency limit for the link, and as a
> > > > > > > side-effect it fixes my issues with 03:00.0.
> > > > > > >
> > > > > > > Without this patch, 03:00.0 misbehaves and only gives me ~40
> > > > > > > mbit/s over links with 6 or more hops. With this patch I'm
> > > > > > > back to a maximum of ~933 mbit/s.
> > > > > >
> > > > > > There are two paths here that share a Link:
> > > > > >
> > > > > >   00:01.2 --- 01:00.0 -- 02:03.0 --- 03:00.0 I211 NIC
> > > > > >   00:01.2 --- 01:00.0 -- 02:04.0 --- 04:00.x multifunction Realtek
> > > > > >
> > > > > > 1) The path to the I211 NIC includes four Ports and two Links (the
> > > > > >    connection between 01:00.0 and 02:03.0 is internal Switch routing,
> > > > > >    not a Link).
> > > > >
> > > > > >    The Ports advertise L1 exit latencies of <32us, <32us, <32us,
> > > > > >    <16us.  If both Links are in L1 and 03:00.0 initiates L1 exit at T,
> > > > > >    01:00.0 initiates L1 exit at T + 1.  A TLP from 03:00.0 may see up
> > > > > >    to 1 + 32 = 33us of L1 exit latency.
> > > > > >
> > > > > >    The NIC can tolerate up to 64us of L1 exit latency, so it is safe
> > > > > >    to enable L1 for both Links.
> > > > > >
> > > > > > 2) The path to the Realtek device is similar except that the Realtek
> > > > > >    L1 exit latency is <64us.  If both Links are in L1 and 04:00.x
> > > > > >    initiates L1 exit at T, 01:00.0 again initiates L1 exit at T + 1,
> > > > > >    but a TLP from 04:00.x may see up to 1 + 64 = 65us of L1 exit
> > > > > >    latency.
> > > > > >
> > > > > >    The Realtek device can only tolerate 64us of latency, so it is not
> > > > > >    safe to enable L1 for both Links.  It should be safe to enable L1
> > > > > >    on the shared link because the exit latency for that link would be
> > > > > >    <32us.
> > > > >
> > > > > 04:00.0:
> > > > > DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
> > > > > LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s
> > > > > unlimited, L1 <64us
> > > > >
> > > > > So maximum latency for the entire link has to be <64 us
> > > > > For the device to leave L1 ASPM takes <64us
> > > > >
> > > > > So the device itself is the slowest entry along the link, which
> > > > > means that nothing else along that path can have ASPM enabled
> > > >
> > > > Yes.  That's what I said above: "it is not safe to enable L1 for both
> > > > Links."  Unless I'm missing something, we agree on that.
> > > >
> > > > I also said that it should be safe to enable L1 on the shared Link
> > > > (from 00:01.2 to 01:00.0) because if the downstream Link is always in
> > > > L0, the exit latency of the shared Link should be <32us, and 04:00.x
> > > > can tolerate 64us.
> > >
> > > Exit latency of shared link would be max of link, ie 64 + L1-hops, not 32
> >
> > I don't think this is true.  The path from 00:01.2 to 04:00.x includes
> > two Links, and they are independent.  The exit latency for each Link
> > depends only on the Port at each end:
> 
> The full path is what is important, because that is the actual latency
> (which the current linux code doesn't do)

I think you're saying we need to include the 04:00.x exit latency of
64us even though L1 is not enabled for 04:00.x.  I disagree; the L1
exit latency of Ports where L1 is disabled is irrelevant.  

> >   Link 1 (depends on 00:01.2 and 01:00.0): max(32, 32) = 32us
> >   Link 2 (depends on 02:04.0 and 04:00.x): max(32, 64) = 64us
> >
> > If L1 is enabled for Link 1 and disabled for Link 2, Link 2 will
> > remain in L0 so it has no L1 exit latency, and the exit latency of
> > the entire path should be 32us.
> 
> My patch disables this so yes.
> 
> > > > > > > The original code path did:
> > > > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > > > 02:04.0-01:00.0 max latency 32 +1 -> ok
> > > > > > > 01:00.0-00:01.2 max latency 32 +2 -> ok
> > > > > > >
> > > > > > > And thus didn't see any L1 ASPM latency issues.
> > > > > > >
> > > > > > > The new code does:
> > > > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > > > 02:04.0-01:00.0 max latency 64 +1 -> latency exceeded
> > > > > > > 01:00.0-00:01.2 max latency 64 +2 -> latency exceeded
> > > > > >
> > > > > > [Nit: I don't think we should add 1 for the 02:04.0 -- 01:00.0 piece
> > > > > > because that's internal Switch routing, not a Link.  But even without
> > > > > > that extra microsecond, this path does exceed the acceptable latency
> > > > > > since 1 + 64 = 65us, and 04:00.0 can only tolerate 64us.]
> > > > >
> > > > > It does report L1 ASPM on both ends, so the links will be counted as
> > > > > such in the code.
> > > >
> > > > This is a bit of a tangent and we shouldn't get too wrapped up in it.
> > > > This is a confusing aspect of PCIe.  We're talking about this path:
> > > >
> > > >   00:01.2 --- [01:00.0 -- 02:04.0] --- 04:00.x multifunction Realtek
> > > >
> > > > This path only contains two Links.  The first one is
> > > > 00:01.2 --- 01:00.0, and the second one is 02:04.0 --- 04:00.x.
> > > >
> > > > 01:00.0 is a Switch Upstream Port and 02:04.0 is a Switch Downstream
> > > > Port.  The connection between them is not a Link; it is some internal
> > > > wiring of the Switch that is completely opaque to software.
> > > >
> > > > The ASPM information and knobs in 01:00.0 apply to the Link on its
> > > > upstream side, and the ASPM info and knobs in 02:04.0 apply to the
> > > > Link on its downstream side.
> > > >
> > > > The example in sec 5.4.1.2.2 contains three Links.  The L1 exit latency
> > > > for the Link is the max of the exit latencies at each end:
> > > >
> > > >   Link 1: max(32, 8) = 32us
> > > >   Link 2: max(8, 32) = 32us
> > > >   Link 3: max(32, 8) = 32us
> > > >
> > > > The total delay for a TLP starting at the downstream end of Link 3
> > > > is 32 + 2 = 32us.
> > > >
> > > > In the path to your 04:00.x Realtek device:
> > > >
> > > >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> > > >   Link 2 (from 02:04.0 to 04:00.x): max(32, 64) = 64us
> > > >
> > > > If L1 were enabled on both Links, the exit latency would be 64 + 1 =
> > > > 65us.
> > >
> > > So one line to be removed from the changelog, i assume... And yes, the
> > > code handles that - first disable is 01:00.0 <-> 00:01.2
> > >
> > > > > I also assume that it can power down individual ports... and enter
> > > > > rest state if no links are up.
> > > >
> > > > I don't think this is quite true -- a Link can't enter L1 unless the
> > > > Ports on both ends have L1 enabled, so I don't think it makes sense to
> > > > talk about an individual Port being in L1.
> > > >
> > > > > > > It correctly identifies the issue.
> > > > > > >
> > > > > > > For reference, pcie information:
> > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > > > > >
> > > > > > The "lspci without my patches" [1] shows L1 enabled for the shared
> > > > > > Link from 00:01.2 --- 01:00.0 and for the Link to 03:00.0 (I211), but
> > > > > > not for the Link to 04:00.x (Realtek).
> > > > > >
> > > > > > Per my analysis above, that looks like it *should* be a safe
> > > > > > configuration.  03:00.0 can tolerate 64us, actual is <33us.  04:00.0
> > > > > > can tolerate 64us, actual should be <32us since only the shared Link
> > > > > > is in L1.
> > > > >
> > > > > See above.
> > > >
> > > > As I said above, if we enabled L1 only on the shared Link from 00:01.2
> > > > to 01:00.0, the exit latency should be acceptable.  In that case, a
> > > > TLP from 04:00.x would see only 32us of latency:
> > > >
> > > >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> > > >
> > > > and 04:00.x can tolerate 64us.
> > >
> > > But, again, you're completely ignoring the full link, ie 04:00.x would
> > > also have to power on.
> >
> > I think you're using "the full link" to refer to the entire path from
> > 00:01.2 to 04:00.x.  In PCIe, a "Link" directly connects two Ports.
> > It doesn't refer to the entire path.
> >
> > No, if L1 is disabled on 02:04.0 and 04:00.x (as Linux apparently does
> > by default), the Link between them never enters L1, so there is no
> > power-on for this Link.
> 
> It doesn't do it by default, my patch does

I'm relying on [1], your "lspci without my patches" attachment named
"lspci-5.9-mainline.txt", which shows:

  02:04.0 LnkCtl: ASPM Disabled
  04:00.0 LnkCtl: ASPM Disabled

so I assumed that was what Linux did by default.

> > > > > > However, the commit log at [2] shows L1 *enabled* for both
> > > > > > the shared Link from 00:01.2 --- 01:00.0 and the 02:04.0
> > > > > > --- 04:00.x Link, and that would definitely be a problem.
> > > > > >
> > > > > > Can you explain the differences between [1] and [2]?
> > > > >
> > > > > I don't understand which sections you're referring to.
> > > >
> > > > [1] is the "lspci without my patches" attachment of bugzilla #209725,
> > > > which is supposed to show the problem this patch solves.  We're
> > > > talking about the path to 04:00.x, and [1] show this:
> > > >
> > > >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> > > >   01:00.0 L1+
> > > >   02:04.0 L1-
> > > >   04:00.0 L1-
> > > >
> > > > AFAICT, that should be a legal configuration as far as 04:00.0 is
> > > > concerned, so it's not a reason for this patch.
> > >
> > > Actually, no, maximum path latency 64us
> > >
> > > 04:00.0 wakeup latency == 64us
> > >
> > > Again, as stated, it can't be behind any sleeping L1 links
> >
> > It would be pointless for a device to advertise L1 support if it could
> > never be used.  04:00.0 advertises that it can tolerate L1 latency of
> > 64us and that it can exit L1 in 64us or less.  So it *can* be behind a
> > Link in L1 as long as nothing else in the path adds more latency.
> 
> Yes, as long as nothing along the entire path adds latency - and I
> didn't make the component
> I can only say what it states, and we have to handle it.
> 
> > > > [2] is a previous posting of this same patch, and its commit log
> > > > includes information about the same path to 04:00.x, but the "LnkCtl
> > > > Before" column shows:
> > > >
> > > >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> > > >   01:00.0 L1+
> > > >   02:04.0 L1+
> > > >   04:00.0 L1+
> > > >
> > > > I don't know why [1] shows L1 disabled on the downstream Link, while
> > > > [2] shows L1 *enabled* on the same Link.
> > >
> > > From the data they look switched.
> > >
> > > > > > > Kai-Heng Feng has a machine that will not boot with ASPM without
> > > > > > > this patch, information is documented here:
> > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209671
> > > > > >
> > > > > > I started working through this info, too, but there's not
> > > > > > enough information to tell what difference this patch
> > > > > > makes.  The attachments compare:
> > > > > >
> > > > > >   1) CONFIG_PCIEASPM_DEFAULT=y without the patch [3] and
> > > > > >   2) CONFIG_PCIEASPM_POWERSAVE=y *with* the patch [4]
> > > > > >
> > > > > > Obviously CONFIG_PCIEASPM_POWERSAVE=y will configure
> > > > > > things differently than CONFIG_PCIEASPM_DEFAULT=y, so we
> > > > > > can't tell what changes are due to the config change and
> > > > > > what are due to the patch.
> > > > > >
> > > > > > The lspci *with* the patch ([4]) shows L0s and L1 enabled
> > > > > > at almost every possible place.  Here are the Links, how
> > > > > > they're configured, and my analysis of the exit latencies
> > > > > > vs acceptable latencies:
> > > > > >
> > > > > >   00:01.1 --- 01:00.0      L1+ (                  L1 <64us vs unl)
> > > > > >   00:01.2 --- 02:00.0      L1+ (                  L1 <64us vs 64us)
> > > > > >   00:01.3 --- 03:00.0      L1+ (                  L1 <64us vs 64us)
> > > > > >   00:01.4 --- 04:00.0      L1+ (                  L1 <64us vs unl)
> > > > > >   00:08.1 --- 05:00.x L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > > > >   00:08.2 --- 06:00.0 L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > > > >
> > > > > > So I can't tell what change prevents the freeze.  I would
> > > > > > expect the patch would cause us to *disable* L0s or L1
> > > > > > somewhere.
> > > > > >
> > > > > > The only place [4] shows ASPM disabled is for 05:00.1.
> > > > > > The spec says we should program the same value in all
> > > > > > functions of a multi-function device.  This is a non-ARI
> > > > > > device, so "only capabilities enabled in all functions are
> > > > > > enabled for the component as a whole."  That would mean
> > > > > > that L0s and L1 are effectively disabled for 05:00.x even
> > > > > > though 05:00.0 claims they're enabled.  But the latencies
> > > > > > say ASPM L0s and L1 should be safe to be enabled.  This
> > > > > > looks like another bug that's probably unrelated.
> > > > >
> > > > > I don't think it's unrelated, i suspect it's how PCIe works with
> > > > > multiple links...  a device can cause some kind of head of queue
> > > > > stalling - i don't know how but it really looks like it.
> > > >
> > > > The text in quotes above is straight out of the spec (PCIe r5.0, sec
> > > > 7.5.3.7).  Either the device works that way or it's not compliant.
> > > >
> > > > The OS configures ASPM based on the requirements and capabilities
> > > > advertised by the device.  If a device has any head of queue stalling
> > > > or similar issues, those must be comprehended in the numbers
> > > > advertised by the device.  It's not up to the OS to speculate about
> > > > issues like that.
> > > >
> > > > > > The patch might be correct; I haven't actually analyzed
> > > > > > the code.  But the commit log doesn't make sense to me
> > > > > > yet.
> > > > >
> > > > > I personally don't think that all this PCI information is required,
> > > > > the linux kernel is currently doing it wrong according to the spec.
> > > >
> > > > We're trying to establish exactly *what* Linux is doing wrong.  So far
> > > > we don't have a good explanation of that.
> > >
> > > Yes we do, linux counts hops + max for "link" while what should be done is
> > > counting hops + max for path
> >
> > I think you're saying we need to include L1 exit latency even for
> > Links where L1 is disabled.  I don't think we should include those.
> 
> Nope, the code does not do that, it only adds the l1 latency on L1
> enabled hops
> 
> > > > Based on [1], in the path to 03:00.0, both Links have L1 enabled, with
> > > > an exit latency of <33us, and 03:00.0 can tolerate 64us.  That should
> > > > work fine.
> > > >
> > > > Also based on [1], in the path to 04:00.x, the upstream Link has L1
> > > > enabled and the downstream Link has L1 disabled, for an exit latency
> > > > of <32us, and 04:00.0 can tolerate 64us.  That should also work fine.
> > >
> > > Again, ignoring the exit latency for 04:00.0
> > >
> > > > (Alternately, disabling L1 on the upstream Link and enabling it on the
> > > > downstream Link should have an exit latency of <64us and 04:00.0 can
> > > > tolerate 64us, so that should work fine, too.)
> > >
> > > Then nothing else can have L1 aspm enabled
> >
> > Yes, as I said, we should be able to enable L1 on either of the Links
> > in the path to 04:00.x, but not both.
> 
> The code works backwards and disables the first hop that exceeds the
> latency requirements -
> we could argue that it should try to be smarter about it and try to
> disable a minimum amount of links
> while still retaining the minimum latency but... It is what it is and
> it works when patched.
> 
> > The original problem here is not with the Realtek device at 04:00.x
> > but with the I211 NIC at 03:00.0.  So we also need to figure out what
> > the connection is.  Does the same I211 performance problem occur if
> > you remove the Realtek device from the system?
> 
> It's mounted on the motherboard, so no I can't remove it.

If you're interested, you could probably unload the Realtek drivers,
remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
in 02:04.0, e.g.,

  # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
  # echo 1 > $RT/0000:04:00.0/remove
  # echo 1 > $RT/0000:04:00.1/remove
  # echo 1 > $RT/0000:04:00.2/remove
  # echo 1 > $RT/0000:04:00.4/remove
  # echo 1 > $RT/0000:04:00.7/remove
  # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010

That should take 04:00.x out of the picture.

> > 03:00.0 can tolerate 64us of latency, so even if L1 is enabled on both
> > Links leading to it, the path exit latency would be <33us, which
> > should be fine.
> 
> Yes, it "should be" but due to broken ASPM latency calculations we
> have some kind of
> side effect that triggers a racecondition/sideeffect/bug that causes
> it to misbehave.
> 
> Since fixing the latency calculation fixes it, I'll leave the rest to
> someone with a logic
> analyzer and a die-hard-fetish for pcie links - I can't debug it.
> 
> > > > > Also, since it's clearly doing the wrong thing, I'm worried that
> > > > > dists will take a kernel enable aspm and there will be alot of
> > > > > bugreports of non-booting systems or other weird issues... And the
> > > > > culprit was known all along.
> > > >
> > > > There's clearly a problem on your system, but I don't know yet whether
> > > > Linux is doing something wrong, a device in your system is designed
> > > > incorrectly, or a device is designed correctly but the instance in
> > > > your system is defective.
> > >
> > > According to the spec it is, there is a explanation of how to
> > > calculate the exit latency
> > > and when you implement that, which i did (before knowing the actual
> > > spec) then it works...
> > >
> > > > > It's been five months...
> > > >
> > > > I apologize for the delay.  ASPM is a subtle area of PCIe, the Linux
> > > > code is complicated, and we have a long history of issues with it.  I
> > > > want to fix the problem, but I want to make sure we do it in a way
> > > > that matches the spec so the fix applies to all systems.  I don't want
> > > > a magic fix that fixes your system in a way I don't quite understand.
> > >
> > > > Obviously *you* understand this, so hopefully it's just a matter of
> > > > pounding it through my thick skull :)
> > >
> > > I only understand what I've been forced to understand - and I do
> > > leverage the existing code without
> > > knowing what it does underneath, I only look at the links maximum
> > > latency and make sure that I keep
> > > the maximum latency along the path and not just link for link
> > >
> > > once you realise that the max allowed latency is buffer dependent -
> > > then this becomes obviously correct,
> > > and then the pcie spec showed it as being correct as well... so...
> > >
> > >
> > > > > > [1] https://bugzilla.kernel.org/attachment.cgi?id=293047
> > > > > > [2] https://lore.kernel.org/linux-pci/20201007132808.647589-1-ian.kumlien@gmail.com/
> > > > > > [3] https://bugzilla.kernel.org/attachment.cgi?id=292955
> > > > > > [4] https://bugzilla.kernel.org/attachment.cgi?id=292957
> > > > > >
> > > > > > > Signed-off-by: Ian Kumlien <ian.kumlien@gmail.com>
> > > > > > > Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > > > > > > ---
> > > > > > >  drivers/pci/pcie/aspm.c | 22 ++++++++++++++--------
> > > > > > >  1 file changed, 14 insertions(+), 8 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> > > > > > > index 253c30cc1967..c03ead0f1013 100644
> > > > > > > --- a/drivers/pci/pcie/aspm.c
> > > > > > > +++ b/drivers/pci/pcie/aspm.c
> > > > > > > @@ -434,7 +434,7 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
> > > > > > >
> > > > > > >  static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > > >  {
> > > > > > > -     u32 latency, l1_switch_latency = 0;
> > > > > > > +     u32 latency, l1_max_latency = 0, l1_switch_latency = 0;
> > > > > > >       struct aspm_latency *acceptable;
> > > > > > >       struct pcie_link_state *link;
> > > > > > >
> > > > > > > @@ -456,10 +456,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > > >               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
> > > > > > >                   (link->latency_dw.l0s > acceptable->l0s))
> > > > > > >                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> > > > > > > +
> > > > > > >               /*
> > > > > > >                * Check L1 latency.
> > > > > > > -              * Every switch on the path to root complex need 1
> > > > > > > -              * more microsecond for L1. Spec doesn't mention L0s.
> > > > > > > +              *
> > > > > > > +              * PCIe r5.0, sec 5.4.1.2.2 states:
> > > > > > > +              * A Switch is required to initiate an L1 exit transition on its
> > > > > > > +              * Upstream Port Link after no more than 1 μs from the beginning of an
> > > > > > > +              * L1 exit transition on any of its Downstream Port Links.
> > > > > > >                *
> > > > > > >                * The exit latencies for L1 substates are not advertised
> > > > > > >                * by a device.  Since the spec also doesn't mention a way
> > > > > > > @@ -469,11 +473,13 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > > >                * L1 exit latencies advertised by a device include L1
> > > > > > >                * substate latencies (and hence do not do any check).
> > > > > > >                */
> > > > > > > -             latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > > > -             if ((link->aspm_capable & ASPM_STATE_L1) &&
> > > > > > > -                 (latency + l1_switch_latency > acceptable->l1))
> > > > > > > -                     link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > > > -             l1_switch_latency += 1000;
> > > > > > > +             if (link->aspm_capable & ASPM_STATE_L1) {
> > > > > > > +                     latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > > > +                     l1_max_latency = max_t(u32, latency, l1_max_latency);
> > > > > > > +                     if (l1_max_latency + l1_switch_latency > acceptable->l1)
> > > > > > > +                             link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > > > +                     l1_switch_latency += 1000;
> > > > > > > +             }
> > > > > > >
> > > > > > >               link = link->parent;
> > > > > > >       }
> > > > > > > --
> > > > > > > 2.29.1
> > > > > > >

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-14 19:19         ` Bjorn Helgaas
@ 2020-12-14 22:56           ` Ian Kumlien
  2020-12-15  0:40             ` Bjorn Helgaas
  0 siblings, 1 reply; 12+ messages in thread
From: Ian Kumlien @ 2020-12-14 22:56 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Mon, Dec 14, 2020 at 8:19 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Mon, Dec 14, 2020 at 04:47:32PM +0100, Ian Kumlien wrote:
> > On Mon, Dec 14, 2020 at 3:02 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Mon, Dec 14, 2020 at 10:14:18AM +0100, Ian Kumlien wrote:
> > > > On Mon, Dec 14, 2020 at 6:44 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > >
> > > > > [+cc Jesse, Tony, David, Jakub, Heiner, lists in case there's an ASPM
> > > > > issue with I211 or Realtek NICs.  Beginning of thread:
> > > > > https://lore.kernel.org/r/20201024205548.1837770-1-ian.kumlien@gmail.com
> > > > >
> > > > > Short story: Ian has:
> > > > >
> > > > >   Root Port --- Switch --- I211 NIC
> > > > >                        \-- multifunction Realtek NIC, etc
> > > > >
> > > > > and the I211 performance is poor with ASPM L1 enabled on both links
> > > > > in the path to it.  The patch here disables ASPM on the upstream link
> > > > > and fixes the performance, but AFAICT the devices in that path give us
> > > > > no reason to disable L1.  If I understand the spec correctly, the
> > > > > Realtek device should not be relevant to the I211 path.]
> > > > >
> > > > > On Sun, Dec 13, 2020 at 10:39:53PM +0100, Ian Kumlien wrote:
> > > > > > On Sun, Dec 13, 2020 at 12:47 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > > > On Sat, Oct 24, 2020 at 10:55:46PM +0200, Ian Kumlien wrote:
> > > > > > > > Make pcie_aspm_check_latency comply with the PCIe spec, specifically:
> > > > > > > > "5.4.1.2.2. Exit from the L1 State"
> > > > > > > >
> > > > > > > > Which makes it clear that each switch is required to
> > > > > > > > initiate a transition within 1μs from receiving it,
> > > > > > > > accumulating this latency and then we have to wait for the
> > > > > > > > slowest link along the path before entering L0 state from
> > > > > > > > L1.
> > > > > > > > ...
> > > > > > >
> > > > > > > > On my specific system:
> > > > > > > > 03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
> > > > > > > > 04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 816e (rev 1a)
> > > > > > > >
> > > > > > > >             Exit latency       Acceptable latency
> > > > > > > > Tree:       L1       L0s       L1       L0s
> > > > > > > > ----------  -------  -----     -------  ------
> > > > > > > > 00:01.2     <32 us   -
> > > > > > > > | 01:00.0   <32 us   -
> > > > > > > > |- 02:03.0  <32 us   -
> > > > > > > > | \03:00.0  <16 us   <2us      <64 us   <512ns
> > > > > > > > |
> > > > > > > > \- 02:04.0  <32 us   -
> > > > > > > >   \04:00.0  <64 us   unlimited <64 us   <512ns
> > > > > > > >
> > > > > > > > 04:00.0's latency is the same as the maximum it allows so as
> > > > > > > > we walk the path the first switchs startup latency will pass
> > > > > > > > the acceptable latency limit for the link, and as a
> > > > > > > > side-effect it fixes my issues with 03:00.0.
> > > > > > > >
> > > > > > > > Without this patch, 03:00.0 misbehaves and only gives me ~40
> > > > > > > > mbit/s over links with 6 or more hops. With this patch I'm
> > > > > > > > back to a maximum of ~933 mbit/s.
> > > > > > >
> > > > > > > There are two paths here that share a Link:
> > > > > > >
> > > > > > >   00:01.2 --- 01:00.0 -- 02:03.0 --- 03:00.0 I211 NIC
> > > > > > >   00:01.2 --- 01:00.0 -- 02:04.0 --- 04:00.x multifunction Realtek
> > > > > > >
> > > > > > > 1) The path to the I211 NIC includes four Ports and two Links (the
> > > > > > >    connection between 01:00.0 and 02:03.0 is internal Switch routing,
> > > > > > >    not a Link).
> > > > > >
> > > > > > >    The Ports advertise L1 exit latencies of <32us, <32us, <32us,
> > > > > > >    <16us.  If both Links are in L1 and 03:00.0 initiates L1 exit at T,
> > > > > > >    01:00.0 initiates L1 exit at T + 1.  A TLP from 03:00.0 may see up
> > > > > > >    to 1 + 32 = 33us of L1 exit latency.
> > > > > > >
> > > > > > >    The NIC can tolerate up to 64us of L1 exit latency, so it is safe
> > > > > > >    to enable L1 for both Links.
> > > > > > >
> > > > > > > 2) The path to the Realtek device is similar except that the Realtek
> > > > > > >    L1 exit latency is <64us.  If both Links are in L1 and 04:00.x
> > > > > > >    initiates L1 exit at T, 01:00.0 again initiates L1 exit at T + 1,
> > > > > > >    but a TLP from 04:00.x may see up to 1 + 64 = 65us of L1 exit
> > > > > > >    latency.
> > > > > > >
> > > > > > >    The Realtek device can only tolerate 64us of latency, so it is not
> > > > > > >    safe to enable L1 for both Links.  It should be safe to enable L1
> > > > > > >    on the shared link because the exit latency for that link would be
> > > > > > >    <32us.
> > > > > >
> > > > > > 04:00.0:
> > > > > > DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
> > > > > > LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s
> > > > > > unlimited, L1 <64us
> > > > > >
> > > > > > So maximum latency for the entire link has to be <64 us
> > > > > > For the device to leave L1 ASPM takes <64us
> > > > > >
> > > > > > So the device itself is the slowest entry along the link, which
> > > > > > means that nothing else along that path can have ASPM enabled
> > > > >
> > > > > Yes.  That's what I said above: "it is not safe to enable L1 for both
> > > > > Links."  Unless I'm missing something, we agree on that.
> > > > >
> > > > > I also said that it should be safe to enable L1 on the shared Link
> > > > > (from 00:01.2 to 01:00.0) because if the downstream Link is always in
> > > > > L0, the exit latency of the shared Link should be <32us, and 04:00.x
> > > > > can tolerate 64us.
> > > >
> > > > Exit latency of shared link would be max of link, ie 64 + L1-hops, not 32
> > >
> > > I don't think this is true.  The path from 00:01.2 to 04:00.x includes
> > > two Links, and they are independent.  The exit latency for each Link
> > > depends only on the Port at each end:
> >
> > The full path is what is important, because that is the actual latency
> > (which the current linux code doesn't do)
>
> I think you're saying we need to include the 04:00.x exit latency of
> 64us even though L1 is not enabled for 04:00.x.  I disagree; the L1
> exit latency of Ports where L1 is disabled is irrelevant.

I will redo the without patch and look again, I know that I have to
wait a while for it to happen.

Witch patch 3 i get:
dec 14 13:44:40 localhost kernel: pci 0000:04:00.0: ASPM latency
exceeded, disabling: L1:0000:01:00.0-0000:00:01.2

And it should only check links that has L1 aspm enabled, as per the
original code.

> > >   Link 1 (depends on 00:01.2 and 01:00.0): max(32, 32) = 32us
> > >   Link 2 (depends on 02:04.0 and 04:00.x): max(32, 64) = 64us
> > >
> > > If L1 is enabled for Link 1 and disabled for Link 2, Link 2 will
> > > remain in L0 so it has no L1 exit latency, and the exit latency of
> > > the entire path should be 32us.
> >
> > My patch disables this so yes.
> >
> > > > > > > > The original code path did:
> > > > > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > > > > 02:04.0-01:00.0 max latency 32 +1 -> ok
> > > > > > > > 01:00.0-00:01.2 max latency 32 +2 -> ok
> > > > > > > >
> > > > > > > > And thus didn't see any L1 ASPM latency issues.
> > > > > > > >
> > > > > > > > The new code does:
> > > > > > > > 04:00:0-02:04.0 max latency 64    -> ok
> > > > > > > > 02:04.0-01:00.0 max latency 64 +1 -> latency exceeded
> > > > > > > > 01:00.0-00:01.2 max latency 64 +2 -> latency exceeded
> > > > > > >
> > > > > > > [Nit: I don't think we should add 1 for the 02:04.0 -- 01:00.0 piece
> > > > > > > because that's internal Switch routing, not a Link.  But even without
> > > > > > > that extra microsecond, this path does exceed the acceptable latency
> > > > > > > since 1 + 64 = 65us, and 04:00.0 can only tolerate 64us.]
> > > > > >
> > > > > > It does report L1 ASPM on both ends, so the links will be counted as
> > > > > > such in the code.
> > > > >
> > > > > This is a bit of a tangent and we shouldn't get too wrapped up in it.
> > > > > This is a confusing aspect of PCIe.  We're talking about this path:
> > > > >
> > > > >   00:01.2 --- [01:00.0 -- 02:04.0] --- 04:00.x multifunction Realtek
> > > > >
> > > > > This path only contains two Links.  The first one is
> > > > > 00:01.2 --- 01:00.0, and the second one is 02:04.0 --- 04:00.x.
> > > > >
> > > > > 01:00.0 is a Switch Upstream Port and 02:04.0 is a Switch Downstream
> > > > > Port.  The connection between them is not a Link; it is some internal
> > > > > wiring of the Switch that is completely opaque to software.
> > > > >
> > > > > The ASPM information and knobs in 01:00.0 apply to the Link on its
> > > > > upstream side, and the ASPM info and knobs in 02:04.0 apply to the
> > > > > Link on its downstream side.
> > > > >
> > > > > The example in sec 5.4.1.2.2 contains three Links.  The L1 exit latency
> > > > > for the Link is the max of the exit latencies at each end:
> > > > >
> > > > >   Link 1: max(32, 8) = 32us
> > > > >   Link 2: max(8, 32) = 32us
> > > > >   Link 3: max(32, 8) = 32us
> > > > >
> > > > > The total delay for a TLP starting at the downstream end of Link 3
> > > > > is 32 + 2 = 32us.
> > > > >
> > > > > In the path to your 04:00.x Realtek device:
> > > > >
> > > > >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> > > > >   Link 2 (from 02:04.0 to 04:00.x): max(32, 64) = 64us
> > > > >
> > > > > If L1 were enabled on both Links, the exit latency would be 64 + 1 =
> > > > > 65us.
> > > >
> > > > So one line to be removed from the changelog, i assume... And yes, the
> > > > code handles that - first disable is 01:00.0 <-> 00:01.2
> > > >
> > > > > > I also assume that it can power down individual ports... and enter
> > > > > > rest state if no links are up.
> > > > >
> > > > > I don't think this is quite true -- a Link can't enter L1 unless the
> > > > > Ports on both ends have L1 enabled, so I don't think it makes sense to
> > > > > talk about an individual Port being in L1.
> > > > >
> > > > > > > > It correctly identifies the issue.
> > > > > > > >
> > > > > > > > For reference, pcie information:
> > > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > > > > > >
> > > > > > > The "lspci without my patches" [1] shows L1 enabled for the shared
> > > > > > > Link from 00:01.2 --- 01:00.0 and for the Link to 03:00.0 (I211), but
> > > > > > > not for the Link to 04:00.x (Realtek).
> > > > > > >
> > > > > > > Per my analysis above, that looks like it *should* be a safe
> > > > > > > configuration.  03:00.0 can tolerate 64us, actual is <33us.  04:00.0
> > > > > > > can tolerate 64us, actual should be <32us since only the shared Link
> > > > > > > is in L1.
> > > > > >
> > > > > > See above.
> > > > >
> > > > > As I said above, if we enabled L1 only on the shared Link from 00:01.2
> > > > > to 01:00.0, the exit latency should be acceptable.  In that case, a
> > > > > TLP from 04:00.x would see only 32us of latency:
> > > > >
> > > > >   Link 1 (from 00:01.2 to 01:00.0): max(32, 32) = 32us
> > > > >
> > > > > and 04:00.x can tolerate 64us.
> > > >
> > > > But, again, you're completely ignoring the full link, ie 04:00.x would
> > > > also have to power on.
> > >
> > > I think you're using "the full link" to refer to the entire path from
> > > 00:01.2 to 04:00.x.  In PCIe, a "Link" directly connects two Ports.
> > > It doesn't refer to the entire path.
> > >
> > > No, if L1 is disabled on 02:04.0 and 04:00.x (as Linux apparently does
> > > by default), the Link between them never enters L1, so there is no
> > > power-on for this Link.
> >
> > It doesn't do it by default, my patch does
>
> I'm relying on [1], your "lspci without my patches" attachment named
> "lspci-5.9-mainline.txt", which shows:
>
>   02:04.0 LnkCtl: ASPM Disabled
>   04:00.0 LnkCtl: ASPM Disabled
>
> so I assumed that was what Linux did by default.

Interesting, they are disabled.

> > > > > > > However, the commit log at [2] shows L1 *enabled* for both
> > > > > > > the shared Link from 00:01.2 --- 01:00.0 and the 02:04.0
> > > > > > > --- 04:00.x Link, and that would definitely be a problem.
> > > > > > >
> > > > > > > Can you explain the differences between [1] and [2]?
> > > > > >
> > > > > > I don't understand which sections you're referring to.
> > > > >
> > > > > [1] is the "lspci without my patches" attachment of bugzilla #209725,
> > > > > which is supposed to show the problem this patch solves.  We're
> > > > > talking about the path to 04:00.x, and [1] show this:
> > > > >
> > > > >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> > > > >   01:00.0 L1+
> > > > >   02:04.0 L1-
> > > > >   04:00.0 L1-
> > > > >
> > > > > AFAICT, that should be a legal configuration as far as 04:00.0 is
> > > > > concerned, so it's not a reason for this patch.
> > > >
> > > > Actually, no, maximum path latency 64us
> > > >
> > > > 04:00.0 wakeup latency == 64us
> > > >
> > > > Again, as stated, it can't be behind any sleeping L1 links
> > >
> > > It would be pointless for a device to advertise L1 support if it could
> > > never be used.  04:00.0 advertises that it can tolerate L1 latency of
> > > 64us and that it can exit L1 in 64us or less.  So it *can* be behind a
> > > Link in L1 as long as nothing else in the path adds more latency.
> >
> > Yes, as long as nothing along the entire path adds latency - and I
> > didn't make the component
> > I can only say what it states, and we have to handle it.
> >
> > > > > [2] is a previous posting of this same patch, and its commit log
> > > > > includes information about the same path to 04:00.x, but the "LnkCtl
> > > > > Before" column shows:
> > > > >
> > > > >   01:00.2 L1+               # <-- my typo here, should be 00:01.2
> > > > >   01:00.0 L1+
> > > > >   02:04.0 L1+
> > > > >   04:00.0 L1+
> > > > >
> > > > > I don't know why [1] shows L1 disabled on the downstream Link, while
> > > > > [2] shows L1 *enabled* on the same Link.
> > > >
> > > > From the data they look switched.
> > > >
> > > > > > > > Kai-Heng Feng has a machine that will not boot with ASPM without
> > > > > > > > this patch, information is documented here:
> > > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209671
> > > > > > >
> > > > > > > I started working through this info, too, but there's not
> > > > > > > enough information to tell what difference this patch
> > > > > > > makes.  The attachments compare:
> > > > > > >
> > > > > > >   1) CONFIG_PCIEASPM_DEFAULT=y without the patch [3] and
> > > > > > >   2) CONFIG_PCIEASPM_POWERSAVE=y *with* the patch [4]
> > > > > > >
> > > > > > > Obviously CONFIG_PCIEASPM_POWERSAVE=y will configure
> > > > > > > things differently than CONFIG_PCIEASPM_DEFAULT=y, so we
> > > > > > > can't tell what changes are due to the config change and
> > > > > > > what are due to the patch.
> > > > > > >
> > > > > > > The lspci *with* the patch ([4]) shows L0s and L1 enabled
> > > > > > > at almost every possible place.  Here are the Links, how
> > > > > > > they're configured, and my analysis of the exit latencies
> > > > > > > vs acceptable latencies:
> > > > > > >
> > > > > > >   00:01.1 --- 01:00.0      L1+ (                  L1 <64us vs unl)
> > > > > > >   00:01.2 --- 02:00.0      L1+ (                  L1 <64us vs 64us)
> > > > > > >   00:01.3 --- 03:00.0      L1+ (                  L1 <64us vs 64us)
> > > > > > >   00:01.4 --- 04:00.0      L1+ (                  L1 <64us vs unl)
> > > > > > >   00:08.1 --- 05:00.x L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > > > > >   00:08.2 --- 06:00.0 L0s+ L1+ (L0s <64ns vs 4us, L1  <1us vs unl)
> > > > > > >
> > > > > > > So I can't tell what change prevents the freeze.  I would
> > > > > > > expect the patch would cause us to *disable* L0s or L1
> > > > > > > somewhere.
> > > > > > >
> > > > > > > The only place [4] shows ASPM disabled is for 05:00.1.
> > > > > > > The spec says we should program the same value in all
> > > > > > > functions of a multi-function device.  This is a non-ARI
> > > > > > > device, so "only capabilities enabled in all functions are
> > > > > > > enabled for the component as a whole."  That would mean
> > > > > > > that L0s and L1 are effectively disabled for 05:00.x even
> > > > > > > though 05:00.0 claims they're enabled.  But the latencies
> > > > > > > say ASPM L0s and L1 should be safe to be enabled.  This
> > > > > > > looks like another bug that's probably unrelated.
> > > > > >
> > > > > > I don't think it's unrelated, i suspect it's how PCIe works with
> > > > > > multiple links...  a device can cause some kind of head of queue
> > > > > > stalling - i don't know how but it really looks like it.
> > > > >
> > > > > The text in quotes above is straight out of the spec (PCIe r5.0, sec
> > > > > 7.5.3.7).  Either the device works that way or it's not compliant.
> > > > >
> > > > > The OS configures ASPM based on the requirements and capabilities
> > > > > advertised by the device.  If a device has any head of queue stalling
> > > > > or similar issues, those must be comprehended in the numbers
> > > > > advertised by the device.  It's not up to the OS to speculate about
> > > > > issues like that.
> > > > >
> > > > > > > The patch might be correct; I haven't actually analyzed
> > > > > > > the code.  But the commit log doesn't make sense to me
> > > > > > > yet.
> > > > > >
> > > > > > I personally don't think that all this PCI information is required,
> > > > > > the linux kernel is currently doing it wrong according to the spec.
> > > > >
> > > > > We're trying to establish exactly *what* Linux is doing wrong.  So far
> > > > > we don't have a good explanation of that.
> > > >
> > > > Yes we do, linux counts hops + max for "link" while what should be done is
> > > > counting hops + max for path
> > >
> > > I think you're saying we need to include L1 exit latency even for
> > > Links where L1 is disabled.  I don't think we should include those.
> >
> > Nope, the code does not do that, it only adds the l1 latency on L1
> > enabled hops
> >
> > > > > Based on [1], in the path to 03:00.0, both Links have L1 enabled, with
> > > > > an exit latency of <33us, and 03:00.0 can tolerate 64us.  That should
> > > > > work fine.
> > > > >
> > > > > Also based on [1], in the path to 04:00.x, the upstream Link has L1
> > > > > enabled and the downstream Link has L1 disabled, for an exit latency
> > > > > of <32us, and 04:00.0 can tolerate 64us.  That should also work fine.
> > > >
> > > > Again, ignoring the exit latency for 04:00.0
> > > >
> > > > > (Alternately, disabling L1 on the upstream Link and enabling it on the
> > > > > downstream Link should have an exit latency of <64us and 04:00.0 can
> > > > > tolerate 64us, so that should work fine, too.)
> > > >
> > > > Then nothing else can have L1 aspm enabled
> > >
> > > Yes, as I said, we should be able to enable L1 on either of the Links
> > > in the path to 04:00.x, but not both.
> >
> > The code works backwards and disables the first hop that exceeds the
> > latency requirements -
> > we could argue that it should try to be smarter about it and try to
> > disable a minimum amount of links
> > while still retaining the minimum latency but... It is what it is and
> > it works when patched.
> >
> > > The original problem here is not with the Realtek device at 04:00.x
> > > but with the I211 NIC at 03:00.0.  So we also need to figure out what
> > > the connection is.  Does the same I211 performance problem occur if
> > > you remove the Realtek device from the system?
> >
> > It's mounted on the motherboard, so no I can't remove it.
>
> If you're interested, you could probably unload the Realtek drivers,
> remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
> in 02:04.0, e.g.,
>
>   # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
>   # echo 1 > $RT/0000:04:00.0/remove
>   # echo 1 > $RT/0000:04:00.1/remove
>   # echo 1 > $RT/0000:04:00.2/remove
>   # echo 1 > $RT/0000:04:00.4/remove
>   # echo 1 > $RT/0000:04:00.7/remove
>   # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010
>
> That should take 04:00.x out of the picture.

Didn't actually change the behaviour, I'm suspecting an errata for AMD pcie...

So did this, with unpatched kernel:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.56 MBytes  38.2 Mbits/sec    0   67.9 KBytes
[  5]   1.00-2.00   sec  4.47 MBytes  37.5 Mbits/sec    0   96.2 KBytes
[  5]   2.00-3.00   sec  4.85 MBytes  40.7 Mbits/sec    0   50.9 KBytes
[  5]   3.00-4.00   sec  4.23 MBytes  35.4 Mbits/sec    0   70.7 KBytes
[  5]   4.00-5.00   sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
[  5]   5.00-6.00   sec  4.23 MBytes  35.4 Mbits/sec    0   45.2 KBytes
[  5]   6.00-7.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
[  5]   7.00-8.00   sec  3.98 MBytes  33.4 Mbits/sec    0   36.8 KBytes
[  5]   8.00-9.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
[  5]   9.00-10.00  sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  43.2 MBytes  36.2 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  42.7 MBytes  35.8 Mbits/sec                  receiver

and:
echo 0 > /sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/link/l1_aspm

and:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec  153    772 KBytes
[  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  276    550 KBytes
[  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  123    625 KBytes
[  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec   31    687 KBytes
[  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0    679 KBytes
[  5]   5.00-6.00   sec   110 MBytes   923 Mbits/sec  136    577 KBytes
[  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec  214    645 KBytes
[  5]   7.00-8.00   sec   110 MBytes   923 Mbits/sec   32    628 KBytes
[  5]   8.00-9.00   sec   110 MBytes   923 Mbits/sec   81    537 KBytes
[  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec   10    577 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  1056             sender
[  5]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver

But this only confirms that the fix i experience is a side effect.

The original code is still wrong :)

> > > 03:00.0 can tolerate 64us of latency, so even if L1 is enabled on both
> > > Links leading to it, the path exit latency would be <33us, which
> > > should be fine.
> >
> > Yes, it "should be" but due to broken ASPM latency calculations we
> > have some kind of
> > side effect that triggers a racecondition/sideeffect/bug that causes
> > it to misbehave.
> >
> > Since fixing the latency calculation fixes it, I'll leave the rest to
> > someone with a logic
> > analyzer and a die-hard-fetish for pcie links - I can't debug it.
> >
> > > > > > Also, since it's clearly doing the wrong thing, I'm worried that
> > > > > > dists will take a kernel enable aspm and there will be alot of
> > > > > > bugreports of non-booting systems or other weird issues... And the
> > > > > > culprit was known all along.
> > > > >
> > > > > There's clearly a problem on your system, but I don't know yet whether
> > > > > Linux is doing something wrong, a device in your system is designed
> > > > > incorrectly, or a device is designed correctly but the instance in
> > > > > your system is defective.
> > > >
> > > > According to the spec it is, there is a explanation of how to
> > > > calculate the exit latency
> > > > and when you implement that, which i did (before knowing the actual
> > > > spec) then it works...
> > > >
> > > > > > It's been five months...
> > > > >
> > > > > I apologize for the delay.  ASPM is a subtle area of PCIe, the Linux
> > > > > code is complicated, and we have a long history of issues with it.  I
> > > > > want to fix the problem, but I want to make sure we do it in a way
> > > > > that matches the spec so the fix applies to all systems.  I don't want
> > > > > a magic fix that fixes your system in a way I don't quite understand.
> > > >
> > > > > Obviously *you* understand this, so hopefully it's just a matter of
> > > > > pounding it through my thick skull :)
> > > >
> > > > I only understand what I've been forced to understand - and I do
> > > > leverage the existing code without
> > > > knowing what it does underneath, I only look at the links maximum
> > > > latency and make sure that I keep
> > > > the maximum latency along the path and not just link for link
> > > >
> > > > once you realise that the max allowed latency is buffer dependent -
> > > > then this becomes obviously correct,
> > > > and then the pcie spec showed it as being correct as well... so...
> > > >
> > > >
> > > > > > > [1] https://bugzilla.kernel.org/attachment.cgi?id=293047
> > > > > > > [2] https://lore.kernel.org/linux-pci/20201007132808.647589-1-ian.kumlien@gmail.com/
> > > > > > > [3] https://bugzilla.kernel.org/attachment.cgi?id=292955
> > > > > > > [4] https://bugzilla.kernel.org/attachment.cgi?id=292957
> > > > > > >
> > > > > > > > Signed-off-by: Ian Kumlien <ian.kumlien@gmail.com>
> > > > > > > > Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > > > > > > > ---
> > > > > > > >  drivers/pci/pcie/aspm.c | 22 ++++++++++++++--------
> > > > > > > >  1 file changed, 14 insertions(+), 8 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> > > > > > > > index 253c30cc1967..c03ead0f1013 100644
> > > > > > > > --- a/drivers/pci/pcie/aspm.c
> > > > > > > > +++ b/drivers/pci/pcie/aspm.c
> > > > > > > > @@ -434,7 +434,7 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
> > > > > > > >
> > > > > > > >  static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > > > >  {
> > > > > > > > -     u32 latency, l1_switch_latency = 0;
> > > > > > > > +     u32 latency, l1_max_latency = 0, l1_switch_latency = 0;
> > > > > > > >       struct aspm_latency *acceptable;
> > > > > > > >       struct pcie_link_state *link;
> > > > > > > >
> > > > > > > > @@ -456,10 +456,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > > > >               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
> > > > > > > >                   (link->latency_dw.l0s > acceptable->l0s))
> > > > > > > >                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> > > > > > > > +
> > > > > > > >               /*
> > > > > > > >                * Check L1 latency.
> > > > > > > > -              * Every switch on the path to root complex need 1
> > > > > > > > -              * more microsecond for L1. Spec doesn't mention L0s.
> > > > > > > > +              *
> > > > > > > > +              * PCIe r5.0, sec 5.4.1.2.2 states:
> > > > > > > > +              * A Switch is required to initiate an L1 exit transition on its
> > > > > > > > +              * Upstream Port Link after no more than 1 μs from the beginning of an
> > > > > > > > +              * L1 exit transition on any of its Downstream Port Links.
> > > > > > > >                *
> > > > > > > >                * The exit latencies for L1 substates are not advertised
> > > > > > > >                * by a device.  Since the spec also doesn't mention a way
> > > > > > > > @@ -469,11 +473,13 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
> > > > > > > >                * L1 exit latencies advertised by a device include L1
> > > > > > > >                * substate latencies (and hence do not do any check).
> > > > > > > >                */
> > > > > > > > -             latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > > > > -             if ((link->aspm_capable & ASPM_STATE_L1) &&
> > > > > > > > -                 (latency + l1_switch_latency > acceptable->l1))
> > > > > > > > -                     link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > > > > -             l1_switch_latency += 1000;
> > > > > > > > +             if (link->aspm_capable & ASPM_STATE_L1) {
> > > > > > > > +                     latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> > > > > > > > +                     l1_max_latency = max_t(u32, latency, l1_max_latency);
> > > > > > > > +                     if (l1_max_latency + l1_switch_latency > acceptable->l1)
> > > > > > > > +                             link->aspm_capable &= ~ASPM_STATE_L1;
> > > > > > > > +                     l1_switch_latency += 1000;
> > > > > > > > +             }
> > > > > > > >
> > > > > > > >               link = link->parent;
> > > > > > > >       }
> > > > > > > > --
> > > > > > > > 2.29.1
> > > > > > > >

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-14 22:56           ` Ian Kumlien
@ 2020-12-15  0:40             ` Bjorn Helgaas
  2020-12-15 13:09               ` Ian Kumlien
  0 siblings, 1 reply; 12+ messages in thread
From: Bjorn Helgaas @ 2020-12-15  0:40 UTC (permalink / raw)
  To: Ian Kumlien
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Mon, Dec 14, 2020 at 11:56:31PM +0100, Ian Kumlien wrote:
> On Mon, Dec 14, 2020 at 8:19 PM Bjorn Helgaas <helgaas@kernel.org> wrote:

> > If you're interested, you could probably unload the Realtek drivers,
> > remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
> > in 02:04.0, e.g.,
> >
> >   # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
> >   # echo 1 > $RT/0000:04:00.0/remove
> >   # echo 1 > $RT/0000:04:00.1/remove
> >   # echo 1 > $RT/0000:04:00.2/remove
> >   # echo 1 > $RT/0000:04:00.4/remove
> >   # echo 1 > $RT/0000:04:00.7/remove
> >   # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010
> >
> > That should take 04:00.x out of the picture.
> 
> Didn't actually change the behaviour, I'm suspecting an errata for AMD pcie...
> 
> So did this, with unpatched kernel:
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-1.00   sec  4.56 MBytes  38.2 Mbits/sec    0   67.9 KBytes
> [  5]   1.00-2.00   sec  4.47 MBytes  37.5 Mbits/sec    0   96.2 KBytes
> [  5]   2.00-3.00   sec  4.85 MBytes  40.7 Mbits/sec    0   50.9 KBytes
> [  5]   3.00-4.00   sec  4.23 MBytes  35.4 Mbits/sec    0   70.7 KBytes
> [  5]   4.00-5.00   sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> [  5]   5.00-6.00   sec  4.23 MBytes  35.4 Mbits/sec    0   45.2 KBytes
> [  5]   6.00-7.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> [  5]   7.00-8.00   sec  3.98 MBytes  33.4 Mbits/sec    0   36.8 KBytes
> [  5]   8.00-9.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> [  5]   9.00-10.00  sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.00  sec  43.2 MBytes  36.2 Mbits/sec    0             sender
> [  5]   0.00-10.00  sec  42.7 MBytes  35.8 Mbits/sec                  receiver
> 
> and:
> echo 0 > /sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/link/l1_aspm

BTW, thanks a lot for testing out the "l1_aspm" sysfs file.  I'm very
pleased that it seems to be working as intended.

> and:
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec  153    772 KBytes
> [  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  276    550 KBytes
> [  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  123    625 KBytes
> [  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec   31    687 KBytes
> [  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0    679 KBytes
> [  5]   5.00-6.00   sec   110 MBytes   923 Mbits/sec  136    577 KBytes
> [  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec  214    645 KBytes
> [  5]   7.00-8.00   sec   110 MBytes   923 Mbits/sec   32    628 KBytes
> [  5]   8.00-9.00   sec   110 MBytes   923 Mbits/sec   81    537 KBytes
> [  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec   10    577 KBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  1056             sender
> [  5]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver
> 
> But this only confirms that the fix i experience is a side effect.
> 
> The original code is still wrong :)

What exactly is this machine?  Brand, model, config?  Maybe you could
add this and a dmesg log to the buzilla?  It seems like other people
should be seeing the same problem, so I'm hoping to grub around on the
web to see if there are similar reports involving these devices.

https://bugzilla.kernel.org/show_bug.cgi?id=209725

Here's one that is superficially similar:
https://linux-hardware.org/index.php?probe=e5f24075e5&log=lspci_all
in that it has a RP -- switch -- I211 path.  Interestingly, the switch
here advertises <64us L1 exit latency instead of the <32us latency
your switch advertises.  Of course, I can't tell if it's exactly the
same switch.

Bjorn

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-15  0:40             ` Bjorn Helgaas
@ 2020-12-15 13:09               ` Ian Kumlien
  2020-12-16  0:08                 ` Bjorn Helgaas
  0 siblings, 1 reply; 12+ messages in thread
From: Ian Kumlien @ 2020-12-15 13:09 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Tue, Dec 15, 2020 at 1:40 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Mon, Dec 14, 2020 at 11:56:31PM +0100, Ian Kumlien wrote:
> > On Mon, Dec 14, 2020 at 8:19 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> > > If you're interested, you could probably unload the Realtek drivers,
> > > remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
> > > in 02:04.0, e.g.,
> > >
> > >   # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
> > >   # echo 1 > $RT/0000:04:00.0/remove
> > >   # echo 1 > $RT/0000:04:00.1/remove
> > >   # echo 1 > $RT/0000:04:00.2/remove
> > >   # echo 1 > $RT/0000:04:00.4/remove
> > >   # echo 1 > $RT/0000:04:00.7/remove
> > >   # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010
> > >
> > > That should take 04:00.x out of the picture.
> >
> > Didn't actually change the behaviour, I'm suspecting an errata for AMD pcie...
> >
> > So did this, with unpatched kernel:
> > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > [  5]   0.00-1.00   sec  4.56 MBytes  38.2 Mbits/sec    0   67.9 KBytes
> > [  5]   1.00-2.00   sec  4.47 MBytes  37.5 Mbits/sec    0   96.2 KBytes
> > [  5]   2.00-3.00   sec  4.85 MBytes  40.7 Mbits/sec    0   50.9 KBytes
> > [  5]   3.00-4.00   sec  4.23 MBytes  35.4 Mbits/sec    0   70.7 KBytes
> > [  5]   4.00-5.00   sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > [  5]   5.00-6.00   sec  4.23 MBytes  35.4 Mbits/sec    0   45.2 KBytes
> > [  5]   6.00-7.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > [  5]   7.00-8.00   sec  3.98 MBytes  33.4 Mbits/sec    0   36.8 KBytes
> > [  5]   8.00-9.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > [  5]   9.00-10.00  sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > - - - - - - - - - - - - - - - - - - - - - - - - -
> > [ ID] Interval           Transfer     Bitrate         Retr
> > [  5]   0.00-10.00  sec  43.2 MBytes  36.2 Mbits/sec    0             sender
> > [  5]   0.00-10.00  sec  42.7 MBytes  35.8 Mbits/sec                  receiver
> >
> > and:
> > echo 0 > /sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/link/l1_aspm
>
> BTW, thanks a lot for testing out the "l1_aspm" sysfs file.  I'm very
> pleased that it seems to be working as intended.

It was nice to find it for easy disabling :)

> > and:
> > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > [  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec  153    772 KBytes
> > [  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  276    550 KBytes
> > [  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  123    625 KBytes
> > [  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec   31    687 KBytes
> > [  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0    679 KBytes
> > [  5]   5.00-6.00   sec   110 MBytes   923 Mbits/sec  136    577 KBytes
> > [  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec  214    645 KBytes
> > [  5]   7.00-8.00   sec   110 MBytes   923 Mbits/sec   32    628 KBytes
> > [  5]   8.00-9.00   sec   110 MBytes   923 Mbits/sec   81    537 KBytes
> > [  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec   10    577 KBytes
> > - - - - - - - - - - - - - - - - - - - - - - - - -
> > [ ID] Interval           Transfer     Bitrate         Retr
> > [  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  1056             sender
> > [  5]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver
> >
> > But this only confirms that the fix i experience is a side effect.
> >
> > The original code is still wrong :)
>
> What exactly is this machine?  Brand, model, config?  Maybe you could
> add this and a dmesg log to the buzilla?  It seems like other people
> should be seeing the same problem, so I'm hoping to grub around on the
> web to see if there are similar reports involving these devices.

ASUS Pro WS X570-ACE with AMD Ryzen 9 3900X

> https://bugzilla.kernel.org/show_bug.cgi?id=209725
>
> Here's one that is superficially similar:
> https://linux-hardware.org/index.php?probe=e5f24075e5&log=lspci_all
> in that it has a RP -- switch -- I211 path.  Interestingly, the switch
> here advertises <64us L1 exit latency instead of the <32us latency
> your switch advertises.  Of course, I can't tell if it's exactly the
> same switch.

Same chipset it seems

I'm running bios version:
        Version: 2206
        Release Date: 08/13/2020

ANd latest is:
Version 3003
2020/12/07

Will test upgrading that as well, but it could be that they report the
incorrect latency of the switch - I don't know how many things AGESA
changes but... It's been updated twice since my upgrade.

> Bjorn

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-15 13:09               ` Ian Kumlien
@ 2020-12-16  0:08                 ` Bjorn Helgaas
  2020-12-16 11:20                   ` Ian Kumlien
  0 siblings, 1 reply; 12+ messages in thread
From: Bjorn Helgaas @ 2020-12-16  0:08 UTC (permalink / raw)
  To: Ian Kumlien
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Tue, Dec 15, 2020 at 02:09:12PM +0100, Ian Kumlien wrote:
> On Tue, Dec 15, 2020 at 1:40 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > On Mon, Dec 14, 2020 at 11:56:31PM +0100, Ian Kumlien wrote:
> > > On Mon, Dec 14, 2020 at 8:19 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > > > If you're interested, you could probably unload the Realtek drivers,
> > > > remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
> > > > in 02:04.0, e.g.,
> > > >
> > > >   # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
> > > >   # echo 1 > $RT/0000:04:00.0/remove
> > > >   # echo 1 > $RT/0000:04:00.1/remove
> > > >   # echo 1 > $RT/0000:04:00.2/remove
> > > >   # echo 1 > $RT/0000:04:00.4/remove
> > > >   # echo 1 > $RT/0000:04:00.7/remove
> > > >   # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010
> > > >
> > > > That should take 04:00.x out of the picture.
> > >
> > > Didn't actually change the behaviour, I'm suspecting an errata for AMD pcie...
> > >
> > > So did this, with unpatched kernel:
> > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > [  5]   0.00-1.00   sec  4.56 MBytes  38.2 Mbits/sec    0   67.9 KBytes
> > > [  5]   1.00-2.00   sec  4.47 MBytes  37.5 Mbits/sec    0   96.2 KBytes
> > > [  5]   2.00-3.00   sec  4.85 MBytes  40.7 Mbits/sec    0   50.9 KBytes
> > > [  5]   3.00-4.00   sec  4.23 MBytes  35.4 Mbits/sec    0   70.7 KBytes
> > > [  5]   4.00-5.00   sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > [  5]   5.00-6.00   sec  4.23 MBytes  35.4 Mbits/sec    0   45.2 KBytes
> > > [  5]   6.00-7.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > [  5]   7.00-8.00   sec  3.98 MBytes  33.4 Mbits/sec    0   36.8 KBytes
> > > [  5]   8.00-9.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > [  5]   9.00-10.00  sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > [ ID] Interval           Transfer     Bitrate         Retr
> > > [  5]   0.00-10.00  sec  43.2 MBytes  36.2 Mbits/sec    0             sender
> > > [  5]   0.00-10.00  sec  42.7 MBytes  35.8 Mbits/sec                  receiver
> > >
> > > and:
> > > echo 0 > /sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/link/l1_aspm
> >
> > BTW, thanks a lot for testing out the "l1_aspm" sysfs file.  I'm very
> > pleased that it seems to be working as intended.
> 
> It was nice to find it for easy disabling :)
> 
> > > and:
> > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > [  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec  153    772 KBytes
> > > [  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  276    550 KBytes
> > > [  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  123    625 KBytes
> > > [  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec   31    687 KBytes
> > > [  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0    679 KBytes
> > > [  5]   5.00-6.00   sec   110 MBytes   923 Mbits/sec  136    577 KBytes
> > > [  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec  214    645 KBytes
> > > [  5]   7.00-8.00   sec   110 MBytes   923 Mbits/sec   32    628 KBytes
> > > [  5]   8.00-9.00   sec   110 MBytes   923 Mbits/sec   81    537 KBytes
> > > [  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec   10    577 KBytes
> > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > [ ID] Interval           Transfer     Bitrate         Retr
> > > [  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  1056             sender
> > > [  5]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver
> > >
> > > But this only confirms that the fix i experience is a side effect.
> > >
> > > The original code is still wrong :)
> >
> > What exactly is this machine?  Brand, model, config?  Maybe you could
> > add this and a dmesg log to the buzilla?  It seems like other people
> > should be seeing the same problem, so I'm hoping to grub around on the
> > web to see if there are similar reports involving these devices.
> 
> ASUS Pro WS X570-ACE with AMD Ryzen 9 3900X

Possible similar issues:

  https://forums.unraid.net/topic/94274-hardware-upgrade-woes/
  https://forums.servethehome.com/index.php?threads/upgraded-my-home-server-from-intel-to-amd-virtual-disk-stuck-in-degraded-unhealty-state.25535/ (Windows)

> > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> >
> > Here's one that is superficially similar:
> > https://linux-hardware.org/index.php?probe=e5f24075e5&log=lspci_all
> > in that it has a RP -- switch -- I211 path.  Interestingly, the switch
> > here advertises <64us L1 exit latency instead of the <32us latency
> > your switch advertises.  Of course, I can't tell if it's exactly the
> > same switch.
> 
> Same chipset it seems
> 
> I'm running bios version:
>         Version: 2206
>         Release Date: 08/13/2020
> 
> ANd latest is:
> Version 3003
> 2020/12/07
> 
> Will test upgrading that as well, but it could be that they report the
> incorrect latency of the switch - I don't know how many things AGESA
> changes but... It's been updated twice since my upgrade.

I wouldn't be surprised if the advertised exit latencies are writable
by the BIOS because it probably depends on electrical characteristics
outside the switch.  If so, it's possible ASUS just screwed it up.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-16  0:08                 ` Bjorn Helgaas
@ 2020-12-16 11:20                   ` Ian Kumlien
  2020-12-16 23:21                     ` Bjorn Helgaas
  0 siblings, 1 reply; 12+ messages in thread
From: Ian Kumlien @ 2020-12-16 11:20 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Wed, Dec 16, 2020 at 1:08 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Tue, Dec 15, 2020 at 02:09:12PM +0100, Ian Kumlien wrote:
> > On Tue, Dec 15, 2020 at 1:40 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > On Mon, Dec 14, 2020 at 11:56:31PM +0100, Ian Kumlien wrote:
> > > > On Mon, Dec 14, 2020 at 8:19 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > > > If you're interested, you could probably unload the Realtek drivers,
> > > > > remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
> > > > > in 02:04.0, e.g.,
> > > > >
> > > > >   # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
> > > > >   # echo 1 > $RT/0000:04:00.0/remove
> > > > >   # echo 1 > $RT/0000:04:00.1/remove
> > > > >   # echo 1 > $RT/0000:04:00.2/remove
> > > > >   # echo 1 > $RT/0000:04:00.4/remove
> > > > >   # echo 1 > $RT/0000:04:00.7/remove
> > > > >   # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010
> > > > >
> > > > > That should take 04:00.x out of the picture.
> > > >
> > > > Didn't actually change the behaviour, I'm suspecting an errata for AMD pcie...
> > > >
> > > > So did this, with unpatched kernel:
> > > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > > [  5]   0.00-1.00   sec  4.56 MBytes  38.2 Mbits/sec    0   67.9 KBytes
> > > > [  5]   1.00-2.00   sec  4.47 MBytes  37.5 Mbits/sec    0   96.2 KBytes
> > > > [  5]   2.00-3.00   sec  4.85 MBytes  40.7 Mbits/sec    0   50.9 KBytes
> > > > [  5]   3.00-4.00   sec  4.23 MBytes  35.4 Mbits/sec    0   70.7 KBytes
> > > > [  5]   4.00-5.00   sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > > [  5]   5.00-6.00   sec  4.23 MBytes  35.4 Mbits/sec    0   45.2 KBytes
> > > > [  5]   6.00-7.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > > [  5]   7.00-8.00   sec  3.98 MBytes  33.4 Mbits/sec    0   36.8 KBytes
> > > > [  5]   8.00-9.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > > [  5]   9.00-10.00  sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > > [ ID] Interval           Transfer     Bitrate         Retr
> > > > [  5]   0.00-10.00  sec  43.2 MBytes  36.2 Mbits/sec    0             sender
> > > > [  5]   0.00-10.00  sec  42.7 MBytes  35.8 Mbits/sec                  receiver
> > > >
> > > > and:
> > > > echo 0 > /sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/link/l1_aspm
> > >
> > > BTW, thanks a lot for testing out the "l1_aspm" sysfs file.  I'm very
> > > pleased that it seems to be working as intended.
> >
> > It was nice to find it for easy disabling :)
> >
> > > > and:
> > > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > > [  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec  153    772 KBytes
> > > > [  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  276    550 KBytes
> > > > [  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  123    625 KBytes
> > > > [  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec   31    687 KBytes
> > > > [  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0    679 KBytes
> > > > [  5]   5.00-6.00   sec   110 MBytes   923 Mbits/sec  136    577 KBytes
> > > > [  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec  214    645 KBytes
> > > > [  5]   7.00-8.00   sec   110 MBytes   923 Mbits/sec   32    628 KBytes
> > > > [  5]   8.00-9.00   sec   110 MBytes   923 Mbits/sec   81    537 KBytes
> > > > [  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec   10    577 KBytes
> > > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > > [ ID] Interval           Transfer     Bitrate         Retr
> > > > [  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  1056             sender
> > > > [  5]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver
> > > >
> > > > But this only confirms that the fix i experience is a side effect.
> > > >
> > > > The original code is still wrong :)
> > >
> > > What exactly is this machine?  Brand, model, config?  Maybe you could
> > > add this and a dmesg log to the buzilla?  It seems like other people
> > > should be seeing the same problem, so I'm hoping to grub around on the
> > > web to see if there are similar reports involving these devices.
> >
> > ASUS Pro WS X570-ACE with AMD Ryzen 9 3900X
>
> Possible similar issues:
>
>   https://forums.unraid.net/topic/94274-hardware-upgrade-woes/
>   https://forums.servethehome.com/index.php?threads/upgraded-my-home-server-from-intel-to-amd-virtual-disk-stuck-in-degraded-unhealty-state.25535/ (Windows)

Could be, I suspect that we need a workaround (is there a quirk for
"reporting wrong latency"?) and the patches.

> > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > >
> > > Here's one that is superficially similar:
> > > https://linux-hardware.org/index.php?probe=e5f24075e5&log=lspci_all
> > > in that it has a RP -- switch -- I211 path.  Interestingly, the switch
> > > here advertises <64us L1 exit latency instead of the <32us latency
> > > your switch advertises.  Of course, I can't tell if it's exactly the
> > > same switch.
> >
> > Same chipset it seems
> >
> > I'm running bios version:
> >         Version: 2206
> >         Release Date: 08/13/2020
> >
> > ANd latest is:
> > Version 3003
> > 2020/12/07
> >
> > Will test upgrading that as well, but it could be that they report the
> > incorrect latency of the switch - I don't know how many things AGESA
> > changes but... It's been updated twice since my upgrade.
>
> I wouldn't be surprised if the advertised exit latencies are writable
> by the BIOS because it probably depends on electrical characteristics
> outside the switch.  If so, it's possible ASUS just screwed it up.

Not surprisingly, nothing changed.
(There was a lot of "stability improvements")

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-16 11:20                   ` Ian Kumlien
@ 2020-12-16 23:21                     ` Bjorn Helgaas
  2020-12-17 23:37                       ` Ian Kumlien
  0 siblings, 1 reply; 12+ messages in thread
From: Bjorn Helgaas @ 2020-12-16 23:21 UTC (permalink / raw)
  To: Ian Kumlien
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Wed, Dec 16, 2020 at 12:20:53PM +0100, Ian Kumlien wrote:
> On Wed, Dec 16, 2020 at 1:08 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Tue, Dec 15, 2020 at 02:09:12PM +0100, Ian Kumlien wrote:
> > > On Tue, Dec 15, 2020 at 1:40 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > On Mon, Dec 14, 2020 at 11:56:31PM +0100, Ian Kumlien wrote:
> > > > > On Mon, Dec 14, 2020 at 8:19 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > >
> > > > > > If you're interested, you could probably unload the Realtek drivers,
> > > > > > remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
> > > > > > in 02:04.0, e.g.,
> > > > > >
> > > > > >   # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
> > > > > >   # echo 1 > $RT/0000:04:00.0/remove
> > > > > >   # echo 1 > $RT/0000:04:00.1/remove
> > > > > >   # echo 1 > $RT/0000:04:00.2/remove
> > > > > >   # echo 1 > $RT/0000:04:00.4/remove
> > > > > >   # echo 1 > $RT/0000:04:00.7/remove
> > > > > >   # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010
> > > > > >
> > > > > > That should take 04:00.x out of the picture.
> > > > >
> > > > > Didn't actually change the behaviour, I'm suspecting an errata for AMD pcie...
> > > > >
> > > > > So did this, with unpatched kernel:
> > > > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > > > [  5]   0.00-1.00   sec  4.56 MBytes  38.2 Mbits/sec    0   67.9 KBytes
> > > > > [  5]   1.00-2.00   sec  4.47 MBytes  37.5 Mbits/sec    0   96.2 KBytes
> > > > > [  5]   2.00-3.00   sec  4.85 MBytes  40.7 Mbits/sec    0   50.9 KBytes
> > > > > [  5]   3.00-4.00   sec  4.23 MBytes  35.4 Mbits/sec    0   70.7 KBytes
> > > > > [  5]   4.00-5.00   sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > > > [  5]   5.00-6.00   sec  4.23 MBytes  35.4 Mbits/sec    0   45.2 KBytes
> > > > > [  5]   6.00-7.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > > > [  5]   7.00-8.00   sec  3.98 MBytes  33.4 Mbits/sec    0   36.8 KBytes
> > > > > [  5]   8.00-9.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > > > [  5]   9.00-10.00  sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > > > [ ID] Interval           Transfer     Bitrate         Retr
> > > > > [  5]   0.00-10.00  sec  43.2 MBytes  36.2 Mbits/sec    0             sender
> > > > > [  5]   0.00-10.00  sec  42.7 MBytes  35.8 Mbits/sec                  receiver
> > > > >
> > > > > and:
> > > > > echo 0 > /sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/link/l1_aspm
> > > >
> > > > BTW, thanks a lot for testing out the "l1_aspm" sysfs file.  I'm very
> > > > pleased that it seems to be working as intended.
> > >
> > > It was nice to find it for easy disabling :)
> > >
> > > > > and:
> > > > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > > > [  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec  153    772 KBytes
> > > > > [  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  276    550 KBytes
> > > > > [  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  123    625 KBytes
> > > > > [  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec   31    687 KBytes
> > > > > [  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0    679 KBytes
> > > > > [  5]   5.00-6.00   sec   110 MBytes   923 Mbits/sec  136    577 KBytes
> > > > > [  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec  214    645 KBytes
> > > > > [  5]   7.00-8.00   sec   110 MBytes   923 Mbits/sec   32    628 KBytes
> > > > > [  5]   8.00-9.00   sec   110 MBytes   923 Mbits/sec   81    537 KBytes
> > > > > [  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec   10    577 KBytes
> > > > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > > > [ ID] Interval           Transfer     Bitrate         Retr
> > > > > [  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  1056             sender
> > > > > [  5]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver
> > > > >
> > > > > But this only confirms that the fix i experience is a side effect.
> > > > >
> > > > > The original code is still wrong :)
> > > >
> > > > What exactly is this machine?  Brand, model, config?  Maybe you could
> > > > add this and a dmesg log to the buzilla?  It seems like other people
> > > > should be seeing the same problem, so I'm hoping to grub around on the
> > > > web to see if there are similar reports involving these devices.
> > >
> > > ASUS Pro WS X570-ACE with AMD Ryzen 9 3900X
> >
> > Possible similar issues:
> >
> >   https://forums.unraid.net/topic/94274-hardware-upgrade-woes/
> >   https://forums.servethehome.com/index.php?threads/upgraded-my-home-server-from-intel-to-amd-virtual-disk-stuck-in-degraded-unhealty-state.25535/ (Windows)
> 
> Could be, I suspect that we need a workaround (is there a quirk for
> "reporting wrong latency"?) and the patches.

I don't think there's currently a quirk mechanism that would work for
correcting latencies, but there should be, and we could add one if we
can figure out for sure what's wrong.

I found this:

  https://www.reddit.com/r/VFIO/comments/hgk3cz/x570_pcieclassic_pci_bridge_woes/

which looks like it should be the same hardware (if you can collect a
dmesg log or "lspci -nnvv" output we could tell for sure) and is
interesting because it includes some lspci output that shows different
L1 exit latencies than what you see.

> > > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > > >
> > > > Here's one that is superficially similar:
> > > > https://linux-hardware.org/index.php?probe=e5f24075e5&log=lspci_all
> > > > in that it has a RP -- switch -- I211 path.  Interestingly, the switch
> > > > here advertises <64us L1 exit latency instead of the <32us latency
> > > > your switch advertises.  Of course, I can't tell if it's exactly the
> > > > same switch.
> > >
> > > Same chipset it seems
> > >
> > > I'm running bios version:
> > >         Version: 2206
> > >         Release Date: 08/13/2020
> > >
> > > ANd latest is:
> > > Version 3003
> > > 2020/12/07
> > >
> > > Will test upgrading that as well, but it could be that they report the
> > > incorrect latency of the switch - I don't know how many things AGESA
> > > changes but... It's been updated twice since my upgrade.
> >
> > I wouldn't be surprised if the advertised exit latencies are writable
> > by the BIOS because it probably depends on electrical characteristics
> > outside the switch.  If so, it's possible ASUS just screwed it up.
> 
> Not surprisingly, nothing changed.
> (There was a lot of "stability improvements")

I wouldn't be totally surprised if ASUS didn't test that I211 NIC
under Linux, but I'm sure it must work well under Windows.  If you
happen to have Windows, a free trial version of AIDA64 should be able
to give us the equivalent of "lspci -vv".

Bjorn

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check
  2020-12-16 23:21                     ` Bjorn Helgaas
@ 2020-12-17 23:37                       ` Ian Kumlien
  0 siblings, 0 replies; 12+ messages in thread
From: Ian Kumlien @ 2020-12-17 23:37 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Kai-Heng Feng, linux-pci, Alexander Duyck, Saheed O. Bolarinwa,
	Puranjay Mohan, Jesse Brandeburg, Tony Nguyen, David S. Miller,
	Jakub Kicinski, Heiner Kallweit, intel-wired-lan,
	Linux Kernel Network Developers, linux-kernel

On Thu, Dec 17, 2020 at 12:21 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Wed, Dec 16, 2020 at 12:20:53PM +0100, Ian Kumlien wrote:
> > On Wed, Dec 16, 2020 at 1:08 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Tue, Dec 15, 2020 at 02:09:12PM +0100, Ian Kumlien wrote:
> > > > On Tue, Dec 15, 2020 at 1:40 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > On Mon, Dec 14, 2020 at 11:56:31PM +0100, Ian Kumlien wrote:
> > > > > > On Mon, Dec 14, 2020 at 8:19 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > >
> > > > > > > If you're interested, you could probably unload the Realtek drivers,
> > > > > > > remove the devices, and set the PCI_EXP_LNKCTL_LD (Link Disable) bit
> > > > > > > in 02:04.0, e.g.,
> > > > > > >
> > > > > > >   # RT=/sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:04.0
> > > > > > >   # echo 1 > $RT/0000:04:00.0/remove
> > > > > > >   # echo 1 > $RT/0000:04:00.1/remove
> > > > > > >   # echo 1 > $RT/0000:04:00.2/remove
> > > > > > >   # echo 1 > $RT/0000:04:00.4/remove
> > > > > > >   # echo 1 > $RT/0000:04:00.7/remove
> > > > > > >   # setpci -s02:04.0 CAP_EXP+0x10.w=0x0010
> > > > > > >
> > > > > > > That should take 04:00.x out of the picture.
> > > > > >
> > > > > > Didn't actually change the behaviour, I'm suspecting an errata for AMD pcie...
> > > > > >
> > > > > > So did this, with unpatched kernel:
> > > > > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > > > > [  5]   0.00-1.00   sec  4.56 MBytes  38.2 Mbits/sec    0   67.9 KBytes
> > > > > > [  5]   1.00-2.00   sec  4.47 MBytes  37.5 Mbits/sec    0   96.2 KBytes
> > > > > > [  5]   2.00-3.00   sec  4.85 MBytes  40.7 Mbits/sec    0   50.9 KBytes
> > > > > > [  5]   3.00-4.00   sec  4.23 MBytes  35.4 Mbits/sec    0   70.7 KBytes
> > > > > > [  5]   4.00-5.00   sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > > > > [  5]   5.00-6.00   sec  4.23 MBytes  35.4 Mbits/sec    0   45.2 KBytes
> > > > > > [  5]   6.00-7.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > > > > [  5]   7.00-8.00   sec  3.98 MBytes  33.4 Mbits/sec    0   36.8 KBytes
> > > > > > [  5]   8.00-9.00   sec  4.23 MBytes  35.4 Mbits/sec    0   36.8 KBytes
> > > > > > [  5]   9.00-10.00  sec  4.23 MBytes  35.4 Mbits/sec    0   48.1 KBytes
> > > > > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > > > > [ ID] Interval           Transfer     Bitrate         Retr
> > > > > > [  5]   0.00-10.00  sec  43.2 MBytes  36.2 Mbits/sec    0             sender
> > > > > > [  5]   0.00-10.00  sec  42.7 MBytes  35.8 Mbits/sec                  receiver
> > > > > >
> > > > > > and:
> > > > > > echo 0 > /sys/devices/pci0000:00/0000:00:01.2/0000:01:00.0/link/l1_aspm
> > > > >
> > > > > BTW, thanks a lot for testing out the "l1_aspm" sysfs file.  I'm very
> > > > > pleased that it seems to be working as intended.
> > > >
> > > > It was nice to find it for easy disabling :)
> > > >
> > > > > > and:
> > > > > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > > > > [  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec  153    772 KBytes
> > > > > > [  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  276    550 KBytes
> > > > > > [  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  123    625 KBytes
> > > > > > [  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec   31    687 KBytes
> > > > > > [  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0    679 KBytes
> > > > > > [  5]   5.00-6.00   sec   110 MBytes   923 Mbits/sec  136    577 KBytes
> > > > > > [  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec  214    645 KBytes
> > > > > > [  5]   7.00-8.00   sec   110 MBytes   923 Mbits/sec   32    628 KBytes
> > > > > > [  5]   8.00-9.00   sec   110 MBytes   923 Mbits/sec   81    537 KBytes
> > > > > > [  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec   10    577 KBytes
> > > > > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > > > > [ ID] Interval           Transfer     Bitrate         Retr
> > > > > > [  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  1056             sender
> > > > > > [  5]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver
> > > > > >
> > > > > > But this only confirms that the fix i experience is a side effect.
> > > > > >
> > > > > > The original code is still wrong :)
> > > > >
> > > > > What exactly is this machine?  Brand, model, config?  Maybe you could
> > > > > add this and a dmesg log to the buzilla?  It seems like other people
> > > > > should be seeing the same problem, so I'm hoping to grub around on the
> > > > > web to see if there are similar reports involving these devices.
> > > >
> > > > ASUS Pro WS X570-ACE with AMD Ryzen 9 3900X
> > >
> > > Possible similar issues:
> > >
> > >   https://forums.unraid.net/topic/94274-hardware-upgrade-woes/
> > >   https://forums.servethehome.com/index.php?threads/upgraded-my-home-server-from-intel-to-amd-virtual-disk-stuck-in-degraded-unhealty-state.25535/ (Windows)
> >
> > Could be, I suspect that we need a workaround (is there a quirk for
> > "reporting wrong latency"?) and the patches.
>
> I don't think there's currently a quirk mechanism that would work for
> correcting latencies, but there should be, and we could add one if we
> can figure out for sure what's wrong.
>
> I found this:
>
>   https://www.reddit.com/r/VFIO/comments/hgk3cz/x570_pcieclassic_pci_bridge_woes/
>
> which looks like it should be the same hardware (if you can collect a
> dmesg log or "lspci -nnvv" output we could tell for sure) and is
> interesting because it includes some lspci output that shows different
> L1 exit latencies than what you see.

I'll send both of them separately to you, no reason to push that to
everyone i assume.. =)

> > > > > https://bugzilla.kernel.org/show_bug.cgi?id=209725
> > > > >
> > > > > Here's one that is superficially similar:
> > > > > https://linux-hardware.org/index.php?probe=e5f24075e5&log=lspci_all
> > > > > in that it has a RP -- switch -- I211 path.  Interestingly, the switch
> > > > > here advertises <64us L1 exit latency instead of the <32us latency
> > > > > your switch advertises.  Of course, I can't tell if it's exactly the
> > > > > same switch.
> > > >
> > > > Same chipset it seems
> > > >
> > > > I'm running bios version:
> > > >         Version: 2206
> > > >         Release Date: 08/13/2020
> > > >
> > > > ANd latest is:
> > > > Version 3003
> > > > 2020/12/07
> > > >
> > > > Will test upgrading that as well, but it could be that they report the
> > > > incorrect latency of the switch - I don't know how many things AGESA
> > > > changes but... It's been updated twice since my upgrade.
> > >
> > > I wouldn't be surprised if the advertised exit latencies are writable
> > > by the BIOS because it probably depends on electrical characteristics
> > > outside the switch.  If so, it's possible ASUS just screwed it up.
> >
> > Not surprisingly, nothing changed.
> > (There was a lot of "stability improvements")
>
> I wouldn't be totally surprised if ASUS didn't test that I211 NIC
> under Linux, but I'm sure it must work well under Windows.  If you
> happen to have Windows, a free trial version of AIDA64 should be able
> to give us the equivalent of "lspci -vv".

I don't have windows, haven't had windows at home since '98 ;)

I'll check with some friends that dualboot om systems that might be
similar - will see what i can get


> Bjorn

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2020-12-17 23:38 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAA85sZuuS=UHzhk0DabN45jCu-GYD-DxMOY8dd68Znnk5wsXVg@mail.gmail.com>
2020-12-14  5:44 ` [PATCH 1/3] PCI/ASPM: Use the path max in L1 ASPM latency check Bjorn Helgaas
2020-12-14  9:14   ` Ian Kumlien
2020-12-14 14:02     ` Bjorn Helgaas
2020-12-14 15:47       ` Ian Kumlien
2020-12-14 19:19         ` Bjorn Helgaas
2020-12-14 22:56           ` Ian Kumlien
2020-12-15  0:40             ` Bjorn Helgaas
2020-12-15 13:09               ` Ian Kumlien
2020-12-16  0:08                 ` Bjorn Helgaas
2020-12-16 11:20                   ` Ian Kumlien
2020-12-16 23:21                     ` Bjorn Helgaas
2020-12-17 23:37                       ` Ian Kumlien

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).