linux-pci.vger.kernel.org archive mirror
From: Ian Kumlien <ian.kumlien@gmail.com>
To: linux-pci@vger.kernel.org, Bjorn Helgaas <helgaas@kernel.org>
Subject: Re: [PATCH] Use maximum latency when determining L1/L0s ASPM v2
Date: Sat, 15 Aug 2020 21:39:35 +0200	[thread overview]
Message-ID: <CAA85sZtKSwyjXDK8TK_bpd6GMJkQafA5ur-sC=s_zDKx_0zU7w@mail.gmail.com> (raw)
In-Reply-To: <20200803145832.11234-1-ian.kumlien@gmail.com>

Hi again,

Just trying to bump and also asking a question ...

That 1 us per switch - could it be related to L0s exit latency? If so,
should we add a potential workaround along the lines of
max_t(u32, 1000, L0s.up + L0s.dw), to ensure the actual time can never
be greater than what we account for on any hardware ;)

And also warn about it... =)
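
Untested sketch of what I have in mind, just to illustrate the question
(the exact placement inside pcie_aspm_check_latency() is my assumption,
not something I've tried):

		/*
		 * Instead of a flat 1000 ns per switch, never account for
		 * less than the link's combined L0s exit latency, and warn
		 * if the assumed 1 us turns out to be an underestimate.
		 */
		if (link->latency_up.l0s + link->latency_dw.l0s > 1000)
			pci_warn(endpoint, "L0s exit latency exceeds the assumed 1 us per-switch cost\n");
		l1_switch_latency += max_t(u32, 1000,
					   link->latency_up.l0s +
					   link->latency_dw.l0s);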

On Mon, Aug 3, 2020 at 4:58 PM Ian Kumlien <ian.kumlien@gmail.com> wrote:
>
> Changes:
> * Handle L0s correctly as well, making it per direction
> * Moved the switch cost into the if statement, since a non-L1 switch has
>   no additional cost.
>
> For L0s:
> We sum up the entire latency per direction to see if it's acceptable
> for the PCIe endpoint.
>
> If it's not, we clear the L0s capability for that direction on the link
> where the accumulated latency became too large.
>
> For L1:
> Currently we check the maximum latency of upstream and downstream
> per link, not the maximum for the path
>
> This would work if all links had the same latency, but consider:
> endpoint -> c -> b -> a -> root  (in the order we walk the path)
>
> If c or b has the highest latency, it will not register, because the
> per-switch cost has not yet been accumulated by the time that link is
> checked (see the worked example after the patch below).
>
> Fix this by maintaining the maximum L1 exit latency seen along the path.
>
> This fixes a regression exposed (but not caused) by:
> 66ff14e59e8a ("PCI/ASPM: Allow ASPM on links to PCIe-to-PCI/PCI-X Bridges")
>
> Signed-off-by: Ian Kumlien <ian.kumlien@gmail.com>
> ---
>  drivers/pci/pcie/aspm.c | 41 ++++++++++++++++++++++++++---------------
>  1 file changed, 26 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> index b17e5ffd31b1..bc512e217258 100644
> --- a/drivers/pci/pcie/aspm.c
> +++ b/drivers/pci/pcie/aspm.c
> @@ -434,7 +434,8 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
>
>  static void pcie_aspm_check_latency(struct pci_dev *endpoint)
>  {
> -       u32 latency, l1_switch_latency = 0;
> +       u32 latency, l1_max_latency = 0, l1_switch_latency = 0,
> +               l0s_latency_up = 0, l0s_latency_dw = 0;
>         struct aspm_latency *acceptable;
>         struct pcie_link_state *link;
>
> @@ -447,15 +448,22 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
>         acceptable = &link->acceptable[PCI_FUNC(endpoint->devfn)];
>
>         while (link) {
> -               /* Check upstream direction L0s latency */
> -               if ((link->aspm_capable & ASPM_STATE_L0S_UP) &&
> -                   (link->latency_up.l0s > acceptable->l0s))
> -                       link->aspm_capable &= ~ASPM_STATE_L0S_UP;
> -
> -               /* Check downstream direction L0s latency */
> -               if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
> -                   (link->latency_dw.l0s > acceptable->l0s))
> -                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> +               if (link->aspm_capable & ASPM_STATE_L0S) {
> +                       /* Check upstream direction L0s latency */
> +                       if (link->aspm_capable & ASPM_STATE_L0S_UP) {
> +                               l0s_latency_up += link->latency_up.l0s;
> +                               if (l0s_latency_up > acceptable->l0s)
> +                                       link->aspm_capable &= ~ASPM_STATE_L0S_UP;
> +                       }
> +
> +                       /* Check downstream direction L0s latency */
> +                       if (link->aspm_capable & ASPM_STATE_L0S_DW) {
> +                               l0s_latency_dw += link->latency_dw.l0s;
> +                               if (l0s_latency_dw > acceptable->l0s)
> +                                       link->aspm_capable &= ~ASPM_STATE_L0S_DW;
> +                       }
> +               }
> +
>                 /*
>                  * Check L1 latency.
>                  * Every switch on the path to root complex need 1
> @@ -469,11 +477,14 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
>                  * L1 exit latencies advertised by a device include L1
>                  * substate latencies (and hence do not do any check).
>                  */
> -               latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> -               if ((link->aspm_capable & ASPM_STATE_L1) &&
> -                   (latency + l1_switch_latency > acceptable->l1))
> -                       link->aspm_capable &= ~ASPM_STATE_L1;
> -               l1_switch_latency += 1000;
> +               if (link->aspm_capable & ASPM_STATE_L1) {
> +                       latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
> +                       l1_max_latency = max_t(u32, latency, l1_max_latency);
> +                       if (l1_max_latency + l1_switch_latency > acceptable->l1)
> +                               link->aspm_capable &= ~ASPM_STATE_L1;
> +
> +                       l1_switch_latency += 1000;
> +               }
>
>                 link = link->parent;
>         }
> --
> 2.28.0
>
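
To make the path-latency issue in the changelog concrete, here is a small
standalone model of the old per-link L1 check vs. the per-path check from
the patch. The link latencies and the acceptable limit are made-up numbers,
purely for illustration:

/* l1-path-model.c: toy model of the L1 exit latency check, not kernel code */
#include <stdio.h>
#include <stdint.h>

static uint32_t max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }

int main(void)
{
	/* max(latency_up.l1, latency_dw.l1) per link, in ns, walked from
	 * the endpoint towards the root: c, b, a (illustrative values) */
	uint32_t link_l1[] = { 8000, 1000, 1000 };
	uint32_t acceptable = 9000;	/* endpoint tolerates 9 us (assumed) */
	uint32_t switch_latency = 0, max_latency = 0;
	int old_fail = 0, new_fail = 0;
	unsigned int i;

	for (i = 0; i < sizeof(link_l1) / sizeof(link_l1[0]); i++) {
		/* Old check: this link's latency only.  c's 8 us is compared
		 * against 0 ns of switch cost, and the later, cheaper links
		 * never push the sum over the limit, so nothing fails. */
		if (link_l1[i] + switch_latency > acceptable)
			old_fail = 1;

		/* Patched check: carry the maximum latency along the path,
		 * so c's 8 us is still in play once 2 us of switch cost has
		 * accumulated, and the limit is exceeded. */
		max_latency = max_u32(max_latency, link_l1[i]);
		if (max_latency + switch_latency > acceptable)
			new_fail = 1;

		switch_latency += 1000;	/* 1 us per switch on the path */
	}

	printf("old per-link check flags a violation: %s\n", old_fail ? "yes" : "no");
	printf("new per-path check flags a violation: %s\n", new_fail ? "yes" : "no");
	return 0;
}

With these numbers the old check passes everything, while the per-path check
correctly refuses L1 - which is exactly the situation described above.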


Thread overview: 36+ messages
2020-07-27 21:30 [PATCH] Use maximum latency when determining L1 ASPM Ian Kumlien
2020-07-29 22:27 ` Bjorn Helgaas
2020-07-29 22:43   ` Ian Kumlien
2020-08-03 14:58   ` [PATCH] Use maximum latency when determining L1/L0s ASPM v2 Ian Kumlien
2020-08-15 19:39     ` Ian Kumlien [this message]
2020-09-18 22:47     ` Ian Kumlien
2020-09-22 20:19     ` Bjorn Helgaas
2020-09-22 21:02       ` Ian Kumlien
2020-09-22 23:00         ` Bjorn Helgaas
2020-09-22 23:29           ` Ian Kumlien
2020-09-22 23:31             ` Ian Kumlien
2020-09-23 21:23             ` Bjorn Helgaas
2020-09-23 21:36               ` Ian Kumlien
2020-09-23 21:48                 ` Ian Kumlien
2020-09-24 16:24                 ` Bjorn Helgaas
2020-09-25  8:06                   ` Ian Kumlien
2020-10-05 18:31     ` Bjorn Helgaas
2020-10-05 18:38       ` Ian Kumlien
2020-10-05 19:09         ` Bjorn Helgaas
2020-10-07 11:31           ` Ian Kumlien
2020-10-07 13:03             ` Bjorn Helgaas
     [not found] <CAA85sZvrPApeAYPVSYdVuKnp84xCpLBLf+f32e=R9tdPC0dvOw@mail.gmail.com>
2020-09-25 15:49 ` Bjorn Helgaas
2020-09-25 22:26   ` Ian Kumlien
2020-09-28  0:06     ` Bjorn Helgaas
2020-09-28 10:24       ` Ian Kumlien
2020-09-28 17:09         ` Bjorn Helgaas
2020-09-28 17:41           ` Ian Kumlien
2020-09-28 19:53             ` Alexander Duyck
2020-09-28 20:04               ` Ian Kumlien
2020-09-28 20:33                 ` Ian Kumlien
2020-09-28 23:30                   ` Alexander Duyck
2020-09-29 12:51                     ` Ian Kumlien
2020-09-29 16:23                       ` Alexander Duyck
2020-09-29 21:13                         ` Ian Kumlien
2020-09-28 21:43               ` Ian Kumlien
2020-09-28 18:10           ` Alexander Duyck
