From: Bjorn Helgaas <helgaas@kernel.org>
To: "Keller, Jacob E" <jacob.e.keller@intel.com>
Cc: Tal Gilboa <talgi@mellanox.com>,
Tariq Toukan <tariqt@mellanox.com>,
Ariel Elior <ariel.elior@cavium.com>,
Ganesh Goudar <ganeshgr@chelsio.com>,
"Kirsher, Jeffrey T" <jeffrey.t.kirsher@intel.com>,
"everest-linux-l2@cavium.com" <everest-linux-l2@cavium.com>,
"intel-wired-lan@lists.osuosl.org"
<intel-wired-lan@lists.osuosl.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>
Subject: Re: [PATCH v5 03/14] PCI: Add pcie_bandwidth_capable() to compute max supported link bandwidth
Date: Mon, 2 Apr 2018 14:37:54 -0500 [thread overview]
Message-ID: <20180402193754.GK9322@bhelgaas-glaptop.roam.corp.google.com> (raw)
In-Reply-To: <02874ECE860811409154E81DA85FBB5882D4980A@ORSMSX115.amr.corp.intel.com>
On Mon, Apr 02, 2018 at 04:00:16PM +0000, Keller, Jacob E wrote:
> > -----Original Message-----
> > From: Tal Gilboa [mailto:talgi@mellanox.com]
> > Sent: Monday, April 02, 2018 7:34 AM
> > To: Bjorn Helgaas <helgaas@kernel.org>
> > Cc: Tariq Toukan <tariqt@mellanox.com>; Keller, Jacob E
> > <jacob.e.keller@intel.com>; Ariel Elior <ariel.elior@cavium.com>; Ganesh
> > Goudar <ganeshgr@chelsio.com>; Kirsher, Jeffrey T
> > <jeffrey.t.kirsher@intel.com>; everest-linux-l2@cavium.com; intel-wired-
> > lan@lists.osuosl.org; netdev@vger.kernel.org; linux-kernel@vger.kernel.org;
> > linux-pci@vger.kernel.org
> > Subject: Re: [PATCH v5 03/14] PCI: Add pcie_bandwidth_capable() to compute
> > max supported link bandwidth
> >
> > On 4/2/2018 5:05 PM, Bjorn Helgaas wrote:
> > > On Mon, Apr 02, 2018 at 10:34:58AM +0300, Tal Gilboa wrote:
> > >> On 4/2/2018 3:40 AM, Bjorn Helgaas wrote:
> > >>> On Sun, Apr 01, 2018 at 11:38:53PM +0300, Tal Gilboa wrote:
> > >>>> On 3/31/2018 12:05 AM, Bjorn Helgaas wrote:
> > >>>>> From: Tal Gilboa <talgi@mellanox.com>
> > >>>>>
> > >>>>> Add pcie_bandwidth_capable() to compute the max link bandwidth supported by
> > >>>>> a device, based on the max link speed and width, adjusted by the encoding
> > >>>>> overhead.
> > >>>>>
> > >>>>> The maximum bandwidth of the link is computed as:
> > >>>>>
> > >>>>> max_link_speed * max_link_width * (1 - encoding_overhead)
> > >>>>>
> > >>>>> The encoding overhead is about 20% for 2.5 and 5.0 GT/s links using 8b/10b
> > >>>>> encoding, and about 1.5% for 8 GT/s or higher speed links using 128b/130b
> > >>>>> encoding.
> > >>>>>
> > >>>>> Signed-off-by: Tal Gilboa <talgi@mellanox.com>
> > >>>>> [bhelgaas: adjust for pcie_get_speed_cap() and pcie_get_width_cap()
> > >>>>> signatures, don't export outside drivers/pci]
> > >>>>> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
> > >>>>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> > >>>>> ---
> > >>>>> drivers/pci/pci.c | 21 +++++++++++++++++++++
> > >>>>> drivers/pci/pci.h | 9 +++++++++
> > >>>>> 2 files changed, 30 insertions(+)
> > >>>>>
> > >>>>> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > >>>>> index 43075be79388..9ce89e254197 100644
> > >>>>> --- a/drivers/pci/pci.c
> > >>>>> +++ b/drivers/pci/pci.c
> > >>>>> @@ -5208,6 +5208,27 @@ enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev)
> > >>>>> return PCIE_LNK_WIDTH_UNKNOWN;
> > >>>>> }
> > >>>>> +/**
> > >>>>> + * pcie_bandwidth_capable - calculates a PCI device's link bandwidth capability
> > >>>>> + * @dev: PCI device
> > >>>>> + * @speed: storage for link speed
> > >>>>> + * @width: storage for link width
> > >>>>> + *
> > >>>>> + * Calculate a PCI device's link bandwidth by querying for its link speed
> > >>>>> + * and width, multiplying them, and applying encoding overhead.
> > >>>>> + */
> > >>>>> +u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed,
> > >>>>> + enum pcie_link_width *width)
> > >>>>> +{
> > >>>>> + *speed = pcie_get_speed_cap(dev);
> > >>>>> + *width = pcie_get_width_cap(dev);
> > >>>>> +
> > >>>>> +	if (*speed == PCI_SPEED_UNKNOWN || *width == PCIE_LNK_WIDTH_UNKNOWN)
> > >>>>> + return 0;
> > >>>>> +
> > >>>>> + return *width * PCIE_SPEED2MBS_ENC(*speed);
> > >>>>> +}
> > >>>>> +
> > >>>>> /**
> > >>>>> * pci_select_bars - Make BAR mask from the type of resource
> > >>>>> * @dev: the PCI device for which BAR mask is made
> > >>>>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> > >>>>> index 66738f1050c0..2a50172b9803 100644
> > >>>>> --- a/drivers/pci/pci.h
> > >>>>> +++ b/drivers/pci/pci.h
> > >>>>> @@ -261,8 +261,17 @@ void pci_disable_bridge_window(struct pci_dev *dev);
> > >>>>> (speed) == PCIE_SPEED_2_5GT ? "2.5 GT/s" : \
> > >>>>> "Unknown speed")
> > >>>>> +/* PCIe speed to Mb/s with encoding overhead: 20% for gen2, ~1.5% for gen3 */
> > >>>>> +#define PCIE_SPEED2MBS_ENC(speed) \
> > >>>>
> > >>>> Missing gen4.
> > >>>
> > >>> I made it "gen3+". I think that's accurate, isn't it? The spec
> > >>> doesn't seem to actually use "gen3" as a specific term, but sec 4.2.2
> > >>> says rates of 8 GT/s or higher (which I think includes gen3 and gen4)
> > >>> use 128b/130b encoding.
> > >>>
> > >>
> > >> I meant that PCIE_SPEED_16_0GT will return 0 from this macro since it wasn't
> > >> added. Need to return 15754.
> > >
> > > Oh, duh, of course! Sorry for being dense. What about the following?
> > > I included the calculation as opposed to just the magic numbers to try
> > > to make it clear how they're derived. This has the disadvantage of
> > > truncating the result instead of rounding, but I doubt that's
> > > significant in this context. If it is, we could use the magic numbers
> > > and put the computation in a comment.
> >
> > We can always use DIV_ROUND_UP((speed * enc_numerator),
> > enc_denominator). I think this is confusing, and since this introduces a
> > bandwidth limit I would prefer to give a wider limit than a wrong one,
> > even if it is off by less than 1 Mb/s. My vote is for leaving it as you
> > wrote below.
> >
> > > Another question: we currently deal in Mb/s, not MB/s. Mb/s has the
> > > advantage of sort of corresponding to the GT/s numbers, but using MB/s
> > > would have the advantage of smaller numbers that match the table here:
> > > https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions,
> > > but I don't know what's most typical in user-facing situations.
> > > What's better?
> >
> > I don't know what's better but for network devices we measure bandwidth
> > in Gb/s, so presenting bandwidth in MB/s would mean additional
> > calculations. The truth is I would have preferred to use Gb/s instead of
> > Mb/s, but again, I don't want to lose up to 1 Gb/s.
>
> I prefer this version with the calculation in line since it makes
> the derivation clear. Keeping them in Mb/s makes it easier to
> convert to Gb/s, which is what most people would expect.
OK, let's keep this patch as-is since returning Mb/s means we
don't have to worry about floating point, and it sounds like we
agree the truncation isn't a big deal.
I'll post a proposal to convert to Gb/s when printing.
> > > commit 946435491b35b7782157e9a4d1bd73071fba7709
> > > Author: Tal Gilboa <talgi@mellanox.com>
> > > Date: Fri Mar 30 08:32:03 2018 -0500
> > >
> > > PCI: Add pcie_bandwidth_capable() to compute max supported link bandwidth
> > >
> > > Add pcie_bandwidth_capable() to compute the max link bandwidth supported by
> > > a device, based on the max link speed and width, adjusted by the encoding
> > > overhead.
> > >
> > > The maximum bandwidth of the link is computed as:
> > >
> > > max_link_width * max_link_speed * (1 - encoding_overhead)
> > >
> > > 2.5 and 5.0 GT/s links use 8b/10b encoding, which reduces the raw bandwidth
> > > available by 20%; 8.0 GT/s and faster links use 128b/130b encoding, which
> > > reduces it by about 1.5%.
> > >
> > > The result is in Mb/s, i.e., megabits/second, of raw bandwidth.
> > >
> > > Signed-off-by: Tal Gilboa <talgi@mellanox.com>
> > > [bhelgaas: add 16 GT/s, adjust for pcie_get_speed_cap() and
> > > pcie_get_width_cap() signatures, don't export outside drivers/pci]
> > > Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
> > > Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> > >
> > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > > index 43075be79388..ff1e72060952 100644
> > > --- a/drivers/pci/pci.c
> > > +++ b/drivers/pci/pci.c
> > > @@ -5208,6 +5208,28 @@ enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev)
> > > return PCIE_LNK_WIDTH_UNKNOWN;
> > > }
> > >
> > > +/**
> > > + * pcie_bandwidth_capable - calculate a PCI device's link bandwidth capability
> > > + * @dev: PCI device
> > > + * @speed: storage for link speed
> > > + * @width: storage for link width
> > > + *
> > > + * Calculate a PCI device's link bandwidth by querying for its link speed
> > > + * and width, multiplying them, and applying encoding overhead. The result
> > > + * is in Mb/s, i.e., megabits/second of raw bandwidth.
> > > + */
> > > +u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed,
> > > + enum pcie_link_width *width)
> > > +{
> > > + *speed = pcie_get_speed_cap(dev);
> > > + *width = pcie_get_width_cap(dev);
> > > +
> > > +	if (*speed == PCI_SPEED_UNKNOWN || *width == PCIE_LNK_WIDTH_UNKNOWN)
> > > + return 0;
> > > +
> > > + return *width * PCIE_SPEED2MBS_ENC(*speed);
> > > +}
> > > +
> > > /**
> > > * pci_select_bars - Make BAR mask from the type of resource
> > > * @dev: the PCI device for which BAR mask is made
> > > diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> > > index 66738f1050c0..37f9299ed623 100644
> > > --- a/drivers/pci/pci.h
> > > +++ b/drivers/pci/pci.h
> > > @@ -261,8 +261,18 @@ void pci_disable_bridge_window(struct pci_dev *dev);
> > > (speed) == PCIE_SPEED_2_5GT ? "2.5 GT/s" : \
> > > "Unknown speed")
> > >
> > > +/* PCIe speed to Mb/s reduced by encoding overhead */
> > > +#define PCIE_SPEED2MBS_ENC(speed) \
> > > +	((speed) == PCIE_SPEED_16_0GT ? (16000*128/130) : \
> > > +	 (speed) == PCIE_SPEED_8_0GT ? (8000*128/130) : \
> > > +	 (speed) == PCIE_SPEED_5_0GT ? (5000*8/10) : \
> > > +	 (speed) == PCIE_SPEED_2_5GT ? (2500*8/10) : \
> > > +	 0)
> > > +
> > > enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
> > > enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
> > > +u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed,
> > > + enum pcie_link_width *width);
> > >
> > > /* Single Root I/O Virtualization */
> > > struct pci_sriov {
> > >
Thread overview: 37+ messages
2018-03-30 21:04 [PATCH v5 00/14] Report PCI device link status Bjorn Helgaas
2018-03-30 21:04 ` [PATCH v5 01/14] PCI: Add pcie_get_speed_cap() to find max supported link speed Bjorn Helgaas
2018-03-30 21:04 ` [PATCH v5 02/14] PCI: Add pcie_get_width_cap() to find max supported link width Bjorn Helgaas
2018-03-30 21:05 ` [PATCH v5 03/14] PCI: Add pcie_bandwidth_capable() to compute max supported link bandwidth Bjorn Helgaas
2018-04-01 20:38 ` Tal Gilboa
2018-04-02 0:40 ` Bjorn Helgaas
2018-04-02 7:34 ` Tal Gilboa
2018-04-02 14:05 ` Bjorn Helgaas
2018-04-02 14:34 ` Tal Gilboa
2018-04-02 16:00 ` Keller, Jacob E
2018-04-02 19:37 ` Bjorn Helgaas [this message]
2018-04-03 0:30 ` Jacob Keller
2018-04-03 14:05 ` Bjorn Helgaas
2018-04-03 16:54 ` Keller, Jacob E
2018-03-30 21:05 ` [PATCH v5 04/14] PCI: Add pcie_bandwidth_available() to compute bandwidth available to device Bjorn Helgaas
2018-04-01 20:41 ` Tal Gilboa
2018-04-02 0:41 ` Bjorn Helgaas
2018-03-30 21:05 ` [PATCH v5 05/14] PCI: Add pcie_print_link_status() to log link speed and whether it's limited Bjorn Helgaas
2018-04-02 16:25 ` Keller, Jacob E
2018-04-02 19:58 ` Bjorn Helgaas
2018-04-02 20:25 ` Keller, Jacob E
2018-04-02 21:09 ` Tal Gilboa
2018-04-13 4:32 ` Jakub Kicinski
2018-04-13 14:06 ` Bjorn Helgaas
2018-04-13 15:34 ` Keller, Jacob E
2018-03-30 21:05 ` [PATCH v5 06/14] net/mlx4_core: Report PCIe link properties with pcie_print_link_status() Bjorn Helgaas
2018-03-30 21:05 ` [PATCH v5 07/14] net/mlx5: " Bjorn Helgaas
2018-03-30 21:05 ` [PATCH v5 08/14] net/mlx5e: Use pcie_bandwidth_available() to compute bandwidth Bjorn Helgaas
2018-03-30 21:05 ` [PATCH v5 09/14] bnx2x: Report PCIe link properties with pcie_print_link_status() Bjorn Helgaas
2018-03-30 21:05 ` [PATCH v5 10/14] bnxt_en: " Bjorn Helgaas
2018-03-30 21:06 ` [PATCH v5 11/14] cxgb4: " Bjorn Helgaas
2018-03-30 21:06 ` [PATCH v5 12/14] fm10k: " Bjorn Helgaas
2018-04-02 15:56 ` Keller, Jacob E
2018-04-02 20:31 ` Bjorn Helgaas
2018-04-02 20:36 ` Keller, Jacob E
2018-03-30 21:06 ` [PATCH v5 13/14] ixgbe: " Bjorn Helgaas
2018-03-30 21:06 ` [PATCH v5 14/14] PCI: Remove unused pcie_get_minimum_link() Bjorn Helgaas