From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dave Jiang <dave.jiang@intel.com>
Cc: Bjorn Helgaas <helgaas@kernel.org>, <linux-cxl@vger.kernel.org>,
	<linux-pci@vger.kernel.org>, <linux-acpi@vger.kernel.org>,
	<dan.j.williams@intel.com>, <ira.weiny@intel.com>,
	<vishal.l.verma@intel.com>, <alison.schofield@intel.com>,
	<rafael@kernel.org>, <bhelgaas@google.com>,
	<robert.moore@intel.com>
Subject: Re: [PATCH 12/18] cxl: Add helpers to calculate pci latency for the CXL device
Date: Thu, 9 Feb 2023 15:10:40 +0000
Message-ID: <20230209151040.00006d93@Huawei.com>
In-Reply-To: <158ba672-09f1-a202-4fb6-7168496b95c4@intel.com>

On Wed, 8 Feb 2023 16:56:30 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> On 2/8/23 3:15 PM, Bjorn Helgaas wrote:
> > On Tue, Feb 07, 2023 at 01:51:17PM -0700, Dave Jiang wrote:  
> >>
> >>
> >> On 2/6/23 3:39 PM, Bjorn Helgaas wrote:  
> >>> On Mon, Feb 06, 2023 at 01:51:10PM -0700, Dave Jiang wrote:  
> >>>> The latency is calculated by dividing the FLIT size over the
> >>>> bandwidth. Add support to retrieve the FLIT size for the CXL
> >>>> device and calculate the latency of the downstream link.  
> >   
> >>> I guess you only care about the latency of a single link, not the
> >>> entire path?  
> >>
> >> I am adding each of the link individually together in the next
> >> patch. Are you suggesting a similar function like
> >> pcie_bandwidth_available() but for latency for the entire path?  
> > 
> > Only a clarifying question.
> >   
> >>>> +static int cxl_get_flit_size(struct pci_dev *pdev)
> >>>> +{
> >>>> +	if (cxl_pci_flit_256(pdev))
> >>>> +		return 256;
> >>>> +
> >>>> +	return 66;  
> >>>
> >>> I don't know about the 66-byte flit format, maybe this part is
> >>> CXL-specific?  
> >>
> >> 68-byte flit format. Looks like this is a typo from me.  
> > 
> > This part must be CXL-specific, since I don't think PCIe mentions
> > 68-byte flits.
> >   
> >>>> + * The table indicates that if PCIe Flit Mode is set, then CXL is in 256B flits
> >>>> + * mode, otherwise it's 68B flits mode.
> >>>> + */
> >>>> +static inline bool cxl_pci_flit_256(struct pci_dev *pdev)
> >>>> +{
> >>>> +	u32 lnksta2;
> >>>> +
> >>>> +	pcie_capability_read_dword(pdev, PCI_EXP_LNKSTA2, &lnksta2);
> >>>> +	return lnksta2 & BIT(10);  
> >>>
> >>> Add a #define for the bit.  
> >>
> >> ok will add.
> >>  
> >>>
> >>> AFAICT, the PCIe spec defines this bit, and it only indicates the link
> >>> is or will be operating in Flit Mode; it doesn't actually say anything
> >>> about how large the flits are.  I suppose that's because PCIe only
> >>> talks about 256B flits, not 66B ones?  
> >>
> >> Looking at CXL r3.0 6.2.3 "256B Flit Mode", table 6-4, it shows that
> >> when PCIe Flit Mode is set, then CXL is in 256B flit mode; otherwise it
> >> is in 68B flit mode. So an assumption is made here regarding the flit
> >> size based on the table.  
> > 
> > So reading PCI_EXP_LNKSTA2 and extracting the Flit Mode bit is
> > PCIe-generic, but the interpretation of "PCIe Flit Mode not enabled
> > means 68-byte flits" is CXL-specific?
> > 
> > This sounds wrong, but I don't know quite how.  How would the PCI core
> > manage links where Flit Mode being cleared really means Flit Mode is
> > *enabled* but with a different size?  Seems like something could go
> > wrong there.  
> 
> Looking at the PCIe base spec and the CXL spec, that seemed to be the
> only way to infer the flit size for a CXL device, as far as I can
> tell. I've yet to find a good way to make that determination. Dan?

So a given CXL port will have trained up in one of:
* Normal PCIe (in which case all the normal PCIe stuff applies) - we'll
  fail some of the other checks in the CXL driver and never get here.
  I 'think' the driver will still load for the PCI device, to enable
  things like firmware upgrade, but we won't register the CXL port
  devices that ultimately call this stuff.
  It's perfectly possible to have a driver that copes with this, but it
  would be pretty meaningless for much of the CXL type 3 driver.
* 68-byte flit (which was the CXL precursor to PCIe going flit based).
  Can be queried via the CXL DVSEC Flex Bus Port Status register,
  CXL r3.0 8.2.1.3.3.
* 256-byte flit (may or may not be compatible with the PCIe ones, as
  there are some optional latency optimizations).

So if the 68-byte flit mode is enabled, the 256-byte one should never
be, and the CXL description overrides the old PCIe one.

Hence I think we should have an additional check on the Flex Bus
DVSEC, even though it should be consistent with your assumption above.
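
Something like this, purely as a sketch: the DVSEC ID and the 0xe
status offset are my reading of CXL r3.0 8.2.1.3.3, the define names
here are made up rather than existing kernel ones, and the status bits
to test are deliberately left as a spec lookup.

#include <linux/bits.h>
#include <linux/pci.h>

#define PCI_DVSEC_VENDOR_ID_CXL		0x1e98	/* CXL DVSEC vendor ID */
#define CXL_DVSEC_FLEXBUS_PORT		7	/* Flex Bus Port DVSEC ID */
#define CXL_FLEXBUS_PORT_STATUS		0xe	/* Port Status offset in DVSEC */

/* PCIe r6.0 Link Status 2 bit 10: Flit Mode Status */
#define PCI_EXP_LNKSTA2_FLIT		BIT(10)

/* Fetch the Flex Bus Port Status word; returns <0 if the DVSEC is absent */
static int cxl_flexbus_port_status(struct pci_dev *pdev, u16 *status)
{
	u16 pos;

	pos = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL,
					CXL_DVSEC_FLEXBUS_PORT);
	if (!pos)
		return -ENODEV;

	pci_read_config_word(pdev, pos + CXL_FLEXBUS_PORT_STATUS, status);

	/*
	 * Caller tests the 68B / 256B flit enabled bits (and the latency
	 * optimized bit) per CXL r3.0 8.2.1.3.3 - bit numbers omitted
	 * here as they need checking against the spec.
	 */
	return 0;
}

That would let us confirm 68-byte mode directly, and cross-check it
against the PCI_EXP_LNKSTA2 flit bit, rather than inferring 68-byte
mode purely from that bit being clear.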

Hmm. That does raise the question of how we take the latency-optimized
flits into account, or indeed some of the other latency-impacting
things that may or may not be running - IDE in its various modes, for
example.

For latency-optimized flits we can query the relevant bit in the Flex
Bus Port Status. IDE info will be somewhere, I guess, though I have no
idea whether there is a way to know its latency impact.
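
For the per-link number itself, what this patch computes boils down to
flit bytes over raw link bandwidth. A minimal sketch, where
cxl_link_mbps() is a hypothetical helper built on the pcie_get_speed()
/ pcie_get_width() exports earlier in this series (not an existing
function), and cxl_pci_flit_256() is the helper from this patch:

/* Hypothetical: raw unidirectional link bandwidth in MB/s */
long cxl_link_mbps(struct pci_dev *pdev);

/* Time for one flit to cross the link, in picoseconds */
static long cxl_link_latency_ps(struct pci_dev *pdev)
{
	long bw_mbps = cxl_link_mbps(pdev);
	long flit_bytes = cxl_pci_flit_256(pdev) ? 256 : 68;

	if (bw_mbps <= 0)
		return 0;

	/* bytes / (MB/s), scaled by 1e6 to give picoseconds */
	return flit_bytes * 1000000L / bw_mbps;
}

As a sanity check, a 68-byte flit on a x16 gen5 link (~64000 MB/s raw)
comes out around 1 ns, and patch 13 then sums the per-link numbers over
each hop in the path.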

Jonathan

> 
> 
> > 
> > Bjorn  


