From: Dan Williams <dan.j.williams@intel.com>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: linux-cxl@vger.kernel.org, Linux PCI <linux-pci@vger.kernel.org>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	"Weiny, Ira" <ira.weiny@intel.com>,
	Vishal L Verma <vishal.l.verma@intel.com>,
	"Schofield, Alison" <alison.schofield@intel.com>,
	Ben Widawsky <ben.widawsky@intel.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH v2 7/8] cxl/port: Introduce cxl_port objects
Date: Tue, 13 Apr 2021 10:18:44 -0700	[thread overview]
Message-ID: <CAPcyv4iQjePCpsG=SxZ-pbHuCX9cxVZtBzyYKVstNowUQg7WPQ@mail.gmail.com>
In-Reply-To: <CAPcyv4hAc=DERr1z8kr=V01+NSi74f-kSfMAdeArLmVb112_Dw@mail.gmail.com>

On Thu, Apr 8, 2021 at 7:13 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> Hi Bjorn, thanks for taking a look.
>
>
> On Thu, Apr 8, 2021 at 3:42 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > [+cc Greg, Rafael, Matthew: device model questions]
> >
> > Hi Dan,
> >
> > On Thu, Apr 01, 2021 at 07:31:20AM -0700, Dan Williams wrote:
> > > Once the cxl_root is established, other ports in the hierarchy can
> > > be attached. The cxl_port object, unlike cxl_root which is associated
> > > with host bridges, is associated with PCIe Root Ports or PCIe Switch
> > > Ports. Add cxl_port instances for all PCIe Root Ports in an ACPI0016
> > > host bridge.
> >
> > I'm not a device model expert, but I'm not sure about adding a new
> > /sys/bus/cxl/devices hierarchy.  I'm under the impression that CXL
> > devices will be enumerated by the PCI core as PCIe devices.
>
> Yes, PCIe is involved, but mostly only for the CXL.io slow path
> (configuration and provisioning via mailbox) when we're talking about
> memory expander devices (CXL calls these Type-3). So-called "Type-3"
> support is the primary driver of this infrastructure.
>
> You might be thinking of CXL accelerator devices that will look like
> plain PCIe devices that happen to participate in the CPU cache
> hierarchy (CXL calls these Type-1). There will also be accelerator
> devices that want to share coherent memory with the system (CXL calls
> these Type-2).
>
> The infrastructure being proposed here is primarily for the memory
> expander (Type-3) device case, which the PCI sysfs hierarchy is wholly
> unsuited to model. A single CXL memory region device may span
> multiple endpoints, switches, and host bridges. It stresses the OS
> device model much like RAID does: there is a driver for each component
> contributor, plus an upper-level device / driver that exposes the
> RAID volume (here, the CXL memory region interleave set). The CXL
> memory decode space (HDM: Host Managed Device Memory) is independent
> of the PCIe MMIO BAR space.
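
For illustration, here is a minimal sketch (not the patch code itself) of
what registering a port object on a dedicated /sys/bus/cxl bus looks like
in device-model terms. cxl_bus_type and struct cxl_port follow the patch's
naming, but the field layout and the cxl_add_port() helper below are
simplified stand-ins, and cxl_bus_type is assumed to have been registered
with bus_register() at module init:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/slab.h>

struct bus_type cxl_bus_type = {
        .name = "cxl",                  /* appears as /sys/bus/cxl */
};

struct cxl_port {
        struct device dev;
        int id;
        resource_size_t component_reg_phys; /* HDM decoder registers */
};

static void cxl_port_release(struct device *dev)
{
        kfree(container_of(dev, struct cxl_port, dev));
}

static struct cxl_port *cxl_add_port(struct device *host, int id,
                                     resource_size_t component_reg_phys)
{
        struct cxl_port *port;
        int rc;

        port = kzalloc(sizeof(*port), GFP_KERNEL);
        if (!port)
                return ERR_PTR(-ENOMEM);

        port->id = id;
        port->component_reg_phys = component_reg_phys;
        device_initialize(&port->dev);
        port->dev.parent = host;        /* e.g. the PCIe Root Port's device */
        port->dev.bus = &cxl_bus_type;
        port->dev.release = cxl_port_release;
        dev_set_name(&port->dev, "port%d", id);

        rc = device_add(&port->dev);
        if (rc) {
                put_device(&port->dev);
                return ERR_PTR(rc);
        }
        return port;
}

The parent linkage keeps the PCIe topology visible in sysfs, while the bus
membership keeps HDM decode management out of the PCI core.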
>
> That's where the /sys/bus/cxl hierarchy is needed: to manage the HDM
> space across the CXL topology through a construct that is foreign to
> PCIe, the HDM Decoder hierarchy.
>
> > Doesn't
> > that mean we will have one struct device in the pci_dev, and another
> > one in the cxl_port?
>
> Yes, that is the proposal.
>
> > That seems like an issue to me.  More below.
>
> hmm...
>
> >
> > > The cxl_port instances for PCIe Switch Ports are not
> > > included here, as those are to be modeled as another service device
> > > registered on the pcie_port_bus_type.
> >
> > I'm hesitant about the idea of adding more uses of pcie_port_bus_type.
> > I really dislike portdrv because it makes a parallel hierarchy:
> >
> >   /sys/bus/pci
> >   /sys/bus/pci_express
> >
> > for things that really should not be different.  There's a struct
> > device in pci_dev, and potentially several pcie_devices, each with
> > another struct device.  We make these pcie_device things for AER, DPC,
> > hotplug, etc.  E.g.,
> >
> >   /sys/bus/pci/devices/0000:00:1c.0
> >   /sys/bus/pci_express/devices/0000:00:1c.0:pcie002  # AER
> >   /sys/bus/pci_express/devices/0000:00:1c.0:pcie010  # BW notification
> >
> > These are all the same PCI device.  AER is a PCI capability.
> > Bandwidth notification is just a feature of all Downstream Ports.  I
> > think it makes zero sense to have extra struct devices for them.  From
> > a device point of view (enumeration, power management, VM assignment),
> > we can't manage them separately from the underlying PCI device.  For
> > example, we have three separate "power/" directories, but obviously
> > there's only one point of control (00:1c.0):
> >
> >   /sys/devices/pci0000:00/0000:00:1c.0/power/
> >   /sys/devices/pci0000:00/0000:00:1c.0/0000:00:1c.0:pcie002/power/
> >   /sys/devices/pci0000:00/0000:00:1c.0/0000:00:1c.0:pcie010/power/
>
> The superfluous power/ issue can be cleaned up with
> device_set_pm_not_required().
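
For reference, that is roughly a one-line change at device-init time.
Sketch below: init_service_device() is a made-up hook standing in for
portdrv's actual registration path, but device_set_pm_not_required() is
the real driver-core helper and has to run before device_add():

#include <linux/device.h>
#include <linux/pm.h>

/*
 * Sketch: suppress the duplicate power/ directory for a portdrv-style
 * service device.  Marking the device pm-not-required before
 * device_add() makes dpm_sysfs_add() skip the power/ attribute group,
 * leaving the parent PCI device as the single point of PM control.
 */
static void init_service_device(struct device *parent, struct device *dev)
{
        device_initialize(dev);
        dev->parent = parent;           /* e.g. the 00:1c.0 pci_dev's device */
        device_set_pm_not_required(dev);
        /* dev_set_name() and device_add() follow as usual */
}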
>
> What other problems does this pose? In other areas the ability to
> subdivide a device's functionality into sub-drivers has proven to be a
> useful organizing principle; so much so that several driver teams came
> together to create the auxiliary bus, precisely so that sub-drivers
> can be carved off for independent functionality, similar to the
> portdrv organization.
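
To make that comparison concrete, here is a minimal sketch of the
auxiliary-bus pattern. The auxiliary_bus API calls are the real ones, but
the "feature" / "example.feature" names and both halves of the example are
made up for illustration:

#include <linux/auxiliary_bus.h>
#include <linux/module.h>
#include <linux/slab.h>

/* Parent-driver side: carve a sub-function off as an auxiliary device */
static void feature_adev_release(struct device *dev)
{
        kfree(container_of(dev, struct auxiliary_device, dev));
}

static int publish_feature(struct device *parent)
{
        struct auxiliary_device *adev;
        int rc;

        adev = kzalloc(sizeof(*adev), GFP_KERNEL);
        if (!adev)
                return -ENOMEM;

        adev->name = "feature";         /* matched as "<modname>.feature" */
        adev->id = 0;
        adev->dev.parent = parent;
        adev->dev.release = feature_adev_release;

        rc = auxiliary_device_init(adev);
        if (rc) {
                kfree(adev);
                return rc;
        }

        rc = auxiliary_device_add(adev);
        if (rc)
                auxiliary_device_uninit(adev);
        return rc;
}

/* Sub-driver side: an independent module binds to that device by name */
static int feature_probe(struct auxiliary_device *adev,
                         const struct auxiliary_device_id *id)
{
        dev_info(&adev->dev, "feature sub-driver bound\n");
        return 0;
}

static const struct auxiliary_device_id feature_id_table[] = {
        { .name = "example.feature" },
        { }
};
MODULE_DEVICE_TABLE(auxiliary, feature_id_table);

static struct auxiliary_driver feature_driver = {
        .probe = feature_probe,
        .id_table = feature_id_table,
};
module_auxiliary_driver(feature_driver);
MODULE_LICENSE("GPL");

Usage-wise, publish_feature(&pdev->dev) would be called from the parent
driver's probe path, and the sub-driver is free to live in its own module.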
>

Bjorn, any further thoughts on this?

This port architecture question is in the critical path for the next
phase of CXL development (targeting v5.14, not v5.13).

