* [RFC PATCH 0/9] CXL 2.0 Support
@ 2020-11-11  5:43 Ben Widawsky
  2020-11-11  5:43 ` [RFC PATCH 1/9] cxl/acpi: Add an acpi_cxl module for the CXL interconnect Ben Widawsky
                   ` (10 more replies)
  0 siblings, 11 replies; 67+ messages in thread
From: Ben Widawsky @ 2020-11-11  5:43 UTC (permalink / raw)
  To: linux-cxl
  Cc: linux-kernel, linux-pci, linux-acpi, Dan Williams, Ira Weiny,
	Vishal Verma, Kelley, Sean V, Bjorn Helgaas, Rafael J . Wysocki,
	Ben Widawsky

Introduce support for "type-3" memory devices defined in the recently released
Compute Express Link (CXL) 2.0 specification. Specifically, these are the memory
devices defined by section 8.2.8.5 of the CXL 2.0 spec. A reference
implementation emulating these devices is being submitted to the QEMU mailing
list. A "type-3" device is a CXL device that acts as a memory expander for RAM
or PMEM. It might be interleaved with other CXL devices in a given physical
address range.

These changes allow for foundational enumeration of CXL 2.0 memory devices.
The functionality present in this series is:
- Initial driver bring-up
- Device enumeration and an initial sysfs representation
- Submitting a basic firmware command via 'mailbox' to an emulated memory
  device with non-volatile capacity (a rough sketch of the polled handshake
  follows this list).
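
As a rough illustration of the mailbox flow, the sketch below shows a polled
doorbell handshake assuming the register layout of CXL 2.0 section 8.2.8.4.
The helper name, the timeout, and the omission of payload handling are
simplifications for illustration, not the actual code in this series:

  #include <linux/io.h>
  #include <linux/delay.h>
  #include <linux/errno.h>
  #include <linux/bits.h>
  #include <linux/bitfield.h>

  #define CXLDEV_MBOX_CTRL_OFFSET		0x04
  #define   CXLDEV_MBOX_CTRL_DOORBELL		BIT(0)
  #define CXLDEV_MBOX_CMD_OFFSET		0x08
  #define   CXLDEV_MBOX_CMD_OPCODE_MASK		GENMASK_ULL(15, 0)
  #define CXLDEV_MBOX_STATUS_OFFSET		0x10
  #define   CXLDEV_MBOX_STATUS_RET_CODE		GENMASK_ULL(47, 32)

  /* Returns the mailbox return code (0 on success) or a negative errno. */
  static int cxl_mbox_send_polled(void __iomem *mbox, u16 opcode)
  {
  	int i;

  	/* Hardware owns the mailbox while the doorbell is set. */
  	if (readl(mbox + CXLDEV_MBOX_CTRL_OFFSET) & CXLDEV_MBOX_CTRL_DOORBELL)
  		return -EBUSY;

  	/* Payload staging is omitted; only the opcode is programmed here. */
  	writeq(FIELD_PREP(CXLDEV_MBOX_CMD_OPCODE_MASK, opcode),
  	       mbox + CXLDEV_MBOX_CMD_OFFSET);
  	writel(CXLDEV_MBOX_CTRL_DOORBELL, mbox + CXLDEV_MBOX_CTRL_OFFSET);

  	/* Poll for the device to clear the doorbell; timeout is arbitrary. */
  	for (i = 0; i < 1000; i++) {
  		u32 ctrl = readl(mbox + CXLDEV_MBOX_CTRL_OFFSET);
  		u64 status;

  		if (ctrl & CXLDEV_MBOX_CTRL_DOORBELL) {
  			usleep_range(100, 200);
  			continue;
  		}
  		status = readq(mbox + CXLDEV_MBOX_STATUS_OFFSET);
  		return FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE, status);
  	}
  	return -ETIMEDOUT;
  }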

Some of the functionality that is still missing includes:
- Memory interleaving at the host bridge, root port, or switch level
- CXL 1.1 Root Complex Integrated Endpoint Support
- CXL 2.0 Hot plug support

In addition to the core functionality of discovering the spec defined registers
and resources, introduce a CXL device model that will be the foundation for
translating CXL capabilities into existing Linux infrastructure for Persistent
Memory and other memory devices. For now, this only includes support for the
management command mailbox that type-3 devices surface. These control devices
fill the role of "DIMMs" / nmemX memory-devices in LIBNVDIMM terms.
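
As an illustration of the device-model direction, a minimal sketch of surfacing
a memX control device on a dedicated 'cxl' bus is below. The structure names
and error handling are assumptions for illustration and differ from the actual
patches; cxl_bus_type would additionally need bus_register() at module init:

  #include <linux/device.h>
  #include <linux/idr.h>
  #include <linux/slab.h>
  #include <linux/err.h>

  static DEFINE_IDA(cxl_memdev_ida);

  static struct bus_type cxl_bus_type = {
  	.name = "cxl",
  };

  static void cxl_memdev_release(struct device *dev)
  {
  	ida_free(&cxl_memdev_ida, dev->id);
  	kfree(dev);
  }

  static struct device *cxl_memdev_add(struct device *parent)
  {
  	struct device *dev;
  	int rc, id;

  	id = ida_alloc(&cxl_memdev_ida, GFP_KERNEL);
  	if (id < 0)
  		return ERR_PTR(id);

  	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
  	if (!dev) {
  		ida_free(&cxl_memdev_ida, id);
  		return ERR_PTR(-ENOMEM);
  	}

  	device_initialize(dev);
  	dev->id = id;
  	dev->parent = parent;
  	dev->bus = &cxl_bus_type;
  	dev->release = cxl_memdev_release;
  	dev_set_name(dev, "mem%d", id);

  	rc = device_add(dev);
  	if (rc) {
  		put_device(dev);	/* ->release frees the id and dev */
  		return ERR_PTR(rc);
  	}
  	return dev;
  }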

While implementing the driver, some feedback on the specification was generated
to cover perceived gaps and address conflicts. The feedback is presented as a
reference implementation in the driver and the QEMU emulation. Specifically,
the following concepts are original to the Linux implementation, and feedback /
collaboration is requested to develop them into specification proposals:
1. Top level ACPI object (ACPI0017)
2. HW imposed address space and interleave constraints
3. _OSC UUID A4D1629D-FF52-4888-BE96-E5CADE548DB1

ACPI0017
--------
Introduce a new ACPI namespace device with an _HID of ACPI0017. The purpose of
this object is twofold: to support a legacy OS with a set of out-of-tree CXL
modules, and to establish an attach point for a driver that knows about
interleaving. Both of these boil down to the same point: centralizing Operating
System support for resources described by the CXL Early Discovery Table (CEDT).

The legacy OS problem stems from the spec's description of a host bridge:
ACPI0016 is denoted as the _HID for host bridges, with a _CID of PNP0A08. In a
CXL-unaware version of Linux, the core ACPI subsystem will bind a driver to
PNP0A08 and preclude a CXL-aware driver from binding to ACPI0016. An ACPI0017
device allows a standalone CXL-aware driver to register for handling /
coordinating the CEDT and CXL-specific _OSC control.
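
For illustration, a minimal sketch of binding a standalone module to the
ACPI0017 object is below; the callback body and the driver / module naming are
assumptions, not the contents of the actual patches:

  #include <linux/module.h>
  #include <linux/acpi.h>

  static int acpi_cxl_add(struct acpi_device *adev)
  {
  	/* Locate the CEDT, negotiate CXL _OSC, enumerate CXL roots here. */
  	return 0;
  }

  static const struct acpi_device_id acpi_cxl_ids[] = {
  	{ "ACPI0017", 0 },
  	{ }
  };
  MODULE_DEVICE_TABLE(acpi, acpi_cxl_ids);

  static struct acpi_driver acpi_cxl_driver = {
  	.name = "acpi_cxl",
  	.ids = acpi_cxl_ids,
  	.ops = {
  		.add = acpi_cxl_add,
  	},
  };
  module_acpi_driver(acpi_cxl_driver);
  MODULE_LICENSE("GPL v2");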

Similarly, when managing interleaving, there needs to be some management layer
above the ACPI0016 device that is capable of assembling leaf nodes into
interleave sets. Just as ACPI0012 provides this central coordination for
NFIT-defined resources, ACPI0017 does the same for CEDT-described resources.

Memory Windows
--------------
For CXL.mem capable platforms, platform firmware needs a mechanism to make the
Operating System aware of any address space restrictions the hardware imposes.
For example, in a system with 4 host bridges all participating in an interleave
set, the firmware needs to provide some description of that arrangement. That
information is missing from the CXL 2.0 spec as of today, and it is also not
implemented in the driver. A variety of ACPI-based mechanisms, for example _CRS
fields on the ACPI0017 device, were considered.


CXL Exclusive _OSC
------------------
The CXL 2.0 definition adds new _OSC fields for host bridges to allow for new
services provided by CXL - error handling, hot plug, capabilities, etc. This is
built on top of PCIe _OSC via a new UUID. A CXL-unaware OS will use the old UUID
to configure the PCIe host bridge. The expectation is that a CXL-aware OS uses
the new UUID to negotiate both CXL and PCIe capabilities in one shot. The issue
arises when trying to define a standalone CXL driver: the core OS will configure
the PCIe _OSC, but when the CXL driver attempts to set CXL _OSC, the current
definition makes that driver re-specify the PCIe capabilities. An isolated
CXL-only _OSC allows the PCIe core to remain unchanged and lets a CXL driver
stack manage CXL _OSC without the possibility of clobbering / colliding with
the PCIe core's _OSC management. The proposal moves the new _OSC dwords (SUPC
and CTRC) to their own _OSC UUID.
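
As a rough illustration, evaluating such a standalone CXL _OSC from a driver
could look like the sketch below, assuming the conventional four _OSC arguments
and a 3-DWORD capabilities buffer (status/query, SUPC, CTRC). The helper name
and the DWORD count are assumptions, not spec text or patch contents:

  #include <linux/acpi.h>
  #include <linux/uuid.h>
  #include <linux/slab.h>

  #define CXL_OSC_UUID "A4D1629D-FF52-4888-BE96-E5CADE548DB1"

  /* capbuf points at 3 DWORDs: query/status, SUPC, CTRC */
  static acpi_status cxl_run_osc(acpi_handle handle, u32 *capbuf)
  {
  	union acpi_object in_params[4];
  	struct acpi_object_list input = { .count = 4, .pointer = in_params };
  	struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
  	acpi_status status;
  	guid_t guid;

  	guid_parse(CXL_OSC_UUID, &guid);

  	in_params[0].type = ACPI_TYPE_BUFFER;
  	in_params[0].buffer.length = sizeof(guid);
  	in_params[0].buffer.pointer = (u8 *)&guid;
  	in_params[1].type = ACPI_TYPE_INTEGER;
  	in_params[1].integer.value = 1;			/* revision */
  	in_params[2].type = ACPI_TYPE_INTEGER;
  	in_params[2].integer.value = 3;			/* DWORD count */
  	in_params[3].type = ACPI_TYPE_BUFFER;
  	in_params[3].buffer.length = 3 * sizeof(u32);
  	in_params[3].buffer.pointer = (u8 *)capbuf;

  	status = acpi_evaluate_object(handle, "_OSC", &input, &output);
  	kfree(output.pointer);	/* returned capabilities, ignored here */
  	return status;
  }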

Next steps after this basic foundation are expanded command support and
LIBNVDIMM integration. This is the initial "release early / release often"
version of the Linux CXL enabling.


Ben Widawsky (5):
  cxl/mem: Map memory device registers
  cxl/mem: Find device capabilities
  cxl/mem: Initialize the mailbox interface
  cxl/mem: Implement polled mode mailbox
  MAINTAINERS: Add maintainers of the CXL driver

Dan Williams (2):
  cxl/mem: Add a driver for the type-3 mailbox
  cxl/mem: Register CXL memX devices

Vishal Verma (2):
  cxl/acpi: Add an acpi_cxl module for the CXL interconnect
  cxl/acpi: add OSC support

 MAINTAINERS           |   9 +
 drivers/Kconfig       |   1 +
 drivers/Makefile      |   1 +
 drivers/cxl/Kconfig   |  50 ++++
 drivers/cxl/Makefile  |   9 +
 drivers/cxl/acpi.c    | 325 ++++++++++++++++++++++
 drivers/cxl/acpi.h    |  33 +++
 drivers/cxl/bus.c     |  35 +++
 drivers/cxl/bus.h     |   8 +
 drivers/cxl/cxl.h     | 166 +++++++++++
 drivers/cxl/mem.c     | 631 ++++++++++++++++++++++++++++++++++++++++++
 drivers/cxl/pci.h     |  21 ++
 include/acpi/actbl1.h |  52 ++++
 13 files changed, 1341 insertions(+)
 create mode 100644 drivers/cxl/Kconfig
 create mode 100644 drivers/cxl/Makefile
 create mode 100644 drivers/cxl/acpi.c
 create mode 100644 drivers/cxl/acpi.h
 create mode 100644 drivers/cxl/bus.c
 create mode 100644 drivers/cxl/bus.h
 create mode 100644 drivers/cxl/cxl.h
 create mode 100644 drivers/cxl/mem.c
 create mode 100644 drivers/cxl/pci.h

-- 
2.29.2


Thread overview: 67+ messages
2020-11-11  5:43 [RFC PATCH 0/9] CXL 2.0 Support Ben Widawsky
2020-11-11  5:43 ` [RFC PATCH 1/9] cxl/acpi: Add an acpi_cxl module for the CXL interconnect Ben Widawsky
2020-11-11  6:17   ` Randy Dunlap
2020-11-11  7:10   ` Christoph Hellwig
2020-11-11  7:30     ` Verma, Vishal L
2020-11-11  7:34       ` hch
2020-11-11  7:36         ` Verma, Vishal L
2020-11-11 23:03   ` Bjorn Helgaas
2020-11-16 17:59   ` Jonathan Cameron
2020-11-16 18:23     ` Verma, Vishal L
2020-11-17 14:32   ` Rafael J. Wysocki
2020-11-17 21:45     ` Dan Williams
2020-11-18 11:14       ` Rafael J. Wysocki
2020-11-11  5:43 ` [RFC PATCH 2/9] cxl/acpi: add OSC support Ben Widawsky
2020-11-16 17:59   ` Jonathan Cameron
2020-11-16 23:25     ` Dan Williams
2020-11-18 12:25       ` Rafael J. Wysocki
2020-11-18 17:58         ` Dan Williams
2020-11-11  5:43 ` [RFC PATCH 3/9] cxl/mem: Add a driver for the type-3 mailbox Ben Widawsky
2020-11-11  6:17   ` Randy Dunlap
2020-11-11  7:12   ` Christoph Hellwig
2020-11-11 17:17     ` Dan Williams
2020-11-11 18:27       ` Dan Williams
2020-11-11 21:41       ` Randy Dunlap
2020-11-11 22:40         ` Dan Williams
2020-11-16 16:56       ` Christoph Hellwig
2020-11-13 18:17   ` Bjorn Helgaas
2020-11-14  1:08     ` Ben Widawsky
2020-11-15  0:23       ` Dan Williams
2020-11-17 14:49   ` Jonathan Cameron
2020-12-04  7:22     ` Dan Williams
2020-12-04  7:27       ` Dan Williams
2020-12-04 17:39         ` Jonathan Cameron
2020-11-11  5:43 ` [RFC PATCH 4/9] cxl/mem: Map memory device registers Ben Widawsky
2020-11-13 18:17   ` Bjorn Helgaas
2020-11-14  1:12     ` Ben Widawsky
2020-11-16 23:19       ` Dan Williams
2020-11-17  0:23         ` Bjorn Helgaas
2020-11-23 19:20           ` Ben Widawsky
2020-11-23 19:32             ` Dan Williams
2020-11-23 19:58               ` Ben Widawsky
2020-11-17 15:00   ` Jonathan Cameron
2020-11-11  5:43 ` [RFC PATCH 5/9] cxl/mem: Find device capabilities Ben Widawsky
2020-11-13 18:26   ` Bjorn Helgaas
2020-11-14  1:36     ` Ben Widawsky
2020-11-17 15:15   ` Jonathan Cameron
2020-11-24  0:17     ` Ben Widawsky
2020-11-26  6:05   ` Jon Masters
2020-11-26 18:18     ` Ben Widawsky
2020-12-04  7:35     ` Dan Williams
2020-12-04  7:41   ` Dan Williams
2020-12-07  6:12     ` Ben Widawsky
2020-11-11  5:43 ` [RFC PATCH 6/9] cxl/mem: Initialize the mailbox interface Ben Widawsky
2020-11-17 15:22   ` Jonathan Cameron
2020-11-11  5:43 ` [RFC PATCH 7/9] cxl/mem: Implement polled mode mailbox Ben Widawsky
2020-11-13 23:14   ` Bjorn Helgaas
2020-11-17 15:31   ` Jonathan Cameron
2020-11-17 16:34     ` Ben Widawsky
2020-11-17 18:06       ` Jonathan Cameron
2020-11-17 18:38         ` Dan Williams
2020-11-11  5:43 ` [RFC PATCH 8/9] cxl/mem: Register CXL memX devices Ben Widawsky
2020-11-17 15:56   ` Jonathan Cameron
2020-11-20  2:16     ` Dan Williams
2020-11-20 15:20       ` Jonathan Cameron
2020-11-11  5:43 ` [RFC PATCH 9/9] MAINTAINERS: Add maintainers of the CXL driver Ben Widawsky
2020-11-11 22:06 ` [RFC PATCH 0/9] CXL 2.0 Support Ben Widawsky
2020-11-11 22:43 ` Bjorn Helgaas
