linux-pci.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 0/8] CXL 2.0 Support
@ 2021-02-10  0:02 Ben Widawsky
  2021-02-10  0:02 ` [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints Ben Widawsky
                   ` (7 more replies)
  0 siblings, 8 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

# Changes since v1 [1]

   * Squash together several other patches (Ben)
   * Make register locator only search the DVSEC size. Bug fix. (Ben)
   * Get rid of anonymous structs in send UAPI (Ben)
   * Rename "MB" to "MBOX" in defines (Ben)
   * Dynamically allocate enable_cmds bitmask (Ben)
   * Async probe (Dan)
   * Remove get_live_device() (Dan)
   * CXL_MAILBOX_TIMEOUT_MS 2*HZ instead of runtime conversion (Dan)
   * Reword RAW Kconfig help (Dan)
   * Move IOCTL handlers to their own functions (Dan)
   * Remove HIDDEN flag (Dan)
   * Remove MUTEX flag (Dan)
   * Get rid of const info in mem_command (Dan)
   * Remove useless mbox initialization in user commands (Dan)
   * Rename DEBUG_UUID to VENDOR_DEBUG_UUID (Dan)
   * Remove dev_info of enabled commands (Dan)
   * Get rid of MANDATORY and PSEUDO flags (Dan)
   * Clarify cmd vs. mbox_cmd in send by removing cmd (Dan)
     * This results in removal of some very unlikely debug messages.
   * Reword Kconfig (David)
   * Cap payload size max to 1M to match spec (David)
     * Driver still binds, but IOCTLs fail if too large.
   * s/US/MS for timeout (David)
   * Fix comment indenting to denote text that is not part of the spec (David)
   * Use struct initializer for mailbox command (David)
   * Add units to sysfs ABI documentation (David)
   * Use FIELD_GET for register locator parsing (hch)
   * Use FIELD_GET/SET directly instead of wrappers (hch)
   * Remove cpp guards (hch)
   * Drop register read/write helpers (hch)
   * Squash together device capability patches (hch)
   * Move PCI_CLASS_MEMORY_CXL to pci_ids.h (hch)
   * Use file_inode instead of file->private_data (hch)
   * Hide RAW commands behind CONFIG option (Konrad)
   * Include security_locked_down() check (Konrad)
   * Extend past 80 characters in certain places (Konrad)
   * Remove magic numbers of register locator enumeration (Konrad)
   * Fix packing for send UAPI (Konrad)

---

In addition to the mailing list, please feel free to use #cxl on oftc IRC for
discussion.

---

# Summary

Introduce support for “type-3” memory devices defined in the Compute Express
Link (CXL) 2.0 specification [2]. Specifically, these are the memory devices
defined by section 8.2.8.5 of the CXL 2.0 spec. A reference implementation
emulating these devices has been submitted to the QEMU mailing list [3] and is
available on gitlab [4], but will move to a shared tree on kernel.org after
initial acceptance. “Type-3” is a CXL device that acts as a memory expander for
RAM or Persistent Memory. The device might be interleaved with other CXL devices
in a given physical address range.

In addition to the core functionality of discovering the spec defined registers
and resources, introduce a CXL device model that will be the foundation for
translating CXL capabilities into existing Linux infrastructure for Persistent
Memory and other memory devices. For now, this only includes support for the
management command mailbox and the surfacing of type-3 devices. These control
devices fill the role of “DIMMs” / nmemX memory-devices in LIBNVDIMM terms.

## Userspace Interaction

Interaction with the driver and type-3 devices via the CXL drivers is introduced
in this patch series and is considered stable ABI. The interfaces include:

   * sysfs - Documentation/ABI/testing/sysfs-bus-cxl
   * IOCTL - Documentation/driver-api/cxl/memory-devices.rst
   * debugfs - Documentation/ABI/testing/debugfs-debug

Work is in progress to add support for CXL interactions to the ndctl project [5].
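
As a quick check that the driver bound and identified a device, the sysfs
attributes can be read directly. For example (the "mem0" instance name is
hypothetical; use whichever memX device enumerates on your system):

# mem0 is a placeholder for whichever memX instance appears
cat /sys/bus/cxl/devices/mem0/firmware_version
cat /sys/bus/cxl/devices/mem0/ram/size
cat /sys/bus/cxl/devices/mem0/pmem/size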

### Development plans

One of the unique challenges that CXL imposes on the Linux driver model is that
it requires the operating system to perform physical address space management
interleaved across devices and bridges. Whereas LIBNVDIMM handles a list of
established static persistent memory address ranges (for example from the ACPI
NFIT), CXL introduces hotplug and the concept of allocating address space to
instantiate persistent memory ranges. This is similar to PCI in the sense that
the platform establishes the MMIO range for PCI BARs to be allocated, but it is
significantly complicated by the fact that a given device can optionally be
interleaved with other devices and can participate in several interleave-sets at
once. LIBNVDIMM handled something like this with the aliasing between PMEM and
BLOCK-WINDOW mode, but CXL adds flexibility to alias DEVICE MEMORY through up to
10 decoders per device.

All of the above needs to be enabled with respect to PCI hotplug events on
Type-3 memory devices; this requires hooks to determine whether a given device
is contributing to a "System RAM" address range that cannot be unplugged. In
other words CXL ties PCI hotplug to Memory Hotplug and PCI hotplug needs to be
able to negotiate with memory hotplug.  In the medium term the implications of
CXL hotplug vs ACPI SRAT/SLIT/HMAT need to be reconciled. One capability that
seems to be needed is either the dynamic allocation of new memory nodes, or
default initializing extra pgdat instances beyond what is enumerated in ACPI
SRAT to accommodate hot-added CXL memory.

Patches welcome, questions welcome as the development effort on the post v5.12
capabilities proceeds.

## Running in QEMU

The incantation to get CXL support in QEMU [4] is considered unstable at this
time. Future readers of this cover letter should verify if any changes are
needed. For the novice QEMU user, the following can be copy/pasted into a
working QEMU command line; an assembled single-command example follows the
fragments. It is enough to make the simplest topology possible.
The topology would consist of a single memory window, single type3 device,
single root port, and single host bridge.

    +-------------+
    |   CXL PXB   |
    |             |
    |  +-------+  |<----------+
    |  |CXL RP |  |           |
    +--+-------+--+           v
           |            +----------+
           |            | "window" |
           |            +----------+
           v                  ^
    +-------------+           |
    |  CXL Type 3 |           |
    |   Device    |<----------+
    +-------------+

// Memory backend for "window"
-object memory-backend-file,id=cxl-mem1,share,mem-path=cxl-type3,size=512M

// Memory backend for LSA
-object memory-backend-file,id=cxl-mem1-lsa,share,mem-path=cxl-mem1-lsa,size=1K

// Host Bridge
-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52,uid=0,len-window-base=1,window-base[0]=0x4c0000000,memdev[0]=cxl-mem1

// Single root port
-device cxl-rp,id=rp0,bus=cxl.0,addr=0.0,chassis=0,slot=0,memdev=cxl-mem1

// Single type3 device
-device cxl-type3,bus=rp0,memdev=cxl-mem1,id=cxl-pmem0,size=256M,lsa=cxl-mem1-lsa
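
For reference, the fragments above all go on one QEMU command line. A rough
assembly is shown below; the base machine options (-M q35, -m 2G, -nographic)
are generic QEMU options assumed here (-M q35 only because the host bridge
attaches to pcie.0), and the usual kernel/disk options still need to be added:

# Options on the first line are illustrative assumptions, not CXL requirements
qemu-system-x86_64 -M q35 -m 2G -nographic \
    -object memory-backend-file,id=cxl-mem1,share,mem-path=cxl-type3,size=512M \
    -object memory-backend-file,id=cxl-mem1-lsa,share,mem-path=cxl-mem1-lsa,size=1K \
    -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52,uid=0,len-window-base=1,window-base[0]=0x4c0000000,memdev[0]=cxl-mem1 \
    -device cxl-rp,id=rp0,bus=cxl.0,addr=0.0,chassis=0,slot=0,memdev=cxl-mem1 \
    -device cxl-type3,bus=rp0,memdev=cxl-mem1,id=cxl-pmem0,size=256M,lsa=cxl-mem1-lsa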

---

[1]: https://lore.kernel.org/linux-cxl/20210130002438.1872527-1-ben.widawsky@intel.com/
[2]: https://www.computeexpresslink.org/
[3]: https://lore.kernel.org/qemu-devel/20210202005948.241655-1-ben.widawsky@intel.com/
[4]: https://gitlab.com/bwidawsk/qemu/-/tree/cxl-2.0v4
[5]: https://github.com/pmem/ndctl/tree/cxl-2.0v2

---

Ben Widawsky (6):
  cxl/mem: Find device capabilities
  cxl/mem: Add basic IOCTL interface
  cxl/mem: Add a "RAW" send command
  cxl/mem: Enable commands via CEL
  cxl/mem: Add set of informational commands
  MAINTAINERS: Add maintainers of the CXL driver

Dan Williams (2):
  cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
  cxl/mem: Register CXL memX devices

 .clang-format                                 |    1 +
 Documentation/ABI/testing/sysfs-bus-cxl       |   26 +
 Documentation/driver-api/cxl/index.rst        |   12 +
 .../driver-api/cxl/memory-devices.rst         |   46 +
 Documentation/driver-api/index.rst            |    1 +
 .../userspace-api/ioctl/ioctl-number.rst      |    1 +
 MAINTAINERS                                   |   11 +
 drivers/Kconfig                               |    1 +
 drivers/Makefile                              |    1 +
 drivers/cxl/Kconfig                           |   67 +
 drivers/cxl/Makefile                          |    7 +
 drivers/cxl/bus.c                             |   29 +
 drivers/cxl/cxl.h                             |   99 ++
 drivers/cxl/mem.c                             | 1544 +++++++++++++++++
 drivers/cxl/pci.h                             |   31 +
 include/linux/pci_ids.h                       |    1 +
 include/uapi/linux/cxl_mem.h                  |  168 ++
 include/uapi/linux/pci_regs.h                 |    1 +
 18 files changed, 2047 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-bus-cxl
 create mode 100644 Documentation/driver-api/cxl/index.rst
 create mode 100644 Documentation/driver-api/cxl/memory-devices.rst
 create mode 100644 drivers/cxl/Kconfig
 create mode 100644 drivers/cxl/Makefile
 create mode 100644 drivers/cxl/bus.c
 create mode 100644 drivers/cxl/cxl.h
 create mode 100644 drivers/cxl/mem.c
 create mode 100644 drivers/cxl/pci.h
 create mode 100644 include/uapi/linux/cxl_mem.h

Cc: linux-acpi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-nvdimm@lists.01.org
Cc: linux-pci@vger.kernel.org
Cc: Bjorn Helgaas <helgaas@kernel.org>
Cc: Chris Browy <cbrowy@avery-design.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Jon Masters <jcm@jonmasters.org>
Cc: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: "John Groves (jgroves)" <jgroves@micron.com>
Cc: "Kelley, Sean V" <sean.v.kelley@intel.com>

-- 
2.30.0


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  2021-02-10 16:17   ` Jonathan Cameron
  2021-02-10  0:02 ` [PATCH v2 2/8] cxl/mem: Find device capabilities Ben Widawsky
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, Jonathan Corbet

From: Dan Williams <dan.j.williams@intel.com>

The CXL.mem protocol allows a device to act as a provider of "System
RAM" and/or "Persistent Memory" that is fully coherent as if the memory
was attached to the typical CPU memory controller.

With the CXL-2.0 specification a PCI endpoint can implement a "Type-3"
device interface and give the operating system control over "Host
Managed Device Memory". See section 2.3 Type 3 CXL Device.

The memory range exported by the device may optionally be described by
the platform firmware memory map, or by infrastructure like LIBNVDIMM to
provision persistent memory capacity from one or more CXL.mem devices.

A pre-requisite for Linux-managed memory-capacity provisioning is this
cxl_mem driver that can speak the mailbox protocol defined in section
8.2.8.4 Mailbox Registers.

For now just land the initial driver boiler-plate and Documentation/
infrastructure.

Link: https://www.computeexpresslink.org/download-the-specification
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Acked-by: David Rientjes <rientjes@google.com> (v1)
---
 Documentation/driver-api/cxl/index.rst        | 12 ++++
 .../driver-api/cxl/memory-devices.rst         | 29 +++++++++
 Documentation/driver-api/index.rst            |  1 +
 drivers/Kconfig                               |  1 +
 drivers/Makefile                              |  1 +
 drivers/cxl/Kconfig                           | 35 +++++++++++
 drivers/cxl/Makefile                          |  4 ++
 drivers/cxl/mem.c                             | 63 +++++++++++++++++++
 drivers/cxl/pci.h                             | 18 ++++++
 include/linux/pci_ids.h                       |  1 +
 10 files changed, 165 insertions(+)
 create mode 100644 Documentation/driver-api/cxl/index.rst
 create mode 100644 Documentation/driver-api/cxl/memory-devices.rst
 create mode 100644 drivers/cxl/Kconfig
 create mode 100644 drivers/cxl/Makefile
 create mode 100644 drivers/cxl/mem.c
 create mode 100644 drivers/cxl/pci.h

diff --git a/Documentation/driver-api/cxl/index.rst b/Documentation/driver-api/cxl/index.rst
new file mode 100644
index 000000000000..036e49553542
--- /dev/null
+++ b/Documentation/driver-api/cxl/index.rst
@@ -0,0 +1,12 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================
+Compute Express Link
+====================
+
+.. toctree::
+   :maxdepth: 1
+
+   memory-devices
+
+.. only::  subproject and html
diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
new file mode 100644
index 000000000000..43177e700d62
--- /dev/null
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -0,0 +1,29 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. include:: <isonum.txt>
+
+===================================
+Compute Express Link Memory Devices
+===================================
+
+A Compute Express Link Memory Device is a CXL component that implements the
+CXL.mem protocol. It contains some amount of volatile memory, persistent memory,
+or both. It is enumerated as a PCI device for configuration and passing
+messages over an MMIO mailbox. Its contribution to the System Physical
+Address space is handled via HDM (Host Managed Device Memory) decoders
+that optionally define a device's contribution to an interleaved address
+range across multiple devices underneath a host-bridge or interleaved
+across host-bridges.
+
+Driver Infrastructure
+=====================
+
+This section covers the driver infrastructure for a CXL memory device.
+
+CXL Memory Device
+-----------------
+
+.. kernel-doc:: drivers/cxl/mem.c
+   :doc: cxl mem
+
+.. kernel-doc:: drivers/cxl/mem.c
+   :internal:
diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
index 2456d0a97ed8..d246a18fd78f 100644
--- a/Documentation/driver-api/index.rst
+++ b/Documentation/driver-api/index.rst
@@ -35,6 +35,7 @@ available subsections can be seen below.
    usb/index
    firewire
    pci/index
+   cxl/index
    spi
    i2c
    ipmb
diff --git a/drivers/Kconfig b/drivers/Kconfig
index dcecc9f6e33f..62c753a73651 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -6,6 +6,7 @@ menu "Device Drivers"
 source "drivers/amba/Kconfig"
 source "drivers/eisa/Kconfig"
 source "drivers/pci/Kconfig"
+source "drivers/cxl/Kconfig"
 source "drivers/pcmcia/Kconfig"
 source "drivers/rapidio/Kconfig"
 
diff --git a/drivers/Makefile b/drivers/Makefile
index fd11b9ac4cc3..678ea810410f 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_NVM)		+= lightnvm/
 obj-y				+= base/ block/ misc/ mfd/ nfc/
 obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
 obj-$(CONFIG_DAX)		+= dax/
+obj-$(CONFIG_CXL_BUS)		+= cxl/
 obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
 obj-$(CONFIG_NUBUS)		+= nubus/
 obj-y				+= macintosh/
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
new file mode 100644
index 000000000000..9e80b311e928
--- /dev/null
+++ b/drivers/cxl/Kconfig
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: GPL-2.0-only
+menuconfig CXL_BUS
+	tristate "CXL (Compute Express Link) Devices Support"
+	depends on PCI
+	help
+	  CXL is a bus that is electrically compatible with PCI Express, but
+	  layers three protocols on that signalling (CXL.io, CXL.cache, and
+	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
+	  locally, the CXL.mem protocol allows devices to be fully coherent
+	  memory targets, the CXL.io protocol is equivalent to PCI Express.
+	  Say 'y' to enable support for the configuration and management of
+	  devices supporting these protocols.
+
+if CXL_BUS
+
+config CXL_MEM
+	tristate "CXL.mem: Memory Devices"
+	help
+	  The CXL.mem protocol allows a device to act as a provider of
+	  "System RAM" and/or "Persistent Memory" that is fully coherent
+	  as if the memory was attached to the typical CPU memory
+	  controller.
+
+	  Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as
+	  a module) that will attach to CXL.mem devices for
+	  configuration, provisioning, and health monitoring. This
+	  driver is required for dynamic provisioning of CXL.mem
+	  attached memory which is a prerequisite for persistent memory
+	  support. Typically volatile memory is mapped by platform
+	  firmware and included in the platform memory map, but in some
+	  cases the OS is responsible for mapping that memory. See
+	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
+
+	  If unsure say 'm'.
+endif
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
new file mode 100644
index 000000000000..4a30f7c3fc4a
--- /dev/null
+++ b/drivers/cxl/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_CXL_MEM) += cxl_mem.o
+
+cxl_mem-y := mem.o
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
new file mode 100644
index 000000000000..99a6571508df
--- /dev/null
+++ b/drivers/cxl/mem.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/io.h>
+#include "pci.h"
+
+static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
+{
+	int pos;
+
+	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
+	if (!pos)
+		return 0;
+
+	while (pos) {
+		u16 vendor, id;
+
+		pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor);
+		pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id);
+		if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id)
+			return pos;
+
+		pos = pci_find_next_ext_capability(pdev, pos,
+						   PCI_EXT_CAP_ID_DVSEC);
+	}
+
+	return 0;
+}
+
+static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct device *dev = &pdev->dev;
+	int regloc;
+
+	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
+	if (!regloc) {
+		dev_err(dev, "register location dvsec not found\n");
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static const struct pci_device_id cxl_mem_pci_tbl[] = {
+	/* PCI class code for CXL.mem Type-3 Devices */
+	{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+	  PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF, 0xffffff, 0 },
+	{ /* terminate list */ },
+};
+MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
+
+static struct pci_driver cxl_mem_driver = {
+	.name			= KBUILD_MODNAME,
+	.id_table		= cxl_mem_pci_tbl,
+	.probe			= cxl_mem_probe,
+	.driver	= {
+		.probe_type	= PROBE_PREFER_ASYNCHRONOUS,
+	},
+};
+
+MODULE_LICENSE("GPL v2");
+module_pci_driver(cxl_mem_driver);
diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
new file mode 100644
index 000000000000..f135b9f7bb21
--- /dev/null
+++ b/drivers/cxl/pci.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#ifndef __CXL_PCI_H__
+#define __CXL_PCI_H__
+
+#define CXL_MEMORY_PROGIF	0x10
+
+/*
+ * See section 8.1 Configuration Space Registers in the CXL 2.0
+ * Specification
+ */
+#define PCI_EXT_CAP_ID_DVSEC		0x23
+#define PCI_DVSEC_VENDOR_ID_CXL		0x1E98
+#define PCI_DVSEC_ID_CXL		0x0
+
+#define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
+
+#endif /* __CXL_PCI_H__ */
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index d8156a5dbee8..766260a9b247 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -51,6 +51,7 @@
 #define PCI_BASE_CLASS_MEMORY		0x05
 #define PCI_CLASS_MEMORY_RAM		0x0500
 #define PCI_CLASS_MEMORY_FLASH		0x0501
+#define PCI_CLASS_MEMORY_CXL		0x0502
 #define PCI_CLASS_MEMORY_OTHER		0x0580
 
 #define PCI_BASE_CLASS_BRIDGE		0x06
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
  2021-02-10  0:02 ` [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  2021-02-10 13:32   ` Jonathan Cameron
  2021-02-10 17:41   ` Jonathan Cameron
  2021-02-10  0:02 ` [PATCH v2 3/8] cxl/mem: Register CXL memX devices Ben Widawsky
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

Provide enough functionality to utilize the mailbox of a memory device.
The mailbox is used to interact with the firmware running on the memory
device. The flow is proven with one implemented command, "identify",
because the class code has already told the driver this is a memory
device and the identify command is mandatory.

CXL devices contain an array of capabilities that describe the
interactions software can have with the device or firmware running on
the device. A CXL compliant device must implement the device status and
the mailbox capability. Additionally, a CXL compliant memory device must
implement the memory device capability. Each of these capabilities
provides an offset within the MMIO region for interacting with the
CXL device.

The capabilities tell the driver how to find and map the register space
for CXL Memory Devices. The registers are required to utilize the CXL
spec defined mailbox interface. The spec outlines two mailboxes, primary
and secondary. The secondary mailbox is earmarked for system firmware,
and not handled in this driver.

Primary mailboxes are capable of generating an interrupt when submitting
a background command. That implementation is saved for a later time.

Link: https://www.computeexpresslink.org/download-the-specification
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/Kconfig           |  14 +
 drivers/cxl/cxl.h             |  93 +++++++
 drivers/cxl/mem.c             | 511 +++++++++++++++++++++++++++++++++-
 drivers/cxl/pci.h             |  13 +
 include/uapi/linux/pci_regs.h |   1 +
 5 files changed, 630 insertions(+), 2 deletions(-)
 create mode 100644 drivers/cxl/cxl.h

diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 9e80b311e928..c4ba3aa0a05d 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -32,4 +32,18 @@ config CXL_MEM
 	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
 
 	  If unsure say 'm'.
+
+config CXL_MEM_INSECURE_DEBUG
+	bool "CXL.mem debugging"
+	depends on CXL_MEM
+	help
+	  Enable debug of all CXL command payloads.
+
+	  Some CXL devices and controllers support encryption and other
+	  security features. The payloads for the commands that enable
+	  those features may contain sensitive clear-text security
+	  material. Disable debug of those command payloads by default.
+	  If you are a kernel developer actively working on CXL
+	  security enabling say Y, otherwise say N.
+
 endif
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
new file mode 100644
index 000000000000..745f5e0bfce3
--- /dev/null
+++ b/drivers/cxl/cxl.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2020 Intel Corporation. */
+
+#ifndef __CXL_H__
+#define __CXL_H__
+
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/io.h>
+
+/* CXL 2.0 8.2.8.1 Device Capabilities Array Register */
+#define CXLDEV_CAP_ARRAY_OFFSET 0x0
+#define   CXLDEV_CAP_ARRAY_CAP_ID 0
+#define   CXLDEV_CAP_ARRAY_ID_MASK GENMASK(15, 0)
+#define   CXLDEV_CAP_ARRAY_COUNT_MASK GENMASK(47, 32)
+/* CXL 2.0 8.2.8.2.1 CXL Device Capabilities */
+#define CXLDEV_CAP_CAP_ID_DEVICE_STATUS 0x1
+#define CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX 0x2
+#define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3
+#define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000
+
+/* CXL 2.0 8.2.8.4 Mailbox Registers */
+#define CXLDEV_MBOX_CAPS_OFFSET 0x00
+#define   CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0)
+#define CXLDEV_MBOX_CTRL_OFFSET 0x04
+#define   CXLDEV_MBOX_CTRL_DOORBELL BIT(0)
+#define CXLDEV_MBOX_CMD_OFFSET 0x08
+#define   CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK GENMASK(15, 0)
+#define   CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK GENMASK(36, 16)
+#define CXLDEV_MBOX_STATUS_OFFSET 0x10
+#define   CXLDEV_MBOX_STATUS_RET_CODE_MASK GENMASK(47, 32)
+#define CXLDEV_MBOX_BG_CMD_STATUS_OFFSET 0x18
+#define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20
+
+/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
+#define CXLMDEV_STATUS_OFFSET 0x0
+#define   CXLMDEV_DEV_FATAL BIT(0)
+#define   CXLMDEV_FW_HALT BIT(1)
+#define   CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2)
+#define     CXLMDEV_MS_NOT_READY 0
+#define     CXLMDEV_MS_READY 1
+#define     CXLMDEV_MS_ERROR 2
+#define     CXLMDEV_MS_DISABLED 3
+#define CXLMDEV_READY(status)                                                  \
+	(FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) ==                \
+	 CXLMDEV_MS_READY)
+#define   CXLMDEV_MBOX_IF_READY BIT(4)
+#define   CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5)
+#define     CXLMDEV_RESET_NEEDED_NOT 0
+#define     CXLMDEV_RESET_NEEDED_COLD 1
+#define     CXLMDEV_RESET_NEEDED_WARM 2
+#define     CXLMDEV_RESET_NEEDED_HOT 3
+#define     CXLMDEV_RESET_NEEDED_CXL 4
+#define CXLMDEV_RESET_NEEDED(status)                                           \
+	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
+	 CXLMDEV_RESET_NEEDED_NOT)
+
+/**
+ * struct cxl_mem - A CXL memory device
+ * @pdev: The PCI device associated with this CXL device.
+ * @regs: IO mappings to the device's MMIO
+ * @status_regs: CXL 2.0 8.2.8.3 Device Status Registers
+ * @mbox_regs: CXL 2.0 8.2.8.4 Mailbox Registers
+ * @memdev_regs: CXL 2.0 8.2.8.5 Memory Device Registers
+ * @payload_size: Size of space for payload
+ *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
+ * @mbox_mutex: Mutex to synchronize mailbox access.
+ * @firmware_version: Firmware version for the memory device.
+ * @pmem: Persistent memory capacity information.
+ * @ram: Volatile memory capacity information.
+ */
+struct cxl_mem {
+	struct pci_dev *pdev;
+	void __iomem *regs;
+
+	void __iomem *status_regs;
+	void __iomem *mbox_regs;
+	void __iomem *memdev_regs;
+
+	size_t payload_size;
+	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
+	char firmware_version[0x10];
+
+	struct {
+		struct range range;
+	} pmem;
+
+	struct {
+		struct range range;
+	} ram;
+};
+
+#endif /* __CXL_H__ */
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 99a6571508df..0a868a15badc 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -4,6 +4,401 @@
 #include <linux/pci.h>
 #include <linux/io.h>
 #include "pci.h"
+#include "cxl.h"
+
+#define cxl_doorbell_busy(cxlm)                                                \
+	(readl((cxlm)->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET) &                  \
+	 CXLDEV_MBOX_CTRL_DOORBELL)
+
+/* CXL 2.0 - 8.2.8.4 */
+#define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
+
+enum opcode {
+	CXL_MBOX_OP_IDENTIFY		= 0x4000,
+	CXL_MBOX_OP_MAX			= 0x10000
+};
+
+/**
+ * struct mbox_cmd - A command to be submitted to hardware.
+ * @opcode: (input) The command set and command submitted to hardware.
+ * @payload_in: (input) Pointer to the input payload.
+ * @payload_out: (output) Pointer to the output payload. Must be allocated by
+ *		 the caller.
+ * @size_in: (input) Number of bytes to load from @payload_in.
+ * @size_out: (output) Number of bytes loaded into @payload_out.
+ * @return_code: (output) Error code returned from hardware.
+ *
+ * This is the primary mechanism used to send commands to the hardware.
+ * All the fields except @payload_* correspond exactly to the fields described in
+ * Command Register section of the CXL 2.0 spec (8.2.8.4.5). @payload_in and
+ * @payload_out are written to, and read from the Command Payload Registers
+ * defined in (8.2.8.4.8).
+ */
+struct mbox_cmd {
+	u16 opcode;
+	void *payload_in;
+	void *payload_out;
+	size_t size_in;
+	size_t size_out;
+	u16 return_code;
+#define CXL_MBOX_SUCCESS 0
+};
+
+static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
+{
+	const unsigned long start = jiffies;
+	unsigned long end = start;
+
+	while (cxl_doorbell_busy(cxlm)) {
+		end = jiffies;
+
+		if (time_after(end, start + CXL_MAILBOX_TIMEOUT_MS)) {
+			/* Check again in case preempted before timeout test */
+			if (!cxl_doorbell_busy(cxlm))
+				break;
+			return -ETIMEDOUT;
+		}
+		cpu_relax();
+	}
+
+	dev_dbg(&cxlm->pdev->dev, "Doorbell wait took %dms",
+		jiffies_to_msecs(end) - jiffies_to_msecs(start));
+	return 0;
+}
+
+static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
+				 struct mbox_cmd *mbox_cmd)
+{
+	struct device *dev = &cxlm->pdev->dev;
+
+	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
+		mbox_cmd->opcode, mbox_cmd->size_in);
+
+	if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
+		print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
+				     mbox_cmd->payload_in, mbox_cmd->size_in,
+				     true);
+	}
+}
+
+/**
+ * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
+ * @cxlm: The CXL memory device to communicate with.
+ * @mbox_cmd: Command to send to the memory device.
+ *
+ * Context: Any context. Expects mbox_lock to be held.
+ * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
+ *         Caller should check the return code in @mbox_cmd to make sure it
+ *         succeeded.
+ *
+ * This is a generic form of the CXL mailbox send command, thus the only I/O
+ * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
+ * types of CXL devices may have further information available upon error
+ * conditions.
+ *
+ * The CXL spec allows for up to two mailboxes. The intention is for the primary
+ * mailbox to be OS controlled and the secondary mailbox to be used by system
+ * firmware. This allows the OS and firmware to communicate with the device and
+ * not need to coordinate with each other. The driver only uses the primary
+ * mailbox.
+ */
+static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
+				 struct mbox_cmd *mbox_cmd)
+{
+	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
+	u64 cmd_reg, status_reg;
+	size_t out_len;
+	int rc;
+
+	lockdep_assert_held(&cxlm->mbox_mutex);
+
+	/*
+	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
+	 *   1. Caller reads MB Control Register to verify doorbell is clear
+	 *   2. Caller writes Command Register
+	 *   3. Caller writes Command Payload Registers if input payload is non-empty
+	 *   4. Caller writes MB Control Register to set doorbell
+	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
+	 *   6. Caller reads MB Status Register to fetch Return code
+	 *   7. If command successful, Caller reads Command Register to get Payload Length
+	 *   8. If output payload is non-empty, host reads Command Payload Registers
+	 *
+	 * Hardware is free to do whatever it wants before the doorbell is rung,
+	 * and isn't allowed to change anything after it clears the doorbell. As
+	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
+	 * also happen in any order (though some orders might not make sense).
+	 */
+
+	/* #1 */
+	if (cxl_doorbell_busy(cxlm)) {
+		dev_err_ratelimited(&cxlm->pdev->dev,
+				    "Mailbox re-busy after acquiring\n");
+		return -EBUSY;
+	}
+
+	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
+			     mbox_cmd->opcode);
+	if (mbox_cmd->size_in) {
+		if (WARN_ON(!mbox_cmd->payload_in))
+			return -EINVAL;
+
+		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
+				      mbox_cmd->size_in);
+		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
+	}
+
+	/* #2, #3 */
+	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
+
+	/* #4 */
+	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
+	writel(CXLDEV_MBOX_CTRL_DOORBELL,
+	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
+
+	/* #5 */
+	rc = cxl_mem_wait_for_doorbell(cxlm);
+	if (rc == -ETIMEDOUT) {
+		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
+		return rc;
+	}
+
+	/* #6 */
+	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
+	mbox_cmd->return_code =
+		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
+
+	if (mbox_cmd->return_code != 0) {
+		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
+		return 0;
+	}
+
+	/* #7 */
+	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
+	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
+
+	/* #8 */
+	if (out_len && mbox_cmd->payload_out)
+		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
+
+	mbox_cmd->size_out = out_len;
+
+	return 0;
+}
+
+/**
+ * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
+ * @cxlm: The memory device to gain access to.
+ *
+ * Context: Any context. Takes the mbox_lock.
+ * Return: 0 if exclusive access was acquired.
+ */
+static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
+{
+	struct device *dev = &cxlm->pdev->dev;
+	int rc = -EBUSY;
+	u64 md_status;
+
+	mutex_lock_io(&cxlm->mbox_mutex);
+
+	/*
+	 * XXX: There is some amount of ambiguity in the 2.0 version of the spec
+	 * around the mailbox interface ready (8.2.8.5.1.1).  The purpose of the
+	 * bit is to allow firmware running on the device to notify the driver
+	 * that it's ready to receive commands. It is unclear if the bit needs
+	 * to be read for each mailbox transaction, i.e. the firmware can switch
+	 * it on and off as needed. Second, there is no defined timeout for
+	 * mailbox ready, like there is for the doorbell interface.
+	 *
+	 * Assumptions:
+	 * 1. The firmware might toggle the Mailbox Interface Ready bit, check
+	 *    it for every command.
+	 *
+	 * 2. If the doorbell is clear, the firmware should have first set the
+	 *    Mailbox Interface Ready bit. Therefore, waiting for the doorbell
+	 *    to be ready is sufficient.
+	 */
+	rc = cxl_mem_wait_for_doorbell(cxlm);
+	if (rc) {
+		dev_warn(dev, "Mailbox interface not ready\n");
+		goto out;
+	}
+
+	md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET);
+	if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) {
+		dev_err(dev,
+			"mbox: reported doorbell ready, but not mbox ready\n");
+		goto out;
+	}
+
+	/*
+	 * Hardware shouldn't allow a ready status but also have failure bits
+	 * set. Spit out an error, this should be a bug report
+	 */
+	rc = -EFAULT;
+	if (md_status & CXLMDEV_DEV_FATAL) {
+		dev_err(dev, "mbox: reported ready, but fatal\n");
+		goto out;
+	}
+	if (md_status & CXLMDEV_FW_HALT) {
+		dev_err(dev, "mbox: reported ready, but halted\n");
+		goto out;
+	}
+	if (CXLMDEV_RESET_NEEDED(md_status)) {
+		dev_err(dev, "mbox: reported ready, but reset needed\n");
+		goto out;
+	}
+
+	/* with lock held */
+	return 0;
+
+out:
+	mutex_unlock(&cxlm->mbox_mutex);
+	return rc;
+}
+
+/**
+ * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
+ * @cxlm: The CXL memory device to communicate with.
+ *
+ * Context: Any context. Expects mbox_lock to be held.
+ */
+static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
+{
+	mutex_unlock(&cxlm->mbox_mutex);
+}
+
+/**
+ * cxl_mem_setup_regs() - Setup necessary MMIO.
+ * @cxlm: The CXL memory device to communicate with.
+ *
+ * Return: 0 if all necessary registers mapped.
+ *
+ * A memory device is required by spec to implement a certain set of MMIO
+ * regions. The purpose of this function is to enumerate and map those
+ * registers.
+ */
+static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
+{
+	struct device *dev = &cxlm->pdev->dev;
+	int cap, cap_count;
+	u64 cap_array;
+
+	cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET);
+	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
+	    CXLDEV_CAP_ARRAY_CAP_ID)
+		return -ENODEV;
+
+	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
+
+	for (cap = 1; cap <= cap_count; cap++) {
+		void __iomem *register_block;
+		u32 offset;
+		u16 cap_id;
+
+		cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff;
+		offset = readl(cxlm->regs + cap * 0x10 + 0x4);
+		register_block = cxlm->regs + offset;
+
+		switch (cap_id) {
+		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
+			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
+			cxlm->status_regs = register_block;
+			break;
+		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
+			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
+			cxlm->mbox_regs = register_block;
+			break;
+		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
+			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
+			break;
+		case CXLDEV_CAP_CAP_ID_MEMDEV:
+			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
+			cxlm->memdev_regs = register_block;
+			break;
+		default:
+			dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset);
+			break;
+		}
+	}
+
+	if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) {
+		dev_err(dev, "registers not found: %s%s%s\n",
+			!cxlm->status_regs ? "status " : "",
+			!cxlm->mbox_regs ? "mbox " : "",
+			!cxlm->memdev_regs ? "memdev" : "");
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
+{
+	const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET);
+
+	cxlm->payload_size =
+		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
+
+	/*
+	 * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register
+	 *
+	 * If the size is too small, mandatory commands will not work and so
+	 * there's no point in going forward. If the size is too large, there's
+	 * no harm in soft limiting it.
+	 */
+	cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
+	if (cxlm->payload_size < 256) {
+		dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
+			cxlm->payload_size);
+		return -ENXIO;
+	}
+
+	dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
+		cxlm->payload_size);
+
+	return 0;
+}
+
+static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
+				      u32 reg_hi)
+{
+	struct device *dev = &pdev->dev;
+	struct cxl_mem *cxlm;
+	void __iomem *regs;
+	u64 offset;
+	u8 bar;
+	int rc;
+
+	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
+	if (!cxlm) {
+		dev_err(dev, "No memory available\n");
+		return NULL;
+	}
+
+	offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK);
+	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
+
+	/* Basic sanity check that BAR is big enough */
+	if (pci_resource_len(pdev, bar) < offset) {
+		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
+			&pdev->resource[bar], (unsigned long long)offset);
+		return NULL;
+	}
+
+	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
+	if (rc != 0) {
+		dev_err(dev, "failed to map registers\n");
+		return NULL;
+	}
+	regs = pcim_iomap_table(pdev)[bar];
+
+	mutex_init(&cxlm->mbox_mutex);
+	cxlm->pdev = pdev;
+	cxlm->regs = regs + offset;
+
+	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
+	return cxlm;
+}
 
 static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
 {
@@ -28,10 +423,85 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
 	return 0;
 }
 
+/**
+ * cxl_mem_identify() - Send the IDENTIFY command to the device.
+ * @cxlm: The device to identify.
+ *
+ * Return: 0 if identify was executed successfully.
+ *
+ * This will dispatch the identify command to the device and on success populate
+ * structures to be exported to sysfs.
+ */
+static int cxl_mem_identify(struct cxl_mem *cxlm)
+{
+	struct cxl_mbox_identify {
+		char fw_revision[0x10];
+		__le64 total_capacity;
+		__le64 volatile_capacity;
+		__le64 persistent_capacity;
+		__le64 partition_align;
+		__le16 info_event_log_size;
+		__le16 warning_event_log_size;
+		__le16 failure_event_log_size;
+		__le16 fatal_event_log_size;
+		__le32 lsa_size;
+		u8 poison_list_max_mer[3];
+		__le16 inject_poison_limit;
+		u8 poison_caps;
+		u8 qos_telemetry_caps;
+	} __packed id;
+	struct mbox_cmd mbox_cmd = {
+		.opcode = CXL_MBOX_OP_IDENTIFY,
+		.payload_out = &id,
+		.size_in = 0,
+	};
+	int rc;
+
+	/* Retrieve initial device memory map */
+	rc = cxl_mem_mbox_get(cxlm);
+	if (rc)
+		return rc;
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
+	cxl_mem_mbox_put(cxlm);
+	if (rc)
+		return rc;
+
+	/* TODO: Handle retry or reset responses from firmware. */
+	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
+		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
+			mbox_cmd.return_code);
+		return -ENXIO;
+	}
+
+	if (mbox_cmd.size_out != sizeof(id))
+		return -ENXIO;
+
+	/*
+	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
+	 * For now, only the capacity is exported in sysfs
+	 */
+	cxlm->ram.range.start = 0;
+	cxlm->ram.range.end = le64_to_cpu(id.volatile_capacity) - 1;
+
+	cxlm->pmem.range.start = 0;
+	cxlm->pmem.range.end = le64_to_cpu(id.persistent_capacity) - 1;
+
+	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
+
+	return rc;
+}
+
 static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
-	int regloc;
+	struct cxl_mem *cxlm;
+	int rc, regloc, i;
+	u32 regloc_size;
+
+	rc = pcim_enable_device(pdev);
+	if (rc)
+		return rc;
 
 	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
 	if (!regloc) {
@@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		return -ENXIO;
 	}
 
-	return 0;
+	/* Get the size of the Register Locator DVSEC */
+	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
+	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
+
+	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
+
+	rc = -ENXIO;
+	for (i = regloc; i < regloc + regloc_size; i += 8) {
+		u32 reg_lo, reg_hi;
+		u8 reg_type;
+
+		/* "register low and high" contain other bits */
+		pci_read_config_dword(pdev, i, &reg_lo);
+		pci_read_config_dword(pdev, i + 4, &reg_hi);
+
+		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
+
+		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
+			rc = 0;
+			cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
+			if (!cxlm)
+				rc = -ENODEV;
+			break;
+		}
+	}
+
+	if (rc)
+		return rc;
+
+	rc = cxl_mem_setup_regs(cxlm);
+	if (rc)
+		return rc;
+
+	rc = cxl_mem_setup_mailbox(cxlm);
+	if (rc)
+		return rc;
+
+	return cxl_mem_identify(cxlm);
 }
 
 static const struct pci_device_id cxl_mem_pci_tbl[] = {
diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
index f135b9f7bb21..ffcbc13d7b5b 100644
--- a/drivers/cxl/pci.h
+++ b/drivers/cxl/pci.h
@@ -14,5 +14,18 @@
 #define PCI_DVSEC_ID_CXL		0x0
 
 #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
+#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
+
+/* BAR Indicator Register (BIR) */
+#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
+
+/* Register Block Identifier (RBI) */
+#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
+#define CXL_REGLOC_RBI_EMPTY 0
+#define CXL_REGLOC_RBI_COMPONENT 1
+#define CXL_REGLOC_RBI_VIRT 2
+#define CXL_REGLOC_RBI_MEMDEV 3
+
+#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
 
 #endif /* __CXL_PCI_H__ */
diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
index e709ae8235e7..6267ca9ae683 100644
--- a/include/uapi/linux/pci_regs.h
+++ b/include/uapi/linux/pci_regs.h
@@ -1080,6 +1080,7 @@
 
 /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
 #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
+#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000
 #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
 
 /* Data Link Feature */
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 3/8] cxl/mem: Register CXL memX devices
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
  2021-02-10  0:02 ` [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints Ben Widawsky
  2021-02-10  0:02 ` [PATCH v2 2/8] cxl/mem: Find device capabilities Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  2021-02-10 18:17   ` Jonathan Cameron
  2021-02-10  0:02 ` [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface Ben Widawsky
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

From: Dan Williams <dan.j.williams@intel.com>

Create the /sys/bus/cxl hierarchy to enumerate:

* Memory Devices (per-endpoint control devices)

* Memory Address Space Devices (platform address ranges with
  interleaving, performance, and persistence attributes)

* Memory Regions (active provisioned memory from an address space device
  that is in use as System RAM or delegated to libnvdimm as Persistent
  Memory regions).

For now, only the per-endpoint control devices are registered on the
'cxl' bus. However, going forward it will provide a mechanism to
coordinate cross-device interleave.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 Documentation/ABI/testing/sysfs-bus-cxl       |  26 ++
 .../driver-api/cxl/memory-devices.rst         |  17 +
 drivers/cxl/Makefile                          |   3 +
 drivers/cxl/bus.c                             |  29 ++
 drivers/cxl/cxl.h                             |   4 +
 drivers/cxl/mem.c                             | 301 +++++++++++++++++-
 6 files changed, 378 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-bus-cxl
 create mode 100644 drivers/cxl/bus.c

diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
new file mode 100644
index 000000000000..2fe7490ad6a8
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -0,0 +1,26 @@
+What:		/sys/bus/cxl/devices/memX/firmware_version
+Date:		December, 2020
+KernelVersion:	v5.12
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) "FW Revision" string as reported by the Identify
+		Memory Device Output Payload in the CXL-2.0
+		specification.
+
+What:		/sys/bus/cxl/devices/memX/ram/size
+Date:		December, 2020
+KernelVersion:	v5.12
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) "Volatile Only Capacity" as bytes. Represents the
+		identically named field in the Identify Memory Device Output
+		Payload in the CXL-2.0 specification.
+
+What:		/sys/bus/cxl/devices/memX/pmem/size
+Date:		December, 2020
+KernelVersion:	v5.12
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) "Persistent Only Capacity" as bytes. Represents the
+		identically named field in the Identify Memory Device Output
+		Payload in the CXL-2.0 specification.
diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index 43177e700d62..1bad466f9167 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -27,3 +27,20 @@ CXL Memory Device
 
 .. kernel-doc:: drivers/cxl/mem.c
    :internal:
+
+CXL Bus
+-------
+.. kernel-doc:: drivers/cxl/bus.c
+   :doc: cxl bus
+
+External Interfaces
+===================
+
+CXL IOCTL Interface
+-------------------
+
+.. kernel-doc:: include/uapi/linux/cxl_mem.h
+   :doc: UAPI
+
+.. kernel-doc:: include/uapi/linux/cxl_mem.h
+   :internal:
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
index 4a30f7c3fc4a..a314a1891f4d 100644
--- a/drivers/cxl/Makefile
+++ b/drivers/cxl/Makefile
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_CXL_BUS) += cxl_bus.o
 obj-$(CONFIG_CXL_MEM) += cxl_mem.o
 
+ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL
+cxl_bus-y := bus.o
 cxl_mem-y := mem.o
diff --git a/drivers/cxl/bus.c b/drivers/cxl/bus.c
new file mode 100644
index 000000000000..58f74796d525
--- /dev/null
+++ b/drivers/cxl/bus.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#include <linux/device.h>
+#include <linux/module.h>
+
+/**
+ * DOC: cxl bus
+ *
+ * The CXL bus provides namespace for control devices and a rendezvous
+ * point for cross-device interleave coordination.
+ */
+struct bus_type cxl_bus_type = {
+	.name = "cxl",
+};
+EXPORT_SYMBOL_GPL(cxl_bus_type);
+
+static __init int cxl_bus_init(void)
+{
+	return bus_register(&cxl_bus_type);
+}
+
+static void cxl_bus_exit(void)
+{
+	bus_unregister(&cxl_bus_type);
+}
+
+module_init(cxl_bus_init);
+module_exit(cxl_bus_exit);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 745f5e0bfce3..b3c56fa6e126 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -3,6 +3,7 @@
 
 #ifndef __CXL_H__
 #define __CXL_H__
+#include <linux/range.h>
 
 #include <linux/bitfield.h>
 #include <linux/bitops.h>
@@ -55,6 +56,7 @@
 	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
 	 CXLMDEV_RESET_NEEDED_NOT)
 
+struct cxl_memdev;
 /**
  * struct cxl_mem - A CXL memory device
  * @pdev: The PCI device associated with this CXL device.
@@ -72,6 +74,7 @@
 struct cxl_mem {
 	struct pci_dev *pdev;
 	void __iomem *regs;
+	struct cxl_memdev *cxlmd;
 
 	void __iomem *status_regs;
 	void __iomem *mbox_regs;
@@ -90,4 +93,5 @@ struct cxl_mem {
 	} ram;
 };
 
+extern struct bus_type cxl_bus_type;
 #endif /* __CXL_H__ */
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 0a868a15badc..8bbd2495e237 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -1,11 +1,36 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
 #include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/cdev.h>
+#include <linux/idr.h>
 #include <linux/pci.h>
 #include <linux/io.h>
 #include "pci.h"
 #include "cxl.h"
 
+/**
+ * DOC: cxl mem
+ *
+ * This implements a CXL memory device ("type-3") as it is defined by the
+ * Compute Express Link specification.
+ *
+ * The driver has several responsibilities, mainly:
+ *  - Create the memX device and register on the CXL bus.
+ *  - Enumerate device's register interface and map them.
+ *  - Probe the device attributes to establish sysfs interface.
+ *  - Provide an IOCTL interface to userspace to communicate with the device for
+ *    things like firmware update.
+ *  - Support management of interleave sets.
+ *  - Handle and manage error conditions.
+ */
+
+/*
+ * An entire PCI topology full of devices should be enough for any
+ * config
+ */
+#define CXL_MEM_MAX_DEVS 65536
+
 #define cxl_doorbell_busy(cxlm)                                                \
 	(readl((cxlm)->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET) &                  \
 	 CXLDEV_MBOX_CTRL_DOORBELL)
@@ -44,6 +69,27 @@ struct mbox_cmd {
 #define CXL_MBOX_SUCCESS 0
 };
 
+/**
+ * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
+ * @dev: driver core device object
+ * @cdev: char dev core object for ioctl operations
+ * @cxlm: pointer to the parent device driver data
+ * @ops_active: active user of @cxlm in ops handlers
+ * @ops_dead: completion when all @cxlm ops users have exited
+ * @id: id number of this memdev instance.
+ */
+struct cxl_memdev {
+	struct device dev;
+	struct cdev cdev;
+	struct cxl_mem *cxlm;
+	struct percpu_ref ops_active;
+	struct completion ops_dead;
+	int id;
+};
+
+static int cxl_mem_major;
+static DEFINE_IDA(cxl_memdev_ida);
+
 static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 {
 	const unsigned long start = jiffies;
@@ -267,6 +313,33 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
 	mutex_unlock(&cxlm->mbox_mutex);
 }
 
+static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
+			     unsigned long arg)
+{
+	struct cxl_memdev *cxlmd;
+	struct inode *inode;
+	int rc = -ENOTTY;
+
+	inode = file_inode(file);
+	cxlmd = container_of(inode->i_cdev, typeof(*cxlmd), cdev);
+
+	if (!percpu_ref_tryget_live(&cxlmd->ops_active))
+		return -ENXIO;
+
+	/* TODO: ioctl body */
+
+	percpu_ref_put(&cxlmd->ops_active);
+
+	return rc;
+}
+
+static const struct file_operations cxl_memdev_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = cxl_memdev_ioctl,
+	.compat_ioctl = compat_ptr_ioctl,
+	.llseek = noop_llseek,
+};
+
 /**
  * cxl_mem_setup_regs() - Setup necessary MMIO.
  * @cxlm: The CXL memory device to communicate with.
@@ -423,6 +496,197 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
 	return 0;
 }
 
+static struct cxl_memdev *to_cxl_memdev(struct device *dev)
+{
+	return container_of(dev, struct cxl_memdev, dev);
+}
+
+static void cxl_memdev_release(struct device *dev)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+
+	percpu_ref_exit(&cxlmd->ops_active);
+	ida_free(&cxl_memdev_ida, cxlmd->id);
+	kfree(cxlmd);
+}
+
+static char *cxl_memdev_devnode(struct device *dev, umode_t *mode, kuid_t *uid,
+				kgid_t *gid)
+{
+	return kasprintf(GFP_KERNEL, "cxl/%s", dev_name(dev));
+}
+
+static ssize_t firmware_version_show(struct device *dev,
+				     struct device_attribute *attr, char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+
+	return sprintf(buf, "%.16s\n", cxlm->firmware_version);
+}
+static DEVICE_ATTR_RO(firmware_version);
+
+static ssize_t payload_max_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+
+	return sprintf(buf, "%zu\n", cxlm->payload_size);
+}
+static DEVICE_ATTR_RO(payload_max);
+
+static ssize_t ram_size_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	unsigned long long len = range_len(&cxlm->ram.range);
+
+	return sprintf(buf, "%#llx\n", len);
+}
+
+static struct device_attribute dev_attr_ram_size =
+	__ATTR(size, 0444, ram_size_show, NULL);
+
+static ssize_t pmem_size_show(struct device *dev, struct device_attribute *attr,
+			      char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	unsigned long long len = range_len(&cxlm->pmem.range);
+
+	return sprintf(buf, "%#llx\n", len);
+}
+
+static struct device_attribute dev_attr_pmem_size =
+	__ATTR(size, 0444, pmem_size_show, NULL);
+
+static struct attribute *cxl_memdev_attributes[] = {
+	&dev_attr_firmware_version.attr,
+	&dev_attr_payload_max.attr,
+	NULL,
+};
+
+static struct attribute *cxl_memdev_pmem_attributes[] = {
+	&dev_attr_pmem_size.attr,
+	NULL,
+};
+
+static struct attribute *cxl_memdev_ram_attributes[] = {
+	&dev_attr_ram_size.attr,
+	NULL,
+};
+
+static struct attribute_group cxl_memdev_attribute_group = {
+	.attrs = cxl_memdev_attributes,
+};
+
+static struct attribute_group cxl_memdev_ram_attribute_group = {
+	.name = "ram",
+	.attrs = cxl_memdev_ram_attributes,
+};
+
+static struct attribute_group cxl_memdev_pmem_attribute_group = {
+	.name = "pmem",
+	.attrs = cxl_memdev_pmem_attributes,
+};
+
+static const struct attribute_group *cxl_memdev_attribute_groups[] = {
+	&cxl_memdev_attribute_group,
+	&cxl_memdev_ram_attribute_group,
+	&cxl_memdev_pmem_attribute_group,
+	NULL,
+};
+
+static const struct device_type cxl_memdev_type = {
+	.name = "cxl_memdev",
+	.release = cxl_memdev_release,
+	.devnode = cxl_memdev_devnode,
+	.groups = cxl_memdev_attribute_groups,
+};
+
+static void cxlmdev_unregister(void *_cxlmd)
+{
+	struct cxl_memdev *cxlmd = _cxlmd;
+	struct device *dev = &cxlmd->dev;
+
+	percpu_ref_kill(&cxlmd->ops_active);
+	cdev_device_del(&cxlmd->cdev, dev);
+	wait_for_completion(&cxlmd->ops_dead);
+	cxlmd->cxlm = NULL;
+	put_device(dev);
+}
+
+static void cxlmdev_ops_active_release(struct percpu_ref *ref)
+{
+	struct cxl_memdev *cxlmd =
+		container_of(ref, typeof(*cxlmd), ops_active);
+
+	complete(&cxlmd->ops_dead);
+}
+
+static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
+{
+	struct pci_dev *pdev = cxlm->pdev;
+	struct cxl_memdev *cxlmd;
+	struct device *dev;
+	struct cdev *cdev;
+	int rc;
+
+	cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
+	if (!cxlmd)
+		return -ENOMEM;
+	init_completion(&cxlmd->ops_dead);
+
+	/*
+	 * @cxlm is deallocated when the driver unbinds so operations
+	 * that are using it need to hold a live reference.
+	 */
+	cxlmd->cxlm = cxlm;
+	rc = percpu_ref_init(&cxlmd->ops_active, cxlmdev_ops_active_release, 0,
+			     GFP_KERNEL);
+	if (rc)
+		goto err_ref;
+
+	rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL);
+	if (rc < 0)
+		goto err_id;
+	cxlmd->id = rc;
+
+	dev = &cxlmd->dev;
+	device_initialize(dev);
+	dev->parent = &pdev->dev;
+	dev->bus = &cxl_bus_type;
+	dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
+	dev->type = &cxl_memdev_type;
+	dev_set_name(dev, "mem%d", cxlmd->id);
+
+	cdev = &cxlmd->cdev;
+	cdev_init(cdev, &cxl_memdev_fops);
+
+	rc = cdev_device_add(cdev, dev);
+	if (rc)
+		goto err_add;
+
+	return devm_add_action_or_reset(dev->parent, cxlmdev_unregister, cxlmd);
+
+err_add:
+	ida_free(&cxl_memdev_ida, cxlmd->id);
+err_id:
+	/*
+	 * Theoretically userspace could have already entered the fops,
+	 * so flush ops_active.
+	 */
+	percpu_ref_kill(&cxlmd->ops_active);
+	wait_for_completion(&cxlmd->ops_dead);
+	percpu_ref_exit(&cxlmd->ops_active);
+err_ref:
+	kfree(cxlmd);
+
+	return rc;
+}
+
 /**
  * cxl_mem_identify() - Send the IDENTIFY command to the device.
  * @cxlm: The device to identify.
@@ -546,7 +810,11 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
-	return cxl_mem_identify(cxlm);
+	rc = cxl_mem_identify(cxlm);
+	if (rc)
+		return rc;
+
+	return cxl_mem_add_memdev(cxlm);
 }
 
 static const struct pci_device_id cxl_mem_pci_tbl[] = {
@@ -566,5 +834,34 @@ static struct pci_driver cxl_mem_driver = {
 	},
 };
 
+static __init int cxl_mem_init(void)
+{
+	int rc;
+	dev_t devt;
+
+	rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl");
+	if (rc)
+		return rc;
+
+	cxl_mem_major = MAJOR(devt);
+
+	rc = pci_register_driver(&cxl_mem_driver);
+	if (rc) {
+		unregister_chrdev_region(MKDEV(cxl_mem_major, 0),
+					 CXL_MEM_MAX_DEVS);
+		return rc;
+	}
+
+	return 0;
+}
+
+static __exit void cxl_mem_exit(void)
+{
+	pci_unregister_driver(&cxl_mem_driver);
+	unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
+}
+
 MODULE_LICENSE("GPL v2");
-module_pci_driver(cxl_mem_driver);
+module_init(cxl_mem_init);
+module_exit(cxl_mem_exit);
+MODULE_IMPORT_NS(CXL);
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
                   ` (2 preceding siblings ...)
  2021-02-10  0:02 ` [PATCH v2 3/8] cxl/mem: Register CXL memX devices Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  2021-02-10 18:45   ` Jonathan Cameron
  2021-02-14 16:30   ` Al Viro
  2021-02-10  0:02 ` [PATCH v2 5/8] cxl/mem: Add a "RAW" send command Ben Widawsky
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

Add a straightforward IOCTL that provides a mechanism for userspace to
query the supported memory device commands. CXL commands as they appear
to userspace are described as part of the UAPI kerneldoc. The command
list returned via this IOCTL will contain the full set of commands that
the driver supports; however, some of those commands may not be
available for use by userspace.

Memory device commands first appear in the CXL 2.0 specification. They
are submitted through a mailbox mechanism that is also specified in the
CXL 2.0 specification.

The send command allows userspace to issue mailbox commands directly to
the hardware. The list of available commands to send is the output of
the query command. The driver verifies basic properties of the command
and may inspect the input (or output) payload to determine whether
the command is allowed (or might taint the kernel).

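For illustration, a minimal userspace sketch of the intended calling
convention (not part of the patch): the device path, buffer sizes, and error
handling below are placeholders, and it assumes the new <linux/cxl_mem.h>
header is directly consumable from userspace.

  /* query_identify.c - illustrative sketch only */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/cxl_mem.h>

  int main(void)
  {
  	struct cxl_mem_query_commands probe = { .n_commands = 0 };
  	struct cxl_mem_query_commands *query;
  	unsigned char id_out[0x43];
  	struct cxl_send_command send = {
  		.id = CXL_MEM_COMMAND_ID_IDENTIFY,
  		.out.size = sizeof(id_out),
  		.out.payload = (uint64_t)(uintptr_t)id_out,
  	};
  	uint32_t i;
  	int fd;

  	fd = open("/dev/cxl/mem0", O_RDWR);
  	if (fd < 0)
  		return 1;

  	/* Pass 1: n_commands == 0 only asks for the total command count */
  	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, &probe) < 0)
  		return 1;

  	/* Pass 2: fetch that many cxl_command_info entries */
  	query = calloc(1, sizeof(*query) +
  			  probe.n_commands * sizeof(query->commands[0]));
  	if (!query)
  		return 1;
  	query->n_commands = probe.n_commands;
  	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, query) < 0)
  		return 1;

  	for (i = 0; i < query->n_commands; i++)
  		printf("cmd %u: size_in %d size_out %d flags %#x\n",
  		       query->commands[i].id, query->commands[i].size_in,
  		       query->commands[i].size_out, query->commands[i].flags);

  	/* send.retval carries the device's own mailbox return code */
  	if (ioctl(fd, CXL_MEM_SEND_COMMAND, &send) == 0)
  		printf("IDENTIFY retval %u, %d bytes out\n",
  		       send.retval, send.out.size);

  	close(fd);
  	return 0;
  }
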
Reported-by: kernel test robot <lkp@intel.com> # bug in earlier revision
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
 .clang-format                                 |   1 +
 .../userspace-api/ioctl/ioctl-number.rst      |   1 +
 drivers/cxl/mem.c                             | 291 +++++++++++++++++-
 include/uapi/linux/cxl_mem.h                  | 152 +++++++++
 4 files changed, 443 insertions(+), 2 deletions(-)
 create mode 100644 include/uapi/linux/cxl_mem.h

diff --git a/.clang-format b/.clang-format
index 10dc5a9a61b3..3f11c8901b43 100644
--- a/.clang-format
+++ b/.clang-format
@@ -109,6 +109,7 @@ ForEachMacros:
   - 'css_for_each_child'
   - 'css_for_each_descendant_post'
   - 'css_for_each_descendant_pre'
+  - 'cxl_for_each_cmd'
   - 'device_for_each_child_node'
   - 'dma_fence_chain_for_each'
   - 'do_for_each_ftrace_op'
diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst
index a4c75a28c839..6eb8e634664d 100644
--- a/Documentation/userspace-api/ioctl/ioctl-number.rst
+++ b/Documentation/userspace-api/ioctl/ioctl-number.rst
@@ -352,6 +352,7 @@ Code  Seq#    Include File                                           Comments
                                                                      <mailto:michael.klein@puffin.lb.shuttle.de>
 0xCC  00-0F  drivers/misc/ibmvmc.h                                   pseries VMC driver
 0xCD  01     linux/reiserfs_fs.h
+0xCE  01-02  uapi/linux/cxl_mem.h                                    Compute Express Link Memory Devices
 0xCF  02     fs/cifs/ioctl.c
 0xDB  00-0F  drivers/char/mwave/mwavepub.h
 0xDD  00-3F                                                          ZFCP device driver see drivers/s390/scsi/
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 8bbd2495e237..ce65630bb75e 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#include <uapi/linux/cxl_mem.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/cdev.h>
@@ -39,6 +40,7 @@
 #define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
 
 enum opcode {
+	CXL_MBOX_OP_INVALID		= 0x0000,
 	CXL_MBOX_OP_IDENTIFY		= 0x4000,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
@@ -90,9 +92,57 @@ struct cxl_memdev {
 static int cxl_mem_major;
 static DEFINE_IDA(cxl_memdev_ida);
 
+/**
+ * struct cxl_mem_command - Driver representation of a memory device command
+ * @info: Command information as it exists for the UAPI
+ * @opcode: The actual bits used for the mailbox protocol
+ * @flags: Set of flags reflecting the state of the command.
+ *
+ *  * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is
+ *    only used internally by the driver for sanity checking.
+ *
+ * The cxl_mem_command is the driver's internal representation of commands that
+ * are supported by the driver. Some of these commands may not be supported by
+ * the hardware. The driver will use @info to validate the fields passed in by
+ * the user then submit the @opcode to the hardware.
+ *
+ * See struct cxl_command_info.
+ */
+struct cxl_mem_command {
+	struct cxl_command_info info;
+	enum opcode opcode;
+};
+
+#define CXL_CMD(_id, _flags, sin, sout)                                        \
+	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
+	.info =	{                                                              \
+			.id = CXL_MEM_COMMAND_ID_##_id,                        \
+			.flags = CXL_MEM_COMMAND_FLAG_##_flags,                \
+			.size_in = sin,                                        \
+			.size_out = sout,                                      \
+		},                                                             \
+	.opcode = CXL_MBOX_OP_##_id,                                           \
+	}
+
+/*
+ * This table defines the supported mailbox commands for the driver. This table
+ * is made up of a UAPI structure. Non-negative values as parameters in the
+ * table will be validated against the user's input. For example, if size_in is
+ * 0, and the user passed in 1, it is an error.
+ */
+static struct cxl_mem_command mem_commands[] = {
+	CXL_CMD(IDENTIFY, NONE, 0, 0x43),
+};
+
+#define cxl_for_each_cmd(cmd)                                                  \
+	for ((cmd) = &mem_commands[0];                                         \
+	     ((cmd) - mem_commands) < ARRAY_SIZE(mem_commands); (cmd)++)
+
+#define cxl_cmd_count ARRAY_SIZE(mem_commands)
+
 static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 {
-	const unsigned long start = jiffies;
+	unsigned long start = jiffies;
 	unsigned long end = start;
 
 	while (cxl_doorbell_busy(cxlm)) {
@@ -313,6 +363,243 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
 	mutex_unlock(&cxlm->mbox_mutex);
 }
 
+/**
+ * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
+ * @cxlmd: The CXL memory device to communicate with.
+ * @cmd: The validated command.
+ * @in_payload: Pointer to userspace's input payload.
+ * @out_payload: Pointer to userspace's output payload.
+ * @s: The command submitted by userspace. Has output fields.
+ *
+ * Return:
+ *  * %0	- Mailbox transaction succeeded.
+ *  * %-EFAULT	- Something happened with copy_to/from_user.
+ *  * %-ENOMEM  - Couldn't allocate a bounce buffer.
+ *  * %-EINTR	- Mailbox acquisition interrupted.
+ *  * %-E2BIG   - Output payload would overrun user's buffer.
+ *
+ * Creates the appropriate mailbox command on behalf of a userspace request.
+ * Return value, size, and output payload are all copied out to @s. The
+ * parameters for the command must be validated before calling this function.
+ *
+ * A 0 return code indicates the mailbox transaction completed successfully,
+ * not that the device command itself succeeded. IOW, @s->retval should always
+ * be checked to determine the actual result.
+ */
+static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
+					const struct cxl_mem_command *cmd,
+					u64 in_payload, u64 out_payload,
+					struct cxl_send_command __user *s)
+{
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	struct device *dev = &cxlmd->dev;
+	struct mbox_cmd mbox_cmd = {
+		.opcode = cmd->opcode,
+		.size_in = cmd->info.size_in,
+	};
+	s32 user_size_out;
+	int rc;
+
+	if (get_user(user_size_out, &s->out.size))
+		return -EFAULT;
+
+	if (cmd->info.size_out > 0) /* fixed size command */
+		mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL);
+	else if (cmd->info.size_out < 0) /* variable */
+		mbox_cmd.payload_out = kvzalloc(cxlm->payload_size, GFP_KERNEL);
+
+	if (cmd->info.size_in) {
+		mbox_cmd.payload_in = kvzalloc(cmd->info.size_in, GFP_KERNEL);
+		if (!mbox_cmd.payload_in) {
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		if (copy_from_user(mbox_cmd.payload_in,
+				   u64_to_user_ptr(in_payload),
+				   cmd->info.size_in)) {
+			rc = -EFAULT;
+			goto out;
+		}
+	}
+
+	rc = cxl_mem_mbox_get(cxlm);
+	if (rc)
+		goto out;
+
+	dev_dbg(dev,
+		"Submitting %s command for user\n"
+		"\topcode: %x\n"
+		"\tsize: %ub\n",
+		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
+		cmd->info.size_in);
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
+	cxl_mem_mbox_put(cxlm);
+	if (rc)
+		goto out;
+
+	rc = put_user(mbox_cmd.return_code, &s->retval);
+	if (rc)
+		goto out;
+
+	if (user_size_out < mbox_cmd.size_out) {
+		rc = -E2BIG;
+		goto out;
+	}
+
+	if (mbox_cmd.size_out) {
+		if (copy_to_user(u64_to_user_ptr(out_payload),
+				 mbox_cmd.payload_out, mbox_cmd.size_out)) {
+			rc = -EFAULT;
+			goto out;
+		}
+	}
+
+	rc = put_user(mbox_cmd.size_out, &s->out.size);
+
+out:
+	kvfree(mbox_cmd.payload_in);
+	kvfree(mbox_cmd.payload_out);
+	return rc;
+}
+
+/**
+ * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
+ * @cxlm: &struct cxl_mem device whose mailbox will be used.
+ * @send_cmd: &struct cxl_send_command copied in from userspace.
+ * @out_cmd: Sanitized and populated &struct cxl_mem_command.
+ *
+ * Return:
+ *  * %0	- @out_cmd is ready to send.
+ *  * %-ENOTTY	- Invalid command specified.
+ *  * %-EINVAL	- Reserved fields or invalid values were used.
+ *  * %-EPERM	- Attempted to use a protected command.
+ *  * %-ENOMEM	- Input or output buffer wasn't sized properly.
+ *
+ * The result of this command is a fully validated command in @out_cmd that is
+ * safe to send to the hardware.
+ *
+ * See handle_mailbox_cmd_from_user()
+ */
+static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
+				      const struct cxl_send_command *send_cmd,
+				      struct cxl_mem_command *out_cmd)
+{
+	const struct cxl_command_info *info;
+	struct cxl_mem_command *c;
+
+	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
+		return -ENOTTY;
+
+	/*
+	 * The user can never specify an input payload larger than
+	 * hardware supports, but output can be arbitrarily large,
+	 * simply write out as much data as the hardware provides.
+	 */
+	if (send_cmd->in.size > cxlm->payload_size)
+		return -EINVAL;
+
+	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
+		return -EINVAL;
+
+	if (send_cmd->rsvd)
+		return -EINVAL;
+
+	if (send_cmd->in.rsvd || send_cmd->out.rsvd)
+		return -EINVAL;
+
+	/* Convert user's command into the internal representation */
+	c = &mem_commands[send_cmd->id];
+	info = &c->info;
+
+	if (info->flags & CXL_MEM_COMMAND_FLAG_KERNEL)
+		return -EPERM;
+
+	/* Check the input buffer is the expected size */
+	if (info->size_in >= 0 && info->size_in != send_cmd->in.size)
+		return -ENOMEM;
+
+	/* Check the output buffer is at least large enough */
+	if (info->size_out >= 0 && send_cmd->out.size < info->size_out)
+		return -ENOMEM;
+
+	/* Setting a few const fields here... */
+	memcpy(out_cmd, c, sizeof(*c));
+	out_cmd->info.size_in = send_cmd->in.size;
+	out_cmd->info.size_out = send_cmd->out.size;
+
+	return 0;
+}
+
+static int cxl_query_cmd(struct cxl_memdev *cxlmd,
+			 struct cxl_mem_query_commands __user *q)
+{
+	struct device *dev = &cxlmd->dev;
+	struct cxl_mem_command *cmd;
+	u32 n_commands;
+	int j = 0;
+
+	dev_dbg(dev, "Query IOCTL\n");
+
+	if (get_user(n_commands, &q->n_commands))
+		return -EFAULT;
+
+	/* returns the total number if 0 elements are requested. */
+	if (n_commands == 0)
+		return put_user(cxl_cmd_count, &q->n_commands);
+
+	/*
+	 * otherwise, return min(n_commands, total commands) cxl_command_info
+	 * structures.
+	 */
+	cxl_for_each_cmd(cmd) {
+		const struct cxl_command_info *info = &cmd->info;
+
+		if (copy_to_user(&q->commands[j++], info, sizeof(*info)))
+			return -EFAULT;
+
+		if (j == n_commands)
+			break;
+	}
+
+	return 0;
+}
+
+static int cxl_send_cmd(struct cxl_memdev *cxlmd,
+			struct cxl_send_command __user *s)
+{
+	struct device *dev = &cxlmd->dev;
+	struct cxl_send_command send;
+	struct cxl_mem_command c;
+	int rc;
+
+	dev_dbg(dev, "Send IOCTL\n");
+
+	if (copy_from_user(&send, s, sizeof(send)))
+		return -EFAULT;
+
+	rc = cxl_validate_cmd_from_user(cxlmd->cxlm, &send, &c);
+	if (rc)
+		return rc;
+
+	return handle_mailbox_cmd_from_user(cxlmd, &c, send.in.payload,
+					    send.out.payload, s);
+}
+
+static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd,
+			       unsigned long arg)
+{
+	switch (cmd) {
+	case CXL_MEM_QUERY_COMMANDS:
+		return cxl_query_cmd(cxlmd, (void __user *)arg);
+	case CXL_MEM_SEND_COMMAND:
+		return cxl_send_cmd(cxlmd, (void __user *)arg);
+	default:
+		return -ENOTTY;
+	}
+}
+
 static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
 			     unsigned long arg)
 {
@@ -326,7 +613,7 @@ static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
 	if (!percpu_ref_tryget_live(&cxlmd->ops_active))
 		return -ENXIO;
 
-	/* TODO: ioctl body */
+	rc = __cxl_memdev_ioctl(cxlmd, cmd, arg);
 
 	percpu_ref_put(&cxlmd->ops_active);
 
diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
new file mode 100644
index 000000000000..f1f7e9f32ea5
--- /dev/null
+++ b/include/uapi/linux/cxl_mem.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * CXL IOCTLs for Memory Devices
+ */
+
+#ifndef _UAPI_CXL_MEM_H_
+#define _UAPI_CXL_MEM_H_
+
+#include <linux/types.h>
+
+/**
+ * DOC: UAPI
+ *
+ * Not all of the commands that the driver supports are always available for use
+ * by userspace. Userspace must check the results from the QUERY command in
+ * order to determine the live set of commands.
+ */
+
+#define CXL_MEM_QUERY_COMMANDS _IOR(0xCE, 1, struct cxl_mem_query_commands)
+#define CXL_MEM_SEND_COMMAND _IOWR(0xCE, 2, struct cxl_send_command)
+
+#define CXL_CMDS                                                          \
+	___C(INVALID, "Invalid Command"),                                 \
+	___C(IDENTIFY, "Identify Command"),                               \
+	___C(MAX, "Last command")
+
+#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
+enum { CXL_CMDS };
+
+#undef ___C
+#define ___C(a, b) { b }
+static const struct {
+	const char *name;
+} cxl_command_names[] = { CXL_CMDS };
+#undef ___C
+
+/**
+ * struct cxl_command_info - Command information returned from a query.
+ * @id: ID number for the command.
+ * @flags: Flags that specify command behavior.
+ *
+ *  * %CXL_MEM_COMMAND_FLAG_KERNEL: This command is reserved for exclusive
+ *    kernel use.
+ *  * %CXL_MEM_COMMAND_FLAG_MUTEX: This command may require coordination with
+ *    the kernel in order to complete successfully.
+ *
+ * @size_in: Expected input size, or -1 if variable length.
+ * @size_out: Expected output size, or -1 if variable length.
+ *
+ * Represents a single command that is supported by both the driver and the
+ * hardware. This is returned as part of an array from the query ioctl. The
+ * following would be a command named "foobar" that takes a variable length
+ * input and returns 0 bytes of output.
+ *
+ *  - @id = 10
+ *  - @flags = CXL_MEM_COMMAND_FLAG_MUTEX
+ *  - @size_in = -1
+ *  - @size_out = 0
+ *
+ * See struct cxl_mem_query_commands.
+ */
+struct cxl_command_info {
+	__u32 id;
+
+	__u32 flags;
+#define CXL_MEM_COMMAND_FLAG_NONE 0
+#define CXL_MEM_COMMAND_FLAG_KERNEL BIT(0)
+#define CXL_MEM_COMMAND_FLAG_MASK GENMASK(1, 0)
+
+	__s32 size_in;
+	__s32 size_out;
+};
+
+/**
+ * struct cxl_mem_query_commands - Query supported commands.
+ * @n_commands: In/out parameter. When @n_commands is > 0, the driver will
+ *		return min(num_supported_commands, n_commands). When @n_commands
+ *		is 0, driver will return the number of total supported commands.
+ * @rsvd: Reserved for future use.
+ * @commands: Output array of supported commands. This array must be allocated
+ *            by userspace to be at least min(num_supported_commands, @n_commands) entries.
+ *
+ * Allow userspace to query the available commands supported by both the driver
+ * and the hardware. Commands that aren't supported by either the driver or the
+ * hardware are not returned in the query.
+ *
+ * Examples:
+ *
+ *  - { .n_commands = 0 } // Get number of supported commands
+ *  - { .n_commands = 15, .commands = buf } // Return first 15 (or less)
+ *    supported commands
+ *
+ *  See struct cxl_command_info.
+ */
+struct cxl_mem_query_commands {
+	/*
+	 * Input: Number of commands to return (space allocated by user)
+	 * Output: Number of commands supported by the driver/hardware
+	 *
+	 * If n_commands is 0, kernel will only return number of commands and
+	 * not try to populate commands[], thus allowing userspace to know how
+	 * much space to allocate
+	 */
+	__u32 n_commands;
+	__u32 rsvd;
+
+	struct cxl_command_info __user commands[]; /* out: supported commands */
+};
+
+/**
+ * struct cxl_send_command - Send a command to a memory device.
+ * @id: The command to send to the memory device. This must be one of the
+ *	commands returned by the query command.
+ * @flags: Flags for the command (input).
+ * @rsvd: Must be zero.
+ * @retval: Return value from the memory device (output).
+ * @in.size: Size of the payload to provide to the device (input).
+ * @in.rsvd: Must be zero.
+ * @in.payload: Pointer to memory for payload input (little endian order).
+ * @out.size: Size of the payload received from the device (input/output). This
+ *	      field is filled in by userspace to let the driver know how much
+ *	      space was allocated for output. It is populated by the driver to
+ *	      let userspace know how large the output payload actually was.
+ * @out.rsvd: Must be zero.
+ * @out.payload: Pointer to memory for payload output (little endian order).
+ *
+ * Mechanism for userspace to send a command to the hardware for processing. The
+ * driver will do basic validation on the command sizes. In some cases even the
+ * payload may be introspected. Userspace is required to allocate large
+ * enough buffers for size_out which can be variable length in certain
+ * situations.
+ */
+struct cxl_send_command {
+	__u32 id;
+	__u32 flags;
+	__u32 rsvd;
+	__u32 retval;
+
+	struct {
+		__s32 size;
+		__u32 rsvd;
+		__u64 payload;
+	} in;
+
+	struct {
+		__s32 size;
+		__u32 rsvd;
+		__u64 payload;
+	} out;
+};
+
+#endif
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
                   ` (3 preceding siblings ...)
  2021-02-10  0:02 ` [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  2021-02-10 15:26   ` Ariel.Sibley
  2021-02-11 11:19   ` Jonathan Cameron
  2021-02-10  0:02 ` [PATCH v2 6/8] cxl/mem: Enable commands via CEL Ben Widawsky
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, Ariel Sibley

The CXL memory device send interface will have a number of supported
commands. The raw command is not such a command. Raw commands allow
userspace to send a specified opcode to the underlying hardware and
bypass all driver checks on the command. This is useful for a couple of
usecases, mainly:
1. Undocumented vendor specific hardware commands
2. Prototyping new hardware commands not yet supported by the driver

While this all sounds very powerful, it comes with a couple of caveats:
1. Bug reports using raw commands will not get the same level of
   attention as bug reports using supported commands (via taint).
2. Supported commands will be rejected by the RAW command.

With this comes a new debugfs knob, raw_allow_all, to allow full access to
your toes with your weapon of choice.

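For illustration only (not part of the patch), a sketch of how a developer
might poke a raw opcode from userspace; the opcode, device path, and buffer
size below are placeholders:

  /* raw_send.c - illustrative sketch only */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/cxl_mem.h>

  int main(void)
  {
  	unsigned char out[256];
  	struct cxl_send_command cmd = {
  		.id = CXL_MEM_COMMAND_ID_RAW,
  		.raw.opcode = 0xc0ff,	/* placeholder opcode */
  		.out.size = sizeof(out),
  		.out.payload = (uint64_t)(uintptr_t)out,
  	};
  	int fd = open("/dev/cxl/mem0", O_RDWR);

  	if (fd < 0)
  		return 1;

  	/*
  	 * Expect -EPERM unless CONFIG_CXL_MEM_RAW_COMMANDS is enabled and
  	 * the opcode is not on the disabled list (or raw_allow_all is set
  	 * via debugfs). Using this path is expected to taint the kernel.
  	 */
  	if (ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd) == 0)
  		printf("retval %u, %d bytes out\n", cmd.retval, cmd.out.size);

  	close(fd);
  	return 0;
  }

The raw_allow_all override added here ends up under the "cxl/mbox" debugfs
directory.
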
Cc: Ariel Sibley <Ariel.Sibley@microchip.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/Kconfig          |  18 +++++
 drivers/cxl/mem.c            | 125 ++++++++++++++++++++++++++++++++++-
 include/uapi/linux/cxl_mem.h |  12 +++-
 3 files changed, 152 insertions(+), 3 deletions(-)

diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index c4ba3aa0a05d..08eaa8e52083 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -33,6 +33,24 @@ config CXL_MEM
 
 	  If unsure say 'm'.
 
+config CXL_MEM_RAW_COMMANDS
+	bool "RAW Command Interface for Memory Devices"
+	depends on CXL_MEM
+	help
+	  Enable CXL RAW command interface.
+
+	  The CXL driver ioctl interface may assign a kernel ioctl command
+	  number for each specification defined opcode. At any given point in
+	  time the number of opcodes that the specification defines and a device
+	  may implement may exceed the kernel's set of associated ioctl function
+	  numbers. The mismatch is either by omission, specification is too new,
+	  or by design. When prototyping new hardware, or developing / debugging
+	  the driver it is useful to be able to submit any possible command to
+	  the hardware, even commands that may crash the kernel due to their
+	  potential impact to memory currently in use by the kernel.
+
+	  If developing CXL hardware or the driver say Y, otherwise say N.
+
 config CXL_MEM_INSECURE_DEBUG
 	bool "CXL.mem debugging"
 	depends on CXL_MEM
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index ce65630bb75e..6d766a994dce 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
 #include <uapi/linux/cxl_mem.h>
+#include <linux/security.h>
+#include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/cdev.h>
@@ -41,7 +43,14 @@
 
 enum opcode {
 	CXL_MBOX_OP_INVALID		= 0x0000,
+	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
+	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
 	CXL_MBOX_OP_IDENTIFY		= 0x4000,
+	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
+	CXL_MBOX_OP_SET_LSA		= 0x4103,
+	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
+	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
+	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
@@ -91,6 +100,8 @@ struct cxl_memdev {
 
 static int cxl_mem_major;
 static DEFINE_IDA(cxl_memdev_ida);
+static struct dentry *cxl_debugfs;
+static bool raw_allow_all;
 
 /**
  * struct cxl_mem_command - Driver representation of a memory device command
@@ -132,6 +143,49 @@ struct cxl_mem_command {
  */
 static struct cxl_mem_command mem_commands[] = {
 	CXL_CMD(IDENTIFY, NONE, 0, 0x43),
+#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
+	CXL_CMD(RAW, NONE, ~0, ~0),
+#endif
+};
+
+/*
+ * Commands that RAW doesn't permit. The rationale for each:
+ *
+ * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
+ * coordination of transaction timeout values at the root bridge level.
+ *
+ * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
+ * and needs to be coordinated with HDM updates.
+ *
+ * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
+ * driver and any writes from userspace invalidates those contents.
+ *
+ * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
+ * to the device after it is marked clean; userspace cannot make that
+ * assertion.
+ *
+ * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
+ * is kept up to date with patrol notifications and error management.
+ */
+static u16 disabled_raw_commands[] = {
+	CXL_MBOX_OP_ACTIVATE_FW,
+	CXL_MBOX_OP_SET_PARTITION_INFO,
+	CXL_MBOX_OP_SET_LSA,
+	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
+	CXL_MBOX_OP_SCAN_MEDIA,
+	CXL_MBOX_OP_GET_SCAN_MEDIA,
+};
+
+/*
+ * Command sets that RAW doesn't permit. All opcodes in this set are
+ * disabled because they pass plain text security payloads over the
+ * user/kernel boundary. This functionality is intended to be wrapped
+ * behind the keys ABI which allows for encrypted payloads in the UAPI
+ */
+static u8 security_command_sets[] = {
+	0x44, /* Sanitize */
+	0x45, /* Persistent Memory Data-at-rest Security */
+	0x46, /* Security Passthrough */
 };
 
 #define cxl_for_each_cmd(cmd)                                                  \
@@ -162,6 +216,16 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 	return 0;
 }
 
+static bool is_security_command(u16 opcode)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
+		if (security_command_sets[i] == (opcode >> 8))
+			return true;
+	return false;
+}
+
 static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 				 struct mbox_cmd *mbox_cmd)
 {
@@ -170,7 +234,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
 		mbox_cmd->opcode, mbox_cmd->size_in);
 
-	if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
+	if (!is_security_command(mbox_cmd->opcode) ||
+	    IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
 		print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
 				     mbox_cmd->payload_in, mbox_cmd->size_in,
 				     true);
@@ -434,6 +499,9 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
 		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
 		cmd->info.size_in);
 
+	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
+		      "raw command path used\n");
+
 	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
 	cxl_mem_mbox_put(cxlm);
 	if (rc)
@@ -464,6 +532,29 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
 	return rc;
 }
 
+static bool cxl_mem_raw_command_allowed(u16 opcode)
+{
+	int i;
+
+	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
+		return false;
+
+	if (security_locked_down(LOCKDOWN_NONE))
+		return false;
+
+	if (raw_allow_all)
+		return true;
+
+	if (is_security_command(opcode))
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(disabled_raw_commands); i++)
+		if (disabled_raw_commands[i] == opcode)
+			return false;
+
+	return true;
+}
+
 /**
  * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
  * @cxlm: &struct cxl_mem device whose mailbox will be used.
@@ -500,6 +591,29 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
 	if (send_cmd->in.size > cxlm->payload_size)
 		return -EINVAL;
 
+	/* Checks are bypassed for raw commands but along comes the taint! */
+	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
+		const struct cxl_mem_command temp = {
+			.info = {
+				.id = CXL_MEM_COMMAND_ID_RAW,
+				.flags = CXL_MEM_COMMAND_FLAG_NONE,
+				.size_in = send_cmd->in.size,
+				.size_out = send_cmd->out.size,
+			},
+			.opcode = send_cmd->raw.opcode
+		};
+
+		if (send_cmd->raw.rsvd)
+			return -EINVAL;
+
+		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
+			return -EPERM;
+
+		memcpy(out_cmd, &temp, sizeof(temp));
+
+		return 0;
+	}
+
 	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
 		return -EINVAL;
 
@@ -1123,8 +1237,9 @@ static struct pci_driver cxl_mem_driver = {
 
 static __init int cxl_mem_init(void)
 {
-	int rc;
+	struct dentry *mbox_debugfs;
 	dev_t devt;
+	int rc;
 
 	rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl");
 	if (rc)
@@ -1139,11 +1254,17 @@ static __init int cxl_mem_init(void)
 		return rc;
 	}
 
+	cxl_debugfs = debugfs_create_dir("cxl", NULL);
+	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
+	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
+			    &raw_allow_all);
+
 	return 0;
 }
 
 static __exit void cxl_mem_exit(void)
 {
+	debugfs_remove_recursive(cxl_debugfs);
 	pci_unregister_driver(&cxl_mem_driver);
 	unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
 }
diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
index f1f7e9f32ea5..72d1eb601a5d 100644
--- a/include/uapi/linux/cxl_mem.h
+++ b/include/uapi/linux/cxl_mem.h
@@ -22,6 +22,7 @@
 #define CXL_CMDS                                                          \
 	___C(INVALID, "Invalid Command"),                                 \
 	___C(IDENTIFY, "Identify Command"),                               \
+	___C(RAW, "Raw device command"),                                  \
 	___C(MAX, "Last command")
 
 #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
@@ -112,6 +113,9 @@ struct cxl_mem_query_commands {
  * @id: The command to send to the memory device. This must be one of the
  *	commands returned by the query command.
  * @flags: Flags for the command (input).
+ * @raw: Special fields for raw commands
+ * @raw.opcode: Opcode passed to hardware when using the RAW command.
+ * @raw.rsvd: Must be zero.
  * @rsvd: Must be zero.
  * @retval: Return value from the memory device (output).
  * @in.size: Size of the payload to provide to the device (input).
@@ -133,7 +137,13 @@ struct cxl_mem_query_commands {
 struct cxl_send_command {
 	__u32 id;
 	__u32 flags;
-	__u32 rsvd;
+	union {
+		struct {
+			__u16 opcode;
+			__u16 rsvd;
+		} raw;
+		__u32 rsvd;
+	};
 	__u32 retval;
 
 	struct {
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 6/8] cxl/mem: Enable commands via CEL
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
                   ` (4 preceding siblings ...)
  2021-02-10  0:02 ` [PATCH v2 5/8] cxl/mem: Add a "RAW" send command Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  2021-02-11 12:02   ` Jonathan Cameron
  2021-02-10  0:02 ` [PATCH v2 7/8] cxl/mem: Add set of informational commands Ben Widawsky
  2021-02-10  0:02 ` [PATCH v2 8/8] MAINTAINERS: Add maintainers of the CXL driver Ben Widawsky
  7 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

CXL devices identified by the memory-device class code must implement
the Device Command Interface (described in 8.2.9 of the CXL 2.0 spec).
While the driver already maintains a list of commands it supports, there
is still a need to distinguish the commands that the driver knows about
from the commands that are optionally supported by the hardware.

The Command Effects Log (CEL) is specified in the CXL 2.0 specification.
The CEL is one of two types of logs, the other being vendor specific.
They are distinguished in hardware/spec via UUID. The CEL is useful for
2 things:
1. Determine which optional commands are supported by the CXL device.
2. Enumerate any vendor specific commands

The CEL is used by the driver to determine which commands are available
in the hardware and therefore which commands userspace is allowed to
execute. The set of enabled commands might be a subset of the commands the
UAPI advertises and accepts via the CXL_MEM_SEND_COMMAND IOCTL.

The implementation keeps the statically defined table of commands and
supplements it with a per-device bitmap recording which commands are enabled.
This organization was chosen for the following reasons:
- Smaller memory footprint. Doesn't need a table per device.
- Reduce memory allocation complexity.
- Fixed command IDs to opcode mapping for all devices makes development
  and debugging easier.
- Certain helpers are easily achievable, like cxl_for_each_cmd().

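Put concretely, the chosen layout boils down to the following simplified
sketch. The table, bitmap, and cxl_enable_cmd() mirror the patch below;
cxl_cmd_enabled() is only an illustrative wrapper, the patch open-codes the
test in cxl_validate_cmd_from_user().

  /* One static, driver-wide table: fixed command ID -> opcode/sizes */
  static struct cxl_mem_command mem_commands[CXL_MEM_COMMAND_ID_MAX];

  /* Per device, only a bitmap of which of those the CEL advertises */
  struct cxl_mem {
  	/* ...existing fields... */
  	unsigned long *enabled_cmds;	/* BITS_TO_LONGS(cxl_cmd_count) */
  };

  /* Enabling a command while walking the CEL is a set_bit()... */
  static void cxl_enable_cmd(struct cxl_mem *cxlm,
  			   const struct cxl_mem_command *cmd)
  {
  	set_bit(cmd->info.id, cxlm->enabled_cmds);
  }

  /* ...and gating a userspace send is a test_bit() at validate time */
  static bool cxl_cmd_enabled(struct cxl_mem *cxlm, u32 id)
  {
  	return test_bit(id, cxlm->enabled_cmds);
  }
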
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/cxl.h            |   2 +
 drivers/cxl/mem.c            | 216 +++++++++++++++++++++++++++++++++++
 include/uapi/linux/cxl_mem.h |   1 +
 3 files changed, 219 insertions(+)

diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index b3c56fa6e126..9a5e595abfa4 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -68,6 +68,7 @@ struct cxl_memdev;
  *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
  * @mbox_mutex: Mutex to synchronize mailbox access.
  * @firmware_version: Firmware version for the memory device.
+ * @enabled_commands: Hardware commands found enabled in CEL.
  * @pmem: Persistent memory capacity information.
  * @ram: Volatile memory capacity information.
  */
@@ -83,6 +84,7 @@ struct cxl_mem {
 	size_t payload_size;
 	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
 	char firmware_version[0x10];
+	unsigned long *enabled_cmds;
 
 	struct {
 		struct range range;
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 6d766a994dce..e9aa6ca18d99 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -45,6 +45,8 @@ enum opcode {
 	CXL_MBOX_OP_INVALID		= 0x0000,
 	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
 	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
+	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
+	CXL_MBOX_OP_GET_LOG		= 0x0401,
 	CXL_MBOX_OP_IDENTIFY		= 0x4000,
 	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
 	CXL_MBOX_OP_SET_LSA		= 0x4103,
@@ -103,6 +105,19 @@ static DEFINE_IDA(cxl_memdev_ida);
 static struct dentry *cxl_debugfs;
 static bool raw_allow_all;
 
+enum {
+	CEL_UUID,
+	VENDOR_DEBUG_UUID
+};
+
+/* See CXL 2.0 Table 170. Get Log Input Payload */
+static const uuid_t log_uuid[] = {
+	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
+			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
+	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
+					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86)
+};
+
 /**
  * struct cxl_mem_command - Driver representation of a memory device command
  * @info: Command information as it exists for the UAPI
@@ -111,6 +126,8 @@ static bool raw_allow_all;
  *
  *  * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is
  *    only used internally by the driver for sanity checking.
+ *  * %CXL_CMD_INTERNAL_FLAG_PSEUDO: This is a pseudo command which doesn't have
+ *    a direct mapping to hardware. They are implicitly always enabled.
  *
  * The cxl_mem_command is the driver's internal representation of commands that
  * are supported by the driver. Some of these commands may not be supported by
@@ -146,6 +163,7 @@ static struct cxl_mem_command mem_commands[] = {
 #ifdef CONFIG_CXL_MEM_RAW_COMMANDS
 	CXL_CMD(RAW, NONE, ~0, ~0),
 #endif
+	CXL_CMD(GET_SUPPORTED_LOGS, NONE, 0, ~0),
 };
 
 /*
@@ -627,6 +645,10 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
 	c = &mem_commands[send_cmd->id];
 	info = &c->info;
 
+	/* Check that the command is enabled for hardware */
+	if (!test_bit(info->id, cxlm->enabled_cmds))
+		return -ENOTTY;
+
 	if (info->flags & CXL_MEM_COMMAND_FLAG_KERNEL)
 		return -EPERM;
 
@@ -869,6 +891,14 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
 	mutex_init(&cxlm->mbox_mutex);
 	cxlm->pdev = pdev;
 	cxlm->regs = regs + offset;
+	cxlm->enabled_cmds =
+		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
+				   sizeof(unsigned long),
+				   GFP_KERNEL | __GFP_ZERO);
+	if (!cxlm->enabled_cmds) {
+		dev_err(dev, "No memory available for bitmap\n");
+		return NULL;
+	}
 
 	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
 	return cxlm;
@@ -1088,6 +1118,188 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
 	return rc;
 }
 
+struct cxl_mbox_get_log {
+	uuid_t uuid;
+	__le32 offset;
+	__le32 length;
+} __packed;
+
+static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
+{
+	u32 remaining = size;
+	u32 offset = 0;
+
+	while (remaining) {
+		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
+		struct cxl_mbox_get_log log = {
+			.uuid = *uuid,
+			.offset = cpu_to_le32(offset),
+			.length = cpu_to_le32(xfer_size)
+		};
+		struct mbox_cmd mbox_cmd = {
+			.opcode = CXL_MBOX_OP_GET_LOG,
+			.payload_in = &log,
+			.payload_out = out,
+			.size_in = sizeof(log),
+		};
+		int rc;
+
+		rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
+		if (rc)
+			return rc;
+
+		WARN_ON(mbox_cmd.size_out != xfer_size);
+
+		out += xfer_size;
+		remaining -= xfer_size;
+		offset += xfer_size;
+	}
+
+	return 0;
+}
+
+static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
+{
+	struct cxl_mem_command *c;
+
+	cxl_for_each_cmd(c)
+		if (c->opcode == opcode)
+			return c;
+
+	return NULL;
+}
+
+static void cxl_enable_cmd(struct cxl_mem *cxlm,
+			   const struct cxl_mem_command *cmd)
+{
+	if (test_and_set_bit(cmd->info.id, cxlm->enabled_cmds))
+		dev_WARN_ONCE(&cxlm->pdev->dev, true, "cmd enabled twice\n");
+}
+
+/**
+ * cxl_walk_cel() - Walk through the Command Effects Log.
+ * @cxlm: Device.
+ * @size: Length of the Command Effects Log.
+ * @cel: CEL
+ *
+ * Iterate over each entry in the CEL and determine if the driver supports the
+ * command. If so, the command is enabled for the device and can be used later.
+ */
+static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
+{
+	struct cel_entry {
+		__le16 opcode;
+		__le16 effect;
+	} *cel_entry;
+	const int cel_entries = size / sizeof(*cel_entry);
+	int i;
+
+	cel_entry = (struct cel_entry *)cel;
+
+	for (i = 0; i < cel_entries; i++) {
+		const struct cel_entry *ce = &cel_entry[i];
+		const struct cxl_mem_command *cmd =
+			cxl_mem_find_command(le16_to_cpu(ce->opcode));
+
+		if (!cmd) {
+			dev_dbg(&cxlm->pdev->dev, "Unsupported opcode 0x%04x",
+				le16_to_cpu(ce->opcode));
+			continue;
+		}
+
+		cxl_enable_cmd(cxlm, cmd);
+	}
+}
+
+/**
+ * cxl_mem_enumerate_cmds() - Enumerate commands for a device.
+ * @cxlm: The device.
+ *
+ * Returns 0 if enumerate completed successfully.
+ *
+ * CXL devices have optional support for certain commands. This function will
+ * determine the set of supported commands for the hardware and update the
+ * enabled_cmds bitmap in the @cxlm.
+ */
+static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
+{
+	struct device *dev = &cxlm->pdev->dev;
+	struct cxl_mbox_get_supported_logs {
+		__le16 entries;
+		u8 rsvd[6];
+		struct gsl_entry {
+			uuid_t uuid;
+			__le32 size;
+		} __packed entry[2];
+	} __packed gsl;
+	struct mbox_cmd mbox_cmd = {
+		.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
+		.payload_out = &gsl,
+		.size_in = 0,
+	};
+	int i, rc;
+
+	rc = cxl_mem_mbox_get(cxlm);
+	if (rc)
+		return rc;
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
+	if (rc)
+		goto out;
+
+	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
+		rc = -ENXIO;
+		goto out;
+	}
+
+	if (mbox_cmd.size_out > sizeof(gsl)) {
+		dev_warn(dev, "%zu excess logs\n",
+			 (mbox_cmd.size_out - sizeof(gsl)) /
+				 sizeof(struct gsl_entry));
+	}
+
+	for (i = 0; i < le16_to_cpu(gsl.entries); i++) {
+		u32 size = le32_to_cpu(gsl.entry[i].size);
+		uuid_t uuid = gsl.entry[i].uuid;
+		u8 *log;
+
+		dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size);
+
+		if (!uuid_equal(&uuid, &log_uuid[CEL_UUID]))
+			continue;
+
+		/*
+		 * It's a hardware bug if the log size is less than the input
+		 * payload size because there are many mandatory commands.
+		 */
+		if (sizeof(struct cxl_mbox_get_log) > size) {
+			dev_err(dev, "CEL log size reported was too small (%d)",
+				size);
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		log = kvmalloc(size, GFP_KERNEL);
+		if (!log) {
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		rc = cxl_xfer_log(cxlm, &uuid, size, log);
+		if (rc) {
+			kvfree(log);
+			goto out;
+		}
+
+		cxl_walk_cel(cxlm, size, log);
+		kvfree(log);
+	}
+
+out:
+	cxl_mem_mbox_put(cxlm);
+	return rc;
+}
+
 /**
  * cxl_mem_identify() - Send the IDENTIFY command to the device.
  * @cxlm: The device to identify.
@@ -1211,6 +1423,10 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
+	rc = cxl_mem_enumerate_cmds(cxlm);
+	if (rc)
+		return rc;
+
 	rc = cxl_mem_identify(cxlm);
 	if (rc)
 		return rc;
diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
index 72d1eb601a5d..c5e75b9dad9d 100644
--- a/include/uapi/linux/cxl_mem.h
+++ b/include/uapi/linux/cxl_mem.h
@@ -23,6 +23,7 @@
 	___C(INVALID, "Invalid Command"),                                 \
 	___C(IDENTIFY, "Identify Command"),                               \
 	___C(RAW, "Raw device command"),                                  \
+	___C(GET_SUPPORTED_LOGS, "Get Supported Logs"),                   \
 	___C(MAX, "Last command")
 
 #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 7/8] cxl/mem: Add set of informational commands
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
                   ` (5 preceding siblings ...)
  2021-02-10  0:02 ` [PATCH v2 6/8] cxl/mem: Enable commands via CEL Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  2021-02-11 12:07   ` Jonathan Cameron
  2021-02-10  0:02 ` [PATCH v2 8/8] MAINTAINERS: Add maintainers of the CXL driver Ben Widawsky
  7 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

Add initial set of formal commands beyond basic identify and command
enumeration.

Of special note is the Get Log command, which is only specified to return
two log types: CEL and VENDOR_DEBUG. Given that VENDOR_DEBUG is already a
large catch-all for vendor-specific information, there is no known reason
for devices to implement other log types. Unknown log types are therefore
put in the same "vendor passthrough shenanigans" safety regime as raw
commands and are blocked by default.

Up to this point there has been no reason to inspect payload data.
Given the need to check the log type, add a new "validate_payload"
operation as a generic mechanism to restrict / filter commands.

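As a usage illustration only (not part of the patch), fetching the CEL
through the new Get Log command from userspace might look roughly like this.
The device path, buffer size, and error handling are assumptions; the UUID
bytes mirror the driver's log_uuid table, and offset/length are assumed to be
little-endian on the wire (little-endian host here).

  /* get_cel.c - illustrative sketch only */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/cxl_mem.h>

  /* Matches the driver's Get Log input payload (CXL 2.0 Table 170) */
  struct get_log_in {
  	uint8_t  uuid[16];
  	uint32_t offset;
  	uint32_t length;
  } __attribute__((packed));

  int main(void)
  {
  	/* CEL UUID, laid out as in the driver's log_uuid[CEL_UUID] */
  	static const uint8_t cel_uuid[16] = {
  		0x0d, 0xa9, 0xc0, 0xb5, 0xbf, 0x41, 0x4b, 0x78,
  		0x8f, 0x79, 0x96, 0xb1, 0x62, 0x3b, 0x3f, 0x17,
  	};
  	uint8_t out[1024];
  	struct get_log_in in = { .offset = 0, .length = sizeof(out) };
  	struct cxl_send_command cmd = {
  		.id = CXL_MEM_COMMAND_ID_GET_LOG,
  		.in.size = sizeof(in),
  		.in.payload = (uint64_t)(uintptr_t)&in,
  		.out.size = sizeof(out),
  		.out.payload = (uint64_t)(uintptr_t)out,
  	};
  	int fd = open("/dev/cxl/mem0", O_RDWR);

  	if (fd < 0)
  		return 1;
  	memcpy(in.uuid, cel_uuid, sizeof(cel_uuid));

  	/* An unrecognized UUID here is rejected (or taints) per the patch */
  	if (ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd) == 0)
  		printf("retval %u, %d bytes of CEL\n", cmd.retval, cmd.out.size);

  	close(fd);
  	return 0;
  }
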
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/mem.c            | 55 +++++++++++++++++++++++++++++++++++-
 include/uapi/linux/cxl_mem.h |  5 ++++
 2 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index e9aa6ca18d99..e8cc076b9f1b 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -44,12 +44,16 @@
 enum opcode {
 	CXL_MBOX_OP_INVALID		= 0x0000,
 	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
+	CXL_MBOX_OP_GET_FW_INFO		= 0x0200,
 	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
 	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
 	CXL_MBOX_OP_GET_LOG		= 0x0401,
 	CXL_MBOX_OP_IDENTIFY		= 0x4000,
+	CXL_MBOX_OP_GET_PARTITION_INFO	= 0x4100,
 	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
+	CXL_MBOX_OP_GET_LSA		= 0x4102,
 	CXL_MBOX_OP_SET_LSA		= 0x4103,
+	CXL_MBOX_OP_GET_HEALTH_INFO	= 0x4200,
 	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
 	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
 	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
@@ -118,6 +122,9 @@ static const uuid_t log_uuid[] = {
 					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86)
 };
 
+static int validate_log_uuid(struct cxl_mem *cxlm, void __user *payload,
+			     size_t size);
+
 /**
  * struct cxl_mem_command - Driver representation of a memory device command
  * @info: Command information as it exists for the UAPI
@@ -129,6 +136,10 @@ static const uuid_t log_uuid[] = {
  *  * %CXL_CMD_INTERNAL_FLAG_PSEUDO: This is a pseudo command which doesn't have
  *    a direct mapping to hardware. They are implicitly always enabled.
  *
+ * @validate_payload: A function called after the command is validated but
+ * before it's sent to the hardware. The primary purpose is to validate, or
+ * fixup the actual payload.
+ *
  * The cxl_mem_command is the driver's internal representation of commands that
  * are supported by the driver. Some of these commands may not be supported by
  * the hardware. The driver will use @info to validate the fields passed in by
@@ -139,9 +150,12 @@ static const uuid_t log_uuid[] = {
 struct cxl_mem_command {
 	struct cxl_command_info info;
 	enum opcode opcode;
+
+	int (*validate_payload)(struct cxl_mem *cxlm, void __user *payload,
+				size_t size);
 };
 
-#define CXL_CMD(_id, _flags, sin, sout)                                        \
+#define CXL_CMD_VALIDATE(_id, _flags, sin, sout, v)                            \
 	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
 	.info =	{                                                              \
 			.id = CXL_MEM_COMMAND_ID_##_id,                        \
@@ -150,8 +164,12 @@ struct cxl_mem_command {
 			.size_out = sout,                                      \
 		},                                                             \
 	.opcode = CXL_MBOX_OP_##_id,                                           \
+	.validate_payload = v,                                                 \
 	}
 
+#define CXL_CMD(_id, _flags, sin, sout)                                        \
+	CXL_CMD_VALIDATE(_id, _flags, sin, sout, NULL)
+
 /*
  * This table defines the supported mailbox commands for the driver. This table
  * is made up of a UAPI structure. Non-negative values as parameters in the
@@ -164,6 +182,11 @@ static struct cxl_mem_command mem_commands[] = {
 	CXL_CMD(RAW, NONE, ~0, ~0),
 #endif
 	CXL_CMD(GET_SUPPORTED_LOGS, NONE, 0, ~0),
+	CXL_CMD(GET_FW_INFO, NONE, 0, 0x50),
+	CXL_CMD(GET_PARTITION_INFO, NONE, 0, 0x20),
+	CXL_CMD(GET_LSA, NONE, 0x8, ~0),
+	CXL_CMD(GET_HEALTH_INFO, NONE, 0, 0x12),
+	CXL_CMD_VALIDATE(GET_LOG, NONE, 0x18, ~0, validate_log_uuid),
 };
 
 /*
@@ -492,6 +515,14 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
 		mbox_cmd.payload_out = kvzalloc(cxlm->payload_size, GFP_KERNEL);
 
 	if (cmd->info.size_in) {
+		if (cmd->validate_payload) {
+			rc = cmd->validate_payload(cxlm,
+						   u64_to_user_ptr(in_payload),
+						   cmd->info.size_in);
+			if (rc)
+				goto out;
+		}
+
 		mbox_cmd.payload_in = kvzalloc(cmd->info.size_in, GFP_KERNEL);
 		if (!mbox_cmd.payload_in) {
 			rc = -ENOMEM;
@@ -1124,6 +1155,28 @@ struct cxl_mbox_get_log {
 	__le32 length;
 } __packed;
 
+static int validate_log_uuid(struct cxl_mem *cxlm, void __user *input,
+			     size_t size)
+{
+	struct cxl_mbox_get_log __user *get_log = input;
+	uuid_t payload_uuid;
+
+	if (copy_from_user(&payload_uuid, &get_log->uuid, sizeof(uuid_t)))
+		return -EFAULT;
+
+	if (uuid_equal(&payload_uuid, &log_uuid[CEL_UUID]))
+		return 0;
+	if (uuid_equal(&payload_uuid, &log_uuid[VENDOR_DEBUG_UUID]))
+		return 0;
+
+	/* All unspec'd logs shall taint */
+	if (WARN_ONCE(!cxl_mem_raw_command_allowed(CXL_MBOX_OP_RAW),
+		      "Unknown log UUID %pU used\n", &payload_uuid))
+		return -EPERM;
+
+	return 0;
+}
+
 static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
 {
 	u32 remaining = size;
diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
index c5e75b9dad9d..ba4d3b4d6b7d 100644
--- a/include/uapi/linux/cxl_mem.h
+++ b/include/uapi/linux/cxl_mem.h
@@ -24,6 +24,11 @@
 	___C(IDENTIFY, "Identify Command"),                               \
 	___C(RAW, "Raw device command"),                                  \
 	___C(GET_SUPPORTED_LOGS, "Get Supported Logs"),                   \
+	___C(GET_FW_INFO, "Get FW Info"),                                 \
+	___C(GET_PARTITION_INFO, "Get Partition Information"),            \
+	___C(GET_LSA, "Get Label Storage Area"),                          \
+	___C(GET_HEALTH_INFO, "Get Health Info"),                         \
+	___C(GET_LOG, "Get Log"),                                         \
 	___C(MAX, "Last command")
 
 #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 8/8] MAINTAINERS: Add maintainers of the CXL driver
  2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
                   ` (6 preceding siblings ...)
  2021-02-10  0:02 ` [PATCH v2 7/8] cxl/mem: Add set of informational commands Ben Widawsky
@ 2021-02-10  0:02 ` Ben Widawsky
  7 siblings, 0 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10  0:02 UTC (permalink / raw)
  To: linux-cxl
  Cc: Ben Widawsky, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, Alison Schofield

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Alison Schofield <alison.schofield@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 MAINTAINERS | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 6eff4f720c72..93c8694a8f04 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4444,6 +4444,17 @@ M:	Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
 S:	Maintained
 F:	include/linux/compiler_attributes.h
 
+COMPUTE EXPRESS LINK (CXL)
+M:	Alison Schofield <alison.schofield@intel.com>
+M:	Vishal Verma <vishal.l.verma@intel.com>
+M:	Ira Weiny <ira.weiny@intel.com>
+M:	Ben Widawsky <ben.widawsky@intel.com>
+M:	Dan Williams <dan.j.williams@intel.com>
+L:	linux-cxl@vger.kernel.org
+S:	Maintained
+F:	drivers/cxl/
+F:	include/uapi/linux/cxl_mem.h
+
 CONEXANT ACCESSRUNNER USB DRIVER
 L:	accessrunner-general@lists.sourceforge.net
 S:	Orphan
-- 
2.30.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10  0:02 ` [PATCH v2 2/8] cxl/mem: Find device capabilities Ben Widawsky
@ 2021-02-10 13:32   ` Jonathan Cameron
  2021-02-10 15:07     ` Jonathan Cameron
  2021-02-10 19:32     ` Ben Widawsky
  2021-02-10 17:41   ` Jonathan Cameron
  1 sibling, 2 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 13:32 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Tue, 9 Feb 2021 16:02:53 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> Provide enough functionality to utilize the mailbox of a memory device.
> The mailbox is used to interact with the firmware running on the memory
> device. The flow is proven with one implemented command, "identify".
> Because the class code has already told the driver this is a memory
> device and the identify command is mandatory.
> 
> CXL devices contain an array of capabilities that describe the
> interactions software can have with the device or firmware running on
> the device. A CXL compliant device must implement the device status and
> the mailbox capability. Additionally, a CXL compliant memory device must
> implement the memory device capability. Each of the capabilities can
> [will] provide an offset within the MMIO region for interacting with the
> CXL device.
> 
> The capabilities tell the driver how to find and map the register space
> for CXL Memory Devices. The registers are required to utilize the CXL
> spec defined mailbox interface. The spec outlines two mailboxes, primary
> and secondary. The secondary mailbox is earmarked for system firmware,
> and not handled in this driver.
> 
> Primary mailboxes are capable of generating an interrupt when submitting
> a background command. That implementation is saved for a later time.
> 
> Link: https://www.computeexpresslink.org/download-the-specification
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>

Hi Ben,


> +/**
> + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> + * @cxlm: The CXL memory device to communicate with.
> + * @mbox_cmd: Command to send to the memory device.
> + *
> + * Context: Any context. Expects mbox_lock to be held.
> + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> + *         Caller should check the return code in @mbox_cmd to make sure it
> + *         succeeded.

cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
enters an infinite loop as a result.

I haven't checked other paths, but to my mind it is not a good idea to require
two levels of error checking - the example here proves how easy it is to forget
one.

Now all I have to do is figure out why I'm getting an error in the first place!

Jonathan



> + *
> + * This is a generic form of the CXL mailbox send command, thus the only I/O
> + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> + * types of CXL devices may have further information available upon error
> + * conditions.
> + *
> + * The CXL spec allows for up to two mailboxes. The intention is for the primary
> + * mailbox to be OS controlled and the secondary mailbox to be used by system
> + * firmware. This allows the OS and firmware to communicate with the device and
> + * not need to coordinate with each other. The driver only uses the primary
> + * mailbox.
> + */
> +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> +				 struct mbox_cmd *mbox_cmd)
> +{
> +	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> +	u64 cmd_reg, status_reg;
> +	size_t out_len;
> +	int rc;
> +
> +	lockdep_assert_held(&cxlm->mbox_mutex);
> +
> +	/*
> +	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> +	 *   1. Caller reads MB Control Register to verify doorbell is clear
> +	 *   2. Caller writes Command Register
> +	 *   3. Caller writes Command Payload Registers if input payload is non-empty
> +	 *   4. Caller writes MB Control Register to set doorbell
> +	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
> +	 *   6. Caller reads MB Status Register to fetch Return code
> +	 *   7. If command successful, Caller reads Command Register to get Payload Length
> +	 *   8. If output payload is non-empty, host reads Command Payload Registers
> +	 *
> +	 * Hardware is free to do whatever it wants before the doorbell is rung,
> +	 * and isn't allowed to change anything after it clears the doorbell. As
> +	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
> +	 * also happen in any order (though some orders might not make sense).
> +	 */
> +
> +	/* #1 */
> +	if (cxl_doorbell_busy(cxlm)) {
> +		dev_err_ratelimited(&cxlm->pdev->dev,
> +				    "Mailbox re-busy after acquiring\n");
> +		return -EBUSY;
> +	}
> +
> +	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
> +			     mbox_cmd->opcode);
> +	if (mbox_cmd->size_in) {
> +		if (WARN_ON(!mbox_cmd->payload_in))
> +			return -EINVAL;
> +
> +		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
> +				      mbox_cmd->size_in);
> +		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
> +	}
> +
> +	/* #2, #3 */
> +	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> +
> +	/* #4 */
> +	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
> +	writel(CXLDEV_MBOX_CTRL_DOORBELL,
> +	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
> +
> +	/* #5 */
> +	rc = cxl_mem_wait_for_doorbell(cxlm);
> +	if (rc == -ETIMEDOUT) {
> +		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
> +		return rc;
> +	}
> +
> +	/* #6 */
> +	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
> +	mbox_cmd->return_code =
> +		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
> +
> +	if (mbox_cmd->return_code != 0) {
> +		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
> +		return 0;

I'd return some sort of error in this path.  Otherwise the sort of missing
handling I mention above is too easy to hit.
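
Something along these lines perhaps (sketch only; -EIO is an arbitrary pick,
and how to map CXL return codes onto errnos is a separate question):

	if (mbox_cmd->return_code != 0) {
		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
		return -EIO;
	}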

> +	}
> +
> +	/* #7 */
> +	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> +	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
> +
> +	/* #8 */
> +	if (out_len && mbox_cmd->payload_out)
> +		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> +
> +	mbox_cmd->size_out = out_len;
> +
> +	return 0;
> +}
> +
> +/**
> + * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
> + * @cxlm: The memory device to gain access to.
> + *
> + * Context: Any context. Takes the mbox_lock.
> + * Return: 0 if exclusive access was acquired.
> + */
> +static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
> +{
> +	struct device *dev = &cxlm->pdev->dev;
> +	int rc = -EBUSY;
> +	u64 md_status;
> +
> +	mutex_lock_io(&cxlm->mbox_mutex);
> +
> +	/*
> +	 * XXX: There is some amount of ambiguity in the 2.0 version of the spec
> +	 * around the mailbox interface ready (8.2.8.5.1.1).  The purpose of the
> +	 * bit is to allow firmware running on the device to notify the driver
> +	 * that it's ready to receive commands. It is unclear if the bit needs
> +	 * to be read for each transaction mailbox, ie. the firmware can switch
> +	 * it on and off as needed. Second, there is no defined timeout for
> +	 * mailbox ready, like there is for the doorbell interface.
> +	 *
> +	 * Assumptions:
> +	 * 1. The firmware might toggle the Mailbox Interface Ready bit, check
> +	 *    it for every command.
> +	 *
> +	 * 2. If the doorbell is clear, the firmware should have first set the
> +	 *    Mailbox Interface Ready bit. Therefore, waiting for the doorbell
> +	 *    to be ready is sufficient.
> +	 */
> +	rc = cxl_mem_wait_for_doorbell(cxlm);
> +	if (rc) {
> +		dev_warn(dev, "Mailbox interface not ready\n");
> +		goto out;
> +	}
> +
> +	md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET);
> +	if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) {
> +		dev_err(dev,
> +			"mbox: reported doorbell ready, but not mbox ready\n");
> +		goto out;
> +	}
> +
> +	/*
> +	 * Hardware shouldn't allow a ready status but also have failure bits
> +	 * set. Spit out an error, this should be a bug report
> +	 */
> +	rc = -EFAULT;
> +	if (md_status & CXLMDEV_DEV_FATAL) {
> +		dev_err(dev, "mbox: reported ready, but fatal\n");
> +		goto out;
> +	}
> +	if (md_status & CXLMDEV_FW_HALT) {
> +		dev_err(dev, "mbox: reported ready, but halted\n");
> +		goto out;
> +	}
> +	if (CXLMDEV_RESET_NEEDED(md_status)) {
> +		dev_err(dev, "mbox: reported ready, but reset needed\n");
> +		goto out;
> +	}
> +
> +	/* with lock held */
> +	return 0;
> +
> +out:
> +	mutex_unlock(&cxlm->mbox_mutex);
> +	return rc;
> +}
> +
> +/**
> + * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
> + * @cxlm: The CXL memory device to communicate with.
> + *
> + * Context: Any context. Expects mbox_lock to be held.
> + */
> +static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> +{
> +	mutex_unlock(&cxlm->mbox_mutex);
> +}
> +
> +/**
> + * cxl_mem_setup_regs() - Setup necessary MMIO.
> + * @cxlm: The CXL memory device to communicate with.
> + *
> + * Return: 0 if all necessary registers mapped.
> + *
> + * A memory device is required by spec to implement a certain set of MMIO
> + * regions. The purpose of this function is to enumerate and map those
> + * registers.
> + */
> +static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
> +{
> +	struct device *dev = &cxlm->pdev->dev;
> +	int cap, cap_count;
> +	u64 cap_array;
> +
> +	cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET);
> +	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
> +	    CXLDEV_CAP_ARRAY_CAP_ID)
> +		return -ENODEV;
> +
> +	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
> +
> +	for (cap = 1; cap <= cap_count; cap++) {
> +		void __iomem *register_block;
> +		u32 offset;
> +		u16 cap_id;
> +
> +		cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff;
> +		offset = readl(cxlm->regs + cap * 0x10 + 0x4);
> +		register_block = cxlm->regs + offset;
> +
> +		switch (cap_id) {
> +		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
> +			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
> +			cxlm->status_regs = register_block;
> +			break;
> +		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
> +			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
> +			cxlm->mbox_regs = register_block;
> +			break;
> +		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
> +			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
> +			break;
> +		case CXLDEV_CAP_CAP_ID_MEMDEV:
> +			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
> +			cxlm->memdev_regs = register_block;
> +			break;
> +		default:
> +			dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset);
> +			break;
> +		}
> +	}
> +
> +	if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) {
> +		dev_err(dev, "registers not found: %s%s%s\n",
> +			!cxlm->status_regs ? "status " : "",
> +			!cxlm->mbox_regs ? "mbox " : "",
> +			!cxlm->memdev_regs ? "memdev" : "");
> +		return -ENXIO;
> +	}
> +
> +	return 0;
> +}
> +
> +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> +{
> +	const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET);
> +
> +	cxlm->payload_size =
> +		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
> +
> +	/*
> +	 * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register
> +	 *
> +	 * If the size is too small, mandatory commands will not work and so
> +	 * there's no point in going forward. If the size is too large, there's
> +	 * no harm in soft limiting it.
> +	 */
> +	cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
> +	if (cxlm->payload_size < 256) {
> +		dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
> +			cxlm->payload_size);
> +		return -ENXIO;
> +	}
> +
> +	dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
> +		cxlm->payload_size);
> +
> +	return 0;
> +}
> +
> +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> +				      u32 reg_hi)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct cxl_mem *cxlm;
> +	void __iomem *regs;
> +	u64 offset;
> +	u8 bar;
> +	int rc;
> +
> +	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
> +	if (!cxlm) {
> +		dev_err(dev, "No memory available\n");
> +		return NULL;
> +	}
> +
> +	offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
> +	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
> +
> +	/* Basic sanity check that BAR is big enough */
> +	if (pci_resource_len(pdev, bar) < offset) {
> +		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
> +			&pdev->resource[bar], (unsigned long long)offset);
> +		return NULL;
> +	}
> +
> +	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
> +	if (rc != 0) {
> +		dev_err(dev, "failed to map registers\n");
> +		return NULL;
> +	}
> +	regs = pcim_iomap_table(pdev)[bar];
> +
> +	mutex_init(&cxlm->mbox_mutex);
> +	cxlm->pdev = pdev;
> +	cxlm->regs = regs + offset;
> +
> +	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> +	return cxlm;
> +}
>  
>  static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
>  {
> @@ -28,10 +423,85 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
>  	return 0;
>  }
>  
> +/**
> + * cxl_mem_identify() - Send the IDENTIFY command to the device.
> + * @cxlm: The device to identify.
> + *
> + * Return: 0 if identify was executed successfully.
> + *
> + * This will dispatch the identify command to the device and on success populate
> + * structures to be exported to sysfs.
> + */
> +static int cxl_mem_identify(struct cxl_mem *cxlm)
> +{
> +	struct cxl_mbox_identify {
> +		char fw_revision[0x10];
> +		__le64 total_capacity;
> +		__le64 volatile_capacity;
> +		__le64 persistent_capacity;
> +		__le64 partition_align;
> +		__le16 info_event_log_size;
> +		__le16 warning_event_log_size;
> +		__le16 failure_event_log_size;
> +		__le16 fatal_event_log_size;
> +		__le32 lsa_size;
> +		u8 poison_list_max_mer[3];
> +		__le16 inject_poison_limit;
> +		u8 poison_caps;
> +		u8 qos_telemetry_caps;
> +	} __packed id;
> +	struct mbox_cmd mbox_cmd = {
> +		.opcode = CXL_MBOX_OP_IDENTIFY,
> +		.payload_out = &id,
> +		.size_in = 0,
> +	};
> +	int rc;
> +
> +	/* Retrieve initial device memory map */
> +	rc = cxl_mem_mbox_get(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> +	cxl_mem_mbox_put(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	/* TODO: Handle retry or reset responses from firmware. */
> +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> +		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> +			mbox_cmd.return_code);
> +		return -ENXIO;
> +	}
> +
> +	if (mbox_cmd.size_out != sizeof(id))
> +		return -ENXIO;
> +
> +	/*
> +	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> +	 * For now, only the capacity is exported in sysfs
> +	 */
> +	cxlm->ram.range.start = 0;
> +	cxlm->ram.range.end = le64_to_cpu(id.volatile_capacity) - 1;
> +
> +	cxlm->pmem.range.start = 0;
> +	cxlm->pmem.range.end = le64_to_cpu(id.persistent_capacity) - 1;
> +
> +	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
> +
> +	return rc;
> +}
> +
>  static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  {
>  	struct device *dev = &pdev->dev;
> -	int regloc;
> +	struct cxl_mem *cxlm;
> +	int rc, regloc, i;
> +	u32 regloc_size;
> +
> +	rc = pcim_enable_device(pdev);
> +	if (rc)
> +		return rc;
>  
>  	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
>  	if (!regloc) {
> @@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  		return -ENXIO;
>  	}
>  
> -	return 0;
> +	/* Get the size of the Register Locator DVSEC */
> +	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
> +	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
> +
> +	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
> +
> +	rc = -ENXIO;
> +	for (i = regloc; i < regloc + regloc_size; i += 8) {
> +		u32 reg_lo, reg_hi;
> +		u8 reg_type;
> +
> +		/* "register low and high" contain other bits */
> +		pci_read_config_dword(pdev, i, &reg_lo);
> +		pci_read_config_dword(pdev, i + 4, &reg_hi);
> +
> +		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
> +
> +		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
> +			rc = 0;
> +			cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
> +			if (!cxlm)
> +				rc = -ENODEV;
> +			break;
> +		}
> +	}
> +
> +	if (rc)
> +		return rc;
> +
> +	rc = cxl_mem_setup_regs(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	rc = cxl_mem_setup_mailbox(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	return cxl_mem_identify(cxlm);
>  }
>  
>  static const struct pci_device_id cxl_mem_pci_tbl[] = {
> diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> index f135b9f7bb21..ffcbc13d7b5b 100644
> --- a/drivers/cxl/pci.h
> +++ b/drivers/cxl/pci.h
> @@ -14,5 +14,18 @@
>  #define PCI_DVSEC_ID_CXL		0x0
>  
>  #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
> +
> +/* BAR Indicator Register (BIR) */
> +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
> +
> +/* Register Block Identifier (RBI) */
> +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
> +#define CXL_REGLOC_RBI_EMPTY 0
> +#define CXL_REGLOC_RBI_COMPONENT 1
> +#define CXL_REGLOC_RBI_VIRT 2
> +#define CXL_REGLOC_RBI_MEMDEV 3
> +
> +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
>  
>  #endif /* __CXL_PCI_H__ */
> diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> index e709ae8235e7..6267ca9ae683 100644
> --- a/include/uapi/linux/pci_regs.h
> +++ b/include/uapi/linux/pci_regs.h
> @@ -1080,6 +1080,7 @@
>  
>  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
>  #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
> +#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000
>  #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
>  
>  /* Data Link Feature */


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 13:32   ` Jonathan Cameron
@ 2021-02-10 15:07     ` Jonathan Cameron
  2021-02-10 16:55       ` Ben Widawsky
  2021-02-10 19:32     ` Ben Widawsky
  1 sibling, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 15:07 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Wed, 10 Feb 2021 13:32:52 +0000
Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Tue, 9 Feb 2021 16:02:53 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > Provide enough functionality to utilize the mailbox of a memory device.
> > The mailbox is used to interact with the firmware running on the memory
> > device. The flow is proven with one implemented command, "identify".
> > Because the class code has already told the driver this is a memory
> > device and the identify command is mandatory.
> > 
> > CXL devices contain an array of capabilities that describe the
> > interactions software can have with the device or firmware running on
> > the device. A CXL compliant device must implement the device status and
> > the mailbox capability. Additionally, a CXL compliant memory device must
> > implement the memory device capability. Each of the capabilities can
> > [will] provide an offset within the MMIO region for interacting with the
> > CXL device.
> > 
> > The capabilities tell the driver how to find and map the register space
> > for CXL Memory Devices. The registers are required to utilize the CXL
> > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > and secondary. The secondary mailbox is earmarked for system firmware,
> > and not handled in this driver.
> > 
> > Primary mailboxes are capable of generating an interrupt when submitting
> > a background command. That implementation is saved for a later time.
> > 
> > Link: https://www.computeexpresslink.org/download-the-specification
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Reviewed-by: Dan Williams <dan.j.williams@intel.com>  
> 
> Hi Ben,
> 
> 
> > +/**
> > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > + * @cxlm: The CXL memory device to communicate with.
> > + * @mbox_cmd: Command to send to the memory device.
> > + *
> > + * Context: Any context. Expects mbox_lock to be held.
> > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > + *         Caller should check the return code in @mbox_cmd to make sure it
> > + *         succeeded.  
> 
> cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> enters an infinite loop as a result.
> 
> I haven't checked other paths, but to my mind it is not a good idea to require
> two levels of error checking - the example here proves how easy it is to forget
> one.
> 
> Now all I have to do is figure out why I'm getting an error in the first place!

For reference this seems to be our old issue of arm64 memcpy_fromio() only doing 8 byte
or 1 byte copies.  The hack in QEMU to allow that to work doesn't work.
The result is that 1 byte reads replicate across the register
(in this case instead of 0000001c I get 1c1c1c1c)

For these particular registers, we are covered by the rules in 8.2 which say that
1, 2, 4, or 8 byte aligned reads of 64 bit registers etc are fine.

So we should not have to care.  This isn't true for the component registers where
we need to guarantee 4 or 8 byte reads only.

For this particular issue the mailbox_read_reg() function in the QEMU code
needs to handle the size 1 case and set min_access_size = 1 for
mailbox_ops.  Logically it should also handle the 2 byte case I think,
but I'm not hitting that.
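
Roughly the shape of the QEMU-side fix I have in mind (sketch only, against the
out-of-tree CXL emulation, so the write handler name is assumed from those patches):

static const MemoryRegionOps mailbox_ops = {
    .read = mailbox_read_reg,
    .write = mailbox_write_reg,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .valid = {
        .min_access_size = 1,
        .max_access_size = 8,
    },
    .impl = {
        /* pass 1 (and 2) byte accesses through rather than widening them */
        .min_access_size = 1,
        .max_access_size = 8,
    },
};

plus the corresponding size 1 (and probably size 2) cases in mailbox_read_reg().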

Jonathan

> 
> Jonathan
> 
> 
> 
> > + *
> > + * This is a generic form of the CXL mailbox send command, thus the only I/O
> > + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > + * types of CXL devices may have further information available upon error
> > + * conditions.
> > + *
> > + * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > + * mailbox to be OS controlled and the secondary mailbox to be used by system
> > + * firmware. This allows the OS and firmware to communicate with the device and
> > + * not need to coordinate with each other. The driver only uses the primary
> > + * mailbox.
> > + */
> > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > +				 struct mbox_cmd *mbox_cmd)
> > +{
> > +	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > +	u64 cmd_reg, status_reg;
> > +	size_t out_len;
> > +	int rc;
> > +
> > +	lockdep_assert_held(&cxlm->mbox_mutex);
> > +
> > +	/*
> > +	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> > +	 *   1. Caller reads MB Control Register to verify doorbell is clear
> > +	 *   2. Caller writes Command Register
> > +	 *   3. Caller writes Command Payload Registers if input payload is non-empty
> > +	 *   4. Caller writes MB Control Register to set doorbell
> > +	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
> > +	 *   6. Caller reads MB Status Register to fetch Return code
> > +	 *   7. If command successful, Caller reads Command Register to get Payload Length
> > +	 *   8. If output payload is non-empty, host reads Command Payload Registers
> > +	 *
> > +	 * Hardware is free to do whatever it wants before the doorbell is rung,
> > +	 * and isn't allowed to change anything after it clears the doorbell. As
> > +	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
> > +	 * also happen in any order (though some orders might not make sense).
> > +	 */
> > +
> > +	/* #1 */
> > +	if (cxl_doorbell_busy(cxlm)) {
> > +		dev_err_ratelimited(&cxlm->pdev->dev,
> > +				    "Mailbox re-busy after acquiring\n");
> > +		return -EBUSY;
> > +	}
> > +
> > +	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
> > +			     mbox_cmd->opcode);
> > +	if (mbox_cmd->size_in) {
> > +		if (WARN_ON(!mbox_cmd->payload_in))
> > +			return -EINVAL;
> > +
> > +		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
> > +				      mbox_cmd->size_in);
> > +		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
> > +	}
> > +
> > +	/* #2, #3 */
> > +	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > +
> > +	/* #4 */
> > +	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
> > +	writel(CXLDEV_MBOX_CTRL_DOORBELL,
> > +	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
> > +
> > +	/* #5 */
> > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > +	if (rc == -ETIMEDOUT) {
> > +		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
> > +		return rc;
> > +	}
> > +
> > +	/* #6 */
> > +	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
> > +	mbox_cmd->return_code =
> > +		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
> > +
> > +	if (mbox_cmd->return_code != 0) {
> > +		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
> > +		return 0;  
> 
> I'd return some sort of error in this path.  Otherwise the sort of missing
> handling I mention above is too easy to hit.
> 
> > +	}
> > +
> > +	/* #7 */
> > +	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > +	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
> > +
> > +	/* #8 */
> > +	if (out_len && mbox_cmd->payload_out)
> > +		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> > +
> > +	mbox_cmd->size_out = out_len;
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
> > + * @cxlm: The memory device to gain access to.
> > + *
> > + * Context: Any context. Takes the mbox_lock.
> > + * Return: 0 if exclusive access was acquired.
> > + */
> > +static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
> > +{
> > +	struct device *dev = &cxlm->pdev->dev;
> > +	int rc = -EBUSY;
> > +	u64 md_status;
> > +
> > +	mutex_lock_io(&cxlm->mbox_mutex);
> > +
> > +	/*
> > +	 * XXX: There is some amount of ambiguity in the 2.0 version of the spec
> > +	 * around the mailbox interface ready (8.2.8.5.1.1).  The purpose of the
> > +	 * bit is to allow firmware running on the device to notify the driver
> > +	 * that it's ready to receive commands. It is unclear if the bit needs
> > +	 * to be read for each transaction mailbox, ie. the firmware can switch
> > +	 * it on and off as needed. Second, there is no defined timeout for
> > +	 * mailbox ready, like there is for the doorbell interface.
> > +	 *
> > +	 * Assumptions:
> > +	 * 1. The firmware might toggle the Mailbox Interface Ready bit, check
> > +	 *    it for every command.
> > +	 *
> > +	 * 2. If the doorbell is clear, the firmware should have first set the
> > +	 *    Mailbox Interface Ready bit. Therefore, waiting for the doorbell
> > +	 *    to be ready is sufficient.
> > +	 */
> > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > +	if (rc) {
> > +		dev_warn(dev, "Mailbox interface not ready\n");
> > +		goto out;
> > +	}
> > +
> > +	md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET);
> > +	if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) {
> > +		dev_err(dev,
> > +			"mbox: reported doorbell ready, but not mbox ready\n");
> > +		goto out;
> > +	}
> > +
> > +	/*
> > +	 * Hardware shouldn't allow a ready status but also have failure bits
> > +	 * set. Spit out an error, this should be a bug report
> > +	 */
> > +	rc = -EFAULT;
> > +	if (md_status & CXLMDEV_DEV_FATAL) {
> > +		dev_err(dev, "mbox: reported ready, but fatal\n");
> > +		goto out;
> > +	}
> > +	if (md_status & CXLMDEV_FW_HALT) {
> > +		dev_err(dev, "mbox: reported ready, but halted\n");
> > +		goto out;
> > +	}
> > +	if (CXLMDEV_RESET_NEEDED(md_status)) {
> > +		dev_err(dev, "mbox: reported ready, but reset needed\n");
> > +		goto out;
> > +	}
> > +
> > +	/* with lock held */
> > +	return 0;
> > +
> > +out:
> > +	mutex_unlock(&cxlm->mbox_mutex);
> > +	return rc;
> > +}
> > +
> > +/**
> > + * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
> > + * @cxlm: The CXL memory device to communicate with.
> > + *
> > + * Context: Any context. Expects mbox_lock to be held.
> > + */
> > +static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> > +{
> > +	mutex_unlock(&cxlm->mbox_mutex);
> > +}
> > +
> > +/**
> > + * cxl_mem_setup_regs() - Setup necessary MMIO.
> > + * @cxlm: The CXL memory device to communicate with.
> > + *
> > + * Return: 0 if all necessary registers mapped.
> > + *
> > + * A memory device is required by spec to implement a certain set of MMIO
> > + * regions. The purpose of this function is to enumerate and map those
> > + * registers.
> > + */
> > +static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
> > +{
> > +	struct device *dev = &cxlm->pdev->dev;
> > +	int cap, cap_count;
> > +	u64 cap_array;
> > +
> > +	cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET);
> > +	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
> > +	    CXLDEV_CAP_ARRAY_CAP_ID)
> > +		return -ENODEV;
> > +
> > +	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
> > +
> > +	for (cap = 1; cap <= cap_count; cap++) {
> > +		void __iomem *register_block;
> > +		u32 offset;
> > +		u16 cap_id;
> > +
> > +		cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff;
> > +		offset = readl(cxlm->regs + cap * 0x10 + 0x4);
> > +		register_block = cxlm->regs + offset;
> > +
> > +		switch (cap_id) {
> > +		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
> > +			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
> > +			cxlm->status_regs = register_block;
> > +			break;
> > +		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
> > +			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
> > +			cxlm->mbox_regs = register_block;
> > +			break;
> > +		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
> > +			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
> > +			break;
> > +		case CXLDEV_CAP_CAP_ID_MEMDEV:
> > +			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
> > +			cxlm->memdev_regs = register_block;
> > +			break;
> > +		default:
> > +			dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset);
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) {
> > +		dev_err(dev, "registers not found: %s%s%s\n",
> > +			!cxlm->status_regs ? "status " : "",
> > +			!cxlm->mbox_regs ? "mbox " : "",
> > +			!cxlm->memdev_regs ? "memdev" : "");
> > +		return -ENXIO;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> > +{
> > +	const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET);
> > +
> > +	cxlm->payload_size =
> > +		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
> > +
> > +	/*
> > +	 * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register
> > +	 *
> > +	 * If the size is too small, mandatory commands will not work and so
> > +	 * there's no point in going forward. If the size is too large, there's
> > +	 * no harm in soft limiting it.
> > +	 */
> > +	cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
> > +	if (cxlm->payload_size < 256) {
> > +		dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
> > +			cxlm->payload_size);
> > +		return -ENXIO;
> > +	}
> > +
> > +	dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
> > +		cxlm->payload_size);
> > +
> > +	return 0;
> > +}
> > +
> > +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> > +				      u32 reg_hi)
> > +{
> > +	struct device *dev = &pdev->dev;
> > +	struct cxl_mem *cxlm;
> > +	void __iomem *regs;
> > +	u64 offset;
> > +	u8 bar;
> > +	int rc;
> > +
> > +	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
> > +	if (!cxlm) {
> > +		dev_err(dev, "No memory available\n");
> > +		return NULL;
> > +	}
> > +
> > +	offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
> > +	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
> > +
> > +	/* Basic sanity check that BAR is big enough */
> > +	if (pci_resource_len(pdev, bar) < offset) {
> > +		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
> > +			&pdev->resource[bar], (unsigned long long)offset);
> > +		return NULL;
> > +	}
> > +
> > +	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
> > +	if (rc != 0) {
> > +		dev_err(dev, "failed to map registers\n");
> > +		return NULL;
> > +	}
> > +	regs = pcim_iomap_table(pdev)[bar];
> > +
> > +	mutex_init(&cxlm->mbox_mutex);
> > +	cxlm->pdev = pdev;
> > +	cxlm->regs = regs + offset;
> > +
> > +	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> > +	return cxlm;
> > +}
> >  
> >  static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> >  {
> > @@ -28,10 +423,85 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> >  	return 0;
> >  }
> >  
> > +/**
> > + * cxl_mem_identify() - Send the IDENTIFY command to the device.
> > + * @cxlm: The device to identify.
> > + *
> > + * Return: 0 if identify was executed successfully.
> > + *
> > + * This will dispatch the identify command to the device and on success populate
> > + * structures to be exported to sysfs.
> > + */
> > +static int cxl_mem_identify(struct cxl_mem *cxlm)
> > +{
> > +	struct cxl_mbox_identify {
> > +		char fw_revision[0x10];
> > +		__le64 total_capacity;
> > +		__le64 volatile_capacity;
> > +		__le64 persistent_capacity;
> > +		__le64 partition_align;
> > +		__le16 info_event_log_size;
> > +		__le16 warning_event_log_size;
> > +		__le16 failure_event_log_size;
> > +		__le16 fatal_event_log_size;
> > +		__le32 lsa_size;
> > +		u8 poison_list_max_mer[3];
> > +		__le16 inject_poison_limit;
> > +		u8 poison_caps;
> > +		u8 qos_telemetry_caps;
> > +	} __packed id;
> > +	struct mbox_cmd mbox_cmd = {
> > +		.opcode = CXL_MBOX_OP_IDENTIFY,
> > +		.payload_out = &id,
> > +		.size_in = 0,
> > +	};
> > +	int rc;
> > +
> > +	/* Retrieve initial device memory map */
> > +	rc = cxl_mem_mbox_get(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > +	cxl_mem_mbox_put(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	/* TODO: Handle retry or reset responses from firmware. */
> > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > +		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > +			mbox_cmd.return_code);
> > +		return -ENXIO;
> > +	}
> > +
> > +	if (mbox_cmd.size_out != sizeof(id))
> > +		return -ENXIO;
> > +
> > +	/*
> > +	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> > +	 * For now, only the capacity is exported in sysfs
> > +	 */
> > +	cxlm->ram.range.start = 0;
> > +	cxlm->ram.range.end = le64_to_cpu(id.volatile_capacity) - 1;
> > +
> > +	cxlm->pmem.range.start = 0;
> > +	cxlm->pmem.range.end = le64_to_cpu(id.persistent_capacity) - 1;
> > +
> > +	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
> > +
> > +	return rc;
> > +}
> > +
> >  static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  {
> >  	struct device *dev = &pdev->dev;
> > -	int regloc;
> > +	struct cxl_mem *cxlm;
> > +	int rc, regloc, i;
> > +	u32 regloc_size;
> > +
> > +	rc = pcim_enable_device(pdev);
> > +	if (rc)
> > +		return rc;
> >  
> >  	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> >  	if (!regloc) {
> > @@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  		return -ENXIO;
> >  	}
> >  
> > -	return 0;
> > +	/* Get the size of the Register Locator DVSEC */
> > +	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
> > +	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
> > +
> > +	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
> > +
> > +	rc = -ENXIO;
> > +	for (i = regloc; i < regloc + regloc_size; i += 8) {
> > +		u32 reg_lo, reg_hi;
> > +		u8 reg_type;
> > +
> > +		/* "register low and high" contain other bits */
> > +		pci_read_config_dword(pdev, i, &reg_lo);
> > +		pci_read_config_dword(pdev, i + 4, &reg_hi);
> > +
> > +		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
> > +
> > +		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
> > +			rc = 0;
> > +			cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
> > +			if (!cxlm)
> > +				rc = -ENODEV;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_setup_regs(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_setup_mailbox(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	return cxl_mem_identify(cxlm);
> >  }
> >  
> >  static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> > index f135b9f7bb21..ffcbc13d7b5b 100644
> > --- a/drivers/cxl/pci.h
> > +++ b/drivers/cxl/pci.h
> > @@ -14,5 +14,18 @@
> >  #define PCI_DVSEC_ID_CXL		0x0
> >  
> >  #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> > +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
> > +
> > +/* BAR Indicator Register (BIR) */
> > +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
> > +
> > +/* Register Block Identifier (RBI) */
> > +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
> > +#define CXL_REGLOC_RBI_EMPTY 0
> > +#define CXL_REGLOC_RBI_COMPONENT 1
> > +#define CXL_REGLOC_RBI_VIRT 2
> > +#define CXL_REGLOC_RBI_MEMDEV 3
> > +
> > +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
> >  
> >  #endif /* __CXL_PCI_H__ */
> > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > index e709ae8235e7..6267ca9ae683 100644
> > --- a/include/uapi/linux/pci_regs.h
> > +++ b/include/uapi/linux/pci_regs.h
> > @@ -1080,6 +1080,7 @@
> >  
> >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> >  #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
> > +#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000
> >  #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
> >  
> >  /* Data Link Feature */  
> 


^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10  0:02 ` [PATCH v2 5/8] cxl/mem: Add a "RAW" send command Ben Widawsky
@ 2021-02-10 15:26   ` Ariel.Sibley
  2021-02-10 16:49     ` Ben Widawsky
  2021-02-11 16:43     ` Dan Williams
  2021-02-11 11:19   ` Jonathan Cameron
  1 sibling, 2 replies; 57+ messages in thread
From: Ariel.Sibley @ 2021-02-10 15:26 UTC (permalink / raw)
  To: ben.widawsky, linux-cxl
  Cc: linux-acpi, linux-kernel, linux-nvdimm, linux-pci, helgaas,
	cbrowy, hch, dan.j.williams, david, rientjes, ira.weiny, jcm,
	Jonathan.Cameron, rafael.j.wysocki, rdunlap, vishal.l.verma,
	jgroves, sean.v.kelley

> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> index c4ba3aa0a05d..08eaa8e52083 100644
> --- a/drivers/cxl/Kconfig
> +++ b/drivers/cxl/Kconfig
> @@ -33,6 +33,24 @@ config CXL_MEM
> 
>           If unsure say 'm'.
> 
> +config CXL_MEM_RAW_COMMANDS
> +       bool "RAW Command Interface for Memory Devices"
> +       depends on CXL_MEM
> +       help
> +         Enable CXL RAW command interface.
> +
> +         The CXL driver ioctl interface may assign a kernel ioctl command
> +         number for each specification defined opcode. At any given point in
> +         time the number of opcodes that the specification defines and a device
> +         may implement may exceed the kernel's set of associated ioctl function
> +         numbers. The mismatch is either by omission, specification is too new,
> +         or by design. When prototyping new hardware, or developing / debugging
> +         the driver it is useful to be able to submit any possible command to
> +         the hardware, even commands that may crash the kernel due to their
> +         potential impact to memory currently in use by the kernel.
> +
> +         If developing CXL hardware or the driver say Y, otherwise say N.

Blocking RAW commands by default will prevent vendors from developing user
space tools that utilize vendor specific commands. Vendors of CXL.mem devices
should take ownership of ensuring any vendor defined commands that could cause
user data to be exposed or corrupted are disabled at the device level for
shipping configurations.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
  2021-02-10  0:02 ` [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints Ben Widawsky
@ 2021-02-10 16:17   ` Jonathan Cameron
  2021-02-10 17:12     ` Ben Widawsky
  0 siblings, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 16:17 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, Jonathan Corbet, Dave Jiang

On Tue, 9 Feb 2021 16:02:52 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> From: Dan Williams <dan.j.williams@intel.com>
> 
> The CXL.mem protocol allows a device to act as a provider of "System
> RAM" and/or "Persistent Memory" that is fully coherent as if the memory
> was attached to the typical CPU memory controller.
> 
> With the CXL-2.0 specification a PCI endpoint can implement a "Type-3"
> device interface and give the operating system control over "Host
> Managed Device Memory". See section 2.3 Type 3 CXL Device.
> 
> The memory range exported by the device may optionally be described by
> the platform firmware memory map, or by infrastructure like LIBNVDIMM to
> provision persistent memory capacity from one, or more, CXL.mem devices.
> 
> A pre-requisite for Linux-managed memory-capacity provisioning is this
> cxl_mem driver that can speak the mailbox protocol defined in section
> 8.2.8.4 Mailbox Registers.
> 
> For now just land the initial driver boiler-plate and Documentation/
> infrastructure.
> 
> Link: https://www.computeexpresslink.org/download-the-specification
> Cc: Jonathan Corbet <corbet@lwn.net>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Acked-by: David Rientjes <rientjes@google.com> (v1)

A few trivial bits inline but nothing that I feel that strongly about.
It is probably a good idea to add a note about generic dvsec code
somewhere in this patch description (to avoid people raising it on
future versions!)

With the define of PCI_EXT_CAP_ID_DVSEC dropped (it's in the generic
header already).

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  Documentation/driver-api/cxl/index.rst        | 12 ++++
>  .../driver-api/cxl/memory-devices.rst         | 29 +++++++++
>  Documentation/driver-api/index.rst            |  1 +
>  drivers/Kconfig                               |  1 +
>  drivers/Makefile                              |  1 +
>  drivers/cxl/Kconfig                           | 35 +++++++++++
>  drivers/cxl/Makefile                          |  4 ++
>  drivers/cxl/mem.c                             | 63 +++++++++++++++++++
>  drivers/cxl/pci.h                             | 18 ++++++
>  include/linux/pci_ids.h                       |  1 +
>  10 files changed, 165 insertions(+)
>  create mode 100644 Documentation/driver-api/cxl/index.rst
>  create mode 100644 Documentation/driver-api/cxl/memory-devices.rst
>  create mode 100644 drivers/cxl/Kconfig
>  create mode 100644 drivers/cxl/Makefile
>  create mode 100644 drivers/cxl/mem.c
>  create mode 100644 drivers/cxl/pci.h
> 
> diff --git a/Documentation/driver-api/cxl/index.rst b/Documentation/driver-api/cxl/index.rst
> new file mode 100644
> index 000000000000..036e49553542
> --- /dev/null
> +++ b/Documentation/driver-api/cxl/index.rst
> @@ -0,0 +1,12 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +====================
> +Compute Express Link
> +====================
> +
> +.. toctree::
> +   :maxdepth: 1
> +
> +   memory-devices
> +
> +.. only::  subproject and html
> diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
> new file mode 100644
> index 000000000000..43177e700d62
> --- /dev/null
> +++ b/Documentation/driver-api/cxl/memory-devices.rst
> @@ -0,0 +1,29 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +.. include:: <isonum.txt>
> +
> +===================================
> +Compute Express Link Memory Devices
> +===================================
> +
> +A Compute Express Link Memory Device is a CXL component that implements the
> +CXL.mem protocol. It contains some amount of volatile memory, persistent memory,
> +or both. It is enumerated as a PCI device for configuration and passing
> +messages over an MMIO mailbox. Its contribution to the System Physical
> +Address space is handled via HDM (Host Managed Device Memory) decoders
> +that optionally define a device's contribution to an interleaved address
> +range across multiple devices underneath a host-bridge or interleaved
> +across host-bridges.
> +
> +Driver Infrastructure
> +=====================
> +
> +This section covers the driver infrastructure for a CXL memory device.
> +
> +CXL Memory Device
> +-----------------
> +
> +.. kernel-doc:: drivers/cxl/mem.c
> +   :doc: cxl mem
> +
> +.. kernel-doc:: drivers/cxl/mem.c
> +   :internal:
> diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
> index 2456d0a97ed8..d246a18fd78f 100644
> --- a/Documentation/driver-api/index.rst
> +++ b/Documentation/driver-api/index.rst
> @@ -35,6 +35,7 @@ available subsections can be seen below.
>     usb/index
>     firewire
>     pci/index
> +   cxl/index
>     spi
>     i2c
>     ipmb
> diff --git a/drivers/Kconfig b/drivers/Kconfig
> index dcecc9f6e33f..62c753a73651 100644
> --- a/drivers/Kconfig
> +++ b/drivers/Kconfig
> @@ -6,6 +6,7 @@ menu "Device Drivers"
>  source "drivers/amba/Kconfig"
>  source "drivers/eisa/Kconfig"
>  source "drivers/pci/Kconfig"
> +source "drivers/cxl/Kconfig"
>  source "drivers/pcmcia/Kconfig"
>  source "drivers/rapidio/Kconfig"
>  
> diff --git a/drivers/Makefile b/drivers/Makefile
> index fd11b9ac4cc3..678ea810410f 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -73,6 +73,7 @@ obj-$(CONFIG_NVM)		+= lightnvm/
>  obj-y				+= base/ block/ misc/ mfd/ nfc/
>  obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
>  obj-$(CONFIG_DAX)		+= dax/
> +obj-$(CONFIG_CXL_BUS)		+= cxl/
>  obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
>  obj-$(CONFIG_NUBUS)		+= nubus/
>  obj-y				+= macintosh/
> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> new file mode 100644
> index 000000000000..9e80b311e928
> --- /dev/null
> +++ b/drivers/cxl/Kconfig
> @@ -0,0 +1,35 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +menuconfig CXL_BUS
> +	tristate "CXL (Compute Express Link) Devices Support"
> +	depends on PCI
> +	help
> +	  CXL is a bus that is electrically compatible with PCI Express, but
> +	  layers three protocols on that signalling (CXL.io, CXL.cache, and
> +	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
> +	  locally, the CXL.mem protocol allows devices to be fully coherent
> +	  memory targets, the CXL.io protocol is equivalent to PCI Express.
> +	  Say 'y' to enable support for the configuration and management of
> +	  devices supporting these protocols.
> +
> +if CXL_BUS
> +
> +config CXL_MEM
> +	tristate "CXL.mem: Memory Devices"
> +	help
> +	  The CXL.mem protocol allows a device to act as a provider of
> +	  "System RAM" and/or "Persistent Memory" that is fully coherent
> +	  as if the memory was attached to the typical CPU memory
> +	  controller.
> +
> +	  Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as
> +	  a module) that will attach to CXL.mem devices for
> +	  configuration, provisioning, and health monitoring. This
> +	  driver is required for dynamic provisioning of CXL.mem
> +	  attached memory which is a prerequisite for persistent memory
> +	  support. Typically volatile memory is mapped by platform
> +	  firmware and included in the platform memory map, but in some
> +	  cases the OS is responsible for mapping that memory. See
> +	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
> +
> +	  If unsure say 'm'.
> +endif
> diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
> new file mode 100644
> index 000000000000..4a30f7c3fc4a
> --- /dev/null
> +++ b/drivers/cxl/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-$(CONFIG_CXL_MEM) += cxl_mem.o
> +
> +cxl_mem-y := mem.o
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> new file mode 100644
> index 000000000000..99a6571508df
> --- /dev/null
> +++ b/drivers/cxl/mem.c
> @@ -0,0 +1,63 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/io.h>
> +#include "pci.h"
> +
> +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> +{
> +	int pos;
> +
> +	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
> +	if (!pos)
> +		return 0;
> +
> +	while (pos) {
> +		u16 vendor, id;
> +
> +		pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor);
> +		pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id);
> +		if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id)
> +			return pos;
> +
> +		pos = pci_find_next_ext_capability(pdev, pos,
> +						   PCI_EXT_CAP_ID_DVSEC);
> +	}
> +
> +	return 0;

Christoph Hellwig raised this in v1.

https://lore.kernel.org/linux-pci/20201104201141.GA399378@bjorn-Precision-5520/

+CC Dave Jiang for update on that.

This wants to move towards a generic helper.  We can do the deduplication
later as Bjorn suggested.
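
Roughly what I'd expect such a helper to look like - just the loop above with
the vendor ID made a parameter (sketch only, and the name is just a suggestion):

int pci_find_dvsec_capability(struct pci_dev *dev, u16 vendor, u16 dvsec)
{
	int pos;

	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DVSEC);
	if (!pos)
		return 0;

	while (pos) {
		u16 v, id;

		pci_read_config_word(dev, pos + PCI_DVSEC_HEADER1, &v);
		pci_read_config_word(dev, pos + PCI_DVSEC_HEADER2, &id);
		if (vendor == v && dvsec == id)
			return pos;

		pos = pci_find_next_ext_capability(dev, pos,
						   PCI_EXT_CAP_ID_DVSEC);
	}

	return 0;
}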

> +}
> +
> +static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> +	struct device *dev = &pdev->dev;
> +	int regloc;
> +
> +	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> +	if (!regloc) {
> +		dev_err(dev, "register location dvsec not found\n");
> +		return -ENXIO;
> +	}
> +
> +	return 0;
> +}
> +
> +static const struct pci_device_id cxl_mem_pci_tbl[] = {
> +	/* PCI class code for CXL.mem Type-3 Devices */
> +	{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
> +	  PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF, 0xffffff, 0 },

Having looked at this and thought 'that's a bit tricky to check'
I did a quick grep and it seems the kernel is split between this approach
and people going with the more readable C99 style initializers
	.class = .. etc

Personally I'd find the c99 approach easier to read. 
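
i.e. something like this for the one entry (sketch only; PCI_DEVICE_CLASS()
from include/linux/pci.h is an alternative that fills in the PCI_ANY_ID fields):

	{
		/* PCI class code for CXL.mem Type-3 Devices */
		.class = (PCI_CLASS_MEMORY_CXL << 8) | CXL_MEMORY_PROGIF,
		.class_mask = 0xffffff,
		.vendor = PCI_ANY_ID,
		.device = PCI_ANY_ID,
		.subvendor = PCI_ANY_ID,
		.subdevice = PCI_ANY_ID,
	},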

> +	{ /* terminate list */ },
> +};
> +MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
> +
> +static struct pci_driver cxl_mem_driver = {
> +	.name			= KBUILD_MODNAME,
> +	.id_table		= cxl_mem_pci_tbl,
> +	.probe			= cxl_mem_probe,
> +	.driver	= {
> +		.probe_type	= PROBE_PREFER_ASYNCHRONOUS,
> +	},
> +};
> +
> +MODULE_LICENSE("GPL v2");
> +module_pci_driver(cxl_mem_driver);
> diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> new file mode 100644
> index 000000000000..f135b9f7bb21
> --- /dev/null
> +++ b/drivers/cxl/pci.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> +#ifndef __CXL_PCI_H__
> +#define __CXL_PCI_H__
> +
> +#define CXL_MEMORY_PROGIF	0x10
> +
> +/*
> + * See section 8.1 Configuration Space Registers in the CXL 2.0
> + * Specification
> + */
> +#define PCI_EXT_CAP_ID_DVSEC		0x23

This is already in include/uapi/linux/pci_regs.h

> +#define PCI_DVSEC_VENDOR_ID_CXL		0x1E98
> +#define PCI_DVSEC_ID_CXL		0x0
> +
> +#define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> +
> +#endif /* __CXL_PCI_H__ */
> diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
> index d8156a5dbee8..766260a9b247 100644
> --- a/include/linux/pci_ids.h
> +++ b/include/linux/pci_ids.h
> @@ -51,6 +51,7 @@
>  #define PCI_BASE_CLASS_MEMORY		0x05
>  #define PCI_CLASS_MEMORY_RAM		0x0500
>  #define PCI_CLASS_MEMORY_FLASH		0x0501
> +#define PCI_CLASS_MEMORY_CXL		0x0502
>  #define PCI_CLASS_MEMORY_OTHER		0x0580
>  
>  #define PCI_BASE_CLASS_BRIDGE		0x06


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10 15:26   ` Ariel.Sibley
@ 2021-02-10 16:49     ` Ben Widawsky
  2021-02-10 18:03       ` Ariel.Sibley
  2021-02-11 16:43     ` Dan Williams
  1 sibling, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 16:49 UTC (permalink / raw)
  To: Ariel.Sibley
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	helgaas, cbrowy, hch, dan.j.williams, david, rientjes, ira.weiny,
	jcm, Jonathan.Cameron, rafael.j.wysocki, rdunlap, vishal.l.verma,
	jgroves, sean.v.kelley

On 21-02-10 15:26:27, Ariel.Sibley@microchip.com wrote:
> > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > index c4ba3aa0a05d..08eaa8e52083 100644
> > --- a/drivers/cxl/Kconfig
> > +++ b/drivers/cxl/Kconfig
> > @@ -33,6 +33,24 @@ config CXL_MEM
> > 
> >           If unsure say 'm'.
> > 
> > +config CXL_MEM_RAW_COMMANDS
> > +       bool "RAW Command Interface for Memory Devices"
> > +       depends on CXL_MEM
> > +       help
> > +         Enable CXL RAW command interface.
> > +
> > +         The CXL driver ioctl interface may assign a kernel ioctl command
> > +         number for each specification defined opcode. At any given point in
> > +         time the number of opcodes that the specification defines and a device
> > +         may implement may exceed the kernel's set of associated ioctl function
> > +         numbers. The mismatch is either by omission, specification is too new,
> > +         or by design. When prototyping new hardware, or developing / debugging
> > +         the driver it is useful to be able to submit any possible command to
> > +         the hardware, even commands that may crash the kernel due to their
> > +         potential impact to memory currently in use by the kernel.
> > +
> > +         If developing CXL hardware or the driver say Y, otherwise say N.
> 
> Blocking RAW commands by default will prevent vendors from developing user
> space tools that utilize vendor specific commands. Vendors of CXL.mem devices
> should take ownership of ensuring any vendor defined commands that could cause
> user data to be exposed or corrupted are disabled at the device level for
> shipping configurations.

Thanks for bringing this up, Ariel. If there is a recommendation on how to codify
this, I would certainly like to know because the explanation will be long.

---

The background:

The enabling/disabling of the Kconfig option is driven by the distribution
and/or system integrator. Even if we made the default 'y', nothing stops them
from changing that. if you are using this driver in production and insist on
using RAW commands, you are free to carry around a small patch to get rid of the
WARN (it is a one-liner).

To recap why this is in place - the driver owns the sanctity of the device and
therefore a [large] part of the whole system. What we can do as driver writers
is figure out the set of commands that are "safe" and allow those. Aside from
being able to validate them, we're able to mediate them with other parallel
operations that might conflict. We gain the ability to squint extra hard at bug
reports. We provide a reason to try to use a well defined part of the spec.
Realizing that only allowing that small set of commands in a rapidly growing
ecosystem is not a welcoming API, we decided on RAW.

Vendor commands can be one of two types:
1. Some functionality probably most vendors want.
2. Functionality that is really single vendor specific.

Hopefully we can agree that the path for case #1 is to work with the consortium
to standardize a command that does what is needed and that can eventually become
part of UAPI. The situation is unfortunate, but temporary. If you won't be able
to upgrade your kernel, patch out the WARN as above.

The second situation is interesting and does need some more thought and
discussion.

---

I see 3 realistic options for truly vendor specific commands.
1. Tough noogies. Vendors aren't special and they shouldn't do that.
2. modparam to disable the WARN for specific devices (let the sysadmin decide)
3. Try to make them part of UAPI.

The right answer to me is #1, but I also realize I live in the real world.

#2 provides too much flexibility. Vendors will just do what they please and
distros and/or integrators will be seen as hostile if they don't accommodate.

I like #3, but I have a feeling not everyone will agree. My proposal for vendor
specific commands is, if it's clear it's truly a unique command, allow adding it
as part of UAPI (moving it out of RAW). I expect like 5 of these, ever. If we
start getting multiple per vendor, we've failed. The infrastructure is already
in place to allow doing this pretty easily. I think we'd have to draw up some
guidelines (like adding test cases for the command) to allow these to come in.
Anything with command effects is going to need extra scrutiny.

In my opinion, as maintainers of the driver, we do owe the community an answer
as to our direction for this. Dan, what is your thought?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 15:07     ` Jonathan Cameron
@ 2021-02-10 16:55       ` Ben Widawsky
  2021-02-10 17:30         ` Jonathan Cameron
  2021-02-10 18:16         ` Ben Widawsky
  0 siblings, 2 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 16:55 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-10 15:07:59, Jonathan Cameron wrote:
> On Wed, 10 Feb 2021 13:32:52 +0000
> Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> 
> > On Tue, 9 Feb 2021 16:02:53 -0800
> > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > 
> > > Provide enough functionality to utilize the mailbox of a memory device.
> > > The mailbox is used to interact with the firmware running on the memory
> > > device. The flow is proven with one implemented command, "identify".
> > > Because the class code has already told the driver this is a memory
> > > device and the identify command is mandatory.
> > > 
> > > CXL devices contain an array of capabilities that describe the
> > > interactions software can have with the device or firmware running on
> > > the device. A CXL compliant device must implement the device status and
> > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > implement the memory device capability. Each of the capabilities can
> > > [will] provide an offset within the MMIO region for interacting with the
> > > CXL device.
> > > 
> > > The capabilities tell the driver how to find and map the register space
> > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > and not handled in this driver.
> > > 
> > > Primary mailboxes are capable of generating an interrupt when submitting
> > > a background command. That implementation is saved for a later time.
> > > 
> > > Link: https://www.computeexpresslink.org/download-the-specification
> > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>  
> > 
> > Hi Ben,
> > 
> > 
> > > +/**
> > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > + * @cxlm: The CXL memory device to communicate with.
> > > + * @mbox_cmd: Command to send to the memory device.
> > > + *
> > > + * Context: Any context. Expects mbox_lock to be held.
> > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > + *         succeeded.  
> > 
> > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > enters an infinite loop as a result.

I meant to fix that.

> > 
> > I haven't checked other paths, but to my mind it is not a good idea to require
> > two levels of error checking - the example here proves how easy it is to forget
> > one.

Demonstrably, you're correct. I think it would be good to have a kernel-only
mbox command that does the error checking though. Let me type something up and
see how it looks.
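
Roughly what I have in mind, just as a sketch (the wrapper name and the -EIO
choice are placeholders, not necessarily what I'll send):

static int cxl_mem_mbox_send_cmd_checked(struct cxl_mem *cxlm,
					 struct mbox_cmd *mbox_cmd)
{
	int rc;

	/* Transport-level failures: timeout, doorbell busy, ... */
	rc = cxl_mem_mbox_send_cmd(cxlm, mbox_cmd);
	if (rc)
		return rc;

	/* Device-reported failure, so callers can't forget to check it */
	if (mbox_cmd->return_code != CXL_MBOX_SUCCESS)
		return -EIO;

	return 0;
}

Kernel-internal callers like cxl_xfer_log() would then have a single error path
to handle.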

> > 
> > Now all I have to do is figure out why I'm getting an error in the first place!
> 
> For reference this seems to be our old issue of arm64 memcpy_fromio() only doing 8 byte
> or 1 byte copies.  The hack in QEMU to allow that to work, doesn't work.
> Result is that 1 byte reads replicate across the register
> (in this case instead of 0000001c I get 1c1c1c1c)
> 
> For these particular registers, we are covered by the rules in 8.2, which say that
> aligned 1-, 2-, 4-, and 8-byte reads of 64-bit registers etc. are fine.
> 
> So we should not have to care.  This isn't true for the component registers where
> we need to guarantee 4 or 8 byte reads only.
> 
> For this particular issue the mailbox_read_reg() function in the QEMU code
> needs to handle the size 1 case and set min_access_size = 1 for
> mailbox_ops.  Logically it should also handle the 2 byte case I think,
> but I'm not hitting that.
> 
> Jonathan

I think the latest QEMU patches should do the right thing (I have a v4 branch if
you want to try it). If they don't, it'd be worth debugging. The memory
accessors should split up or combine the reads/writes to whatever the emulation
supports (4 or 8 only in this case).

We can move this discussion to the QEMU list if it's not just a simple bug on my
part.
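
For reference, the knobs on the QEMU side look roughly like this; a sketch only,
with the handler names standing in for the existing mailbox read/write callbacks
and the sizes quoted from memory rather than from the actual v4 branch:

static const MemoryRegionOps mailbox_ops = {
    .read = mailbox_read_reg,
    .write = mailbox_write_reg,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .valid = {
        /* what the guest may issue: 1-8 byte accesses per CXL 8.2 */
        .min_access_size = 1,
        .max_access_size = 8,
    },
    .impl = {
        /* what the callbacks implement; the memory core splits or
         * combines guest accesses to fit these sizes */
        .min_access_size = 4,
        .max_access_size = 8,
    },
};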

> 
> > 
> > Jonathan
> > 
> > 
> > 
> > > + *
> > > + * This is a generic form of the CXL mailbox send command, thus the only I/O
> > > + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > > + * types of CXL devices may have further information available upon error
> > > + * conditions.
> > > + *
> > > + * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > > + * mailbox to be OS controlled and the secondary mailbox to be used by system
> > > + * firmware. This allows the OS and firmware to communicate with the device and
> > > + * not need to coordinate with each other. The driver only uses the primary
> > > + * mailbox.
> > > + */
> > > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > +				 struct mbox_cmd *mbox_cmd)
> > > +{
> > > +	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > > +	u64 cmd_reg, status_reg;
> > > +	size_t out_len;
> > > +	int rc;
> > > +
> > > +	lockdep_assert_held(&cxlm->mbox_mutex);
> > > +
> > > +	/*
> > > +	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> > > +	 *   1. Caller reads MB Control Register to verify doorbell is clear
> > > +	 *   2. Caller writes Command Register
> > > +	 *   3. Caller writes Command Payload Registers if input payload is non-empty
> > > +	 *   4. Caller writes MB Control Register to set doorbell
> > > +	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
> > > +	 *   6. Caller reads MB Status Register to fetch Return code
> > > +	 *   7. If command successful, Caller reads Command Register to get Payload Length
> > > +	 *   8. If output payload is non-empty, host reads Command Payload Registers
> > > +	 *
> > > +	 * Hardware is free to do whatever it wants before the doorbell is rung,
> > > +	 * and isn't allowed to change anything after it clears the doorbell. As
> > > +	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
> > > +	 * also happen in any order (though some orders might not make sense).
> > > +	 */
> > > +
> > > +	/* #1 */
> > > +	if (cxl_doorbell_busy(cxlm)) {
> > > +		dev_err_ratelimited(&cxlm->pdev->dev,
> > > +				    "Mailbox re-busy after acquiring\n");
> > > +		return -EBUSY;
> > > +	}
> > > +
> > > +	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
> > > +			     mbox_cmd->opcode);
> > > +	if (mbox_cmd->size_in) {
> > > +		if (WARN_ON(!mbox_cmd->payload_in))
> > > +			return -EINVAL;
> > > +
> > > +		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
> > > +				      mbox_cmd->size_in);
> > > +		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
> > > +	}
> > > +
> > > +	/* #2, #3 */
> > > +	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > > +
> > > +	/* #4 */
> > > +	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
> > > +	writel(CXLDEV_MBOX_CTRL_DOORBELL,
> > > +	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
> > > +
> > > +	/* #5 */
> > > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > > +	if (rc == -ETIMEDOUT) {
> > > +		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
> > > +		return rc;
> > > +	}
> > > +
> > > +	/* #6 */
> > > +	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
> > > +	mbox_cmd->return_code =
> > > +		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
> > > +
> > > +	if (mbox_cmd->return_code != 0) {
> > > +		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
> > > +		return 0;  
> > 
> > I'd return some sort of error in this path.  Otherwise the sort of missing
> > handling I mention above is too easy to hit.
> > 
> > > +	}
> > > +
> > > +	/* #7 */
> > > +	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > > +	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
> > > +
> > > +	/* #8 */
> > > +	if (out_len && mbox_cmd->payload_out)
> > > +		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> > > +
> > > +	mbox_cmd->size_out = out_len;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/**
> > > + * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
> > > + * @cxlm: The memory device to gain access to.
> > > + *
> > > + * Context: Any context. Takes the mbox_lock.
> > > + * Return: 0 if exclusive access was acquired.
> > > + */
> > > +static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
> > > +{
> > > +	struct device *dev = &cxlm->pdev->dev;
> > > +	int rc = -EBUSY;
> > > +	u64 md_status;
> > > +
> > > +	mutex_lock_io(&cxlm->mbox_mutex);
> > > +
> > > +	/*
> > > +	 * XXX: There is some amount of ambiguity in the 2.0 version of the spec
> > > +	 * around the mailbox interface ready (8.2.8.5.1.1).  The purpose of the
> > > +	 * bit is to allow firmware running on the device to notify the driver
> > > +	 * that it's ready to receive commands. It is unclear if the bit needs
> > > +	 * to be read for each transaction mailbox, ie. the firmware can switch
> > > +	 * it on and off as needed. Second, there is no defined timeout for
> > > +	 * mailbox ready, like there is for the doorbell interface.
> > > +	 *
> > > +	 * Assumptions:
> > > +	 * 1. The firmware might toggle the Mailbox Interface Ready bit, check
> > > +	 *    it for every command.
> > > +	 *
> > > +	 * 2. If the doorbell is clear, the firmware should have first set the
> > > +	 *    Mailbox Interface Ready bit. Therefore, waiting for the doorbell
> > > +	 *    to be ready is sufficient.
> > > +	 */
> > > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > > +	if (rc) {
> > > +		dev_warn(dev, "Mailbox interface not ready\n");
> > > +		goto out;
> > > +	}
> > > +
> > > +	md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET);
> > > +	if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) {
> > > +		dev_err(dev,
> > > +			"mbox: reported doorbell ready, but not mbox ready\n");
> > > +		goto out;
> > > +	}
> > > +
> > > +	/*
> > > +	 * Hardware shouldn't allow a ready status but also have failure bits
> > > +	 * set. Spit out an error, this should be a bug report
> > > +	 */
> > > +	rc = -EFAULT;
> > > +	if (md_status & CXLMDEV_DEV_FATAL) {
> > > +		dev_err(dev, "mbox: reported ready, but fatal\n");
> > > +		goto out;
> > > +	}
> > > +	if (md_status & CXLMDEV_FW_HALT) {
> > > +		dev_err(dev, "mbox: reported ready, but halted\n");
> > > +		goto out;
> > > +	}
> > > +	if (CXLMDEV_RESET_NEEDED(md_status)) {
> > > +		dev_err(dev, "mbox: reported ready, but reset needed\n");
> > > +		goto out;
> > > +	}
> > > +
> > > +	/* with lock held */
> > > +	return 0;
> > > +
> > > +out:
> > > +	mutex_unlock(&cxlm->mbox_mutex);
> > > +	return rc;
> > > +}
> > > +
> > > +/**
> > > + * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
> > > + * @cxlm: The CXL memory device to communicate with.
> > > + *
> > > + * Context: Any context. Expects mbox_lock to be held.
> > > + */
> > > +static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> > > +{
> > > +	mutex_unlock(&cxlm->mbox_mutex);
> > > +}
> > > +
> > > +/**
> > > + * cxl_mem_setup_regs() - Setup necessary MMIO.
> > > + * @cxlm: The CXL memory device to communicate with.
> > > + *
> > > + * Return: 0 if all necessary registers mapped.
> > > + *
> > > + * A memory device is required by spec to implement a certain set of MMIO
> > > + * regions. The purpose of this function is to enumerate and map those
> > > + * registers.
> > > + */
> > > +static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
> > > +{
> > > +	struct device *dev = &cxlm->pdev->dev;
> > > +	int cap, cap_count;
> > > +	u64 cap_array;
> > > +
> > > +	cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET);
> > > +	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
> > > +	    CXLDEV_CAP_ARRAY_CAP_ID)
> > > +		return -ENODEV;
> > > +
> > > +	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
> > > +
> > > +	for (cap = 1; cap <= cap_count; cap++) {
> > > +		void __iomem *register_block;
> > > +		u32 offset;
> > > +		u16 cap_id;
> > > +
> > > +		cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff;
> > > +		offset = readl(cxlm->regs + cap * 0x10 + 0x4);
> > > +		register_block = cxlm->regs + offset;
> > > +
> > > +		switch (cap_id) {
> > > +		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
> > > +			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
> > > +			cxlm->status_regs = register_block;
> > > +			break;
> > > +		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
> > > +			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
> > > +			cxlm->mbox_regs = register_block;
> > > +			break;
> > > +		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
> > > +			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
> > > +			break;
> > > +		case CXLDEV_CAP_CAP_ID_MEMDEV:
> > > +			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
> > > +			cxlm->memdev_regs = register_block;
> > > +			break;
> > > +		default:
> > > +			dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset);
> > > +			break;
> > > +		}
> > > +	}
> > > +
> > > +	if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) {
> > > +		dev_err(dev, "registers not found: %s%s%s\n",
> > > +			!cxlm->status_regs ? "status " : "",
> > > +			!cxlm->mbox_regs ? "mbox " : "",
> > > +			!cxlm->memdev_regs ? "memdev" : "");
> > > +		return -ENXIO;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> > > +{
> > > +	const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET);
> > > +
> > > +	cxlm->payload_size =
> > > +		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
> > > +
> > > +	/*
> > > +	 * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register
> > > +	 *
> > > +	 * If the size is too small, mandatory commands will not work and so
> > > +	 * there's no point in going forward. If the size is too large, there's
> > > +	 * no harm is soft limiting it.
> > > +	 */
> > > +	cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
> > > +	if (cxlm->payload_size < 256) {
> > > +		dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
> > > +			cxlm->payload_size);
> > > +		return -ENXIO;
> > > +	}
> > > +
> > > +	dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
> > > +		cxlm->payload_size);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> > > +				      u32 reg_hi)
> > > +{
> > > +	struct device *dev = &pdev->dev;
> > > +	struct cxl_mem *cxlm;
> > > +	void __iomem *regs;
> > > +	u64 offset;
> > > +	u8 bar;
> > > +	int rc;
> > > +
> > > +	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
> > > +	if (!cxlm) {
> > > +		dev_err(dev, "No memory available\n");
> > > +		return NULL;
> > > +	}
> > > +
> > > +	offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
> > > +	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
> > > +
> > > +	/* Basic sanity check that BAR is big enough */
> > > +	if (pci_resource_len(pdev, bar) < offset) {
> > > +		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
> > > +			&pdev->resource[bar], (unsigned long long)offset);
> > > +		return NULL;
> > > +	}
> > > +
> > > +	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
> > > +	if (rc != 0) {
> > > +		dev_err(dev, "failed to map registers\n");
> > > +		return NULL;
> > > +	}
> > > +	regs = pcim_iomap_table(pdev)[bar];
> > > +
> > > +	mutex_init(&cxlm->mbox_mutex);
> > > +	cxlm->pdev = pdev;
> > > +	cxlm->regs = regs + offset;
> > > +
> > > +	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> > > +	return cxlm;
> > > +}
> > >  
> > >  static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> > >  {
> > > @@ -28,10 +423,85 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> > >  	return 0;
> > >  }
> > >  
> > > +/**
> > > + * cxl_mem_identify() - Send the IDENTIFY command to the device.
> > > + * @cxlm: The device to identify.
> > > + *
> > > + * Return: 0 if identify was executed successfully.
> > > + *
> > > + * This will dispatch the identify command to the device and on success populate
> > > + * structures to be exported to sysfs.
> > > + */
> > > +static int cxl_mem_identify(struct cxl_mem *cxlm)
> > > +{
> > > +	struct cxl_mbox_identify {
> > > +		char fw_revision[0x10];
> > > +		__le64 total_capacity;
> > > +		__le64 volatile_capacity;
> > > +		__le64 persistent_capacity;
> > > +		__le64 partition_align;
> > > +		__le16 info_event_log_size;
> > > +		__le16 warning_event_log_size;
> > > +		__le16 failure_event_log_size;
> > > +		__le16 fatal_event_log_size;
> > > +		__le32 lsa_size;
> > > +		u8 poison_list_max_mer[3];
> > > +		__le16 inject_poison_limit;
> > > +		u8 poison_caps;
> > > +		u8 qos_telemetry_caps;
> > > +	} __packed id;
> > > +	struct mbox_cmd mbox_cmd = {
> > > +		.opcode = CXL_MBOX_OP_IDENTIFY,
> > > +		.payload_out = &id,
> > > +		.size_in = 0,
> > > +	};
> > > +	int rc;
> > > +
> > > +	/* Retrieve initial device memory map */
> > > +	rc = cxl_mem_mbox_get(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > +	cxl_mem_mbox_put(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	/* TODO: Handle retry or reset responses from firmware. */
> > > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > > +		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > > +			mbox_cmd.return_code);
> > > +		return -ENXIO;
> > > +	}
> > > +
> > > +	if (mbox_cmd.size_out != sizeof(id))
> > > +		return -ENXIO;
> > > +
> > > +	/*
> > > +	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> > > +	 * For now, only the capacity is exported in sysfs
> > > +	 */
> > > +	cxlm->ram.range.start = 0;
> > > +	cxlm->ram.range.end = le64_to_cpu(id.volatile_capacity) - 1;
> > > +
> > > +	cxlm->pmem.range.start = 0;
> > > +	cxlm->pmem.range.end = le64_to_cpu(id.persistent_capacity) - 1;
> > > +
> > > +	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
> > > +
> > > +	return rc;
> > > +}
> > > +
> > >  static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > >  {
> > >  	struct device *dev = &pdev->dev;
> > > -	int regloc;
> > > +	struct cxl_mem *cxlm;
> > > +	int rc, regloc, i;
> > > +	u32 regloc_size;
> > > +
> > > +	rc = pcim_enable_device(pdev);
> > > +	if (rc)
> > > +		return rc;
> > >  
> > >  	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> > >  	if (!regloc) {
> > > @@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > >  		return -ENXIO;
> > >  	}
> > >  
> > > -	return 0;
> > > +	/* Get the size of the Register Locator DVSEC */
> > > +	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
> > > +	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
> > > +
> > > +	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
> > > +
> > > +	rc = -ENXIO;
> > > +	for (i = regloc; i < regloc + regloc_size; i += 8) {
> > > +		u32 reg_lo, reg_hi;
> > > +		u8 reg_type;
> > > +
> > > +		/* "register low and high" contain other bits */
> > > +		pci_read_config_dword(pdev, i, &reg_lo);
> > > +		pci_read_config_dword(pdev, i + 4, &reg_hi);
> > > +
> > > +		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
> > > +
> > > +		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
> > > +			rc = 0;
> > > +			cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
> > > +			if (!cxlm)
> > > +				rc = -ENODEV;
> > > +			break;
> > > +		}
> > > +	}
> > > +
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	rc = cxl_mem_setup_regs(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	rc = cxl_mem_setup_mailbox(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	return cxl_mem_identify(cxlm);
> > >  }
> > >  
> > >  static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > > diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> > > index f135b9f7bb21..ffcbc13d7b5b 100644
> > > --- a/drivers/cxl/pci.h
> > > +++ b/drivers/cxl/pci.h
> > > @@ -14,5 +14,18 @@
> > >  #define PCI_DVSEC_ID_CXL		0x0
> > >  
> > >  #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> > > +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
> > > +
> > > +/* BAR Indicator Register (BIR) */
> > > +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
> > > +
> > > +/* Register Block Identifier (RBI) */
> > > +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
> > > +#define CXL_REGLOC_RBI_EMPTY 0
> > > +#define CXL_REGLOC_RBI_COMPONENT 1
> > > +#define CXL_REGLOC_RBI_VIRT 2
> > > +#define CXL_REGLOC_RBI_MEMDEV 3
> > > +
> > > +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
> > >  
> > >  #endif /* __CXL_PCI_H__ */
> > > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > > index e709ae8235e7..6267ca9ae683 100644
> > > --- a/include/uapi/linux/pci_regs.h
> > > +++ b/include/uapi/linux/pci_regs.h
> > > @@ -1080,6 +1080,7 @@
> > >  
> > >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> > >  #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
> > > +#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000
> > >  #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
> > >  
> > >  /* Data Link Feature */  
> > 
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
  2021-02-10 16:17   ` Jonathan Cameron
@ 2021-02-10 17:12     ` Ben Widawsky
  2021-02-10 17:23       ` Jonathan Cameron
  0 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 17:12 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, Jonathan Corbet, Dave Jiang

On 21-02-10 16:17:07, Jonathan Cameron wrote:
> On Tue, 9 Feb 2021 16:02:52 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > From: Dan Williams <dan.j.williams@intel.com>
> > 
> > The CXL.mem protocol allows a device to act as a provider of "System
> > RAM" and/or "Persistent Memory" that is fully coherent as if the memory
> > was attached to the typical CPU memory controller.
> > 
> > With the CXL-2.0 specification a PCI endpoint can implement a "Type-3"
> > device interface and give the operating system control over "Host
> > Managed Device Memory". See section 2.3 Type 3 CXL Device.
> > 
> > The memory range exported by the device may optionally be described by
> > the platform firmware memory map, or by infrastructure like LIBNVDIMM to
> > provision persistent memory capacity from one, or more, CXL.mem devices.
> > 
> > A pre-requisite for Linux-managed memory-capacity provisioning is this
> > cxl_mem driver that can speak the mailbox protocol defined in section
> > 8.2.8.4 Mailbox Registers.
> > 
> > For now just land the initial driver boiler-plate and Documentation/
> > infrastructure.
> > 
> > Link: https://www.computeexpresslink.org/download-the-specification
> > Cc: Jonathan Corbet <corbet@lwn.net>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Acked-by: David Rientjes <rientjes@google.com> (v1)
> 
> A few trivial bits inline but nothing that I feel that strongly about.
> It is probably a good idea to add a note about generic dvsec code
> somewhere in this patch description (to avoid people raising it on
> future versions!)
> 
> With the define of PCI_EXT_CAP_ID_DVSEC dropped (it's in the generic
> header already).
> 
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> 
> > ---
> >  Documentation/driver-api/cxl/index.rst        | 12 ++++
> >  .../driver-api/cxl/memory-devices.rst         | 29 +++++++++
> >  Documentation/driver-api/index.rst            |  1 +
> >  drivers/Kconfig                               |  1 +
> >  drivers/Makefile                              |  1 +
> >  drivers/cxl/Kconfig                           | 35 +++++++++++
> >  drivers/cxl/Makefile                          |  4 ++
> >  drivers/cxl/mem.c                             | 63 +++++++++++++++++++
> >  drivers/cxl/pci.h                             | 18 ++++++
> >  include/linux/pci_ids.h                       |  1 +
> >  10 files changed, 165 insertions(+)
> >  create mode 100644 Documentation/driver-api/cxl/index.rst
> >  create mode 100644 Documentation/driver-api/cxl/memory-devices.rst
> >  create mode 100644 drivers/cxl/Kconfig
> >  create mode 100644 drivers/cxl/Makefile
> >  create mode 100644 drivers/cxl/mem.c
> >  create mode 100644 drivers/cxl/pci.h
> > 
> > diff --git a/Documentation/driver-api/cxl/index.rst b/Documentation/driver-api/cxl/index.rst
> > new file mode 100644
> > index 000000000000..036e49553542
> > --- /dev/null
> > +++ b/Documentation/driver-api/cxl/index.rst
> > @@ -0,0 +1,12 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +====================
> > +Compute Express Link
> > +====================
> > +
> > +.. toctree::
> > +   :maxdepth: 1
> > +
> > +   memory-devices
> > +
> > +.. only::  subproject and html
> > diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
> > new file mode 100644
> > index 000000000000..43177e700d62
> > --- /dev/null
> > +++ b/Documentation/driver-api/cxl/memory-devices.rst
> > @@ -0,0 +1,29 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +.. include:: <isonum.txt>
> > +
> > +===================================
> > +Compute Express Link Memory Devices
> > +===================================
> > +
> > +A Compute Express Link Memory Device is a CXL component that implements the
> > +CXL.mem protocol. It contains some amount of volatile memory, persistent memory,
> > +or both. It is enumerated as a PCI device for configuration and passing
> > +messages over an MMIO mailbox. Its contribution to the System Physical
> > +Address space is handled via HDM (Host Managed Device Memory) decoders
> > +that optionally define a device's contribution to an interleaved address
> > +range across multiple devices underneath a host-bridge or interleaved
> > +across host-bridges.
> > +
> > +Driver Infrastructure
> > +=====================
> > +
> > +This section covers the driver infrastructure for a CXL memory device.
> > +
> > +CXL Memory Device
> > +-----------------
> > +
> > +.. kernel-doc:: drivers/cxl/mem.c
> > +   :doc: cxl mem
> > +
> > +.. kernel-doc:: drivers/cxl/mem.c
> > +   :internal:
> > diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
> > index 2456d0a97ed8..d246a18fd78f 100644
> > --- a/Documentation/driver-api/index.rst
> > +++ b/Documentation/driver-api/index.rst
> > @@ -35,6 +35,7 @@ available subsections can be seen below.
> >     usb/index
> >     firewire
> >     pci/index
> > +   cxl/index
> >     spi
> >     i2c
> >     ipmb
> > diff --git a/drivers/Kconfig b/drivers/Kconfig
> > index dcecc9f6e33f..62c753a73651 100644
> > --- a/drivers/Kconfig
> > +++ b/drivers/Kconfig
> > @@ -6,6 +6,7 @@ menu "Device Drivers"
> >  source "drivers/amba/Kconfig"
> >  source "drivers/eisa/Kconfig"
> >  source "drivers/pci/Kconfig"
> > +source "drivers/cxl/Kconfig"
> >  source "drivers/pcmcia/Kconfig"
> >  source "drivers/rapidio/Kconfig"
> >  
> > diff --git a/drivers/Makefile b/drivers/Makefile
> > index fd11b9ac4cc3..678ea810410f 100644
> > --- a/drivers/Makefile
> > +++ b/drivers/Makefile
> > @@ -73,6 +73,7 @@ obj-$(CONFIG_NVM)		+= lightnvm/
> >  obj-y				+= base/ block/ misc/ mfd/ nfc/
> >  obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
> >  obj-$(CONFIG_DAX)		+= dax/
> > +obj-$(CONFIG_CXL_BUS)		+= cxl/
> >  obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
> >  obj-$(CONFIG_NUBUS)		+= nubus/
> >  obj-y				+= macintosh/
> > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > new file mode 100644
> > index 000000000000..9e80b311e928
> > --- /dev/null
> > +++ b/drivers/cxl/Kconfig
> > @@ -0,0 +1,35 @@
> > +# SPDX-License-Identifier: GPL-2.0-only
> > +menuconfig CXL_BUS
> > +	tristate "CXL (Compute Express Link) Devices Support"
> > +	depends on PCI
> > +	help
> > +	  CXL is a bus that is electrically compatible with PCI Express, but
> > +	  layers three protocols on that signalling (CXL.io, CXL.cache, and
> > +	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
> > +	  locally, the CXL.mem protocol allows devices to be fully coherent
> > +	  memory targets, the CXL.io protocol is equivalent to PCI Express.
> > +	  Say 'y' to enable support for the configuration and management of
> > +	  devices supporting these protocols.
> > +
> > +if CXL_BUS
> > +
> > +config CXL_MEM
> > +	tristate "CXL.mem: Memory Devices"
> > +	help
> > +	  The CXL.mem protocol allows a device to act as a provider of
> > +	  "System RAM" and/or "Persistent Memory" that is fully coherent
> > +	  as if the memory was attached to the typical CPU memory
> > +	  controller.
> > +
> > +	  Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as
> > +	  a module) that will attach to CXL.mem devices for
> > +	  configuration, provisioning, and health monitoring. This
> > +	  driver is required for dynamic provisioning of CXL.mem
> > +	  attached memory which is a prerequisite for persistent memory
> > +	  support. Typically volatile memory is mapped by platform
> > +	  firmware and included in the platform memory map, but in some
> > +	  cases the OS is responsible for mapping that memory. See
> > +	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
> > +
> > +	  If unsure say 'm'.
> > +endif
> > diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
> > new file mode 100644
> > index 000000000000..4a30f7c3fc4a
> > --- /dev/null
> > +++ b/drivers/cxl/Makefile
> > @@ -0,0 +1,4 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +obj-$(CONFIG_CXL_MEM) += cxl_mem.o
> > +
> > +cxl_mem-y := mem.o
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > new file mode 100644
> > index 000000000000..99a6571508df
> > --- /dev/null
> > +++ b/drivers/cxl/mem.c
> > @@ -0,0 +1,63 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> > +#include <linux/module.h>
> > +#include <linux/pci.h>
> > +#include <linux/io.h>
> > +#include "pci.h"
> > +
> > +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> > +{
> > +	int pos;
> > +
> > +	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
> > +	if (!pos)
> > +		return 0;
> > +
> > +	while (pos) {
> > +		u16 vendor, id;
> > +
> > +		pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor);
> > +		pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id);
> > +		if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id)
> > +			return pos;
> > +
> > +		pos = pci_find_next_ext_capability(pdev, pos,
> > +						   PCI_EXT_CAP_ID_DVSEC);
> > +	}
> > +
> > +	return 0;
> 
> Christopher Hellwig raised this in v1. 
> 
> https://lore.kernel.org/linux-pci/20201104201141.GA399378@bjorn-Precision-5520/
> 
> +CC Dave Jiang for update on that.
> 
> This wants to move towards a generic helper.  We can do the deduplication
> later as Bjorn suggested.
> 
> > +}
> > +
> > +static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > +{
> > +	struct device *dev = &pdev->dev;
> > +	int regloc;
> > +
> > +	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> > +	if (!regloc) {
> > +		dev_err(dev, "register location dvsec not found\n");
> > +		return -ENXIO;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > +	/* PCI class code for CXL.mem Type-3 Devices */
> > +	{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
> > +	  PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF, 0xffffff, 0 },
> 
> Having looked at this and thought 'thats a bit tricky to check'
> I did a quick grep and seems the kernel is split between this approach
> and people going with the mor readable c99 style initiators
> 	.class = .. etc
> 
> Personally I'd find the c99 approach easier to read. 
> 

Well, it's Dan's patch, but I did modify this last. I took a look around, and
the best fit seems to me to be:
-       { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
-         PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF, 0xffffff, 0 },
+       { PCI_DEVICE_CLASS((PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF), ~0)},

That work for you?
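
For reference, PCI_DEVICE_CLASS() expands to the C99 designated initializers you
were asking for; quoting include/linux/pci.h from memory, so worth double
checking:

#define PCI_DEVICE_CLASS(dev_class, dev_class_mask) \
	.class = (dev_class), .class_mask = (dev_class_mask), \
	.vendor = PCI_ANY_ID, .device = PCI_ANY_ID, \
	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID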

> > +	{ /* terminate list */ },
> > +};
> > +MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
> > +
> > +static struct pci_driver cxl_mem_driver = {
> > +	.name			= KBUILD_MODNAME,
> > +	.id_table		= cxl_mem_pci_tbl,
> > +	.probe			= cxl_mem_probe,
> > +	.driver	= {
> > +		.probe_type	= PROBE_PREFER_ASYNCHRONOUS,
> > +	},
> > +};
> > +
> > +MODULE_LICENSE("GPL v2");
> > +module_pci_driver(cxl_mem_driver);
> > diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> > new file mode 100644
> > index 000000000000..f135b9f7bb21
> > --- /dev/null
> > +++ b/drivers/cxl/pci.h
> > @@ -0,0 +1,18 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> > +#ifndef __CXL_PCI_H__
> > +#define __CXL_PCI_H__
> > +
> > +#define CXL_MEMORY_PROGIF	0x10
> > +
> > +/*
> > + * See section 8.1 Configuration Space Registers in the CXL 2.0
> > + * Specification
> > + */
> > +#define PCI_EXT_CAP_ID_DVSEC		0x23
> 
> This is already in include/uapi/linux/pci_regs.h
> 
> > +#define PCI_DVSEC_VENDOR_ID_CXL		0x1E98
> > +#define PCI_DVSEC_ID_CXL		0x0
> > +
> > +#define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> > +
> > +#endif /* __CXL_PCI_H__ */
> > diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
> > index d8156a5dbee8..766260a9b247 100644
> > --- a/include/linux/pci_ids.h
> > +++ b/include/linux/pci_ids.h
> > @@ -51,6 +51,7 @@
> >  #define PCI_BASE_CLASS_MEMORY		0x05
> >  #define PCI_CLASS_MEMORY_RAM		0x0500
> >  #define PCI_CLASS_MEMORY_FLASH		0x0501
> > +#define PCI_CLASS_MEMORY_CXL		0x0502
> >  #define PCI_CLASS_MEMORY_OTHER		0x0580
> >  
> >  #define PCI_BASE_CLASS_BRIDGE		0x06
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
  2021-02-10 17:12     ` Ben Widawsky
@ 2021-02-10 17:23       ` Jonathan Cameron
  0 siblings, 0 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 17:23 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, Jonathan Corbet, Dave Jiang

On Wed, 10 Feb 2021 09:12:20 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

...
   
> > > +}
> > > +
> > > +static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > > +{
> > > +	struct device *dev = &pdev->dev;
> > > +	int regloc;
> > > +
> > > +	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> > > +	if (!regloc) {
> > > +		dev_err(dev, "register location dvsec not found\n");
> > > +		return -ENXIO;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > > +	/* PCI class code for CXL.mem Type-3 Devices */
> > > +	{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
> > > +	  PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF, 0xffffff, 0 },  
> > 
> > Having looked at this and thought 'thats a bit tricky to check'
> > I did a quick grep and seems the kernel is split between this approach
> > and people going with the mor readable c99 style initiators
> > 	.class = .. etc
> > 
> > Personally I'd find the c99 approach easier to read. 
> >   
> 
> Well, it's Dan's patch, but I did modify this last. I took a look around, and
> the best fit seems to me to be:
> -       { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
> -         PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF, 0xffffff, 0 },
> +       { PCI_DEVICE_CLASS((PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF), ~0)},
> 
> That work for you?
> 

Yes that's definitely nicer.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 16:55       ` Ben Widawsky
@ 2021-02-10 17:30         ` Jonathan Cameron
  2021-02-10 18:16         ` Ben Widawsky
  1 sibling, 0 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 17:30 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Wed, 10 Feb 2021 08:55:57 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-02-10 15:07:59, Jonathan Cameron wrote:
> > On Wed, 10 Feb 2021 13:32:52 +0000
> > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> >   
> > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > >   
> > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > The mailbox is used to interact with the firmware running on the memory
> > > > device. The flow is proven with one implemented command, "identify".
> > > > Because the class code has already told the driver this is a memory
> > > > device and the identify command is mandatory.
> > > > 
> > > > CXL devices contain an array of capabilities that describe the
> > > > interactions software can have with the device or firmware running on
> > > > the device. A CXL compliant device must implement the device status and
> > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > implement the memory device capability. Each of the capabilities can
> > > > [will] provide an offset within the MMIO region for interacting with the
> > > > CXL device.
> > > > 
> > > > The capabilities tell the driver how to find and map the register space
> > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > and not handled in this driver.
> > > > 
> > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > a background command. That implementation is saved for a later time.
> > > > 
> > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>    
> > > 
> > > Hi Ben,
> > > 
> > >   
> > > > +/**
> > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > + * @cxlm: The CXL memory device to communicate with.
> > > > + * @mbox_cmd: Command to send to the memory device.
> > > > + *
> > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > + *         succeeded.    
> > > 
> > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > enters an infinite loop as a result.  
> 
> I meant to fix that.
> 
> > > 
> > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > two levels of error checking - the example here proves how easy it is to forget
> > > one.  
> 
> Demonstrably, you're correct. I think it would be good to have a kernel-only
> mbox command that does the error checking though. Let me type something up and
> see how it looks.
> 
> > > 
> > > Now all I have to do is figure out why I'm getting an error in the first place!  
> > 
> > For reference this seems to be our old issue of arm64 memcpy_fromio() only doing 8 byte
> > or 1 byte copies.  The hack in QEMU to allow that to work, doesn't work.
> > Result is that 1 byte reads replicate across the register
> > (in this case instead of 0000001c I get 1c1c1c1c)
> > 
> > For these particular registers, we are covered by the rules in 8.2, which say that
> > aligned 1-, 2-, 4-, and 8-byte reads of 64-bit registers etc. are fine.
> > 
> > So we should not have to care.  This isn't true for the component registers where
> > we need to guarantee 4 or 8 byte reads only.
> > 
> > For this particular issue the mailbox_read_reg() function in the QEMU code
> > needs to handle the size 1 case and set min_access_size = 1 for
> > mailbox_ops.  Logically it should also handle the 2 byte case I think,
> > but I'm not hitting that.
> > 
> > Jonathan  
> 
> I think the latest QEMU patches should do the right thing (I have a v4 branch if
> you want to try it). If they don't, it'd be worth debugging. The memory
> accessors should split up or combine the reads/writes to whatever the emulation
> supports (4 or 8 only in this case).
> 
> We can move this discussion to the QEMU list if it's not just a simple bug on my
> part.

I'm on your v4 QEMU branch.

I can follow up in the QEMU thread, but it needs to do 1-byte reads as well
(noting it here since someone else might find this thread).

The arm64 implementation is 'interesting'. Maybe we want to fix it, but I
suspect we'd have a non-trivial time arguing that it is broken.

The CXL spec allows (I think) both 1- and 2-byte reads of this particular register.

/*
 * Copy data from IO memory space to "real" memory space.
 */
void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
{
	while (count && !IS_ALIGNED((unsigned long)from, 8)) {
		*(u8 *)to = __raw_readb(from);
		from++;
		to++;
		count--;
	}

	while (count >= 8) {
		*(u64 *)to = __raw_readq(from);
		from += 8;
		to += 8;
		count -= 8;
	}

	while (count) {
		*(u8 *)to = __raw_readb(from);
		from++;
		to++;
		count--;
	}
}
EXPORT_SYMBOL(__memcpy_fromio);
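
To make the failure mode concrete, here is roughly what happens for a short
aligned read (illustrative only, using the driver's payload copy as the
example):

	u32 val;

	/*
	 * count = 4 and the source is 8-byte aligned, so the head loop and
	 * the readq loop above are both skipped and this is issued as four
	 * readb() calls. If the emulation mishandles those 1-byte reads,
	 * 0x0000001c can come back as 0x1c1c1c1c, which is the symptom I'm
	 * seeing.
	 */
	memcpy_fromio(&val, cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET,
		      sizeof(val));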

> 
> >   
> > > 
> > > Jonathan
> > > 
> > > 
> > >   
> > > > + *
> > > > + * This is a generic form of the CXL mailbox send command, thus the only I/O
> > > > + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > > > + * types of CXL devices may have further information available upon error
> > > > + * conditions.
> > > > + *
> > > > + * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > > > + * mailbox to be OS controlled and the secondary mailbox to be used by system
> > > > + * firmware. This allows the OS and firmware to communicate with the device and
> > > > + * not need to coordinate with each other. The driver only uses the primary
> > > > + * mailbox.
> > > > + */
> > > > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > > +				 struct mbox_cmd *mbox_cmd)
> > > > +{
> > > > +	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > > > +	u64 cmd_reg, status_reg;
> > > > +	size_t out_len;
> > > > +	int rc;
> > > > +
> > > > +	lockdep_assert_held(&cxlm->mbox_mutex);
> > > > +
> > > > +	/*
> > > > +	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> > > > +	 *   1. Caller reads MB Control Register to verify doorbell is clear
> > > > +	 *   2. Caller writes Command Register
> > > > +	 *   3. Caller writes Command Payload Registers if input payload is non-empty
> > > > +	 *   4. Caller writes MB Control Register to set doorbell
> > > > +	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
> > > > +	 *   6. Caller reads MB Status Register to fetch Return code
> > > > +	 *   7. If command successful, Caller reads Command Register to get Payload Length
> > > > +	 *   8. If output payload is non-empty, host reads Command Payload Registers
> > > > +	 *
> > > > +	 * Hardware is free to do whatever it wants before the doorbell is rung,
> > > > +	 * and isn't allowed to change anything after it clears the doorbell. As
> > > > +	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
> > > > +	 * also happen in any order (though some orders might not make sense).
> > > > +	 */
> > > > +
> > > > +	/* #1 */
> > > > +	if (cxl_doorbell_busy(cxlm)) {
> > > > +		dev_err_ratelimited(&cxlm->pdev->dev,
> > > > +				    "Mailbox re-busy after acquiring\n");
> > > > +		return -EBUSY;
> > > > +	}
> > > > +
> > > > +	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
> > > > +			     mbox_cmd->opcode);
> > > > +	if (mbox_cmd->size_in) {
> > > > +		if (WARN_ON(!mbox_cmd->payload_in))
> > > > +			return -EINVAL;
> > > > +
> > > > +		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
> > > > +				      mbox_cmd->size_in);
> > > > +		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
> > > > +	}
> > > > +
> > > > +	/* #2, #3 */
> > > > +	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > > > +
> > > > +	/* #4 */
> > > > +	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
> > > > +	writel(CXLDEV_MBOX_CTRL_DOORBELL,
> > > > +	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
> > > > +
> > > > +	/* #5 */
> > > > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > > > +	if (rc == -ETIMEDOUT) {
> > > > +		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
> > > > +		return rc;
> > > > +	}
> > > > +
> > > > +	/* #6 */
> > > > +	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
> > > > +	mbox_cmd->return_code =
> > > > +		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
> > > > +
> > > > +	if (mbox_cmd->return_code != 0) {
> > > > +		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
> > > > +		return 0;    
> > > 
> > > I'd return some sort of error in this path.  Otherwise the sort of missing
> > > handling I mention above is too easy to hit.
> > >   
> > > > +	}
> > > > +
> > > > +	/* #7 */
> > > > +	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > > > +	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
> > > > +
> > > > +	/* #8 */
> > > > +	if (out_len && mbox_cmd->payload_out)
> > > > +		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> > > > +
> > > > +	mbox_cmd->size_out = out_len;
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +/**
> > > > + * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
> > > > + * @cxlm: The memory device to gain access to.
> > > > + *
> > > > + * Context: Any context. Takes the mbox_lock.
> > > > + * Return: 0 if exclusive access was acquired.
> > > > + */
> > > > +static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
> > > > +{
> > > > +	struct device *dev = &cxlm->pdev->dev;
> > > > +	int rc = -EBUSY;
> > > > +	u64 md_status;
> > > > +
> > > > +	mutex_lock_io(&cxlm->mbox_mutex);
> > > > +
> > > > +	/*
> > > > +	 * XXX: There is some amount of ambiguity in the 2.0 version of the spec
> > > > +	 * around the mailbox interface ready (8.2.8.5.1.1).  The purpose of the
> > > > +	 * bit is to allow firmware running on the device to notify the driver
> > > > +	 * that it's ready to receive commands. It is unclear if the bit needs
> > > > +	 * to be read for each transaction mailbox, ie. the firmware can switch
> > > > +	 * it on and off as needed. Second, there is no defined timeout for
> > > > +	 * mailbox ready, like there is for the doorbell interface.
> > > > +	 *
> > > > +	 * Assumptions:
> > > > +	 * 1. The firmware might toggle the Mailbox Interface Ready bit, check
> > > > +	 *    it for every command.
> > > > +	 *
> > > > +	 * 2. If the doorbell is clear, the firmware should have first set the
> > > > +	 *    Mailbox Interface Ready bit. Therefore, waiting for the doorbell
> > > > +	 *    to be ready is sufficient.
> > > > +	 */
> > > > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > > > +	if (rc) {
> > > > +		dev_warn(dev, "Mailbox interface not ready\n");
> > > > +		goto out;
> > > > +	}
> > > > +
> > > > +	md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET);
> > > > +	if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) {
> > > > +		dev_err(dev,
> > > > +			"mbox: reported doorbell ready, but not mbox ready\n");
> > > > +		goto out;
> > > > +	}
> > > > +
> > > > +	/*
> > > > +	 * Hardware shouldn't allow a ready status but also have failure bits
> > > > +	 * set. Spit out an error, this should be a bug report
> > > > +	 */
> > > > +	rc = -EFAULT;
> > > > +	if (md_status & CXLMDEV_DEV_FATAL) {
> > > > +		dev_err(dev, "mbox: reported ready, but fatal\n");
> > > > +		goto out;
> > > > +	}
> > > > +	if (md_status & CXLMDEV_FW_HALT) {
> > > > +		dev_err(dev, "mbox: reported ready, but halted\n");
> > > > +		goto out;
> > > > +	}
> > > > +	if (CXLMDEV_RESET_NEEDED(md_status)) {
> > > > +		dev_err(dev, "mbox: reported ready, but reset needed\n");
> > > > +		goto out;
> > > > +	}
> > > > +
> > > > +	/* with lock held */
> > > > +	return 0;
> > > > +
> > > > +out:
> > > > +	mutex_unlock(&cxlm->mbox_mutex);
> > > > +	return rc;
> > > > +}
> > > > +
> > > > +/**
> > > > + * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
> > > > + * @cxlm: The CXL memory device to communicate with.
> > > > + *
> > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > + */
> > > > +static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> > > > +{
> > > > +	mutex_unlock(&cxlm->mbox_mutex);
> > > > +}
> > > > +
> > > > +/**
> > > > + * cxl_mem_setup_regs() - Setup necessary MMIO.
> > > > + * @cxlm: The CXL memory device to communicate with.
> > > > + *
> > > > + * Return: 0 if all necessary registers mapped.
> > > > + *
> > > > + * A memory device is required by spec to implement a certain set of MMIO
> > > > + * regions. The purpose of this function is to enumerate and map those
> > > > + * registers.
> > > > + */
> > > > +static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
> > > > +{
> > > > +	struct device *dev = &cxlm->pdev->dev;
> > > > +	int cap, cap_count;
> > > > +	u64 cap_array;
> > > > +
> > > > +	cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET);
> > > > +	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
> > > > +	    CXLDEV_CAP_ARRAY_CAP_ID)
> > > > +		return -ENODEV;
> > > > +
> > > > +	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
> > > > +
> > > > +	for (cap = 1; cap <= cap_count; cap++) {
> > > > +		void __iomem *register_block;
> > > > +		u32 offset;
> > > > +		u16 cap_id;
> > > > +
> > > > +		cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff;
> > > > +		offset = readl(cxlm->regs + cap * 0x10 + 0x4);
> > > > +		register_block = cxlm->regs + offset;
> > > > +
> > > > +		switch (cap_id) {
> > > > +		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
> > > > +			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
> > > > +			cxlm->status_regs = register_block;
> > > > +			break;
> > > > +		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
> > > > +			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
> > > > +			cxlm->mbox_regs = register_block;
> > > > +			break;
> > > > +		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
> > > > +			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
> > > > +			break;
> > > > +		case CXLDEV_CAP_CAP_ID_MEMDEV:
> > > > +			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
> > > > +			cxlm->memdev_regs = register_block;
> > > > +			break;
> > > > +		default:
> > > > +			dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset);
> > > > +			break;
> > > > +		}
> > > > +	}
> > > > +
> > > > +	if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) {
> > > > +		dev_err(dev, "registers not found: %s%s%s\n",
> > > > +			!cxlm->status_regs ? "status " : "",
> > > > +			!cxlm->mbox_regs ? "mbox " : "",
> > > > +			!cxlm->memdev_regs ? "memdev" : "");
> > > > +		return -ENXIO;
> > > > +	}
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> > > > +{
> > > > +	const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET);
> > > > +
> > > > +	cxlm->payload_size =
> > > > +		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
> > > > +
> > > > +	/*
> > > > +	 * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register
> > > > +	 *
> > > > +	 * If the size is too small, mandatory commands will not work and so
> > > > +	 * there's no point in going forward. If the size is too large, there's
> > > > +	 * no harm is soft limiting it.
> > > > +	 */
> > > > +	cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
> > > > +	if (cxlm->payload_size < 256) {
> > > > +		dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
> > > > +			cxlm->payload_size);
> > > > +		return -ENXIO;
> > > > +	}
> > > > +
> > > > +	dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
> > > > +		cxlm->payload_size);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> > > > +				      u32 reg_hi)
> > > > +{
> > > > +	struct device *dev = &pdev->dev;
> > > > +	struct cxl_mem *cxlm;
> > > > +	void __iomem *regs;
> > > > +	u64 offset;
> > > > +	u8 bar;
> > > > +	int rc;
> > > > +
> > > > +	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
> > > > +	if (!cxlm) {
> > > > +		dev_err(dev, "No memory available\n");
> > > > +		return NULL;
> > > > +	}
> > > > +
> > > > +	offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
> > > > +	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
> > > > +
> > > > +	/* Basic sanity check that BAR is big enough */
> > > > +	if (pci_resource_len(pdev, bar) < offset) {
> > > > +		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
> > > > +			&pdev->resource[bar], (unsigned long long)offset);
> > > > +		return NULL;
> > > > +	}
> > > > +
> > > > +	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
> > > > +	if (rc != 0) {
> > > > +		dev_err(dev, "failed to map registers\n");
> > > > +		return NULL;
> > > > +	}
> > > > +	regs = pcim_iomap_table(pdev)[bar];
> > > > +
> > > > +	mutex_init(&cxlm->mbox_mutex);
> > > > +	cxlm->pdev = pdev;
> > > > +	cxlm->regs = regs + offset;
> > > > +
> > > > +	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> > > > +	return cxlm;
> > > > +}
> > > >  
> > > >  static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> > > >  {
> > > > @@ -28,10 +423,85 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> > > >  	return 0;
> > > >  }
> > > >  
> > > > +/**
> > > > + * cxl_mem_identify() - Send the IDENTIFY command to the device.
> > > > + * @cxlm: The device to identify.
> > > > + *
> > > > + * Return: 0 if identify was executed successfully.
> > > > + *
> > > > + * This will dispatch the identify command to the device and on success populate
> > > > + * structures to be exported to sysfs.
> > > > + */
> > > > +static int cxl_mem_identify(struct cxl_mem *cxlm)
> > > > +{
> > > > +	struct cxl_mbox_identify {
> > > > +		char fw_revision[0x10];
> > > > +		__le64 total_capacity;
> > > > +		__le64 volatile_capacity;
> > > > +		__le64 persistent_capacity;
> > > > +		__le64 partition_align;
> > > > +		__le16 info_event_log_size;
> > > > +		__le16 warning_event_log_size;
> > > > +		__le16 failure_event_log_size;
> > > > +		__le16 fatal_event_log_size;
> > > > +		__le32 lsa_size;
> > > > +		u8 poison_list_max_mer[3];
> > > > +		__le16 inject_poison_limit;
> > > > +		u8 poison_caps;
> > > > +		u8 qos_telemetry_caps;
> > > > +	} __packed id;
> > > > +	struct mbox_cmd mbox_cmd = {
> > > > +		.opcode = CXL_MBOX_OP_IDENTIFY,
> > > > +		.payload_out = &id,
> > > > +		.size_in = 0,
> > > > +	};
> > > > +	int rc;
> > > > +
> > > > +	/* Retrieve initial device memory map */
> > > > +	rc = cxl_mem_mbox_get(cxlm);
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > > +	cxl_mem_mbox_put(cxlm);
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	/* TODO: Handle retry or reset responses from firmware. */
> > > > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > > > +		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > > > +			mbox_cmd.return_code);
> > > > +		return -ENXIO;
> > > > +	}
> > > > +
> > > > +	if (mbox_cmd.size_out != sizeof(id))
> > > > +		return -ENXIO;
> > > > +
> > > > +	/*
> > > > +	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> > > > +	 * For now, only the capacity is exported in sysfs
> > > > +	 */
> > > > +	cxlm->ram.range.start = 0;
> > > > +	cxlm->ram.range.end = le64_to_cpu(id.volatile_capacity) - 1;
> > > > +
> > > > +	cxlm->pmem.range.start = 0;
> > > > +	cxlm->pmem.range.end = le64_to_cpu(id.persistent_capacity) - 1;
> > > > +
> > > > +	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
> > > > +
> > > > +	return rc;
> > > > +}
> > > > +
> > > >  static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > > >  {
> > > >  	struct device *dev = &pdev->dev;
> > > > -	int regloc;
> > > > +	struct cxl_mem *cxlm;
> > > > +	int rc, regloc, i;
> > > > +	u32 regloc_size;
> > > > +
> > > > +	rc = pcim_enable_device(pdev);
> > > > +	if (rc)
> > > > +		return rc;
> > > >  
> > > >  	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> > > >  	if (!regloc) {
> > > > @@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > > >  		return -ENXIO;
> > > >  	}
> > > >  
> > > > -	return 0;
> > > > +	/* Get the size of the Register Locator DVSEC */
> > > > +	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
> > > > +	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
> > > > +
> > > > +	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
> > > > +
> > > > +	rc = -ENXIO;
> > > > +	for (i = regloc; i < regloc + regloc_size; i += 8) {
> > > > +		u32 reg_lo, reg_hi;
> > > > +		u8 reg_type;
> > > > +
> > > > +		/* "register low and high" contain other bits */
> > > > +		pci_read_config_dword(pdev, i, &reg_lo);
> > > > +		pci_read_config_dword(pdev, i + 4, &reg_hi);
> > > > +
> > > > +		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
> > > > +
> > > > +		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
> > > > +			rc = 0;
> > > > +			cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
> > > > +			if (!cxlm)
> > > > +				rc = -ENODEV;
> > > > +			break;
> > > > +		}
> > > > +	}
> > > > +
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	rc = cxl_mem_setup_regs(cxlm);
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	rc = cxl_mem_setup_mailbox(cxlm);
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	return cxl_mem_identify(cxlm);
> > > >  }
> > > >  
> > > >  static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > > > diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> > > > index f135b9f7bb21..ffcbc13d7b5b 100644
> > > > --- a/drivers/cxl/pci.h
> > > > +++ b/drivers/cxl/pci.h
> > > > @@ -14,5 +14,18 @@
> > > >  #define PCI_DVSEC_ID_CXL		0x0
> > > >  
> > > >  #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> > > > +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
> > > > +
> > > > +/* BAR Indicator Register (BIR) */
> > > > +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
> > > > +
> > > > +/* Register Block Identifier (RBI) */
> > > > +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
> > > > +#define CXL_REGLOC_RBI_EMPTY 0
> > > > +#define CXL_REGLOC_RBI_COMPONENT 1
> > > > +#define CXL_REGLOC_RBI_VIRT 2
> > > > +#define CXL_REGLOC_RBI_MEMDEV 3
> > > > +
> > > > +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
> > > >  
> > > >  #endif /* __CXL_PCI_H__ */
> > > > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > > > index e709ae8235e7..6267ca9ae683 100644
> > > > --- a/include/uapi/linux/pci_regs.h
> > > > +++ b/include/uapi/linux/pci_regs.h
> > > > @@ -1080,6 +1080,7 @@
> > > >  
> > > >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> > > >  #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
> > > > +#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000
> > > >  #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
> > > >  
> > > >  /* Data Link Feature */    
> > >   
> >   


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10  0:02 ` [PATCH v2 2/8] cxl/mem: Find device capabilities Ben Widawsky
  2021-02-10 13:32   ` Jonathan Cameron
@ 2021-02-10 17:41   ` Jonathan Cameron
  2021-02-10 18:53     ` Ben Widawsky
  1 sibling, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 17:41 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Tue, 9 Feb 2021 16:02:53 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> Provide enough functionality to utilize the mailbox of a memory device.
> The mailbox is used to interact with the firmware running on the memory
> device. The flow is proven with one implemented command, "identify",
> because the class code has already told the driver this is a memory
> device and the identify command is mandatory.
> 
> CXL devices contain an array of capabilities that describe the
> interactions software can have with the device or firmware running on
> the device. A CXL compliant device must implement the device status and
> the mailbox capability. Additionally, a CXL compliant memory device must
> implement the memory device capability. Each of the capabilities can
> [will] provide an offset within the MMIO region for interacting with the
> CXL device.
> 
> The capabilities tell the driver how to find and map the register space
> for CXL Memory Devices. The registers are required to utilize the CXL
> spec defined mailbox interface. The spec outlines two mailboxes, primary
> and secondary. The secondary mailbox is earmarked for system firmware,
> and not handled in this driver.
> 
> Primary mailboxes are capable of generating an interrupt when submitting
> a background command. That implementation is saved for a later time.
> 
> Link: https://www.computeexpresslink.org/download-the-specification
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>

A few more comments inline (proper review whereas my other reply was a
bug chase).

Jonathan

> ---
>  drivers/cxl/Kconfig           |  14 +
>  drivers/cxl/cxl.h             |  93 +++++++
>  drivers/cxl/mem.c             | 511 +++++++++++++++++++++++++++++++++-
>  drivers/cxl/pci.h             |  13 +
>  include/uapi/linux/pci_regs.h |   1 +
>  5 files changed, 630 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/cxl/cxl.h
> 
> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> index 9e80b311e928..c4ba3aa0a05d 100644
> --- a/drivers/cxl/Kconfig
> +++ b/drivers/cxl/Kconfig
> @@ -32,4 +32,18 @@ config CXL_MEM
>  	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
>  
>  	  If unsure say 'm'.
> +
> +config CXL_MEM_INSECURE_DEBUG
> +	bool "CXL.mem debugging"

As mentioned below, this makes me a tiny bit uncomfortable.

> +	depends on CXL_MEM
> +	help
> +	  Enable debug of all CXL command payloads.
> +
> +	  Some CXL devices and controllers support encryption and other
> +	  security features. The payloads for the commands that enable
> +	  those features may contain sensitive clear-text security
> +	  material. Disable debug of those command payloads by default.
> +	  If you are a kernel developer actively working on CXL
> +	  security enabling say Y, otherwise say N.
> +
>  endif
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> new file mode 100644
> index 000000000000..745f5e0bfce3
> --- /dev/null
> +++ b/drivers/cxl/cxl.h
> @@ -0,0 +1,93 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright(c) 2020 Intel Corporation. */
> +
> +#ifndef __CXL_H__
> +#define __CXL_H__
> +
> +#include <linux/bitfield.h>
> +#include <linux/bitops.h>
> +#include <linux/io.h>
> +
> +/* CXL 2.0 8.2.8.1 Device Capabilities Array Register */
> +#define CXLDEV_CAP_ARRAY_OFFSET 0x0
> +#define   CXLDEV_CAP_ARRAY_CAP_ID 0
> +#define   CXLDEV_CAP_ARRAY_ID_MASK GENMASK(15, 0)
> +#define   CXLDEV_CAP_ARRAY_COUNT_MASK GENMASK(47, 32)
> +/* CXL 2.0 8.2.8.2.1 CXL Device Capabilities */
> +#define CXLDEV_CAP_CAP_ID_DEVICE_STATUS 0x1
> +#define CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX 0x2
> +#define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3
> +#define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000
> +
> +/* CXL 2.0 8.2.8.4 Mailbox Registers */
> +#define CXLDEV_MBOX_CAPS_OFFSET 0x00
> +#define   CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0)
> +#define CXLDEV_MBOX_CTRL_OFFSET 0x04
> +#define   CXLDEV_MBOX_CTRL_DOORBELL BIT(0)
> +#define CXLDEV_MBOX_CMD_OFFSET 0x08
> +#define   CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK GENMASK(15, 0)
> +#define   CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK GENMASK(36, 16)
> +#define CXLDEV_MBOX_STATUS_OFFSET 0x10
> +#define   CXLDEV_MBOX_STATUS_RET_CODE_MASK GENMASK(47, 32)
> +#define CXLDEV_MBOX_BG_CMD_STATUS_OFFSET 0x18
> +#define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20
> +
> +/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
> +#define CXLMDEV_STATUS_OFFSET 0x0
> +#define   CXLMDEV_DEV_FATAL BIT(0)
> +#define   CXLMDEV_FW_HALT BIT(1)
> +#define   CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2)
> +#define     CXLMDEV_MS_NOT_READY 0
> +#define     CXLMDEV_MS_READY 1
> +#define     CXLMDEV_MS_ERROR 2
> +#define     CXLMDEV_MS_DISABLED 3
> +#define CXLMDEV_READY(status)                                                  \
> +	(FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) ==                \
> +	 CXLMDEV_MS_READY)
> +#define   CXLMDEV_MBOX_IF_READY BIT(4)
> +#define   CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5)
> +#define     CXLMDEV_RESET_NEEDED_NOT 0
> +#define     CXLMDEV_RESET_NEEDED_COLD 1
> +#define     CXLMDEV_RESET_NEEDED_WARM 2
> +#define     CXLMDEV_RESET_NEEDED_HOT 3
> +#define     CXLMDEV_RESET_NEEDED_CXL 4
> +#define CXLMDEV_RESET_NEEDED(status)                                           \
> +	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
> +	 CXLMDEV_RESET_NEEDED_NOT)
> +
> +/**
> + * struct cxl_mem - A CXL memory device
> + * @pdev: The PCI device associated with this CXL device.
> + * @regs: IO mappings to the device's MMIO
> + * @status_regs: CXL 2.0 8.2.8.3 Device Status Registers
> + * @mbox_regs: CXL 2.0 8.2.8.4 Mailbox Registers
> + * @memdev_regs: CXL 2.0 8.2.8.5 Memory Device Registers
> + * @payload_size: Size of space for payload
> + *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
> + * @mbox_mutex: Mutex to synchronize mailbox access.
> + * @firmware_version: Firmware version for the memory device.
> + * @pmem: Persistent memory capacity information.
> + * @ram: Volatile memory capacity information.
> + */
> +struct cxl_mem {
> +	struct pci_dev *pdev;
> +	void __iomem *regs;
> +
> +	void __iomem *status_regs;
> +	void __iomem *mbox_regs;
> +	void __iomem *memdev_regs;
> +
> +	size_t payload_size;
> +	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
> +	char firmware_version[0x10];
> +
> +	struct {
> +		struct range range;
> +	} pmem;

Christoph raised this in v1, and I agree with him that this would be more
compact and readable as

	struct range pmem_range;
	struct range ram_range;

The discussion seemed to get lost without being resolved, as far as I can see.

> +
> +	struct {
> +		struct range range;
> +	} ram;

> +};
> +
> +#endif /* __CXL_H__ */
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index 99a6571508df..0a868a15badc 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c


...

> +static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> +				 struct mbox_cmd *mbox_cmd)
> +{
> +	struct device *dev = &cxlm->pdev->dev;
> +
> +	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
> +		mbox_cmd->opcode, mbox_cmd->size_in);
> +
> +	if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {

Hmm.  Whilst I can see the advantage of this for debug, I'm not sure we want
it upstream even under a rather evil-looking CONFIG variable.

Is there a bigger lock we can use to avoid any chance of accidental enablement?


> +		print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
> +				     mbox_cmd->payload_in, mbox_cmd->size_in,
> +				     true);
> +	}
> +}
> +
> +/**
> + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> + * @cxlm: The CXL memory device to communicate with.
> + * @mbox_cmd: Command to send to the memory device.
> + *
> + * Context: Any context. Expects mbox_lock to be held.
> + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> + *         Caller should check the return code in @mbox_cmd to make sure it
> + *         succeeded.
> + *
> + * This is a generic form of the CXL mailbox send command, thus the only I/O
> + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> + * types of CXL devices may have further information available upon error
> + * conditions.
> + *
> + * The CXL spec allows for up to two mailboxes. The intention is for the primary
> + * mailbox to be OS controlled and the secondary mailbox to be used by system
> + * firmware. This allows the OS and firmware to communicate with the device and
> + * not need to coordinate with each other. The driver only uses the primary
> + * mailbox.
> + */
> +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> +				 struct mbox_cmd *mbox_cmd)
> +{
> +	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> +	u64 cmd_reg, status_reg;
> +	size_t out_len;
> +	int rc;
> +
> +	lockdep_assert_held(&cxlm->mbox_mutex);
> +
> +	/*
> +	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> +	 *   1. Caller reads MB Control Register to verify doorbell is clear
> +	 *   2. Caller writes Command Register
> +	 *   3. Caller writes Command Payload Registers if input payload is non-empty
> +	 *   4. Caller writes MB Control Register to set doorbell
> +	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
> +	 *   6. Caller reads MB Status Register to fetch Return code
> +	 *   7. If command successful, Caller reads Command Register to get Payload Length
> +	 *   8. If output payload is non-empty, host reads Command Payload Registers
> +	 *
> +	 * Hardware is free to do whatever it wants before the doorbell is rung,
> +	 * and isn't allowed to change anything after it clears the doorbell. As
> +	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
> +	 * also happen in any order (though some orders might not make sense).
> +	 */
> +
> +	/* #1 */
> +	if (cxl_doorbell_busy(cxlm)) {
> +		dev_err_ratelimited(&cxlm->pdev->dev,
> +				    "Mailbox re-busy after acquiring\n");
> +		return -EBUSY;
> +	}
> +
> +	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
> +			     mbox_cmd->opcode);
> +	if (mbox_cmd->size_in) {
> +		if (WARN_ON(!mbox_cmd->payload_in))
> +			return -EINVAL;
> +
> +		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
> +				      mbox_cmd->size_in);
> +		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
> +	}
> +
> +	/* #2, #3 */
> +	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> +
> +	/* #4 */
> +	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
> +	writel(CXLDEV_MBOX_CTRL_DOORBELL,
> +	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
> +
> +	/* #5 */
> +	rc = cxl_mem_wait_for_doorbell(cxlm);
> +	if (rc == -ETIMEDOUT) {
> +		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
> +		return rc;
> +	}
> +
> +	/* #6 */
> +	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
> +	mbox_cmd->return_code =
> +		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
> +
> +	if (mbox_cmd->return_code != 0) {
> +		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
> +		return 0;

See earlier diversion whilst I was chasing my bug (another branch of this
thread)

> +	}
> +
> +	/* #7 */
> +	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> +	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
> +
> +	/* #8 */
> +	if (out_len && mbox_cmd->payload_out)
> +		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> +
> +	mbox_cmd->size_out = out_len;
> +
> +	return 0;
> +}
> +


...

> +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> +				      u32 reg_hi)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct cxl_mem *cxlm;
> +	void __iomem *regs;
> +	u64 offset;
> +	u8 bar;
> +	int rc;
> +
> +	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
> +	if (!cxlm) {
> +		dev_err(dev, "No memory available\n");
> +		return NULL;
> +	}
> +
> +	offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
> +	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
> +
> +	/* Basic sanity check that BAR is big enough */
> +	if (pci_resource_len(pdev, bar) < offset) {
> +		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
> +			&pdev->resource[bar], (unsigned long long)offset);
> +		return NULL;
> +	}
> +
> +	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
> +	if (rc != 0) {

if (rc) 

> +		dev_err(dev, "failed to map registers\n");
> +		return NULL;
> +	}
> +	regs = pcim_iomap_table(pdev)[bar];
> +
> +	mutex_init(&cxlm->mbox_mutex);
> +	cxlm->pdev = pdev;
> +	cxlm->regs = regs + offset;
> +
> +	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> +	return cxlm;
> +}
>  

...

>  static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  {
>  	struct device *dev = &pdev->dev;
> -	int regloc;
> +	struct cxl_mem *cxlm;
> +	int rc, regloc, i;
> +	u32 regloc_size;
> +
> +	rc = pcim_enable_device(pdev);
> +	if (rc)
> +		return rc;
>  
>  	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
>  	if (!regloc) {
> @@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  		return -ENXIO;
>  	}
>  
> -	return 0;
> +	/* Get the size of the Register Locator DVSEC */
> +	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
> +	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
> +
> +	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
> +
> +	rc = -ENXIO;
> +	for (i = regloc; i < regloc + regloc_size; i += 8) {
> +		u32 reg_lo, reg_hi;
> +		u8 reg_type;
> +
> +		/* "register low and high" contain other bits */

The high register doesn't contain any other bits, so that's a tiny bit misleading.

> +		pci_read_config_dword(pdev, i, &reg_lo);
> +		pci_read_config_dword(pdev, i + 4, &reg_hi);
> +
> +		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
> +
> +		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
> +			rc = 0;

I sort of assumed this unusual structure was to allow for some future
change, but I checked the end result and it still looks like this.
So, drop the rc assignment here and...

> +			cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
> +			if (!cxlm)
> +				rc = -ENODEV;

return -ENODEV;

> +			break;
> +		}
> +	}
> +
> +	if (rc)
> +		return rc;

With the above direct return, we only get here if rc == -ENXIO.
Could just as easily check if i >= regloc + regloc_size; then it's
obvious this is the canonical form of 'not found'.


An alternative would be to treat the above as a 'find' loop and then
have the cxlm = cxl_mem_create() outside of the loop, as in the sketch below.
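
Very roughly, something like this (an untested sketch only, reusing the
existing locals from probe to show the shape):

	u32 memdev_lo = 0, memdev_hi = 0;
	bool found = false;

	for (i = regloc; i < regloc + regloc_size; i += 8) {
		u32 reg_lo, reg_hi;

		pci_read_config_dword(pdev, i, &reg_lo);
		pci_read_config_dword(pdev, i + 4, &reg_hi);

		if (FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo) ==
		    CXL_REGLOC_RBI_MEMDEV) {
			memdev_lo = reg_lo;
			memdev_hi = reg_hi;
			found = true;
			break;
		}
	}

	/* Canonical 'not found' check, no rc juggling inside the loop */
	if (!found)
		return -ENXIO;

	cxlm = cxl_mem_create(pdev, memdev_lo, memdev_hi);
	if (!cxlm)
		return -ENODEV;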


> +
> +	rc = cxl_mem_setup_regs(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	rc = cxl_mem_setup_mailbox(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	return cxl_mem_identify(cxlm);
>  }
>  
>  static const struct pci_device_id cxl_mem_pci_tbl[] = {
> diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> index f135b9f7bb21..ffcbc13d7b5b 100644
> --- a/drivers/cxl/pci.h
> +++ b/drivers/cxl/pci.h
> @@ -14,5 +14,18 @@
>  #define PCI_DVSEC_ID_CXL		0x0
>  
>  #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
> +
> +/* BAR Indicator Register (BIR) */
> +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
> +
> +/* Register Block Identifier (RBI) */
> +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
> +#define CXL_REGLOC_RBI_EMPTY 0
> +#define CXL_REGLOC_RBI_COMPONENT 1
> +#define CXL_REGLOC_RBI_VIRT 2
> +#define CXL_REGLOC_RBI_MEMDEV 3
> +
> +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)

CXL_REGLOC_ADDR_LOW_MASK perhaps for clarity?

>  
>  #endif /* __CXL_PCI_H__ */
> diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> index e709ae8235e7..6267ca9ae683 100644
> --- a/include/uapi/linux/pci_regs.h
> +++ b/include/uapi/linux/pci_regs.h
> @@ -1080,6 +1080,7 @@
>  
>  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
>  #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
> +#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000

Seems sensible to add the revision mask as well.
The vendor ID is currently read using a word read rather than a dword, but
perhaps it would be neater to add that mask as well for completeness?
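
For reference, the remaining Header 1 fields would be something like this
(sketch only, going from the DVSEC header layout; these defines are not in
the patch):

#define PCI_DVSEC_HEADER1_VID_MASK	0x0000FFFF	/* DVSEC Vendor ID */
#define PCI_DVSEC_HEADER1_REV_MASK	0x000F0000	/* DVSEC Revision */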

Having said that, given Bjorn's comment on clashes and the fact that he'd rather
see this stuff defined in drivers and combined later (see the review of patch 1
and follow the link), perhaps this series should not touch this header at all.
 
>  #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
>  
>  /* Data Link Feature */


^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10 16:49     ` Ben Widawsky
@ 2021-02-10 18:03       ` Ariel.Sibley
  2021-02-10 18:11         ` Ben Widawsky
  0 siblings, 1 reply; 57+ messages in thread
From: Ariel.Sibley @ 2021-02-10 18:03 UTC (permalink / raw)
  To: ben.widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	helgaas, cbrowy, hch, dan.j.williams, david, rientjes, ira.weiny,
	jcm, Jonathan.Cameron, rafael.j.wysocki, rdunlap, vishal.l.verma,
	jgroves, sean.v.kelley, Ahmad.Danesh, Varada.Dighe,
	Kirthi.Shenoy, Sanjay.Goyal

> > > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > > index c4ba3aa0a05d..08eaa8e52083 100644
> > > --- a/drivers/cxl/Kconfig
> > > +++ b/drivers/cxl/Kconfig
> > > @@ -33,6 +33,24 @@ config CXL_MEM
> > >
> > >           If unsure say 'm'.
> > >
> > > +config CXL_MEM_RAW_COMMANDS
> > > +       bool "RAW Command Interface for Memory Devices"
> > > +       depends on CXL_MEM
> > > +       help
> > > +         Enable CXL RAW command interface.
> > > +
> > > +         The CXL driver ioctl interface may assign a kernel ioctl command
> > > +         number for each specification defined opcode. At any given point in
> > > +         time the number of opcodes that the specification defines and a device
> > > +         may implement may exceed the kernel's set of associated ioctl function
> > > +         numbers. The mismatch is either by omission, specification is too new,
> > > +         or by design. When prototyping new hardware, or developing /
> > > debugging
> > > +         the driver it is useful to be able to submit any possible command to
> > > +         the hardware, even commands that may crash the kernel due to their
> > > +         potential impact to memory currently in use by the kernel.
> > > +
> > > +         If developing CXL hardware or the driver say Y, otherwise say N.
> >
> > Blocking RAW commands by default will prevent vendors from developing user
> > space tools that utilize vendor specific commands. Vendors of CXL.mem devices
> > should take ownership of ensuring any vendor defined commands that could cause
> > user data to be exposed or corrupted are disabled at the device level for
> > shipping configurations.
> 
> Thanks for bringing this up, Ariel. If there is a recommendation on how to codify
> this, I would certainly like to know because the explanation will be long.
> 
> ---
> 
> The background:
> 
> The enabling/disabling of the Kconfig option is driven by the distribution
> and/or system integrator. Even if we made the default 'y', nothing stops them
> from changing that. If you are using this driver in production and insist on
> using RAW commands, you are free to carry around a small patch to get rid of the
> WARN (it is a one-liner).
> 
> To recap why this is in place - the driver owns the sanctity of the device and
> therefore a [large] part of the whole system. What we can do as driver writers
> is figure out the set of commands that are "safe" and allow those. Aside from
> being able to validate them, we're able to mediate them with other parallel
> operations that might conflict. We gain the ability to squint extra hard at bug
> reports. We provide a reason to try to use a well defined part of the spec.
> Realizing that only allowing that small set of commands in a rapidly growing
> ecosystem is not a welcoming API, we decided on RAW.
> 
> Vendor commands can be one of two types:
> 1. Some functionality probably most vendors want.
> 2. Functionality that is really single vendor specific.
> 
> Hopefully we can agree that the path for case #1 is to work with the consortium
> to standardize a command that does what is needed and that can eventually become
> part of UAPI. The situation is unfortunate, but temporary. If you won't be able
> to upgrade your kernel, patch out the WARN as above.
> 
> The second situation is interesting and does need some more thought and
> discussion.
> 
> ---
> 
> I see 3 realistic options for truly vendor specific commands.
> 1. Tough noogies. Vendors aren't special and they shouldn't do that.
> 2. modparam to disable the WARN for specific devices (let the sysadmin decide)
> 3. Try to make them part of UAPI.
> 
> The right answer to me is #1, but I also realize I live in the real world.
> 
> #2 provides too much flexibility. Vendors will just do what they please and
> distros and/or integrators will be seen as hostile if they don't accommodate.
> 
> I like #3, but I have a feeling not everyone will agree. My proposal for vendor
> specific commands is, if it's clear it's truly a unique command, allow adding it
> as part of UAPI (moving it out of RAW). I expect like 5 of these, ever. If we
> start getting multiple per vendor, we've failed. The infrastructure is already
> in place to allow doing this pretty easily. I think we'd have to draw up some
> guidelines (like adding test cases for the command) to allow these to come in.
> Anything with command effects is going to need extra scrutiny.

This would necessitate adding specific opcode values in the range C000h-FFFFh
to UAPI, and those would then be allowed for all CXL.mem devices, correct?  If
so, I do not think this is the right approach, as opcodes in this range are by
definition vendor defined.  A given opcode value will have totally different
effects depending on the vendor.

I think you may be on to something with the command effects.  But rather than
"extra scrutiny" for opcodes that have command effects, would it make sense to
allow vendor defined opcodes that have Bit[5:0] in the Command Effect field of
the CEL Entry Structure (Table 173) set to 0?  In conjunction, those bits
represent any change to the configuration or data within the device.  For
commands that have no such effects, is there harm to allowing them?  Of course,
this approach relies on the vendor to not misrepresent the command effects.
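
To make the idea concrete, a minimal sketch of that check (the struct layout
follows CXL 2.0 Table 173; the helper and mask names here are illustrative,
not from the driver):

struct cel_entry {
	__le16 opcode;
	__le16 effect;
} __packed;

/* Bits 5:0 of the Command Effect field cover any config/data change */
#define CXL_CEL_EFFECT_CHANGES_MASK GENMASK(5, 0)

static bool cel_entry_is_benign(const struct cel_entry *entry)
{
	return !(le16_to_cpu(entry->effect) & CXL_CEL_EFFECT_CHANGES_MASK);
}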

> 
> In my opinion, as maintainers of the driver, we do owe the community an answer
> as to our direction for this. Dan, what is your thought?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10 18:03       ` Ariel.Sibley
@ 2021-02-10 18:11         ` Ben Widawsky
  2021-02-10 18:46           ` Ariel.Sibley
  0 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 18:11 UTC (permalink / raw)
  To: Ariel.Sibley
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	helgaas, cbrowy, hch, dan.j.williams, david, rientjes, ira.weiny,
	jcm, Jonathan.Cameron, rafael.j.wysocki, rdunlap, vishal.l.verma,
	jgroves, sean.v.kelley, Ahmad.Danesh, Varada.Dighe,
	Kirthi.Shenoy, Sanjay.Goyal

On 21-02-10 18:03:35, Ariel.Sibley@microchip.com wrote:
> > > > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > > > index c4ba3aa0a05d..08eaa8e52083 100644
> > > > --- a/drivers/cxl/Kconfig
> > > > +++ b/drivers/cxl/Kconfig
> > > > @@ -33,6 +33,24 @@ config CXL_MEM
> > > >
> > > >           If unsure say 'm'.
> > > >
> > > > +config CXL_MEM_RAW_COMMANDS
> > > > +       bool "RAW Command Interface for Memory Devices"
> > > > +       depends on CXL_MEM
> > > > +       help
> > > > +         Enable CXL RAW command interface.
> > > > +
> > > > +         The CXL driver ioctl interface may assign a kernel ioctl command
> > > > +         number for each specification defined opcode. At any given point in
> > > > +         time the number of opcodes that the specification defines and a device
> > > > +         may implement may exceed the kernel's set of associated ioctl function
> > > > +         numbers. The mismatch is either by omission, specification is too new,
> > > > +         or by design. When prototyping new hardware, or developing /
> > > > debugging
> > > > +         the driver it is useful to be able to submit any possible command to
> > > > +         the hardware, even commands that may crash the kernel due to their
> > > > +         potential impact to memory currently in use by the kernel.
> > > > +
> > > > +         If developing CXL hardware or the driver say Y, otherwise say N.
> > >
> > > Blocking RAW commands by default will prevent vendors from developing user
> > > space tools that utilize vendor specific commands. Vendors of CXL.mem devices
> > > should take ownership of ensuring any vendor defined commands that could cause
> > > user data to be exposed or corrupted are disabled at the device level for
> > > shipping configurations.
> > 
> > Thanks for bringing this up, Ariel. If there is a recommendation on how to codify
> > this, I would certainly like to know because the explanation will be long.
> > 
> > ---
> > 
> > The background:
> > 
> > The enabling/disabling of the Kconfig option is driven by the distribution
> > and/or system integrator. Even if we made the default 'y', nothing stops them
> > from changing that. If you are using this driver in production and insist on
> > using RAW commands, you are free to carry around a small patch to get rid of the
> > WARN (it is a one-liner).
> > 
> > To recap why this is in place - the driver owns the sanctity of the device and
> > therefore a [large] part of the whole system. What we can do as driver writers
> > is figure out the set of commands that are "safe" and allow those. Aside from
> > being able to validate them, we're able to mediate them with other parallel
> > operations that might conflict. We gain the ability to squint extra hard at bug
> > reports. We provide a reason to try to use a well defined part of the spec.
> > Realizing that only allowing that small set of commands in a rapidly growing
> > ecosystem is not a welcoming API, we decided on RAW.
> > 
> > Vendor commands can be one of two types:
> > 1. Some functionality probably most vendors want.
> > 2. Functionality that is really single vendor specific.
> > 
> > Hopefully we can agree that the path for case #1 is to work with the consortium
> > to standardize a command that does what is needed and that can eventually become
> > part of UAPI. The situation is unfortunate, but temporary. If you won't be able
> > to upgrade your kernel, patch out the WARN as above.
> > 
> > The second situation is interesting and does need some more thought and
> > discussion.
> > 
> > ---
> > 
> > I see 3 realistic options for truly vendor specific commands.
> > 1. Tough noogies. Vendors aren't special and they shouldn't do that.
> > 2. modparam to disable the WARN for specific devices (let the sysadmin decide)
> > 3. Try to make them part of UAPI.
> > 
> > The right answer to me is #1, but I also realize I live in the real world.
> > 
> > #2 provides too much flexibility. Vendors will just do what they please and
> > distros and/or integrators will be seen as hostile if they don't accommodate.
> > 
> > I like #3, but I have a feeling not everyone will agree. My proposal for vendor
> > specific commands is, if it's clear it's truly a unique command, allow adding it
> > as part of UAPI (moving it out of RAW). I expect like 5 of these, ever. If we
> > start getting multiple per vendor, we've failed. The infrastructure is already
> > in place to allow doing this pretty easily. I think we'd have to draw up some
> > guidelines (like adding test cases for the command) to allow these to come in.
> > Anything with command effects is going to need extra scrutiny.
> 
> This would necessitate adding specific opcode values in the range C000h-FFFFh
> to UAPI, and those would then be allowed for all CXL.mem devices, correct?  If
> so, I do not think this is the right approach, as opcodes in this range are by
> definition vendor defined.  A given opcode value will have totally different
> effects depending on the vendor.

Perhaps I didn't explain well enough. The UAPI would define the command ID to
opcode mapping, for example 0xC000. There would be a validation step in the
driver where it determines if it's actually the correct hardware to execute on.
So it would be entirely possible to have multiple vendor commands with the same
opcode.

So UAPI might be this:
        ___C(GET_HEALTH_INFO, "Get Health Info"),                         \
        ___C(GET_LOG, "Get Log"),                                         \
        ___C(VENDOR_FOO_XXX, "FOO"),                                      \
        ___C(VENDOR_BAR_XXX, "BAR"),                                      \

User space just picks the command they want, FOO/BAR. If they use VENDOR_BAR_XXX
on VENDOR_FOO's hardware, they will get an error return value.
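
As a rough sketch of that validation step (the helper, command IDs and vendor
IDs below are made up for illustration; this is not part of the posted series):

static bool cxl_vendor_cmd_allowed(struct cxl_mem *cxlm,
				   const struct cxl_mem_command *cmd)
{
	/* Gate a vendor-defined command ID on the PCI vendor of the device */
	switch (cmd->info.id) {
	case CXL_MEM_COMMAND_ID_VENDOR_FOO_XXX:
		return cxlm->pdev->vendor == PCI_VENDOR_ID_FOO;
	case CXL_MEM_COMMAND_ID_VENDOR_BAR_XXX:
		return cxlm->pdev->vendor == PCI_VENDOR_ID_BAR;
	default:
		return true; /* spec-defined commands need no vendor match */
	}
}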

> I think you may be on to something with the command effects.  But rather than
> "extra scrutiny" for opcodes that have command effects, would it make sense to
> allow vendor defined opcodes that have Bit[5:0] in the Command Effect field of
> the CEL Entry Structure (Table 173) set to 0?  In conjunction, those bits
> represent any change to the configuration or data within the device.  For
> commands that have no such effects, is there harm to allowing them?  Of
> course, this approach relies on the vendor to not misrepresent the command
> effects.
> 

That last sentence is what worries me :-)


> > 
> > In my opinion, as maintainers of the driver, we do owe the community an answer
> > as to our direction for this. Dan, what is your thought?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 16:55       ` Ben Widawsky
  2021-02-10 17:30         ` Jonathan Cameron
@ 2021-02-10 18:16         ` Ben Widawsky
  2021-02-11  9:55           ` Jonathan Cameron
  1 sibling, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 18:16 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-10 08:55:57, Ben Widawsky wrote:
> On 21-02-10 15:07:59, Jonathan Cameron wrote:
> > On Wed, 10 Feb 2021 13:32:52 +0000
> > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > 
> > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > > 
> > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > The mailbox is used to interact with the firmware running on the memory
> > > > device. The flow is proven with one implemented command, "identify",
> > > > because the class code has already told the driver this is a memory
> > > > device and the identify command is mandatory.
> > > > 
> > > > CXL devices contain an array of capabilities that describe the
> > > > interactions software can have with the device or firmware running on
> > > > the device. A CXL compliant device must implement the device status and
> > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > implement the memory device capability. Each of the capabilities can
> > > > [will] provide an offset within the MMIO region for interacting with the
> > > > CXL device.
> > > > 
> > > > The capabilities tell the driver how to find and map the register space
> > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > and not handled in this driver.
> > > > 
> > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > a background command. That implementation is saved for a later time.
> > > > 
> > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>  
> > > 
> > > Hi Ben,
> > > 
> > > 
> > > > +/**
> > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > + * @cxlm: The CXL memory device to communicate with.
> > > > + * @mbox_cmd: Command to send to the memory device.
> > > > + *
> > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > + *         succeeded.  
> > > 
> > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > enters an infinite loop as a result.
> 
> I meant to fix that.
> 
> > > 
> > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > two levels of error checking - the example here proves how easy it is to forget
> > > one.
> 
> Demonstrably, you're correct. I think it would be good to have a kernel-only
> mbox command that does the error checking, though. Let me type something up and
> see how it looks.

Hi Jonathan. What do you think of this? The bit I'm on the fence about is
whether I should validate output size too. I like the simplicity as it is, but
it requires every caller to possibly check the output size, which is kind of
the same problem you originally pointed out.

diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 55c5f5a6023f..ad7b2077ab28 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -284,7 +284,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 }
 
 /**
- * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
+ * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
  * @cxlm: The CXL memory device to communicate with.
  * @mbox_cmd: Command to send to the memory device.
  *
@@ -296,7 +296,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
  * This is a generic form of the CXL mailbox send command, thus the only I/O
  * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
  * types of CXL devices may have further information available upon error
- * conditions.
+ * conditions. Driver facilities wishing to send mailbox commands should use the
+ * wrapper command.
  *
  * The CXL spec allows for up to two mailboxes. The intention is for the primary
  * mailbox to be OS controlled and the secondary mailbox to be used by system
@@ -304,8 +305,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
  * not need to coordinate with each other. The driver only uses the primary
  * mailbox.
  */
-static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
-				 struct mbox_cmd *mbox_cmd)
+static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
+				   struct mbox_cmd *mbox_cmd)
 {
 	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
 	u64 cmd_reg, status_reg;
@@ -469,6 +470,54 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
 	mutex_unlock(&cxlm->mbox_mutex);
 }
 
+/**
+ * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
+ * @cxlm: The CXL memory device to communicate with.
+ * @opcode: Opcode for the mailbox command.
+ * @in: The input payload for the mailbox command.
+ * @in_size: The length of the input payload
+ * @out: Caller allocated buffer for the output.
+ *
+ * Context: Any context. Will acquire and release mbox_mutex.
+ * Return:
+ *  * %>=0	- Number of bytes returned in @out.
+ *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
+ *  * %-EFAULT	- Hardware error occurred.
+ *  * %-ENXIO	- Command completed, but device reported an error.
+ *
+ * Mailbox commands may execute successfully yet the device itself reported an
+ * error. While this distinction can be useful for commands from userspace, the
+ * kernel will often only care when both are successful.
+ *
+ * See __cxl_mem_mbox_send_cmd()
+ */
+static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
+				 size_t in_size, u8 *out)
+{
+	struct mbox_cmd mbox_cmd = {
+		.opcode = opcode,
+		.payload_in = in,
+		.size_in = in_size,
+		.payload_out = out,
+	};
+	int rc;
+
+	rc = cxl_mem_mbox_get(cxlm);
+	if (rc)
+		return rc;
+
+	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
+	cxl_mem_mbox_put(cxlm);
+	if (rc)
+		return rc;
+
+	/* TODO: Map return code to proper kernel style errno */
+	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
+		return -ENXIO;
+
+	return mbox_cmd.size_out;
+}
+
 /**
  * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
  * @cxlmd: The CXL memory device to communicate with.
@@ -1380,33 +1429,18 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
 		u8 poison_caps;
 		u8 qos_telemetry_caps;
 	} __packed id;
-	struct mbox_cmd mbox_cmd = {
-		.opcode = CXL_MBOX_OP_IDENTIFY,
-		.payload_out = &id,
-		.size_in = 0,
-	};
 	int rc;
 
-	/* Retrieve initial device memory map */
-	rc = cxl_mem_mbox_get(cxlm);
-	if (rc)
-		return rc;
-
-	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
-	cxl_mem_mbox_put(cxlm);
-	if (rc)
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
+				   (u8 *)&id);
+	if (rc < 0)
 		return rc;
 
-	/* TODO: Handle retry or reset responses from firmware. */
-	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
-		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
-			mbox_cmd.return_code);
+	if (rc < sizeof(id)) {
+		dev_err(&cxlm->pdev->dev, "Short identify data\n");
 		return -ENXIO;
 	}
 
-	if (mbox_cmd.size_out != sizeof(id))
-		return -ENXIO;
-
 	/*
 	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
 	 * For now, only the capacity is exported in sysfs
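
For comparison, the variant that validates output size in the wrapper would
look roughly like this (sketch only, not part of the diff above; it assumes
callers pass the size of the output buffer they allocated):

static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
				 size_t in_size, u8 *out, size_t out_size)
{
	struct mbox_cmd mbox_cmd = {
		.opcode = opcode,
		.payload_in = in,
		.size_in = in_size,
		.payload_out = out,
	};
	int rc;

	rc = cxl_mem_mbox_get(cxlm);
	if (rc)
		return rc;

	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
	cxl_mem_mbox_put(cxlm);
	if (rc)
		return rc;

	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
		return -ENXIO;

	/* Short replies become an error here instead of in every caller */
	if (out_size && mbox_cmd.size_out < out_size)
		return -EIO;

	return mbox_cmd.size_out;
}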


[snip]


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 3/8] cxl/mem: Register CXL memX devices
  2021-02-10  0:02 ` [PATCH v2 3/8] cxl/mem: Register CXL memX devices Ben Widawsky
@ 2021-02-10 18:17   ` Jonathan Cameron
  2021-02-11 10:17     ` Jonathan Cameron
  0 siblings, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 18:17 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Tue, 9 Feb 2021 16:02:54 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> From: Dan Williams <dan.j.williams@intel.com>
> 
> Create the /sys/bus/cxl hierarchy to enumerate:
> 
> * Memory Devices (per-endpoint control devices)
> 
> * Memory Address Space Devices (platform address ranges with
>   interleaving, performance, and persistence attributes)
> 
> * Memory Regions (active provisioned memory from an address space device
>   that is in use as System RAM or delegated to libnvdimm as Persistent
>   Memory regions).
> 
> For now, only the per-endpoint control devices are registered on the
> 'cxl' bus. However, going forward it will provide a mechanism to
> coordinate cross-device interleave.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>

One stray header, and a request for a tiny bit of reordering to
make it easier to chase through creation and destruction.

Either way, with the header moved to an earlier patch, I'm fine with this one.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  Documentation/ABI/testing/sysfs-bus-cxl       |  26 ++
>  .../driver-api/cxl/memory-devices.rst         |  17 +
>  drivers/cxl/Makefile                          |   3 +
>  drivers/cxl/bus.c                             |  29 ++
>  drivers/cxl/cxl.h                             |   4 +
>  drivers/cxl/mem.c                             | 301 +++++++++++++++++-
>  6 files changed, 378 insertions(+), 2 deletions(-)
>  create mode 100644 Documentation/ABI/testing/sysfs-bus-cxl
>  create mode 100644 drivers/cxl/bus.c
> 


> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 745f5e0bfce3..b3c56fa6e126 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -3,6 +3,7 @@
>  
>  #ifndef __CXL_H__
>  #define __CXL_H__
> +#include <linux/range.h>

Why is this coming in now? It feels like it should have been in the earlier
patch that started using struct range.

>  
>  #include <linux/bitfield.h>
>  #include <linux/bitops.h>
> @@ -55,6 +56,7 @@
>  	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
>  	 CXLMDEV_RESET_NEEDED_NOT)
>  
> +struct cxl_memdev;
>  /**
>   * struct cxl_mem - A CXL memory device
>   * @pdev: The PCI device associated with this CXL device.
> @@ -72,6 +74,7 @@
>  struct cxl_mem {
>  	struct pci_dev *pdev;
>  	void __iomem *regs;
> +	struct cxl_memdev *cxlmd;
>  
>  	void __iomem *status_regs;
>  	void __iomem *mbox_regs;
> @@ -90,4 +93,5 @@ struct cxl_mem {
>  	} ram;
>  };
>  
> +extern struct bus_type cxl_bus_type;
>  #endif /* __CXL_H__ */
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index 0a868a15badc..8bbd2495e237 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -1,11 +1,36 @@
>

> +
> +static void cxl_memdev_release(struct device *dev)
> +{
> +	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> +
> +	percpu_ref_exit(&cxlmd->ops_active);
> +	ida_free(&cxl_memdev_ida, cxlmd->id);
> +	kfree(cxlmd);
> +}
> +
...

> +static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
> +{
> +	struct pci_dev *pdev = cxlm->pdev;
> +	struct cxl_memdev *cxlmd;
> +	struct device *dev;
> +	struct cdev *cdev;
> +	int rc;
> +
> +	cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
> +	if (!cxlmd)
> +		return -ENOMEM;
> +	init_completion(&cxlmd->ops_dead);
> +
> +	/*
> +	 * @cxlm is deallocated when the driver unbinds so operations
> +	 * that are using it need to hold a live reference.
> +	 */
> +	cxlmd->cxlm = cxlm;
> +	rc = percpu_ref_init(&cxlmd->ops_active, cxlmdev_ops_active_release, 0,
> +			     GFP_KERNEL);
> +	if (rc)
> +		goto err_ref;
> +
> +	rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL);
> +	if (rc < 0)
> +		goto err_id;
> +	cxlmd->id = rc;
> +
> +	dev = &cxlmd->dev;
> +	device_initialize(dev);
> +	dev->parent = &pdev->dev;
> +	dev->bus = &cxl_bus_type;
> +	dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
> +	dev->type = &cxl_memdev_type;
> +	dev_set_name(dev, "mem%d", cxlmd->id);
> +
> +	cdev = &cxlmd->cdev;
> +	cdev_init(cdev, &cxl_memdev_fops);
> +
> +	rc = cdev_device_add(cdev, dev);
> +	if (rc)
> +		goto err_add;
> +
> +	return devm_add_action_or_reset(dev->parent, cxlmdev_unregister, cxlmd);

This had me scratching my head. cxlmdev_unregister(), whether called normally
or via the _or_reset() path, results in

	percpu_ref_kill(&cxlmd->ops_active);
	cdev_device_del(&cxlmd->cdev, dev);
	wait_for_completion(&cxlmd->ops_dead);
	cxlmd->cxlm = NULL;
	put_device(dev);
	/* If last ref this will result in */
		percpu_ref_exit(&cxlmd->ops_active);
		ida_free(&cxl_memdev_ida, cxlmd->id);
		kfree(cxlmd);

So it's doing all the correct things but not necessarily
in the obvious order.

For simplicity of review, perhaps it's worth reordering probe a bit
to get the ida allocation immediately after the cxlmd alloc, and
for cxlmdev_unregister() perhaps reorder the cdev_device_del()
before the percpu_ref_kill(), as in the sketch below.

Trivially obvious, as the ordering has no effect, but it makes it
easy for reviewers to tick off the setup vs teardown parts.
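
i.e. something along these lines (sketch only, reusing the helpers and fields
from the patch):

static void cxlmdev_unregister(void *_cxlmd)
{
	struct cxl_memdev *cxlmd = _cxlmd;
	struct device *dev = &cxlmd->dev;

	/* Remove the cdev/device first so no new fops callers can enter */
	cdev_device_del(&cxlmd->cdev, dev);

	/* Then kill and drain the percpu ref before dropping the cxlm */
	percpu_ref_kill(&cxlmd->ops_active);
	wait_for_completion(&cxlmd->ops_dead);
	cxlmd->cxlm = NULL;

	put_device(dev);
}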

> +
> +err_add:
> +	ida_free(&cxl_memdev_ida, cxlmd->id);
> +err_id:
> +	/*
> +	 * Theoretically userspace could have already entered the fops,
> +	 * so flush ops_active.
> +	 */
> +	percpu_ref_kill(&cxlmd->ops_active);
> +	wait_for_completion(&cxlmd->ops_dead);
> +	percpu_ref_exit(&cxlmd->ops_active);
> +err_ref:
> +	kfree(cxlmd);
> +
> +	return rc;
> +}
> +





^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-10  0:02 ` [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface Ben Widawsky
@ 2021-02-10 18:45   ` Jonathan Cameron
  2021-02-10 20:22     ` Ben Widawsky
  2021-02-11  4:40     ` Dan Williams
  2021-02-14 16:30   ` Al Viro
  1 sibling, 2 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-10 18:45 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On Tue, 9 Feb 2021 16:02:55 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> Add a straightforward IOCTL that provides a mechanism for userspace to
> query the supported memory device commands. CXL commands as they appear
> to userspace are described as part of the UAPI kerneldoc. The command
> list returned via this IOCTL will contain the full set of commands that
> the driver supports, however, some of those commands may not be
> available for use by userspace.
> 
> Memory device commands first appear in the CXL 2.0 specification. They
> are submitted through a mailbox mechanism that is also originally
> specified in the CXL 2.0 specification.
> 
> The send command allows userspace to issue mailbox commands directly to
> the hardware. The list of available commands to send are the output of
> the query command. The driver verifies basic properties of the command
> and possibly inspects the input (or output) payload to determine whether
> or not the command is allowed (or might taint the kernel).
> 
> Reported-by: kernel test robot <lkp@intel.com> # bug in earlier revision
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Reviewed-by: Dan Williams <dan.j.willams@intel.com>

A bit of anti-macro commentary below.  Heavy use of macros may make the code
shorter, but I'd argue they make it harder to review if you've not looked
at a given bit of code for a while.

Also there is a bit of documentation in here for flags that don't seem to
exist (at this stage anyway) - may just be in the wrong patch.

Jonathan


> ---
>  .clang-format                                 |   1 +
>  .../userspace-api/ioctl/ioctl-number.rst      |   1 +
>  drivers/cxl/mem.c                             | 291 +++++++++++++++++-
>  include/uapi/linux/cxl_mem.h                  | 152 +++++++++
>  4 files changed, 443 insertions(+), 2 deletions(-)
>  create mode 100644 include/uapi/linux/cxl_mem.h
> 
> diff --git a/.clang-format b/.clang-format
> index 10dc5a9a61b3..3f11c8901b43 100644
> --- a/.clang-format
> +++ b/.clang-format
> @@ -109,6 +109,7 @@ ForEachMacros:
>    - 'css_for_each_child'
>    - 'css_for_each_descendant_post'
>    - 'css_for_each_descendant_pre'
> +  - 'cxl_for_each_cmd'
>    - 'device_for_each_child_node'
>    - 'dma_fence_chain_for_each'
>    - 'do_for_each_ftrace_op'
> diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst
> index a4c75a28c839..6eb8e634664d 100644
> --- a/Documentation/userspace-api/ioctl/ioctl-number.rst
> +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst
> @@ -352,6 +352,7 @@ Code  Seq#    Include File                                           Comments
>                                                                       <mailto:michael.klein@puffin.lb.shuttle.de>
>  0xCC  00-0F  drivers/misc/ibmvmc.h                                   pseries VMC driver
>  0xCD  01     linux/reiserfs_fs.h
> +0xCE  01-02  uapi/linux/cxl_mem.h                                    Compute Express Link Memory Devices
>  0xCF  02     fs/cifs/ioctl.c
>  0xDB  00-0F  drivers/char/mwave/mwavepub.h
>  0xDD  00-3F                                                          ZFCP device driver see drivers/s390/scsi/
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index 8bbd2495e237..ce65630bb75e 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0-only
>  /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> +#include <uapi/linux/cxl_mem.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
>  #include <linux/cdev.h>
> @@ -39,6 +40,7 @@
>  #define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
>  
>  enum opcode {
> +	CXL_MBOX_OP_INVALID		= 0x0000,
>  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
>  	CXL_MBOX_OP_MAX			= 0x10000
>  };
> @@ -90,9 +92,57 @@ struct cxl_memdev {
>  static int cxl_mem_major;
>  static DEFINE_IDA(cxl_memdev_ida);
>  
> +/**
> + * struct cxl_mem_command - Driver representation of a memory device command
> + * @info: Command information as it exists for the UAPI
> + * @opcode: The actual bits used for the mailbox protocol
> + * @flags: Set of flags reflecting the state of the command.
> + *
> + *  * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is
> + *    only used internally by the driver for sanity checking.

Doesn't seem to be defined yet.

> + *
> + * The cxl_mem_command is the driver's internal representation of commands that
> + * are supported by the driver. Some of these commands may not be supported by
> + * the hardware. The driver will use @info to validate the fields passed in by
> + * the user then submit the @opcode to the hardware.
> + *
> + * See struct cxl_command_info.
> + */
> +struct cxl_mem_command {
> +	struct cxl_command_info info;
> +	enum opcode opcode;
> +};
> +
> +#define CXL_CMD(_id, _flags, sin, sout)                                        \
> +	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
> +	.info =	{                                                              \
> +			.id = CXL_MEM_COMMAND_ID_##_id,                        \
> +			.flags = CXL_MEM_COMMAND_FLAG_##_flags,                \
> +			.size_in = sin,                                        \
> +			.size_out = sout,                                      \
> +		},                                                             \
> +	.opcode = CXL_MBOX_OP_##_id,                                           \
> +	}
> +
> +/*
> + * This table defines the supported mailbox commands for the driver. This table
> + * is made up of a UAPI structure. Non-negative values as parameters in the
> + * table will be validated against the user's input. For example, if size_in is
> + * 0, and the user passed in 1, it is an error.
> + */
> +static struct cxl_mem_command mem_commands[] = {
> +	CXL_CMD(IDENTIFY, NONE, 0, 0x43),
> +};

As below, I'm doubtful about the macro magic and would rather see the
longhand version. It's a few more characters, but I can immediately see if the
fields are in the right places etc. and we can skip the 0 default values.

static struct cxl_mem_command mem_commands[] = {
	[CXL_MEM_COMMAND_ID_IDENTIFY] = {
		.info = {
			.id = CXL_MEM_COMMAND_ID_IDENTIFY,
			.size_out = 0x43,
		},
		.opcode = CXL_MBOX_OP_IDENTIFY,	
	},
};

Still, it's your driver and I guess I can probably get my head around
this macro.

>  
> diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> new file mode 100644
> index 000000000000..f1f7e9f32ea5
> --- /dev/null
> +++ b/include/uapi/linux/cxl_mem.h
> @@ -0,0 +1,152 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * CXL IOCTLs for Memory Devices
> + */
> +
> +#ifndef _UAPI_CXL_MEM_H_
> +#define _UAPI_CXL_MEM_H_
> +
> +#include <linux/types.h>
> +
> +/**
> + * DOC: UAPI
> + *
> + * Not all of the commands that the driver supports are always available for use
> + * by userspace. Userspace must check the results from the QUERY command in
> + * order to determine the live set of commands.
> + */
> +
> +#define CXL_MEM_QUERY_COMMANDS _IOR(0xCE, 1, struct cxl_mem_query_commands)
> +#define CXL_MEM_SEND_COMMAND _IOWR(0xCE, 2, struct cxl_send_command)
> +
> +#define CXL_CMDS                                                          \
> +	___C(INVALID, "Invalid Command"),                                 \
> +	___C(IDENTIFY, "Identify Command"),                               \
> +	___C(MAX, "Last command")
> +
> +#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> +enum { CXL_CMDS };
> +
> +#undef ___C
> +#define ___C(a, b) { b }
> +static const struct {
> +	const char *name;
> +} cxl_command_names[] = { CXL_CMDS };
> +#undef ___C

Unless there are going to be a lot of these, I'd just write them out longhand,
as it's much more readable than the macro magic.

enum {
	CXL_MEM_COMMAND_ID_INVALID,
	CXL_MEM_COMMAND_ID_IDENTIFY,
	CXL_MEM_COMMAND_ID_MAX
};

static const struct {
	const char *name;
} cxl_command_names[] = {
	[CXL_MEM_COMMAND_ID_INVALID] = { "Invalid Command" },
	[CXL_MEM_COMMAND_ID_IDENTIFY] = { "Identify Command" },
	/* I hope you never need the Last command to exist in here as that sounds like a bug */
};

That's assuming I actually figured the macro fun out correctly.
To my mind it's worth doing this stuff for 'lots', not so much for 3.

> +
> +/**
> + * struct cxl_command_info - Command information returned from a query.
> + * @id: ID number for the command.
> + * @flags: Flags that specify command behavior.
> + *
> + *  * %CXL_MEM_COMMAND_FLAG_KERNEL: This command is reserved for exclusive
> + *    kernel use.
> + *  * %CXL_MEM_COMMAND_FLAG_MUTEX: This command may require coordination with
> + *    the kernel in order to complete successfully.
Doesn't correspond to the flags defined below.  If introduced in a later patch
then bring the docs in with the first use.

> + *
> + * @size_in: Expected input size, or -1 if variable length.
> + * @size_out: Expected output size, or -1 if variable length.
> + *
> + * Represents a single command that is supported by both the driver and the
> + * hardware. This is returned as part of an array from the query ioctl. The
> + * following would be a command named "foobar" that takes a variable length
> + * input and returns 0 bytes of output.

Why give it a name?  It's just an id!

> + *
> + *  - @id = 10
> + *  - @flags = CXL_MEM_COMMAND_FLAG_MUTEX

That flag doesn't seem to be defined below.

> + *  - @size_in = -1
> + *  - @size_out = 0
> + *
> + * See struct cxl_mem_query_commands.
> + */
> +struct cxl_command_info {
> +	__u32 id;
> +
> +	__u32 flags;
> +#define CXL_MEM_COMMAND_FLAG_NONE 0
> +#define CXL_MEM_COMMAND_FLAG_KERNEL BIT(0)
> +#define CXL_MEM_COMMAND_FLAG_MASK GENMASK(1, 0)
> +
> +	__s32 size_in;
> +	__s32 size_out;
> +};
> +
> +/**
> + * struct cxl_mem_query_commands - Query supported commands.
> + * @n_commands: In/out parameter. When @n_commands is > 0, the driver will
> + *		return min(num_support_commands, n_commands). When @n_commands
> + *		is 0, driver will return the number of total supported commands.
> + * @rsvd: Reserved for future use.
> + * @commands: Output array of supported commands. This array must be allocated
> + *            by userspace to be at least min(num_support_commands, @n_commands)
> + *
> + * Allow userspace to query the available commands supported by both the driver,
> + * and the hardware. Commands that aren't supported by either the driver, or the
> + * hardware are not returned in the query.
> + *
> + * Examples:
> + *
> + *  - { .n_commands = 0 } // Get number of supported commands
> + *  - { .n_commands = 15, .commands = buf } // Return first 15 (or less)
> + *    supported commands
> + *
> + *  See struct cxl_command_info.
> + */
> +struct cxl_mem_query_commands {
> +	/*
> +	 * Input: Number of commands to return (space allocated by user)
> +	 * Output: Number of commands supported by the driver/hardware
> +	 *
> +	 * If n_commands is 0, kernel will only return number of commands and
> +	 * not try to populate commands[], thus allowing userspace to know how
> +	 * much space to allocate
> +	 */
> +	__u32 n_commands;
> +	__u32 rsvd;
> +
> +	struct cxl_command_info __user commands[]; /* out: supported commands */
> +};
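
To make the in/out convention of @n_commands concrete, here is a minimal
userspace sketch of the two-step query (illustrative only; it assumes the
header above is installed as <linux/cxl_mem.h> and compiles cleanly for
userspace):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

static struct cxl_mem_query_commands *query_commands(int fd)
{
	struct cxl_mem_query_commands probe = { .n_commands = 0 };
	struct cxl_mem_query_commands *q;

	/* Pass 1: n_commands == 0 asks only for the supported-command count */
	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, &probe) < 0)
		return NULL;

	/* Pass 2: allocate that many entries and fetch the descriptions */
	q = calloc(1, sizeof(*q) + probe.n_commands * sizeof(q->commands[0]));
	if (!q)
		return NULL;

	q->n_commands = probe.n_commands;
	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, q) < 0) {
		free(q);
		return NULL;
	}

	return q;
}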
> +
> +/**
> + * struct cxl_send_command - Send a command to a memory device.
> + * @id: The command to send to the memory device. This must be one of the
> + *	commands returned by the query command.
> + * @flags: Flags for the command (input).
> + * @rsvd: Must be zero.
> + * @retval: Return value from the memory device (output).
> + * @in.size: Size of the payload to provide to the device (input).
> + * @in.rsvd: Must be zero.
> + * @in.payload: Pointer to memory for payload input (little endian order).

Silly point, but perhaps distinguish that it's the payload that is in little endian
order, not the pointer.  (I obviously haven't had enough coffee today and misread it)


> + * @out.size: Size of the payload received from the device (input/output). This
> + *	      field is filled in by userspace to let the driver know how much
> + *	      space was allocated for output. It is populated by the driver to
> + *	      let userspace know how large the output payload actually was.
> + * @out.rsvd: Must be zero.
> + * @out.payload: Pointer to memory for payload output (little endian order).
> + *
> + * Mechanism for userspace to send a command to the hardware for processing. The
> + * driver will do basic validation on the command sizes. In some cases even the
> + * payload may be introspected. Userspace is required to allocate large
> + * enough buffers for size_out which can be variable length in certain
> + * situations.
> + */
> +struct cxl_send_command {
> +	__u32 id;
> +	__u32 flags;
> +	__u32 rsvd;
> +	__u32 retval;
> +
> +	struct {
> +		__s32 size;
> +		__u32 rsvd;
> +		__u64 payload;
> +	} in;
> +
> +	struct {
> +		__s32 size;
> +		__u32 rsvd;
> +		__u64 payload;
> +	} out;
> +};
> +
> +#endif
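
And a matching sketch for the send side, again illustrative rather than
authoritative: the caller is assumed to have picked @id from the query
results, and any multi-byte fields inside the payload buffers are in
little-endian byte order.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

/* Send a command with no input payload and a caller-supplied output buffer */
static int send_output_only_cmd(int fd, __u32 id, void *out, __s32 out_size)
{
	struct cxl_send_command cmd = {
		.id = id,
		.out = {
			.size = out_size,	/* space userspace allocated */
			.payload = (__u64)(uintptr_t)out,
		},
	};

	if (ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd) < 0)
		return -1;	/* driver/transport failure, see errno */

	/*
	 * cmd.out.size now reports how many bytes the device actually
	 * returned; cmd.retval carries the device's own status code.
	 */
	return cmd.retval;
}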


^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10 18:11         ` Ben Widawsky
@ 2021-02-10 18:46           ` Ariel.Sibley
  2021-02-10 19:12             ` Ben Widawsky
  0 siblings, 1 reply; 57+ messages in thread
From: Ariel.Sibley @ 2021-02-10 18:46 UTC (permalink / raw)
  To: ben.widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	helgaas, cbrowy, hch, dan.j.williams, david, rientjes, ira.weiny,
	jcm, Jonathan.Cameron, rafael.j.wysocki, rdunlap, vishal.l.verma,
	jgroves, sean.v.kelley, Ahmad.Danesh, Varada.Dighe,
	Kirthi.Shenoy, Sanjay.Goyal

> > > > > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > > > > index c4ba3aa0a05d..08eaa8e52083 100644
> > > > > --- a/drivers/cxl/Kconfig
> > > > > +++ b/drivers/cxl/Kconfig
> > > > > @@ -33,6 +33,24 @@ config CXL_MEM
> > > > >
> > > > >           If unsure say 'm'.
> > > > >
> > > > > +config CXL_MEM_RAW_COMMANDS
> > > > > +       bool "RAW Command Interface for Memory Devices"
> > > > > +       depends on CXL_MEM
> > > > > +       help
> > > > > +         Enable CXL RAW command interface.
> > > > > +
> > > > > +         The CXL driver ioctl interface may assign a kernel ioctl command
> > > > > +         number for each specification defined opcode. At any given point in
> > > > > +         time the number of opcodes that the specification defines and a device
> > > > > +         may implement may exceed the kernel's set of associated ioctl function
> > > > > +         numbers. The mismatch is either by omission, specification is too new,
> > > > > +         or by design. When prototyping new hardware, or developing / debugging
> > > > > +         the driver it is useful to be able to submit any possible command to
> > > > > +         the hardware, even commands that may crash the kernel due to their
> > > > > +         potential impact to memory currently in use by the kernel.
> > > > > +
> > > > > +         If developing CXL hardware or the driver say Y, otherwise say N.
> > > >
> > > > Blocking RAW commands by default will prevent vendors from developing user
> > > > space tools that utilize vendor specific commands. Vendors of CXL.mem devices
> > > > should take ownership of ensuring any vendor defined commands that could cause
> > > > user data to be exposed or corrupted are disabled at the device level for
> > > > shipping configurations.
> > >
> > > Thanks for bringing this up, Ariel. If there is a recommendation on how to codify
> > > this, I would certainly like to know because the explanation will be long.
> > >
> > > ---
> > >
> > > The background:
> > >
> > > The enabling/disabling of the Kconfig option is driven by the distribution
> > > and/or system integrator. Even if we made the default 'y', nothing stops them
> > > from changing that. If you are using this driver in production and insist on
> > > using RAW commands, you are free to carry around a small patch to get rid of the
> > > WARN (it is a one-liner).
> > >
> > > To recap why this is in place - the driver owns the sanctity of the device and
> > > therefore a [large] part of the whole system. What we can do as driver writers
> > > is figure out the set of commands that are "safe" and allow those. Aside from
> > > being able to validate them, we're able to mediate them with other parallel
> > > operations that might conflict. We gain the ability to squint extra hard at bug
> > > reports. We provide a reason to try to use a well defined part of the spec.
> > > Realizing that only allowing that small set of commands in a rapidly growing
> > > ecosystem is not a welcoming API; we decided on RAW.
> > >
> > > Vendor commands can be one of two types:
> > > 1. Some functionality probably most vendors want.
> > > 2. Functionality that is really single vendor specific.
> > >
> > > Hopefully we can agree that the path for case #1 is to work with the consortium
> > > to standardize a command that does what is needed and that can eventually become
> > > part of UAPI. The situation is unfortunate, but temporary. If you won't be able
> > > to upgrade your kernel, patch out the WARN as above.
> > >
> > > The second situation is interesting and does need some more thought and
> > > discussion.
> > >
> > > ---
> > >
> > > I see 3 realistic options for truly vendor specific commands.
> > > 1. Tough noogies. Vendors aren't special and they shouldn't do that.
> > > 2. modparam to disable the WARN for specific devices (let the sysadmin decide)
> > > 3. Try to make them part of UAPI.
> > >
> > > The right answer to me is #1, but I also realize I live in the real world.
> > >
> > > #2 provides too much flexibility. Vendors will just do what they please and
> > > distros and/or integrators will be seen as hostile if they don't accommodate.
> > >
> > > I like #3, but I have a feeling not everyone will agree. My proposal for vendor
> > > specific commands is, if it's clear it's truly a unique command, allow adding it
> > > as part of UAPI (moving it out of RAW). I expect like 5 of these, ever. If we
> > > start getting multiple per vendor, we've failed. The infrastructure is already
> > > in place to allow doing this pretty easily. I think we'd have to draw up some
> > > guidelines (like adding test cases for the command) to allow these to come in.
> > > Anything with command effects is going to need extra scrutiny.
> >
> > This would necessitate adding specific opcode values in the range C000h-FFFFh
> > to UAPI, and those would then be allowed for all CXL.mem devices, correct?  If
> > so, I do not think this is the right approach, as opcodes in this range are by
> > definition vendor defined.  A given opcode value will have totally different
> > effects depending on the vendor.
> 
> Perhaps I didn't explain well enough. The UAPI would define the command ID to
> opcode mapping, for example 0xC000. There would be a validation step in the
> driver where it determines if it's actually the correct hardware to execute on.
> So it would be entirely possible to have multiple vendor commands with the same
> opcode.
> 
> So UAPI might be this:
>         ___C(GET_HEALTH_INFO, "Get Health Info"),                         \
>         ___C(GET_LOG, "Get Log"),                                         \
>         ___C(VENDOR_FOO_XXX, "FOO"),                                      \
>         ___C(VENDOR_BAR_XXX, "BAR"),                                      \
> 
> User space just picks the command they want, FOO/BAR. If they use VENDOR_BAR_XXX
> on VENDOR_FOO's hardware, they will get an error return value.

Would the driver be doing this enforcement of vendor ID / opcode
compatibility, or would the error return value mentioned here be from the
device?  My concern is where the same opcode has two meanings for different
vendors.  For example, for Vendor A opcode 0xC000 might report some form of
status information, but for Vendor B it might have data side effects.  There
may not have been any UAPI intention to expose 0xC000 for Vendor B devices,
but the existence of 0xC000 in UAPI for Vendor A results in the data
corrupting version of 0xC000 for Vendor B being allowed.  It would seem to me
that even if the commands are in UAPI, the driver would still need to rely on
the contents of the CEL to determine if the command should be allowed.
 
> > I think you may be on to something with the command effects.  But rather than
> > "extra scrutiny" for opcodes that have command effects, would it make sense to
> > allow vendor defined opcodes that have Bit[5:0] in the Command Effect field of
> > the CEL Entry Structure (Table 173) set to 0?  In conjunction, those bits
> > represent any change to the configuration or data within the device.  For
> > commands that have no such effects, is there harm to allowing them?  Of
> > course, this approach relies on the vendor to not misrepresent the command
> > effects.
> >
> 
> That last sentence is what worries me :-)

One must also rely on the vendor to not simply corrupt data at random. :) IMO
the contents of the CEL should be believed by the driver, rather than the
driver treating the device as a hostile actor.
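
A rough sketch of the CEL-based gating proposed above; the entry layout and
names here are illustrative stand-ins for the Table 173 structure, not code
from the series:

#define CXL_CEL_EFFECTS_MUTATING	GENMASK(5, 0)

struct cxl_cel_entry {		/* illustrative parsed CEL entry */
	u16 opcode;
	u16 effects;
};

static bool cxl_vendor_cmd_allowed(const struct cxl_cel_entry *entry)
{
	/*
	 * Only the vendor-defined range C000h-FFFFh is gated this way;
	 * spec-defined opcodes go through the normal command list.
	 */
	if (entry->opcode < 0xc000)
		return false;

	/*
	 * Allow only commands whose effects bits [5:0] are all clear,
	 * i.e. the device declares no configuration or data changes.
	 */
	return !(entry->effects & CXL_CEL_EFFECTS_MUTATING);
}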

> 
> 
> > >
> > > In my opinion, as maintainers of the driver, we do owe the community an answer
> > > as to our direction for this. Dan, what is your thought?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 17:41   ` Jonathan Cameron
@ 2021-02-10 18:53     ` Ben Widawsky
  2021-02-10 19:54       ` Dan Williams
  0 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 18:53 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-10 17:41:04, Jonathan Cameron wrote:
> On Tue, 9 Feb 2021 16:02:53 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > Provide enough functionality to utilize the mailbox of a memory device.
> > The mailbox is used to interact with the firmware running on the memory
> > device. The flow is proven with one implemented command, "identify".
> > Because the class code has already told the driver this is a memory
> > device and the identify command is mandatory.
> > 
> > CXL devices contain an array of capabilities that describe the
> > interactions software can have with the device or firmware running on
> > the device. A CXL compliant device must implement the device status and
> > the mailbox capability. Additionally, a CXL compliant memory device must
> > implement the memory device capability. Each of the capabilities can
> > [will] provide an offset within the MMIO region for interacting with the
> > CXL device.
> > 
> > The capabilities tell the driver how to find and map the register space
> > for CXL Memory Devices. The registers are required to utilize the CXL
> > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > and secondary. The secondary mailbox is earmarked for system firmware,
> > and not handled in this driver.
> > 
> > Primary mailboxes are capable of generating an interrupt when submitting
> > a background command. That implementation is saved for a later time.
> > 
> > Link: https://www.computeexpresslink.org/download-the-specification
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> 
> A few more comments inline (proper review whereas my other reply was a
> bug chase).
> 
> Jonathan
> 
> > ---
> >  drivers/cxl/Kconfig           |  14 +
> >  drivers/cxl/cxl.h             |  93 +++++++
> >  drivers/cxl/mem.c             | 511 +++++++++++++++++++++++++++++++++-
> >  drivers/cxl/pci.h             |  13 +
> >  include/uapi/linux/pci_regs.h |   1 +
> >  5 files changed, 630 insertions(+), 2 deletions(-)
> >  create mode 100644 drivers/cxl/cxl.h
> > 
> > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > index 9e80b311e928..c4ba3aa0a05d 100644
> > --- a/drivers/cxl/Kconfig
> > +++ b/drivers/cxl/Kconfig
> > @@ -32,4 +32,18 @@ config CXL_MEM
> >  	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
> >  
> >  	  If unsure say 'm'.
> > +
> > +config CXL_MEM_INSECURE_DEBUG
> > +	bool "CXL.mem debugging"
> 
> As mentioned below, this makes me a tiny bit uncomfortable.
> 
> > +	depends on CXL_MEM
> > +	help
> > +	  Enable debug of all CXL command payloads.
> > +
> > +	  Some CXL devices and controllers support encryption and other
> > +	  security features. The payloads for the commands that enable
> > +	  those features may contain sensitive clear-text security
> > +	  material. Disable debug of those command payloads by default.
> > +	  If you are a kernel developer actively working on CXL
> > +	  security enabling say Y, otherwise say N.
> > +
> >  endif
> > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> > new file mode 100644
> > index 000000000000..745f5e0bfce3
> > --- /dev/null
> > +++ b/drivers/cxl/cxl.h
> > @@ -0,0 +1,93 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/* Copyright(c) 2020 Intel Corporation. */
> > +
> > +#ifndef __CXL_H__
> > +#define __CXL_H__
> > +
> > +#include <linux/bitfield.h>
> > +#include <linux/bitops.h>
> > +#include <linux/io.h>
> > +
> > +/* CXL 2.0 8.2.8.1 Device Capabilities Array Register */
> > +#define CXLDEV_CAP_ARRAY_OFFSET 0x0
> > +#define   CXLDEV_CAP_ARRAY_CAP_ID 0
> > +#define   CXLDEV_CAP_ARRAY_ID_MASK GENMASK(15, 0)
> > +#define   CXLDEV_CAP_ARRAY_COUNT_MASK GENMASK(47, 32)
> > +/* CXL 2.0 8.2.8.2.1 CXL Device Capabilities */
> > +#define CXLDEV_CAP_CAP_ID_DEVICE_STATUS 0x1
> > +#define CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX 0x2
> > +#define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3
> > +#define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000
> > +
> > +/* CXL 2.0 8.2.8.4 Mailbox Registers */
> > +#define CXLDEV_MBOX_CAPS_OFFSET 0x00
> > +#define   CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0)
> > +#define CXLDEV_MBOX_CTRL_OFFSET 0x04
> > +#define   CXLDEV_MBOX_CTRL_DOORBELL BIT(0)
> > +#define CXLDEV_MBOX_CMD_OFFSET 0x08
> > +#define   CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK GENMASK(15, 0)
> > +#define   CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK GENMASK(36, 16)
> > +#define CXLDEV_MBOX_STATUS_OFFSET 0x10
> > +#define   CXLDEV_MBOX_STATUS_RET_CODE_MASK GENMASK(47, 32)
> > +#define CXLDEV_MBOX_BG_CMD_STATUS_OFFSET 0x18
> > +#define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20
> > +
> > +/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
> > +#define CXLMDEV_STATUS_OFFSET 0x0
> > +#define   CXLMDEV_DEV_FATAL BIT(0)
> > +#define   CXLMDEV_FW_HALT BIT(1)
> > +#define   CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2)
> > +#define     CXLMDEV_MS_NOT_READY 0
> > +#define     CXLMDEV_MS_READY 1
> > +#define     CXLMDEV_MS_ERROR 2
> > +#define     CXLMDEV_MS_DISABLED 3
> > +#define CXLMDEV_READY(status)                                                  \
> > +	(FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) ==                \
> > +	 CXLMDEV_MS_READY)
> > +#define   CXLMDEV_MBOX_IF_READY BIT(4)
> > +#define   CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5)
> > +#define     CXLMDEV_RESET_NEEDED_NOT 0
> > +#define     CXLMDEV_RESET_NEEDED_COLD 1
> > +#define     CXLMDEV_RESET_NEEDED_WARM 2
> > +#define     CXLMDEV_RESET_NEEDED_HOT 3
> > +#define     CXLMDEV_RESET_NEEDED_CXL 4
> > +#define CXLMDEV_RESET_NEEDED(status)                                           \
> > +	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
> > +	 CXLMDEV_RESET_NEEDED_NOT)
> > +
> > +/**
> > + * struct cxl_mem - A CXL memory device
> > + * @pdev: The PCI device associated with this CXL device.
> > + * @regs: IO mappings to the device's MMIO
> > + * @status_regs: CXL 2.0 8.2.8.3 Device Status Registers
> > + * @mbox_regs: CXL 2.0 8.2.8.4 Mailbox Registers
> > + * @memdev_regs: CXL 2.0 8.2.8.5 Memory Device Registers
> > + * @payload_size: Size of space for payload
> > + *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
> > + * @mbox_mutex: Mutex to synchronize mailbox access.
> > + * @firmware_version: Firmware version for the memory device.
> > + * @pmem: Persistent memory capacity information.
> > + * @ram: Volatile memory capacity information.
> > + */
> > +struct cxl_mem {
> > +	struct pci_dev *pdev;
> > +	void __iomem *regs;
> > +
> > +	void __iomem *status_regs;
> > +	void __iomem *mbox_regs;
> > +	void __iomem *memdev_regs;
> > +
> > +	size_t payload_size;
> > +	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
> > +	char firmware_version[0x10];
> > +
> > +	struct {
> > +		struct range range;
> > +	} pmem;
> 
> Christoph raised this in v1, and I agree with him that his would me more compact
> and readable as
> 
> 	struct range pmem_range;
> 	struct range ram_range;
> 
> The discussion seemed to get lost without getting resolved that I can see.
> 

I had been waiting for Dan to chime in, since he authored it. I'll change it and
he can yell if he cares.

> > +
> > +	struct {
> > +		struct range range;
> > +	} ram;
> 
> > +};
> > +
> > +#endif /* __CXL_H__ */
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > index 99a6571508df..0a868a15badc 100644
> > --- a/drivers/cxl/mem.c
> > +++ b/drivers/cxl/mem.c
> 
> 
> ...
> 
> > +static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > +				 struct mbox_cmd *mbox_cmd)
> > +{
> > +	struct device *dev = &cxlm->pdev->dev;
> > +
> > +	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
> > +		mbox_cmd->opcode, mbox_cmd->size_in);
> > +
> > +	if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
> 
> Hmm.  Whilst I can see the advantage of this for debug, I'm not sure we want
> it upstream even under a rather evil looking CONFIG variable.
> 
> Is there a bigger lock we can use to avoid the chance of accidental enablement?

Any suggestions? I'm told this functionality was extremely valuable for NVDIMM,
though I haven't personally experienced it.

> 
> 
> > +		print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
> > +				     mbox_cmd->payload_in, mbox_cmd->size_in,
> > +				     true);
> > +	}
> > +}
> > +
> > +/**
> > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > + * @cxlm: The CXL memory device to communicate with.
> > + * @mbox_cmd: Command to send to the memory device.
> > + *
> > + * Context: Any context. Expects mbox_lock to be held.
> > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > + *         Caller should check the return code in @mbox_cmd to make sure it
> > + *         succeeded.
> > + *
> > + * This is a generic form of the CXL mailbox send command, thus the only I/O
> > + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > + * types of CXL devices may have further information available upon error
> > + * conditions.
> > + *
> > + * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > + * mailbox to be OS controlled and the secondary mailbox to be used by system
> > + * firmware. This allows the OS and firmware to communicate with the device and
> > + * not need to coordinate with each other. The driver only uses the primary
> > + * mailbox.
> > + */
> > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > +				 struct mbox_cmd *mbox_cmd)
> > +{
> > +	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > +	u64 cmd_reg, status_reg;
> > +	size_t out_len;
> > +	int rc;
> > +
> > +	lockdep_assert_held(&cxlm->mbox_mutex);
> > +
> > +	/*
> > +	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> > +	 *   1. Caller reads MB Control Register to verify doorbell is clear
> > +	 *   2. Caller writes Command Register
> > +	 *   3. Caller writes Command Payload Registers if input payload is non-empty
> > +	 *   4. Caller writes MB Control Register to set doorbell
> > +	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
> > +	 *   6. Caller reads MB Status Register to fetch Return code
> > +	 *   7. If command successful, Caller reads Command Register to get Payload Length
> > +	 *   8. If output payload is non-empty, host reads Command Payload Registers
> > +	 *
> > +	 * Hardware is free to do whatever it wants before the doorbell is rung,
> > +	 * and isn't allowed to change anything after it clears the doorbell. As
> > +	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
> > +	 * also happen in any order (though some orders might not make sense).
> > +	 */
> > +
> > +	/* #1 */
> > +	if (cxl_doorbell_busy(cxlm)) {
> > +		dev_err_ratelimited(&cxlm->pdev->dev,
> > +				    "Mailbox re-busy after acquiring\n");
> > +		return -EBUSY;
> > +	}
> > +
> > +	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
> > +			     mbox_cmd->opcode);
> > +	if (mbox_cmd->size_in) {
> > +		if (WARN_ON(!mbox_cmd->payload_in))
> > +			return -EINVAL;
> > +
> > +		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
> > +				      mbox_cmd->size_in);
> > +		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
> > +	}
> > +
> > +	/* #2, #3 */
> > +	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > +
> > +	/* #4 */
> > +	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
> > +	writel(CXLDEV_MBOX_CTRL_DOORBELL,
> > +	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
> > +
> > +	/* #5 */
> > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > +	if (rc == -ETIMEDOUT) {
> > +		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
> > +		return rc;
> > +	}
> > +
> > +	/* #6 */
> > +	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
> > +	mbox_cmd->return_code =
> > +		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
> > +
> > +	if (mbox_cmd->return_code != 0) {
> > +		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
> > +		return 0;
> 
> See earlier diversion whilst I was chasing my bug (another branch of this
> thread)
> 
> > +	}
> > +
> > +	/* #7 */
> > +	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > +	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
> > +
> > +	/* #8 */
> > +	if (out_len && mbox_cmd->payload_out)
> > +		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> > +
> > +	mbox_cmd->size_out = out_len;
> > +
> > +	return 0;
> > +}
> > +
> 
> 
> ...
> 
> > +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> > +				      u32 reg_hi)
> > +{
> > +	struct device *dev = &pdev->dev;
> > +	struct cxl_mem *cxlm;
> > +	void __iomem *regs;
> > +	u64 offset;
> > +	u8 bar;
> > +	int rc;
> > +
> > +	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
> > +	if (!cxlm) {
> > +		dev_err(dev, "No memory available\n");
> > +		return NULL;
> > +	}
> > +
> > +	offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
> > +	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
> > +
> > +	/* Basic sanity check that BAR is big enough */
> > +	if (pci_resource_len(pdev, bar) < offset) {
> > +		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
> > +			&pdev->resource[bar], (unsigned long long)offset);
> > +		return NULL;
> > +	}
> > +
> > +	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
> > +	if (rc != 0) {
> 
> if (rc) 
> 
> > +		dev_err(dev, "failed to map registers\n");
> > +		return NULL;
> > +	}
> > +	regs = pcim_iomap_table(pdev)[bar];
> > +
> > +	mutex_init(&cxlm->mbox_mutex);
> > +	cxlm->pdev = pdev;
> > +	cxlm->regs = regs + offset;
> > +
> > +	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> > +	return cxlm;
> > +}
> >  
> 
> ...
> 
> >  static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  {
> >  	struct device *dev = &pdev->dev;
> > -	int regloc;
> > +	struct cxl_mem *cxlm;
> > +	int rc, regloc, i;
> > +	u32 regloc_size;
> > +
> > +	rc = pcim_enable_device(pdev);
> > +	if (rc)
> > +		return rc;
> >  
> >  	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> >  	if (!regloc) {
> > @@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  		return -ENXIO;
> >  	}
> >  
> > -	return 0;
> > +	/* Get the size of the Register Locator DVSEC */
> > +	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
> > +	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
> > +
> > +	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
> > +
> > +	rc = -ENXIO;
> > +	for (i = regloc; i < regloc + regloc_size; i += 8) {
> > +		u32 reg_lo, reg_hi;
> > +		u8 reg_type;
> > +
> > +		/* "register low and high" contain other bits */
> 
> high doesn't contain any other bits so that's a tiny bit misleading.
> 
> > +		pci_read_config_dword(pdev, i, &reg_lo);
> > +		pci_read_config_dword(pdev, i + 4, &reg_hi);
> > +
> > +		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
> > +
> > +		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
> > +			rc = 0;
> 
> I sort of assumed this unusual structure was to allow for some future
> change, but checked end result and it still looks like this.
> So, drop the rc assignment here and...
> 

[snip]

> 
> return -ENODEV;
> 

[snip]

> 
> With above direct return, only get here if rc = -ENXIO.
> Could just as easily check if i >= regloc + regloc_size then it's
> obvious this is kind of canonical form of 'not found'.
> 
> 
> Alternative would be to treat the above as a 'find' loop then
> have the clxm = cxl_mem_create() outside of the loop.
> 

I don't recall honestly, but I think it was meant to help distinguish the
failure type.
ENXIO - No register locator found
ENODEV - Some BAR or other resource not found/mapped.

I think this distinction is shown through debug messages (or the lack thereof),
so I'm fine to just make it -ENODEV in any failure.
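
For reference, a rough sketch of the 'find loop' shape being suggested,
reusing the names from the quoted patch; this is just the control flow inside
cxl_mem_probe(), not the final code:

	struct cxl_mem *cxlm;
	u32 reg_lo = 0, reg_hi = 0;
	int i;

	/* Find the memory device register block in the Register Locator DVSEC */
	for (i = regloc; i < regloc + regloc_size; i += 8) {
		pci_read_config_dword(pdev, i, &reg_lo);
		pci_read_config_dword(pdev, i + 4, &reg_hi);

		if (FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo) ==
		    CXL_REGLOC_RBI_MEMDEV)
			break;
	}

	/* The loop ran off the end of the DVSEC: canonical 'not found' */
	if (i >= regloc + regloc_size)
		return -ENODEV;

	cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
	if (!cxlm)
		return -ENODEV;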

> 
> > +
> > +	rc = cxl_mem_setup_regs(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_setup_mailbox(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	return cxl_mem_identify(cxlm);
> >  }
> >  
> >  static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> > index f135b9f7bb21..ffcbc13d7b5b 100644
> > --- a/drivers/cxl/pci.h
> > +++ b/drivers/cxl/pci.h
> > @@ -14,5 +14,18 @@
> >  #define PCI_DVSEC_ID_CXL		0x0
> >  
> >  #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> > +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
> > +
> > +/* BAR Indicator Register (BIR) */
> > +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
> > +
> > +/* Register Block Identifier (RBI) */
> > +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
> > +#define CXL_REGLOC_RBI_EMPTY 0
> > +#define CXL_REGLOC_RBI_COMPONENT 1
> > +#define CXL_REGLOC_RBI_VIRT 2
> > +#define CXL_REGLOC_RBI_MEMDEV 3
> > +
> > +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
> 
> CXL_REGLOCL_ADDR_LOW_MASK perhaps for clarity?
> 
> >  
> >  #endif /* __CXL_PCI_H__ */
> > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > index e709ae8235e7..6267ca9ae683 100644
> > --- a/include/uapi/linux/pci_regs.h
> > +++ b/include/uapi/linux/pci_regs.h
> > @@ -1080,6 +1080,7 @@
> >  
> >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> >  #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
> > +#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000
> 
> Seems sensible to add the revision mask as well.
> The vendor id is currently read using a word read rather than a dword, but perhaps
> it would be neater to add that as well for completeness?
> 
> Having said that, given Bjorn's comment on clashes and the fact he'd rather see
> this stuff defined in drivers and combined later (see review patch 1 and follow
> the link) perhaps this series should not touch this header at all.

I'm fine to move it back.

>  
> >  #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
> >  
> >  /* Data Link Feature */
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10 18:46           ` Ariel.Sibley
@ 2021-02-10 19:12             ` Ben Widawsky
  0 siblings, 0 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 19:12 UTC (permalink / raw)
  To: Ariel.Sibley
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	helgaas, cbrowy, hch, dan.j.williams, david, rientjes, ira.weiny,
	jcm, Jonathan.Cameron, rafael.j.wysocki, rdunlap, vishal.l.verma,
	jgroves, sean.v.kelley, Ahmad.Danesh, Varada.Dighe,
	Kirthi.Shenoy, Sanjay.Goyal

On 21-02-10 18:46:04, Ariel.Sibley@microchip.com wrote:
> > > > > > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > > > > > index c4ba3aa0a05d..08eaa8e52083 100644
> > > > > > --- a/drivers/cxl/Kconfig
> > > > > > +++ b/drivers/cxl/Kconfig
> > > > > > @@ -33,6 +33,24 @@ config CXL_MEM
> > > > > >
> > > > > >           If unsure say 'm'.
> > > > > >
> > > > > > +config CXL_MEM_RAW_COMMANDS
> > > > > > +       bool "RAW Command Interface for Memory Devices"
> > > > > > +       depends on CXL_MEM
> > > > > > +       help
> > > > > > +         Enable CXL RAW command interface.
> > > > > > +
> > > > > > +         The CXL driver ioctl interface may assign a kernel ioctl command
> > > > > > +         number for each specification defined opcode. At any given point in
> > > > > > +         time the number of opcodes that the specification defines and a device
> > > > > > +         may implement may exceed the kernel's set of associated ioctl function
> > > > > > +         numbers. The mismatch is either by omission, specification is too new,
> > > > > > +         or by design. When prototyping new hardware, or developing / debugging
> > > > > > +         the driver it is useful to be able to submit any possible command to
> > > > > > +         the hardware, even commands that may crash the kernel due to their
> > > > > > +         potential impact to memory currently in use by the kernel.
> > > > > > +
> > > > > > +         If developing CXL hardware or the driver say Y, otherwise say N.
> > > > >
> > > > > Blocking RAW commands by default will prevent vendors from developing user
> > > > > space tools that utilize vendor specific commands. Vendors of CXL.mem devices
> > > > > should take ownership of ensuring any vendor defined commands that could cause
> > > > > user data to be exposed or corrupted are disabled at the device level for
> > > > > shipping configurations.
> > > >
> > > > Thanks for bringing this up, Ariel. If there is a recommendation on how to codify
> > > > this, I would certainly like to know because the explanation will be long.
> > > >
> > > > ---
> > > >
> > > > The background:
> > > >
> > > > The enabling/disabling of the Kconfig option is driven by the distribution
> > > > and/or system integrator. Even if we made the default 'y', nothing stops them
> > > > from changing that. If you are using this driver in production and insist on
> > > > using RAW commands, you are free to carry around a small patch to get rid of the
> > > > WARN (it is a one-liner).
> > > >
> > > > To recap why this is in place - the driver owns the sanctity of the device and
> > > > therefore a [large] part of the whole system. What we can do as driver writers
> > > > is figure out the set of commands that are "safe" and allow those. Aside from
> > > > being able to validate them, we're able to mediate them with other parallel
> > > > operations that might conflict. We gain the ability to squint extra hard at bug
> > > > reports. We provide a reason to try to use a well defined part of the spec.
> > > > Realizing that only allowing that small set of commands in a rapidly growing
> > > > ecosystem is not a welcoming API; we decided on RAW.
> > > >
> > > > Vendor commands can be one of two types:
> > > > 1. Some functionality probably most vendors want.
> > > > 2. Functionality that is really single vendor specific.
> > > >
> > > > Hopefully we can agree that the path for case #1 is to work with the consortium
> > > > to standardize a command that does what is needed and that can eventually become
> > > > part of UAPI. The situation is unfortunate, but temporary. If you won't be able
> > > > to upgrade your kernel, patch out the WARN as above.
> > > >
> > > > The second situation is interesting and does need some more thought and
> > > > discussion.
> > > >
> > > > ---
> > > >
> > > > I see 3 realistic options for truly vendor specific commands.
> > > > 1. Tough noogies. Vendors aren't special and they shouldn't do that.
> > > > 2. modparam to disable the WARN for specific devices (let the sysadmin decide)
> > > > 3. Try to make them part of UAPI.
> > > >
> > > > The right answer to me is #1, but I also realize I live in the real world.
> > > >
> > > > #2 provides too much flexibility. Vendors will just do what they please and
> > > > distros and/or integrators will be seen as hostile if they don't accommodate.
> > > >
> > > > I like #3, but I have a feeling not everyone will agree. My proposal for vendor
> > > > specific commands is, if it's clear it's truly a unique command, allow adding it
> > > > as part of UAPI (moving it out of RAW). I expect like 5 of these, ever. If we
> > > > start getting multiple per vendor, we've failed. The infrastructure is already
> > > > in place to allow doing this pretty easily. I think we'd have to draw up some
> > > > guidelines (like adding test cases for the command) to allow these to come in.
> > > > Anything with command effects is going to need extra scrutiny.
> > >
> > > This would necessitate adding specific opcode values in the range C000h-FFFFh
> > > to UAPI, and those would then be allowed for all CXL.mem devices, correct?  If
> > > so, I do not think this is the right approach, as opcodes in this range are by
> > > definition vendor defined.  A given opcode value will have totally different
> > > effects depending on the vendor.
> > 
> > Perhaps I didn't explain well enough. The UAPI would define the command ID to
> > opcode mapping, for example 0xC000. There would be a validation step in the
> > driver where it determines if it's actually the correct hardware to execute on.
> > So it would be entirely possible to have multiple vendor commands with the same
> > opcode.
> > 
> > So UAPI might be this:
> >         ___C(GET_HEALTH_INFO, "Get Health Info"),                         \
> >         ___C(GET_LOG, "Get Log"),                                         \
> >         ___C(VENDOR_FOO_XXX, "FOO"),                                      \
> >         ___C(VENDOR_BAR_XXX, "BAR"),                                      \
> > 
> > User space just picks the command they want, FOO/BAR. If they use VENDOR_BAR_XXX
> > on VENDOR_FOO's hardware, they will get an error return value.
> 
> Would the driver be doing this enforcement of vendor ID / opcode
> compatibility, or would the error return value mentioned here be from the
> device?  My concern is where the same opcode has two meanings for different
> vendors.  For example, for Vendor A opcode 0xC000 might report some form of
> status information, but for Vendor B it might have data side effects.  There
> may not have been any UAPI intention to expose 0xC000 for Vendor B devices,
> but the existence of 0xC000 in UAPI for Vendor A results in the data
> corrupting version of 0xC000 for Vendor B being allowed.  It would seem to me
> that even if the commands are in UAPI, the driver would still need to rely on
> the contents of the CEL to determine if the command should be allowed.

I think I might not be properly understanding your concern. There are two types
of errors in UAPI that represent 3 error conditions:

1. errno from the ioctl - parameter invalid kind of stuff, this would include using
   the vendor A UAPI on vendor B's device (assuming matching opcodes).
2. errno from the ioctl - transport error of some sort in the mailbox command -
   timeout on doorbell kind of thing.
3. cxl_send_command.retval - Device's error code.

Did that address your concern?
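
To make those three cases concrete, a small userspace fragment (illustrative
only; the specific errno values for cases 1 and 2 are not pinned down by the
patch):

	struct cxl_send_command cmd = { /* id, payloads, sizes filled in */ };

	if (ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd) < 0) {
		/*
		 * Cases 1 and 2: the ioctl itself failed, either because the
		 * parameters were rejected (e.g. a vendor A command sent to
		 * vendor B's device) or because the mailbox transport failed
		 * (e.g. doorbell timeout); errno tells the two apart.
		 */
		perror("CXL_MEM_SEND_COMMAND");
	} else if (cmd.retval != 0) {
		/*
		 * Case 3: the command reached the device and the device
		 * reported an error in its own return code.
		 */
		fprintf(stderr, "device returned %u\n", cmd.retval);
	}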

>  
> > > I think you may be on to something with the command effects.  But rather than
> > > "extra scrutiny" for opcodes that have command effects, would it make sense to
> > > allow vendor defined opcodes that have Bit[5:0] in the Command Effect field of
> > > the CEL Entry Structure (Table 173) set to 0?  In conjunction, those bits
> > > represent any change to the configuration or data within the device.  For
> > > commands that have no such effects, is there harm to allowing them?  Of
> > > course, this approach relies on the vendor to not misrepresent the command
> > > effects.
> > >
> > 
> > That last sentence is what worries me :-)
> 
> One must also rely on the vendor to not simply corrupt data at random. :) IMO
> the contents of the CEL should be believed by the driver, rather than the
> driver treating the device as a hostile actor.
> 

I respect your opinion, but my opinion is that driver writers absolutely cannot
rely on that. It would further the conversation a great deal to get concrete
examples of commands that couldn't be part of the core spec and had no effects.
I assume all vendors are going to avoid doing that, which is a real shame.

So far I haven't seen the consortium shoot something down from a vendor because
it is too vendor specific...
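
Going back to the validation step described earlier in the thread (the driver
checking that a vendor command is only executed on that vendor's hardware),
a rough sketch; the descriptor field and helper below are hypothetical and not
part of the posted series:

/*
 * Hypothetical: a per-command descriptor records which PCI vendor a
 * vendor-defined UAPI command belongs to; 0 means spec-defined / any vendor.
 */
struct cxl_vendor_cmd_desc {
	u16 opcode;
	u16 pci_vendor;
};

static int cxl_check_vendor_cmd(struct cxl_mem *cxlm,
				const struct cxl_vendor_cmd_desc *desc)
{
	if (desc->pci_vendor && desc->pci_vendor != cxlm->pdev->vendor)
		return -ENODEV;	/* e.g. VENDOR_BAR_XXX issued on FOO hardware */

	return 0;
}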

> > 
> > 
> > > >
> > > > In my opinion, as maintainers of the driver, we do owe the community an answer
> > > > as to our direction for this. Dan, what is your thought?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 13:32   ` Jonathan Cameron
  2021-02-10 15:07     ` Jonathan Cameron
@ 2021-02-10 19:32     ` Ben Widawsky
  1 sibling, 0 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 19:32 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-10 13:32:52, Jonathan Cameron wrote:
> On Tue, 9 Feb 2021 16:02:53 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > Provide enough functionality to utilize the mailbox of a memory device.
> > The mailbox is used to interact with the firmware running on the memory
> > device. The flow is proven with one implemented command, "identify".
> > Because the class code has already told the driver this is a memory
> > device and the identify command is mandatory.
> > 
> > CXL devices contain an array of capabilities that describe the
> > interactions software can have with the device or firmware running on
> > the device. A CXL compliant device must implement the device status and
> > the mailbox capability. Additionally, a CXL compliant memory device must
> > implement the memory device capability. Each of the capabilities can
> > [will] provide an offset within the MMIO region for interacting with the
> > CXL device.
> > 
> > The capabilities tell the driver how to find and map the register space
> > for CXL Memory Devices. The registers are required to utilize the CXL
> > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > and secondary. The secondary mailbox is earmarked for system firmware,
> > and not handled in this driver.
> > 
> > Primary mailboxes are capable of generating an interrupt when submitting
> > a background command. That implementation is saved for a later time.
> > 
> > Link: https://www.computeexpresslink.org/download-the-specification
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> 
> Hi Ben,
> 
> 
> > +/**
> > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > + * @cxlm: The CXL memory device to communicate with.
> > + * @mbox_cmd: Command to send to the memory device.
> > + *
> > + * Context: Any context. Expects mbox_lock to be held.
> > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > + *         Caller should check the return code in @mbox_cmd to make sure it
> > + *         succeeded.
> 
> cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> enters an infinite loop as a result.
> 
> I haven't checked other paths, but to my mind it is not a good idea to require
> two levels of error checking - the example here proves how easy it is to forget
> one.
> 
> Now all I have to do is figure out why I'm getting an error in the first place!
> 
> Jonathan
> 
> 
> 
> > + *
> > + * This is a generic form of the CXL mailbox send command, thus the only I/O
> > + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > + * types of CXL devices may have further information available upon error
> > + * conditions.
> > + *
> > + * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > + * mailbox to be OS controlled and the secondary mailbox to be used by system
> > + * firmware. This allows the OS and firmware to communicate with the device and
> > + * not need to coordinate with each other. The driver only uses the primary
> > + * mailbox.
> > + */
> > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > +				 struct mbox_cmd *mbox_cmd)
> > +{
> > +	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > +	u64 cmd_reg, status_reg;
> > +	size_t out_len;
> > +	int rc;
> > +
> > +	lockdep_assert_held(&cxlm->mbox_mutex);
> > +
> > +	/*
> > +	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> > +	 *   1. Caller reads MB Control Register to verify doorbell is clear
> > +	 *   2. Caller writes Command Register
> > +	 *   3. Caller writes Command Payload Registers if input payload is non-empty
> > +	 *   4. Caller writes MB Control Register to set doorbell
> > +	 *   5. Caller either polls for doorbell to be clear or waits for interrupt if configured
> > +	 *   6. Caller reads MB Status Register to fetch Return code
> > +	 *   7. If command successful, Caller reads Command Register to get Payload Length
> > +	 *   8. If output payload is non-empty, host reads Command Payload Registers
> > +	 *
> > +	 * Hardware is free to do whatever it wants before the doorbell is rung,
> > +	 * and isn't allowed to change anything after it clears the doorbell. As
> > +	 * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
> > +	 * also happen in any order (though some orders might not make sense).
> > +	 */
> > +
> > +	/* #1 */
> > +	if (cxl_doorbell_busy(cxlm)) {
> > +		dev_err_ratelimited(&cxlm->pdev->dev,
> > +				    "Mailbox re-busy after acquiring\n");
> > +		return -EBUSY;
> > +	}
> > +
> > +	cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
> > +			     mbox_cmd->opcode);
> > +	if (mbox_cmd->size_in) {
> > +		if (WARN_ON(!mbox_cmd->payload_in))
> > +			return -EINVAL;
> > +
> > +		cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
> > +				      mbox_cmd->size_in);
> > +		memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
> > +	}
> > +
> > +	/* #2, #3 */
> > +	writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > +
> > +	/* #4 */
> > +	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
> > +	writel(CXLDEV_MBOX_CTRL_DOORBELL,
> > +	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
> > +
> > +	/* #5 */
> > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > +	if (rc == -ETIMEDOUT) {
> > +		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
> > +		return rc;
> > +	}
> > +
> > +	/* #6 */
> > +	status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
> > +	mbox_cmd->return_code =
> > +		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
> > +
> > +	if (mbox_cmd->return_code != 0) {
> > +		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
> > +		return 0;
> 
> I'd return some sort of error in this path.  Otherwise the sort of missing
> handling I mention above is too easy to hit.
> 

I want to keep this because I think userspace might want to submit commands
and get back the error. This is separating transport errors from device errors
and making them available discretely.

I started another thread about adding a wrapper to handle kernel usages. I just
didn't see this point when I first looked.
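
For reference, a minimal sketch of the kind of kernel-side wrapper being
discussed, so in-kernel callers only have one return value to check; the
helper name and the -ENXIO mapping are assumptions, not code from the series:

static int cxl_mem_mbox_send_cmd_checked(struct cxl_mem *cxlm,
					 struct mbox_cmd *mbox_cmd)
{
	int rc;

	rc = cxl_mem_mbox_send_cmd(cxlm, mbox_cmd);
	if (rc)
		return rc;	/* transport error, e.g. -ETIMEDOUT */

	if (mbox_cmd->return_code != CXL_MBOX_SUCCESS)
		return -ENXIO;	/* device-reported error */

	return 0;
}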

> > +	}
> > +
> > +	/* #7 */
> > +	cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
> > +	out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
> > +
> > +	/* #8 */
> > +	if (out_len && mbox_cmd->payload_out)
> > +		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> > +
> > +	mbox_cmd->size_out = out_len;
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
> > + * @cxlm: The memory device to gain access to.
> > + *
> > + * Context: Any context. Takes the mbox_lock.
> > + * Return: 0 if exclusive access was acquired.
> > + */
> > +static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
> > +{
> > +	struct device *dev = &cxlm->pdev->dev;
> > +	int rc = -EBUSY;
> > +	u64 md_status;
> > +
> > +	mutex_lock_io(&cxlm->mbox_mutex);
> > +
> > +	/*
> > +	 * XXX: There is some amount of ambiguity in the 2.0 version of the spec
> > +	 * around the mailbox interface ready (8.2.8.5.1.1).  The purpose of the
> > +	 * bit is to allow firmware running on the device to notify the driver
> > +	 * that it's ready to receive commands. It is unclear if the bit needs
> > +	 * to be read for each transaction mailbox, ie. the firmware can switch
> > +	 * it on and off as needed. Second, there is no defined timeout for
> > +	 * mailbox ready, like there is for the doorbell interface.
> > +	 *
> > +	 * Assumptions:
> > +	 * 1. The firmware might toggle the Mailbox Interface Ready bit, check
> > +	 *    it for every command.
> > +	 *
> > +	 * 2. If the doorbell is clear, the firmware should have first set the
> > +	 *    Mailbox Interface Ready bit. Therefore, waiting for the doorbell
> > +	 *    to be ready is sufficient.
> > +	 */
> > +	rc = cxl_mem_wait_for_doorbell(cxlm);
> > +	if (rc) {
> > +		dev_warn(dev, "Mailbox interface not ready\n");
> > +		goto out;
> > +	}
> > +
> > +	md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET);
> > +	if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) {
> > +		dev_err(dev,
> > +			"mbox: reported doorbell ready, but not mbox ready\n");
> > +		goto out;
> > +	}
> > +
> > +	/*
> > +	 * Hardware shouldn't allow a ready status but also have failure bits
> > +	 * set. Spit out an error, this should be a bug report
> > +	 */
> > +	rc = -EFAULT;
> > +	if (md_status & CXLMDEV_DEV_FATAL) {
> > +		dev_err(dev, "mbox: reported ready, but fatal\n");
> > +		goto out;
> > +	}
> > +	if (md_status & CXLMDEV_FW_HALT) {
> > +		dev_err(dev, "mbox: reported ready, but halted\n");
> > +		goto out;
> > +	}
> > +	if (CXLMDEV_RESET_NEEDED(md_status)) {
> > +		dev_err(dev, "mbox: reported ready, but reset needed\n");
> > +		goto out;
> > +	}
> > +
> > +	/* with lock held */
> > +	return 0;
> > +
> > +out:
> > +	mutex_unlock(&cxlm->mbox_mutex);
> > +	return rc;
> > +}
> > +
> > +/**
> > + * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
> > + * @cxlm: The CXL memory device to communicate with.
> > + *
> > + * Context: Any context. Expects mbox_lock to be held.
> > + */
> > +static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> > +{
> > +	mutex_unlock(&cxlm->mbox_mutex);
> > +}
> > +
> > +/**
> > + * cxl_mem_setup_regs() - Setup necessary MMIO.
> > + * @cxlm: The CXL memory device to communicate with.
> > + *
> > + * Return: 0 if all necessary registers mapped.
> > + *
> > + * A memory device is required by spec to implement a certain set of MMIO
> > + * regions. The purpose of this function is to enumerate and map those
> > + * registers.
> > + */
> > +static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
> > +{
> > +	struct device *dev = &cxlm->pdev->dev;
> > +	int cap, cap_count;
> > +	u64 cap_array;
> > +
> > +	cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET);
> > +	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
> > +	    CXLDEV_CAP_ARRAY_CAP_ID)
> > +		return -ENODEV;
> > +
> > +	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
> > +
> > +	for (cap = 1; cap <= cap_count; cap++) {
> > +		void __iomem *register_block;
> > +		u32 offset;
> > +		u16 cap_id;
> > +
> > +		cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff;
> > +		offset = readl(cxlm->regs + cap * 0x10 + 0x4);
> > +		register_block = cxlm->regs + offset;
> > +
> > +		switch (cap_id) {
> > +		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
> > +			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
> > +			cxlm->status_regs = register_block;
> > +			break;
> > +		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
> > +			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
> > +			cxlm->mbox_regs = register_block;
> > +			break;
> > +		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
> > +			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
> > +			break;
> > +		case CXLDEV_CAP_CAP_ID_MEMDEV:
> > +			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
> > +			cxlm->memdev_regs = register_block;
> > +			break;
> > +		default:
> > +			dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset);
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) {
> > +		dev_err(dev, "registers not found: %s%s%s\n",
> > +			!cxlm->status_regs ? "status " : "",
> > +			!cxlm->mbox_regs ? "mbox " : "",
> > +			!cxlm->memdev_regs ? "memdev" : "");
> > +		return -ENXIO;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> > +{
> > +	const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET);
> > +
> > +	cxlm->payload_size =
> > +		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
> > +
> > +	/*
> > +	 * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register
> > +	 *
> > +	 * If the size is too small, mandatory commands will not work and so
> > +	 * there's no point in going forward. If the size is too large, there's
> > +	 * no harm in soft limiting it.
> > +	 */
> > +	cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
> > +	if (cxlm->payload_size < 256) {
> > +		dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
> > +			cxlm->payload_size);
> > +		return -ENXIO;
> > +	}
> > +
> > +	dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
> > +		cxlm->payload_size);
> > +
> > +	return 0;
> > +}
> > +
> > +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> > +				      u32 reg_hi)
> > +{
> > +	struct device *dev = &pdev->dev;
> > +	struct cxl_mem *cxlm;
> > +	void __iomem *regs;
> > +	u64 offset;
> > +	u8 bar;
> > +	int rc;
> > +
> > +	cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
> > +	if (!cxlm) {
> > +		dev_err(dev, "No memory available\n");
> > +		return NULL;
> > +	}
> > +
> > +	offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
> > +	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
> > +
> > +	/* Basic sanity check that BAR is big enough */
> > +	if (pci_resource_len(pdev, bar) < offset) {
> > +		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
> > +			&pdev->resource[bar], (unsigned long long)offset);
> > +		return NULL;
> > +	}
> > +
> > +	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
> > +	if (rc != 0) {
> > +		dev_err(dev, "failed to map registers\n");
> > +		return NULL;
> > +	}
> > +	regs = pcim_iomap_table(pdev)[bar];
> > +
> > +	mutex_init(&cxlm->mbox_mutex);
> > +	cxlm->pdev = pdev;
> > +	cxlm->regs = regs + offset;
> > +
> > +	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> > +	return cxlm;
> > +}
> >  
> >  static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> >  {
> > @@ -28,10 +423,85 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> >  	return 0;
> >  }
> >  
> > +/**
> > + * cxl_mem_identify() - Send the IDENTIFY command to the device.
> > + * @cxlm: The device to identify.
> > + *
> > + * Return: 0 if identify was executed successfully.
> > + *
> > + * This will dispatch the identify command to the device and on success populate
> > + * structures to be exported to sysfs.
> > + */
> > +static int cxl_mem_identify(struct cxl_mem *cxlm)
> > +{
> > +	struct cxl_mbox_identify {
> > +		char fw_revision[0x10];
> > +		__le64 total_capacity;
> > +		__le64 volatile_capacity;
> > +		__le64 persistent_capacity;
> > +		__le64 partition_align;
> > +		__le16 info_event_log_size;
> > +		__le16 warning_event_log_size;
> > +		__le16 failure_event_log_size;
> > +		__le16 fatal_event_log_size;
> > +		__le32 lsa_size;
> > +		u8 poison_list_max_mer[3];
> > +		__le16 inject_poison_limit;
> > +		u8 poison_caps;
> > +		u8 qos_telemetry_caps;
> > +	} __packed id;
> > +	struct mbox_cmd mbox_cmd = {
> > +		.opcode = CXL_MBOX_OP_IDENTIFY,
> > +		.payload_out = &id,
> > +		.size_in = 0,
> > +	};
> > +	int rc;
> > +
> > +	/* Retrieve initial device memory map */
> > +	rc = cxl_mem_mbox_get(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > +	cxl_mem_mbox_put(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	/* TODO: Handle retry or reset responses from firmware. */
> > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > +		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > +			mbox_cmd.return_code);
> > +		return -ENXIO;
> > +	}
> > +
> > +	if (mbox_cmd.size_out != sizeof(id))
> > +		return -ENXIO;
> > +
> > +	/*
> > +	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> > +	 * For now, only the capacity is exported in sysfs
> > +	 */
> > +	cxlm->ram.range.start = 0;
> > +	cxlm->ram.range.end = le64_to_cpu(id.volatile_capacity) - 1;
> > +
> > +	cxlm->pmem.range.start = 0;
> > +	cxlm->pmem.range.end = le64_to_cpu(id.persistent_capacity) - 1;
> > +
> > +	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
> > +
> > +	return rc;
> > +}
> > +
> >  static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  {
> >  	struct device *dev = &pdev->dev;
> > -	int regloc;
> > +	struct cxl_mem *cxlm;
> > +	int rc, regloc, i;
> > +	u32 regloc_size;
> > +
> > +	rc = pcim_enable_device(pdev);
> > +	if (rc)
> > +		return rc;
> >  
> >  	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
> >  	if (!regloc) {
> > @@ -39,7 +509,44 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  		return -ENXIO;
> >  	}
> >  
> > -	return 0;
> > +	/* Get the size of the Register Locator DVSEC */
> > +	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
> > +	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
> > +
> > +	regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
> > +
> > +	rc = -ENXIO;
> > +	for (i = regloc; i < regloc + regloc_size; i += 8) {
> > +		u32 reg_lo, reg_hi;
> > +		u8 reg_type;
> > +
> > +		/* "register low and high" contain other bits */
> > +		pci_read_config_dword(pdev, i, &reg_lo);
> > +		pci_read_config_dword(pdev, i + 4, &reg_hi);
> > +
> > +		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
> > +
> > +		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
> > +			rc = 0;
> > +			cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
> > +			if (!cxlm)
> > +				rc = -ENODEV;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_setup_regs(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_setup_mailbox(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	return cxl_mem_identify(cxlm);
> >  }
> >  
> >  static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> > index f135b9f7bb21..ffcbc13d7b5b 100644
> > --- a/drivers/cxl/pci.h
> > +++ b/drivers/cxl/pci.h
> > @@ -14,5 +14,18 @@
> >  #define PCI_DVSEC_ID_CXL		0x0
> >  
> >  #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET		0x8
> > +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET	0xC
> > +
> > +/* BAR Indicator Register (BIR) */
> > +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
> > +
> > +/* Register Block Identifier (RBI) */
> > +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
> > +#define CXL_REGLOC_RBI_EMPTY 0
> > +#define CXL_REGLOC_RBI_COMPONENT 1
> > +#define CXL_REGLOC_RBI_VIRT 2
> > +#define CXL_REGLOC_RBI_MEMDEV 3
> > +
> > +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
> >  
> >  #endif /* __CXL_PCI_H__ */
> > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > index e709ae8235e7..6267ca9ae683 100644
> > --- a/include/uapi/linux/pci_regs.h
> > +++ b/include/uapi/linux/pci_regs.h
> > @@ -1080,6 +1080,7 @@
> >  
> >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> >  #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
> > +#define PCI_DVSEC_HEADER1_LENGTH_MASK	0xFFF00000
> >  #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
> >  
> >  /* Data Link Feature */
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 18:53     ` Ben Widawsky
@ 2021-02-10 19:54       ` Dan Williams
  2021-02-11 10:01         ` Jonathan Cameron
  0 siblings, 1 reply; 57+ messages in thread
From: Dan Williams @ 2021-02-10 19:54 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: Jonathan Cameron, linux-cxl, Linux ACPI,
	Linux Kernel Mailing List, linux-nvdimm, Linux PCI,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, David Hildenbrand,
	David Rientjes, Ira Weiny, Jon Masters, Rafael Wysocki,
	Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Wed, Feb 10, 2021 at 10:53 AM Ben Widawsky <ben.widawsky@intel.com> wrote:
[..]
> > Christoph raised this in v1, and I agree with him that this would be more compact
> > and readable as
> >
> >       struct range pmem_range;
> >       struct range ram_range;
> >
> > The discussion seemed to get lost without getting resolved, as far as I can see.
> >
>
> I had been waiting for Dan to chime in, since he authored it. I'll change it and
> he can yell if he cares.

No concerns from me.

>
> > > +
> > > +   struct {
> > > +           struct range range;
> > > +   } ram;
> >
> > > +};
> > > +
> > > +#endif /* __CXL_H__ */
> > > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > > index 99a6571508df..0a868a15badc 100644
> > > --- a/drivers/cxl/mem.c
> > > +++ b/drivers/cxl/mem.c
> >
> >
> > ...
> >
> > > +static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > > +                            struct mbox_cmd *mbox_cmd)
> > > +{
> > > +   struct device *dev = &cxlm->pdev->dev;
> > > +
> > > +   dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
> > > +           mbox_cmd->opcode, mbox_cmd->size_in);
> > > +
> > > +   if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
> >
> > Hmm.  Whilst I can see the advantage of this for debug, I'm not sure we want
> > it upstream even under a rather evil looking CONFIG variable.
> >
> > Is there a bigger lock we can use to avoid chance of accidental enablement?
>
> Any suggestions? I'm told this functionality was extremely valuable for NVDIMM,
> though I haven't personally experienced it.

Yeah, there was no problem with the identical mechanism in LIBNVDIMM
land. However, I notice that the useful feature for LIBNVDIMM is the
option to dump all payloads. This one only fires on timeouts which is
less useful. So I'd say fix it to dump all payloads on the argument
that the safety mechanism was proven with the LIBNVDIMM precedent, or
delete it altogether to maintain v5.12 momentum. Payload dumping can
be added later.
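
Concretely, I'd expect something like the below in the send path, mirroring
what LIBNVDIMM does (untested sketch, and cxl_mem_mbox_dump is just a name I
made up, not something in the patch):

static void cxl_mem_mbox_dump(struct cxl_mem *cxlm, struct mbox_cmd *mbox_cmd)
{
	if (!IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG))
		return;

	dev_dbg(&cxlm->pdev->dev, "opcode: %#x size_in: %zu size_out: %zu\n",
		mbox_cmd->opcode, mbox_cmd->size_in, mbox_cmd->size_out);
	/* hex dumps are no-ops when the sizes are zero */
	print_hex_dump_debug("Payload in  ", DUMP_PREFIX_OFFSET, 16, 1,
			     mbox_cmd->payload_in, mbox_cmd->size_in, true);
	print_hex_dump_debug("Payload out ", DUMP_PREFIX_OFFSET, 16, 1,
			     mbox_cmd->payload_out, mbox_cmd->size_out, true);
}

i.e. called on every command completion rather than only from the timeout
handler.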

[..]
> > > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > > index e709ae8235e7..6267ca9ae683 100644
> > > --- a/include/uapi/linux/pci_regs.h
> > > +++ b/include/uapi/linux/pci_regs.h
> > > @@ -1080,6 +1080,7 @@
> > >
> > >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> > >  #define PCI_DVSEC_HEADER1          0x4 /* Designated Vendor-Specific Header1 */
> > > +#define PCI_DVSEC_HEADER1_LENGTH_MASK      0xFFF00000
> >
> > Seems sensible to add the revision mask as well.
> > The vendor id currently read using a word read rather than dword, but perhaps
> > neater to add that as well for completeness?
> >
> > Having said that, given Bjorn's comment on clashes and the fact he'd rather see
> > this stuff defined in drivers and combined later (see review patch 1 and follow
> > the link) perhaps this series should not touch this header at all.
>
> I'm fine to move it back.

Yeah, we're playing tennis now between Bjorn's and Christoph's
comments, but I like Bjorn's suggestion of "deduplicate post merge"
given the bloom of DVSEC infrastructure landing at the same time.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-10 18:45   ` Jonathan Cameron
@ 2021-02-10 20:22     ` Ben Widawsky
  2021-02-11  4:40     ` Dan Williams
  1 sibling, 0 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-10 20:22 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On 21-02-10 18:45:40, Jonathan Cameron wrote:
> On Tue, 9 Feb 2021 16:02:55 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > Add a straightforward IOCTL that provides a mechanism for userspace to
> > query the supported memory device commands. CXL commands as they appear
> > to userspace are described as part of the UAPI kerneldoc. The command
> > list returned via this IOCTL will contain the full set of commands that
> > the driver supports, however, some of those commands may not be
> > available for use by userspace.
> > 
> > Memory device commands first appear in the CXL 2.0 specification. They
> > are submitted through a mailbox mechanism specified also originally
> > specified in the CXL 2.0 specification.
> > 
> > The send command allows userspace to issue mailbox commands directly to
> > the hardware. The list of available commands to send are the output of
> > the query command. The driver verifies basic properties of the command
> > and possibly inspect the input (or output) payload to determine whether
> > or not the command is allowed (or might taint the kernel).
> > 
> > Reported-by: kernel test robot <lkp@intel.com> # bug in earlier revision
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Reviewed-by: Dan Williams <dan.j.willams@intel.com>
> 
> A bit of anti-macro commentary below.  Heavy use of them may make the code
> shorter, but I'd argue they make it harder to do review if you've not looked
> at a given bit of code for a while.
> 
> Also there is a bit of documentation in here for flags that don't seem to
> exist (at this stage anyway) - may just be in the wrong patch.
> 
> Jonathan
> 
> 
> > ---
> >  .clang-format                                 |   1 +
> >  .../userspace-api/ioctl/ioctl-number.rst      |   1 +
> >  drivers/cxl/mem.c                             | 291 +++++++++++++++++-
> >  include/uapi/linux/cxl_mem.h                  | 152 +++++++++
> >  4 files changed, 443 insertions(+), 2 deletions(-)
> >  create mode 100644 include/uapi/linux/cxl_mem.h
> > 
> > diff --git a/.clang-format b/.clang-format
> > index 10dc5a9a61b3..3f11c8901b43 100644
> > --- a/.clang-format
> > +++ b/.clang-format
> > @@ -109,6 +109,7 @@ ForEachMacros:
> >    - 'css_for_each_child'
> >    - 'css_for_each_descendant_post'
> >    - 'css_for_each_descendant_pre'
> > +  - 'cxl_for_each_cmd'
> >    - 'device_for_each_child_node'
> >    - 'dma_fence_chain_for_each'
> >    - 'do_for_each_ftrace_op'
> > diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst
> > index a4c75a28c839..6eb8e634664d 100644
> > --- a/Documentation/userspace-api/ioctl/ioctl-number.rst
> > +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst
> > @@ -352,6 +352,7 @@ Code  Seq#    Include File                                           Comments
> >                                                                       <mailto:michael.klein@puffin.lb.shuttle.de>
> >  0xCC  00-0F  drivers/misc/ibmvmc.h                                   pseries VMC driver
> >  0xCD  01     linux/reiserfs_fs.h
> > +0xCE  01-02  uapi/linux/cxl_mem.h                                    Compute Express Link Memory Devices
> >  0xCF  02     fs/cifs/ioctl.c
> >  0xDB  00-0F  drivers/char/mwave/mwavepub.h
> >  0xDD  00-3F                                                          ZFCP device driver see drivers/s390/scsi/
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > index 8bbd2495e237..ce65630bb75e 100644
> > --- a/drivers/cxl/mem.c
> > +++ b/drivers/cxl/mem.c
> > @@ -1,5 +1,6 @@
> >  // SPDX-License-Identifier: GPL-2.0-only
> >  /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> > +#include <uapi/linux/cxl_mem.h>
> >  #include <linux/module.h>
> >  #include <linux/mutex.h>
> >  #include <linux/cdev.h>
> > @@ -39,6 +40,7 @@
> >  #define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
> >  
> >  enum opcode {
> > +	CXL_MBOX_OP_INVALID		= 0x0000,
> >  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
> >  	CXL_MBOX_OP_MAX			= 0x10000
> >  };
> > @@ -90,9 +92,57 @@ struct cxl_memdev {
> >  static int cxl_mem_major;
> >  static DEFINE_IDA(cxl_memdev_ida);
> >  
> > +/**
> > + * struct cxl_mem_command - Driver representation of a memory device command
> > + * @info: Command information as it exists for the UAPI
> > + * @opcode: The actual bits used for the mailbox protocol
> > + * @flags: Set of flags reflecting the state of the command.
> > + *
> > + *  * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is
> > + *    only used internally by the driver for sanity checking.
> 
> Doesn't seem to be defined yet.
> 

This slipped by me. The flags are entirely gone now.

I found some other stale comments like references to "mbox_lock" that I'm also
cleaning up.

> > + *
> > + * The cxl_mem_command is the driver's internal representation of commands that
> > + * are supported by the driver. Some of these commands may not be supported by
> > + * the hardware. The driver will use @info to validate the fields passed in by
> > + * the user then submit the @opcode to the hardware.
> > + *
> > + * See struct cxl_command_info.
> > + */
> > +struct cxl_mem_command {
> > +	struct cxl_command_info info;
> > +	enum opcode opcode;
> > +};
> > +
> > +#define CXL_CMD(_id, _flags, sin, sout)                                        \
> > +	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
> > +	.info =	{                                                              \
> > +			.id = CXL_MEM_COMMAND_ID_##_id,                        \
> > +			.flags = CXL_MEM_COMMAND_FLAG_##_flags,                \
> > +			.size_in = sin,                                        \
> > +			.size_out = sout,                                      \
> > +		},                                                             \
> > +	.opcode = CXL_MBOX_OP_##_id,                                           \
> > +	}
> > +
> > +/*
> > + * This table defines the supported mailbox commands for the driver. This table
> > + * is made up of a UAPI structure. Non-negative values as parameters in the
> > + * table will be validated against the user's input. For example, if size_in is
> > + * 0, and the user passed in 1, it is an error.
> > + */
> > +static struct cxl_mem_command mem_commands[] = {
> > +	CXL_CMD(IDENTIFY, NONE, 0, 0x43),
> > +};
> 
> As below, I'm doubtful about the macro magic and would rather see the
> longhand version. It's a few more characters but I can immediately see if fields
> are in the right places etc and we can skip the 0 default values.
> 
> static struct cxl_mem_command mem_commands[] = {
> 	[CXL_MEM_COMMAND_ID_IDENTIFY] = {
> 		.info = {
> 			.id = CXL_MEM_COMMAND_ID_IDENTIFY,
> 			.size_out = 0x43,
> 		},
> 		.opcode = CXL_MBOX_OP_IDENTIFY,	
> 	},
> };
> 
> Still, it's your driver and I guess I can probably get my head around
> this macro.
> 

An unreleased version of this series did just that. Dan suggested the change. My
original preference was your suggestion FWIW, but over time I've come to prefer
this.

We can drop flags now, and we could add more macros to make it a bit better:

CXL_CMD_IN(FOO, 0x10) // we have none in the driver today
CXL_CMD_OUT(IDENTIFY, 0x43)
CXL_CMD_INOUT(GET_LSA, 0x8, ~0)
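
For illustration only (untested, and assuming the flags argument has been
dropped from CXL_CMD as mentioned above), those could be thin wrappers over
the existing macro:

#define CXL_CMD_IN(_id, sin)          CXL_CMD(_id, sin, 0)
#define CXL_CMD_OUT(_id, sout)        CXL_CMD(_id, 0, sout)
#define CXL_CMD_INOUT(_id, sin, sout) CXL_CMD(_id, sin, sout)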

> >  
> > diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> > new file mode 100644
> > index 000000000000..f1f7e9f32ea5
> > --- /dev/null
> > +++ b/include/uapi/linux/cxl_mem.h
> > @@ -0,0 +1,152 @@
> > +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> > +/*
> > + * CXL IOCTLs for Memory Devices
> > + */
> > +
> > +#ifndef _UAPI_CXL_MEM_H_
> > +#define _UAPI_CXL_MEM_H_
> > +
> > +#include <linux/types.h>
> > +
> > +/**
> > + * DOC: UAPI
> > + *
> > + * Not all of the commands that the driver supports are always available for use
> > + * by userspace. Userspace must check the results from the QUERY command in
> > + * order to determine the live set of commands.
> > + */
> > +
> > +#define CXL_MEM_QUERY_COMMANDS _IOR(0xCE, 1, struct cxl_mem_query_commands)
> > +#define CXL_MEM_SEND_COMMAND _IOWR(0xCE, 2, struct cxl_send_command)
> > +
> > +#define CXL_CMDS                                                          \
> > +	___C(INVALID, "Invalid Command"),                                 \
> > +	___C(IDENTIFY, "Identify Command"),                               \
> > +	___C(MAX, "Last command")
> > +
> > +#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> > +enum { CXL_CMDS };
> > +
> > +#undef ___C
> > +#define ___C(a, b) { b }
> > +static const struct {
> > +	const char *name;
> > +} cxl_command_names[] = { CXL_CMDS };
> > +#undef ___C
> 
> Unless there are going to be a lot of these, I'd just write them out long hand
> as much more readable than the macro magic.
> 
> enum {
> 	CXL_MEM_COMMAND_ID_INVALID,
> 	CXL_MEM_COMMAND_ID_IDENTIFY,
> 	CXL_MEM_COMMAND_ID_MAX
> };
> 
> static const struct {
> 	const char *name;
> } cxl_command_names[] = {
> 	[CXL_MEM_COMMAND_ID_INVALID] = { "Invalid Command" },
> 	[CXL_MEM_COMMAND_ID_IDENTIFY] = { "Identify Command" },
> 	/* I hope you never need the Last command to exist in here as that sounds like a bug */
> };
> 
> That's assuming I actually figured the macro fun out correctly.
> To my mind it's worth doing this stuff for 'lots', not so much for 3.
> 
> > +
> > +/**
> > + * struct cxl_command_info - Command information returned from a query.
> > + * @id: ID number for the command.
> > + * @flags: Flags that specify command behavior.
> > + *
> > + *  * %CXL_MEM_COMMAND_FLAG_KERNEL: This command is reserved for exclusive
> > + *    kernel use.
> > + *  * %CXL_MEM_COMMAND_FLAG_MUTEX: This command may require coordination with
> > + *    the kernel in order to complete successfully.
> Doesn't correspond to the flags defined below.  If introduced in a later patch
> then bring the docs in with the first use.
> 

MUTEX should be gone. KERNEL is still there, and I will move it to later.

> > + *
> > + * @size_in: Expected input size, or -1 if variable length.
> > + * @size_out: Expected output size, or -1 if variable length.
> > + *
> > + * Represents a single command that is supported by both the driver and the
> > + * hardware. This is returned as part of an array from the query ioctl. The
> > + * following would be a command named "foobar" that takes a variable length
> > + * input and returns 0 bytes of output.
> 
> Why give it a name?  It's just an id!
> 

At one point, name was part of the struct. Who reads comments anyway :P

> > + *
> > + *  - @id = 10
> > + *  - @flags = CXL_MEM_COMMAND_FLAG_MUTEX
> 
> That flag doesn't seem to be defined below.
> 

Yeah, stale comment...

> > + *  - @size_in = -1
> > + *  - @size_out = 0
> > + *
> > + * See struct cxl_mem_query_commands.
> > + */
> > +struct cxl_command_info {
> > +	__u32 id;
> > +
> > +	__u32 flags;
> > +#define CXL_MEM_COMMAND_FLAG_NONE 0
> > +#define CXL_MEM_COMMAND_FLAG_KERNEL BIT(0)
> > +#define CXL_MEM_COMMAND_FLAG_MASK GENMASK(1, 0)
> > +
> > +	__s32 size_in;
> > +	__s32 size_out;
> > +};
> > +
> > +/**
> > + * struct cxl_mem_query_commands - Query supported commands.
> > + * @n_commands: In/out parameter. When @n_commands is > 0, the driver will
> > + *		return min(num_support_commands, n_commands). When @n_commands
> > + *		is 0, driver will return the number of total supported commands.
> > + * @rsvd: Reserved for future use.
> > + * @commands: Output array of supported commands. This array must be allocated
> > + *            by userspace to be at least min(num_support_commands, @n_commands)
> > + *
> > + * Allow userspace to query the available commands supported by both the driver,
> > + * and the hardware. Commands that aren't supported by either the driver, or the
> > + * hardware are not returned in the query.
> > + *
> > + * Examples:
> > + *
> > + *  - { .n_commands = 0 } // Get number of supported commands
> > + *  - { .n_commands = 15, .commands = buf } // Return first 15 (or less)
> > + *    supported commands
> > + *
> > + *  See struct cxl_command_info.
> > + */
> > +struct cxl_mem_query_commands {
> > +	/*
> > +	 * Input: Number of commands to return (space allocated by user)
> > +	 * Output: Number of commands supported by the driver/hardware
> > +	 *
> > +	 * If n_commands is 0, kernel will only return number of commands and
> > +	 * not try to populate commands[], thus allowing userspace to know how
> > +	 * much space to allocate
> > +	 */
> > +	__u32 n_commands;
> > +	__u32 rsvd;
> > +
> > +	struct cxl_command_info __user commands[]; /* out: supported commands */
> > +};
> > +
> > +/**
> > + * struct cxl_send_command - Send a command to a memory device.
> > + * @id: The command to send to the memory device. This must be one of the
> > + *	commands returned by the query command.
> > + * @flags: Flags for the command (input).
> > + * @rsvd: Must be zero.
> > + * @retval: Return value from the memory device (output).
> > + * @in.size: Size of the payload to provide to the device (input).
> > + * @in.rsvd: Must be zero.
> > + * @in.payload: Pointer to memory for payload input (little endian order).
> 
> Silly point, but perhaps distinguish it's the payload that is in little endian order
> not the pointer.  (I obviously haven't had enough coffee today and misread it)
> 
> 
> > + * @out.size: Size of the payload received from the device (input/output). This
> > + *	      field is filled in by userspace to let the driver know how much
> > + *	      space was allocated for output. It is populated by the driver to
> > + *	      let userspace know how large the output payload actually was.
> > + * @out.rsvd: Must be zero.
> > + * @out.payload: Pointer to memory for payload output (little endian order).
> > + *
> > + * Mechanism for userspace to send a command to the hardware for processing. The
> > + * driver will do basic validation on the command sizes. In some cases even the
> > + * payload may be introspected. Userspace is required to allocate large
> > + * enough buffers for size_out which can be variable length in certain
> > + * situations.
> > + */
> > +struct cxl_send_command {
> > +	__u32 id;
> > +	__u32 flags;
> > +	__u32 rsvd;
> > +	__u32 retval;
> > +
> > +	struct {
> > +		__s32 size;
> > +		__u32 rsvd;
> > +		__u64 payload;
> > +	} in;
> > +
> > +	struct {
> > +		__s32 size;
> > +		__u32 rsvd;
> > +		__u64 payload;
> > +	} out;
> > +};
> > +
> > +#endif
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-10 18:45   ` Jonathan Cameron
  2021-02-10 20:22     ` Ben Widawsky
@ 2021-02-11  4:40     ` Dan Williams
  2021-02-11 10:06       ` Jonathan Cameron
  1 sibling, 1 reply; 57+ messages in thread
From: Dan Williams @ 2021-02-11  4:40 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Ben Widawsky, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas,
	Chris Browy <cbrowy@avery-design.com>,
	Christoph Hellwig <hch@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	David Hildenbrand <david@redhat.com>,
	David Rientjes, Jon Masters <jcm@jonmasters.org>,
	Rafael Wysocki <rafael.j.wysocki@intel.com>,
	Randy Dunlap, John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On Wed, Feb 10, 2021 at 10:47 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
[..]
> > +#define CXL_CMDS                                                          \
> > +     ___C(INVALID, "Invalid Command"),                                 \
> > +     ___C(IDENTIFY, "Identify Command"),                               \
> > +     ___C(MAX, "Last command")
> > +
> > +#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> > +enum { CXL_CMDS };
> > +
> > +#undef ___C
> > +#define ___C(a, b) { b }
> > +static const struct {
> > +     const char *name;
> > +} cxl_command_names[] = { CXL_CMDS };
> > +#undef ___C
>
> Unless there are going to be a lot of these, I'd just write them out long hand
> as much more readable than the macro magic.

This macro magic isn't new to Linux it was introduced with ftrace:

See "cpp tricks and treats": https://lwn.net/Articles/383362/

>
> enum {
>         CXL_MEM_COMMAND_ID_INVALID,
>         CXL_MEM_COMMAND_ID_IDENTIFY,
>         CXL_MEM_COMMAND_ID_MAX
> };
>
> static const struct {
>         const char *name;
> } cxl_command_names[] = {
>         [CXL_MEM_COMMAND_ID_INVALID] = { "Invalid Command" },
>         [CXL_MEM_COMMAND_ID_IDENTIFY] = { "Identify Comamnd" },
>         /* I hope you never need the Last command to exist in here as that sounds like a bug */
> };
>
> That's assuming I actually figured the macro fun out correctly.
> > To my mind it's worth doing this stuff for 'lots', not so much for 3.

The list will continue to expand, and it eliminates the "did you
remember to update cxl_command_names" review burden permanently.
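
For reference, with the three entries in this patch the two expansions come
out roughly as:

/* ___C(a, b) defined as CXL_MEM_COMMAND_ID_##a: */
enum {
	CXL_MEM_COMMAND_ID_INVALID,
	CXL_MEM_COMMAND_ID_IDENTIFY,
	CXL_MEM_COMMAND_ID_MAX
};

/* ___C(a, b) redefined as { b }: */
static const struct {
	const char *name;
} cxl_command_names[] = {
	{ "Invalid Command" },
	{ "Identify Command" },
	{ "Last command" }
};

So adding a command to CXL_CMDS grows both definitions in lockstep.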

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 18:16         ` Ben Widawsky
@ 2021-02-11  9:55           ` Jonathan Cameron
  2021-02-11 15:55             ` Ben Widawsky
  2021-02-11 18:27             ` Ben Widawsky
  0 siblings, 2 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-11  9:55 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Wed, 10 Feb 2021 10:16:05 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-02-10 08:55:57, Ben Widawsky wrote:
> > On 21-02-10 15:07:59, Jonathan Cameron wrote:  
> > > On Wed, 10 Feb 2021 13:32:52 +0000
> > > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > >   
> > > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > > >   
> > > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > > The mailbox is used to interact with the firmware running on the memory
> > > > > device. The flow is proven with one implemented command, "identify",
> > > > > because the class code has already told the driver this is a memory
> > > > > device and the identify command is mandatory.
> > > > > 
> > > > > CXL devices contain an array of capabilities that describe the
> > > > > interactions software can have with the device or firmware running on
> > > > > the device. A CXL compliant device must implement the device status and
> > > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > > implement the memory device capability. Each of the capabilities can
> > > > > [will] provide an offset within the MMIO region for interacting with the
> > > > > CXL device.
> > > > > 
> > > > > The capabilities tell the driver how to find and map the register space
> > > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > > and not handled in this driver.
> > > > > 
> > > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > > a background command. That implementation is saved for a later time.
> > > > > 
> > > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>    
> > > > 
> > > > Hi Ben,
> > > > 
> > > >   
> > > > > +/**
> > > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > > + * @cxlm: The CXL memory device to communicate with.
> > > > > + * @mbox_cmd: Command to send to the memory device.
> > > > > + *
> > > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > > + *         succeeded.    
> > > > 
> > > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > > enters an infinite loop as a result.  
> > 
> > I meant to fix that.
> >   
> > > > 
> > > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > > two levels of error checking - the example here proves how easy it is to forget
> > > > one.  
> > 
> > Demonstrably, you're correct. I think it would be good to have a kernel only
> > mbox command that does the error checking though. Let me type something up and
> > see how it looks.  
> 
> Hi Jonathan. What do you think of this? The bit I'm on the fence about is if I
> should validate output size too. I like the simplicity as it is, but it requires
> every caller to possibly check output size, which is kind of the same problem
> you were originally pointing out.

The simplicity is good and this is pretty much what I expected you would end up with
(always reassuring)

For the output, perhaps just add another parameter to the wrapper for minimum
output length expected?

Now that you mention the length question, it does rather feel like there should also
be some protection on memcpy_fromio() copying too much data if the hardware
happens to return an unexpectedly long length.  Should never happen, but
the hardening is worth adding anyway given it's easy to do.
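
To make that second point concrete, something along these lines would do
(untested sketch; the helper name and the out_capacity plumbing are mine, not
the patch's):

/*
 * Clamp the payload length the device reports before it is used for
 * memcpy_fromio(), so a misbehaving device can never overrun the
 * caller's buffer. 'hw_len' is whatever the low-level send path
 * decoded from the command register; 'out_capacity' would need to be
 * plumbed through from the wrapper alongside the minimum expected
 * output length.
 */
static size_t cxl_mem_clamp_payload(struct cxl_mem *cxlm, size_t hw_len,
				    size_t out_capacity)
{
	if (hw_len <= out_capacity)
		return hw_len;

	dev_warn(&cxlm->pdev->dev,
		 "device returned %zu payload bytes, capping at %zu\n",
		 hw_len, out_capacity);
	return out_capacity;
}

and an "if (rc < out_min) return -EIO;" style check in the wrapper then covers
the short-reply case for every caller.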

Jonathan


> 
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index 55c5f5a6023f..ad7b2077ab28 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -284,7 +284,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
>  }
>  
>  /**
> - * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> + * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
>   * @cxlm: The CXL memory device to communicate with.
>   * @mbox_cmd: Command to send to the memory device.
>   *
> @@ -296,7 +296,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
>   * This is a generic form of the CXL mailbox send command, thus the only I/O
>   * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
>   * types of CXL devices may have further information available upon error
> - * conditions.
> + * conditions. Driver facilities wishing to send mailbox commands should use the
> + * wrapper command.
>   *
>   * The CXL spec allows for up to two mailboxes. The intention is for the primary
>   * mailbox to be OS controlled and the secondary mailbox to be used by system
> @@ -304,8 +305,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
>   * not need to coordinate with each other. The driver only uses the primary
>   * mailbox.
>   */
> -static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> -				 struct mbox_cmd *mbox_cmd)
> +static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> +				   struct mbox_cmd *mbox_cmd)
>  {
>  	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
>  	u64 cmd_reg, status_reg;
> @@ -469,6 +470,54 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
>  	mutex_unlock(&cxlm->mbox_mutex);
>  }
>  
> +/**
> + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> + * @cxlm: The CXL memory device to communicate with.
> + * @opcode: Opcode for the mailbox command.
> + * @in: The input payload for the mailbox command.
> + * @in_size: The length of the input payload
> + * @out: Caller allocated buffer for the output.
> + *
> + * Context: Any context. Will acquire and release mbox_mutex.
> + * Return:
> + *  * %>=0	- Number of bytes returned in @out.
> + *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
> + *  * %-EFAULT	- Hardware error occurred.
> + *  * %-ENXIO	- Command completed, but device reported an error.
> + *
> + * Mailbox commands may execute successfully yet the device itself reported an
> + * error. While this distinction can be useful for commands from userspace, the
> + * kernel will often only care when both are successful.
> + *
> + * See __cxl_mem_mbox_send_cmd()
> + */
> +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
> +				 size_t in_size, u8 *out)
> +{
> +	struct mbox_cmd mbox_cmd = {
> +		.opcode = opcode,
> +		.payload_in = in,
> +		.size_in = in_size,
> +		.payload_out = out,
> +	};
> +	int rc;
> +
> +	rc = cxl_mem_mbox_get(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> +	cxl_mem_mbox_put(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	/* TODO: Map return code to proper kernel style errno */
> +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
> +		return -ENXIO;
> +
> +	return mbox_cmd.size_out;
> +}
> +
>  /**
>   * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
>   * @cxlmd: The CXL memory device to communicate with.
> @@ -1380,33 +1429,18 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
>  		u8 poison_caps;
>  		u8 qos_telemetry_caps;
>  	} __packed id;
> -	struct mbox_cmd mbox_cmd = {
> -		.opcode = CXL_MBOX_OP_IDENTIFY,
> -		.payload_out = &id,
> -		.size_in = 0,
> -	};
>  	int rc;
>  
> -	/* Retrieve initial device memory map */
> -	rc = cxl_mem_mbox_get(cxlm);
> -	if (rc)
> -		return rc;
> -
> -	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> -	cxl_mem_mbox_put(cxlm);
> -	if (rc)
> +	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
> +				   (u8 *)&id);
> +	if (rc < 0)
>  		return rc;
>  
> -	/* TODO: Handle retry or reset responses from firmware. */
> -	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> -		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> -			mbox_cmd.return_code);
> +	if (rc < sizeof(id)) {
> +		dev_err(&cxlm->pdev->dev, "Short identify data\n",
>  		return -ENXIO;
>  	}
>  
> -	if (mbox_cmd.size_out != sizeof(id))
> -		return -ENXIO;
> -
>  	/*
>  	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
>  	 * For now, only the capacity is exported in sysfs
> 
> 
> [snip]
> 


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-10 19:54       ` Dan Williams
@ 2021-02-11 10:01         ` Jonathan Cameron
  2021-02-11 16:04           ` Ben Widawsky
  0 siblings, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-11 10:01 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ben Widawsky, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas, Chris Browy,
	Christoph Hellwig, David Hildenbrand, David Rientjes, Ira Weiny,
	Jon Masters, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

On Wed, 10 Feb 2021 11:54:29 -0800
Dan Williams <dan.j.williams@intel.com> wrote:

> > > ...
> > >  
> > > > +static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > > > +                            struct mbox_cmd *mbox_cmd)
> > > > +{
> > > > +   struct device *dev = &cxlm->pdev->dev;
> > > > +
> > > > +   dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
> > > > +           mbox_cmd->opcode, mbox_cmd->size_in);
> > > > +
> > > > +   if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {  
> > >
> > > Hmm.  Whilst I can see the advantage of this for debug, I'm not sure we want
> > > it upstream even under a rather evil looking CONFIG variable.
> > >
> > > Is there a bigger lock we can use to avoid chance of accidental enablement?  
> >
> > Any suggestions? I'm told this functionality was extremely valuable for NVDIMM,
> > though I haven't personally experienced it.  
> 
> Yeah, there was no problem with the identical mechanism in LIBNVDIMM
> land. However, I notice that the useful feature for LIBNVDIMM is the
> option to dump all payloads. This one only fires on timeouts which is
> less useful. So I'd say fix it to dump all payloads on the argument
> that the safety mechanism was proven with the LIBNVDIMM precedent, or
> delete it altogether to maintain v5.12 momentum. Payload dumping can
> be added later.

I think I'd drop it for now - feels like a topic that needs more discussion.

Also, dumping this data to the kernel log isn't exactly elegant - particularly
if we dump a lot more of it.  Perhaps tracepoints?
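
Untested sketch of the sort of thing I mean (event name, fields and header
placement all made up):

/* include/trace/events/cxl.h - hypothetical, not part of this series */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM cxl

#if !defined(_TRACE_CXL_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_CXL_H

#include <linux/tracepoint.h>

TRACE_EVENT(cxl_mbox_cmd,
	TP_PROTO(u16 opcode, const void *payload, size_t len),
	TP_ARGS(opcode, payload, len),
	TP_STRUCT__entry(
		__field(u16, opcode)
		__field(size_t, len)
		__dynamic_array(u8, payload, len)
	),
	TP_fast_assign(
		__entry->opcode = opcode;
		__entry->len = len;
		if (len)
			memcpy(__get_dynamic_array(payload), payload, len);
	),
	TP_printk("opcode=%#x len=%zu", __entry->opcode, __entry->len)
);

#endif /* _TRACE_CXL_H */

/* This part must be outside the include guard */
#include <trace/define_trace.h>

The driver would call trace_cxl_mbox_cmd() for the input and output payloads,
and the data comes out via perf / trace-cmd instead of the kernel log.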

> 
> [..]
> > > > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > > > index e709ae8235e7..6267ca9ae683 100644
> > > > --- a/include/uapi/linux/pci_regs.h
> > > > +++ b/include/uapi/linux/pci_regs.h
> > > > @@ -1080,6 +1080,7 @@
> > > >
> > > >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> > > >  #define PCI_DVSEC_HEADER1          0x4 /* Designated Vendor-Specific Header1 */
> > > > +#define PCI_DVSEC_HEADER1_LENGTH_MASK      0xFFF00000  
> > >
> > > Seems sensible to add the revision mask as well.
> > > The vendor id currently read using a word read rather than dword, but perhaps
> > > neater to add that as well for completeness?
> > >
> > > Having said that, given Bjorn's comment on clashes and the fact he'd rather see
> > > this stuff defined in drivers and combined later (see review patch 1 and follow
> > > the link) perhaps this series should not touch this header at all.  
> >
> > I'm fine to move it back.  
> 
> Yeah, we're playing tennis now between Bjorn's and Christoph's
> comments, but I like Bjorn's suggestion of "deduplicate post merge"
> given the bloom of DVSEC infrastructure landing at the same time.
I guess it may depend on the timing of this.  Personally I think 5.12 may be too aggressive.

As long as Bjorn can take a DVSEC deduplication as an immutable branch then perhaps
during 5.13 this tree can sit on top of that.

Jonathan



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-11  4:40     ` Dan Williams
@ 2021-02-11 10:06       ` Jonathan Cameron
  2021-02-11 16:54         ` Ben Widawsky
  0 siblings, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-11 10:06 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ben Widawsky, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas,
	Chris Browy <cbrowy@avery-design.com>,
	Christoph Hellwig <hch@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	David Hildenbrand <david@redhat.com>,
	David Rientjes, Jon Masters <jcm@jonmasters.org>,
	Rafael Wysocki <rafael.j.wysocki@intel.com>,
	Randy Dunlap, John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On Wed, 10 Feb 2021 20:40:52 -0800
Dan Williams <dan.j.williams@intel.com> wrote:

> On Wed, Feb 10, 2021 at 10:47 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> [..]
> > > +#define CXL_CMDS                                                          \
> > > +     ___C(INVALID, "Invalid Command"),                                 \
> > > +     ___C(IDENTIFY, "Identify Command"),                               \
> > > +     ___C(MAX, "Last command")
> > > +
> > > +#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> > > +enum { CXL_CMDS };
> > > +
> > > +#undef ___C
> > > +#define ___C(a, b) { b }
> > > +static const struct {
> > > +     const char *name;
> > > +} cxl_command_names[] = { CXL_CMDS };
> > > +#undef ___C  
> >
> > Unless there are going to be a lot of these, I'd just write them out long hand
> > as much more readable than the macro magic.  
> 
> This macro magic isn't new to Linux it was introduced with ftrace:
> 
> See "cpp tricks and treats": https://lwn.net/Articles/383362/

Yeah. I've dealt with that one a few times. It's very clever and compact
but a PITA to debug build errors related to it.

> 
> >
> > enum {
> >         CXL_MEM_COMMAND_ID_INVALID,
> >         CXL_MEM_COMMAND_ID_IDENTIFY,
> >         CXL_MEM_COMMAND_ID_MAX
> > };
> >
> > static const struct {
> >         const char *name;
> > } cxl_command_names[] = {
> >         [CXL_MEM_COMMAND_ID_INVALID] = { "Invalid Command" },
> >         [CXL_MEM_COMMAND_ID_IDENTIFY] = { "Identify Command" },
> >         /* I hope you never need the Last command to exist in here as that sounds like a bug */
> > };
> >
> > That's assuming I actually figured the macro fun out correctly.
> > To my mind it's worth doing this stuff for 'lots', not so much for 3.
> 
> The list will continue to expand, and it eliminates the "did you
> remember to update cxl_command_names" review burden permanently.

How about a compromise?  Add a comment showing how the first entry expands, to
avoid people (me at least :) having to think their way through it every time.

Jonathan


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 3/8] cxl/mem: Register CXL memX devices
  2021-02-10 18:17   ` Jonathan Cameron
@ 2021-02-11 10:17     ` Jonathan Cameron
  2021-02-11 20:40       ` Dan Williams
  0 siblings, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-11 10:17 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Wed, 10 Feb 2021 18:17:25 +0000
Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Tue, 9 Feb 2021 16:02:54 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > From: Dan Williams <dan.j.williams@intel.com>
> > 
> > Create the /sys/bus/cxl hierarchy to enumerate:
> > 
> > * Memory Devices (per-endpoint control devices)
> > 
> > * Memory Address Space Devices (platform address ranges with
> >   interleaving, performance, and persistence attributes)
> > 
> > * Memory Regions (active provisioned memory from an address space device
> >   that is in use as System RAM or delegated to libnvdimm as Persistent
> >   Memory regions).
> > 
> > For now, only the per-endpoint control devices are registered on the
> > 'cxl' bus. However, going forward it will provide a mechanism to
> > coordinate cross-device interleave.
> > 
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>  
> 
> One stray header, and a request for a tiny bit of reordering to
> make it easier to chase through creation and destruction.
> 
> Either way with the header move to earlier patch I'm fine with this one.
> 
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Actually thinking more on this, what is the justification for the
complexity + overhead of a percpu_refcount vs a plain refcount?

I don't think this is a high enough performance path for it to matter.
Perhaps I'm missing a usecase where it does?
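
For comparison, a rough sketch (names invented) of the non-percpu version of
the same "wait for in-flight fops, then tear down" pattern:

#include <linux/refcount.h>
#include <linux/completion.h>

/* Illustrative only; the real fields would live in struct cxl_memdev. */
struct cxl_memdev_ops_ref {
	refcount_t ops_active;		/* 1 while the device is live */
	struct completion ops_dead;
};

static void cxl_ops_ref_init(struct cxl_memdev_ops_ref *ref)
{
	refcount_set(&ref->ops_active, 1);
	init_completion(&ref->ops_dead);
}

/* Called at the top of each fops handler; false means "device going away". */
static bool cxl_ops_ref_tryget(struct cxl_memdev_ops_ref *ref)
{
	return refcount_inc_not_zero(&ref->ops_active);
}

static void cxl_ops_ref_put(struct cxl_memdev_ops_ref *ref)
{
	if (refcount_dec_and_test(&ref->ops_active))
		complete(&ref->ops_dead);
}

/* Unregister path: drop the initial ref and wait out in-flight fops. */
static void cxl_ops_ref_kill(struct cxl_memdev_ops_ref *ref)
{
	cxl_ops_ref_put(ref);
	wait_for_completion(&ref->ops_dead);
}

Functionally equivalent to the percpu_ref usage in the patch, just with an
atomic on the fops fast path instead of a per-cpu counter.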

Jonathan

> 
> > ---
> >  Documentation/ABI/testing/sysfs-bus-cxl       |  26 ++
> >  .../driver-api/cxl/memory-devices.rst         |  17 +
> >  drivers/cxl/Makefile                          |   3 +
> >  drivers/cxl/bus.c                             |  29 ++
> >  drivers/cxl/cxl.h                             |   4 +
> >  drivers/cxl/mem.c                             | 301 +++++++++++++++++-
> >  6 files changed, 378 insertions(+), 2 deletions(-)
> >  create mode 100644 Documentation/ABI/testing/sysfs-bus-cxl
> >  create mode 100644 drivers/cxl/bus.c
> >   
> 
> 
> > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> > index 745f5e0bfce3..b3c56fa6e126 100644
> > --- a/drivers/cxl/cxl.h
> > +++ b/drivers/cxl/cxl.h
> > @@ -3,6 +3,7 @@
> >  
> >  #ifndef __CXL_H__
> >  #define __CXL_H__
> > +#include <linux/range.h>  
> 
> Why is this coming in now? Feels like it should have been in the earlier
> patch that started using struct range.
> 
> >  
> >  #include <linux/bitfield.h>
> >  #include <linux/bitops.h>
> > @@ -55,6 +56,7 @@
> >  	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
> >  	 CXLMDEV_RESET_NEEDED_NOT)
> >  
> > +struct cxl_memdev;
> >  /**
> >   * struct cxl_mem - A CXL memory device
> >   * @pdev: The PCI device associated with this CXL device.
> > @@ -72,6 +74,7 @@
> >  struct cxl_mem {
> >  	struct pci_dev *pdev;
> >  	void __iomem *regs;
> > +	struct cxl_memdev *cxlmd;
> >  
> >  	void __iomem *status_regs;
> >  	void __iomem *mbox_regs;
> > @@ -90,4 +93,5 @@ struct cxl_mem {
> >  	} ram;
> >  };
> >  
> > +extern struct bus_type cxl_bus_type;
> >  #endif /* __CXL_H__ */
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > index 0a868a15badc..8bbd2495e237 100644
> > --- a/drivers/cxl/mem.c
> > +++ b/drivers/cxl/mem.c
> > @@ -1,11 +1,36 @@
> >  
> 
> > +
> > +static void cxl_memdev_release(struct device *dev)
> > +{
> > +	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> > +
> > +	percpu_ref_exit(&cxlmd->ops_active);
> > +	ida_free(&cxl_memdev_ida, cxlmd->id);
> > +	kfree(cxlmd);
> > +}
> > +  
> ...
> 
> > +static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
> > +{
> > +	struct pci_dev *pdev = cxlm->pdev;
> > +	struct cxl_memdev *cxlmd;
> > +	struct device *dev;
> > +	struct cdev *cdev;
> > +	int rc;
> > +
> > +	cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
> > +	if (!cxlmd)
> > +		return -ENOMEM;
> > +	init_completion(&cxlmd->ops_dead);
> > +
> > +	/*
> > +	 * @cxlm is deallocated when the driver unbinds so operations
> > +	 * that are using it need to hold a live reference.
> > +	 */
> > +	cxlmd->cxlm = cxlm;
> > +	rc = percpu_ref_init(&cxlmd->ops_active, cxlmdev_ops_active_release, 0,
> > +			     GFP_KERNEL);
> > +	if (rc)
> > +		goto err_ref;
> > +
> > +	rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL);
> > +	if (rc < 0)
> > +		goto err_id;
> > +	cxlmd->id = rc;
> > +
> > +	dev = &cxlmd->dev;
> > +	device_initialize(dev);
> > +	dev->parent = &pdev->dev;
> > +	dev->bus = &cxl_bus_type;
> > +	dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
> > +	dev->type = &cxl_memdev_type;
> > +	dev_set_name(dev, "mem%d", cxlmd->id);
> > +
> > +	cdev = &cxlmd->cdev;
> > +	cdev_init(cdev, &cxl_memdev_fops);
> > +
> > +	rc = cdev_device_add(cdev, dev);
> > +	if (rc)
> > +		goto err_add;
> > +
> > +	return devm_add_action_or_reset(dev->parent, cxlmdev_unregister, cxlmd);  
> 
> This had me scratching my head. cxlmdev_unregister(), whether called normally
> or via the _or_reset(), results in
> 
> 	percpu_ref_kill(&cxlmd->ops_active);
> 	cdev_device_del(&cxlmd->cdev, dev);
> 	wait_for_completion(&cxlmd->ops_dead);
> 	cxlmd->cxlm = NULL;
> 	put_device(dev);
> 	/* If last ref this will result in */
> 		percpu_ref_exit(&cxlmd->ops_active);
> 		ida_free(&cxl_memdev_ida, cxlmd->id);
> 		kfree(cxlmd);
> 
> So it's doing all the correct things but not necessarily
> in the obvious order.
> 
> For simplicity of review perhaps it's worth reordering probe a bit
> to get the ida immediately after the cxlmd alloc and
> for the cxlmdev_unregister() perhaps reorder the cdev_device_del()
> before the percpu_ref_kill().
> 
> Trivially obvious, as the ordering has no effect, but it makes it
> easy for reviewers to tick off setup vs teardown parts.
> 
> > +
> > +err_add:
> > +	ida_free(&cxl_memdev_ida, cxlmd->id);
> > +err_id:
> > +	/*
> > +	 * Theoretically userspace could have already entered the fops,
> > +	 * so flush ops_active.
> > +	 */
> > +	percpu_ref_kill(&cxlmd->ops_active);
> > +	wait_for_completion(&cxlmd->ops_dead);
> > +	percpu_ref_exit(&cxlmd->ops_active);
> > +err_ref:
> > +	kfree(cxlmd);
> > +
> > +	return rc;
> > +}
> > +  
> 
> 
> 
> 


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10  0:02 ` [PATCH v2 5/8] cxl/mem: Add a "RAW" send command Ben Widawsky
  2021-02-10 15:26   ` Ariel.Sibley
@ 2021-02-11 11:19   ` Jonathan Cameron
  2021-02-11 16:01     ` Ben Widawsky
  1 sibling, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-11 11:19 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, Ariel Sibley

On Tue, 9 Feb 2021 16:02:56 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> The CXL memory device send interface will have a number of supported
> commands. The raw command is not such a command. Raw commands allow
> userspace to send a specified opcode to the underlying hardware and
> bypass all driver checks on the command. This is useful for a couple of
> usecases, mainly:
> 1. Undocumented vendor specific hardware commands

This one I get.  There are things we'd love to standardize but often they
need proving in a generation of hardware before the data is available to
justify taking it to a standards body.  Stuff like performance stats.
This stuff will all sit in the vendor defined range.  Maybe there is an
argument for in-driver hooks to allow proper support even for these
(Ben mentioned this in the other branch of the thread).

> 2. Prototyping new hardware commands not yet supported by the driver

For 2, we could just have a convenient place to enable this with a one-line patch.
Some subsystems (SPI comes to mind) do this for their equivalent of raw
commands.  The code is all there to enable it but you need to hook it
up if you want to use it.  Avoids the chance of a distro shipping it.

> 
> While this all sounds very powerful it comes with a couple of caveats:
> 1. Bug reports using raw commands will not get the same level of
>    attention as bug reports using supported commands (via taint).
> 2. Supported commands will be rejected by the RAW command.

Perhaps I'm misreading point 2 here (not sure the code actually does it!)

As stated, what worries me is that when we add support for a new
bit of the spec we will have just broken the userspace ABI.

> 
> With this comes a new debugfs knob to allow full access to your toes with
> your weapon of choice.

A few trivial things inline,

Jonathan

> 
> Cc: Ariel Sibley <Ariel.Sibley@microchip.com>
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/Kconfig          |  18 +++++
>  drivers/cxl/mem.c            | 125 ++++++++++++++++++++++++++++++++++-
>  include/uapi/linux/cxl_mem.h |  12 +++-
>  3 files changed, 152 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> index c4ba3aa0a05d..08eaa8e52083 100644
> --- a/drivers/cxl/Kconfig
> +++ b/drivers/cxl/Kconfig
> @@ -33,6 +33,24 @@ config CXL_MEM
>  
>  	  If unsure say 'm'.
>  
> +config CXL_MEM_RAW_COMMANDS
> +	bool "RAW Command Interface for Memory Devices"
> +	depends on CXL_MEM
> +	help
> +	  Enable CXL RAW command interface.
> +
> +	  The CXL driver ioctl interface may assign a kernel ioctl command
> +	  number for each specification defined opcode. At any given point in
> +	  time the number of opcodes that the specification defines and a device
> +	  may implement may exceed the kernel's set of associated ioctl function
> +	  numbers. The mismatch is either by omission, specification is too new,
> +	  or by design. When prototyping new hardware, or developing / debugging
> +	  the driver it is useful to be able to submit any possible command to
> +	  the hardware, even commands that may crash the kernel due to their
> +	  potential impact to memory currently in use by the kernel.
> +
> +	  If developing CXL hardware or the driver say Y, otherwise say N.
> +
>  config CXL_MEM_INSECURE_DEBUG
>  	bool "CXL.mem debugging"
>  	depends on CXL_MEM
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index ce65630bb75e..6d766a994dce 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -1,6 +1,8 @@
>  // SPDX-License-Identifier: GPL-2.0-only
>  /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
>  #include <uapi/linux/cxl_mem.h>
> +#include <linux/security.h>
> +#include <linux/debugfs.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
>  #include <linux/cdev.h>
> @@ -41,7 +43,14 @@
>  
>  enum opcode {
>  	CXL_MBOX_OP_INVALID		= 0x0000,
> +	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
> +	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
>  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
> +	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
> +	CXL_MBOX_OP_SET_LSA		= 0x4103,
> +	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
> +	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
> +	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
>  	CXL_MBOX_OP_MAX			= 0x10000
>  };
>  
> @@ -91,6 +100,8 @@ struct cxl_memdev {
>  
>  static int cxl_mem_major;
>  static DEFINE_IDA(cxl_memdev_ida);
> +static struct dentry *cxl_debugfs;
> +static bool raw_allow_all;
>  
>  /**
>   * struct cxl_mem_command - Driver representation of a memory device command
> @@ -132,6 +143,49 @@ struct cxl_mem_command {
>   */
>  static struct cxl_mem_command mem_commands[] = {
>  	CXL_CMD(IDENTIFY, NONE, 0, 0x43),
> +#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
> +	CXL_CMD(RAW, NONE, ~0, ~0),
> +#endif
> +};
> +
> +/*
> + * Commands that RAW doesn't permit. The rationale for each:
> + *
> + * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
> + * coordination of transaction timeout values at the root bridge level.
> + *
> + * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
> + * and needs to be coordinated with HDM updates.
> + *
> + * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
> + * driver and any writes from userspace invalidates those contents.
> + *
> + * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
> + * to the device after it is marked clean, userspace can not make that
> + * assertion.
> + *
> + * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
> + * is kept up to date with patrol notifications and error management.
> + */
> +static u16 disabled_raw_commands[] = {
> +	CXL_MBOX_OP_ACTIVATE_FW,
> +	CXL_MBOX_OP_SET_PARTITION_INFO,
> +	CXL_MBOX_OP_SET_LSA,
> +	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
> +	CXL_MBOX_OP_SCAN_MEDIA,
> +	CXL_MBOX_OP_GET_SCAN_MEDIA,
> +};
> +
> +/*
> + * Command sets that RAW doesn't permit. All opcodes in this set are
> + * disabled because they pass plain text security payloads over the
> + * user/kernel boundary. This functionality is intended to be wrapped
> + * behind the keys ABI which allows for encrypted payloads in the UAPI
> + */
> +static u8 security_command_sets[] = {
> +	0x44, /* Sanitize */
> +	0x45, /* Persistent Memory Data-at-rest Security */
> +	0x46, /* Security Passthrough */
>  };
>  
>  #define cxl_for_each_cmd(cmd)                                                  \
> @@ -162,6 +216,16 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
>  	return 0;
>  }
>  
> +static bool is_security_command(u16 opcode)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
> +		if (security_command_sets[i] == (opcode >> 8))
> +			return true;
> +	return false;
> +}
> +
>  static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
>  				 struct mbox_cmd *mbox_cmd)
>  {
> @@ -170,7 +234,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
>  	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
>  		mbox_cmd->opcode, mbox_cmd->size_in);
>  
> -	if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
> +	if (!is_security_command(mbox_cmd->opcode) ||
> +	    IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
>  		print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
>  				     mbox_cmd->payload_in, mbox_cmd->size_in,
>  				     true);
> @@ -434,6 +499,9 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
>  		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
>  		cmd->info.size_in);
>  
> +	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
> +		      "raw command path used\n");
> +
>  	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
>  	cxl_mem_mbox_put(cxlm);
>  	if (rc)
> @@ -464,6 +532,29 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
>  	return rc;
>  }
>  
> +static bool cxl_mem_raw_command_allowed(u16 opcode)
> +{
> +	int i;
> +
> +	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
> +		return false;
> +
> +	if (security_locked_down(LOCKDOWN_NONE))
> +		return false;
> +
> +	if (raw_allow_all)
> +		return true;
> +
> +	if (is_security_command(opcode))
Given we are mixing generic calls like security_locked_down()
and local cxl-specific ones like this one, prefix the
local versions.

cxl_is_security_command()

I'd also have a slight preference to do it for cxl_disabled_raw_commands
and cxl_raw_allow_all, though they are less important as they are more
obviously local by not being function calls.

> +		return false;
> +
> +	for (i = 0; i < ARRAY_SIZE(disabled_raw_commands); i++)
> +		if (disabled_raw_commands[i] == opcode)
> +			return false;
> +
> +	return true;
> +}
> +
>  /**
>   * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
>   * @cxlm: &struct cxl_mem device whose mailbox will be used.
> @@ -500,6 +591,29 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
>  	if (send_cmd->in.size > cxlm->payload_size)
>  		return -EINVAL;
>  
> +	/* Checks are bypassed for raw commands but along comes the taint! */
> +	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
> +		const struct cxl_mem_command temp = {
> +			.info = {
> +				.id = CXL_MEM_COMMAND_ID_RAW,
> +				.flags = CXL_MEM_COMMAND_FLAG_NONE,
> +				.size_in = send_cmd->in.size,
> +				.size_out = send_cmd->out.size,
> +			},
> +			.opcode = send_cmd->raw.opcode
> +		};
> +
> +		if (send_cmd->raw.rsvd)
> +			return -EINVAL;
> +
> +		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
> +			return -EPERM;
> +
> +		memcpy(out_cmd, &temp, sizeof(temp));
> +
> +		return 0;
> +	}
> +
>  	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
>  		return -EINVAL;
>  
> @@ -1123,8 +1237,9 @@ static struct pci_driver cxl_mem_driver = {
>  
>  static __init int cxl_mem_init(void)
>  {
> -	int rc;
> +	struct dentry *mbox_debugfs;
>  	dev_t devt;
> +	int rc;

Shuffle this back to the place it was introduced to reduce patch noise.

>  
>  	rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl");
>  	if (rc)
> @@ -1139,11 +1254,17 @@ static __init int cxl_mem_init(void)
>  		return rc;
>  	}
>  
> +	cxl_debugfs = debugfs_create_dir("cxl", NULL);
> +	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
> +	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
> +			    &raw_allow_all);
> +
>  	return 0;
>  }
>  
>  static __exit void cxl_mem_exit(void)
>  {
> +	debugfs_remove_recursive(cxl_debugfs);
>  	pci_unregister_driver(&cxl_mem_driver);
>  	unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
>  }
> diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> index f1f7e9f32ea5..72d1eb601a5d 100644
> --- a/include/uapi/linux/cxl_mem.h
> +++ b/include/uapi/linux/cxl_mem.h
> @@ -22,6 +22,7 @@
>  #define CXL_CMDS                                                          \
>  	___C(INVALID, "Invalid Command"),                                 \
>  	___C(IDENTIFY, "Identify Command"),                               \
> +	___C(RAW, "Raw device command"),                                  \
>  	___C(MAX, "Last command")
>  
>  #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> @@ -112,6 +113,9 @@ struct cxl_mem_query_commands {
>   * @id: The command to send to the memory device. This must be one of the
>   *	commands returned by the query command.
>   * @flags: Flags for the command (input).
> + * @raw: Special fields for raw commands
> + * @raw.opcode: Opcode passed to hardware when using the RAW command.
> + * @raw.rsvd: Must be zero.
>   * @rsvd: Must be zero.
>   * @retval: Return value from the memory device (output).
>   * @in.size: Size of the payload to provide to the device (input).
> @@ -133,7 +137,13 @@ struct cxl_mem_query_commands {
>  struct cxl_send_command {
>  	__u32 id;
>  	__u32 flags;
> -	__u32 rsvd;
> +	union {
> +		struct {
> +			__u16 opcode;
> +			__u16 rsvd;
> +		} raw;
> +		__u32 rsvd;
> +	};
>  	__u32 retval;
>  
>  	struct {


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 6/8] cxl/mem: Enable commands via CEL
  2021-02-10  0:02 ` [PATCH v2 6/8] cxl/mem: Enable commands via CEL Ben Widawsky
@ 2021-02-11 12:02   ` Jonathan Cameron
  2021-02-11 17:45     ` Ben Widawsky
  2021-02-16 13:43     ` Bartosz Golaszewski
  0 siblings, 2 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-11 12:02 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Tue, 9 Feb 2021 16:02:57 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> CXL devices identified by the memory-device class code must implement
> the Device Command Interface (described in 8.2.9 of the CXL 2.0 spec).
> While the driver already maintains a list of commands it supports, there
> is still a need to be able to distinguish between commands that the
> driver knows about from commands that are optionally supported by the
> hardware.
> 
> The Command Effects Log (CEL) is specified in the CXL 2.0 specification.
> The CEL is one of two types of logs, the other being vendor specific.

I'd say "vendor specific debug" just so that no one thinks it has anything
to do with the rest of this description (which mentioned vendor specific
commands).

> They are distinguished in hardware/spec via UUID. The CEL is useful for
> 2 things:
> 1. Determine which optional commands are supported by the CXL device.
> 2. Enumerate any vendor specific commands
> 
> The CEL is used by the driver to determine which commands are available
> in the hardware and therefore which commands userspace is allowed to
> execute. The set of enabled commands might be a subset of commands which
> are advertised in UAPI via CXL_MEM_SEND_COMMAND IOCTL.
> 
> The implementation leaves the statically defined table of commands and
> supplements it with a bitmap to determine commands that are enabled.
> This organization was chosen for the following reasons:
> - Smaller memory footprint. Doesn't need a table per device.
> - Reduce memory allocation complexity.
> - Fixed command IDs to opcode mapping for all devices makes development
>   and debugging easier.
> - Certain helpers are easily achievable, like cxl_for_each_cmd().
> 
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/cxl.h            |   2 +
>  drivers/cxl/mem.c            | 216 +++++++++++++++++++++++++++++++++++
>  include/uapi/linux/cxl_mem.h |   1 +
>  3 files changed, 219 insertions(+)
> 
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index b3c56fa6e126..9a5e595abfa4 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -68,6 +68,7 @@ struct cxl_memdev;
>   *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
>   * @mbox_mutex: Mutex to synchronize mailbox access.
>   * @firmware_version: Firmware version for the memory device.
> + * @enabled_commands: Hardware commands found enabled in CEL.
>   * @pmem: Persistent memory capacity information.
>   * @ram: Volatile memory capacity information.
>   */
> @@ -83,6 +84,7 @@ struct cxl_mem {
>  	size_t payload_size;
>  	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
>  	char firmware_version[0x10];
> +	unsigned long *enabled_cmds;
>  
>  	struct {
>  		struct range range;
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index 6d766a994dce..e9aa6ca18d99 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -45,6 +45,8 @@ enum opcode {
>  	CXL_MBOX_OP_INVALID		= 0x0000,
>  	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
>  	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
> +	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
> +	CXL_MBOX_OP_GET_LOG		= 0x0401,
>  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
>  	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
>  	CXL_MBOX_OP_SET_LSA		= 0x4103,
> @@ -103,6 +105,19 @@ static DEFINE_IDA(cxl_memdev_ida);
>  static struct dentry *cxl_debugfs;
>  static bool raw_allow_all;
>  
> +enum {
> +	CEL_UUID,
> +	VENDOR_DEBUG_UUID

Who wants to take a bet this will get extended at some point in the future?
Add a trailing comma to make that less noisy.

They would never have used a UUID if this wasn't expected to expand.
The CXL spec calls out that "The following Log Identifier UUIDs are defined in
_this_ specification", rather implying other specs may well define more.
Fun for the future!

> +};
> +
> +/* See CXL 2.0 Table 170. Get Log Input Payload */
> +static const uuid_t log_uuid[] = {
> +	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
> +			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
> +	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
> +					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86)

likewise on trailing comma

> +};
> +
>  /**
>   * struct cxl_mem_command - Driver representation of a memory device command
>   * @info: Command information as it exists for the UAPI
> @@ -111,6 +126,8 @@ static bool raw_allow_all;
>   *
>   *  * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is
>   *    only used internally by the driver for sanity checking.
> + *  * %CXL_CMD_INTERNAL_FLAG_PSEUDO: This is a pseudo command which doesn't have
> + *    a direct mapping to hardware. They are implicitly always enabled.

Stale comment?

>   *
>   * The cxl_mem_command is the driver's internal representation of commands that
>   * are supported by the driver. Some of these commands may not be supported by
> @@ -146,6 +163,7 @@ static struct cxl_mem_command mem_commands[] = {
>  #ifdef CONFIG_CXL_MEM_RAW_COMMANDS
>  	CXL_CMD(RAW, NONE, ~0, ~0),
>  #endif
> +	CXL_CMD(GET_SUPPORTED_LOGS, NONE, 0, ~0),
>  };
>  
>  /*
> @@ -627,6 +645,10 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
>  	c = &mem_commands[send_cmd->id];
>  	info = &c->info;
>  
> +	/* Check that the command is enabled for hardware */
> +	if (!test_bit(info->id, cxlm->enabled_cmds))
> +		return -ENOTTY;
> +
>  	if (info->flags & CXL_MEM_COMMAND_FLAG_KERNEL)
>  		return -EPERM;
>  
> @@ -869,6 +891,14 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
>  	mutex_init(&cxlm->mbox_mutex);
>  	cxlm->pdev = pdev;
>  	cxlm->regs = regs + offset;
> +	cxlm->enabled_cmds =
> +		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
> +				   sizeof(unsigned long),
> +				   GFP_KERNEL | __GFP_ZERO);

Hmm. There doesn't seem to be a devm_bitmap_zalloc().

Embarrassingly one of the google hits on the topic is me suggesting
this in a previous review (that I'd long since forgotten)

Perhaps one for a refactoring patch after this lands.
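For reference, I'd imagine a devm variant would be little more than this
(untested sketch):

static inline unsigned long *devm_bitmap_zalloc(struct device *dev,
						unsigned int nbits, gfp_t gfp)
{
	/* Managed allocation, freed automatically on driver detach. */
	return devm_kcalloc(dev, BITS_TO_LONGS(nbits),
			    sizeof(unsigned long), gfp);
}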


> +	if (!cxlm->enabled_cmds) {
> +		dev_err(dev, "No memory available for bitmap\n");
> +		return NULL;
> +	}
>  
>  	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
>  	return cxlm;
> @@ -1088,6 +1118,188 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
>  	return rc;
>  }
>  
> +struct cxl_mbox_get_log {
> +	uuid_t uuid;
> +	__le32 offset;
> +	__le32 length;
> +} __packed;
> +
> +static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
> +{
> +	u32 remaining = size;
> +	u32 offset = 0;
> +
> +	while (remaining) {
> +		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
> +		struct cxl_mbox_get_log log = {
> +			.uuid = *uuid,
> +			.offset = cpu_to_le32(offset),
> +			.length = cpu_to_le32(xfer_size)
> +		};
> +		struct mbox_cmd mbox_cmd = {
> +			.opcode = CXL_MBOX_OP_GET_LOG,
> +			.payload_in = &log,
> +			.payload_out = out,
> +			.size_in = sizeof(log),
> +		};
> +		int rc;
> +
> +		rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> +		if (rc)
> +			return rc;
> +
> +		WARN_ON(mbox_cmd.size_out != xfer_size);

Just for completeness (as already addressed in one of Ben's replies
to an earlier patch): this is missing handling for the return code.
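i.e. follow the pattern already used by the other callers in this patch
(sketch only):

	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
	if (rc)
		return rc;

	/* The command executed, but the device reported an error. */
	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
		return -ENXIO;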

> +
> +		out += xfer_size;
> +		remaining -= xfer_size;
> +		offset += xfer_size;
> +	}
> +
> +	return 0;
> +}
> +
> +static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
> +{
> +	struct cxl_mem_command *c;
> +
> +	cxl_for_each_cmd(c)
> +		if (c->opcode == opcode)
> +			return c;
> +
> +	return NULL;
> +}
> +
> +static void cxl_enable_cmd(struct cxl_mem *cxlm,
> +			   const struct cxl_mem_command *cmd)
> +{
> +	if (test_and_set_bit(cmd->info.id, cxlm->enabled_cmds))
> +		dev_WARN_ONCE(&cxlm->pdev->dev, true, "cmd enabled twice\n");
> +}
> +
> +/**
> + * cxl_walk_cel() - Walk through the Command Effects Log.
> + * @cxlm: Device.
> + * @size: Length of the Command Effects Log.
> + * @cel: CEL
> + *
> + * Iterate over each entry in the CEL and determine if the driver supports the
> + * command. If so, the command is enabled for the device and can be used later.
> + */
> +static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
> +{
> +	struct cel_entry {
> +		__le16 opcode;
> +		__le16 effect;
> +	} *cel_entry;

The driver is currently marking a bunch of other structures packed that don't
need it. Perhaps mark this one as well for consistency?
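i.e. just:

	} __packed *cel_entry;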

> +	const int cel_entries = size / sizeof(*cel_entry);
> +	int i;
> +
> +	cel_entry = (struct cel_entry *)cel;
> +
> +	for (i = 0; i < cel_entries; i++) {
> +		const struct cel_entry *ce = &cel_entry[i];

Given ce is only ever used to get ce->opcode, maybe it is better to use that
as the local variable?

		u16 opcode = le16_to_cpu(cel_entry[i].opcode);

Obviously that might change depending on later patches though.
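Untested, but the whole loop would then read roughly as:

	for (i = 0; i < cel_entries; i++) {
		u16 opcode = le16_to_cpu(cel_entry[i].opcode);
		const struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);

		if (!cmd) {
			dev_dbg(&cxlm->pdev->dev,
				"Unsupported opcode 0x%04x", opcode);
			continue;
		}

		cxl_enable_cmd(cxlm, cmd);
	}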


> +		const struct cxl_mem_command *cmd =
> +			cxl_mem_find_command(le16_to_cpu(ce->opcode));
> +
> +		if (!cmd) {
> +			dev_dbg(&cxlm->pdev->dev, "Unsupported opcode 0x%04x",

Unsupported by whom? (driver rather than hardware)

> +				le16_to_cpu(ce->opcode));
> +			continue;
> +		}
> +
> +		cxl_enable_cmd(cxlm, cmd);
> +	}
> +}
> +
> +/**
> + * cxl_mem_enumerate_cmds() - Enumerate commands for a device.
> + * @cxlm: The device.
> + *
> + * Returns 0 if enumerate completed successfully.
> + *
> + * CXL devices have optional support for certain commands. This function will
> + * determine the set of supported commands for the hardware and update the
> + * enabled_cmds bitmap in the @cxlm.
> + */
> +static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
> +{
> +	struct device *dev = &cxlm->pdev->dev;
> +	struct cxl_mbox_get_supported_logs {
> +		__le16 entries;
> +		u8 rsvd[6];
> +		struct gsl_entry {
> +			uuid_t uuid;
> +			__le32 size;
> +		} __packed entry[2];
> +	} __packed gsl;
> +	struct mbox_cmd mbox_cmd = {
> +		.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
> +		.payload_out = &gsl,
> +		.size_in = 0,
> +	};
> +	int i, rc;
> +
> +	rc = cxl_mem_mbox_get(cxlm);
> +	if (rc)
> +		return rc;
> +
> +	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> +	if (rc)
> +		goto out;
> +
> +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> +		rc = -ENXIO;
> +		goto out;
> +	}
> +
> +	if (mbox_cmd.size_out > sizeof(gsl)) {
> +		dev_warn(dev, "%zu excess logs\n",
> +			 (mbox_cmd.size_out - sizeof(gsl)) /
> +				 sizeof(struct gsl_entry));

This could well happen given the spec seems to allow for other
entries defined by other specs.

Note that it's this path that I mentioned earlier as requiring that we sanity
check the available output size before calling memcpy_fromio() into it
with the hardware supported size.


> +	}
> +
> +	for (i = 0; i < le16_to_cpu(gsl.entries); i++) {
> +		u32 size = le32_to_cpu(gsl.entry[i].size);
> +		uuid_t uuid = gsl.entry[i].uuid;
> +		u8 *log;
> +
> +		dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size);
> +
> +		if (!uuid_equal(&uuid, &log_uuid[CEL_UUID]))
> +			continue;
> +
> +		/*
> +		 * It's a hardware bug if the log size is less than the input
> +		 * payload size because there are many mandatory commands.
> +		 */
> +		if (sizeof(struct cxl_mbox_get_log) > size) {

If you are going to talk about less than in the comment, I'd flip the condition
around so it lines up. Trivial obviously but nice to tidy up.
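i.e. something like:

		if (size < sizeof(struct cxl_mbox_get_log)) {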

> +			dev_err(dev, "CEL log size reported was too small (%d)",
> +				size);
> +			rc = -ENOMEM;
> +			goto out;
> +		}
> +
> +		log = kvmalloc(size, GFP_KERNEL);
> +		if (!log) {
> +			rc = -ENOMEM;
> +			goto out;
> +		}
> +
> +		rc = cxl_xfer_log(cxlm, &uuid, size, log);
> +		if (rc) {
> +			kvfree(log);
> +			goto out;
> +		}
> +
> +		cxl_walk_cel(cxlm, size, log);
> +		kvfree(log);
> +	}
> +
> +out:
> +	cxl_mem_mbox_put(cxlm);
> +	return rc;
> +}
> +
>  /**
>   * cxl_mem_identify() - Send the IDENTIFY command to the device.
>   * @cxlm: The device to identify.
> @@ -1211,6 +1423,10 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  	if (rc)
>  		return rc;
>  
> +	rc = cxl_mem_enumerate_cmds(cxlm);
> +	if (rc)
> +		return rc;
> +
>  	rc = cxl_mem_identify(cxlm);
>  	if (rc)
>  		return rc;
> diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> index 72d1eb601a5d..c5e75b9dad9d 100644
> --- a/include/uapi/linux/cxl_mem.h
> +++ b/include/uapi/linux/cxl_mem.h
> @@ -23,6 +23,7 @@
>  	___C(INVALID, "Invalid Command"),                                 \
>  	___C(IDENTIFY, "Identify Command"),                               \
>  	___C(RAW, "Raw device command"),                                  \
> +	___C(GET_SUPPORTED_LOGS, "Get Supported Logs"),                   \
>  	___C(MAX, "Last command")
>  
>  #define ___C(a, b) CXL_MEM_COMMAND_ID_##a


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 7/8] cxl/mem: Add set of informational commands
  2021-02-10  0:02 ` [PATCH v2 7/8] cxl/mem: Add set of informational commands Ben Widawsky
@ 2021-02-11 12:07   ` Jonathan Cameron
  0 siblings, 0 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-11 12:07 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Tue, 9 Feb 2021 16:02:58 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> Add initial set of formal commands beyond basic identify and command
> enumeration.
> 
> Of special note is the Get Log Command which is only specified to return
> 2 log types, CEL and VENDOR_DEBUG. Given that VENDOR_DEBUG is already a
> large catch all for vendor specific information there is no known reason
> for devices to be implementing other log types. Unknown log types are
> included in the "vendor passthrough shenanigans" safety regime like raw
> commands and blocked by default.

As mentioned in previous patch comments, the way that is worded in the spec
suggests to me that what we might see is other specifications providing
more UUIDs to define other 'standard' info.  Maybe something else was
intended...   Still, what you have done here makes sense to me.

> 
> Up to this point there has been no reason to inspect payload data.
> Given the need to check the log type add a new "validate_payload"
> operation to define a generic mechanism to restrict / filter commands.
> 
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/cxl/mem.c            | 55 +++++++++++++++++++++++++++++++++++-
>  include/uapi/linux/cxl_mem.h |  5 ++++
>  2 files changed, 59 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index e9aa6ca18d99..e8cc076b9f1b 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -44,12 +44,16 @@
>  enum opcode {
>  	CXL_MBOX_OP_INVALID		= 0x0000,
>  	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
> +	CXL_MBOX_OP_GET_FW_INFO		= 0x0200,
>  	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
>  	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
>  	CXL_MBOX_OP_GET_LOG		= 0x0401,
>  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
> +	CXL_MBOX_OP_GET_PARTITION_INFO	= 0x4100,
>  	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
> +	CXL_MBOX_OP_GET_LSA		= 0x4102,
>  	CXL_MBOX_OP_SET_LSA		= 0x4103,
> +	CXL_MBOX_OP_GET_HEALTH_INFO	= 0x4200,
>  	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
>  	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
>  	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
> @@ -118,6 +122,9 @@ static const uuid_t log_uuid[] = {
>  					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86)
>  };
>  
> +static int validate_log_uuid(struct cxl_mem *cxlm, void __user *payload,
> +			     size_t size);
> +
>  /**
>   * struct cxl_mem_command - Driver representation of a memory device command
>   * @info: Command information as it exists for the UAPI
> @@ -129,6 +136,10 @@ static const uuid_t log_uuid[] = {
>   *  * %CXL_CMD_INTERNAL_FLAG_PSEUDO: This is a pseudo command which doesn't have
>   *    a direct mapping to hardware. They are implicitly always enabled.
>   *
> + * @validate_payload: A function called after the command is validated but
> + * before it's sent to the hardware. The primary purpose is to validate, or
> + * fixup the actual payload.
> + *
>   * The cxl_mem_command is the driver's internal representation of commands that
>   * are supported by the driver. Some of these commands may not be supported by
>   * the hardware. The driver will use @info to validate the fields passed in by
> @@ -139,9 +150,12 @@ static const uuid_t log_uuid[] = {
>  struct cxl_mem_command {
>  	struct cxl_command_info info;
>  	enum opcode opcode;
> +
> +	int (*validate_payload)(struct cxl_mem *cxlm, void __user *payload,
> +				size_t size);
>  };
>  
> -#define CXL_CMD(_id, _flags, sin, sout)                                        \
> +#define CXL_CMD_VALIDATE(_id, _flags, sin, sout, v)                            \
>  	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
>  	.info =	{                                                              \
>  			.id = CXL_MEM_COMMAND_ID_##_id,                        \
> @@ -150,8 +164,12 @@ struct cxl_mem_command {
>  			.size_out = sout,                                      \
>  		},                                                             \
>  	.opcode = CXL_MBOX_OP_##_id,                                           \
> +	.validate_payload = v,                                                 \
>  	}
>  
> +#define CXL_CMD(_id, _flags, sin, sout)                                        \
> +	CXL_CMD_VALIDATE(_id, _flags, sin, sout, NULL)
> +
>  /*
>   * This table defines the supported mailbox commands for the driver. This table
>   * is made up of a UAPI structure. Non-negative values as parameters in the
> @@ -164,6 +182,11 @@ static struct cxl_mem_command mem_commands[] = {
>  	CXL_CMD(RAW, NONE, ~0, ~0),
>  #endif
>  	CXL_CMD(GET_SUPPORTED_LOGS, NONE, 0, ~0),
> +	CXL_CMD(GET_FW_INFO, NONE, 0, 0x50),
> +	CXL_CMD(GET_PARTITION_INFO, NONE, 0, 0x20),
> +	CXL_CMD(GET_LSA, NONE, 0x8, ~0),
> +	CXL_CMD(GET_HEALTH_INFO, NONE, 0, 0x12),
> +	CXL_CMD_VALIDATE(GET_LOG, NONE, 0x18, ~0, validate_log_uuid),
>  };
>  
>  /*
> @@ -492,6 +515,14 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
>  		mbox_cmd.payload_out = kvzalloc(cxlm->payload_size, GFP_KERNEL);
>  
>  	if (cmd->info.size_in) {
> +		if (cmd->validate_payload) {
> +			rc = cmd->validate_payload(cxlm,
> +						   u64_to_user_ptr(in_payload),
> +						   cmd->info.size_in);
> +			if (rc)
> +				goto out;
> +		}
> +
>  		mbox_cmd.payload_in = kvzalloc(cmd->info.size_in, GFP_KERNEL);
>  		if (!mbox_cmd.payload_in) {
>  			rc = -ENOMEM;
> @@ -1124,6 +1155,28 @@ struct cxl_mbox_get_log {
>  	__le32 length;
>  } __packed;
>  
> +static int validate_log_uuid(struct cxl_mem *cxlm, void __user *input,
> +			     size_t size)
> +{
> +	struct cxl_mbox_get_log __user *get_log = input;
> +	uuid_t payload_uuid;
> +
> +	if (copy_from_user(&payload_uuid, &get_log->uuid, sizeof(uuid_t)))
> +		return -EFAULT;
> +
> +	if (uuid_equal(&payload_uuid, &log_uuid[CEL_UUID]))
> +		return 0;
> +	if (uuid_equal(&payload_uuid, &log_uuid[VENDOR_DEBUG_UUID]))
> +		return 0;
> +
> +	/* All unspec'd logs shall taint */
> +	if (WARN_ONCE(!cxl_mem_raw_command_allowed(CXL_MBOX_OP_RAW),
> +		      "Unknown log UUID %pU used\n", &payload_uuid))
> +		return -EPERM;
> +
> +	return 0;
> +}
> +
>  static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
>  {
>  	u32 remaining = size;
> diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> index c5e75b9dad9d..ba4d3b4d6b7d 100644
> --- a/include/uapi/linux/cxl_mem.h
> +++ b/include/uapi/linux/cxl_mem.h
> @@ -24,6 +24,11 @@
>  	___C(IDENTIFY, "Identify Command"),                               \
>  	___C(RAW, "Raw device command"),                                  \
>  	___C(GET_SUPPORTED_LOGS, "Get Supported Logs"),                   \
> +	___C(GET_FW_INFO, "Get FW Info"),                                 \
> +	___C(GET_PARTITION_INFO, "Get Partition Information"),            \
> +	___C(GET_LSA, "Get Label Storage Area"),                          \
> +	___C(GET_HEALTH_INFO, "Get Health Info"),                         \
> +	___C(GET_LOG, "Get Log"),                                         \
>  	___C(MAX, "Last command")
>  
>  #define ___C(a, b) CXL_MEM_COMMAND_ID_##a


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-11  9:55           ` Jonathan Cameron
@ 2021-02-11 15:55             ` Ben Widawsky
  2021-02-12 13:27               ` Jonathan Cameron
  2021-02-11 18:27             ` Ben Widawsky
  1 sibling, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-11 15:55 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-11 09:55:48, Jonathan Cameron wrote:
> On Wed, 10 Feb 2021 10:16:05 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > On 21-02-10 08:55:57, Ben Widawsky wrote:
> > > On 21-02-10 15:07:59, Jonathan Cameron wrote:  
> > > > On Wed, 10 Feb 2021 13:32:52 +0000
> > > > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > > >   
> > > > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > > > >   
> > > > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > > > The mailbox is used to interact with the firmware running on the memory
> > > > > > device. The flow is proven with one implemented command, "identify".
> > > > > > Because the class code has already told the driver this is a memory
> > > > > > device and the identify command is mandatory.
> > > > > > 
> > > > > > CXL devices contain an array of capabilities that describe the
> > > > > > interactions software can have with the device or firmware running on
> > > > > > the device. A CXL compliant device must implement the device status and
> > > > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > > > implement the memory device capability. Each of the capabilities can
> > > > > > [will] provide an offset within the MMIO region for interacting with the
> > > > > > CXL device.
> > > > > > 
> > > > > > The capabilities tell the driver how to find and map the register space
> > > > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > > > and not handled in this driver.
> > > > > > 
> > > > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > > > a background command. That implementation is saved for a later time.
> > > > > > 
> > > > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>    
> > > > > 
> > > > > Hi Ben,
> > > > > 
> > > > >   
> > > > > > +/**
> > > > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > > > + * @cxlm: The CXL memory device to communicate with.
> > > > > > + * @mbox_cmd: Command to send to the memory device.
> > > > > > + *
> > > > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > > > + *         succeeded.    
> > > > > 
> > > > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > > > enters an infinite loop as a result.  
> > > 
> > > I meant to fix that.
> > >   
> > > > > 
> > > > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > > > two levels of error checking - the example here proves how easy it is to forget
> > > > > one.  
> > > 
> > > Demonstrably, you're correct. I think it would be good to have a kernel only
> > > mbox command that does the error checking though. Let me type something up and
> > > see how it looks.  
> > 
> > Hi Jonathan. What do you think of this? The bit I'm on the fence about is if I
> > should validate output size too. I like the simplicity as it is, but it requires
> > every caller to possibly check output size, which is kind of the same problem
> > you're originally pointing out.
> 
> The simplicity is good and this is pretty much what I expected you would end up with
> (always reassuring)
> 
> For the output, perhaps just add another parameter to the wrapper for minimum
> output length expected?
> 
> Now you mention the length question.  It does rather feel like there should also
> be some protection on memcpy_fromio() copying too much data if the hardware
> happens to return an unexpectedly long length.  Should never happen, but
> the hardening is worth adding anyway given it's easy to do.
> 
> Jonathan

Some background because I forget what I've said previously... It's unfortunate
that the spec maxes at 1M mailbox size but has enough bits in the length field
to support 2M-1. I've made some requests to have this fixed, so maybe 3.0 won't
be awkward like this.

I think it makes sense to do as you suggested. One question though: do you have
an opinion on what we return to the caller as the output payload size? Do we cap
it at 1M also, or are we honest?

-       if (out_len && mbox_cmd->payload_out)
-               memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
+       if (out_len && mbox_cmd->payload_out) {
+               size_t n = min_t(size_t, cxlm->payload_size, out_len);
+               memcpy_fromio(mbox_cmd->payload_out, payload, n);
+       }

So, which should it be:
mbox_cmd->size_out = out_len;
or
mbox_cmd->size_out = n;


> 
> 
> > 
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > index 55c5f5a6023f..ad7b2077ab28 100644
> > --- a/drivers/cxl/mem.c
> > +++ b/drivers/cxl/mem.c
> > @@ -284,7 +284,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >  }
> >  
> >  /**
> > - * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > + * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
> >   * @cxlm: The CXL memory device to communicate with.
> >   * @mbox_cmd: Command to send to the memory device.
> >   *
> > @@ -296,7 +296,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >   * This is a generic form of the CXL mailbox send command, thus the only I/O
> >   * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> >   * types of CXL devices may have further information available upon error
> > - * conditions.
> > + * conditions. Driver facilities wishing to send mailbox commands should use the
> > + * wrapper command.
> >   *
> >   * The CXL spec allows for up to two mailboxes. The intention is for the primary
> >   * mailbox to be OS controlled and the secondary mailbox to be used by system
> > @@ -304,8 +305,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >   * not need to coordinate with each other. The driver only uses the primary
> >   * mailbox.
> >   */
> > -static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > -				 struct mbox_cmd *mbox_cmd)
> > +static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > +				   struct mbox_cmd *mbox_cmd)
> >  {
> >  	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> >  	u64 cmd_reg, status_reg;
> > @@ -469,6 +470,54 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> >  	mutex_unlock(&cxlm->mbox_mutex);
> >  }
> >  
> > +/**
> > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > + * @cxlm: The CXL memory device to communicate with.
> > + * @opcode: Opcode for the mailbox command.
> > + * @in: The input payload for the mailbox command.
> > + * @in_size: The length of the input payload
> > + * @out: Caller allocated buffer for the output.
> > + *
> > + * Context: Any context. Will acquire and release mbox_mutex.
> > + * Return:
> > + *  * %>=0	- Number of bytes returned in @out.
> > + *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
> > + *  * %-EFAULT	- Hardware error occurred.
> > + *  * %-ENXIO	- Command completed, but device reported an error.
> > + *
> > + * Mailbox commands may execute successfully yet the device itself reported an
> > + * error. While this distinction can be useful for commands from userspace, the
> > + * kernel will often only care when both are successful.
> > + *
> > + * See __cxl_mem_mbox_send_cmd()
> > + */
> > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
> > +				 size_t in_size, u8 *out)
> > +{
> > +	struct mbox_cmd mbox_cmd = {
> > +		.opcode = opcode,
> > +		.payload_in = in,
> > +		.size_in = in_size,
> > +		.payload_out = out,
> > +	};
> > +	int rc;
> > +
> > +	rc = cxl_mem_mbox_get(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > +	cxl_mem_mbox_put(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	/* TODO: Map return code to proper kernel style errno */
> > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
> > +		return -ENXIO;
> > +
> > +	return mbox_cmd.size_out;
> > +}
> > +
> >  /**
> >   * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
> >   * @cxlmd: The CXL memory device to communicate with.
> > @@ -1380,33 +1429,18 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
> >  		u8 poison_caps;
> >  		u8 qos_telemetry_caps;
> >  	} __packed id;
> > -	struct mbox_cmd mbox_cmd = {
> > -		.opcode = CXL_MBOX_OP_IDENTIFY,
> > -		.payload_out = &id,
> > -		.size_in = 0,
> > -	};
> >  	int rc;
> >  
> > -	/* Retrieve initial device memory map */
> > -	rc = cxl_mem_mbox_get(cxlm);
> > -	if (rc)
> > -		return rc;
> > -
> > -	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > -	cxl_mem_mbox_put(cxlm);
> > -	if (rc)
> > +	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
> > +				   (u8 *)&id);
> > +	if (rc < 0)
> >  		return rc;
> >  
> > -	/* TODO: Handle retry or reset responses from firmware. */
> > -	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > -		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > -			mbox_cmd.return_code);
> > +	if (rc < sizeof(id)) {
> > +		dev_err(&cxlm->pdev->dev, "Short identify data\n");
> >  		return -ENXIO;
> >  	}
> >  
> > -	if (mbox_cmd.size_out != sizeof(id))
> > -		return -ENXIO;
> > -
> >  	/*
> >  	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> >  	 * For now, only the capacity is exported in sysfs
> > 
> > 
> > [snip]
> > 
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-11 11:19   ` Jonathan Cameron
@ 2021-02-11 16:01     ` Ben Widawsky
  2021-02-12 13:40       ` Jonathan Cameron
  0 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-11 16:01 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, Ariel Sibley

On 21-02-11 11:19:24, Jonathan Cameron wrote:
> On Tue, 9 Feb 2021 16:02:56 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > The CXL memory device send interface will have a number of supported
> > commands. The raw command is not such a command. Raw commands allow
> > userspace to send a specified opcode to the underlying hardware and
> > bypass all driver checks on the command. This is useful for a couple of
> > usecases, mainly:
> > 1. Undocumented vendor specific hardware commands
> 
> This one I get.  There are things we'd love to standardize but often they
> need proving in a generation of hardware before the data is available to
> justify taking it to a standards body.  Stuff like performance stats.
> This stuff will all sit in the vendor defined range.  Maybe there is an
> argument for in driver hooks to allow proper support even for these
> (Ben mentioned this in the other branch of the thread).
> 
> > 2. Prototyping new hardware commands not yet supported by the driver
> 
> For 2, we could just have a convenient place to enable this with a one-line patch.
> Some subsystems (SPI comes to mind) do this for their equivalent of raw
> commands.  The code is all there to enable it but you need to hook it
> up if you want to use it.  Avoids the chance of a distro shipping it.
> 

I'm fine to drop #2 as a justification point, or maybe reword the commit message
to say, "you could also just do... but since we have it for #1 already..."

> > 
> > While this all sounds very powerful it comes with a couple of caveats:
> > 1. Bug reports using raw commands will not get the same level of
> >    attention as bug reports using supported commands (via taint).
> > 2. Supported commands will be rejected by the RAW command.
> 
> Perhaps I'm misreading this point 2 (not sure the code actually does it!)
> 
> As stated what worries me as it means when we add support for a new
> bit of the spec we just broke the userspace ABI.
> 

It does not break ABI. The agreement is that userspace must always use the QUERY
command to find out what commands are supported. If it tries to use a RAW
command that is a supported command, it will be rejected. In the case you
mention, that's an application bug. If there is a way to document that better
than what's already in the UAPI kdocs, I'm open to suggestions.

Unlike perhaps other UAPIs, this one only promises to give you a way to determine
which commands you can use, not a fixed list of commands you can use.
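To illustrate, the expected userspace flow for a vendor opcode is roughly the
below (untested sketch; the opcode is made up, setup and error handling are
elided, and the exact field names should be double checked against the UAPI
header):

struct cxl_send_command cmd = {
	.id = CXL_MEM_COMMAND_ID_RAW,
	.raw.opcode = 0xc123,	/* hypothetical vendor specific opcode */
	.in.size = sizeof(in_buf),
	.in.payload = (__u64)(uintptr_t)in_buf,
	.out.size = sizeof(out_buf),
	.out.payload = (__u64)(uintptr_t)out_buf,
};

/* Rejected with -EPERM if raw commands are not permitted. */
rc = ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);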

> > 
> > With this comes new debugfs knob to allow full access to your toes with
> > your weapon of choice.
> 
> A few trivial things inline,
> 
> Jonathan
> 
> > 
> > Cc: Ariel Sibley <Ariel.Sibley@microchip.com>
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/cxl/Kconfig          |  18 +++++
> >  drivers/cxl/mem.c            | 125 ++++++++++++++++++++++++++++++++++-
> >  include/uapi/linux/cxl_mem.h |  12 +++-
> >  3 files changed, 152 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > index c4ba3aa0a05d..08eaa8e52083 100644
> > --- a/drivers/cxl/Kconfig
> > +++ b/drivers/cxl/Kconfig
> > @@ -33,6 +33,24 @@ config CXL_MEM
> >  
> >  	  If unsure say 'm'.
> >  
> > +config CXL_MEM_RAW_COMMANDS
> > +	bool "RAW Command Interface for Memory Devices"
> > +	depends on CXL_MEM
> > +	help
> > +	  Enable CXL RAW command interface.
> > +
> > +	  The CXL driver ioctl interface may assign a kernel ioctl command
> > +	  number for each specification defined opcode. At any given point in
> > +	  time the number of opcodes that the specification defines and a device
> > +	  may implement may exceed the kernel's set of associated ioctl function
> > +	  numbers. The mismatch is either by omission, specification is too new,
> > +	  or by design. When prototyping new hardware, or developing / debugging
> > +	  the driver it is useful to be able to submit any possible command to
> > +	  the hardware, even commands that may crash the kernel due to their
> > +	  potential impact to memory currently in use by the kernel.
> > +
> > +	  If developing CXL hardware or the driver say Y, otherwise say N.
> > +
> >  config CXL_MEM_INSECURE_DEBUG
> >  	bool "CXL.mem debugging"
> >  	depends on CXL_MEM
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > index ce65630bb75e..6d766a994dce 100644
> > --- a/drivers/cxl/mem.c
> > +++ b/drivers/cxl/mem.c
> > @@ -1,6 +1,8 @@
> >  // SPDX-License-Identifier: GPL-2.0-only
> >  /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> >  #include <uapi/linux/cxl_mem.h>
> > +#include <linux/security.h>
> > +#include <linux/debugfs.h>
> >  #include <linux/module.h>
> >  #include <linux/mutex.h>
> >  #include <linux/cdev.h>
> > @@ -41,7 +43,14 @@
> >  
> >  enum opcode {
> >  	CXL_MBOX_OP_INVALID		= 0x0000,
> > +	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
> > +	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
> >  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
> > +	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
> > +	CXL_MBOX_OP_SET_LSA		= 0x4103,
> > +	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
> > +	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
> > +	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
> >  	CXL_MBOX_OP_MAX			= 0x10000
> >  };
> >  
> > @@ -91,6 +100,8 @@ struct cxl_memdev {
> >  
> >  static int cxl_mem_major;
> >  static DEFINE_IDA(cxl_memdev_ida);
> > +static struct dentry *cxl_debugfs;
> > +static bool raw_allow_all;
> >  
> >  /**
> >   * struct cxl_mem_command - Driver representation of a memory device command
> > @@ -132,6 +143,49 @@ struct cxl_mem_command {
> >   */
> >  static struct cxl_mem_command mem_commands[] = {
> >  	CXL_CMD(IDENTIFY, NONE, 0, 0x43),
> > +#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
> > +	CXL_CMD(RAW, NONE, ~0, ~0),
> > +#endif
> > +};
> > +
> > +/*
> > + * Commands that RAW doesn't permit. The rationale for each:
> > + *
> > + * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
> > + * coordination of transaction timeout values at the root bridge level.
> > + *
> > + * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
> > + * and needs to be coordinated with HDM updates.
> > + *
> > + * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
> > + * driver and any writes from userspace invalidates those contents.
> > + *
> > + * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
> > + * to the device after it is marked clean, userspace can not make that
> > + * assertion.
> > + *
> > + * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
> > + * is kept up to date with patrol notifications and error management.
> > + */
> > +static u16 disabled_raw_commands[] = {
> > +	CXL_MBOX_OP_ACTIVATE_FW,
> > +	CXL_MBOX_OP_SET_PARTITION_INFO,
> > +	CXL_MBOX_OP_SET_LSA,
> > +	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
> > +	CXL_MBOX_OP_SCAN_MEDIA,
> > +	CXL_MBOX_OP_GET_SCAN_MEDIA,
> > +};
> > +
> > +/*
> > + * Command sets that RAW doesn't permit. All opcodes in this set are
> > + * disabled because they pass plain text security payloads over the
> > + * user/kernel boundary. This functionality is intended to be wrapped
> > + * behind the keys ABI which allows for encrypted payloads in the UAPI
> > + */
> > +static u8 security_command_sets[] = {
> > +	0x44, /* Sanitize */
> > +	0x45, /* Persistent Memory Data-at-rest Security */
> > +	0x46, /* Security Passthrough */
> >  };
> >  
> >  #define cxl_for_each_cmd(cmd)                                                  \
> > @@ -162,6 +216,16 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
> >  	return 0;
> >  }
> >  
> > +static bool is_security_command(u16 opcode)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
> > +		if (security_command_sets[i] == (opcode >> 8))
> > +			return true;
> > +	return false;
> > +}
> > +
> >  static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >  				 struct mbox_cmd *mbox_cmd)
> >  {
> > @@ -170,7 +234,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >  	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
> >  		mbox_cmd->opcode, mbox_cmd->size_in);
> >  
> > -	if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
> > +	if (!is_security_command(mbox_cmd->opcode) ||
> > +	    IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
> >  		print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
> >  				     mbox_cmd->payload_in, mbox_cmd->size_in,
> >  				     true);
> > @@ -434,6 +499,9 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
> >  		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
> >  		cmd->info.size_in);
> >  
> > +	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
> > +		      "raw command path used\n");
> > +
> >  	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> >  	cxl_mem_mbox_put(cxlm);
> >  	if (rc)
> > @@ -464,6 +532,29 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
> >  	return rc;
> >  }
> >  
> > +static bool cxl_mem_raw_command_allowed(u16 opcode)
> > +{
> > +	int i;
> > +
> > +	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
> > +		return false;
> > +
> > +	if (security_locked_down(LOCKDOWN_NONE))
> > +		return false;
> > +
> > +	if (raw_allow_all)
> > +		return true;
> > +
> > +	if (is_security_command(opcode))
> Given we are mixing generic calls like security_locked_down()
> and local cxl specific ones like this one, prefix the
> local versions.
> 
> cxl_is_security_command()
> 
> I'd also have a slight preference to do it for cxl_disabled_raw_commands
> and cxl_raw_allow_all, though they are less important since they are more
> obviously local by not being function calls.
> 
> > +		return false;
> > +
> > +	for (i = 0; i < ARRAY_SIZE(disabled_raw_commands); i++)
> > +		if (disabled_raw_commands[i] == opcode)
> > +			return false;
> > +
> > +	return true;
> > +}
> > +
> >  /**
> >   * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
> >   * @cxlm: &struct cxl_mem device whose mailbox will be used.
> > @@ -500,6 +591,29 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
> >  	if (send_cmd->in.size > cxlm->payload_size)
> >  		return -EINVAL;
> >  
> > +	/* Checks are bypassed for raw commands but along comes the taint! */
> > +	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
> > +		const struct cxl_mem_command temp = {
> > +			.info = {
> > +				.id = CXL_MEM_COMMAND_ID_RAW,
> > +				.flags = CXL_MEM_COMMAND_FLAG_NONE,
> > +				.size_in = send_cmd->in.size,
> > +				.size_out = send_cmd->out.size,
> > +			},
> > +			.opcode = send_cmd->raw.opcode
> > +		};
> > +
> > +		if (send_cmd->raw.rsvd)
> > +			return -EINVAL;
> > +
> > +		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
> > +			return -EPERM;
> > +
> > +		memcpy(out_cmd, &temp, sizeof(temp));
> > +
> > +		return 0;
> > +	}
> > +
> >  	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
> >  		return -EINVAL;
> >  
> > @@ -1123,8 +1237,9 @@ static struct pci_driver cxl_mem_driver = {
> >  
> >  static __init int cxl_mem_init(void)
> >  {
> > -	int rc;
> > +	struct dentry *mbox_debugfs;
> >  	dev_t devt;
> > +	int rc;
> 
> Shuffle this back to the place it was introduced to reduce patch noise.
> 
> >  
> >  	rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl");
> >  	if (rc)
> > @@ -1139,11 +1254,17 @@ static __init int cxl_mem_init(void)
> >  		return rc;
> >  	}
> >  
> > +	cxl_debugfs = debugfs_create_dir("cxl", NULL);
> > +	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
> > +	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
> > +			    &raw_allow_all);
> > +
> >  	return 0;
> >  }
> >  
> >  static __exit void cxl_mem_exit(void)
> >  {
> > +	debugfs_remove_recursive(cxl_debugfs);
> >  	pci_unregister_driver(&cxl_mem_driver);
> >  	unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
> >  }
> > diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> > index f1f7e9f32ea5..72d1eb601a5d 100644
> > --- a/include/uapi/linux/cxl_mem.h
> > +++ b/include/uapi/linux/cxl_mem.h
> > @@ -22,6 +22,7 @@
> >  #define CXL_CMDS                                                          \
> >  	___C(INVALID, "Invalid Command"),                                 \
> >  	___C(IDENTIFY, "Identify Command"),                               \
> > +	___C(RAW, "Raw device command"),                                  \
> >  	___C(MAX, "Last command")
> >  
> >  #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> > @@ -112,6 +113,9 @@ struct cxl_mem_query_commands {
> >   * @id: The command to send to the memory device. This must be one of the
> >   *	commands returned by the query command.
> >   * @flags: Flags for the command (input).
> > + * @raw: Special fields for raw commands
> > + * @raw.opcode: Opcode passed to hardware when using the RAW command.
> > + * @raw.rsvd: Must be zero.
> >   * @rsvd: Must be zero.
> >   * @retval: Return value from the memory device (output).
> >   * @in.size: Size of the payload to provide to the device (input).
> > @@ -133,7 +137,13 @@ struct cxl_mem_query_commands {
> >  struct cxl_send_command {
> >  	__u32 id;
> >  	__u32 flags;
> > -	__u32 rsvd;
> > +	union {
> > +		struct {
> > +			__u16 opcode;
> > +			__u16 rsvd;
> > +		} raw;
> > +		__u32 rsvd;
> > +	};
> >  	__u32 retval;
> >  
> >  	struct {
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-11 10:01         ` Jonathan Cameron
@ 2021-02-11 16:04           ` Ben Widawsky
  0 siblings, 0 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-11 16:04 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Dan Williams, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas, Chris Browy,
	Christoph Hellwig, David Hildenbrand, David Rientjes, Ira Weiny,
	Jon Masters, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

On 21-02-11 10:01:52, Jonathan Cameron wrote:
> On Wed, 10 Feb 2021 11:54:29 -0800
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > > > ...
> > > >  
> > > > > +static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > > > > +                            struct mbox_cmd *mbox_cmd)
> > > > > +{
> > > > > +   struct device *dev = &cxlm->pdev->dev;
> > > > > +
> > > > > +   dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
> > > > > +           mbox_cmd->opcode, mbox_cmd->size_in);
> > > > > +
> > > > > +   if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {  
> > > >
> > > > Hmm.  Whilst I can see the advantage of this for debug, I'm not sure we want
> > > > it upstream even under a rather evil looking CONFIG variable.
> > > >
> > > > Is there a bigger lock we can use to avoid chance of accidental enablement?  
> > >
> > > Any suggestions? I'm told this functionality was extremely valuable for NVDIMM,
> > > though I haven't personally experienced it.  
> > 
> > Yeah, there was no problem with the identical mechanism in LIBNVDIMM
> > land. However, I notice that the useful feature for LIBNVDIMM is the
> > option to dump all payloads. This one only fires on timeouts which is
> > less useful. So I'd say fix it to dump all payloads on the argument
> > that the safety mechanism was proven with the LIBNVDIMM precedent, or
> > delete it altogether to maintain v5.12 momentum. Payload dumping can
> > be added later.
> 
> I think I'd drop it for now - feels like a topic that needs more discussion.
> 
> Also, dumping this data to the kernel log isn't exactly elegant - particularly
> if we dump a lot more of it.  Perhaps tracepoints?
> 

I'll drop it. It's also a small enough bit to add on for developers. When I post
v3, I will add that bit on top as an RFC. My personal preference FWIW is to use
debugfs to store the payload of the last executed command.

We went with this because of the mechanism's provenance (libnvdimm)
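Roughly what I have in mind for the debugfs variant (untested sketch; the
buffer size and naming are arbitrary):

static u8 last_payload[SZ_1M];
static struct debugfs_blob_wrapper last_payload_blob = {
	.data = last_payload,
};

/* Called from the mailbox send path after staging the input payload. */
static void cxl_mem_record_payload(struct mbox_cmd *mbox_cmd)
{
	size_t n = min_t(size_t, sizeof(last_payload), mbox_cmd->size_in);

	memcpy(last_payload, mbox_cmd->payload_in, n);
	last_payload_blob.size = n;
}

/* ... and in cxl_mem_init(): */
	debugfs_create_blob("last_payload", 0400, mbox_debugfs,
			    &last_payload_blob);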

> > 
> > [..]
> > > > > diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
> > > > > index e709ae8235e7..6267ca9ae683 100644
> > > > > --- a/include/uapi/linux/pci_regs.h
> > > > > +++ b/include/uapi/linux/pci_regs.h
> > > > > @@ -1080,6 +1080,7 @@
> > > > >
> > > > >  /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
> > > > >  #define PCI_DVSEC_HEADER1          0x4 /* Designated Vendor-Specific Header1 */
> > > > > +#define PCI_DVSEC_HEADER1_LENGTH_MASK      0xFFF00000  
> > > >
> > > > Seems sensible to add the revision mask as well.
> > > > The vendor id currently read using a word read rather than dword, but perhaps
> > > > neater to add that as well for completeness?
> > > >
> > > > Having said that, given Bjorn's comment on clashes and the fact he'd rather see
> > > > this stuff defined in drivers and combined later (see review patch 1 and follow
> > > > the link) perhaps this series should not touch this header at all.  
> > >
> > > I'm fine to move it back.  
> > 
> > Yeah, we're playing tennis now between Bjorn's and Christoph's
> > comments, but I like Bjorn's suggestion of "deduplicate post merge"
> > given the bloom of DVSEC infrastructure landing at the same time.
> I guess it may depend on timing of this.  Personally I think 5.12 may be too aggressive.
> 
> As long as Bjorn can take a DVSEC deduplication as an immutable branch then perhaps
> during 5.13 this tree can sit on top of that.
> 
> Jonathan
> 
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-10 15:26   ` Ariel.Sibley
  2021-02-10 16:49     ` Ben Widawsky
@ 2021-02-11 16:43     ` Dan Williams
  1 sibling, 0 replies; 57+ messages in thread
From: Dan Williams @ 2021-02-11 16:43 UTC (permalink / raw)
  To: Ariel.Sibley
  Cc: Ben Widawsky, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas, Chris Browy,
	Christoph Hellwig, David Hildenbrand, David Rientjes, Weiny, Ira,
	Jon Masters, Jonathan Cameron, Rafael J Wysocki, Randy Dunlap,
	Vishal L Verma, John Groves (jgroves),
	Sean V Kelley

On Wed, Feb 10, 2021 at 7:27 AM <Ariel.Sibley@microchip.com> wrote:
>
> > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > index c4ba3aa0a05d..08eaa8e52083 100644
> > --- a/drivers/cxl/Kconfig
> > +++ b/drivers/cxl/Kconfig
> > @@ -33,6 +33,24 @@ config CXL_MEM
> >
> >           If unsure say 'm'.
> >
> > +config CXL_MEM_RAW_COMMANDS
> > +       bool "RAW Command Interface for Memory Devices"
> > +       depends on CXL_MEM
> > +       help
> > +         Enable CXL RAW command interface.
> > +
> > +         The CXL driver ioctl interface may assign a kernel ioctl command
> > +         number for each specification defined opcode. At any given point in
> > +         time the number of opcodes that the specification defines and a device
> > +         may implement may exceed the kernel's set of associated ioctl function
> > +         numbers. The mismatch is either by omission, specification is too new,
> > +         or by design. When prototyping new hardware, or developing /
> > debugging
> > +         the driver it is useful to be able to submit any possible command to
> > +         the hardware, even commands that may crash the kernel due to their
> > +         potential impact to memory currently in use by the kernel.
> > +
> > +         If developing CXL hardware or the driver say Y, otherwise say N.
>
> Blocking RAW commands by default will prevent vendors from developing user space tools that utilize vendor specific commands. Vendors of CXL.mem devices should take ownership of ensuring any vendor defined commands that could cause user data to be exposed or corrupted are disabled at the device level for shipping configurations.

What follows is my personal opinion as a Linux kernel developer, not
necessarily the opinion of my employer...

Aside from the convention that new functionality is always default N
it is the Linux distributor that decides the configuration. In an
environment where the kernel is developing features like
CONFIG_SECURITY_LOCKDOWN_LSM that limit the ability of the kernel to
subvert platform features like secure boot, it is incumbent upon
drivers to evaluate what they must do to protect platform integrity.
See the ongoing tightening of /dev/mem like interfaces for an example
of the shrinking ability of root to have unfettered access to all
platform/hardware capabilities.

CXL is unique in that it impacts "System RAM" resources and that it
interleaves multiple devices. Compare this to NVME where the blast
radius of misbehavior is contained to an endpoint and is behind an
IOMMU. The larger impact, to me, increases the responsibility of CXL
enabling to review system impacts, and vendor specific functionality is
typically unreviewable.

There are 2 proposals I can see to improve the unreviewable problem.
First, of course, get commands into the standard proper. One strawman
proposal is to take the "Code First" process that seems to be working
well for the ACPI and UEFI working groups and apply it to CXL command
definitions. That vastly shortens the time between proposal and Linux
enabling. The second proposal is to define a mechanism for de-facto
standards to develop. That need, I believe, was the motivation for
"designated vendor-specific" in the first instance, i.e. to share
implementations across vendors pre-standardization.

So, allocate a public id for the command space, publish a public
specification, and then send kernel patches. This was the process for
accepting command sets outside of ACPI into the LIBNVDIMM subsystem.
See drivers/acpi/nfit/nfit.h for the reference to the public command
sets.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-11 10:06       ` Jonathan Cameron
@ 2021-02-11 16:54         ` Ben Widawsky
  0 siblings, 0 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-11 16:54 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Dan Williams, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas,
	Chris Browy <cbrowy@avery-design.com>,
	Christoph Hellwig <hch@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	David Hildenbrand <david@redhat.com>,
	David Rientjes, Jon Masters <jcm@jonmasters.org>,
	Rafael Wysocki <rafael.j.wysocki@intel.com>,
	Randy Dunlap, John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On 21-02-11 10:06:46, Jonathan Cameron wrote:
> On Wed, 10 Feb 2021 20:40:52 -0800
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > On Wed, Feb 10, 2021 at 10:47 AM Jonathan Cameron
> > <Jonathan.Cameron@huawei.com> wrote:
> > [..]
> > > > +#define CXL_CMDS                                                          \
> > > > +     ___C(INVALID, "Invalid Command"),                                 \
> > > > +     ___C(IDENTIFY, "Identify Command"),                               \
> > > > +     ___C(MAX, "Last command")
> > > > +
> > > > +#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> > > > +enum { CXL_CMDS };
> > > > +
> > > > +#undef ___C
> > > > +#define ___C(a, b) { b }
> > > > +static const struct {
> > > > +     const char *name;
> > > > +} cxl_command_names[] = { CXL_CMDS };
> > > > +#undef ___C  
> > >
> > > Unless there are going to be a lot of these, I'd just write them out longhand,
> > > as it's much more readable than the macro magic.  
> > 
> > This macro magic isn't new to Linux; it was introduced with ftrace:
> > 
> > See "cpp tricks and treats": https://lwn.net/Articles/383362/
> 
> Yeah. I've dealt with that one a few times. It's very clever and compact,
> but a PITA to debug build errors related to it.
> 
> > 
> > >
> > > enum {
> > >         CXL_MEM_COMMAND_ID_INVALID,
> > >         CXL_MEM_COMMAND_ID_IDENTIFY,
> > >         CXL_MEM_COMMAND_ID_MAX
> > > };
> > >
> > > static const struct {
> > >         const char *name;
> > > } cxl_command_names[] = {
> > >         [CXL_MEM_COMMAND_ID_INVALID] = { "Invalid Command" },
> > >         [CXL_MEM_COMMAND_ID_IDENTIFY] = { "Identify Command" },
> > >         /* I hope you never need the Last command to exist in here as that sounds like a bug */
> > > };
> > >
> > > That's assuming I actually figured the macro fun out correctly.
> > > To my mind it's worth doing this stuff for 'lots', not so much for 3.  
> > 
> > The list will continue to expand, and it eliminates the "did you
> > remember to update cxl_command_names" review burden permanently.
> 
> How about a compromise?  Add a comment showing how the first entries expand,
> to avoid people (me at least :) having to think their way through it every time?
> 
> Jonathan
> 

A minor tweak while here...

diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
index 655fbfde97fd..dac0adb879ec 100644
--- a/include/uapi/linux/cxl_mem.h
+++ b/include/uapi/linux/cxl_mem.h
@@ -22,7 +22,7 @@
 #define CXL_CMDS                                                          \
        ___C(INVALID, "Invalid Command"),                                 \
        ___C(IDENTIFY, "Identify Command"),                               \
-       ___C(MAX, "Last command")
+       ___C(MAX, "invalid / last command")

 #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
 enum { CXL_CMDS };
@@ -32,6 +32,17 @@ enum { CXL_CMDS };
 static const struct {
        const char *name;
 } cxl_command_names[] = { CXL_CMDS };
+
+/*
+ * Here's how this actually breaks out:
+ * cxl_command_names[] = {
+ *     [CXL_MEM_COMMAND_ID_INVALID] = { "Invalid Command" },
 + *     [CXL_MEM_COMMAND_ID_IDENTIFY] = { "Identify Command" },
+ *     ...
+ *     [CXL_MEM_COMMAND_ID_MAX] = { "invalid / last command" },
+ * };
+ */
+

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 6/8] cxl/mem: Enable commands via CEL
  2021-02-11 12:02   ` Jonathan Cameron
@ 2021-02-11 17:45     ` Ben Widawsky
  2021-02-11 20:34       ` Dan Williams
  2021-02-16 13:43     ` Bartosz Golaszewski
  1 sibling, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-11 17:45 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-11 12:02:15, Jonathan Cameron wrote:
> On Tue, 9 Feb 2021 16:02:57 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > CXL devices identified by the memory-device class code must implement
> > the Device Command Interface (described in 8.2.9 of the CXL 2.0 spec).
> > While the driver already maintains a list of commands it supports, there
> > is still a need to be able to distinguish commands that the
> > driver knows about from commands that are optionally supported by the
> > hardware.
> > 
> > The Command Effects Log (CEL) is specified in the CXL 2.0 specification.
> > The CEL is one of two types of logs, the other being vendor specific.
> 
> I'd say "vendor specific debug" just so that no one thinks it has anything
> to do with the rest of this description (which mentioned vendor specific
> commands).
> 
> > They are distinguished in hardware/spec via UUID. The CEL is useful for
> > 2 things:
> > 1. Determine which optional commands are supported by the CXL device.
> > 2. Enumerate any vendor specific commands
> > 
> > The CEL is used by the driver to determine which commands are available
> > in the hardware and therefore which commands userspace is allowed to
> > execute. The set of enabled commands might be a subset of commands which
> > are advertised in UAPI via CXL_MEM_SEND_COMMAND IOCTL.
> > 
> > The implementation leaves the statically defined table of commands and
> > supplements it with a bitmap to determine commands that are enabled.
> > This organization was chosen for the following reasons:
> > - Smaller memory footprint. Doesn't need a table per device.
> > - Reduce memory allocation complexity.
> > - Fixed command IDs to opcode mapping for all devices makes development
> >   and debugging easier.
> > - Certain helpers are easily achievable, like cxl_for_each_cmd().
> > 
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/cxl/cxl.h            |   2 +
> >  drivers/cxl/mem.c            | 216 +++++++++++++++++++++++++++++++++++
> >  include/uapi/linux/cxl_mem.h |   1 +
> >  3 files changed, 219 insertions(+)
> > 
> > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> > index b3c56fa6e126..9a5e595abfa4 100644
> > --- a/drivers/cxl/cxl.h
> > +++ b/drivers/cxl/cxl.h
> > @@ -68,6 +68,7 @@ struct cxl_memdev;
> >   *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
> >   * @mbox_mutex: Mutex to synchronize mailbox access.
> >   * @firmware_version: Firmware version for the memory device.
> > + * @enabled_commands: Hardware commands found enabled in CEL.
> >   * @pmem: Persistent memory capacity information.
> >   * @ram: Volatile memory capacity information.
> >   */
> > @@ -83,6 +84,7 @@ struct cxl_mem {
> >  	size_t payload_size;
> >  	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
> >  	char firmware_version[0x10];
> > +	unsigned long *enabled_cmds;
> >  
> >  	struct {
> >  		struct range range;
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > index 6d766a994dce..e9aa6ca18d99 100644
> > --- a/drivers/cxl/mem.c
> > +++ b/drivers/cxl/mem.c
> > @@ -45,6 +45,8 @@ enum opcode {
> >  	CXL_MBOX_OP_INVALID		= 0x0000,
> >  	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
> >  	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
> > +	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
> > +	CXL_MBOX_OP_GET_LOG		= 0x0401,
> >  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
> >  	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
> >  	CXL_MBOX_OP_SET_LSA		= 0x4103,
> > @@ -103,6 +105,19 @@ static DEFINE_IDA(cxl_memdev_ida);
> >  static struct dentry *cxl_debugfs;
> >  static bool raw_allow_all;
> >  
> > +enum {
> > +	CEL_UUID,
> > +	VENDOR_DEBUG_UUID
> 
> Who wants to take a bet this will get extended at some point in the future?
> Add a trailing comma to make that less noisy.
> 
> They would never have used a UUID if this wasn't expected to expand.
> CXL spec calls out that "The following Log Identifier UUIDs are defined in _this_
> specification" rather implying other specs may well define more.
> Fun for the future!
> 
> > +};
> > +
> > +/* See CXL 2.0 Table 170. Get Log Input Payload */
> > +static const uuid_t log_uuid[] = {
> > +	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
> > +			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
> > +	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
> > +					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86)
> 
> likewise on trailing comma
> 
> > +};
> > +
> >  /**
> >   * struct cxl_mem_command - Driver representation of a memory device command
> >   * @info: Command information as it exists for the UAPI
> > @@ -111,6 +126,8 @@ static bool raw_allow_all;
> >   *
> >   *  * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is
> >   *    only used internally by the driver for sanity checking.
> > + *  * %CXL_CMD_INTERNAL_FLAG_PSEUDO: This is a pseudo command which doesn't have
> > + *    a direct mapping to hardware. They are implicitly always enabled.
> 
> Stale comment?
> 
> >   *
> >   * The cxl_mem_command is the driver's internal representation of commands that
> >   * are supported by the driver. Some of these commands may not be supported by
> > @@ -146,6 +163,7 @@ static struct cxl_mem_command mem_commands[] = {
> >  #ifdef CONFIG_CXL_MEM_RAW_COMMANDS
> >  	CXL_CMD(RAW, NONE, ~0, ~0),
> >  #endif
> > +	CXL_CMD(GET_SUPPORTED_LOGS, NONE, 0, ~0),
> >  };
> >  
> >  /*
> > @@ -627,6 +645,10 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
> >  	c = &mem_commands[send_cmd->id];
> >  	info = &c->info;
> >  
> > +	/* Check that the command is enabled for hardware */
> > +	if (!test_bit(info->id, cxlm->enabled_cmds))
> > +		return -ENOTTY;
> > +
> >  	if (info->flags & CXL_MEM_COMMAND_FLAG_KERNEL)
> >  		return -EPERM;
> >  
> > @@ -869,6 +891,14 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> >  	mutex_init(&cxlm->mbox_mutex);
> >  	cxlm->pdev = pdev;
> >  	cxlm->regs = regs + offset;
> > +	cxlm->enabled_cmds =
> > +		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
> > +				   sizeof(unsigned long),
> > +				   GFP_KERNEL | __GFP_ZERO);
> 
> Hmm. There doesn't seem to be a devm_bitmap_zalloc
> 
> Embarrassingly one of the google hits on the topic is me suggesting
> this in a previous review (that I'd long since forgotten)
> 
> Perhaps one for a refactoring patch after this lands.
> 
> 
> > +	if (!cxlm->enabled_cmds) {
> > +		dev_err(dev, "No memory available for bitmap\n");
> > +		return NULL;
> > +	}
> >  
> >  	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
> >  	return cxlm;
> > @@ -1088,6 +1118,188 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
> >  	return rc;
> >  }
> >  
> > +struct cxl_mbox_get_log {
> > +	uuid_t uuid;
> > +	__le32 offset;
> > +	__le32 length;
> > +} __packed;
> > +
> > +static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
> > +{
> > +	u32 remaining = size;
> > +	u32 offset = 0;
> > +
> > +	while (remaining) {
> > +		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
> > +		struct cxl_mbox_get_log log = {
> > +			.uuid = *uuid,
> > +			.offset = cpu_to_le32(offset),
> > +			.length = cpu_to_le32(xfer_size)
> > +		};
> > +		struct mbox_cmd mbox_cmd = {
> > +			.opcode = CXL_MBOX_OP_GET_LOG,
> > +			.payload_in = &log,
> > +			.payload_out = out,
> > +			.size_in = sizeof(log),
> > +		};
> > +		int rc;
> > +
> > +		rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > +		if (rc)
> > +			return rc;
> > +
> > +		WARN_ON(mbox_cmd.size_out != xfer_size);
> 
> Just for completeness (as already addressed in one of Ben's replies
> to an earlier patch), this is missing handling for the return code.
> 
> > +
> > +		out += xfer_size;
> > +		remaining -= xfer_size;
> > +		offset += xfer_size;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
> > +{
> > +	struct cxl_mem_command *c;
> > +
> > +	cxl_for_each_cmd(c)
> > +		if (c->opcode == opcode)
> > +			return c;
> > +
> > +	return NULL;
> > +}
> > +
> > +static void cxl_enable_cmd(struct cxl_mem *cxlm,
> > +			   const struct cxl_mem_command *cmd)
> > +{
> > +	if (test_and_set_bit(cmd->info.id, cxlm->enabled_cmds))
> > +		dev_WARN_ONCE(&cxlm->pdev->dev, true, "cmd enabled twice\n");
> > +}
> > +
> > +/**
> > + * cxl_walk_cel() - Walk through the Command Effects Log.
> > + * @cxlm: Device.
> > + * @size: Length of the Command Effects Log.
> > + * @cel: CEL
> > + *
> > + * Iterate over each entry in the CEL and determine if the driver supports the
> > + * command. If so, the command is enabled for the device and can be used later.
> > + */
> > +static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
> > +{
> > +	struct cel_entry {
> > +		__le16 opcode;
> > +		__le16 effect;
> > +	} *cel_entry;
> 
> The driver is currently marking a bunch of other structures __packed that don't
> need it. Perhaps do this one as well for consistency?
> 

Just for my memory later...
I don't actually recall the history here. I had no intention originally to use
__packed, but they just kind of got in there, and it doesn't really hurt, so
we've left them.

There are a few CXL structures which do need __packed (which is unfortunate),
but this isn't one of them.
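
For my own notes, a tiny illustration (made-up struct names, not from the
patch) of when __packed actually changes the layout:

/* Two naturally aligned 16-bit fields: there is no padding to remove, so
 * __packed changes nothing and sizeof() is 4 either way.
 */
struct example_no_packed_needed {
        __le16 opcode;
        __le16 effect;
};

/* Here __packed matters: without it the compiler pads 3 bytes after 'flags'
 * so 'count' lands on a 4-byte boundary (sizeof() 8 vs 5), which would no
 * longer match a byte-packed wire layout.
 */
struct example_packed_needed {
        u8 flags;
        __le32 count;
} __packed;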

> > +	const int cel_entries = size / sizeof(*cel_entry);
> > +	int i;
> > +
> > +	cel_entry = (struct cel_entry *)cel;
> > +
> > +	for (i = 0; i < cel_entries; i++) {
> > +		const struct cel_entry *ce = &cel_entry[i];
> 
> Given ce is only ever used to get ce->opcode, maybe it's better to use that
> as the local variable?
> 
> 		u16 opcode = le16_to_cpu(cel_entry[i].opcode)
> 
> Obviously that might change depending on later patches though.
> 

Thanks. I did this and got rid of the const below and was able to remove the
line split below.

You'll learn I'm a little const-happy.
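
For reference, the reworked loop in my local tree now looks roughly like this
(sketch only, it may still shift a bit before the next posting):

static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
{
        struct cel_entry {
                __le16 opcode;
                __le16 effect;
        } *cel_entry = (struct cel_entry *)cel;
        const int cel_entries = size / sizeof(*cel_entry);
        int i;

        for (i = 0; i < cel_entries; i++) {
                u16 opcode = le16_to_cpu(cel_entry[i].opcode);
                struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);

                if (!cmd) {
                        dev_dbg(&cxlm->pdev->dev,
                                "Opcode 0x%04x unsupported by driver", opcode);
                        continue;
                }

                cxl_enable_cmd(cxlm, cmd);
        }
}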

> 
> > +		const struct cxl_mem_command *cmd =
> > +			cxl_mem_find_command(le16_to_cpu(ce->opcode));
> > +
> > +		if (!cmd) {
> > +			dev_dbg(&cxlm->pdev->dev, "Unsupported opcode 0x%04x",
> 
> Unsupported by who? (driver rather than hardware)
> 
> > +				le16_to_cpu(ce->opcode));
> > +			continue;
> > +		}
> > +
> > +		cxl_enable_cmd(cxlm, cmd);
> > +	}
> > +}
> > +
> > +/**
> > + * cxl_mem_enumerate_cmds() - Enumerate commands for a device.
> > + * @cxlm: The device.
> > + *
> > + * Returns 0 if enumerate completed successfully.
> > + *
> > + * CXL devices have optional support for certain commands. This function will
> > + * determine the set of supported commands for the hardware and update the
> > + * enabled_cmds bitmap in the @cxlm.
> > + */
> > +static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
> > +{
> > +	struct device *dev = &cxlm->pdev->dev;
> > +	struct cxl_mbox_get_supported_logs {
> > +		__le16 entries;
> > +		u8 rsvd[6];
> > +		struct gsl_entry {
> > +			uuid_t uuid;
> > +			__le32 size;
> > +		} __packed entry[2];
> > +	} __packed gsl;
> > +	struct mbox_cmd mbox_cmd = {
> > +		.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
> > +		.payload_out = &gsl,
> > +		.size_in = 0,
> > +	};
> > +	int i, rc;
> > +
> > +	rc = cxl_mem_mbox_get(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > +	if (rc)
> > +		goto out;
> > +
> > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > +		rc = -ENXIO;
> > +		goto out;
> > +	}
> > +
> > +	if (mbox_cmd.size_out > sizeof(gsl)) {
> > +		dev_warn(dev, "%zu excess logs\n",
> > +			 (mbox_cmd.size_out - sizeof(gsl)) /
> > +				 sizeof(struct gsl_entry));
> 
> This could well happen given spec seems to allow for other
> entries defined by other specs.

Interesting. When I read the spec before (multiple times) I was certain it said
other UUIDs aren't allowed. You're correct though that the way it is worded,
this is a bad check. AIUI, the spec permits any UUID and as such I think we
should remove tainting for unknown UUIDs. Let me put the exact words:

Table 169 & 170
"Log Identifier: UUID representing the log to retrieve data for. The following
 Log Identifier UUIDs are defined in this specification"

To me this implies UUIDs from other (not "this") specifications are permitted.

Dan, I'd like your opinion here. I'm tempted to change the current WARN to a
dev_dbg or somesuch.
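
Concretely, something like this (sketch on top of the patch as posted):

-		dev_warn(dev, "%zu excess logs\n",
-			 (mbox_cmd.size_out - sizeof(gsl)) /
-				 sizeof(struct gsl_entry));
+		dev_dbg(dev, "%zu excess logs\n",
+			(mbox_cmd.size_out - sizeof(gsl)) /
+				sizeof(struct gsl_entry));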

> 
> Note that it's this path that I mentioned earlier as requiring we sanity
> check the output size available before calling mempcy_fromio into it
> with the hardware supported size.

Since posting, I've already reworked this somewhat based on the other changes
and it should be safe now.


> 
> 
> > +	}
> > +
> > +	for (i = 0; i < le16_to_cpu(gsl.entries); i++) {
> > +		u32 size = le32_to_cpu(gsl.entry[i].size);
> > +		uuid_t uuid = gsl.entry[i].uuid;
> > +		u8 *log;
> > +
> > +		dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size);
> > +
> > +		if (!uuid_equal(&uuid, &log_uuid[CEL_UUID]))
> > +			continue;
> > +
> > +		/*
> > +		 * It's a hardware bug if the log size is less than the input
> > +		 * payload size because there are many mandatory commands.
> > +		 */
> > +		if (sizeof(struct cxl_mbox_get_log) > size) {
> 
> If you are going to talk about less than in the comment, I'd flip the condition
> around so it lines up. Trivial, obviously, but nice to tidy up.
> 
> > +			dev_err(dev, "CEL log size reported was too small (%d)",
> > +				size);
> > +			rc = -ENOMEM;
> > +			goto out;
> > +		}
> > +
> > +		log = kvmalloc(size, GFP_KERNEL);
> > +		if (!log) {
> > +			rc = -ENOMEM;
> > +			goto out;
> > +		}
> > +
> > +		rc = cxl_xfer_log(cxlm, &uuid, size, log);
> > +		if (rc) {
> > +			kvfree(log);
> > +			goto out;
> > +		}
> > +
> > +		cxl_walk_cel(cxlm, size, log);
> > +		kvfree(log);
> > +	}
> > +
> > +out:
> > +	cxl_mem_mbox_put(cxlm);
> > +	return rc;
> > +}
> > +
> >  /**
> >   * cxl_mem_identify() - Send the IDENTIFY command to the device.
> >   * @cxlm: The device to identify.
> > @@ -1211,6 +1423,10 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  	if (rc)
> >  		return rc;
> >  
> > +	rc = cxl_mem_enumerate_cmds(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> >  	rc = cxl_mem_identify(cxlm);
> >  	if (rc)
> >  		return rc;
> > diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> > index 72d1eb601a5d..c5e75b9dad9d 100644
> > --- a/include/uapi/linux/cxl_mem.h
> > +++ b/include/uapi/linux/cxl_mem.h
> > @@ -23,6 +23,7 @@
> >  	___C(INVALID, "Invalid Command"),                                 \
> >  	___C(IDENTIFY, "Identify Command"),                               \
> >  	___C(RAW, "Raw device command"),                                  \
> > +	___C(GET_SUPPORTED_LOGS, "Get Supported Logs"),                   \
> >  	___C(MAX, "Last command")
> >  
> >  #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-11  9:55           ` Jonathan Cameron
  2021-02-11 15:55             ` Ben Widawsky
@ 2021-02-11 18:27             ` Ben Widawsky
  2021-02-12 13:23               ` Jonathan Cameron
  1 sibling, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-11 18:27 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-11 09:55:48, Jonathan Cameron wrote:
> On Wed, 10 Feb 2021 10:16:05 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > On 21-02-10 08:55:57, Ben Widawsky wrote:
> > > On 21-02-10 15:07:59, Jonathan Cameron wrote:  
> > > > On Wed, 10 Feb 2021 13:32:52 +0000
> > > > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > > >   
> > > > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > > > >   
> > > > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > > > The mailbox is used to interact with the firmware running on the memory
> > > > > > device. The flow is proven with one implemented command, "identify",
> > > > > > because the class code has already told the driver this is a memory
> > > > > > device and the identify command is mandatory.
> > > > > > 
> > > > > > CXL devices contain an array of capabilities that describe the
> > > > > > interactions software can have with the device or firmware running on
> > > > > > the device. A CXL compliant device must implement the device status and
> > > > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > > > implement the memory device capability. Each of the capabilities can
> > > > > > [will] provide an offset within the MMIO region for interacting with the
> > > > > > CXL device.
> > > > > > 
> > > > > > The capabilities tell the driver how to find and map the register space
> > > > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > > > and not handled in this driver.
> > > > > > 
> > > > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > > > a background command. That implementation is saved for a later time.
> > > > > > 
> > > > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>    
> > > > > 
> > > > > Hi Ben,
> > > > > 
> > > > >   
> > > > > > +/**
> > > > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > > > + * @cxlm: The CXL memory device to communicate with.
> > > > > > + * @mbox_cmd: Command to send to the memory device.
> > > > > > + *
> > > > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > > > + *         succeeded.    
> > > > > 
> > > > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > > > enters an infinite loop as a result.  
> > > 
> > > I meant to fix that.
> > >   
> > > > > 
> > > > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > > > two levels of error checking - the example here proves how easy it is to forget
> > > > > one.  
> > > 
> > > Demonstrably, you're correct. I think it would be good to have a kernel only
> > > mbox command that does the error checking though. Let me type something up and
> > > see how it looks.  
> > 
> > Hi Jonathan. What do you think of this? The bit I'm on the fence about is if I
> > should validate output size too. I like the simplicity as it is, but it requires
> > every caller to possibly check output size, which is kind of the same problem
> > you're originally pointing out.
> 
> The simplicity is good and this is pretty much what I expected you would end up with
> (always reassuring)
> 
> For the output, perhaps just add another parameter to the wrapper for minimum
> output length expected?
> 
> Now you mention the length question.  It does rather feel like there should also
> be some protection on memcpy_fromio() copying too much data if the hardware
> happens to return an unexpectedly long length.  Should never happen, but
> the hardening is worth adding anyway given it's easy to do.
> 
> Jonathan
> 

I like it.

diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 2e199b05f686..58071a203212 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -293,7 +293,7 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
  * See __cxl_mem_mbox_send_cmd()
  */
 static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
-				 size_t in_size, u8 *out)
+				 size_t in_size, u8 *out, size_t out_min_size)
 {
 	struct mbox_cmd mbox_cmd = {
 		.opcode = opcode,
@@ -303,6 +303,9 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
 	};
 	int rc;
 
+	if (out_min_size > cxlm->payload_size)
+		return -E2BIG;
+
 	rc = cxl_mem_mbox_get(cxlm);
 	if (rc)
 		return rc;
@@ -316,6 +319,9 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
 	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
 		return -ENXIO;
 
+	if (mbox_cmd.size_out < out_min_size)
+		return -ENODATA;
+
 	return mbox_cmd.size_out;
 }
 
@@ -505,15 +511,10 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
 	int rc;
 
 	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
-				   (u8 *)&id);
+				   (u8 *)&id, sizeof(id));
 	if (rc < 0)
 		return rc;
 
-	if (rc < sizeof(id)) {
-		dev_err(&cxlm->pdev->dev, "Short identify data\n");
-		return -ENXIO;
-	}
-
 	/*
 	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
 	 * For now, only the capacity is exported in sysfs


> 
> > 
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > index 55c5f5a6023f..ad7b2077ab28 100644
> > --- a/drivers/cxl/mem.c
> > +++ b/drivers/cxl/mem.c
> > @@ -284,7 +284,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >  }
> >  
> >  /**
> > - * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > + * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
> >   * @cxlm: The CXL memory device to communicate with.
> >   * @mbox_cmd: Command to send to the memory device.
> >   *
> > @@ -296,7 +296,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >   * This is a generic form of the CXL mailbox send command, thus the only I/O
> >   * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> >   * types of CXL devices may have further information available upon error
> > - * conditions.
> > + * conditions. Driver facilities wishing to send mailbox commands should use the
> > + * wrapper command.
> >   *
> >   * The CXL spec allows for up to two mailboxes. The intention is for the primary
> >   * mailbox to be OS controlled and the secondary mailbox to be used by system
> > @@ -304,8 +305,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> >   * not need to coordinate with each other. The driver only uses the primary
> >   * mailbox.
> >   */
> > -static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > -				 struct mbox_cmd *mbox_cmd)
> > +static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > +				   struct mbox_cmd *mbox_cmd)
> >  {
> >  	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> >  	u64 cmd_reg, status_reg;
> > @@ -469,6 +470,54 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> >  	mutex_unlock(&cxlm->mbox_mutex);
> >  }
> >  
> > +/**
> > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > + * @cxlm: The CXL memory device to communicate with.
> > + * @opcode: Opcode for the mailbox command.
> > + * @in: The input payload for the mailbox command.
> > + * @in_size: The length of the input payload
> > + * @out: Caller allocated buffer for the output.
> > + *
> > + * Context: Any context. Will acquire and release mbox_mutex.
> > + * Return:
> > + *  * %>=0	- Number of bytes returned in @out.
> > + *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
> > + *  * %-EFAULT	- Hardware error occurred.
> > + *  * %-ENXIO	- Command completed, but device reported an error.
> > + *
> > + * Mailbox commands may execute successfully yet the device itself reported an
> > + * error. While this distinction can be useful for commands from userspace, the
> > + * kernel will often only care when both are successful.
> > + *
> > + * See __cxl_mem_mbox_send_cmd()
> > + */
> > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
> > +				 size_t in_size, u8 *out)
> > +{
> > +	struct mbox_cmd mbox_cmd = {
> > +		.opcode = opcode,
> > +		.payload_in = in,
> > +		.size_in = in_size,
> > +		.payload_out = out,
> > +	};
> > +	int rc;
> > +
> > +	rc = cxl_mem_mbox_get(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > +	cxl_mem_mbox_put(cxlm);
> > +	if (rc)
> > +		return rc;
> > +
> > +	/* TODO: Map return code to proper kernel style errno */
> > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
> > +		return -ENXIO;
> > +
> > +	return mbox_cmd.size_out;
> > +}
> > +
> >  /**
> >   * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
> >   * @cxlmd: The CXL memory device to communicate with.
> > @@ -1380,33 +1429,18 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
> >  		u8 poison_caps;
> >  		u8 qos_telemetry_caps;
> >  	} __packed id;
> > -	struct mbox_cmd mbox_cmd = {
> > -		.opcode = CXL_MBOX_OP_IDENTIFY,
> > -		.payload_out = &id,
> > -		.size_in = 0,
> > -	};
> >  	int rc;
> >  
> > -	/* Retrieve initial device memory map */
> > -	rc = cxl_mem_mbox_get(cxlm);
> > -	if (rc)
> > -		return rc;
> > -
> > -	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > -	cxl_mem_mbox_put(cxlm);
> > -	if (rc)
> > +	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
> > +				   (u8 *)&id);
> > +	if (rc < 0)
> >  		return rc;
> >  
> > -	/* TODO: Handle retry or reset responses from firmware. */
> > -	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > -		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > -			mbox_cmd.return_code);
> > +	if (rc < sizeof(id)) {
> > +		dev_err(&cxlm->pdev->dev, "Short identify data\n",
> >  		return -ENXIO;
> >  	}
> >  
> > -	if (mbox_cmd.size_out != sizeof(id))
> > -		return -ENXIO;
> > -
> >  	/*
> >  	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> >  	 * For now, only the capacity is exported in sysfs
> > 
> > 
> > [snip]
> > 
> 

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 6/8] cxl/mem: Enable commands via CEL
  2021-02-11 17:45     ` Ben Widawsky
@ 2021-02-11 20:34       ` Dan Williams
  0 siblings, 0 replies; 57+ messages in thread
From: Dan Williams @ 2021-02-11 20:34 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: Jonathan Cameron, linux-cxl, Linux ACPI,
	Linux Kernel Mailing List, linux-nvdimm, Linux PCI,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, David Hildenbrand,
	David Rientjes, Ira Weiny, Jon Masters, Rafael Wysocki,
	Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Thu, Feb 11, 2021 at 9:45 AM Ben Widawsky <ben.widawsky@intel.com> wrote:
[..]
> > > +   if (mbox_cmd.size_out > sizeof(gsl)) {
> > > +           dev_warn(dev, "%zu excess logs\n",
> > > +                    (mbox_cmd.size_out - sizeof(gsl)) /
> > > +                            sizeof(struct gsl_entry));
> >
> > This could well happen given spec seems to allow for other
> > entries defined by other specs.
>
> Interesting. When I read the spec before (multiple times) I was certain it said
> other UUIDs aren't allowed. You're correct though that the way it is worded,
> this is a bad check. AIUI, the spec permits any UUID and as such I think we
> should remove tainting for unknown UUIDs. Let me put the exact words:
>
> Table 169 & 170
> "Log Identifier: UUID representing the log to retrieve data for. The following
>  Log Identifier UUIDs are defined in this specification"
>
> To me this implies UUIDs from other (not "this") specifications are permitted.
>
> Dan, I'd like your opinion here. I'm tempted to change the current WARN to a
> dev_dbg or somesuch.

Yeah, sounds ok, and the command is well defined to be a read-only,
zero-side-effect affair. If a vendor did really want to sneak in a
proprietary protocol over this interface it would be quite awkward.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 3/8] cxl/mem: Register CXL memX devices
  2021-02-11 10:17     ` Jonathan Cameron
@ 2021-02-11 20:40       ` Dan Williams
  2021-02-12 13:33         ` Jonathan Cameron
  0 siblings, 1 reply; 57+ messages in thread
From: Dan Williams @ 2021-02-11 20:40 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Ben Widawsky, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas, Chris Browy,
	Christoph Hellwig, David Hildenbrand, David Rientjes, Ira Weiny,
	Jon Masters, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

On Thu, Feb 11, 2021 at 2:19 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Wed, 10 Feb 2021 18:17:25 +0000
> Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
>
> > On Tue, 9 Feb 2021 16:02:54 -0800
> > Ben Widawsky <ben.widawsky@intel.com> wrote:
> >
> > > From: Dan Williams <dan.j.williams@intel.com>
> > >
> > > Create the /sys/bus/cxl hierarchy to enumerate:
> > >
> > > * Memory Devices (per-endpoint control devices)
> > >
> > > * Memory Address Space Devices (platform address ranges with
> > >   interleaving, performance, and persistence attributes)
> > >
> > > * Memory Regions (active provisioned memory from an address space device
> > >   that is in use as System RAM or delegated to libnvdimm as Persistent
> > >   Memory regions).
> > >
> > > For now, only the per-endpoint control devices are registered on the
> > > 'cxl' bus. However, going forward it will provide a mechanism to
> > > coordinate cross-device interleave.
> > >
> > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> >
> > One stray header, and a request for a tiny bit of reordering to
> > make it easier to chase through creation and destruction.
> >
> > Either way with the header move to earlier patch I'm fine with this one.
> >
> > Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> Actually thinking more on this, what is the justification for the
> complexity + overhead of a percpu_refcount vs a refcount

A typical refcount does not have the block and drain semantics of a
percpu_ref. I'm planning to circle back and make this a first class
facility of the cdev interface borrowing the debugfs approach [1], but
for now percpu_ref fits the bill locally.

> I don't think this is a high enough performance path for it to matter.
> Perhaps I'm missing a usecase where it does?

It's less about percpu_ref performance and more about the
percpu_ref_tryget_live() facility.

[1]: http://lore.kernel.org/r/CAPcyv4jEYPsyh0bhbtKGRbK3bgp=_+=2rjx4X0gLi5-25VvDyg@mail.gmail.com
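
For context, the facility it buys looks roughly like the following (field
names are illustrative, not the exact driver code):

        /* ioctl/file path: only proceed if teardown hasn't started */
        if (!percpu_ref_tryget_live(&cxlmd->ops_active))
                return -ENXIO;
        /* ... talk to the device ... */
        percpu_ref_put(&cxlmd->ops_active);

        /* unregister path: block new users, then drain in-flight ones */
        percpu_ref_kill(&cxlmd->ops_active);
        wait_for_completion(&cxlmd->ops_dead);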

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-11 18:27             ` Ben Widawsky
@ 2021-02-12 13:23               ` Jonathan Cameron
  0 siblings, 0 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-12 13:23 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Thu, 11 Feb 2021 10:27:41 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-02-11 09:55:48, Jonathan Cameron wrote:
> > On Wed, 10 Feb 2021 10:16:05 -0800
> > Ben Widawsky <ben.widawsky@intel.com> wrote:
> >   
> > > On 21-02-10 08:55:57, Ben Widawsky wrote:  
> > > > On 21-02-10 15:07:59, Jonathan Cameron wrote:    
> > > > > On Wed, 10 Feb 2021 13:32:52 +0000
> > > > > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > > > >     
> > > > > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > > > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > > > > >     
> > > > > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > > > > The mailbox is used to interact with the firmware running on the memory
> > > > > > > device. The flow is proven with one implemented command, "identify",
> > > > > > > because the class code has already told the driver this is a memory
> > > > > > > device and the identify command is mandatory.
> > > > > > > 
> > > > > > > CXL devices contain an array of capabilities that describe the
> > > > > > > interactions software can have with the device or firmware running on
> > > > > > > the device. A CXL compliant device must implement the device status and
> > > > > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > > > > implement the memory device capability. Each of the capabilities can
> > > > > > > [will] provide an offset within the MMIO region for interacting with the
> > > > > > > CXL device.
> > > > > > > 
> > > > > > > The capabilities tell the driver how to find and map the register space
> > > > > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > > > > and not handled in this driver.
> > > > > > > 
> > > > > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > > > > a background command. That implementation is saved for a later time.
> > > > > > > 
> > > > > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>      
> > > > > > 
> > > > > > Hi Ben,
> > > > > > 
> > > > > >     
> > > > > > > +/**
> > > > > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > > > > + * @cxlm: The CXL memory device to communicate with.
> > > > > > > + * @mbox_cmd: Command to send to the memory device.
> > > > > > > + *
> > > > > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > > > > + *         succeeded.      
> > > > > > 
> > > > > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > > > > enters an infinite loop as a result.    
> > > > 
> > > > I meant to fix that.
> > > >     
> > > > > > 
> > > > > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > > > > two levels of error checking - the example here proves how easy it is to forget
> > > > > > one.    
> > > > 
> > > > Demonstrably, you're correct. I think it would be good to have a kernel only
> > > > mbox command that does the error checking though. Let me type something up and
> > > > see how it looks.    
> > > 
> > > Hi Jonathan. What do you think of this? The bit I'm on the fence about is if I
> > > should validate output size too. I like the simplicity as it is, but it requires
> > > every caller to possibly check output size, which is kind of the same problem
> > > you're originally pointing out.  
> > 
> > The simplicity is good and this is pretty much what I expected you would end up with
> > (always reassuring)
> > 
> > For the output, perhaps just add another parameter to the wrapper for minimum
> > output length expected?
> > 
> > Now you mention the length question.  It does rather feel like there should also
> > be some protection on memcpy_fromio() copying too much data if the hardware
> > happens to return an unexpectedly long length.  Should never happen, but
> > the hardening is worth adding anyway given it's easy to do.
> > 
> > Jonathan
> >   
> 
> I like it.
> 
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index 2e199b05f686..58071a203212 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -293,7 +293,7 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
>   * See __cxl_mem_mbox_send_cmd()
>   */
>  static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
> -				 size_t in_size, u8 *out)
> +				 size_t in_size, u8 *out, size_t out_min_size)

This is kind of the opposite of what I was expecting.  What I'm worried about is
not so much that we receive at least enough data, but rather that we receive too
much.  Buggy hardware, or potentially a spec change, are the most likely causes.

So something like
int __cxl_mem_mbox_send_cmd(struct cxl_mem..., struct mbox_cmd, u8 *out, size_t out_sz)
//Or put the max size in the .size_out element of the command and make that inout rather
//than just out direction.
{
	...
	/* #8 */
	if (out_len && mbox_cmd->payload_out) {
		if (out_len > out_sz)
			//or just copy what we can fit in payload_out and return that size.
			return -E2BIG;
		memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
	}

	
}

Fine to also check the returned length is at least a minimum size.

>  {
>  	struct mbox_cmd mbox_cmd = {
>  		.opcode = opcode,
> @@ -303,6 +303,9 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
>  	};
>  	int rc;
>  
> +	if (out_min_size > cxlm->payload_size)
> +		return -E2BIG;
> +
>  	rc = cxl_mem_mbox_get(cxlm);
>  	if (rc)
>  		return rc;
> @@ -316,6 +319,9 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
>  	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
>  		return -ENXIO;
>  
> +	if (mbox_cmd.size_out < out_min_size)
> +		return -ENODATA;
> +
>  	return mbox_cmd.size_out;
>  }
>  
> @@ -505,15 +511,10 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
>  	int rc;
>  
>  	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
> -				   (u8 *)&id);
> +				   (u8 *)&id, sizeof(id));
>  	if (rc < 0)
>  		return rc;
>  
> -	if (rc < sizeof(id)) {
> -		dev_err(&cxlm->pdev->dev, "Short identify data\n");
> -		return -ENXIO;
> -	}
> -
>  	/*
>  	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
>  	 * For now, only the capacity is exported in sysfs
> 
> 
> >   
> > > 
> > > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > > index 55c5f5a6023f..ad7b2077ab28 100644
> > > --- a/drivers/cxl/mem.c
> > > +++ b/drivers/cxl/mem.c
> > > @@ -284,7 +284,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >  }
> > >  
> > >  /**
> > > - * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > + * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
> > >   * @cxlm: The CXL memory device to communicate with.
> > >   * @mbox_cmd: Command to send to the memory device.
> > >   *
> > > @@ -296,7 +296,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >   * This is a generic form of the CXL mailbox send command, thus the only I/O
> > >   * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > >   * types of CXL devices may have further information available upon error
> > > - * conditions.
> > > + * conditions. Driver facilities wishing to send mailbox commands should use the
> > > + * wrapper command.
> > >   *
> > >   * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > >   * mailbox to be OS controlled and the secondary mailbox to be used by system
> > > @@ -304,8 +305,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >   * not need to coordinate with each other. The driver only uses the primary
> > >   * mailbox.
> > >   */
> > > -static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > -				 struct mbox_cmd *mbox_cmd)
> > > +static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > +				   struct mbox_cmd *mbox_cmd)
> > >  {
> > >  	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > >  	u64 cmd_reg, status_reg;
> > > @@ -469,6 +470,54 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> > >  	mutex_unlock(&cxlm->mbox_mutex);
> > >  }
> > >  
> > > +/**
> > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > + * @cxlm: The CXL memory device to communicate with.
> > > + * @opcode: Opcode for the mailbox command.
> > > + * @in: The input payload for the mailbox command.
> > > + * @in_size: The length of the input payload
> > > + * @out: Caller allocated buffer for the output.
> > > + *
> > > + * Context: Any context. Will acquire and release mbox_mutex.
> > > + * Return:
> > > + *  * %>=0	- Number of bytes returned in @out.
> > > + *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
> > > + *  * %-EFAULT	- Hardware error occurred.
> > > + *  * %-ENXIO	- Command completed, but device reported an error.
> > > + *
> > > + * Mailbox commands may execute successfully yet the device itself reported an
> > > + * error. While this distinction can be useful for commands from userspace, the
> > > + * kernel will often only care when both are successful.
> > > + *
> > > + * See __cxl_mem_mbox_send_cmd()
> > > + */
> > > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
> > > +				 size_t in_size, u8 *out)
> > > +{
> > > +	struct mbox_cmd mbox_cmd = {
> > > +		.opcode = opcode,
> > > +		.payload_in = in,
> > > +		.size_in = in_size,
> > > +		.payload_out = out,
> > > +	};
> > > +	int rc;
> > > +
> > > +	rc = cxl_mem_mbox_get(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > +	cxl_mem_mbox_put(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	/* TODO: Map return code to proper kernel style errno */
> > > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
> > > +		return -ENXIO;
> > > +
> > > +	return mbox_cmd.size_out;
> > > +}
> > > +
> > >  /**
> > >   * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
> > >   * @cxlmd: The CXL memory device to communicate with.
> > > @@ -1380,33 +1429,18 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
> > >  		u8 poison_caps;
> > >  		u8 qos_telemetry_caps;
> > >  	} __packed id;
> > > -	struct mbox_cmd mbox_cmd = {
> > > -		.opcode = CXL_MBOX_OP_IDENTIFY,
> > > -		.payload_out = &id,
> > > -		.size_in = 0,
> > > -	};
> > >  	int rc;
> > >  
> > > -	/* Retrieve initial device memory map */
> > > -	rc = cxl_mem_mbox_get(cxlm);
> > > -	if (rc)
> > > -		return rc;
> > > -
> > > -	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > -	cxl_mem_mbox_put(cxlm);
> > > -	if (rc)
> > > +	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
> > > +				   (u8 *)&id);
> > > +	if (rc < 0)
> > >  		return rc;
> > >  
> > > -	/* TODO: Handle retry or reset responses from firmware. */
> > > -	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > > -		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > > -			mbox_cmd.return_code);
> > > +	if (rc < sizeof(id)) {
> > > +		dev_err(&cxlm->pdev->dev, "Short identify data\n",
> > >  		return -ENXIO;
> > >  	}
> > >  
> > > -	if (mbox_cmd.size_out != sizeof(id))
> > > -		return -ENXIO;
> > > -
> > >  	/*
> > >  	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> > >  	 * For now, only the capacity is exported in sysfs
> > > 
> > > 
> > > [snip]
> > >   
> >   


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-11 15:55             ` Ben Widawsky
@ 2021-02-12 13:27               ` Jonathan Cameron
  2021-02-12 15:54                 ` Ben Widawsky
  0 siblings, 1 reply; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-12 13:27 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Thu, 11 Feb 2021 07:55:29 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-02-11 09:55:48, Jonathan Cameron wrote:
> > On Wed, 10 Feb 2021 10:16:05 -0800
> > Ben Widawsky <ben.widawsky@intel.com> wrote:
> >   
> > > On 21-02-10 08:55:57, Ben Widawsky wrote:  
> > > > On 21-02-10 15:07:59, Jonathan Cameron wrote:    
> > > > > On Wed, 10 Feb 2021 13:32:52 +0000
> > > > > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > > > >     
> > > > > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > > > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > > > > >     
> > > > > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > > > > The mailbox is used to interact with the firmware running on the memory
> > > > > > > device. The flow is proven with one implemented command, "identify",
> > > > > > > because the class code has already told the driver this is a memory
> > > > > > > device and the identify command is mandatory.
> > > > > > > 
> > > > > > > CXL devices contain an array of capabilities that describe the
> > > > > > > interactions software can have with the device or firmware running on
> > > > > > > the device. A CXL compliant device must implement the device status and
> > > > > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > > > > implement the memory device capability. Each of the capabilities can
> > > > > > > [will] provide an offset within the MMIO region for interacting with the
> > > > > > > CXL device.
> > > > > > > 
> > > > > > > The capabilities tell the driver how to find and map the register space
> > > > > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > > > > and not handled in this driver.
> > > > > > > 
> > > > > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > > > > a background command. That implementation is saved for a later time.
> > > > > > > 
> > > > > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>      
> > > > > > 
> > > > > > Hi Ben,
> > > > > > 
> > > > > >     
> > > > > > > +/**
> > > > > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > > > > + * @cxlm: The CXL memory device to communicate with.
> > > > > > > + * @mbox_cmd: Command to send to the memory device.
> > > > > > > + *
> > > > > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > > > > + *         succeeded.      
> > > > > > 
> > > > > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > > > > enters an infinite loop as a result.    
> > > > 
> > > > I meant to fix that.
> > > >     
> > > > > > 
> > > > > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > > > > two levels of error checking - the example here proves how easy it is to forget
> > > > > > one.    
> > > > 
> > > > Demonstrably, you're correct. I think it would be good to have a kernel only
> > > > mbox command that does the error checking though. Let me type something up and
> > > > see how it looks.    
> > > 
> > > Hi Jonathan. What do you think of this? The bit I'm on the fence about is if I
> > > should validate output size too. I like the simplicity as it is, but it requires
> > > every caller to possibly check output size, which is kind of the same problem
> > > you're originally pointing out.  
> > 
> > The simplicity is good and this is pretty much what I expected you would end up with
> > (always reassuring)
> > 
> > For the output, perhaps just add another parameter to the wrapper for minimum
> > output length expected?
> > 
> > Now you mention the length question.  It does rather feel like there should also
> > be some protection on memcpy_fromio() copying too much data if the hardware
> > happens to return an unexpectedly long length.  Should never happen, but
> > the hardening is worth adding anyway given it's easy to do.
> > 
> > Jonathan  
> 
> Some background, because I forget what I've said previously... It's unfortunate
> that the spec maxes out at a 1M mailbox size but has enough bits in the length
> field to support 2M-1. I've made some requests to have this fixed, so maybe 3.0
> won't be awkward like this.

Agreed, the spec should be tighter here, but I'd argue anything over 1M indicates buggy hardware.

> 
> I think it makes sense to do as you suggested. One question though: do you have
> an opinion on what we return to the caller as the output payload size? Do we
> cap it at 1M also, or are we honest?
> 
> -       if (out_len && mbox_cmd->payload_out)
> -               memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> +       if (out_len && mbox_cmd->payload_out) {
> +               size_t n = min_t(size_t, cxlm->payload_size, out_len);
> +               memcpy_fromio(mbox_cmd->payload_out, payload, n);
> +       }

Ah, I read the emails in the wrong order.  What you have is what I expected and
what I got confused about in your other email.

> 
> So...
> mbox_cmd->size_out = out_len;
> mbox_cmd->size_out = n;

Good question.  My gut says the second one.
Maybe it's worth a warning print to let us know something
unexpected happened.
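
i.e. roughly (sketch only, assuming out_len and payload_size are both size_t
as in the surrounding function):

        if (out_len && mbox_cmd->payload_out) {
                size_t n = min_t(size_t, cxlm->payload_size, out_len);

                if (n < out_len)
                        dev_warn(&cxlm->pdev->dev,
                                 "mbox output %zu bytes, clipped to %zu\n",
                                 out_len, n);
                memcpy_fromio(mbox_cmd->payload_out, payload, n);
                mbox_cmd->size_out = n;
        }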

> 
> 
> > 
> >   
> > > 
> > > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > > index 55c5f5a6023f..ad7b2077ab28 100644
> > > --- a/drivers/cxl/mem.c
> > > +++ b/drivers/cxl/mem.c
> > > @@ -284,7 +284,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >  }
> > >  
> > >  /**
> > > - * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > + * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
> > >   * @cxlm: The CXL memory device to communicate with.
> > >   * @mbox_cmd: Command to send to the memory device.
> > >   *
> > > @@ -296,7 +296,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >   * This is a generic form of the CXL mailbox send command, thus the only I/O
> > >   * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > >   * types of CXL devices may have further information available upon error
> > > - * conditions.
> > > + * conditions. Driver facilities wishing to send mailbox commands should use the
> > > + * wrapper command.
> > >   *
> > >   * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > >   * mailbox to be OS controlled and the secondary mailbox to be used by system
> > > @@ -304,8 +305,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >   * not need to coordinate with each other. The driver only uses the primary
> > >   * mailbox.
> > >   */
> > > -static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > -				 struct mbox_cmd *mbox_cmd)
> > > +static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > +				   struct mbox_cmd *mbox_cmd)
> > >  {
> > >  	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > >  	u64 cmd_reg, status_reg;
> > > @@ -469,6 +470,54 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> > >  	mutex_unlock(&cxlm->mbox_mutex);
> > >  }
> > >  
> > > +/**
> > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > + * @cxlm: The CXL memory device to communicate with.
> > > + * @opcode: Opcode for the mailbox command.
> > > + * @in: The input payload for the mailbox command.
> > > + * @in_size: The length of the input payload
> > > + * @out: Caller allocated buffer for the output.
> > > + *
> > > + * Context: Any context. Will acquire and release mbox_mutex.
> > > + * Return:
> > > + *  * %>=0	- Number of bytes returned in @out.
> > > + *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
> > > + *  * %-EFAULT	- Hardware error occurred.
> > > + *  * %-ENXIO	- Command completed, but device reported an error.
> > > + *
> > > + * Mailbox commands may execute successfully yet the device itself reported an
> > > + * error. While this distinction can be useful for commands from userspace, the
> > > + * kernel will often only care when both are successful.
> > > + *
> > > + * See __cxl_mem_mbox_send_cmd()
> > > + */
> > > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
> > > +				 size_t in_size, u8 *out)
> > > +{
> > > +	struct mbox_cmd mbox_cmd = {
> > > +		.opcode = opcode,
> > > +		.payload_in = in,
> > > +		.size_in = in_size,
> > > +		.payload_out = out,
> > > +	};
> > > +	int rc;
> > > +
> > > +	rc = cxl_mem_mbox_get(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > +	cxl_mem_mbox_put(cxlm);
> > > +	if (rc)
> > > +		return rc;
> > > +
> > > +	/* TODO: Map return code to proper kernel style errno */
> > > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
> > > +		return -ENXIO;
> > > +
> > > +	return mbox_cmd.size_out;
> > > +}
> > > +
> > >  /**
> > >   * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
> > >   * @cxlmd: The CXL memory device to communicate with.
> > > @@ -1380,33 +1429,18 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
> > >  		u8 poison_caps;
> > >  		u8 qos_telemetry_caps;
> > >  	} __packed id;
> > > -	struct mbox_cmd mbox_cmd = {
> > > -		.opcode = CXL_MBOX_OP_IDENTIFY,
> > > -		.payload_out = &id,
> > > -		.size_in = 0,
> > > -	};
> > >  	int rc;
> > >  
> > > -	/* Retrieve initial device memory map */
> > > -	rc = cxl_mem_mbox_get(cxlm);
> > > -	if (rc)
> > > -		return rc;
> > > -
> > > -	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > -	cxl_mem_mbox_put(cxlm);
> > > -	if (rc)
> > > +	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
> > > +				   (u8 *)&id);
> > > +	if (rc < 0)
> > >  		return rc;
> > >  
> > > -	/* TODO: Handle retry or reset responses from firmware. */
> > > -	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > > -		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > > -			mbox_cmd.return_code);
> > > +	if (rc < sizeof(id)) {
> > > +		dev_err(&cxlm->pdev->dev, "Short identify data\n");
> > >  		return -ENXIO;
> > >  	}
> > >  
> > > -	if (mbox_cmd.size_out != sizeof(id))
> > > -		return -ENXIO;
> > > -
> > >  	/*
> > >  	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> > >  	 * For now, only the capacity is exported in sysfs
> > > 
> > > 
> > > [snip]
> > >   
> >   


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 3/8] cxl/mem: Register CXL memX devices
  2021-02-11 20:40       ` Dan Williams
@ 2021-02-12 13:33         ` Jonathan Cameron
  0 siblings, 0 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-12 13:33 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ben Widawsky, linux-cxl, Linux ACPI, Linux Kernel Mailing List,
	linux-nvdimm, Linux PCI, Bjorn Helgaas, Chris Browy,
	Christoph Hellwig, David Hildenbrand, David Rientjes, Ira Weiny,
	Jon Masters, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V

On Thu, 11 Feb 2021 12:40:45 -0800
Dan Williams <dan.j.williams@intel.com> wrote:

> On Thu, Feb 11, 2021 at 2:19 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Wed, 10 Feb 2021 18:17:25 +0000
> > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> >  
> > > On Tue, 9 Feb 2021 16:02:54 -0800
> > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > >  
> > > > From: Dan Williams <dan.j.williams@intel.com>
> > > >
> > > > Create the /sys/bus/cxl hierarchy to enumerate:
> > > >
> > > > * Memory Devices (per-endpoint control devices)
> > > >
> > > > * Memory Address Space Devices (platform address ranges with
> > > >   interleaving, performance, and persistence attributes)
> > > >
> > > > * Memory Regions (active provisioned memory from an address space device
> > > >   that is in use as System RAM or delegated to libnvdimm as Persistent
> > > >   Memory regions).
> > > >
> > > > For now, only the per-endpoint control devices are registered on the
> > > > 'cxl' bus. However, going forward it will provide a mechanism to
> > > > coordinate cross-device interleave.
> > > >
> > > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>  
> > >
> > > One stray header, and a request for a tiny bit of reordering to
> > > make it easier to chase through creation and destruction.
> > >
> > > Either way with the header move to earlier patch I'm fine with this one.
> > >
> > > Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>  
> >
> > Actually thinking more on this, what is the justification for the
> > complexity + overhead of a percpu_refcount vs a refcount  
> 
> A typical refcount does not have the block and drain semantics of a
> percpu_ref. I'm planning to circle back and make this a first class
> facility of the cdev interface borrowing the debugfs approach [1], but
> for now percpu_ref fits the bill locally.
> 
> > I don't think this is a high enough performance path for it to matter.
> > Perhaps I'm missing a usecase where it does?  
> 
> It's less about percpu_ref performance and more about the
> percpu_ref_tryget_live() facility.
> 
> [1]: http://lore.kernel.org/r/CAPcyv4jEYPsyh0bhbtKGRbK3bgp=_+=2rjx4X0gLi5-25VvDyg@mail.gmail.com

Thanks for the reference. Definitely a nasty corner to clean up so I'll
keep an eye open for a new version of that series.

Jonathan



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 5/8] cxl/mem: Add a "RAW" send command
  2021-02-11 16:01     ` Ben Widawsky
@ 2021-02-12 13:40       ` Jonathan Cameron
  0 siblings, 0 replies; 57+ messages in thread
From: Jonathan Cameron @ 2021-02-12 13:40 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V, Ariel Sibley

On Thu, 11 Feb 2021 08:01:48 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-02-11 11:19:24, Jonathan Cameron wrote:
> > On Tue, 9 Feb 2021 16:02:56 -0800
> > Ben Widawsky <ben.widawsky@intel.com> wrote:
> >   
> > > The CXL memory device send interface will have a number of supported
> > > commands. The raw command is not such a command. Raw commands allow
> > > userspace to send a specified opcode to the underlying hardware and
> > > bypass all driver checks on the command. This is useful for a couple of
> > > usecases, mainly:
> > > 1. Undocumented vendor specific hardware commands  
> > 
> > This one I get.  There are things we'd love to standardize but often they
> > need proving in a generation of hardware before the data is available to
> > justify taking it to a standards body.  Stuff like performance stats.
> > This stuff will all sit in the vendor defined range.  Maybe there is an
> > argument for in driver hooks to allow proper support even for these
> > (Ben mentioned this in the other branch of the thread).
> >   
> > > 2. Prototyping new hardware commands not yet supported by the driver  
> > 
> > For 2, we could just have a convenient place to enable this via a one-line patch.
> > Some subsystems (SPI comes to mind) do this for their equivalent of raw
> > commands.  The code is all there to enable it but you need to hook it
> > up if you want to use it.  Avoids chance of a distro shipping it.
> >   
> 
> I'm fine to drop #2 as a justification point, or maybe reword the commit message
> to say, "you could also just do... but since we have it for #1 already..."
> 
> > > 
> > > While this all sounds very powerful it comes with a couple of caveats:
> > > 1. Bug reports using raw commands will not get the same level of
> > >    attention as bug reports using supported commands (via taint).
> > > 2. Supported commands will be rejected by the RAW command.  
> > 
> > Perhaps I'm misreading this point 2 (not sure the code actually does it!)
> > 
> > As stated what worries me as it means when we add support for a new
> > bit of the spec we just broke the userspace ABI.
> >   
> 
> It does not break ABI. The agreement is userspace must always use the QUERY
> command to find out what commands are supported. If it tries to use a RAW
> command that is a supported command, it will be rejected. In the case you
> mention, that's an application bug. If there is a way to document that better
> than what's already in the UAPI kdocs, I'm open to suggestions.
> 
> Unlike perhaps other UAPI, this one only promises to give you a way to determine
> what commands you can use, not the list of what commands you can use.

*crossed fingers* on this.  Users may have a different view when their application
just stops working.  It might print a nice error message telling them why
but it still doesn't work and that way lies grumpy Linus and reverts...

Mostly we'll get away with it because no one will notice, but it's unfortunately
still risky.   Personal preference is to play safer and not allow direct userspace
access to commands in the spec (unless we've decided they will always be available
directly to userspace).  This includes anything in the ranges reserved for future
spec usage.

Jonathan



> 
> > > 
> > > With this comes new debugfs knob to allow full access to your toes with
> > > your weapon of choice.  
> > 
> > A few trivial things inline,
> > 
> > Jonathan
> >   
> > > 
> > > Cc: Ariel Sibley <Ariel.Sibley@microchip.com>
> > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> > > ---
> > >  drivers/cxl/Kconfig          |  18 +++++
> > >  drivers/cxl/mem.c            | 125 ++++++++++++++++++++++++++++++++++-
> > >  include/uapi/linux/cxl_mem.h |  12 +++-
> > >  3 files changed, 152 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > > index c4ba3aa0a05d..08eaa8e52083 100644
> > > --- a/drivers/cxl/Kconfig
> > > +++ b/drivers/cxl/Kconfig
> > > @@ -33,6 +33,24 @@ config CXL_MEM
> > >  
> > >  	  If unsure say 'm'.
> > >  
> > > +config CXL_MEM_RAW_COMMANDS
> > > +	bool "RAW Command Interface for Memory Devices"
> > > +	depends on CXL_MEM
> > > +	help
> > > +	  Enable CXL RAW command interface.
> > > +
> > > +	  The CXL driver ioctl interface may assign a kernel ioctl command
> > > +	  number for each specification defined opcode. At any given point in
> > > +	  time the number of opcodes that the specification defines and a device
> > > +	  may implement may exceed the kernel's set of associated ioctl function
> > > +	  numbers. The mismatch is either by omission, specification is too new,
> > > +	  or by design. When prototyping new hardware, or developing / debugging
> > > +	  the driver it is useful to be able to submit any possible command to
> > > +	  the hardware, even commands that may crash the kernel due to their
> > > +	  potential impact to memory currently in use by the kernel.
> > > +
> > > +	  If developing CXL hardware or the driver say Y, otherwise say N.
> > > +
> > >  config CXL_MEM_INSECURE_DEBUG
> > >  	bool "CXL.mem debugging"
> > >  	depends on CXL_MEM
> > > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > > index ce65630bb75e..6d766a994dce 100644
> > > --- a/drivers/cxl/mem.c
> > > +++ b/drivers/cxl/mem.c
> > > @@ -1,6 +1,8 @@
> > >  // SPDX-License-Identifier: GPL-2.0-only
> > >  /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> > >  #include <uapi/linux/cxl_mem.h>
> > > +#include <linux/security.h>
> > > +#include <linux/debugfs.h>
> > >  #include <linux/module.h>
> > >  #include <linux/mutex.h>
> > >  #include <linux/cdev.h>
> > > @@ -41,7 +43,14 @@
> > >  
> > >  enum opcode {
> > >  	CXL_MBOX_OP_INVALID		= 0x0000,
> > > +	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
> > > +	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
> > >  	CXL_MBOX_OP_IDENTIFY		= 0x4000,
> > > +	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
> > > +	CXL_MBOX_OP_SET_LSA		= 0x4103,
> > > +	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
> > > +	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
> > > +	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
> > >  	CXL_MBOX_OP_MAX			= 0x10000
> > >  };
> > >  
> > > @@ -91,6 +100,8 @@ struct cxl_memdev {
> > >  
> > >  static int cxl_mem_major;
> > >  static DEFINE_IDA(cxl_memdev_ida);
> > > +static struct dentry *cxl_debugfs;
> > > +static bool raw_allow_all;
> > >  
> > >  /**
> > >   * struct cxl_mem_command - Driver representation of a memory device command
> > > @@ -132,6 +143,49 @@ struct cxl_mem_command {
> > >   */
> > >  static struct cxl_mem_command mem_commands[] = {
> > >  	CXL_CMD(IDENTIFY, NONE, 0, 0x43),
> > > +#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
> > > +	CXL_CMD(RAW, NONE, ~0, ~0),
> > > +#endif
> > > +};
> > > +
> > > +/*
> > > + * Commands that RAW doesn't permit. The rationale for each:
> > > + *
> > > + * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
> > > + * coordination of transaction timeout values at the root bridge level.
> > > + *
> > > + * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
> > > + * and needs to be coordinated with HDM updates.
> > > + *
> > > + * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
> > > + * driver and any writes from userspace invalidates those contents.
> > > + *
> > > + * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
> > > + * to the device after it is marked clean, userspace can not make that
> > > + * assertion.
> > > + *
> > > + * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
> > > + * is kept up to date with patrol notifications and error management.
> > > + */
> > > +static u16 disabled_raw_commands[] = {
> > > +	CXL_MBOX_OP_ACTIVATE_FW,
> > > +	CXL_MBOX_OP_SET_PARTITION_INFO,
> > > +	CXL_MBOX_OP_SET_LSA,
> > > +	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
> > > +	CXL_MBOX_OP_SCAN_MEDIA,
> > > +	CXL_MBOX_OP_GET_SCAN_MEDIA,
> > > +};
> > > +
> > > +/*
> > > + * Command sets that RAW doesn't permit. All opcodes in this set are
> > > + * disabled because they pass plain text security payloads over the
> > > + * user/kernel boundary. This functionality is intended to be wrapped
> > > + * behind the keys ABI which allows for encrypted payloads in the UAPI
> > > + */
> > > +static u8 security_command_sets[] = {
> > > +	0x44, /* Sanitize */
> > > +	0x45, /* Persistent Memory Data-at-rest Security */
> > > +	0x46, /* Security Passthrough */
> > >  };
> > >  
> > >  #define cxl_for_each_cmd(cmd)                                                  \
> > > @@ -162,6 +216,16 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
> > >  	return 0;
> > >  }
> > >  
> > > +static bool is_security_command(u16 opcode)
> > > +{
> > > +	int i;
> > > +
> > > +	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
> > > +		if (security_command_sets[i] == (opcode >> 8))
> > > +			return true;
> > > +	return false;
> > > +}
> > > +
> > >  static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >  				 struct mbox_cmd *mbox_cmd)
> > >  {
> > > @@ -170,7 +234,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > >  	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
> > >  		mbox_cmd->opcode, mbox_cmd->size_in);
> > >  
> > > -	if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
> > > +	if (!is_security_command(mbox_cmd->opcode) ||
> > > +	    IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
> > >  		print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
> > >  				     mbox_cmd->payload_in, mbox_cmd->size_in,
> > >  				     true);
> > > @@ -434,6 +499,9 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
> > >  		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
> > >  		cmd->info.size_in);
> > >  
> > > +	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
> > > +		      "raw command path used\n");
> > > +
> > >  	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > >  	cxl_mem_mbox_put(cxlm);
> > >  	if (rc)
> > > @@ -464,6 +532,29 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
> > >  	return rc;
> > >  }
> > >  
> > > +static bool cxl_mem_raw_command_allowed(u16 opcode)
> > > +{
> > > +	int i;
> > > +
> > > +	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
> > > +		return false;
> > > +
> > > +	if (security_locked_down(LOCKDOWN_NONE))
> > > +		return false;
> > > +
> > > +	if (raw_allow_all)
> > > +		return true;
> > > +
> > > +	if (is_security_command(opcode))  
> > Given we are mixing generic calls like security_locked_down()
> > and local cxl specific ones like this one, prefix the
> > local versions.
> > 
> > cxl_is_security_command()
> > 
> > I'd also have a slight preference to do it for cxl_disabled_raw_commands
> > and cxl_raw_allow_all though they are less important as more obviously
> > local by not being function calls.
> >   
> > > +		return false;
> > > +
> > > +	for (i = 0; i < ARRAY_SIZE(disabled_raw_commands); i++)
> > > +		if (disabled_raw_commands[i] == opcode)
> > > +			return false;
> > > +
> > > +	return true;
> > > +}
> > > +
> > >  /**
> > >   * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
> > >   * @cxlm: &struct cxl_mem device whose mailbox will be used.
> > > @@ -500,6 +591,29 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
> > >  	if (send_cmd->in.size > cxlm->payload_size)
> > >  		return -EINVAL;
> > >  
> > > +	/* Checks are bypassed for raw commands but along comes the taint! */
> > > +	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
> > > +		const struct cxl_mem_command temp = {
> > > +			.info = {
> > > +				.id = CXL_MEM_COMMAND_ID_RAW,
> > > +				.flags = CXL_MEM_COMMAND_FLAG_NONE,
> > > +				.size_in = send_cmd->in.size,
> > > +				.size_out = send_cmd->out.size,
> > > +			},
> > > +			.opcode = send_cmd->raw.opcode
> > > +		};
> > > +
> > > +		if (send_cmd->raw.rsvd)
> > > +			return -EINVAL;
> > > +
> > > +		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
> > > +			return -EPERM;
> > > +
> > > +		memcpy(out_cmd, &temp, sizeof(temp));
> > > +
> > > +		return 0;
> > > +	}
> > > +
> > >  	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
> > >  		return -EINVAL;
> > >  
> > > @@ -1123,8 +1237,9 @@ static struct pci_driver cxl_mem_driver = {
> > >  
> > >  static __init int cxl_mem_init(void)
> > >  {
> > > -	int rc;
> > > +	struct dentry *mbox_debugfs;
> > >  	dev_t devt;
> > > +	int rc;  
> > 
> > Shuffle this back to the place it was introduced to reduce patch noise.
> >   
> > >  
> > >  	rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl");
> > >  	if (rc)
> > > @@ -1139,11 +1254,17 @@ static __init int cxl_mem_init(void)
> > >  		return rc;
> > >  	}
> > >  
> > > +	cxl_debugfs = debugfs_create_dir("cxl", NULL);
> > > +	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
> > > +	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
> > > +			    &raw_allow_all);
> > > +
> > >  	return 0;
> > >  }
> > >  
> > >  static __exit void cxl_mem_exit(void)
> > >  {
> > > +	debugfs_remove_recursive(cxl_debugfs);
> > >  	pci_unregister_driver(&cxl_mem_driver);
> > >  	unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
> > >  }
> > > diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
> > > index f1f7e9f32ea5..72d1eb601a5d 100644
> > > --- a/include/uapi/linux/cxl_mem.h
> > > +++ b/include/uapi/linux/cxl_mem.h
> > > @@ -22,6 +22,7 @@
> > >  #define CXL_CMDS                                                          \
> > >  	___C(INVALID, "Invalid Command"),                                 \
> > >  	___C(IDENTIFY, "Identify Command"),                               \
> > > +	___C(RAW, "Raw device command"),                                  \
> > >  	___C(MAX, "Last command")
> > >  
> > >  #define ___C(a, b) CXL_MEM_COMMAND_ID_##a
> > > @@ -112,6 +113,9 @@ struct cxl_mem_query_commands {
> > >   * @id: The command to send to the memory device. This must be one of the
> > >   *	commands returned by the query command.
> > >   * @flags: Flags for the command (input).
> > > + * @raw: Special fields for raw commands
> > > + * @raw.opcode: Opcode passed to hardware when using the RAW command.
> > > + * @raw.rsvd: Must be zero.
> > >   * @rsvd: Must be zero.
> > >   * @retval: Return value from the memory device (output).
> > >   * @in.size: Size of the payload to provide to the device (input).
> > > @@ -133,7 +137,13 @@ struct cxl_mem_query_commands {
> > >  struct cxl_send_command {
> > >  	__u32 id;
> > >  	__u32 flags;
> > > -	__u32 rsvd;
> > > +	union {
> > > +		struct {
> > > +			__u16 opcode;
> > > +			__u16 rsvd;
> > > +		} raw;
> > > +		__u32 rsvd;
> > > +	};
> > >  	__u32 retval;
> > >  
> > >  	struct {  
> >   


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 2/8] cxl/mem: Find device capabilities
  2021-02-12 13:27               ` Jonathan Cameron
@ 2021-02-12 15:54                 ` Ben Widawsky
  0 siblings, 0 replies; 57+ messages in thread
From: Ben Widawsky @ 2021-02-12 15:54 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On 21-02-12 13:27:06, Jonathan Cameron wrote:
> On Thu, 11 Feb 2021 07:55:29 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > On 21-02-11 09:55:48, Jonathan Cameron wrote:
> > > On Wed, 10 Feb 2021 10:16:05 -0800
> > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > >   
> > > > On 21-02-10 08:55:57, Ben Widawsky wrote:  
> > > > > On 21-02-10 15:07:59, Jonathan Cameron wrote:    
> > > > > > On Wed, 10 Feb 2021 13:32:52 +0000
> > > > > > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > > > > >     
> > > > > > > On Tue, 9 Feb 2021 16:02:53 -0800
> > > > > > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > > > > > >     
> > > > > > > > Provide enough functionality to utilize the mailbox of a memory device.
> > > > > > > > The mailbox is used to interact with the firmware running on the memory
> > > > > > > > device. The flow is proven with one implemented command, "identify",
> > > > > > > > because the class code has already told the driver this is a memory
> > > > > > > > device and the identify command is mandatory.
> > > > > > > > 
> > > > > > > > CXL devices contain an array of capabilities that describe the
> > > > > > > > interactions software can have with the device or firmware running on
> > > > > > > > the device. A CXL compliant device must implement the device status and
> > > > > > > > the mailbox capability. Additionally, a CXL compliant memory device must
> > > > > > > > implement the memory device capability. Each of the capabilities can
> > > > > > > > [will] provide an offset within the MMIO region for interacting with the
> > > > > > > > CXL device.
> > > > > > > > 
> > > > > > > > The capabilities tell the driver how to find and map the register space
> > > > > > > > for CXL Memory Devices. The registers are required to utilize the CXL
> > > > > > > > spec defined mailbox interface. The spec outlines two mailboxes, primary
> > > > > > > > and secondary. The secondary mailbox is earmarked for system firmware,
> > > > > > > > and not handled in this driver.
> > > > > > > > 
> > > > > > > > Primary mailboxes are capable of generating an interrupt when submitting
> > > > > > > > a background command. That implementation is saved for a later time.
> > > > > > > > 
> > > > > > > > Link: https://www.computeexpresslink.org/download-the-specification
> > > > > > > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > > > > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>      
> > > > > > > 
> > > > > > > Hi Ben,
> > > > > > > 
> > > > > > >     
> > > > > > > > +/**
> > > > > > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > > > > > + * @cxlm: The CXL memory device to communicate with.
> > > > > > > > + * @mbox_cmd: Command to send to the memory device.
> > > > > > > > + *
> > > > > > > > + * Context: Any context. Expects mbox_lock to be held.
> > > > > > > > + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
> > > > > > > > + *         Caller should check the return code in @mbox_cmd to make sure it
> > > > > > > > + *         succeeded.      
> > > > > > > 
> > > > > > > cxl_xfer_log() doesn't check mbox_cmd->return_code and for my test it currently
> > > > > > > enters an infinite loop as a result.    
> > > > > 
> > > > > I meant to fix that.
> > > > >     
> > > > > > > 
> > > > > > > I haven't checked other paths, but to my mind it is not a good idea to require
> > > > > > > two levels of error checking - the example here proves how easy it is to forget
> > > > > > > one.    
> > > > > 
> > > > > Demonstrably, you're correct. I think it would be good to have a kernel only
> > > > > mbox command that does the error checking though. Let me type something up and
> > > > > see how it looks.    
> > > > 
> > > > Hi Jonathan. What do you think of this? The bit I'm on the fence about is if I
> > > > should validate output size too. I like the simplicity as it is, but it requires
> > > > every caller to possibly check output size, which is kind of the same problem
> > > > you were originally pointing out.
> > > 
> > > The simplicity is good and this is pretty much what I expected you would end up with
> > > (always reassuring)
> > > 
> > > For the output, perhaps just add another parameter to the wrapper for minimum
> > > output length expected?
> > > 
> > > Now you mention the length question.  It does rather feel like there should also
> > > be some protection on memcpy_fromio() copying too much data if the hardware
> > > happens to return an unexpectedly long length.  Should never happen, but
> > > the hardening is worth adding anyway given it's easy to do.
> > > 
> > > Jonathan  
> > 
> > Some background because I forget what I've said previously... It's unfortunate
> > that the spec maxes at 1M mailbox size but has enough bits in the length field
> > to support 2M-1. I've made some requests to have this fixed, so maybe 3.0 won't
> > be awkward like this.
> 
> Agreed spec should be tighter here, but I'd argue over 1M indicates buggy hardware.
> 
> > 
> > I think it makes sense to do as you suggested. One question though: do you have
> > an opinion on what we return to the caller as the output payload size? Do we cap
> > it at 1M also, or are we honest?
> > 
> > -       if (out_len && mbox_cmd->payload_out)
> > -               memcpy_fromio(mbox_cmd->payload_out, payload, out_len);
> > +       if (out_len && mbox_cmd->payload_out) {
> > +               size_t n = min_t(size_t, cxlm->payload_size, out_len);
> > +               memcpy_fromio(mbox_cmd->payload_out, payload, n);
> > +       }
> 
> Ah, I read emails in wrong order.  What you have is what I expected and got
> confused about in your other email.
> 
> > 
> > So...
> > mbox_cmd->size_out = out_len;
> > mbox_cmd->size_out = n;
> 
> Good question.  My gut says the second one.
> Maybe it's worth a warning print to let us know something
> unexpected happened.
> 

I also prefer 'n'. It's unfortunate, though, that if userspace hits this condition
it would have to scrape kernel logs to find out. Perhaps userspace wouldn't ever
really care.
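
Combining the cap with the warning idea, a sketch of what that branch could look
like (dev_warn_once() and the message are just illustrative here, and out_len is
assumed to be a size_t):

	if (out_len && mbox_cmd->payload_out) {
		size_t n = min_t(size_t, cxlm->payload_size, out_len);

		/* Hardware reported more output than the mailbox allows; cap it. */
		if (n < out_len)
			dev_warn_once(&cxlm->pdev->dev,
				      "mailbox output %zu bytes, capped at %zu\n",
				      out_len, n);

		memcpy_fromio(mbox_cmd->payload_out, payload, n);
		mbox_cmd->size_out = n;
	}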

> > 
> > 
> > > 
> > >   
> > > > 
> > > > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > > > index 55c5f5a6023f..ad7b2077ab28 100644
> > > > --- a/drivers/cxl/mem.c
> > > > +++ b/drivers/cxl/mem.c
> > > > @@ -284,7 +284,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > > >  }
> > > >  
> > > >  /**
> > > > - * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > + * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
> > > >   * @cxlm: The CXL memory device to communicate with.
> > > >   * @mbox_cmd: Command to send to the memory device.
> > > >   *
> > > > @@ -296,7 +296,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > > >   * This is a generic form of the CXL mailbox send command, thus the only I/O
> > > >   * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other
> > > >   * types of CXL devices may have further information available upon error
> > > > - * conditions.
> > > > + * conditions. Driver facilities wishing to send mailbox commands should use the
> > > > + * wrapper command.
> > > >   *
> > > >   * The CXL spec allows for up to two mailboxes. The intention is for the primary
> > > >   * mailbox to be OS controlled and the secondary mailbox to be used by system
> > > > @@ -304,8 +305,8 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
> > > >   * not need to coordinate with each other. The driver only uses the primary
> > > >   * mailbox.
> > > >   */
> > > > -static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > > -				 struct mbox_cmd *mbox_cmd)
> > > > +static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
> > > > +				   struct mbox_cmd *mbox_cmd)
> > > >  {
> > > >  	void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
> > > >  	u64 cmd_reg, status_reg;
> > > > @@ -469,6 +470,54 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
> > > >  	mutex_unlock(&cxlm->mbox_mutex);
> > > >  }
> > > >  
> > > > +/**
> > > > + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
> > > > + * @cxlm: The CXL memory device to communicate with.
> > > > + * @opcode: Opcode for the mailbox command.
> > > > + * @in: The input payload for the mailbox command.
> > > > + * @in_size: The length of the input payload
> > > > + * @out: Caller allocated buffer for the output.
> > > > + *
> > > > + * Context: Any context. Will acquire and release mbox_mutex.
> > > > + * Return:
> > > > + *  * %>=0	- Number of bytes returned in @out.
> > > > + *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
> > > > + *  * %-EFAULT	- Hardware error occurred.
> > > > + *  * %-ENXIO	- Command completed, but device reported an error.
> > > > + *
> > > > + * Mailbox commands may execute successfully yet the device itself reported an
> > > > + * error. While this distinction can be useful for commands from userspace, the
> > > > + * kernel will often only care when both are successful.
> > > > + *
> > > > + * See __cxl_mem_mbox_send_cmd()
> > > > + */
> > > > +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
> > > > +				 size_t in_size, u8 *out)
> > > > +{
> > > > +	struct mbox_cmd mbox_cmd = {
> > > > +		.opcode = opcode,
> > > > +		.payload_in = in,
> > > > +		.size_in = in_size,
> > > > +		.payload_out = out,
> > > > +	};
> > > > +	int rc;
> > > > +
> > > > +	rc = cxl_mem_mbox_get(cxlm);
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > > +	cxl_mem_mbox_put(cxlm);
> > > > +	if (rc)
> > > > +		return rc;
> > > > +
> > > > +	/* TODO: Map return code to proper kernel style errno */
> > > > +	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
> > > > +		return -ENXIO;
> > > > +
> > > > +	return mbox_cmd.size_out;
> > > > +}
> > > > +
> > > >  /**
> > > >   * handle_mailbox_cmd_from_user() - Dispatch a mailbox command.
> > > >   * @cxlmd: The CXL memory device to communicate with.
> > > > @@ -1380,33 +1429,18 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
> > > >  		u8 poison_caps;
> > > >  		u8 qos_telemetry_caps;
> > > >  	} __packed id;
> > > > -	struct mbox_cmd mbox_cmd = {
> > > > -		.opcode = CXL_MBOX_OP_IDENTIFY,
> > > > -		.payload_out = &id,
> > > > -		.size_in = 0,
> > > > -	};
> > > >  	int rc;
> > > >  
> > > > -	/* Retrieve initial device memory map */
> > > > -	rc = cxl_mem_mbox_get(cxlm);
> > > > -	if (rc)
> > > > -		return rc;
> > > > -
> > > > -	rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
> > > > -	cxl_mem_mbox_put(cxlm);
> > > > -	if (rc)
> > > > +	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
> > > > +				   (u8 *)&id);
> > > > +	if (rc < 0)
> > > >  		return rc;
> > > >  
> > > > -	/* TODO: Handle retry or reset responses from firmware. */
> > > > -	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) {
> > > > -		dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n",
> > > > -			mbox_cmd.return_code);
> > > > +	if (rc < sizeof(id)) {
> > > > +		dev_err(&cxlm->pdev->dev, "Short identify data\n");
> > > >  		return -ENXIO;
> > > >  	}
> > > >  
> > > > -	if (mbox_cmd.size_out != sizeof(id))
> > > > -		return -ENXIO;
> > > > -
> > > >  	/*
> > > >  	 * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
> > > >  	 * For now, only the capacity is exported in sysfs
> > > > 
> > > > 
> > > > [snip]
> > > >   
> > >   
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-10  0:02 ` [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface Ben Widawsky
  2021-02-10 18:45   ` Jonathan Cameron
@ 2021-02-14 16:30   ` Al Viro
  2021-02-14 23:14     ` Ben Widawsky
  1 sibling, 1 reply; 57+ messages in thread
From: Al Viro @ 2021-02-14 16:30 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On Tue, Feb 09, 2021 at 04:02:55PM -0800, Ben Widawsky wrote:

> +static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
> +					const struct cxl_mem_command *cmd,
> +					u64 in_payload, u64 out_payload,
> +					struct cxl_send_command __user *s)
> +{
> +	struct cxl_mem *cxlm = cxlmd->cxlm;
> +	struct device *dev = &cxlmd->dev;
> +	struct mbox_cmd mbox_cmd = {
> +		.opcode = cmd->opcode,
> +		.size_in = cmd->info.size_in,
> +	};
> +	s32 user_size_out;
> +	int rc;
> +
> +	if (get_user(user_size_out, &s->out.size))
> +		return -EFAULT;

You have already copied it in.  Never reread stuff from userland - it *can*
change under you.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-14 16:30   ` Al Viro
@ 2021-02-14 23:14     ` Ben Widawsky
  2021-02-14 23:50       ` Al Viro
  0 siblings, 1 reply; 57+ messages in thread
From: Ben Widawsky @ 2021-02-14 23:14 UTC (permalink / raw)
  To: Al Viro
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On 21-02-14 16:30:09, Al Viro wrote:
> On Tue, Feb 09, 2021 at 04:02:55PM -0800, Ben Widawsky wrote:
> 
> > +static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
> > +					const struct cxl_mem_command *cmd,
> > +					u64 in_payload, u64 out_payload,
> > +					struct cxl_send_command __user *s)
> > +{
> > +	struct cxl_mem *cxlm = cxlmd->cxlm;
> > +	struct device *dev = &cxlmd->dev;
> > +	struct mbox_cmd mbox_cmd = {
> > +		.opcode = cmd->opcode,
> > +		.size_in = cmd->info.size_in,
> > +	};
> > +	s32 user_size_out;
> > +	int rc;
> > +
> > +	if (get_user(user_size_out, &s->out.size))
> > +		return -EFAULT;
> 
> You have already copied it in.  Never reread stuff from userland - it *can*
> change under you.

As it turns out, this is some leftover logic which doesn't need to exist at all,
and I'm happy to change it. Thanks for reviewing.

I wasn't familiar with this restriction though. For my edification, could you
explain how that could happen? Also, is this something that should go in the
kdocs? I don't see anything about this restriction there.

Thanks.
Ben

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-14 23:14     ` Ben Widawsky
@ 2021-02-14 23:50       ` Al Viro
  2021-02-14 23:57         ` Al Viro
  0 siblings, 1 reply; 57+ messages in thread
From: Al Viro @ 2021-02-14 23:50 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On Sun, Feb 14, 2021 at 03:14:56PM -0800, Ben Widawsky wrote:
> On 21-02-14 16:30:09, Al Viro wrote:
> > On Tue, Feb 09, 2021 at 04:02:55PM -0800, Ben Widawsky wrote:
> > 
> > > +static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
> > > +					const struct cxl_mem_command *cmd,
> > > +					u64 in_payload, u64 out_payload,
> > > +					struct cxl_send_command __user *s)
> > > +{
> > > +	struct cxl_mem *cxlm = cxlmd->cxlm;
> > > +	struct device *dev = &cxlmd->dev;
> > > +	struct mbox_cmd mbox_cmd = {
> > > +		.opcode = cmd->opcode,
> > > +		.size_in = cmd->info.size_in,
> > > +	};
> > > +	s32 user_size_out;
> > > +	int rc;
> > > +
> > > +	if (get_user(user_size_out, &s->out.size))
> > > +		return -EFAULT;
> > 
> > You have already copied it in.  Never reread stuff from userland - it *can*
> > change under you.
> 
> As it turns out, this is some leftover logic which doesn't need to exist at all,
> and I'm happy to change it. Thanks for reviewing.
> 
> I wasn't familiar with this restriction though. For my edification could you
> explain how that could happen? Also, is this something that should go in the
> kdocs, because I don't see anything about this restriction there.

Er...  You do realize that if two processes share memory, one can bloody well
modify it while another is in the middle of a syscall, right?  Always could -
even mmap(2) with MAP_SHARED is sufficient, same as shmat(2), or the wholesale
sharing between POSIX threads, etc.

And even on UP with no preemption you could bloody well have a structure that
spans a page boundary, with the next page being mmapped and currently not
present in memory.  Then copy_from_user() would've copied the beginning, hit
a page fault, try to read the next page from something slow and lose CPU.
Letting the second process run and modify the already copied part.

It has been possible since at least the mid-80s, well before Linux.  Anything in
user memory can change under you, right in the middle of a syscall.  Always
could.  And there had been very real bugs along the lines of data being
read twice, once for safety check, once for actual work.  Something like

	get_user(len, &user_object->len);
	check that len is reasonable
	p = kmalloc(offsetof(struct foo, string[len]), GFP_KERNEL);
	copy_from_user(p, user_object, len);
	work with the copy, assuming that first p->len bytes of p->string[]
are safe to use, find out that p->len is much greater than len since
the userland data has changed between two fetches

Some of those had been exploitable from the very beginning, some had become
such on innocuous-looking changes.

For the sake of your sanity it's better to avoid such landmines.  In some
cases it's OK to read the data twice (e.g. in something like select(2)), but
those cases are rare and seeing something of that sort is generally a big
red flag on review.  In almost all cases it's best avoided.
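
To make that concrete, a minimal sketch of the single-fetch pattern, using the
hypothetical 'struct foo' from the example above (FOO_MAX_LEN is an assumed
bound, not a real define):

	struct foo {
		__u32 len;
		char string[];
	};

	static struct foo *foo_copy_from_user(struct foo __user *uobj)
	{
		struct foo *p;
		u32 len;

		/* Fetch the length from userland exactly once. */
		if (get_user(len, &uobj->len))
			return ERR_PTR(-EFAULT);

		if (len > FOO_MAX_LEN)
			return ERR_PTR(-EINVAL);

		p = kmalloc(struct_size(p, string, len), GFP_KERNEL);
		if (!p)
			return ERR_PTR(-ENOMEM);

		/*
		 * Size the payload copy by the validated local value, not by
		 * re-reading uobj->len, which userland could change meanwhile.
		 */
		p->len = len;
		if (copy_from_user(p->string, uobj->string, len)) {
			kfree(p);
			return ERR_PTR(-EFAULT);
		}

		return p;	/* p->len can now be trusted everywhere */
	}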

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface
  2021-02-14 23:50       ` Al Viro
@ 2021-02-14 23:57         ` Al Viro
  0 siblings, 0 replies; 57+ messages in thread
From: Al Viro @ 2021-02-14 23:57 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, linux-acpi, linux-kernel, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Jonathan Cameron, Rafael Wysocki, Randy Dunlap, Vishal Verma,
	John Groves (jgroves),
	Kelley, Sean V, kernel test robot, Dan Williams

On Sun, Feb 14, 2021 at 11:50:12PM +0000, Al Viro wrote:
> 	check that len is reasonable
> 	p = kmalloc(offsetof(struct foo, string[len]), GFP_KERNEL);
> 	copy_from_user(p, user_object, len);
			offsetof(struct foo, string[len]), that is

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 6/8] cxl/mem: Enable commands via CEL
  2021-02-11 12:02   ` Jonathan Cameron
  2021-02-11 17:45     ` Ben Widawsky
@ 2021-02-16 13:43     ` Bartosz Golaszewski
  1 sibling, 0 replies; 57+ messages in thread
From: Bartosz Golaszewski @ 2021-02-16 13:43 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Ben Widawsky, linux-cxl, ACPI Devel Maling List,
	Linux Kernel Mailing List, linux-nvdimm, linux-pci,
	Bjorn Helgaas, Chris Browy, Christoph Hellwig, Dan Williams,
	David Hildenbrand, David Rientjes, Ira Weiny, Jon Masters,
	Rafael Wysocki, Randy Dunlap, Vishal Verma, John Groves (jgroves),
	Kelley, Sean V

On Thu, Feb 11, 2021 at 1:12 PM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>

[snip!]

> >
> > @@ -869,6 +891,14 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
> >       mutex_init(&cxlm->mbox_mutex);
> >       cxlm->pdev = pdev;
> >       cxlm->regs = regs + offset;
> > +     cxlm->enabled_cmds =
> > +             devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
> > +                                sizeof(unsigned long),
> > +                                GFP_KERNEL | __GFP_ZERO);
>
> Hmm. There doesn't seem to be a devm_bitmap_zalloc
>

FYI I've implemented both devm_bitmap_zalloc() as well as
devm_bitmap_alloc() and made them part of a series I sent out to
linux-gpio two weeks ago (surprisingly - it's nowhere to be found on
lkml or spinics or even patchwork :/). The patches didn't make it for
v5.12 but I'll respin them after the merge window, so we'll have those
devres helpers for v5.13.
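
With those in place, the allocation quoted above could collapse to a single
call. A sketch against the proposed signature (the error path is a guess, since
it isn't in the quoted hunk):

	cxlm->enabled_cmds = devm_bitmap_zalloc(dev, cxl_cmd_count,
						GFP_KERNEL);
	if (!cxlm->enabled_cmds)
		return NULL;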

Bartosz

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2021-02-16 13:44 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-10  0:02 [PATCH v2 0/8] CXL 2.0 Support Ben Widawsky
2021-02-10  0:02 ` [PATCH v2 1/8] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints Ben Widawsky
2021-02-10 16:17   ` Jonathan Cameron
2021-02-10 17:12     ` Ben Widawsky
2021-02-10 17:23       ` Jonathan Cameron
2021-02-10  0:02 ` [PATCH v2 2/8] cxl/mem: Find device capabilities Ben Widawsky
2021-02-10 13:32   ` Jonathan Cameron
2021-02-10 15:07     ` Jonathan Cameron
2021-02-10 16:55       ` Ben Widawsky
2021-02-10 17:30         ` Jonathan Cameron
2021-02-10 18:16         ` Ben Widawsky
2021-02-11  9:55           ` Jonathan Cameron
2021-02-11 15:55             ` Ben Widawsky
2021-02-12 13:27               ` Jonathan Cameron
2021-02-12 15:54                 ` Ben Widawsky
2021-02-11 18:27             ` Ben Widawsky
2021-02-12 13:23               ` Jonathan Cameron
2021-02-10 19:32     ` Ben Widawsky
2021-02-10 17:41   ` Jonathan Cameron
2021-02-10 18:53     ` Ben Widawsky
2021-02-10 19:54       ` Dan Williams
2021-02-11 10:01         ` Jonathan Cameron
2021-02-11 16:04           ` Ben Widawsky
2021-02-10  0:02 ` [PATCH v2 3/8] cxl/mem: Register CXL memX devices Ben Widawsky
2021-02-10 18:17   ` Jonathan Cameron
2021-02-11 10:17     ` Jonathan Cameron
2021-02-11 20:40       ` Dan Williams
2021-02-12 13:33         ` Jonathan Cameron
2021-02-10  0:02 ` [PATCH v2 4/8] cxl/mem: Add basic IOCTL interface Ben Widawsky
2021-02-10 18:45   ` Jonathan Cameron
2021-02-10 20:22     ` Ben Widawsky
2021-02-11  4:40     ` Dan Williams
2021-02-11 10:06       ` Jonathan Cameron
2021-02-11 16:54         ` Ben Widawsky
2021-02-14 16:30   ` Al Viro
2021-02-14 23:14     ` Ben Widawsky
2021-02-14 23:50       ` Al Viro
2021-02-14 23:57         ` Al Viro
2021-02-10  0:02 ` [PATCH v2 5/8] cxl/mem: Add a "RAW" send command Ben Widawsky
2021-02-10 15:26   ` Ariel.Sibley
2021-02-10 16:49     ` Ben Widawsky
2021-02-10 18:03       ` Ariel.Sibley
2021-02-10 18:11         ` Ben Widawsky
2021-02-10 18:46           ` Ariel.Sibley
2021-02-10 19:12             ` Ben Widawsky
2021-02-11 16:43     ` Dan Williams
2021-02-11 11:19   ` Jonathan Cameron
2021-02-11 16:01     ` Ben Widawsky
2021-02-12 13:40       ` Jonathan Cameron
2021-02-10  0:02 ` [PATCH v2 6/8] cxl/mem: Enable commands via CEL Ben Widawsky
2021-02-11 12:02   ` Jonathan Cameron
2021-02-11 17:45     ` Ben Widawsky
2021-02-11 20:34       ` Dan Williams
2021-02-16 13:43     ` Bartosz Golaszewski
2021-02-10  0:02 ` [PATCH v2 7/8] cxl/mem: Add set of informational commands Ben Widawsky
2021-02-11 12:07   ` Jonathan Cameron
2021-02-10  0:02 ` [PATCH v2 8/8] MAINTAINERS: Add maintainers of the CXL driver Ben Widawsky

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).