nvdimm.lists.linux.dev archive mirror
* [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests
@ 2021-08-09 22:27 Dan Williams
  2021-08-09 22:27 ` [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields Dan Williams
                   ` (23 more replies)
  0 siblings, 24 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:27 UTC (permalink / raw)
  To: linux-cxl
  Cc: Andy Shevchenko, nvdimm, Jonathan.Cameron, ben.widawsky,
	vishal.l.verma, alison.schofield, ira.weiny

As mentioned in patch 20 of this series, the response of the upstream
QEMU community to CXL device emulation has been underwhelming to date.
Even if that effort picked up, it would still result in a situation
where new driver features and the test capabilities for those features
are split across multiple repositories.

The "nfit_test" approach of mocking up platform resources via an
external test module continues to yield positive results catching
regressions early and often. Repeat that success with a "cxl_test"
module to inject custom crafted topologies and command responses into
the CXL subsystem's sysfs and ioctl UAPIs.

The first target for cxl_test to verify is the integration of CXL with
LIBNVDIMM and the new support for the CXL namespace label + region-label
format. The first 11 patches introduce support for the new label format.

The next 9 patches rework the CXL PCI driver to move more common
infrastructure into the core for the unit test environment to reuse. The
largest change here is disconnecting the mailbox command processing
infrastructure from the PCI specific transport. The unit test
environment replaces the PCI transport with a custom backend with mocked
responses to command requests.

Patch 20 introduces just enough mocked functionality for the cxl_acpi
driver to load against cxl_test resources. Patch 21 fixes the first bug
discovered by this framework, namely that HDM decoder target list maps
were not being filled out.

Finally patches 22 and 23 introduce a cxl_test representation of memory
expander devices. In this initial implementation these memory expander
targets implement just enough command support to pass the basic driver
init sequence and enable label command passthrough to LIBNVDIMM.

The topology of cxl_test includes:
- (4) platform fixed memory windows. One each of a x1-volatile,
  x4-volatile, x1-persistent, and x4-persistent.
- (4) Host bridges each with (2) root ports
- (8) CXL memory expanders, one for each root port
- Each memory expander device supports the GET_SUPPORTED_LOGS, GET_LOG,
  IDENTIFY, GET_LSA, and SET_LSA commands.

Going forward, the expectation is that, where possible, new UAPI-visible
subsystem functionality comes with cxl_test emulation of the same.

The build process for cxl_test is:

    make M=tools/testing/cxl
    make M=tools/testing/cxl modules_install

The implementation methodology of the test module is the same as
nfit_test: the bulk of the emulation comes from replacing symbols that
cxl_acpi and the cxl_core import with mocked implementations of those
symbols. See the "--wrap=" lines in tools/testing/cxl/Kbuild. Some
symbols that need to be replaced are local to the modules, like
match_add_root_ports(). In those cases the local symbol is marked
__weak, with a strong implementation coming from tools/testing/cxl/.
The goal is to be minimally invasive to production code paths.

---

Dan Williams (23):
      libnvdimm/labels: Introduce getters for namespace label fields
      libnvdimm/labels: Add isetcookie validation helper
      libnvdimm/labels: Introduce label setter helpers
      libnvdimm/labels: Add a checksum calculation helper
      libnvdimm/labels: Add blk isetcookie set / validation helpers
      libnvdimm/labels: Add blk special cases for nlabel and position helpers
      libnvdimm/labels: Add type-guid helpers
      libnvdimm/labels: Add claim class helpers
      libnvdimm/labels: Add address-abstraction uuid definitions
      libnvdimm/labels: Add uuid helpers
      libnvdimm/labels: Introduce CXL labels
      cxl/pci: Make 'struct cxl_mem' device type generic
      cxl/mbox: Introduce the mbox_send operation
      cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core
      cxl/pci: Use module_pci_driver
      cxl/mbox: Convert 'enabled_cmds' to DECLARE_BITMAP
      cxl/mbox: Add exclusive kernel command support
      cxl/pmem: Translate NVDIMM label commands to CXL label commands
      cxl/pmem: Add support for multiple nvdimm-bridge objects
      tools/testing/cxl: Introduce a mocked-up CXL port hierarchy
      cxl/bus: Populate the target list at decoder create
      cxl/mbox: Move command definitions to common location
      tools/testing/cxl: Introduce a mock memory device + driver


 Documentation/driver-api/cxl/memory-devices.rst |    3 
 drivers/cxl/acpi.c                              |   65 +
 drivers/cxl/core/Makefile                       |    1 
 drivers/cxl/core/bus.c                          |   69 +-
 drivers/cxl/core/core.h                         |    8 
 drivers/cxl/core/mbox.c                         |  796 +++++++++++++++++
 drivers/cxl/core/memdev.c                       |   84 ++
 drivers/cxl/core/pmem.c                         |   32 +
 drivers/cxl/cxl.h                               |   35 -
 drivers/cxl/cxlmem.h                            |  186 ++++
 drivers/cxl/pci.c                               | 1053 +----------------------
 drivers/cxl/pmem.c                              |  162 +++-
 drivers/nvdimm/btt.c                            |   11 
 drivers/nvdimm/btt.h                            |    4 
 drivers/nvdimm/btt_devs.c                       |   12 
 drivers/nvdimm/core.c                           |   40 -
 drivers/nvdimm/label.c                          |  354 +++++---
 drivers/nvdimm/label.h                          |   96 +-
 drivers/nvdimm/namespace_devs.c                 |  194 ++--
 drivers/nvdimm/nd-core.h                        |    5 
 drivers/nvdimm/nd.h                             |  263 ++++++
 drivers/nvdimm/pfn_devs.c                       |    2 
 include/linux/nd.h                              |    4 
 tools/testing/cxl/Kbuild                        |   29 +
 tools/testing/cxl/mock_acpi.c                   |  105 ++
 tools/testing/cxl/mock_pmem.c                   |   24 +
 tools/testing/cxl/test/Kbuild                   |   10 
 tools/testing/cxl/test/cxl.c                    |  587 +++++++++++++
 tools/testing/cxl/test/mem.c                    |  255 ++++++
 tools/testing/cxl/test/mock.c                   |  155 +++
 tools/testing/cxl/test/mock.h                   |   27 +
 31 files changed, 3234 insertions(+), 1437 deletions(-)
 create mode 100644 drivers/cxl/core/mbox.c
 create mode 100644 tools/testing/cxl/Kbuild
 create mode 100644 tools/testing/cxl/mock_acpi.c
 create mode 100644 tools/testing/cxl/mock_pmem.c
 create mode 100644 tools/testing/cxl/test/Kbuild
 create mode 100644 tools/testing/cxl/test/cxl.c
 create mode 100644 tools/testing/cxl/test/mem.c
 create mode 100644 tools/testing/cxl/test/mock.c
 create mode 100644 tools/testing/cxl/test/mock.h

base-commit: 427832674f6e2413c21ca2271ec945a720608ff2

(cxl.git#pending as of August 9th, 2021)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
@ 2021-08-09 22:27 ` Dan Williams
  2021-08-10 20:48   ` Ben Widawsky
  2021-08-11 18:44   ` Jonathan Cameron
  2021-08-09 22:27 ` [PATCH 02/23] libnvdimm/labels: Add isetcookie validation helper Dan Williams
                   ` (22 subsequent siblings)
  23 siblings, 2 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:27 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for LIBNVDIMM to manage labels on CXL devices deploy
helpers that abstract the label type from the implementation. The CXL
label format is mostly similar to the EFI label format with concepts /
fields added, like dynamic region creation and label type guids, and
other concepts removed like BLK-mode and interleave-set-cookie ids.

In addition to the nsl_get_* helpers, there is the nsl_ref_name() helper that
returns a pointer to a label field rather than copying the data.

Where changes touch the old whitespace style, update to clang-format
expectations.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c          |   20 ++++++-----
 drivers/nvdimm/namespace_devs.c |   70 +++++++++++++++++++--------------------
 drivers/nvdimm/nd.h             |   66 +++++++++++++++++++++++++++++++++++++
 3 files changed, 110 insertions(+), 46 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 9251441fd8a3..b6d845cfb70e 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -350,14 +350,14 @@ static bool slot_valid(struct nvdimm_drvdata *ndd,
 		struct nd_namespace_label *nd_label, u32 slot)
 {
 	/* check that we are written where we expect to be written */
-	if (slot != __le32_to_cpu(nd_label->slot))
+	if (slot != nsl_get_slot(ndd, nd_label))
 		return false;
 
 	/* check checksum */
 	if (namespace_label_has(ndd, checksum)) {
 		u64 sum, sum_save;
 
-		sum_save = __le64_to_cpu(nd_label->checksum);
+		sum_save = nsl_get_checksum(ndd, nd_label);
 		nd_label->checksum = __cpu_to_le64(0);
 		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
 		nd_label->checksum = __cpu_to_le64(sum_save);
@@ -395,13 +395,13 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
 			continue;
 
 		memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
-		flags = __le32_to_cpu(nd_label->flags);
+		flags = nsl_get_flags(ndd, nd_label);
 		if (test_bit(NDD_NOBLK, &nvdimm->flags))
 			flags &= ~NSLABEL_FLAG_LOCAL;
 		nd_label_gen_id(&label_id, label_uuid, flags);
 		res = nvdimm_allocate_dpa(ndd, &label_id,
-				__le64_to_cpu(nd_label->dpa),
-				__le64_to_cpu(nd_label->rawsize));
+					  nsl_get_dpa(ndd, nd_label),
+					  nsl_get_rawsize(ndd, nd_label));
 		nd_dbg_dpa(nd_region, ndd, res, "reserve\n");
 		if (!res)
 			return -EBUSY;
@@ -548,9 +548,9 @@ int nd_label_active_count(struct nvdimm_drvdata *ndd)
 		nd_label = to_label(ndd, slot);
 
 		if (!slot_valid(ndd, nd_label, slot)) {
-			u32 label_slot = __le32_to_cpu(nd_label->slot);
-			u64 size = __le64_to_cpu(nd_label->rawsize);
-			u64 dpa = __le64_to_cpu(nd_label->dpa);
+			u32 label_slot = nsl_get_slot(ndd, nd_label);
+			u64 size = nsl_get_rawsize(ndd, nd_label);
+			u64 dpa = nsl_get_dpa(ndd, nd_label);
 
 			dev_dbg(ndd->dev,
 				"slot%d invalid slot: %d dpa: %llx size: %llx\n",
@@ -879,9 +879,9 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
 	struct resource *res;
 
 	for_each_dpa_resource(ndd, res) {
-		if (res->start != __le64_to_cpu(nd_label->dpa))
+		if (res->start != nsl_get_dpa(ndd, nd_label))
 			continue;
-		if (resource_size(res) != __le64_to_cpu(nd_label->rawsize))
+		if (resource_size(res) != nsl_get_rawsize(ndd, nd_label))
 			continue;
 		return res;
 	}
diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index 2403b71b601e..94da804372bf 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -1235,7 +1235,7 @@ static int namespace_update_uuid(struct nd_region *nd_region,
 			if (!nd_label)
 				continue;
 			nd_label_gen_id(&label_id, nd_label->uuid,
-					__le32_to_cpu(nd_label->flags));
+					nsl_get_flags(ndd, nd_label));
 			if (strcmp(old_label_id.id, label_id.id) == 0)
 				set_bit(ND_LABEL_REAP, &label_ent->flags);
 		}
@@ -1851,9 +1851,9 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
 
 			if (!nd_label)
 				continue;
-			isetcookie = __le64_to_cpu(nd_label->isetcookie);
-			position = __le16_to_cpu(nd_label->position);
-			nlabel = __le16_to_cpu(nd_label->nlabel);
+			isetcookie = nsl_get_isetcookie(ndd, nd_label);
+			position = nsl_get_position(ndd, nd_label);
+			nlabel = nsl_get_nlabel(ndd, nd_label);
 
 			if (isetcookie != cookie)
 				continue;
@@ -1923,8 +1923,8 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
 		 */
 		hw_start = nd_mapping->start;
 		hw_end = hw_start + nd_mapping->size;
-		pmem_start = __le64_to_cpu(nd_label->dpa);
-		pmem_end = pmem_start + __le64_to_cpu(nd_label->rawsize);
+		pmem_start = nsl_get_dpa(ndd, nd_label);
+		pmem_end = pmem_start + nsl_get_rawsize(ndd, nd_label);
 		if (pmem_start >= hw_start && pmem_start < hw_end
 				&& pmem_end <= hw_end && pmem_end > hw_start)
 			/* pass */;
@@ -1947,14 +1947,16 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
  * @nd_label: target pmem namespace label to evaluate
  */
 static struct device *create_namespace_pmem(struct nd_region *nd_region,
-		struct nd_namespace_index *nsindex,
-		struct nd_namespace_label *nd_label)
+					    struct nd_mapping *nd_mapping,
+					    struct nd_namespace_label *nd_label)
 {
+	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+	struct nd_namespace_index *nsindex =
+		to_namespace_index(ndd, ndd->ns_current);
 	u64 cookie = nd_region_interleave_set_cookie(nd_region, nsindex);
 	u64 altcookie = nd_region_interleave_set_altcookie(nd_region);
 	struct nd_label_ent *label_ent;
 	struct nd_namespace_pmem *nspm;
-	struct nd_mapping *nd_mapping;
 	resource_size_t size = 0;
 	struct resource *res;
 	struct device *dev;
@@ -1966,10 +1968,10 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 		return ERR_PTR(-ENXIO);
 	}
 
-	if (__le64_to_cpu(nd_label->isetcookie) != cookie) {
+	if (nsl_get_isetcookie(ndd, nd_label) != cookie) {
 		dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
 				nd_label->uuid);
-		if (__le64_to_cpu(nd_label->isetcookie) != altcookie)
+		if (nsl_get_isetcookie(ndd, nd_label) != altcookie)
 			return ERR_PTR(-EAGAIN);
 
 		dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
@@ -2037,16 +2039,16 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 			continue;
 		}
 
-		size += __le64_to_cpu(label0->rawsize);
-		if (__le16_to_cpu(label0->position) != 0)
+		ndd = to_ndd(nd_mapping);
+		size += nsl_get_rawsize(ndd, label0);
+		if (nsl_get_position(ndd, label0) != 0)
 			continue;
 		WARN_ON(nspm->alt_name || nspm->uuid);
-		nspm->alt_name = kmemdup((void __force *) label0->name,
-				NSLABEL_NAME_LEN, GFP_KERNEL);
+		nspm->alt_name = kmemdup(nsl_ref_name(ndd, label0),
+					 NSLABEL_NAME_LEN, GFP_KERNEL);
 		nspm->uuid = kmemdup((void __force *) label0->uuid,
 				NSLABEL_UUID_LEN, GFP_KERNEL);
-		nspm->lbasize = __le64_to_cpu(label0->lbasize);
-		ndd = to_ndd(nd_mapping);
+		nspm->lbasize = nsl_get_lbasize(ndd, label0);
 		if (namespace_label_has(ndd, abstraction_guid))
 			nspm->nsio.common.claim_class
 				= to_nvdimm_cclass(&label0->abstraction_guid);
@@ -2237,7 +2239,7 @@ static int add_namespace_resource(struct nd_region *nd_region,
 		if (is_namespace_blk(devs[i])) {
 			res = nsblk_add_resource(nd_region, ndd,
 					to_nd_namespace_blk(devs[i]),
-					__le64_to_cpu(nd_label->dpa));
+					nsl_get_dpa(ndd, nd_label));
 			if (!res)
 				return -ENXIO;
 			nd_dbg_dpa(nd_region, ndd, res, "%d assign\n", count);
@@ -2276,7 +2278,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
 		if (nd_label->isetcookie != __cpu_to_le64(nd_set->cookie2)) {
 			dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n",
 					nd_set->cookie2,
-					__le64_to_cpu(nd_label->isetcookie));
+					nsl_get_isetcookie(ndd, nd_label));
 			return ERR_PTR(-EAGAIN);
 		}
 	}
@@ -2288,7 +2290,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
 	dev->type = &namespace_blk_device_type;
 	dev->parent = &nd_region->dev;
 	nsblk->id = -1;
-	nsblk->lbasize = __le64_to_cpu(nd_label->lbasize);
+	nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
 	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN,
 			GFP_KERNEL);
 	if (namespace_label_has(ndd, abstraction_guid))
@@ -2296,15 +2298,14 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
 			= to_nvdimm_cclass(&nd_label->abstraction_guid);
 	if (!nsblk->uuid)
 		goto blk_err;
-	memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
+	nsl_get_name(ndd, nd_label, name);
 	if (name[0]) {
-		nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN,
-				GFP_KERNEL);
+		nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN, GFP_KERNEL);
 		if (!nsblk->alt_name)
 			goto blk_err;
 	}
 	res = nsblk_add_resource(nd_region, ndd, nsblk,
-			__le64_to_cpu(nd_label->dpa));
+			nsl_get_dpa(ndd, nd_label));
 	if (!res)
 		goto blk_err;
 	nd_dbg_dpa(nd_region, ndd, res, "%d: assign\n", count);
@@ -2345,6 +2346,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
 	struct device *dev, **devs = NULL;
 	struct nd_label_ent *label_ent, *e;
 	struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
 	resource_size_t map_end = nd_mapping->start + nd_mapping->size - 1;
 
 	/* "safe" because create_namespace_pmem() might list_move() label_ent */
@@ -2355,7 +2357,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
 
 		if (!nd_label)
 			continue;
-		flags = __le32_to_cpu(nd_label->flags);
+		flags = nsl_get_flags(ndd, nd_label);
 		if (is_nd_blk(&nd_region->dev)
 				== !!(flags & NSLABEL_FLAG_LOCAL))
 			/* pass, region matches label type */;
@@ -2363,9 +2365,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
 			continue;
 
 		/* skip labels that describe extents outside of the region */
-		if (__le64_to_cpu(nd_label->dpa) < nd_mapping->start ||
-		    __le64_to_cpu(nd_label->dpa) > map_end)
-				continue;
+		if (nsl_get_dpa(ndd, nd_label) < nd_mapping->start ||
+		    nsl_get_dpa(ndd, nd_label) > map_end)
+			continue;
 
 		i = add_namespace_resource(nd_region, nd_label, devs, count);
 		if (i < 0)
@@ -2381,13 +2383,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
 
 		if (is_nd_blk(&nd_region->dev))
 			dev = create_namespace_blk(nd_region, nd_label, count);
-		else {
-			struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
-			struct nd_namespace_index *nsindex;
-
-			nsindex = to_namespace_index(ndd, ndd->ns_current);
-			dev = create_namespace_pmem(nd_region, nsindex, nd_label);
-		}
+		else
+			dev = create_namespace_pmem(nd_region, nd_mapping,
+						    nd_label);
 
 		if (IS_ERR(dev)) {
 			switch (PTR_ERR(dev)) {
@@ -2570,7 +2568,7 @@ static int init_active_labels(struct nd_region *nd_region)
 				break;
 			label = nd_label_active(ndd, j);
 			if (test_bit(NDD_NOBLK, &nvdimm->flags)) {
-				u32 flags = __le32_to_cpu(label->flags);
+				u32 flags = nsl_get_flags(ndd, label);
 
 				flags &= ~NSLABEL_FLAG_LOCAL;
 				label->flags = __cpu_to_le32(flags);
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 696b55556d4d..61f43f0edabf 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -35,6 +35,72 @@ struct nvdimm_drvdata {
 	struct kref kref;
 };
 
+static inline const u8 *nsl_ref_name(struct nvdimm_drvdata *ndd,
+				     struct nd_namespace_label *nd_label)
+{
+	return nd_label->name;
+}
+
+static inline u8 *nsl_get_name(struct nvdimm_drvdata *ndd,
+			       struct nd_namespace_label *nd_label, u8 *name)
+{
+	return memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
+}
+
+static inline u32 nsl_get_slot(struct nvdimm_drvdata *ndd,
+			       struct nd_namespace_label *nd_label)
+{
+	return __le32_to_cpu(nd_label->slot);
+}
+
+static inline u64 nsl_get_checksum(struct nvdimm_drvdata *ndd,
+				   struct nd_namespace_label *nd_label)
+{
+	return __le64_to_cpu(nd_label->checksum);
+}
+
+static inline u32 nsl_get_flags(struct nvdimm_drvdata *ndd,
+				struct nd_namespace_label *nd_label)
+{
+	return __le32_to_cpu(nd_label->flags);
+}
+
+static inline u64 nsl_get_dpa(struct nvdimm_drvdata *ndd,
+			      struct nd_namespace_label *nd_label)
+{
+	return __le64_to_cpu(nd_label->dpa);
+}
+
+static inline u64 nsl_get_rawsize(struct nvdimm_drvdata *ndd,
+				  struct nd_namespace_label *nd_label)
+{
+	return __le64_to_cpu(nd_label->rawsize);
+}
+
+static inline u64 nsl_get_isetcookie(struct nvdimm_drvdata *ndd,
+				     struct nd_namespace_label *nd_label)
+{
+	return __le64_to_cpu(nd_label->isetcookie);
+}
+
+static inline u16 nsl_get_position(struct nvdimm_drvdata *ndd,
+				   struct nd_namespace_label *nd_label)
+{
+	return __le16_to_cpu(nd_label->position);
+}
+
+static inline u16 nsl_get_nlabel(struct nvdimm_drvdata *ndd,
+				 struct nd_namespace_label *nd_label)
+{
+	return __le16_to_cpu(nd_label->nlabel);
+}
+
+static inline u64 nsl_get_lbasize(struct nvdimm_drvdata *ndd,
+				  struct nd_namespace_label *nd_label)
+{
+	return __le64_to_cpu(nd_label->lbasize);
+}
+
 struct nd_region_data {
 	int ns_count;
 	int ns_active;



* [PATCH 02/23] libnvdimm/labels: Add isetcookie validation helper
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
  2021-08-09 22:27 ` [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields Dan Williams
@ 2021-08-09 22:27 ` Dan Williams
  2021-08-11 18:44   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 03/23] libnvdimm/labels: Introduce label setter helpers Dan Williams
                   ` (21 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:27 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for handling CXL labels with the same code that handles
EFI labels, add an interleave-set-cookie validation helper rather than
a getter, since the CXL label type does not support this concept. The
answer for CXL labels will always be true.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/namespace_devs.c |    8 +++-----
 drivers/nvdimm/nd.h             |    7 +++++++
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index 94da804372bf..f33245c27cc4 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -1847,15 +1847,13 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
 		list_for_each_entry(label_ent, &nd_mapping->labels, list) {
 			struct nd_namespace_label *nd_label = label_ent->label;
 			u16 position, nlabel;
-			u64 isetcookie;
 
 			if (!nd_label)
 				continue;
-			isetcookie = nsl_get_isetcookie(ndd, nd_label);
 			position = nsl_get_position(ndd, nd_label);
 			nlabel = nsl_get_nlabel(ndd, nd_label);
 
-			if (isetcookie != cookie)
+			if (!nsl_validate_isetcookie(ndd, nd_label, cookie))
 				continue;
 
 			if (memcmp(nd_label->uuid, uuid, NSLABEL_UUID_LEN) != 0)
@@ -1968,10 +1966,10 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 		return ERR_PTR(-ENXIO);
 	}
 
-	if (nsl_get_isetcookie(ndd, nd_label) != cookie) {
+	if (!nsl_validate_isetcookie(ndd, nd_label, cookie)) {
 		dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
 				nd_label->uuid);
-		if (nsl_get_isetcookie(ndd, nd_label) != altcookie)
+		if (!nsl_validate_isetcookie(ndd, nd_label, altcookie))
 			return ERR_PTR(-EAGAIN);
 
 		dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 61f43f0edabf..b3feaf3699f7 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -83,6 +83,13 @@ static inline u64 nsl_get_isetcookie(struct nvdimm_drvdata *ndd,
 	return __le64_to_cpu(nd_label->isetcookie);
 }
 
+static inline bool nsl_validate_isetcookie(struct nvdimm_drvdata *ndd,
+					   struct nd_namespace_label *nd_label,
+					   u64 cookie)
+{
+	return cookie == __le64_to_cpu(nd_label->isetcookie);
+}
+
 static inline u16 nsl_get_position(struct nvdimm_drvdata *ndd,
 				   struct nd_namespace_label *nd_label)
 {



* [PATCH 03/23] libnvdimm/labels: Introduce label setter helpers
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
  2021-08-09 22:27 ` [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields Dan Williams
  2021-08-09 22:27 ` [PATCH 02/23] libnvdimm/labels: Add isetcookie validation helper Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 17:27   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 04/23] libnvdimm/labels: Add a checksum calculation helper Dan Williams
                   ` (20 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for LIBNVDIMM to manage labels on CXL devices deploy
helpers that abstract the label type from the implementation. The CXL
label format is mostly similar to the EFI label format with concepts /
fields added, like dynamic region creation and label type guids, and
other concepts removed like BLK-mode and interleave-set-cookie ids.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c          |   61 +++++++++++++++++------------------
 drivers/nvdimm/namespace_devs.c |    2 +
 drivers/nvdimm/nd.h             |   68 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 98 insertions(+), 33 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index b6d845cfb70e..b40a4eda1d89 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -358,9 +358,9 @@ static bool slot_valid(struct nvdimm_drvdata *ndd,
 		u64 sum, sum_save;
 
 		sum_save = nsl_get_checksum(ndd, nd_label);
-		nd_label->checksum = __cpu_to_le64(0);
+		nsl_set_checksum(ndd, nd_label, 0);
 		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
-		nd_label->checksum = __cpu_to_le64(sum_save);
+		nsl_set_checksum(ndd, nd_label, sum_save);
 		if (sum != sum_save) {
 			dev_dbg(ndd->dev, "fail checksum. slot: %d expect: %#llx\n",
 				slot, sum);
@@ -797,16 +797,15 @@ static int __pmem_label_update(struct nd_region *nd_region,
 	nd_label = to_label(ndd, slot);
 	memset(nd_label, 0, sizeof_namespace_label(ndd));
 	memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
-	if (nspm->alt_name)
-		memcpy(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN);
-	nd_label->flags = __cpu_to_le32(flags);
-	nd_label->nlabel = __cpu_to_le16(nd_region->ndr_mappings);
-	nd_label->position = __cpu_to_le16(pos);
-	nd_label->isetcookie = __cpu_to_le64(cookie);
-	nd_label->rawsize = __cpu_to_le64(resource_size(res));
-	nd_label->lbasize = __cpu_to_le64(nspm->lbasize);
-	nd_label->dpa = __cpu_to_le64(res->start);
-	nd_label->slot = __cpu_to_le32(slot);
+	nsl_set_name(ndd, nd_label, nspm->alt_name);
+	nsl_set_flags(ndd, nd_label, flags);
+	nsl_set_nlabel(ndd, nd_label, nd_region->ndr_mappings);
+	nsl_set_position(ndd, nd_label, pos);
+	nsl_set_isetcookie(ndd, nd_label, cookie);
+	nsl_set_rawsize(ndd, nd_label, resource_size(res));
+	nsl_set_lbasize(ndd, nd_label, nspm->lbasize);
+	nsl_set_dpa(ndd, nd_label, res->start);
+	nsl_set_slot(ndd, nd_label, slot);
 	if (namespace_label_has(ndd, type_guid))
 		guid_copy(&nd_label->type_guid, &nd_set->type_guid);
 	if (namespace_label_has(ndd, abstraction_guid))
@@ -816,9 +815,9 @@ static int __pmem_label_update(struct nd_region *nd_region,
 	if (namespace_label_has(ndd, checksum)) {
 		u64 sum;
 
-		nd_label->checksum = __cpu_to_le64(0);
+		nsl_set_checksum(ndd, nd_label, 0);
 		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
-		nd_label->checksum = __cpu_to_le64(sum);
+		nsl_set_checksum(ndd, nd_label, sum);
 	}
 	nd_dbg_dpa(nd_region, ndd, res, "\n");
 
@@ -1017,10 +1016,8 @@ static int __blk_label_update(struct nd_region *nd_region,
 		nd_label = to_label(ndd, slot);
 		memset(nd_label, 0, sizeof_namespace_label(ndd));
 		memcpy(nd_label->uuid, nsblk->uuid, NSLABEL_UUID_LEN);
-		if (nsblk->alt_name)
-			memcpy(nd_label->name, nsblk->alt_name,
-					NSLABEL_NAME_LEN);
-		nd_label->flags = __cpu_to_le32(NSLABEL_FLAG_LOCAL);
+		nsl_set_name(ndd, nd_label, nsblk->alt_name);
+		nsl_set_flags(ndd, nd_label, NSLABEL_FLAG_LOCAL);
 
 		/*
 		 * Use the presence of the type_guid as a flag to
@@ -1029,23 +1026,23 @@ static int __blk_label_update(struct nd_region *nd_region,
 		 */
 		if (namespace_label_has(ndd, type_guid)) {
 			if (i == min_dpa_idx) {
-				nd_label->nlabel = __cpu_to_le16(nsblk->num_resources);
-				nd_label->position = __cpu_to_le16(0);
+				nsl_set_nlabel(ndd, nd_label, nsblk->num_resources);
+				nsl_set_position(ndd, nd_label, 0);
 			} else {
-				nd_label->nlabel = __cpu_to_le16(0xffff);
-				nd_label->position = __cpu_to_le16(0xffff);
+				nsl_set_nlabel(ndd, nd_label, 0xffff);
+				nsl_set_position(ndd, nd_label, 0xffff);
 			}
-			nd_label->isetcookie = __cpu_to_le64(nd_set->cookie2);
+			nsl_set_isetcookie(ndd, nd_label, nd_set->cookie2);
 		} else {
-			nd_label->nlabel = __cpu_to_le16(0); /* N/A */
-			nd_label->position = __cpu_to_le16(0); /* N/A */
-			nd_label->isetcookie = __cpu_to_le64(0); /* N/A */
+			nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
+			nsl_set_position(ndd, nd_label, 0); /* N/A */
+			nsl_set_isetcookie(ndd, nd_label, 0); /* N/A */
 		}
 
-		nd_label->dpa = __cpu_to_le64(res->start);
-		nd_label->rawsize = __cpu_to_le64(resource_size(res));
-		nd_label->lbasize = __cpu_to_le64(nsblk->lbasize);
-		nd_label->slot = __cpu_to_le32(slot);
+		nsl_set_dpa(ndd, nd_label, res->start);
+		nsl_set_rawsize(ndd, nd_label, resource_size(res));
+		nsl_set_lbasize(ndd, nd_label, nsblk->lbasize);
+		nsl_set_slot(ndd, nd_label, slot);
 		if (namespace_label_has(ndd, type_guid))
 			guid_copy(&nd_label->type_guid, &nd_set->type_guid);
 		if (namespace_label_has(ndd, abstraction_guid))
@@ -1056,10 +1053,10 @@ static int __blk_label_update(struct nd_region *nd_region,
 		if (namespace_label_has(ndd, checksum)) {
 			u64 sum;
 
-			nd_label->checksum = __cpu_to_le64(0);
+			nsl_set_checksum(ndd, nd_label, 0);
 			sum = nd_fletcher64(nd_label,
 					sizeof_namespace_label(ndd), 1);
-			nd_label->checksum = __cpu_to_le64(sum);
+			nsl_set_checksum(ndd, nd_label, sum);
 		}
 
 		/* update label */
diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index f33245c27cc4..fb9e080ce654 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -2569,7 +2569,7 @@ static int init_active_labels(struct nd_region *nd_region)
 				u32 flags = nsl_get_flags(ndd, label);
 
 				flags &= ~NSLABEL_FLAG_LOCAL;
-				label->flags = __cpu_to_le32(flags);
+				nsl_set_flags(ndd, label, flags);
 			}
 			label_ent->label = label;
 
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index b3feaf3699f7..416846fe7818 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -47,6 +47,14 @@ static inline u8 *nsl_get_name(struct nvdimm_drvdata *ndd,
 	return memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
 }
 
+static inline u8 *nsl_set_name(struct nvdimm_drvdata *ndd,
+			       struct nd_namespace_label *nd_label, u8 *name)
+{
+	if (!name)
+		return name;
+	return memcpy(nd_label->name, name, NSLABEL_NAME_LEN);
+}
+
 static inline u32 nsl_get_slot(struct nvdimm_drvdata *ndd,
 			       struct nd_namespace_label *nd_label)
 {
@@ -108,6 +116,66 @@ static inline u64 nsl_get_lbasize(struct nvdimm_drvdata *ndd,
 	return __le64_to_cpu(nd_label->lbasize);
 }
 
+static inline void nsl_set_slot(struct nvdimm_drvdata *ndd,
+				struct nd_namespace_label *nd_label, u32 slot)
+{
+	nd_label->slot = __cpu_to_le32(slot);
+}
+
+static inline void nsl_set_checksum(struct nvdimm_drvdata *ndd,
+				    struct nd_namespace_label *nd_label,
+				    u64 checksum)
+{
+	nd_label->checksum = __cpu_to_le64(checksum);
+}
+
+static inline void nsl_set_flags(struct nvdimm_drvdata *ndd,
+				 struct nd_namespace_label *nd_label, u32 flags)
+{
+	nd_label->flags = __cpu_to_le32(flags);
+}
+
+static inline void nsl_set_dpa(struct nvdimm_drvdata *ndd,
+			       struct nd_namespace_label *nd_label, u64 dpa)
+{
+	nd_label->dpa = __cpu_to_le64(dpa);
+}
+
+static inline void nsl_set_rawsize(struct nvdimm_drvdata *ndd,
+				   struct nd_namespace_label *nd_label,
+				   u64 rawsize)
+{
+	nd_label->rawsize = __cpu_to_le64(rawsize);
+}
+
+static inline void nsl_set_isetcookie(struct nvdimm_drvdata *ndd,
+				      struct nd_namespace_label *nd_label,
+				      u64 isetcookie)
+{
+	nd_label->isetcookie = __cpu_to_le64(isetcookie);
+}
+
+static inline void nsl_set_position(struct nvdimm_drvdata *ndd,
+				    struct nd_namespace_label *nd_label,
+				    u16 position)
+{
+	nd_label->position = __cpu_to_le16(position);
+}
+
+static inline void nsl_set_nlabel(struct nvdimm_drvdata *ndd,
+				  struct nd_namespace_label *nd_label,
+				  u16 nlabel)
+{
+	nd_label->nlabel = __cpu_to_le16(nlabel);
+}
+
+static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
+				   struct nd_namespace_label *nd_label,
+				   u64 lbasize)
+{
+	nd_label->lbasize = __cpu_to_le64(lbasize);
+}
+
 struct nd_region_data {
 	int ns_count;
 	int ns_active;


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 04/23] libnvdimm/labels: Add a checksum calculation helper
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (2 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 03/23] libnvdimm/labels: Introduce label setter helpers Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 18:44   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 05/23] libnvdimm/labels: Add blk isetcookie set / validation helpers Dan Williams
                   ` (19 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for LIBNVDIMM to manage labels on CXL devices deploy
helpers that abstract the label type from the implementation. The CXL
label format is mostly similar to the EFI label format with concepts /
fields added, like dynamic region creation and label type guids, and
other concepts removed like BLK-mode and interleave-set-cookie ids.

CXL labels support checksums by default, but early versions of the EFI
labels did not. Add a validate function that returns true when the
label format does not implement a checksum.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c |   68 +++++++++++++++++++++++++-----------------------
 1 file changed, 35 insertions(+), 33 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index b40a4eda1d89..3f73412dd438 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -346,29 +346,45 @@ static bool preamble_next(struct nvdimm_drvdata *ndd,
 			free, nslot);
 }
 
+static bool nsl_validate_checksum(struct nvdimm_drvdata *ndd,
+				  struct nd_namespace_label *nd_label)
+{
+	u64 sum, sum_save;
+
+	if (!namespace_label_has(ndd, checksum))
+		return true;
+
+	sum_save = nsl_get_checksum(ndd, nd_label);
+	nsl_set_checksum(ndd, nd_label, 0);
+	sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
+	nsl_set_checksum(ndd, nd_label, sum_save);
+	return sum == sum_save;
+}
+
+static void nsl_calculate_checksum(struct nvdimm_drvdata *ndd,
+				   struct nd_namespace_label *nd_label)
+{
+	u64 sum;
+
+	if (!namespace_label_has(ndd, checksum))
+		return;
+	nsl_set_checksum(ndd, nd_label, 0);
+	sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
+	nsl_set_checksum(ndd, nd_label, sum);
+}
+
 static bool slot_valid(struct nvdimm_drvdata *ndd,
 		struct nd_namespace_label *nd_label, u32 slot)
 {
+	bool valid;
+
 	/* check that we are written where we expect to be written */
 	if (slot != nsl_get_slot(ndd, nd_label))
 		return false;
-
-	/* check checksum */
-	if (namespace_label_has(ndd, checksum)) {
-		u64 sum, sum_save;
-
-		sum_save = nsl_get_checksum(ndd, nd_label);
-		nsl_set_checksum(ndd, nd_label, 0);
-		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
-		nsl_set_checksum(ndd, nd_label, sum_save);
-		if (sum != sum_save) {
-			dev_dbg(ndd->dev, "fail checksum. slot: %d expect: %#llx\n",
-				slot, sum);
-			return false;
-		}
-	}
-
-	return true;
+	valid = nsl_validate_checksum(ndd, nd_label);
+	if (!valid)
+		dev_dbg(ndd->dev, "fail checksum. slot: %d\n", slot);
+	return valid;
 }
 
 int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
@@ -812,13 +828,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
 		guid_copy(&nd_label->abstraction_guid,
 				to_abstraction_guid(ndns->claim_class,
 					&nd_label->abstraction_guid));
-	if (namespace_label_has(ndd, checksum)) {
-		u64 sum;
-
-		nsl_set_checksum(ndd, nd_label, 0);
-		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
-		nsl_set_checksum(ndd, nd_label, sum);
-	}
+	nsl_calculate_checksum(ndd, nd_label);
 	nd_dbg_dpa(nd_region, ndd, res, "\n");
 
 	/* update label */
@@ -1049,15 +1059,7 @@ static int __blk_label_update(struct nd_region *nd_region,
 			guid_copy(&nd_label->abstraction_guid,
 					to_abstraction_guid(ndns->claim_class,
 						&nd_label->abstraction_guid));
-
-		if (namespace_label_has(ndd, checksum)) {
-			u64 sum;
-
-			nsl_set_checksum(ndd, nd_label, 0);
-			sum = nd_fletcher64(nd_label,
-					sizeof_namespace_label(ndd), 1);
-			nsl_set_checksum(ndd, nd_label, sum);
-		}
+		nsl_calculate_checksum(ndd, nd_label);
 
 		/* update label */
 		offset = nd_label_offset(ndd, nd_label);



* [PATCH 05/23] libnvdimm/labels: Add blk isetcookie set / validation helpers
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (3 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 04/23] libnvdimm/labels: Add a checksum calculation helper Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 18:45   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 06/23] libnvdimm/labels: Add blk special cases for nlabel and position helpers Dan Williams
                   ` (18 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for LIBNVDIMM to manage labels on CXL devices deploy
helpers that abstract the label type from the implementation. The CXL
label format is mostly similar to the EFI label format with concepts /
fields added, like dynamic region creation and label type guids, and
other concepts removed like BLK-mode and interleave-set-cookie ids.

Given that BLK-mode is not even supported on CXL, hide the BLK-mode
specific details inside the helpers.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c          |   30 ++++++++++++++++++++++++++++--
 drivers/nvdimm/namespace_devs.c |    9 ++-------
 drivers/nvdimm/nd.h             |    4 ++++
 3 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 3f73412dd438..d1a7f399cfe4 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -898,6 +898,33 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
 	return NULL;
 }
 
+static void nsl_set_blk_isetcookie(struct nvdimm_drvdata *ndd,
+				   struct nd_namespace_label *nd_label,
+				   u64 isetcookie)
+{
+	if (namespace_label_has(ndd, type_guid)) {
+		nsl_set_isetcookie(ndd, nd_label, isetcookie);
+		return;
+	}
+	nsl_set_isetcookie(ndd, nd_label, 0); /* N/A */
+}
+
+bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
+				 struct nd_namespace_label *nd_label,
+				 u64 isetcookie)
+{
+	if (!namespace_label_has(ndd, type_guid))
+		return true;
+
+	if (nsl_get_isetcookie(ndd, nd_label) != isetcookie) {
+		dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n", isetcookie,
+			nsl_get_isetcookie(ndd, nd_label));
+		return false;
+	}
+
+	return true;
+}
+
 /*
  * 1/ Account all the labels that can be freed after this update
  * 2/ Allocate and write the label to the staging (next) index
@@ -1042,12 +1069,11 @@ static int __blk_label_update(struct nd_region *nd_region,
 				nsl_set_nlabel(ndd, nd_label, 0xffff);
 				nsl_set_position(ndd, nd_label, 0xffff);
 			}
-			nsl_set_isetcookie(ndd, nd_label, nd_set->cookie2);
 		} else {
 			nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
 			nsl_set_position(ndd, nd_label, 0); /* N/A */
-			nsl_set_isetcookie(ndd, nd_label, 0); /* N/A */
 		}
+		nsl_set_blk_isetcookie(ndd, nd_label, nd_set->cookie2);
 
 		nsl_set_dpa(ndd, nd_label, res->start);
 		nsl_set_rawsize(ndd, nd_label, resource_size(res));
diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index fb9e080ce654..fbd0c2fcea4a 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -2272,14 +2272,9 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
 					&nd_label->type_guid);
 			return ERR_PTR(-EAGAIN);
 		}
-
-		if (nd_label->isetcookie != __cpu_to_le64(nd_set->cookie2)) {
-			dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n",
-					nd_set->cookie2,
-					nsl_get_isetcookie(ndd, nd_label));
-			return ERR_PTR(-EAGAIN);
-		}
 	}
+	if (!nsl_validate_blk_isetcookie(ndd, nd_label, nd_set->cookie2))
+		return ERR_PTR(-EAGAIN);
 
 	nsblk = kzalloc(sizeof(*nsblk), GFP_KERNEL);
 	if (!nsblk)
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 416846fe7818..2a9a608b7f17 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -176,6 +176,10 @@ static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
 	nd_label->lbasize = __cpu_to_le64(lbasize);
 }
 
+bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
+				 struct nd_namespace_label *nd_label,
+				 u64 isetcookie);
+
 struct nd_region_data {
 	int ns_count;
 	int ns_active;



* [PATCH 06/23] libnvdimm/labels: Add blk special cases for nlabel and position helpers
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (4 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 05/23] libnvdimm/labels: Add blk isetcookie set / validation helpers Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 18:45   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 07/23] libnvdimm/labels: Add type-guid helpers Dan Williams
                   ` (17 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for LIBNVDIMM to manage labels on CXL devices deploy
helpers that abstract the label type from the implementation. The CXL
label format is mostly similar to the EFI label format with concepts /
fields added, like dynamic region creation and label type guids, and
other concepts removed like BLK-mode and interleave-set-cookie ids.

Finish off the BLK-mode specific helper conversion with the nlabel and
position behaviour that is specific to EFI v1.2 labels and not the
original v1.1 definition.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c |   46 +++++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index d1a7f399cfe4..7188675c0955 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -898,6 +898,10 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
 	return NULL;
 }
 
+/*
+ * Use the presence of the type_guid as a flag to determine isetcookie
+ * usage and nlabel + position policy for blk-aperture namespaces.
+ */
 static void nsl_set_blk_isetcookie(struct nvdimm_drvdata *ndd,
 				   struct nd_namespace_label *nd_label,
 				   u64 isetcookie)
@@ -925,6 +929,28 @@ bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
 	return true;
 }
 
+static void nsl_set_blk_nlabel(struct nvdimm_drvdata *ndd,
+			       struct nd_namespace_label *nd_label, int nlabel,
+			       bool first)
+{
+	if (!namespace_label_has(ndd, type_guid)) {
+		nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
+		return;
+	}
+	nsl_set_nlabel(ndd, nd_label, first ? nlabel : 0xffff);
+}
+
+static void nsl_set_blk_position(struct nvdimm_drvdata *ndd,
+				 struct nd_namespace_label *nd_label,
+				 bool first)
+{
+	if (!namespace_label_has(ndd, type_guid)) {
+		nsl_set_position(ndd, nd_label, 0);
+		return;
+	}
+	nsl_set_position(ndd, nd_label, first ? 0 : 0xffff);
+}
+
 /*
  * 1/ Account all the labels that can be freed after this update
  * 2/ Allocate and write the label to the staging (next) index
@@ -1056,23 +1082,9 @@ static int __blk_label_update(struct nd_region *nd_region,
 		nsl_set_name(ndd, nd_label, nsblk->alt_name);
 		nsl_set_flags(ndd, nd_label, NSLABEL_FLAG_LOCAL);
 
-		/*
-		 * Use the presence of the type_guid as a flag to
-		 * determine isetcookie usage and nlabel + position
-		 * policy for blk-aperture namespaces.
-		 */
-		if (namespace_label_has(ndd, type_guid)) {
-			if (i == min_dpa_idx) {
-				nsl_set_nlabel(ndd, nd_label, nsblk->num_resources);
-				nsl_set_position(ndd, nd_label, 0);
-			} else {
-				nsl_set_nlabel(ndd, nd_label, 0xffff);
-				nsl_set_position(ndd, nd_label, 0xffff);
-			}
-		} else {
-			nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
-			nsl_set_position(ndd, nd_label, 0); /* N/A */
-		}
+		nsl_set_blk_nlabel(ndd, nd_label, nsblk->num_resources,
+				   i == min_dpa_idx);
+		nsl_set_blk_position(ndd, nd_label, i == min_dpa_idx);
 		nsl_set_blk_isetcookie(ndd, nd_label, nd_set->cookie2);
 
 		nsl_set_dpa(ndd, nd_label, res->start);



* [PATCH 07/23] libnvdimm/labels: Add type-guid helpers
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (5 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 06/23] libnvdimm/labels: Add blk special cases for nlabel and position helpers Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 18:46   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 08/23] libnvdimm/labels: Add claim class helpers Dan Williams
                   ` (16 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for CXL label support, which does not have the type-guid
concept, wrap the existing users with nsl_set_type_guid and
nsl_validate_type_guid. Recall that the type-guid is a value in the ACPI
NFIT table to indicate how the memory range is used / should be
presented to upper layers.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c          |   26 ++++++++++++++++++++++----
 drivers/nvdimm/namespace_devs.c |   19 ++++---------------
 drivers/nvdimm/nd.h             |    2 ++
 3 files changed, 28 insertions(+), 19 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 7188675c0955..294ffc3cb582 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -772,6 +772,26 @@ static void reap_victim(struct nd_mapping *nd_mapping,
 	victim->label = NULL;
 }
 
+static void nsl_set_type_guid(struct nvdimm_drvdata *ndd,
+			      struct nd_namespace_label *nd_label, guid_t *guid)
+{
+	if (namespace_label_has(ndd, type_guid))
+		guid_copy(&nd_label->type_guid, guid);
+}
+
+bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
+			    struct nd_namespace_label *nd_label, guid_t *guid)
+{
+	if (!namespace_label_has(ndd, type_guid))
+		return true;
+	if (!guid_equal(&nd_label->type_guid, guid)) {
+		dev_dbg(ndd->dev, "expect type_guid %pUb got %pUb\n", guid,
+			&nd_label->type_guid);
+		return false;
+	}
+	return true;
+}
+
 static int __pmem_label_update(struct nd_region *nd_region,
 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
 		int pos, unsigned long flags)
@@ -822,8 +842,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
 	nsl_set_lbasize(ndd, nd_label, nspm->lbasize);
 	nsl_set_dpa(ndd, nd_label, res->start);
 	nsl_set_slot(ndd, nd_label, slot);
-	if (namespace_label_has(ndd, type_guid))
-		guid_copy(&nd_label->type_guid, &nd_set->type_guid);
+	nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
 	if (namespace_label_has(ndd, abstraction_guid))
 		guid_copy(&nd_label->abstraction_guid,
 				to_abstraction_guid(ndns->claim_class,
@@ -1091,8 +1110,7 @@ static int __blk_label_update(struct nd_region *nd_region,
 		nsl_set_rawsize(ndd, nd_label, resource_size(res));
 		nsl_set_lbasize(ndd, nd_label, nsblk->lbasize);
 		nsl_set_slot(ndd, nd_label, slot);
-		if (namespace_label_has(ndd, type_guid))
-			guid_copy(&nd_label->type_guid, &nd_set->type_guid);
+		nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
 		if (namespace_label_has(ndd, abstraction_guid))
 			guid_copy(&nd_label->abstraction_guid,
 					to_abstraction_guid(ndns->claim_class,
diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index fbd0c2fcea4a..af5a31dd3147 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -1859,14 +1859,9 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
 			if (memcmp(nd_label->uuid, uuid, NSLABEL_UUID_LEN) != 0)
 				continue;
 
-			if (namespace_label_has(ndd, type_guid)
-					&& !guid_equal(&nd_set->type_guid,
-						&nd_label->type_guid)) {
-				dev_dbg(ndd->dev, "expect type_guid %pUb got %pUb\n",
-						&nd_set->type_guid,
-						&nd_label->type_guid);
+			if (!nsl_validate_type_guid(ndd, nd_label,
+						    &nd_set->type_guid))
 				continue;
-			}
 
 			if (found_uuid) {
 				dev_dbg(ndd->dev, "duplicate entry for uuid\n");
@@ -2265,14 +2260,8 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
 	struct device *dev = NULL;
 	struct resource *res;
 
-	if (namespace_label_has(ndd, type_guid)) {
-		if (!guid_equal(&nd_set->type_guid, &nd_label->type_guid)) {
-			dev_dbg(ndd->dev, "expect type_guid %pUb got %pUb\n",
-					&nd_set->type_guid,
-					&nd_label->type_guid);
-			return ERR_PTR(-EAGAIN);
-		}
-	}
+	if (!nsl_validate_type_guid(ndd, nd_label, &nd_set->type_guid))
+		return ERR_PTR(-EAGAIN);
 	if (!nsl_validate_blk_isetcookie(ndd, nd_label, nd_set->cookie2))
 		return ERR_PTR(-EAGAIN);
 
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 2a9a608b7f17..f3c364df9449 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -179,6 +179,8 @@ static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
 bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
 				 struct nd_namespace_label *nd_label,
 				 u64 isetcookie);
+bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
+			    struct nd_namespace_label *nd_label, guid_t *guid);
 
 struct nd_region_data {
 	int ns_count;



* [PATCH 08/23] libnvdimm/labels: Add claim class helpers
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (6 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 07/23] libnvdimm/labels: Add type-guid helpers Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 18:46   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 09/23] libnvdimm/labels: Add address-abstraction uuid definitions Dan Williams
                   ` (15 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for LIBNVDIMM to manage labels on CXL devices deploy
helpers that abstract the label type from the implementation. The CXL
label format is mostly similar to the EFI label format with concepts /
fields added, like dynamic region creation and label type guids, and
other concepts removed like BLK-mode and interleave-set-cookie ids.

CXL labels do have the concept of a claim class represented by an
"abstraction" identifier. It turns out both label implementations use
the same ids, but EFI encodes them as GUIDs and CXL labels encode them
as UUIDs. For now abstract out the claim class such that the UUID vs
GUID distinction can later be hidden in the helper.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c          |   31 ++++++++++++++++++++++---------
 drivers/nvdimm/label.h          |    1 -
 drivers/nvdimm/namespace_devs.c |   13 ++++---------
 drivers/nvdimm/nd.h             |    2 ++
 4 files changed, 28 insertions(+), 19 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 294ffc3cb582..7f473f9db300 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -724,7 +724,7 @@ static unsigned long nd_label_offset(struct nvdimm_drvdata *ndd,
 		- (unsigned long) to_namespace_index(ndd, 0);
 }
 
-enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
+static enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
 {
 	if (guid_equal(guid, &nvdimm_btt_guid))
 		return NVDIMM_CCLASS_BTT;
@@ -792,6 +792,25 @@ bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
 	return true;
 }
 
+static void nsl_set_claim_class(struct nvdimm_drvdata *ndd,
+				struct nd_namespace_label *nd_label,
+				enum nvdimm_claim_class claim_class)
+{
+	if (!namespace_label_has(ndd, abstraction_guid))
+		return;
+	guid_copy(&nd_label->abstraction_guid,
+		  to_abstraction_guid(claim_class,
+				      &nd_label->abstraction_guid));
+}
+
+enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd,
+					    struct nd_namespace_label *nd_label)
+{
+	if (!namespace_label_has(ndd, abstraction_guid))
+		return NVDIMM_CCLASS_NONE;
+	return to_nvdimm_cclass(&nd_label->abstraction_guid);
+}
+
 static int __pmem_label_update(struct nd_region *nd_region,
 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
 		int pos, unsigned long flags)
@@ -843,10 +862,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
 	nsl_set_dpa(ndd, nd_label, res->start);
 	nsl_set_slot(ndd, nd_label, slot);
 	nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
-	if (namespace_label_has(ndd, abstraction_guid))
-		guid_copy(&nd_label->abstraction_guid,
-				to_abstraction_guid(ndns->claim_class,
-					&nd_label->abstraction_guid));
+	nsl_set_claim_class(ndd, nd_label, ndns->claim_class);
 	nsl_calculate_checksum(ndd, nd_label);
 	nd_dbg_dpa(nd_region, ndd, res, "\n");
 
@@ -1111,10 +1127,7 @@ static int __blk_label_update(struct nd_region *nd_region,
 		nsl_set_lbasize(ndd, nd_label, nsblk->lbasize);
 		nsl_set_slot(ndd, nd_label, slot);
 		nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
-		if (namespace_label_has(ndd, abstraction_guid))
-			guid_copy(&nd_label->abstraction_guid,
-					to_abstraction_guid(ndns->claim_class,
-						&nd_label->abstraction_guid));
+		nsl_set_claim_class(ndd, nd_label, ndns->claim_class);
 		nsl_calculate_checksum(ndd, nd_label);
 
 		/* update label */
diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
index 956b6d1bd8cc..31f94fad7b92 100644
--- a/drivers/nvdimm/label.h
+++ b/drivers/nvdimm/label.h
@@ -135,7 +135,6 @@ struct nd_namespace_label *nd_label_active(struct nvdimm_drvdata *ndd, int n);
 u32 nd_label_alloc_slot(struct nvdimm_drvdata *ndd);
 bool nd_label_free_slot(struct nvdimm_drvdata *ndd, u32 slot);
 u32 nd_label_nfree(struct nvdimm_drvdata *ndd);
-enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid);
 struct nd_region;
 struct nd_namespace_pmem;
 struct nd_namespace_blk;
diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index af5a31dd3147..58c76d74127a 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -2042,10 +2042,8 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 		nspm->uuid = kmemdup((void __force *) label0->uuid,
 				NSLABEL_UUID_LEN, GFP_KERNEL);
 		nspm->lbasize = nsl_get_lbasize(ndd, label0);
-		if (namespace_label_has(ndd, abstraction_guid))
-			nspm->nsio.common.claim_class
-				= to_nvdimm_cclass(&label0->abstraction_guid);
-
+		nspm->nsio.common.claim_class =
+			nsl_get_claim_class(ndd, label0);
 	}
 
 	if (!nspm->alt_name || !nspm->uuid) {
@@ -2273,11 +2271,8 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
 	dev->parent = &nd_region->dev;
 	nsblk->id = -1;
 	nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
-	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN,
-			GFP_KERNEL);
-	if (namespace_label_has(ndd, abstraction_guid))
-		nsblk->common.claim_class
-			= to_nvdimm_cclass(&nd_label->abstraction_guid);
+	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN, GFP_KERNEL);
+	nsblk->common.claim_class = nsl_get_claim_class(ndd, nd_label);
 	if (!nsblk->uuid)
 		goto blk_err;
 	nsl_get_name(ndd, nd_label, name);
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index f3c364df9449..ac80d9680367 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -181,6 +181,8 @@ bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
 				 u64 isetcookie);
 bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
 			    struct nd_namespace_label *nd_label, guid_t *guid);
+enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd,
+					    struct nd_namespace_label *nd_label);
 
 struct nd_region_data {
 	int ns_count;



* [PATCH 09/23] libnvdimm/labels: Add address-abstraction uuid definitions
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (7 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 08/23] libnvdimm/labels: Add claim class helpers Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 18:49   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 10/23] libnvdimm/labels: Add uuid helpers Dan Williams
                   ` (14 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

The EFI definition of the labels represents the Linux "claim class" with
a GUID. The CXL definition of the labels stores the same identifier in
UUID byte order. In preparation for adding CXL label support, enable the
claim class to optionally handle uuids.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c |   54 ++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 52 insertions(+), 2 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 7f473f9db300..2ba31b883b28 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -17,6 +17,11 @@ static guid_t nvdimm_btt2_guid;
 static guid_t nvdimm_pfn_guid;
 static guid_t nvdimm_dax_guid;
 
+static uuid_t nvdimm_btt_uuid;
+static uuid_t nvdimm_btt2_uuid;
+static uuid_t nvdimm_pfn_uuid;
+static uuid_t nvdimm_dax_uuid;
+
 static const char NSINDEX_SIGNATURE[] = "NAMESPACE_INDEX\0";
 
 static u32 best_seq(u32 a, u32 b)
@@ -724,7 +729,7 @@ static unsigned long nd_label_offset(struct nvdimm_drvdata *ndd,
 		- (unsigned long) to_namespace_index(ndd, 0);
 }
 
-static enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
+static enum nvdimm_claim_class guid_to_nvdimm_cclass(guid_t *guid)
 {
 	if (guid_equal(guid, &nvdimm_btt_guid))
 		return NVDIMM_CCLASS_BTT;
@@ -740,6 +745,23 @@ static enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
 	return NVDIMM_CCLASS_UNKNOWN;
 }
 
+/* CXL labels store UUIDs instead of GUIDs for the same data */
+enum nvdimm_claim_class uuid_to_nvdimm_cclass(uuid_t *uuid)
+{
+	if (uuid_equal(uuid, &nvdimm_btt_uuid))
+		return NVDIMM_CCLASS_BTT;
+	else if (uuid_equal(uuid, &nvdimm_btt2_uuid))
+		return NVDIMM_CCLASS_BTT2;
+	else if (uuid_equal(uuid, &nvdimm_pfn_uuid))
+		return NVDIMM_CCLASS_PFN;
+	else if (uuid_equal(uuid, &nvdimm_dax_uuid))
+		return NVDIMM_CCLASS_DAX;
+	else if (uuid_equal(uuid, &uuid_null))
+		return NVDIMM_CCLASS_NONE;
+
+	return NVDIMM_CCLASS_UNKNOWN;
+}
+
 static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
 	guid_t *target)
 {
@@ -761,6 +783,29 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
 		return &guid_null;
 }
 
+/* CXL labels store UUIDs instead of GUIDs for the same data */
+__maybe_unused
+static const uuid_t *to_abstraction_uuid(enum nvdimm_claim_class claim_class,
+					 uuid_t *target)
+{
+	if (claim_class == NVDIMM_CCLASS_BTT)
+		return &nvdimm_btt_uuid;
+	else if (claim_class == NVDIMM_CCLASS_BTT2)
+		return &nvdimm_btt2_uuid;
+	else if (claim_class == NVDIMM_CCLASS_PFN)
+		return &nvdimm_pfn_uuid;
+	else if (claim_class == NVDIMM_CCLASS_DAX)
+		return &nvdimm_dax_uuid;
+	else if (claim_class == NVDIMM_CCLASS_UNKNOWN) {
+		/*
+		 * If we're modifying a namespace for which we don't
+		 * know the claim_class, don't touch the existing uuid.
+		 */
+		return target;
+	} else
+		return &uuid_null;
+}
+
 static void reap_victim(struct nd_mapping *nd_mapping,
 		struct nd_label_ent *victim)
 {
@@ -808,7 +853,7 @@ enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd,
 {
 	if (!namespace_label_has(ndd, abstraction_guid))
 		return NVDIMM_CCLASS_NONE;
-	return to_nvdimm_cclass(&nd_label->abstraction_guid);
+	return guid_to_nvdimm_cclass(&nd_label->abstraction_guid);
 }
 
 static int __pmem_label_update(struct nd_region *nd_region,
@@ -1395,5 +1440,10 @@ int __init nd_label_init(void)
 	WARN_ON(guid_parse(NVDIMM_PFN_GUID, &nvdimm_pfn_guid));
 	WARN_ON(guid_parse(NVDIMM_DAX_GUID, &nvdimm_dax_guid));
 
+	WARN_ON(uuid_parse(NVDIMM_BTT_GUID, &nvdimm_btt_uuid));
+	WARN_ON(uuid_parse(NVDIMM_BTT2_GUID, &nvdimm_btt2_uuid));
+	WARN_ON(uuid_parse(NVDIMM_PFN_GUID, &nvdimm_pfn_uuid));
+	WARN_ON(uuid_parse(NVDIMM_DAX_GUID, &nvdimm_dax_uuid));
+
 	return 0;
 }



* [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (8 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 09/23] libnvdimm/labels: Add address-abstraction uuid definitions Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11  8:05   ` Andy Shevchenko
  2021-08-11 18:13   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 11/23] libnvdimm/labels: Introduce CXL labels Dan Williams
                   ` (13 subsequent siblings)
  23 siblings, 2 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: Andy Shevchenko, nvdimm, Jonathan.Cameron, ben.widawsky,
	vishal.l.verma, alison.schofield, ira.weiny

In preparation for CXL labels that move the uuid to a different offset
in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
proper uuid_t type. That type definition predated the libnvdimm
subsystem, so now is as good a time as any to convert all the uuid
handling in the subsystem to uuid_t to match the helpers.

As for the whitespace changes, all new code is clang-format compliant.

Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/btt.c            |   11 +++--
 drivers/nvdimm/btt.h            |    4 +-
 drivers/nvdimm/btt_devs.c       |   12 +++---
 drivers/nvdimm/core.c           |   40 ++-----------------
 drivers/nvdimm/label.c          |   34 +++++++---------
 drivers/nvdimm/label.h          |    3 -
 drivers/nvdimm/namespace_devs.c |   83 ++++++++++++++++++++-------------------
 drivers/nvdimm/nd-core.h        |    5 +-
 drivers/nvdimm/nd.h             |   37 ++++++++++++++++-
 drivers/nvdimm/pfn_devs.c       |    2 -
 include/linux/nd.h              |    4 +-
 11 files changed, 115 insertions(+), 120 deletions(-)

diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 92dec4952297..1cdfbadb7408 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -973,7 +973,7 @@ static int btt_arena_write_layout(struct arena_info *arena)
 	u64 sum;
 	struct btt_sb *super;
 	struct nd_btt *nd_btt = arena->nd_btt;
-	const u8 *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
+	const uuid_t *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
 
 	ret = btt_map_init(arena);
 	if (ret)
@@ -988,8 +988,8 @@ static int btt_arena_write_layout(struct arena_info *arena)
 		return -ENOMEM;
 
 	strncpy(super->signature, BTT_SIG, BTT_SIG_LEN);
-	memcpy(super->uuid, nd_btt->uuid, 16);
-	memcpy(super->parent_uuid, parent_uuid, 16);
+	uuid_copy(&super->uuid, nd_btt->uuid);
+	uuid_copy(&super->parent_uuid, parent_uuid);
 	super->flags = cpu_to_le32(arena->flags);
 	super->version_major = cpu_to_le16(arena->version_major);
 	super->version_minor = cpu_to_le16(arena->version_minor);
@@ -1575,7 +1575,8 @@ static void btt_blk_cleanup(struct btt *btt)
  * Pointer to a new struct btt on success, NULL on failure.
  */
 static struct btt *btt_init(struct nd_btt *nd_btt, unsigned long long rawsize,
-		u32 lbasize, u8 *uuid, struct nd_region *nd_region)
+			    u32 lbasize, uuid_t *uuid,
+			    struct nd_region *nd_region)
 {
 	int ret;
 	struct btt *btt;
@@ -1694,7 +1695,7 @@ int nvdimm_namespace_attach_btt(struct nd_namespace_common *ndns)
 	}
 	nd_region = to_nd_region(nd_btt->dev.parent);
 	btt = btt_init(nd_btt, rawsize, nd_btt->lbasize, nd_btt->uuid,
-			nd_region);
+		       nd_region);
 	if (!btt)
 		return -ENOMEM;
 	nd_btt->btt = btt;
diff --git a/drivers/nvdimm/btt.h b/drivers/nvdimm/btt.h
index 0c76c0333f6e..fc3512d92ae5 100644
--- a/drivers/nvdimm/btt.h
+++ b/drivers/nvdimm/btt.h
@@ -94,8 +94,8 @@ struct log_group {
 
 struct btt_sb {
 	u8 signature[BTT_SIG_LEN];
-	u8 uuid[16];
-	u8 parent_uuid[16];
+	uuid_t uuid;
+	uuid_t parent_uuid;
 	__le32 flags;
 	__le16 version_major;
 	__le16 version_minor;
diff --git a/drivers/nvdimm/btt_devs.c b/drivers/nvdimm/btt_devs.c
index 05feb97e11ce..5ad45e9e48c9 100644
--- a/drivers/nvdimm/btt_devs.c
+++ b/drivers/nvdimm/btt_devs.c
@@ -180,8 +180,8 @@ bool is_nd_btt(struct device *dev)
 EXPORT_SYMBOL(is_nd_btt);
 
 static struct device *__nd_btt_create(struct nd_region *nd_region,
-		unsigned long lbasize, u8 *uuid,
-		struct nd_namespace_common *ndns)
+				      unsigned long lbasize, uuid_t *uuid,
+				      struct nd_namespace_common *ndns)
 {
 	struct nd_btt *nd_btt;
 	struct device *dev;
@@ -244,14 +244,14 @@ struct device *nd_btt_create(struct nd_region *nd_region)
  */
 bool nd_btt_arena_is_valid(struct nd_btt *nd_btt, struct btt_sb *super)
 {
-	const u8 *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
+	const uuid_t *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
 	u64 checksum;
 
 	if (memcmp(super->signature, BTT_SIG, BTT_SIG_LEN) != 0)
 		return false;
 
-	if (!guid_is_null((guid_t *)&super->parent_uuid))
-		if (memcmp(super->parent_uuid, parent_uuid, 16) != 0)
+	if (!uuid_is_null(&super->parent_uuid))
+		if (!uuid_equal(&super->parent_uuid, parent_uuid))
 			return false;
 
 	checksum = le64_to_cpu(super->checksum);
@@ -319,7 +319,7 @@ static int __nd_btt_probe(struct nd_btt *nd_btt,
 		return rc;
 
 	nd_btt->lbasize = le32_to_cpu(btt_sb->external_lbasize);
-	nd_btt->uuid = kmemdup(btt_sb->uuid, 16, GFP_KERNEL);
+	nd_btt->uuid = kmemdup(&btt_sb->uuid, sizeof(uuid_t), GFP_KERNEL);
 	if (!nd_btt->uuid)
 		return -ENOMEM;
 
diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c
index 7de592d7eff4..690152d62bf0 100644
--- a/drivers/nvdimm/core.c
+++ b/drivers/nvdimm/core.c
@@ -206,38 +206,6 @@ struct device *to_nvdimm_bus_dev(struct nvdimm_bus *nvdimm_bus)
 }
 EXPORT_SYMBOL_GPL(to_nvdimm_bus_dev);
 
-static bool is_uuid_sep(char sep)
-{
-	if (sep == '\n' || sep == '-' || sep == ':' || sep == '\0')
-		return true;
-	return false;
-}
-
-static int nd_uuid_parse(struct device *dev, u8 *uuid_out, const char *buf,
-		size_t len)
-{
-	const char *str = buf;
-	u8 uuid[16];
-	int i;
-
-	for (i = 0; i < 16; i++) {
-		if (!isxdigit(str[0]) || !isxdigit(str[1])) {
-			dev_dbg(dev, "pos: %d buf[%zd]: %c buf[%zd]: %c\n",
-					i, str - buf, str[0],
-					str + 1 - buf, str[1]);
-			return -EINVAL;
-		}
-
-		uuid[i] = (hex_to_bin(str[0]) << 4) | hex_to_bin(str[1]);
-		str += 2;
-		if (is_uuid_sep(*str))
-			str++;
-	}
-
-	memcpy(uuid_out, uuid, sizeof(uuid));
-	return 0;
-}
-
 /**
  * nd_uuid_store: common implementation for writing 'uuid' sysfs attributes
  * @dev: container device for the uuid property
@@ -248,21 +216,21 @@ static int nd_uuid_parse(struct device *dev, u8 *uuid_out, const char *buf,
  * (driver detached)
  * LOCKING: expects nd_device_lock() is held on entry
  */
-int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
+int nd_uuid_store(struct device *dev, uuid_t **uuid_out, const char *buf,
 		size_t len)
 {
-	u8 uuid[16];
+	uuid_t uuid;
 	int rc;
 
 	if (dev->driver)
 		return -EBUSY;
 
-	rc = nd_uuid_parse(dev, uuid, buf, len);
+	rc = uuid_parse(buf, &uuid);
 	if (rc)
 		return rc;
 
 	kfree(*uuid_out);
-	*uuid_out = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
+	*uuid_out = kmemdup(&uuid, sizeof(uuid), GFP_KERNEL);
 	if (!(*uuid_out))
 		return -ENOMEM;
 
diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 2ba31b883b28..99608e6aeaae 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -326,7 +326,8 @@ static bool preamble_index(struct nvdimm_drvdata *ndd, int idx,
 	return true;
 }
 
-char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags)
+char *nd_label_gen_id(struct nd_label_id *label_id, const uuid_t *uuid,
+		      u32 flags)
 {
 	if (!label_id || !uuid)
 		return NULL;
@@ -405,9 +406,9 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
 		struct nvdimm *nvdimm = to_nvdimm(ndd->dev);
 		struct nd_namespace_label *nd_label;
 		struct nd_region *nd_region = NULL;
-		u8 label_uuid[NSLABEL_UUID_LEN];
 		struct nd_label_id label_id;
 		struct resource *res;
+		uuid_t label_uuid;
 		u32 flags;
 
 		nd_label = to_label(ndd, slot);
@@ -415,11 +416,11 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
 		if (!slot_valid(ndd, nd_label, slot))
 			continue;
 
-		memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
+		nsl_get_uuid(ndd, nd_label, &label_uuid);
 		flags = nsl_get_flags(ndd, nd_label);
 		if (test_bit(NDD_NOBLK, &nvdimm->flags))
 			flags &= ~NSLABEL_FLAG_LOCAL;
-		nd_label_gen_id(&label_id, label_uuid, flags);
+		nd_label_gen_id(&label_id, &label_uuid, flags);
 		res = nvdimm_allocate_dpa(ndd, &label_id,
 					  nsl_get_dpa(ndd, nd_label),
 					  nsl_get_rawsize(ndd, nd_label));
@@ -896,7 +897,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
 
 	nd_label = to_label(ndd, slot);
 	memset(nd_label, 0, sizeof_namespace_label(ndd));
-	memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
+	nsl_set_uuid(ndd, nd_label, nspm->uuid);
 	nsl_set_name(ndd, nd_label, nspm->alt_name);
 	nsl_set_flags(ndd, nd_label, flags);
 	nsl_set_nlabel(ndd, nd_label, nd_region->ndr_mappings);
@@ -923,9 +924,8 @@ static int __pmem_label_update(struct nd_region *nd_region,
 	list_for_each_entry(label_ent, &nd_mapping->labels, list) {
 		if (!label_ent->label)
 			continue;
-		if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags)
-				|| memcmp(nspm->uuid, label_ent->label->uuid,
-					NSLABEL_UUID_LEN) == 0)
+		if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags) ||
+		    uuid_equal(nspm->uuid, nsl_ref_uuid(ndd, label_ent->label)))
 			reap_victim(nd_mapping, label_ent);
 	}
 
@@ -1050,7 +1050,6 @@ static int __blk_label_update(struct nd_region *nd_region,
 	unsigned long *free, *victim_map = NULL;
 	struct resource *res, **old_res_list;
 	struct nd_label_id label_id;
-	u8 uuid[NSLABEL_UUID_LEN];
 	int min_dpa_idx = 0;
 	LIST_HEAD(list);
 	u32 nslot, slot;
@@ -1088,8 +1087,7 @@ static int __blk_label_update(struct nd_region *nd_region,
 		/* mark unused labels for garbage collection */
 		for_each_clear_bit_le(slot, free, nslot) {
 			nd_label = to_label(ndd, slot);
-			memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
-			if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
+			if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
 				continue;
 			res = to_resource(ndd, nd_label);
 			if (res && is_old_resource(res, old_res_list,
@@ -1158,7 +1156,7 @@ static int __blk_label_update(struct nd_region *nd_region,
 
 		nd_label = to_label(ndd, slot);
 		memset(nd_label, 0, sizeof_namespace_label(ndd));
-		memcpy(nd_label->uuid, nsblk->uuid, NSLABEL_UUID_LEN);
+		nsl_set_uuid(ndd, nd_label, nsblk->uuid);
 		nsl_set_name(ndd, nd_label, nsblk->alt_name);
 		nsl_set_flags(ndd, nd_label, NSLABEL_FLAG_LOCAL);
 
@@ -1206,8 +1204,7 @@ static int __blk_label_update(struct nd_region *nd_region,
 		if (!nd_label)
 			continue;
 		nlabel++;
-		memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
-		if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
+		if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
 			continue;
 		nlabel--;
 		list_move(&label_ent->list, &list);
@@ -1237,8 +1234,7 @@ static int __blk_label_update(struct nd_region *nd_region,
 	}
 	for_each_clear_bit_le(slot, free, nslot) {
 		nd_label = to_label(ndd, slot);
-		memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
-		if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
+		if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
 			continue;
 		res = to_resource(ndd, nd_label);
 		res->flags &= ~DPA_RESOURCE_ADJUSTED;
@@ -1318,12 +1314,11 @@ static int init_labels(struct nd_mapping *nd_mapping, int num_labels)
 	return max(num_labels, old_num_labels);
 }
 
-static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
+static int del_labels(struct nd_mapping *nd_mapping, uuid_t *uuid)
 {
 	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
 	struct nd_label_ent *label_ent, *e;
 	struct nd_namespace_index *nsindex;
-	u8 label_uuid[NSLABEL_UUID_LEN];
 	unsigned long *free;
 	LIST_HEAD(list);
 	u32 nslot, slot;
@@ -1343,8 +1338,7 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
 		if (!nd_label)
 			continue;
 		active++;
-		memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
-		if (memcmp(label_uuid, uuid, NSLABEL_UUID_LEN) != 0)
+		if (!nsl_validate_uuid(ndd, nd_label, uuid))
 			continue;
 		active--;
 		slot = to_slot(ndd, nd_label);
diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
index 31f94fad7b92..e6e77691dbec 100644
--- a/drivers/nvdimm/label.h
+++ b/drivers/nvdimm/label.h
@@ -14,7 +14,6 @@ enum {
 	NSINDEX_SIG_LEN = 16,
 	NSINDEX_ALIGN = 256,
 	NSINDEX_SEQ_MASK = 0x3,
-	NSLABEL_UUID_LEN = 16,
 	NSLABEL_NAME_LEN = 64,
 	NSLABEL_FLAG_ROLABEL = 0x1,  /* read-only label */
 	NSLABEL_FLAG_LOCAL = 0x2,    /* DIMM-local namespace */
@@ -80,7 +79,7 @@ struct nd_namespace_index {
  * @unused: must be zero
  */
 struct nd_namespace_label {
-	u8 uuid[NSLABEL_UUID_LEN];
+	uuid_t uuid;
 	u8 name[NSLABEL_NAME_LEN];
 	__le32 flags;
 	__le16 nlabel;
diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index 58c76d74127a..20ea3ccd1f29 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -51,7 +51,7 @@ static bool is_namespace_io(const struct device *dev);
 
 static int is_uuid_busy(struct device *dev, void *data)
 {
-	u8 *uuid1 = data, *uuid2 = NULL;
+	uuid_t *uuid1 = data, *uuid2 = NULL;
 
 	if (is_namespace_pmem(dev)) {
 		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
@@ -71,7 +71,7 @@ static int is_uuid_busy(struct device *dev, void *data)
 		uuid2 = nd_pfn->uuid;
 	}
 
-	if (uuid2 && memcmp(uuid1, uuid2, NSLABEL_UUID_LEN) == 0)
+	if (uuid2 && uuid_equal(uuid1, uuid2))
 		return -EBUSY;
 
 	return 0;
@@ -89,7 +89,7 @@ static int is_namespace_uuid_busy(struct device *dev, void *data)
  * @dev: any device on a nvdimm_bus
  * @uuid: uuid to check
  */
-bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
+bool nd_is_uuid_unique(struct device *dev, uuid_t *uuid)
 {
 	struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev);
 
@@ -192,12 +192,10 @@ const char *nvdimm_namespace_disk_name(struct nd_namespace_common *ndns,
 }
 EXPORT_SYMBOL(nvdimm_namespace_disk_name);
 
-const u8 *nd_dev_to_uuid(struct device *dev)
+const uuid_t *nd_dev_to_uuid(struct device *dev)
 {
-	static const u8 null_uuid[16];
-
 	if (!dev)
-		return null_uuid;
+		return &uuid_null;
 
 	if (is_namespace_pmem(dev)) {
 		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
@@ -208,7 +206,7 @@ const u8 *nd_dev_to_uuid(struct device *dev)
 
 		return nsblk->uuid;
 	} else
-		return null_uuid;
+		return &uuid_null;
 }
 EXPORT_SYMBOL(nd_dev_to_uuid);
 
@@ -938,7 +936,8 @@ static void nd_namespace_pmem_set_resource(struct nd_region *nd_region,
 	res->end = res->start + size - 1;
 }
 
-static bool uuid_not_set(const u8 *uuid, struct device *dev, const char *where)
+static bool uuid_not_set(const uuid_t *uuid, struct device *dev,
+			 const char *where)
 {
 	if (!uuid) {
 		dev_dbg(dev, "%s: uuid not set\n", where);
@@ -957,7 +956,7 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
 	struct nd_label_id label_id;
 	u32 flags = 0, remainder;
 	int rc, i, id = -1;
-	u8 *uuid = NULL;
+	uuid_t *uuid = NULL;
 
 	if (dev->driver || ndns->claim)
 		return -EBUSY;
@@ -1050,7 +1049,7 @@ static ssize_t size_store(struct device *dev,
 {
 	struct nd_region *nd_region = to_nd_region(dev->parent);
 	unsigned long long val;
-	u8 **uuid = NULL;
+	uuid_t **uuid = NULL;
 	int rc;
 
 	rc = kstrtoull(buf, 0, &val);
@@ -1147,7 +1146,7 @@ static ssize_t size_show(struct device *dev,
 }
 static DEVICE_ATTR(size, 0444, size_show, size_store);
 
-static u8 *namespace_to_uuid(struct device *dev)
+static uuid_t *namespace_to_uuid(struct device *dev)
 {
 	if (is_namespace_pmem(dev)) {
 		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
@@ -1161,10 +1160,10 @@ static u8 *namespace_to_uuid(struct device *dev)
 		return ERR_PTR(-ENXIO);
 }
 
-static ssize_t uuid_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
 {
-	u8 *uuid = namespace_to_uuid(dev);
+	uuid_t *uuid = namespace_to_uuid(dev);
 
 	if (IS_ERR(uuid))
 		return PTR_ERR(uuid);
@@ -1181,7 +1180,8 @@ static ssize_t uuid_show(struct device *dev,
  * @old_uuid: reference to the uuid storage location in the namespace object
  */
 static int namespace_update_uuid(struct nd_region *nd_region,
-		struct device *dev, u8 *new_uuid, u8 **old_uuid)
+				 struct device *dev, uuid_t *new_uuid,
+				 uuid_t **old_uuid)
 {
 	u32 flags = is_namespace_blk(dev) ? NSLABEL_FLAG_LOCAL : 0;
 	struct nd_label_id old_label_id;
@@ -1234,7 +1234,7 @@ static int namespace_update_uuid(struct nd_region *nd_region,
 
 			if (!nd_label)
 				continue;
-			nd_label_gen_id(&label_id, nd_label->uuid,
+			nd_label_gen_id(&label_id, nsl_ref_uuid(ndd, nd_label),
 					nsl_get_flags(ndd, nd_label));
 			if (strcmp(old_label_id.id, label_id.id) == 0)
 				set_bit(ND_LABEL_REAP, &label_ent->flags);
@@ -1251,9 +1251,9 @@ static ssize_t uuid_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
 	struct nd_region *nd_region = to_nd_region(dev->parent);
-	u8 *uuid = NULL;
+	uuid_t *uuid = NULL;
+	uuid_t **ns_uuid;
 	ssize_t rc = 0;
-	u8 **ns_uuid;
 
 	if (is_namespace_pmem(dev)) {
 		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
@@ -1378,8 +1378,8 @@ static ssize_t dpa_extents_show(struct device *dev,
 {
 	struct nd_region *nd_region = to_nd_region(dev->parent);
 	struct nd_label_id label_id;
+	uuid_t *uuid = NULL;
 	int count = 0, i;
-	u8 *uuid = NULL;
 	u32 flags = 0;
 
 	nvdimm_bus_lock(dev);
@@ -1831,8 +1831,8 @@ static struct device **create_namespace_io(struct nd_region *nd_region)
 	return devs;
 }
 
-static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
-		u64 cookie, u16 pos)
+static bool has_uuid_at_pos(struct nd_region *nd_region, const uuid_t *uuid,
+			    u64 cookie, u16 pos)
 {
 	struct nd_namespace_label *found = NULL;
 	int i;
@@ -1856,7 +1856,7 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
 			if (!nsl_validate_isetcookie(ndd, nd_label, cookie))
 				continue;
 
-			if (memcmp(nd_label->uuid, uuid, NSLABEL_UUID_LEN) != 0)
+			if (!nsl_validate_uuid(ndd, nd_label, uuid))
 				continue;
 
 			if (!nsl_validate_type_guid(ndd, nd_label,
@@ -1881,7 +1881,7 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
 	return found != NULL;
 }
 
-static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
+static int select_pmem_id(struct nd_region *nd_region, const uuid_t *pmem_id)
 {
 	int i;
 
@@ -1900,7 +1900,7 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
 			nd_label = label_ent->label;
 			if (!nd_label)
 				continue;
-			if (memcmp(nd_label->uuid, pmem_id, NSLABEL_UUID_LEN) == 0)
+			if (nsl_validate_uuid(ndd, nd_label, pmem_id))
 				break;
 			nd_label = NULL;
 		}
@@ -1923,7 +1923,8 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
 			/* pass */;
 		else {
 			dev_dbg(&nd_region->dev, "%s invalid label for %pUb\n",
-					dev_name(ndd->dev), nd_label->uuid);
+				dev_name(ndd->dev),
+				nsl_ref_uuid(ndd, nd_label));
 			return -EINVAL;
 		}
 
@@ -1963,12 +1964,12 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 
 	if (!nsl_validate_isetcookie(ndd, nd_label, cookie)) {
 		dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
-				nd_label->uuid);
+			nsl_ref_uuid(ndd, nd_label));
 		if (!nsl_validate_isetcookie(ndd, nd_label, altcookie))
 			return ERR_PTR(-EAGAIN);
 
 		dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
-				nd_label->uuid);
+			nsl_ref_uuid(ndd, nd_label));
 	}
 
 	nspm = kzalloc(sizeof(*nspm), GFP_KERNEL);
@@ -1984,9 +1985,11 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 	res->flags = IORESOURCE_MEM;
 
 	for (i = 0; i < nd_region->ndr_mappings; i++) {
-		if (has_uuid_at_pos(nd_region, nd_label->uuid, cookie, i))
+		if (has_uuid_at_pos(nd_region, nsl_ref_uuid(ndd, nd_label),
+				    cookie, i))
 			continue;
-		if (has_uuid_at_pos(nd_region, nd_label->uuid, altcookie, i))
+		if (has_uuid_at_pos(nd_region, nsl_ref_uuid(ndd, nd_label),
+				    altcookie, i))
 			continue;
 		break;
 	}
@@ -2000,7 +2003,7 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 		 * find a dimm with two instances of the same uuid.
 		 */
 		dev_err(&nd_region->dev, "%s missing label for %pUb\n",
-				nvdimm_name(nvdimm), nd_label->uuid);
+			nvdimm_name(nvdimm), nsl_ref_uuid(ndd, nd_label));
 		rc = -EINVAL;
 		goto err;
 	}
@@ -2013,7 +2016,7 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 	 * the dimm being enabled (i.e. nd_label_reserve_dpa()
 	 * succeeded).
 	 */
-	rc = select_pmem_id(nd_region, nd_label->uuid);
+	rc = select_pmem_id(nd_region, nsl_ref_uuid(ndd, nd_label));
 	if (rc)
 		goto err;
 
@@ -2039,8 +2042,8 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
 		WARN_ON(nspm->alt_name || nspm->uuid);
 		nspm->alt_name = kmemdup(nsl_ref_name(ndd, label0),
 					 NSLABEL_NAME_LEN, GFP_KERNEL);
-		nspm->uuid = kmemdup((void __force *) label0->uuid,
-				NSLABEL_UUID_LEN, GFP_KERNEL);
+		nspm->uuid = kmemdup(nsl_ref_uuid(ndd, label0), sizeof(uuid_t),
+				     GFP_KERNEL);
 		nspm->lbasize = nsl_get_lbasize(ndd, label0);
 		nspm->nsio.common.claim_class =
 			nsl_get_claim_class(ndd, label0);
@@ -2217,15 +2220,15 @@ static int add_namespace_resource(struct nd_region *nd_region,
 	int i;
 
 	for (i = 0; i < count; i++) {
-		u8 *uuid = namespace_to_uuid(devs[i]);
+		uuid_t *uuid = namespace_to_uuid(devs[i]);
 		struct resource *res;
 
-		if (IS_ERR_OR_NULL(uuid)) {
+		if (IS_ERR(uuid)) {
 			WARN_ON(1);
 			continue;
 		}
 
-		if (memcmp(uuid, nd_label->uuid, NSLABEL_UUID_LEN) != 0)
+		if (!nsl_validate_uuid(ndd, nd_label, uuid))
 			continue;
 		if (is_namespace_blk(devs[i])) {
 			res = nsblk_add_resource(nd_region, ndd,
@@ -2236,8 +2239,8 @@ static int add_namespace_resource(struct nd_region *nd_region,
 			nd_dbg_dpa(nd_region, ndd, res, "%d assign\n", count);
 		} else {
 			dev_err(&nd_region->dev,
-					"error: conflicting extents for uuid: %pUb\n",
-					nd_label->uuid);
+				"error: conflicting extents for uuid: %pUb\n",
+				uuid);
 			return -ENXIO;
 		}
 		break;
@@ -2271,7 +2274,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
 	dev->parent = &nd_region->dev;
 	nsblk->id = -1;
 	nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
-	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN, GFP_KERNEL);
+	nsblk->uuid = kmemdup(nsl_ref_uuid(ndd, nd_label), sizeof(uuid_t), GFP_KERNEL);
 	nsblk->common.claim_class = nsl_get_claim_class(ndd, nd_label);
 	if (!nsblk->uuid)
 		goto blk_err;
diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h
index 564faa36a3ca..a11850dd475d 100644
--- a/drivers/nvdimm/nd-core.h
+++ b/drivers/nvdimm/nd-core.h
@@ -126,8 +126,9 @@ void nvdimm_bus_destroy_ndctl(struct nvdimm_bus *nvdimm_bus);
 void nd_synchronize(void);
 void __nd_device_register(struct device *dev);
 struct nd_label_id;
-char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags);
-bool nd_is_uuid_unique(struct device *dev, u8 *uuid);
+char *nd_label_gen_id(struct nd_label_id *label_id, const uuid_t *uuid,
+		      u32 flags);
+bool nd_is_uuid_unique(struct device *dev, uuid_t *uuid);
 struct nd_region;
 struct nvdimm_drvdata;
 struct nd_mapping;
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index ac80d9680367..132a8021e3ad 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -176,6 +176,35 @@ static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
 	nd_label->lbasize = __cpu_to_le64(lbasize);
 }
 
+static inline const uuid_t *nsl_get_uuid(struct nvdimm_drvdata *ndd,
+					 struct nd_namespace_label *nd_label,
+					 uuid_t *uuid)
+{
+	uuid_copy(uuid, &nd_label->uuid);
+	return uuid;
+}
+
+static inline const uuid_t *nsl_set_uuid(struct nvdimm_drvdata *ndd,
+					 struct nd_namespace_label *nd_label,
+					 const uuid_t *uuid)
+{
+	uuid_copy(&nd_label->uuid, uuid);
+	return &nd_label->uuid;
+}
+
+static inline bool nsl_validate_uuid(struct nvdimm_drvdata *ndd,
+				     struct nd_namespace_label *nd_label,
+				     const uuid_t *uuid)
+{
+	return uuid_equal(&nd_label->uuid, uuid);
+}
+
+static inline const uuid_t *nsl_ref_uuid(struct nvdimm_drvdata *ndd,
+					 struct nd_namespace_label *nd_label)
+{
+	return &nd_label->uuid;
+}
+
 bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
 				 struct nd_namespace_label *nd_label,
 				 u64 isetcookie);
@@ -334,7 +363,7 @@ struct nd_btt {
 	struct btt *btt;
 	unsigned long lbasize;
 	u64 size;
-	u8 *uuid;
+	uuid_t *uuid;
 	int id;
 	int initial_offset;
 	u16 version_major;
@@ -349,7 +378,7 @@ enum nd_pfn_mode {
 
 struct nd_pfn {
 	int id;
-	u8 *uuid;
+	uuid_t *uuid;
 	struct device dev;
 	unsigned long align;
 	unsigned long npfns;
@@ -377,7 +406,7 @@ void wait_nvdimm_bus_probe_idle(struct device *dev);
 void nd_device_register(struct device *dev);
 void nd_device_unregister(struct device *dev, enum nd_async_mode mode);
 void nd_device_notify(struct device *dev, enum nvdimm_event event);
-int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
+int nd_uuid_store(struct device *dev, uuid_t **uuid_out, const char *buf,
 		size_t len);
 ssize_t nd_size_select_show(unsigned long current_size,
 		const unsigned long *supported, char *buf);
@@ -560,6 +589,6 @@ static inline bool is_bad_pmem(struct badblocks *bb, sector_t sector,
 	return false;
 }
 resource_size_t nd_namespace_blk_validate(struct nd_namespace_blk *nsblk);
-const u8 *nd_dev_to_uuid(struct device *dev);
+const uuid_t *nd_dev_to_uuid(struct device *dev);
 bool pmem_should_map_pages(struct device *dev);
 #endif /* __ND_H__ */
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index b499df630d4d..58eda16f5c53 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -452,7 +452,7 @@ int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig)
 	unsigned long align, start_pad;
 	struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb;
 	struct nd_namespace_common *ndns = nd_pfn->ndns;
-	const u8 *parent_uuid = nd_dev_to_uuid(&ndns->dev);
+	const uuid_t *parent_uuid = nd_dev_to_uuid(&ndns->dev);
 
 	if (!pfn_sb || !ndns)
 		return -ENODEV;
diff --git a/include/linux/nd.h b/include/linux/nd.h
index ee9ad76afbba..8a8c63edb1b2 100644
--- a/include/linux/nd.h
+++ b/include/linux/nd.h
@@ -88,7 +88,7 @@ struct nd_namespace_pmem {
 	struct nd_namespace_io nsio;
 	unsigned long lbasize;
 	char *alt_name;
-	u8 *uuid;
+	uuid_t *uuid;
 	int id;
 };
 
@@ -105,7 +105,7 @@ struct nd_namespace_pmem {
 struct nd_namespace_blk {
 	struct nd_namespace_common common;
 	char *alt_name;
-	u8 *uuid;
+	uuid_t *uuid;
 	int id;
 	unsigned long lbasize;
 	resource_size_t size;



* [PATCH 11/23] libnvdimm/labels: Introduce CXL labels
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (9 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 10/23] libnvdimm/labels: Add uuid helpers Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-11 18:41   ` Jonathan Cameron
  2021-08-09 22:28 ` [PATCH 12/23] cxl/pci: Make 'struct cxl_mem' device type generic Dan Williams
                   ` (12 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

Now that all use sites of label data have been converted to nsl_*
helpers, introduce the CXL label format. The ->cxl flag in
nvdimm_drvdata indicates the label format the device expects. A
follow-on patch allows a bus provider to select the label style.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/label.c |   48 +++++++++++-------
 drivers/nvdimm/label.h |   92 ++++++++++++++++++++++++----------
 drivers/nvdimm/nd.h    |  131 +++++++++++++++++++++++++++++++++++++-----------
 3 files changed, 199 insertions(+), 72 deletions(-)

diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 99608e6aeaae..d51899a32dd7 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -22,6 +22,9 @@ static uuid_t nvdimm_btt2_uuid;
 static uuid_t nvdimm_pfn_uuid;
 static uuid_t nvdimm_dax_uuid;
 
+static uuid_t cxl_region_uuid;
+static uuid_t cxl_namespace_uuid;
+
 static const char NSINDEX_SIGNATURE[] = "NAMESPACE_INDEX\0";
 
 static u32 best_seq(u32 a, u32 b)
@@ -357,7 +360,7 @@ static bool nsl_validate_checksum(struct nvdimm_drvdata *ndd,
 {
 	u64 sum, sum_save;
 
-	if (!namespace_label_has(ndd, checksum))
+	if (!ndd->cxl && !efi_namespace_label_has(ndd, checksum))
 		return true;
 
 	sum_save = nsl_get_checksum(ndd, nd_label);
@@ -372,7 +375,7 @@ static void nsl_calculate_checksum(struct nvdimm_drvdata *ndd,
 {
 	u64 sum;
 
-	if (!namespace_label_has(ndd, checksum))
+	if (!ndd->cxl && !efi_namespace_label_has(ndd, checksum))
 		return;
 	nsl_set_checksum(ndd, nd_label, 0);
 	sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
@@ -785,7 +788,6 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
 }
 
 /* CXL labels store UUIDs instead of GUIDs for the same data */
-__maybe_unused
 static const uuid_t *to_abstraction_uuid(enum nvdimm_claim_class claim_class,
 					 uuid_t *target)
 {
@@ -821,18 +823,18 @@ static void reap_victim(struct nd_mapping *nd_mapping,
 static void nsl_set_type_guid(struct nvdimm_drvdata *ndd,
 			      struct nd_namespace_label *nd_label, guid_t *guid)
 {
-	if (namespace_label_has(ndd, type_guid))
-		guid_copy(&nd_label->type_guid, guid);
+	if (efi_namespace_label_has(ndd, type_guid))
+		guid_copy(&nd_label->efi.type_guid, guid);
 }
 
 bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
 			    struct nd_namespace_label *nd_label, guid_t *guid)
 {
-	if (!namespace_label_has(ndd, type_guid))
+	if (ndd->cxl || !efi_namespace_label_has(ndd, type_guid))
 		return true;
-	if (!guid_equal(&nd_label->type_guid, guid)) {
+	if (!guid_equal(&nd_label->efi.type_guid, guid)) {
 		dev_dbg(ndd->dev, "expect type_guid %pUb got %pUb\n", guid,
-			&nd_label->type_guid);
+			&nd_label->efi.type_guid);
 		return false;
 	}
 	return true;
@@ -842,19 +844,28 @@ static void nsl_set_claim_class(struct nvdimm_drvdata *ndd,
 				struct nd_namespace_label *nd_label,
 				enum nvdimm_claim_class claim_class)
 {
-	if (!namespace_label_has(ndd, abstraction_guid))
+	if (ndd->cxl) {
+		uuid_copy(&nd_label->cxl.abstraction_uuid,
+			  to_abstraction_uuid(claim_class,
+					      &nd_label->cxl.abstraction_uuid));
 		return;
-	guid_copy(&nd_label->abstraction_guid,
+	}
+
+	if (!efi_namespace_label_has(ndd, abstraction_guid))
+		return;
+	guid_copy(&nd_label->efi.abstraction_guid,
 		  to_abstraction_guid(claim_class,
-				      &nd_label->abstraction_guid));
+				      &nd_label->efi.abstraction_guid));
 }
 
 enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd,
 					    struct nd_namespace_label *nd_label)
 {
-	if (!namespace_label_has(ndd, abstraction_guid))
+	if (ndd->cxl)
+		return uuid_to_nvdimm_cclass(&nd_label->cxl.abstraction_uuid);
+	if (!efi_namespace_label_has(ndd, abstraction_guid))
 		return NVDIMM_CCLASS_NONE;
-	return guid_to_nvdimm_cclass(&nd_label->abstraction_guid);
+	return guid_to_nvdimm_cclass(&nd_label->efi.abstraction_guid);
 }
 
 static int __pmem_label_update(struct nd_region *nd_region,
@@ -986,7 +997,7 @@ static void nsl_set_blk_isetcookie(struct nvdimm_drvdata *ndd,
 				   struct nd_namespace_label *nd_label,
 				   u64 isetcookie)
 {
-	if (namespace_label_has(ndd, type_guid)) {
+	if (efi_namespace_label_has(ndd, type_guid)) {
 		nsl_set_isetcookie(ndd, nd_label, isetcookie);
 		return;
 	}
@@ -997,7 +1008,7 @@ bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
 				 struct nd_namespace_label *nd_label,
 				 u64 isetcookie)
 {
-	if (!namespace_label_has(ndd, type_guid))
+	if (!efi_namespace_label_has(ndd, type_guid))
 		return true;
 
 	if (nsl_get_isetcookie(ndd, nd_label) != isetcookie) {
@@ -1013,7 +1024,7 @@ static void nsl_set_blk_nlabel(struct nvdimm_drvdata *ndd,
 			       struct nd_namespace_label *nd_label, int nlabel,
 			       bool first)
 {
-	if (!namespace_label_has(ndd, type_guid)) {
+	if (!efi_namespace_label_has(ndd, type_guid)) {
 		nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
 		return;
 	}
@@ -1024,7 +1035,7 @@ static void nsl_set_blk_position(struct nvdimm_drvdata *ndd,
 				 struct nd_namespace_label *nd_label,
 				 bool first)
 {
-	if (!namespace_label_has(ndd, type_guid)) {
+	if (!efi_namespace_label_has(ndd, type_guid)) {
 		nsl_set_position(ndd, nd_label, 0);
 		return;
 	}
@@ -1439,5 +1450,8 @@ int __init nd_label_init(void)
 	WARN_ON(uuid_parse(NVDIMM_PFN_GUID, &nvdimm_pfn_uuid));
 	WARN_ON(uuid_parse(NVDIMM_DAX_GUID, &nvdimm_dax_uuid));
 
+	WARN_ON(uuid_parse(CXL_REGION_UUID, &cxl_region_uuid));
+	WARN_ON(uuid_parse(CXL_NAMESPACE_UUID, &cxl_namespace_uuid));
+
 	return 0;
 }
diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
index e6e77691dbec..71ffde56fac0 100644
--- a/drivers/nvdimm/label.h
+++ b/drivers/nvdimm/label.h
@@ -64,40 +64,77 @@ struct nd_namespace_index {
 	u8 free[];
 };
 
-/**
- * struct nd_namespace_label - namespace superblock
- * @uuid: UUID per RFC 4122
- * @name: optional name (NULL-terminated)
- * @flags: see NSLABEL_FLAG_*
- * @nlabel: num labels to describe this ns
- * @position: labels position in set
- * @isetcookie: interleave set cookie
- * @lbasize: LBA size in bytes or 0 for pmem
- * @dpa: DPA of NVM range on this DIMM
- * @rawsize: size of namespace
- * @slot: slot of this label in label area
- * @unused: must be zero
- */
 struct nd_namespace_label {
+	union {
+		struct nvdimm_cxl_label {
+			uuid_t type;
+			uuid_t uuid;
+			u8 name[NSLABEL_NAME_LEN];
+			__le32 flags;
+			__le16 nlabel;
+			__le16 position;
+			__le64 dpa;
+			__le64 rawsize;
+			__le32 slot;
+			__le32 align;
+			uuid_t region_uuid;
+			uuid_t abstraction_uuid;
+			__le16 lbasize;
+			u8 reserved[0x56];
+			__le64 checksum;
+		} cxl;
+		/**
+		 * struct nvdimm_efi_label - namespace superblock
+		 * @uuid: UUID per RFC 4122
+		 * @name: optional name (NULL-terminated)
+		 * @flags: see NSLABEL_FLAG_*
+		 * @nlabel: num labels to describe this ns
+		 * @position: labels position in set
+		 * @isetcookie: interleave set cookie
+		 * @lbasize: LBA size in bytes or 0 for pmem
+		 * @dpa: DPA of NVM range on this DIMM
+		 * @rawsize: size of namespace
+		 * @slot: slot of this label in label area
+		 * @unused: must be zero
+		 */
+		struct nvdimm_efi_label {
+			uuid_t uuid;
+			u8 name[NSLABEL_NAME_LEN];
+			__le32 flags;
+			__le16 nlabel;
+			__le16 position;
+			__le64 isetcookie;
+			__le64 lbasize;
+			__le64 dpa;
+			__le64 rawsize;
+			__le32 slot;
+			/*
+			 * Accessing fields past this point should be
+			 * gated by a efi_namespace_label_has() check.
+			 */
+			u8 align;
+			u8 reserved[3];
+			guid_t type_guid;
+			guid_t abstraction_guid;
+			u8 reserved2[88];
+			__le64 checksum;
+		} efi;
+	};
+};
+
+struct cxl_region_label {
+	uuid_t type;
 	uuid_t uuid;
-	u8 name[NSLABEL_NAME_LEN];
 	__le32 flags;
 	__le16 nlabel;
 	__le16 position;
-	__le64 isetcookie;
-	__le64 lbasize;
 	__le64 dpa;
 	__le64 rawsize;
+	__le64 hpa;
 	__le32 slot;
-	/*
-	 * Accessing fields past this point should be gated by a
-	 * namespace_label_has() check.
-	 */
-	u8 align;
-	u8 reserved[3];
-	guid_t type_guid;
-	guid_t abstraction_guid;
-	u8 reserved2[88];
+	__le32 ig;
+	__le32 align;
+	u8 reserved[0xac];
 	__le64 checksum;
 };
 
@@ -106,6 +143,9 @@ struct nd_namespace_label {
 #define NVDIMM_PFN_GUID "266400ba-fb9f-4677-bcb0-968f11d0d225"
 #define NVDIMM_DAX_GUID "97a86d9c-3cdd-4eda-986f-5068b4f80088"
 
+#define CXL_REGION_UUID "529d7c61-da07-47c4-a93f-ecdf2c06f444"
+#define CXL_NAMESPACE_UUID "68bb2c0a-5a77-4937-9f85-3caf41a0f93c"
+
 /**
  * struct nd_label_id - identifier string for dpa allocation
  * @id: "{blk|pmem}-<namespace uuid>"
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 132a8021e3ad..817790c53f98 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -30,6 +30,7 @@ struct nvdimm_drvdata {
 	int nslabel_size;
 	struct nd_cmd_get_config_size nsarea;
 	void *data;
+	bool cxl;
 	int ns_current, ns_next;
 	struct resource dpa;
 	struct kref kref;
@@ -38,13 +39,17 @@ struct nvdimm_drvdata {
 static inline const u8 *nsl_ref_name(struct nvdimm_drvdata *ndd,
 				     struct nd_namespace_label *nd_label)
 {
-	return nd_label->name;
+	if (ndd->cxl)
+		return nd_label->cxl.name;
+	return nd_label->efi.name;
 }
 
 static inline u8 *nsl_get_name(struct nvdimm_drvdata *ndd,
 			       struct nd_namespace_label *nd_label, u8 *name)
 {
-	return memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
+	if (ndd->cxl)
+		return memcpy(name, nd_label->cxl.name, NSLABEL_NAME_LEN);
+	return memcpy(name, nd_label->efi.name, NSLABEL_NAME_LEN);
 }
 
 static inline u8 *nsl_set_name(struct nvdimm_drvdata *ndd,
@@ -52,135 +57,195 @@ static inline u8 *nsl_set_name(struct nvdimm_drvdata *ndd,
 {
 	if (!name)
 		return name;
-	return memcpy(nd_label->name, name, NSLABEL_NAME_LEN);
+	if (ndd->cxl)
+		return memcpy(nd_label->cxl.name, name, NSLABEL_NAME_LEN);
+	return memcpy(nd_label->efi.name, name, NSLABEL_NAME_LEN);
 }
 
 static inline u32 nsl_get_slot(struct nvdimm_drvdata *ndd,
 			       struct nd_namespace_label *nd_label)
 {
-	return __le32_to_cpu(nd_label->slot);
+	if (ndd->cxl)
+		return __le32_to_cpu(nd_label->cxl.slot);
+	return __le32_to_cpu(nd_label->efi.slot);
 }
 
 static inline u64 nsl_get_checksum(struct nvdimm_drvdata *ndd,
 				   struct nd_namespace_label *nd_label)
 {
-	return __le64_to_cpu(nd_label->checksum);
+	if (ndd->cxl)
+		return __le64_to_cpu(nd_label->cxl.checksum);
+	return __le64_to_cpu(nd_label->efi.checksum);
 }
 
 static inline u32 nsl_get_flags(struct nvdimm_drvdata *ndd,
 				struct nd_namespace_label *nd_label)
 {
-	return __le32_to_cpu(nd_label->flags);
+	if (ndd->cxl)
+		return __le32_to_cpu(nd_label->cxl.flags);
+	return __le32_to_cpu(nd_label->efi.flags);
 }
 
 static inline u64 nsl_get_dpa(struct nvdimm_drvdata *ndd,
 			      struct nd_namespace_label *nd_label)
 {
-	return __le64_to_cpu(nd_label->dpa);
+	if (ndd->cxl)
+		return __le64_to_cpu(nd_label->cxl.dpa);
+	return __le64_to_cpu(nd_label->efi.dpa);
 }
 
 static inline u64 nsl_get_rawsize(struct nvdimm_drvdata *ndd,
 				  struct nd_namespace_label *nd_label)
 {
-	return __le64_to_cpu(nd_label->rawsize);
+	if (ndd->cxl)
+		return __le64_to_cpu(nd_label->cxl.rawsize);
+	return __le64_to_cpu(nd_label->efi.rawsize);
 }
 
 static inline u64 nsl_get_isetcookie(struct nvdimm_drvdata *ndd,
 				     struct nd_namespace_label *nd_label)
 {
-	return __le64_to_cpu(nd_label->isetcookie);
+	/* WARN future refactor attempts that break this assumption */
+	if (dev_WARN_ONCE(ndd->dev, ndd->cxl,
+			  "CXL labels do not use the isetcookie concept\n"))
+		return 0;
+	return __le64_to_cpu(nd_label->efi.isetcookie);
 }
 
 static inline bool nsl_validate_isetcookie(struct nvdimm_drvdata *ndd,
 					   struct nd_namespace_label *nd_label,
 					   u64 cookie)
 {
-	return cookie == __le64_to_cpu(nd_label->isetcookie);
+	/*
+	 * Let the EFI and CXL validation comingle, where fields that
+	 * don't matter to CXL always validate.
+	 */
+	if (ndd->cxl)
+		return true;
+	return cookie == __le64_to_cpu(nd_label->efi.isetcookie);
 }
 
 static inline u16 nsl_get_position(struct nvdimm_drvdata *ndd,
 				   struct nd_namespace_label *nd_label)
 {
-	return __le16_to_cpu(nd_label->position);
+	if (ndd->cxl)
+		return __le16_to_cpu(nd_label->cxl.position);
+	return __le16_to_cpu(nd_label->efi.position);
 }
 
 static inline u16 nsl_get_nlabel(struct nvdimm_drvdata *ndd,
 				 struct nd_namespace_label *nd_label)
 {
-	return __le16_to_cpu(nd_label->nlabel);
+	if (ndd->cxl)
+		return __le16_to_cpu(nd_label->cxl.nlabel);
+	return __le16_to_cpu(nd_label->efi.nlabel);
 }
 
 static inline u64 nsl_get_lbasize(struct nvdimm_drvdata *ndd,
 				  struct nd_namespace_label *nd_label)
 {
-	return __le64_to_cpu(nd_label->lbasize);
+	/*
+	 * Yes, for some reason the EFI labels convey a massive 64-bit
+	 * lbasize, that got fixed for CXL.
+	 */
+	if (ndd->cxl)
+		return __le16_to_cpu(nd_label->cxl.lbasize);
+	return __le64_to_cpu(nd_label->efi.lbasize);
 }
 
 static inline void nsl_set_slot(struct nvdimm_drvdata *ndd,
 				struct nd_namespace_label *nd_label, u32 slot)
 {
-	nd_label->slot = __le32_to_cpu(slot);
+	if (ndd->cxl)
+		nd_label->cxl.slot = __le32_to_cpu(slot);
+	else
+		nd_label->efi.slot = __le32_to_cpu(slot);
 }
 
 static inline void nsl_set_checksum(struct nvdimm_drvdata *ndd,
 				    struct nd_namespace_label *nd_label,
 				    u64 checksum)
 {
-	nd_label->checksum = __cpu_to_le64(checksum);
+	if (ndd->cxl)
+		nd_label->cxl.checksum = __cpu_to_le64(checksum);
+	else
+		nd_label->efi.checksum = __cpu_to_le64(checksum);
 }
 
 static inline void nsl_set_flags(struct nvdimm_drvdata *ndd,
 				 struct nd_namespace_label *nd_label, u32 flags)
 {
-	nd_label->flags = __cpu_to_le32(flags);
+	if (ndd->cxl)
+		nd_label->cxl.flags = __cpu_to_le32(flags);
+	else
+		nd_label->efi.flags = __cpu_to_le32(flags);
 }
 
 static inline void nsl_set_dpa(struct nvdimm_drvdata *ndd,
 			       struct nd_namespace_label *nd_label, u64 dpa)
 {
-	nd_label->dpa = __cpu_to_le64(dpa);
+	if (ndd->cxl)
+		nd_label->cxl.dpa = __cpu_to_le64(dpa);
+	else
+		nd_label->efi.dpa = __cpu_to_le64(dpa);
 }
 
 static inline void nsl_set_rawsize(struct nvdimm_drvdata *ndd,
 				   struct nd_namespace_label *nd_label,
 				   u64 rawsize)
 {
-	nd_label->rawsize = __cpu_to_le64(rawsize);
+	if (ndd->cxl)
+		nd_label->cxl.rawsize = __cpu_to_le64(rawsize);
+	else
+		nd_label->efi.rawsize = __cpu_to_le64(rawsize);
 }
 
 static inline void nsl_set_isetcookie(struct nvdimm_drvdata *ndd,
 				      struct nd_namespace_label *nd_label,
 				      u64 isetcookie)
 {
-	nd_label->isetcookie = __cpu_to_le64(isetcookie);
+	if (!ndd->cxl)
+		nd_label->efi.isetcookie = __cpu_to_le64(isetcookie);
 }
 
 static inline void nsl_set_position(struct nvdimm_drvdata *ndd,
 				    struct nd_namespace_label *nd_label,
 				    u16 position)
 {
-	nd_label->position = __cpu_to_le16(position);
+	if (ndd->cxl)
+		nd_label->cxl.position = __cpu_to_le16(position);
+	else
+		nd_label->efi.position = __cpu_to_le16(position);
 }
 
 static inline void nsl_set_nlabel(struct nvdimm_drvdata *ndd,
 				  struct nd_namespace_label *nd_label,
 				  u16 nlabel)
 {
-	nd_label->nlabel = __cpu_to_le16(nlabel);
+	if (ndd->cxl)
+		nd_label->cxl.nlabel = __cpu_to_le16(nlabel);
+	else
+		nd_label->efi.nlabel = __cpu_to_le16(nlabel);
 }
 
 static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
 				   struct nd_namespace_label *nd_label,
 				   u64 lbasize)
 {
-	nd_label->lbasize = __cpu_to_le64(lbasize);
+	if (ndd->cxl)
+		nd_label->cxl.lbasize = __cpu_to_le16(lbasize);
+	else
+		nd_label->efi.lbasize = __cpu_to_le64(lbasize);
 }
 
 static inline const uuid_t *nsl_get_uuid(struct nvdimm_drvdata *ndd,
 					 struct nd_namespace_label *nd_label,
 					 uuid_t *uuid)
 {
-	uuid_copy(uuid, &nd_label->uuid);
+	if (ndd->cxl)
+		uuid_copy(uuid, &nd_label->cxl.uuid);
+	else
+		uuid_copy(uuid, &nd_label->efi.uuid);
 	return uuid;
 }
 
@@ -188,21 +253,29 @@ static inline const uuid_t *nsl_set_uuid(struct nvdimm_drvdata *ndd,
 					 struct nd_namespace_label *nd_label,
 					 const uuid_t *uuid)
 {
-	uuid_copy(&nd_label->uuid, uuid);
-	return &nd_label->uuid;
+	if (ndd->cxl) {
+		uuid_copy(&nd_label->cxl.uuid, uuid);
+		return &nd_label->cxl.uuid;
+	}
+	uuid_copy(&nd_label->efi.uuid, uuid);
+	return &nd_label->efi.uuid;
 }
 
 static inline bool nsl_validate_uuid(struct nvdimm_drvdata *ndd,
 				     struct nd_namespace_label *nd_label,
 				     const uuid_t *uuid)
 {
-	return uuid_equal(&nd_label->uuid, uuid);
+	if (ndd->cxl)
+		return uuid_equal(&nd_label->cxl.uuid, uuid);
+	return uuid_equal(&nd_label->efi.uuid, uuid);
 }
 
 static inline const uuid_t *nsl_ref_uuid(struct nvdimm_drvdata *ndd,
 					 struct nd_namespace_label *nd_label)
 {
-	return &nd_label->uuid;
+	if (ndd->cxl)
+		return &nd_label->cxl.uuid;
+	return &nd_label->efi.uuid;
 }
 
 bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
@@ -261,8 +334,8 @@ static inline struct nd_namespace_index *to_next_namespace_index(
 
 unsigned sizeof_namespace_label(struct nvdimm_drvdata *ndd);
 
-#define namespace_label_has(ndd, field) \
-	(offsetof(struct nd_namespace_label, field) \
+#define efi_namespace_label_has(ndd, field) \
+	(!ndd->cxl && offsetof(struct nvdimm_efi_label, field) \
 		< sizeof_namespace_label(ndd))
 
 #define nd_dbg_dpa(r, d, res, fmt, arg...) \


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 12/23] cxl/pci: Make 'struct cxl_mem' device type generic
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (10 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 11/23] libnvdimm/labels: Introduce CXL labels Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-09 22:28 ` [PATCH 13/23] cxl/mbox: Introduce the mbox_send operation Dan Williams
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for adding a unit test provider of a cxl_memdev, convert
the 'struct cxl_mem' driver context to carry a generic device rather
than a PCI device.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/memdev.c |    3 +-
 drivers/cxl/cxlmem.h      |    4 ++-
 drivers/cxl/pci.c         |   60 ++++++++++++++++++++++-----------------------
 3 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index a9c317e32010..40789558f8c2 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -149,7 +149,6 @@ static void cxl_memdev_unregister(void *_cxlmd)
 static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
 					   const struct file_operations *fops)
 {
-	struct pci_dev *pdev = cxlm->pdev;
 	struct cxl_memdev *cxlmd;
 	struct device *dev;
 	struct cdev *cdev;
@@ -166,7 +165,7 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
 
 	dev = &cxlmd->dev;
 	device_initialize(dev);
-	dev->parent = &pdev->dev;
+	dev->parent = cxlm->dev;
 	dev->bus = &cxl_bus_type;
 	dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
 	dev->type = &cxl_memdev_type;
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 6c0b1e2ea97c..8397daea9d9b 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -68,7 +68,7 @@ devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
 
 /**
  * struct cxl_mem - A CXL memory device
- * @pdev: The PCI device associated with this CXL device.
+ * @dev: The device associated with this CXL device.
  * @cxlmd: Logical memory device chardev / interface
  * @regs: Parsed register blocks
  * @payload_size: Size of space for payload
@@ -82,7 +82,7 @@ devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
  * @ram_range: Volatile memory capacity information.
  */
 struct cxl_mem {
-	struct pci_dev *pdev;
+	struct device *dev;
 	struct cxl_memdev *cxlmd;
 
 	struct cxl_regs regs;
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 8571ac73e026..c909a485fd3d 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -250,7 +250,7 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 		cpu_relax();
 	}
 
-	dev_dbg(&cxlm->pdev->dev, "Doorbell wait took %dms",
+	dev_dbg(cxlm->dev, "Doorbell wait took %dms",
 		jiffies_to_msecs(end) - jiffies_to_msecs(start));
 	return 0;
 }
@@ -268,7 +268,7 @@ static bool cxl_is_security_command(u16 opcode)
 static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 				 struct mbox_cmd *mbox_cmd)
 {
-	struct device *dev = &cxlm->pdev->dev;
+	struct device *dev = cxlm->dev;
 
 	dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
 		mbox_cmd->opcode, mbox_cmd->size_in);
@@ -300,6 +300,7 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
 				   struct mbox_cmd *mbox_cmd)
 {
 	void __iomem *payload = cxlm->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
+	struct device *dev = cxlm->dev;
 	u64 cmd_reg, status_reg;
 	size_t out_len;
 	int rc;
@@ -325,8 +326,7 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
 
 	/* #1 */
 	if (cxl_doorbell_busy(cxlm)) {
-		dev_err_ratelimited(&cxlm->pdev->dev,
-				    "Mailbox re-busy after acquiring\n");
+		dev_err_ratelimited(dev, "Mailbox re-busy after acquiring\n");
 		return -EBUSY;
 	}
 
@@ -345,7 +345,7 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
 	writeq(cmd_reg, cxlm->regs.mbox + CXLDEV_MBOX_CMD_OFFSET);
 
 	/* #4 */
-	dev_dbg(&cxlm->pdev->dev, "Sending command\n");
+	dev_dbg(dev, "Sending command\n");
 	writel(CXLDEV_MBOX_CTRL_DOORBELL,
 	       cxlm->regs.mbox + CXLDEV_MBOX_CTRL_OFFSET);
 
@@ -362,7 +362,7 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
 		FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
 
 	if (mbox_cmd->return_code != 0) {
-		dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
+		dev_dbg(dev, "Mailbox operation had an error\n");
 		return 0;
 	}
 
@@ -399,7 +399,7 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
  */
 static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
 {
-	struct device *dev = &cxlm->pdev->dev;
+	struct device *dev = cxlm->dev;
 	u64 md_status;
 	int rc;
 
@@ -502,7 +502,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
 					u64 in_payload, u64 out_payload,
 					s32 *size_out, u32 *retval)
 {
-	struct device *dev = &cxlm->pdev->dev;
+	struct device *dev = cxlm->dev;
 	struct mbox_cmd mbox_cmd = {
 		.opcode = cmd->opcode,
 		.size_in = cmd->info.size_in,
@@ -925,12 +925,12 @@ static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
 	 */
 	cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
 	if (cxlm->payload_size < 256) {
-		dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
+		dev_err(cxlm->dev, "Mailbox is too small (%zub)",
 			cxlm->payload_size);
 		return -ENXIO;
 	}
 
-	dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
+	dev_dbg(cxlm->dev, "Mailbox payload sized %zu",
 		cxlm->payload_size);
 
 	return 0;
@@ -948,7 +948,7 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev)
 	}
 
 	mutex_init(&cxlm->mbox_mutex);
-	cxlm->pdev = pdev;
+	cxlm->dev = dev;
 	cxlm->enabled_cmds =
 		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
 				   sizeof(unsigned long),
@@ -964,9 +964,9 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev)
 static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
 					  u8 bar, u64 offset)
 {
-	struct pci_dev *pdev = cxlm->pdev;
-	struct device *dev = &pdev->dev;
 	void __iomem *addr;
+	struct device *dev = cxlm->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
 
 	/* Basic sanity check that BAR is big enough */
 	if (pci_resource_len(pdev, bar) < offset) {
@@ -989,7 +989,7 @@ static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
 
 static void cxl_mem_unmap_regblock(struct cxl_mem *cxlm, void __iomem *base)
 {
-	pci_iounmap(cxlm->pdev, base);
+	pci_iounmap(to_pci_dev(cxlm->dev), base);
 }
 
 static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
@@ -1018,7 +1018,7 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
 static int cxl_probe_regs(struct cxl_mem *cxlm, void __iomem *base,
 			  struct cxl_register_map *map)
 {
-	struct pci_dev *pdev = cxlm->pdev;
+	struct pci_dev *pdev = to_pci_dev(cxlm->dev);
 	struct device *dev = &pdev->dev;
 	struct cxl_component_reg_map *comp_map;
 	struct cxl_device_reg_map *dev_map;
@@ -1057,7 +1057,7 @@ static int cxl_probe_regs(struct cxl_mem *cxlm, void __iomem *base,
 
 static int cxl_map_regs(struct cxl_mem *cxlm, struct cxl_register_map *map)
 {
-	struct pci_dev *pdev = cxlm->pdev;
+	struct pci_dev *pdev = to_pci_dev(cxlm->dev);
 	struct device *dev = &pdev->dev;
 
 	switch (map->reg_type) {
@@ -1096,8 +1096,8 @@ static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi,
  */
 static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 {
-	struct pci_dev *pdev = cxlm->pdev;
-	struct device *dev = &pdev->dev;
+	struct pci_dev *pdev = to_pci_dev(cxlm->dev);
+	struct device *dev = cxlm->dev;
 	u32 regloc_size, regblocks;
 	void __iomem *base;
 	int regloc, i, n_maps;
@@ -1226,7 +1226,7 @@ static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
 		struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
 
 		if (!cmd) {
-			dev_dbg(&cxlm->pdev->dev,
+			dev_dbg(cxlm->dev,
 				"Opcode 0x%04x unsupported by driver", opcode);
 			continue;
 		}
@@ -1325,7 +1325,7 @@ static int cxl_mem_get_partition_info(struct cxl_mem *cxlm,
 static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
 {
 	struct cxl_mbox_get_supported_logs *gsl;
-	struct device *dev = &cxlm->pdev->dev;
+	struct device *dev = cxlm->dev;
 	struct cxl_mem_command *cmd;
 	int i, rc;
 
@@ -1420,15 +1420,14 @@ static int cxl_mem_identify(struct cxl_mem *cxlm)
 	cxlm->partition_align_bytes = le64_to_cpu(id.partition_align);
 	cxlm->partition_align_bytes *= CXL_CAPACITY_MULTIPLIER;
 
-	dev_dbg(&cxlm->pdev->dev, "Identify Memory Device\n"
+	dev_dbg(cxlm->dev,
+		"Identify Memory Device\n"
 		"     total_bytes = %#llx\n"
 		"     volatile_only_bytes = %#llx\n"
 		"     persistent_only_bytes = %#llx\n"
 		"     partition_align_bytes = %#llx\n",
-			cxlm->total_bytes,
-			cxlm->volatile_only_bytes,
-			cxlm->persistent_only_bytes,
-			cxlm->partition_align_bytes);
+		cxlm->total_bytes, cxlm->volatile_only_bytes,
+		cxlm->persistent_only_bytes, cxlm->partition_align_bytes);
 
 	cxlm->lsa_size = le32_to_cpu(id.lsa_size);
 	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
@@ -1455,19 +1454,18 @@ static int cxl_mem_create_range_info(struct cxl_mem *cxlm)
 					&cxlm->next_volatile_bytes,
 					&cxlm->next_persistent_bytes);
 	if (rc < 0) {
-		dev_err(&cxlm->pdev->dev, "Failed to query partition information\n");
+		dev_err(cxlm->dev, "Failed to query partition information\n");
 		return rc;
 	}
 
-	dev_dbg(&cxlm->pdev->dev, "Get Partition Info\n"
+	dev_dbg(cxlm->dev,
+		"Get Partition Info\n"
 		"     active_volatile_bytes = %#llx\n"
 		"     active_persistent_bytes = %#llx\n"
 		"     next_volatile_bytes = %#llx\n"
 		"     next_persistent_bytes = %#llx\n",
-			cxlm->active_volatile_bytes,
-			cxlm->active_persistent_bytes,
-			cxlm->next_volatile_bytes,
-			cxlm->next_persistent_bytes);
+		cxlm->active_volatile_bytes, cxlm->active_persistent_bytes,
+		cxlm->next_volatile_bytes, cxlm->next_persistent_bytes);
 
 	cxlm->ram_range.start = 0;
 	cxlm->ram_range.end = cxlm->active_volatile_bytes - 1;



* [PATCH 13/23] cxl/mbox: Introduce the mbox_send operation
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (11 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 12/23] cxl/pci: Make 'struct cxl_mem' device type generic Dan Williams
@ 2021-08-09 22:28 ` Dan Williams
  2021-08-09 22:29 ` [PATCH 14/23] cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core Dan Williams
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:28 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for implementing a unit test backend transport for ioctl
operations, and making the mailbox available to the cxl/pmem
infrastructure, move the existing PCI-specific portion of mailbox
handling to an "mbox_send" operation.

With this split, all the PCI-specific transport details are contained in
a single operation, and the rest of the mailbox infrastructure is
generic with respect to 'struct cxl_mem' and 'struct cxl_memdev'.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/cxlmem.h |   42 ++++++++++++++++++++++++++++
 drivers/cxl/pci.c    |   76 ++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 55 deletions(-)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 8397daea9d9b..a56d8f26a157 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -66,6 +66,45 @@ struct cxl_memdev *
 devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
 		    const struct cdevm_file_operations *cdevm_fops);
 
+/**
+ * struct mbox_cmd - A command to be submitted to hardware.
+ * @opcode: (input) The command set and command submitted to hardware.
+ * @payload_in: (input) Pointer to the input payload.
+ * @payload_out: (output) Pointer to the output payload. Must be allocated by
+ *		 the caller.
+ * @size_in: (input) Number of bytes to load from @payload_in.
+ * @size_out: (input) Max number of bytes loaded into @payload_out.
+ *            (output) Number of bytes generated by the device. For fixed size
+ *            outputs commands this is always expected to be deterministic. For
+ *            variable sized output commands, it tells the exact number of bytes
+ *            written.
+ * @return_code: (output) Error code returned from hardware.
+ *
+ * This is the primary mechanism used to send commands to the hardware.
+ * All the fields except @payload_* correspond exactly to the fields described in
+ * Command Register section of the CXL 2.0 8.2.8.4.5. @payload_in and
+ * @payload_out are written to, and read from the Command Payload Registers
+ * defined in CXL 2.0 8.2.8.4.8.
+ */
+struct cxl_mbox_cmd {
+	u16 opcode;
+	void *payload_in;
+	void *payload_out;
+	size_t size_in;
+	size_t size_out;
+	u16 return_code;
+#define CXL_MBOX_SUCCESS 0
+};
+
+/*
+ * CXL 2.0 - Memory capacity multiplier
+ * See Section 8.2.9.5
+ *
+ * Volatile, Persistent, and Partition capacities are specified to be in
+ * multiples of 256MB - define a multiplier to convert to/from bytes.
+ */
+#define CXL_CAPACITY_MULTIPLIER SZ_256M
+
 /**
  * struct cxl_mem - A CXL memory device
  * @dev: The device associated with this CXL device.
@@ -80,6 +119,7 @@ devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
  * @enabled_cmds: Hardware commands found enabled in CEL.
  * @pmem_range: Persistent memory capacity information.
  * @ram_range: Volatile memory capacity information.
+ * @mbox_send: @dev specific transport for transmitting mailbox commands
  */
 struct cxl_mem {
 	struct device *dev;
@@ -104,5 +144,7 @@ struct cxl_mem {
 	u64 active_persistent_bytes;
 	u64 next_volatile_bytes;
 	u64 next_persistent_bytes;
+
+	int (*mbox_send)(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd);
 };
 #endif /* __CXL_MEM_H__ */
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index c909a485fd3d..27b8c40c9685 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -64,45 +64,6 @@ enum opcode {
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
-/*
- * CXL 2.0 - Memory capacity multiplier
- * See Section 8.2.9.5
- *
- * Volatile, Persistent, and Partition capacities are specified to be in
- * multiples of 256MB - define a multiplier to convert to/from bytes.
- */
-#define CXL_CAPACITY_MULTIPLIER SZ_256M
-
-/**
- * struct mbox_cmd - A command to be submitted to hardware.
- * @opcode: (input) The command set and command submitted to hardware.
- * @payload_in: (input) Pointer to the input payload.
- * @payload_out: (output) Pointer to the output payload. Must be allocated by
- *		 the caller.
- * @size_in: (input) Number of bytes to load from @payload_in.
- * @size_out: (input) Max number of bytes loaded into @payload_out.
- *            (output) Number of bytes generated by the device. For fixed size
- *            outputs commands this is always expected to be deterministic. For
- *            variable sized output commands, it tells the exact number of bytes
- *            written.
- * @return_code: (output) Error code returned from hardware.
- *
- * This is the primary mechanism used to send commands to the hardware.
- * All the fields except @payload_* correspond exactly to the fields described in
- * Command Register section of the CXL 2.0 8.2.8.4.5. @payload_in and
- * @payload_out are written to, and read from the Command Payload Registers
- * defined in CXL 2.0 8.2.8.4.8.
- */
-struct mbox_cmd {
-	u16 opcode;
-	void *payload_in;
-	void *payload_out;
-	size_t size_in;
-	size_t size_out;
-	u16 return_code;
-#define CXL_MBOX_SUCCESS 0
-};
-
 static DECLARE_RWSEM(cxl_memdev_rwsem);
 static struct dentry *cxl_debugfs;
 static bool cxl_raw_allow_all;
@@ -266,7 +227,7 @@ static bool cxl_is_security_command(u16 opcode)
 }
 
 static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
-				 struct mbox_cmd *mbox_cmd)
+				 struct cxl_mbox_cmd *mbox_cmd)
 {
 	struct device *dev = cxlm->dev;
 
@@ -297,7 +258,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
  * mailbox.
  */
 static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
-				   struct mbox_cmd *mbox_cmd)
+				   struct cxl_mbox_cmd *mbox_cmd)
 {
 	void __iomem *payload = cxlm->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
 	struct device *dev = cxlm->dev;
@@ -472,6 +433,20 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
 	mutex_unlock(&cxlm->mbox_mutex);
 }
 
+static int cxl_pci_mbox_send(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
+{
+	int rc;
+
+	rc = cxl_mem_mbox_get(cxlm);
+	if (rc)
+		return rc;
+
+	rc = __cxl_mem_mbox_send_cmd(cxlm, cmd);
+	cxl_mem_mbox_put(cxlm);
+
+	return rc;
+}
+
 /**
  * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace.
  * @cxlm: The CXL memory device to communicate with.
@@ -503,7 +478,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
 					s32 *size_out, u32 *retval)
 {
 	struct device *dev = cxlm->dev;
-	struct mbox_cmd mbox_cmd = {
+	struct cxl_mbox_cmd mbox_cmd = {
 		.opcode = cmd->opcode,
 		.size_in = cmd->info.size_in,
 		.size_out = cmd->info.size_out,
@@ -525,10 +500,6 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
 		}
 	}
 
-	rc = cxl_mem_mbox_get(cxlm);
-	if (rc)
-		goto out;
-
 	dev_dbg(dev,
 		"Submitting %s command for user\n"
 		"\topcode: %x\n"
@@ -539,8 +510,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
 	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
 		      "raw command path used\n");
 
-	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
-	cxl_mem_mbox_put(cxlm);
+	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
 	if (rc)
 		goto out;
 
@@ -874,7 +844,7 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode,
 				 void *out, size_t out_size)
 {
 	const struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
-	struct mbox_cmd mbox_cmd = {
+	struct cxl_mbox_cmd mbox_cmd = {
 		.opcode = opcode,
 		.payload_in = in,
 		.size_in = in_size,
@@ -886,12 +856,7 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode,
 	if (out_size > cxlm->payload_size)
 		return -E2BIG;
 
-	rc = cxl_mem_mbox_get(cxlm);
-	if (rc)
-		return rc;
-
-	rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
-	cxl_mem_mbox_put(cxlm);
+	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
 	if (rc)
 		return rc;
 
@@ -913,6 +878,7 @@ static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
 {
 	const int cap = readl(cxlm->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
 
+	cxlm->mbox_send = cxl_pci_mbox_send;
 	cxlm->payload_size =
 		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
 



* [PATCH 14/23] cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (12 preceding siblings ...)
  2021-08-09 22:28 ` [PATCH 13/23] cxl/mbox: Introduce the mbox_send operation Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-11  6:11   ` [PATCH v2 " Dan Williams
  2021-08-09 22:29 ` [PATCH 15/23] cxl/pci: Use module_pci_driver Dan Williams
                   ` (9 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

Now that the internals of mailbox operations are abstracted from the PCI
specifics, the bulk of the infrastructure can move to the core.

The CXL_PMEM driver intends to proxy LIBNVDIMM UAPI and driver requests
to the equivalent functionality provided by the CXL hardware mailbox
interface. In support of that intent move the mailbox implementation to
a shared location for the CXL_PCI driver native IOCTL path and CXL_PMEM
nvdimm command proxy path to share.

A unit test framework seeks to implement a backend transport for mailbox
commands that communicates mocked-up payloads. It can reuse all of the
mailbox infrastructure minus the PCI specifics, so that also gets moved
to the core.

Finally, with the mailbox infrastructure and ioctl handling being
transport generic, there is no longer any need to pass a file_operations
instance to devm_cxl_add_memdev(). That allows all the ioctl boilerplate
to move into the core for unit test reuse.

No functional change intended, just code movement.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 Documentation/driver-api/cxl/memory-devices.rst |    3 
 drivers/cxl/core/Makefile                       |    1 
 drivers/cxl/core/bus.c                          |    4 
 drivers/cxl/core/core.h                         |    8 
 drivers/cxl/core/mbox.c                         |  832 ++++++++++++++++++++
 drivers/cxl/core/memdev.c                       |   81 ++
 drivers/cxl/cxlmem.h                            |   81 ++
 drivers/cxl/pci.c                               |  943 -----------------------
 8 files changed, 985 insertions(+), 968 deletions(-)
 create mode 100644 drivers/cxl/core/mbox.c

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index 46847d8c70a0..356f70d28316 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -45,6 +45,9 @@ CXL Core
 .. kernel-doc:: drivers/cxl/core/regs.c
    :internal:
 
+.. kernel-doc:: drivers/cxl/core/mbox.c
+   :doc: cxl mbox
+
 External Interfaces
 ===================
 
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 0fdbf3c6ac1a..07eb8e1fb8a6 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -6,3 +6,4 @@ cxl_core-y := bus.o
 cxl_core-y += pmem.o
 cxl_core-y += regs.o
 cxl_core-y += memdev.o
+cxl_core-y += mbox.o
diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 37b87adaa33f..8073354ba232 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -636,6 +636,8 @@ static __init int cxl_core_init(void)
 {
 	int rc;
 
+	cxl_mbox_init();
+
 	rc = cxl_memdev_init();
 	if (rc)
 		return rc;
@@ -647,6 +649,7 @@ static __init int cxl_core_init(void)
 
 err:
 	cxl_memdev_exit();
+	cxl_mbox_exit();
 	return rc;
 }
 
@@ -654,6 +657,7 @@ static void cxl_core_exit(void)
 {
 	bus_unregister(&cxl_bus_type);
 	cxl_memdev_exit();
+	cxl_mbox_exit();
 }
 
 module_init(cxl_core_init);
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 036a3c8106b4..c85b7fbad02d 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -14,7 +14,15 @@ static inline void unregister_cxl_dev(void *dev)
 	device_unregister(dev);
 }
 
+struct cxl_send_command;
+struct cxl_mem_query_commands;
+int cxl_query_cmd(struct cxl_memdev *cxlmd,
+		  struct cxl_mem_query_commands __user *q);
+int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s);
+
 int cxl_memdev_init(void);
 void cxl_memdev_exit(void);
+void cxl_mbox_init(void);
+void cxl_mbox_exit(void);
 
 #endif /* __CXL_CORE_H__ */
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
new file mode 100644
index 000000000000..40f051956990
--- /dev/null
+++ b/drivers/cxl/core/mbox.c
@@ -0,0 +1,832 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/security.h>
+#include <linux/debugfs.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <cxlmem.h>
+#include <cxl.h>
+
+static bool cxl_raw_allow_all;
+
+/**
+ * DOC: cxl mbox
+ *
+ * Core implementation of the CXL 2.0 Type-3 Memory Device Mailbox. The
+ * implementation is used by the cxl_pci driver to initialize the device
+ * and implement the cxl_mem.h IOCTL UAPI. It also implements the
+ * backend of the cxl_pmem_ctl() transport for LIBNVDIMM.
+ *
+ */
+
+#define cxl_for_each_cmd(cmd)                                                  \
+	for ((cmd) = &cxl_mem_commands[0];                                     \
+	     ((cmd)-cxl_mem_commands) < ARRAY_SIZE(cxl_mem_commands); (cmd)++)
+
+#define cxl_doorbell_busy(cxlm)                                                \
+	(readl((cxlm)->regs.mbox + CXLDEV_MBOX_CTRL_OFFSET) &                  \
+	 CXLDEV_MBOX_CTRL_DOORBELL)
+
+/* CXL 2.0 - 8.2.8.4 */
+#define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
+
+#define CXL_CMD(_id, sin, sout, _flags)                                        \
+	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
+	.info =	{                                                              \
+			.id = CXL_MEM_COMMAND_ID_##_id,                        \
+			.size_in = sin,                                        \
+			.size_out = sout,                                      \
+		},                                                             \
+	.opcode = CXL_MBOX_OP_##_id,                                           \
+	.flags = _flags,                                                       \
+	}
+
+/*
+ * This table defines the supported mailbox commands for the driver. Each
+ * entry embeds the UAPI command info structure. Non-negative size values
+ * in the table are validated against the user's input. For example, if
+ * size_in is 0 and the user passed in 1, it is an error.
+ */
+static struct cxl_mem_command cxl_mem_commands[CXL_MEM_COMMAND_ID_MAX] = {
+	CXL_CMD(IDENTIFY, 0, 0x43, CXL_CMD_FLAG_FORCE_ENABLE),
+#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
+	CXL_CMD(RAW, ~0, ~0, 0),
+#endif
+	CXL_CMD(GET_SUPPORTED_LOGS, 0, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
+	CXL_CMD(GET_FW_INFO, 0, 0x50, 0),
+	CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0),
+	CXL_CMD(GET_LSA, 0x8, ~0, 0),
+	CXL_CMD(GET_HEALTH_INFO, 0, 0x12, 0),
+	CXL_CMD(GET_LOG, 0x18, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
+	CXL_CMD(SET_PARTITION_INFO, 0x0a, 0, 0),
+	CXL_CMD(SET_LSA, ~0, 0, 0),
+	CXL_CMD(GET_ALERT_CONFIG, 0, 0x10, 0),
+	CXL_CMD(SET_ALERT_CONFIG, 0xc, 0, 0),
+	CXL_CMD(GET_SHUTDOWN_STATE, 0, 0x1, 0),
+	CXL_CMD(SET_SHUTDOWN_STATE, 0x1, 0, 0),
+	CXL_CMD(GET_POISON, 0x10, ~0, 0),
+	CXL_CMD(INJECT_POISON, 0x8, 0, 0),
+	CXL_CMD(CLEAR_POISON, 0x48, 0, 0),
+	CXL_CMD(GET_SCAN_MEDIA_CAPS, 0x10, 0x4, 0),
+	CXL_CMD(SCAN_MEDIA, 0x11, 0, 0),
+	CXL_CMD(GET_SCAN_MEDIA, 0, ~0, 0),
+};
+
+/*
+ * Commands that RAW doesn't permit. The rationale for each:
+ *
+ * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
+ * coordination of transaction timeout values at the root bridge level.
+ *
+ * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
+ * and needs to be coordinated with HDM updates.
+ *
+ * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
+ * driver and any writes from userspace invalidate those contents.
+ *
+ * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
+ * to the device after it is marked clean; userspace cannot make that
+ * assertion.
+ *
+ * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
+ * is kept up to date with patrol notifications and error management.
+ */
+static u16 cxl_disabled_raw_commands[] = {
+	CXL_MBOX_OP_ACTIVATE_FW,
+	CXL_MBOX_OP_SET_PARTITION_INFO,
+	CXL_MBOX_OP_SET_LSA,
+	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
+	CXL_MBOX_OP_SCAN_MEDIA,
+	CXL_MBOX_OP_GET_SCAN_MEDIA,
+};
+
+/*
+ * Command sets that RAW doesn't permit. All opcodes in this set are
+ * disabled because they pass plain text security payloads over the
+ * user/kernel boundary. This functionality is intended to be wrapped
+ * behind the keys ABI, which allows for encrypted payloads in the UAPI.
+ */
+static u8 security_command_sets[] = {
+	0x44, /* Sanitize */
+	0x45, /* Persistent Memory Data-at-rest Security */
+	0x46, /* Security Passthrough */
+};
+
+static bool cxl_is_security_command(u16 opcode)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
+		if (security_command_sets[i] == (opcode >> 8))
+			return true;
+	return false;
+}
+
+static struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
+{
+	struct cxl_mem_command *c;
+
+	cxl_for_each_cmd(c)
+		if (c->opcode == opcode)
+			return c;
+
+	return NULL;
+}
+
+/**
+ * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
+ * @cxlm: The CXL memory device to communicate with.
+ * @opcode: Opcode for the mailbox command.
+ * @in: The input payload for the mailbox command.
+ * @in_size: The length of the input payload
+ * @out: Caller allocated buffer for the output.
+ * @out_size: Expected size of output.
+ *
+ * Context: Any context. Will acquire and release mbox_mutex.
+ * Return:
+ *  * %>=0	- Number of bytes returned in @out.
+ *  * %-E2BIG	- Payload is too large for hardware.
+ *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
+ *  * %-EFAULT	- Hardware error occurred.
+ *  * %-ENXIO	- Command completed, but device reported an error.
+ *  * %-EIO	- Unexpected output size.
+ *
+ * Mailbox commands may execute successfully even though the device itself
+ * reported an error. While this distinction can be useful for commands from
+ * userspace, the
+ * kernel will only be able to use results when both are successful.
+ *
+ * See __cxl_mem_mbox_send_cmd()
+ */
+int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, void *in,
+			  size_t in_size, void *out, size_t out_size)
+{
+	const struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
+	struct cxl_mbox_cmd mbox_cmd = {
+		.opcode = opcode,
+		.payload_in = in,
+		.size_in = in_size,
+		.size_out = out_size,
+		.payload_out = out,
+	};
+	int rc;
+
+	if (out_size > cxlm->payload_size)
+		return -E2BIG;
+
+	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
+	if (rc)
+		return rc;
+
+	/* TODO: Map return code to proper kernel style errno */
+	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
+		return -ENXIO;
+
+	/*
+	 * Variable sized commands can't be validated and so it's up to the
+	 * caller to do that if they wish.
+	 */
+	if (cmd->info.size_out >= 0 && mbox_cmd.size_out != out_size)
+		return -EIO;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_mbox_send_cmd);
+
+static bool cxl_mem_raw_command_allowed(u16 opcode)
+{
+	int i;
+
+	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
+		return false;
+
+	if (security_locked_down(LOCKDOWN_NONE))
+		return false;
+
+	if (cxl_raw_allow_all)
+		return true;
+
+	if (cxl_is_security_command(opcode))
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(cxl_disabled_raw_commands); i++)
+		if (cxl_disabled_raw_commands[i] == opcode)
+			return false;
+
+	return true;
+}
+
+/**
+ * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
+ * @cxlm: &struct cxl_mem device whose mailbox will be used.
+ * @send_cmd: &struct cxl_send_command copied in from userspace.
+ * @out_cmd: Sanitized and populated &struct cxl_mem_command.
+ *
+ * Return:
+ *  * %0	- @out_cmd is ready to send.
+ *  * %-ENOTTY	- Invalid command specified.
+ *  * %-EINVAL	- Reserved fields or invalid values were used.
+ *  * %-ENOMEM	- Input or output buffer wasn't sized properly.
+ *  * %-EPERM	- Attempted to use a protected command.
+ *
+ * The result of this command is a fully validated command in @out_cmd that is
+ * safe to send to the hardware.
+ *
+ * See handle_mailbox_cmd_from_user()
+ */
+static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
+				      const struct cxl_send_command *send_cmd,
+				      struct cxl_mem_command *out_cmd)
+{
+	const struct cxl_command_info *info;
+	struct cxl_mem_command *c;
+
+	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
+		return -ENOTTY;
+
+	/*
+	 * The user can never specify an input payload larger than what hardware
+	 * supports, but output can be arbitrarily large (simply write out as
+	 * much data as the hardware provides).
+	 */
+	if (send_cmd->in.size > cxlm->payload_size)
+		return -EINVAL;
+
+	/*
+	 * Checks are bypassed for raw commands but a WARN/taint will occur
+	 * later in the call chain.
+	 */
+	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
+		const struct cxl_mem_command temp = {
+			.info = {
+				.id = CXL_MEM_COMMAND_ID_RAW,
+				.flags = 0,
+				.size_in = send_cmd->in.size,
+				.size_out = send_cmd->out.size,
+			},
+			.opcode = send_cmd->raw.opcode
+		};
+
+		if (send_cmd->raw.rsvd)
+			return -EINVAL;
+
+		/*
+		 * Unlike supported commands, the output size of RAW commands
+		 * gets passed along without further checking, so it must be
+		 * validated here.
+		 */
+		if (send_cmd->out.size > cxlm->payload_size)
+			return -EINVAL;
+
+		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
+			return -EPERM;
+
+		memcpy(out_cmd, &temp, sizeof(temp));
+
+		return 0;
+	}
+
+	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
+		return -EINVAL;
+
+	if (send_cmd->rsvd)
+		return -EINVAL;
+
+	if (send_cmd->in.rsvd || send_cmd->out.rsvd)
+		return -EINVAL;
+
+	/* Convert user's command into the internal representation */
+	c = &cxl_mem_commands[send_cmd->id];
+	info = &c->info;
+
+	/* Check that the command is enabled for hardware */
+	if (!test_bit(info->id, cxlm->enabled_cmds))
+		return -ENOTTY;
+
+	/* Check the input buffer is the expected size */
+	if (info->size_in >= 0 && info->size_in != send_cmd->in.size)
+		return -ENOMEM;
+
+	/* Check the output buffer is at least large enough */
+	if (info->size_out >= 0 && send_cmd->out.size < info->size_out)
+		return -ENOMEM;
+
+	memcpy(out_cmd, c, sizeof(*c));
+	out_cmd->info.size_in = send_cmd->in.size;
+	/*
+	 * XXX: out_cmd->info.size_out will be controlled by the driver, and the
+	 * specified number of bytes @send_cmd->out.size will be copied back out
+	 * to userspace.
+	 */
+
+	return 0;
+}
+
+#define cxl_cmd_count ARRAY_SIZE(cxl_mem_commands)
+
+int cxl_query_cmd(struct cxl_memdev *cxlmd,
+		  struct cxl_mem_query_commands __user *q)
+{
+	struct device *dev = &cxlmd->dev;
+	struct cxl_mem_command *cmd;
+	u32 n_commands;
+	int j = 0;
+
+	dev_dbg(dev, "Query IOCTL\n");
+
+	if (get_user(n_commands, &q->n_commands))
+		return -EFAULT;
+
+	/* returns the total number if 0 elements are requested. */
+	if (n_commands == 0)
+		return put_user(cxl_cmd_count, &q->n_commands);
+
+	/*
+	 * otherwise, return min(n_commands, total commands) cxl_command_info
+	 * structures.
+	 */
+	cxl_for_each_cmd(cmd) {
+		const struct cxl_command_info *info = &cmd->info;
+
+		if (copy_to_user(&q->commands[j++], info, sizeof(*info)))
+			return -EFAULT;
+
+		if (j == n_commands)
+			break;
+	}
+
+	return 0;
+}
+
+/**
+ * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace.
+ * @cxlm: The CXL memory device to communicate with.
+ * @cmd: The validated command.
+ * @in_payload: Pointer to userspace's input payload.
+ * @out_payload: Pointer to userspace's output payload.
+ * @size_out: (Input) Max payload size to copy out.
+ *            (Output) Payload size hardware generated.
+ * @retval: Hardware generated return code from the operation.
+ *
+ * Return:
+ *  * %0	- Mailbox transaction succeeded. This implies the mailbox
+ *		  protocol completed successfully, not that the operation itself
+ *		  was successful.
+ *  * %-ENOMEM  - Couldn't allocate a bounce buffer.
+ *  * %-EFAULT	- Something happened with copy_to/from_user.
+ *  * %-EINTR	- Mailbox acquisition interrupted.
+ *  * %-EXXX	- Transaction level failures.
+ *
+ * Creates the appropriate mailbox command and dispatches it on behalf of a
+ * userspace request. The input and output payloads are copied to and from
+ * userspace.
+ *
+ * See cxl_send_cmd().
+ */
+static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
+					const struct cxl_mem_command *cmd,
+					u64 in_payload, u64 out_payload,
+					s32 *size_out, u32 *retval)
+{
+	struct device *dev = cxlm->dev;
+	struct cxl_mbox_cmd mbox_cmd = {
+		.opcode = cmd->opcode,
+		.size_in = cmd->info.size_in,
+		.size_out = cmd->info.size_out,
+	};
+	int rc;
+
+	if (cmd->info.size_out) {
+		mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL);
+		if (!mbox_cmd.payload_out)
+			return -ENOMEM;
+	}
+
+	if (cmd->info.size_in) {
+		mbox_cmd.payload_in = vmemdup_user(u64_to_user_ptr(in_payload),
+						   cmd->info.size_in);
+		if (IS_ERR(mbox_cmd.payload_in)) {
+			kvfree(mbox_cmd.payload_out);
+			return PTR_ERR(mbox_cmd.payload_in);
+		}
+	}
+
+	dev_dbg(dev,
+		"Submitting %s command for user\n"
+		"\topcode: %x\n"
+		"\tsize: %ub\n",
+		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
+		cmd->info.size_in);
+
+	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
+		      "raw command path used\n");
+
+	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
+	if (rc)
+		goto out;
+
+	/*
+	 * @size_out contains the max size that's allowed to be written back out
+	 * to userspace. While the device may have written more output than
+	 * this, the excess is ignored.
+	 */
+	if (mbox_cmd.size_out) {
+		dev_WARN_ONCE(dev, mbox_cmd.size_out > *size_out,
+			      "Invalid return size\n");
+		if (copy_to_user(u64_to_user_ptr(out_payload),
+				 mbox_cmd.payload_out, mbox_cmd.size_out)) {
+			rc = -EFAULT;
+			goto out;
+		}
+	}
+
+	*size_out = mbox_cmd.size_out;
+	*retval = mbox_cmd.return_code;
+
+out:
+	kvfree(mbox_cmd.payload_in);
+	kvfree(mbox_cmd.payload_out);
+	return rc;
+}
+
+int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
+{
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	struct device *dev = &cxlmd->dev;
+	struct cxl_send_command send;
+	struct cxl_mem_command c;
+	int rc;
+
+	dev_dbg(dev, "Send IOCTL\n");
+
+	if (copy_from_user(&send, s, sizeof(send)))
+		return -EFAULT;
+
+	rc = cxl_validate_cmd_from_user(cxlmd->cxlm, &send, &c);
+	if (rc)
+		return rc;
+
+	/* Prepare to handle a full payload for variable sized output */
+	if (c.info.size_out < 0)
+		c.info.size_out = cxlm->payload_size;
+
+	rc = handle_mailbox_cmd_from_user(cxlm, &c, send.in.payload,
+					  send.out.payload, &send.out.size,
+					  &send.retval);
+	if (rc)
+		return rc;
+
+	if (copy_to_user(s, &send, sizeof(send)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
+{
+	u32 remaining = size;
+	u32 offset = 0;
+
+	while (remaining) {
+		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
+		struct cxl_mbox_get_log {
+			uuid_t uuid;
+			__le32 offset;
+			__le32 length;
+		} __packed log = {
+			.uuid = *uuid,
+			.offset = cpu_to_le32(offset),
+			.length = cpu_to_le32(xfer_size)
+		};
+		int rc;
+
+		rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_LOG, &log,
+					   sizeof(log), out, xfer_size);
+		if (rc < 0)
+			return rc;
+
+		out += xfer_size;
+		remaining -= xfer_size;
+		offset += xfer_size;
+	}
+
+	return 0;
+}
+
+/**
+ * cxl_walk_cel() - Walk through the Command Effects Log.
+ * @cxlm: Device.
+ * @size: Length of the Command Effects Log.
+ * @cel: The Command Effects Log buffer
+ *
+ * Iterate over each entry in the CEL and determine if the driver supports the
+ * command. If so, the command is enabled for the device and can be used later.
+ */
+static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
+{
+	struct cel_entry {
+		__le16 opcode;
+		__le16 effect;
+	} __packed * cel_entry;
+	const int cel_entries = size / sizeof(*cel_entry);
+	int i;
+
+	cel_entry = (struct cel_entry *)cel;
+
+	for (i = 0; i < cel_entries; i++) {
+		u16 opcode = le16_to_cpu(cel_entry[i].opcode);
+		struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
+
+		if (!cmd) {
+			dev_dbg(cxlm->dev,
+				"Opcode 0x%04x unsupported by driver", opcode);
+			continue;
+		}
+
+		set_bit(cmd->info.id, cxlm->enabled_cmds);
+	}
+}
+
+struct cxl_mbox_get_supported_logs {
+	__le16 entries;
+	u8 rsvd[6];
+	struct gsl_entry {
+		uuid_t uuid;
+		__le32 size;
+	} __packed entry[];
+} __packed;
+
+static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_mem *cxlm)
+{
+	struct cxl_mbox_get_supported_logs *ret;
+	int rc;
+
+	ret = kvmalloc(cxlm->payload_size, GFP_KERNEL);
+	if (!ret)
+		return ERR_PTR(-ENOMEM);
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_SUPPORTED_LOGS, NULL,
+				   0, ret, cxlm->payload_size);
+	if (rc < 0) {
+		kvfree(ret);
+		return ERR_PTR(rc);
+	}
+
+	return ret;
+}
+
+enum {
+	CEL_UUID,
+	VENDOR_DEBUG_UUID,
+};
+
+/* See CXL 2.0 Table 170. Get Log Input Payload */
+static const uuid_t log_uuid[] = {
+	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
+			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
+	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
+					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86),
+};
+
+/**
+ * cxl_mem_enumerate_cmds() - Enumerate commands for a device.
+ * @cxlm: The device.
+ *
+ * Return: 0 if enumeration completed successfully.
+ *
+ * CXL devices have optional support for certain commands. This function will
+ * determine the set of supported commands for the hardware and update the
+ * enabled_cmds bitmap in the @cxlm.
+ */
+int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
+{
+	struct cxl_mbox_get_supported_logs *gsl;
+	struct device *dev = cxlm->dev;
+	struct cxl_mem_command *cmd;
+	int i, rc;
+
+	gsl = cxl_get_gsl(cxlm);
+	if (IS_ERR(gsl))
+		return PTR_ERR(gsl);
+
+	rc = -ENOENT;
+	for (i = 0; i < le16_to_cpu(gsl->entries); i++) {
+		u32 size = le32_to_cpu(gsl->entry[i].size);
+		uuid_t uuid = gsl->entry[i].uuid;
+		u8 *log;
+
+		dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size);
+
+		if (!uuid_equal(&uuid, &log_uuid[CEL_UUID]))
+			continue;
+
+		log = kvmalloc(size, GFP_KERNEL);
+		if (!log) {
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		rc = cxl_xfer_log(cxlm, &uuid, size, log);
+		if (rc) {
+			kvfree(log);
+			goto out;
+		}
+
+		cxl_walk_cel(cxlm, size, log);
+		kvfree(log);
+
+		/* In case CEL was bogus, enable some default commands. */
+		cxl_for_each_cmd(cmd)
+			if (cmd->flags & CXL_CMD_FLAG_FORCE_ENABLE)
+				set_bit(cmd->info.id, cxlm->enabled_cmds);
+
+		/* Found the required CEL */
+		rc = 0;
+	}
+
+out:
+	kvfree(gsl);
+	return rc;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_enumerate_cmds);
+
+/**
+ * cxl_mem_get_partition_info - Get partition info
+ * @cxlm: The device to act on
+ *
+ * Retrieve the current partition info for the device specified and cache
+ * the results in @cxlm. The active values are the current capacity in
+ * bytes. If not 0, the 'next' values are the pending values, in bytes,
+ * which take effect on the next cold reset.
+ *
+ * Return: 0 if no error; otherwise the result of the mailbox command.
+ *
+ * See CXL 2.0 8.2.9.5.2.1 Get Partition Info
+ */
+static int cxl_mem_get_partition_info(struct cxl_mem *cxlm)
+{
+	struct cxl_mbox_get_partition_info {
+		__le64 active_volatile_cap;
+		__le64 active_persistent_cap;
+		__le64 next_volatile_cap;
+		__le64 next_persistent_cap;
+	} __packed pi;
+	int rc;
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_PARTITION_INFO,
+				   NULL, 0, &pi, sizeof(pi));
+
+	if (rc)
+		return rc;
+
+	cxlm->active_volatile_bytes =
+		le64_to_cpu(pi.active_volatile_cap) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->active_persistent_bytes =
+		le64_to_cpu(pi.active_persistent_cap) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->next_volatile_bytes =
+		le64_to_cpu(pi.next_volatile_cap) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->next_persistent_bytes =
+		le64_to_cpu(pi.next_persistent_cap) * CXL_CAPACITY_MULTIPLIER;
+
+	return 0;
+}
+
+/**
+ * cxl_mem_identify() - Send the IDENTIFY command to the device.
+ * @cxlm: The device to identify.
+ *
+ * Return: 0 if identify was executed successfully.
+ *
+ * This will dispatch the identify command to the device and on success populate
+ * structures to be exported to sysfs.
+ */
+int cxl_mem_identify(struct cxl_mem *cxlm)
+{
+	/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
+	struct cxl_mbox_identify {
+		char fw_revision[0x10];
+		__le64 total_capacity;
+		__le64 volatile_capacity;
+		__le64 persistent_capacity;
+		__le64 partition_align;
+		__le16 info_event_log_size;
+		__le16 warning_event_log_size;
+		__le16 failure_event_log_size;
+		__le16 fatal_event_log_size;
+		__le32 lsa_size;
+		u8 poison_list_max_mer[3];
+		__le16 inject_poison_limit;
+		u8 poison_caps;
+		u8 qos_telemetry_caps;
+	} __packed id;
+	int rc;
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0, &id,
+				   sizeof(id));
+	if (rc < 0)
+		return rc;
+
+	cxlm->total_bytes =
+		le64_to_cpu(id.total_capacity) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->volatile_only_bytes =
+		le64_to_cpu(id.volatile_capacity) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->persistent_only_bytes =
+		le64_to_cpu(id.persistent_capacity) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->partition_align_bytes =
+		le64_to_cpu(id.partition_align) * CXL_CAPACITY_MULTIPLIER;
+
+	dev_dbg(cxlm->dev,
+		"Identify Memory Device\n"
+		"     total_bytes = %#llx\n"
+		"     volatile_only_bytes = %#llx\n"
+		"     persistent_only_bytes = %#llx\n"
+		"     partition_align_bytes = %#llx\n",
+		cxlm->total_bytes, cxlm->volatile_only_bytes,
+		cxlm->persistent_only_bytes, cxlm->partition_align_bytes);
+
+	cxlm->lsa_size = le32_to_cpu(id.lsa_size);
+	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_identify);
+
+int cxl_mem_create_range_info(struct cxl_mem *cxlm)
+{
+	int rc;
+
+	if (cxlm->partition_align_bytes == 0) {
+		cxlm->ram_range.start = 0;
+		cxlm->ram_range.end = cxlm->volatile_only_bytes - 1;
+		cxlm->pmem_range.start = cxlm->volatile_only_bytes;
+		cxlm->pmem_range.end = cxlm->volatile_only_bytes +
+				       cxlm->persistent_only_bytes - 1;
+		return 0;
+	}
+
+	rc = cxl_mem_get_partition_info(cxlm);
+	if (rc) {
+		dev_err(cxlm->dev, "Failed to query partition information\n");
+		return rc;
+	}
+
+	dev_dbg(cxlm->dev,
+		"Get Partition Info\n"
+		"     active_volatile_bytes = %#llx\n"
+		"     active_persistent_bytes = %#llx\n"
+		"     next_volatile_bytes = %#llx\n"
+		"     next_persistent_bytes = %#llx\n",
+		cxlm->active_volatile_bytes, cxlm->active_persistent_bytes,
+		cxlm->next_volatile_bytes, cxlm->next_persistent_bytes);
+
+	cxlm->ram_range.start = 0;
+	cxlm->ram_range.end = cxlm->active_volatile_bytes - 1;
+
+	cxlm->pmem_range.start = cxlm->active_volatile_bytes;
+	cxlm->pmem_range.end =
+		cxlm->active_volatile_bytes + cxlm->active_persistent_bytes - 1;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_create_range_info);
+
+struct cxl_mem *cxl_mem_create(struct device *dev)
+{
+	struct cxl_mem *cxlm;
+
+	cxlm = devm_kzalloc(dev, sizeof(*cxlm), GFP_KERNEL);
+	if (!cxlm)
+		return ERR_PTR(-ENOMEM);
+
+	mutex_init(&cxlm->mbox_mutex);
+	cxlm->dev = dev;
+	cxlm->enabled_cmds =
+		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
+				   sizeof(unsigned long),
+				   GFP_KERNEL | __GFP_ZERO);
+	if (!cxlm->enabled_cmds)
+		return ERR_PTR(-ENOMEM);
+
+	return cxlm;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_create);
+
+static struct dentry *cxl_debugfs;
+
+void __init cxl_mbox_init(void)
+{
+	struct dentry *mbox_debugfs;
+
+	cxl_debugfs = debugfs_create_dir("cxl", NULL);
+	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
+	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
+			    &cxl_raw_allow_all);
+}
+
+void cxl_mbox_exit(void)
+{
+	debugfs_remove_recursive(cxl_debugfs);
+}
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 40789558f8c2..a2a9691568af 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -8,6 +8,8 @@
 #include <cxlmem.h>
 #include "core.h"
 
+static DECLARE_RWSEM(cxl_memdev_rwsem);
+
 /*
  * An entire PCI topology full of devices should be enough for any
  * config
@@ -132,16 +134,21 @@ static const struct device_type cxl_memdev_type = {
 	.groups = cxl_memdev_attribute_groups,
 };
 
+static void cxl_memdev_shutdown(struct device *dev)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+
+	down_write(&cxl_memdev_rwsem);
+	cxlmd->cxlm = NULL;
+	up_write(&cxl_memdev_rwsem);
+}
+
 static void cxl_memdev_unregister(void *_cxlmd)
 {
 	struct cxl_memdev *cxlmd = _cxlmd;
 	struct device *dev = &cxlmd->dev;
-	struct cdev *cdev = &cxlmd->cdev;
-	const struct cdevm_file_operations *cdevm_fops;
-
-	cdevm_fops = container_of(cdev->ops, typeof(*cdevm_fops), fops);
-	cdevm_fops->shutdown(dev);
 
+	cxl_memdev_shutdown(dev);
 	cdev_device_del(&cxlmd->cdev, dev);
 	put_device(dev);
 }
@@ -180,16 +187,72 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
 	return ERR_PTR(rc);
 }
 
+static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd,
+			       unsigned long arg)
+{
+	switch (cmd) {
+	case CXL_MEM_QUERY_COMMANDS:
+		return cxl_query_cmd(cxlmd, (void __user *)arg);
+	case CXL_MEM_SEND_COMMAND:
+		return cxl_send_cmd(cxlmd, (void __user *)arg);
+	default:
+		return -ENOTTY;
+	}
+}
+
+static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
+			     unsigned long arg)
+{
+	struct cxl_memdev *cxlmd = file->private_data;
+	int rc = -ENXIO;
+
+	down_read(&cxl_memdev_rwsem);
+	if (cxlmd->cxlm)
+		rc = __cxl_memdev_ioctl(cxlmd, cmd, arg);
+	up_read(&cxl_memdev_rwsem);
+
+	return rc;
+}
+
+static int cxl_memdev_open(struct inode *inode, struct file *file)
+{
+	struct cxl_memdev *cxlmd =
+		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
+
+	get_device(&cxlmd->dev);
+	file->private_data = cxlmd;
+
+	return 0;
+}
+
+static int cxl_memdev_release_file(struct inode *inode, struct file *file)
+{
+	struct cxl_memdev *cxlmd =
+		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
+
+	put_device(&cxlmd->dev);
+
+	return 0;
+}
+
+static const struct file_operations cxl_memdev_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = cxl_memdev_ioctl,
+	.open = cxl_memdev_open,
+	.release = cxl_memdev_release_file,
+	.compat_ioctl = compat_ptr_ioctl,
+	.llseek = noop_llseek,
+};
+
 struct cxl_memdev *
-devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
-		    const struct cdevm_file_operations *cdevm_fops)
+devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm)
 {
 	struct cxl_memdev *cxlmd;
 	struct device *dev;
 	struct cdev *cdev;
 	int rc;
 
-	cxlmd = cxl_memdev_alloc(cxlm, &cdevm_fops->fops);
+	cxlmd = cxl_memdev_alloc(cxlm, &cxl_memdev_fops);
 	if (IS_ERR(cxlmd))
 		return cxlmd;
 
@@ -219,7 +282,7 @@ devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
 	 * The cdev was briefly live, shutdown any ioctl operations that
 	 * saw that state.
 	 */
-	cdevm_fops->shutdown(dev);
+	cxl_memdev_shutdown(dev);
 	put_device(dev);
 	return ERR_PTR(rc);
 }
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index a56d8f26a157..b7122ded3a04 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -2,6 +2,7 @@
 /* Copyright(c) 2020-2021 Intel Corporation. */
 #ifndef __CXL_MEM_H__
 #define __CXL_MEM_H__
+#include <uapi/linux/cxl_mem.h>
 #include <linux/cdev.h>
 #include "cxl.h"
 
@@ -28,21 +29,6 @@
 	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
 	 CXLMDEV_RESET_NEEDED_NOT)
 
-/**
- * struct cdevm_file_operations - devm coordinated cdev file operations
- * @fops: file operations that are synchronized against @shutdown
- * @shutdown: disconnect driver data
- *
- * @shutdown is invoked in the devres release path to disconnect any
- * driver instance data from @dev. It assumes synchronization with any
- * fops operation that requires driver data. After @shutdown an
- * operation may only reference @device data.
- */
-struct cdevm_file_operations {
-	struct file_operations fops;
-	void (*shutdown)(struct device *dev);
-};
-
 /**
  * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
  * @dev: driver core device object
@@ -62,12 +48,11 @@ static inline struct cxl_memdev *to_cxl_memdev(struct device *dev)
 	return container_of(dev, struct cxl_memdev, dev);
 }
 
-struct cxl_memdev *
-devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
-		    const struct cdevm_file_operations *cdevm_fops);
+struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
+				       struct cxl_mem *cxlm);
 
 /**
- * struct mbox_cmd - A command to be submitted to hardware.
+ * struct cxl_mbox_cmd - A command to be submitted to hardware.
  * @opcode: (input) The command set and command submitted to hardware.
  * @payload_in: (input) Pointer to the input payload.
  * @payload_out: (output) Pointer to the output payload. Must be allocated by
@@ -147,4 +132,62 @@ struct cxl_mem {
 
 	int (*mbox_send)(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd);
 };
+
+enum cxl_opcode {
+	CXL_MBOX_OP_INVALID		= 0x0000,
+	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
+	CXL_MBOX_OP_GET_FW_INFO		= 0x0200,
+	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
+	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
+	CXL_MBOX_OP_GET_LOG		= 0x0401,
+	CXL_MBOX_OP_IDENTIFY		= 0x4000,
+	CXL_MBOX_OP_GET_PARTITION_INFO	= 0x4100,
+	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
+	CXL_MBOX_OP_GET_LSA		= 0x4102,
+	CXL_MBOX_OP_SET_LSA		= 0x4103,
+	CXL_MBOX_OP_GET_HEALTH_INFO	= 0x4200,
+	CXL_MBOX_OP_GET_ALERT_CONFIG	= 0x4201,
+	CXL_MBOX_OP_SET_ALERT_CONFIG	= 0x4202,
+	CXL_MBOX_OP_GET_SHUTDOWN_STATE	= 0x4203,
+	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
+	CXL_MBOX_OP_GET_POISON		= 0x4300,
+	CXL_MBOX_OP_INJECT_POISON	= 0x4301,
+	CXL_MBOX_OP_CLEAR_POISON	= 0x4302,
+	CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS	= 0x4303,
+	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
+	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
+	CXL_MBOX_OP_MAX			= 0x10000
+};
+
+/**
+ * struct cxl_mem_command - Driver representation of a memory device command
+ * @info: Command information as it exists for the UAPI
+ * @opcode: The actual bits used for the mailbox protocol
+ * @flags: Set of flags affecting driver behavior.
+ *
+ *  * %CXL_CMD_FLAG_FORCE_ENABLE: In cases of error, commands with this flag
+ *    will be enabled by the driver regardless of what hardware may have
+ *    advertised.
+ *
+ * The cxl_mem_command is the driver's internal representation of commands that
+ * are supported by the driver. Some of these commands may not be supported by
+ * the hardware. The driver will use @info to validate the fields passed in by
+ * the user then submit the @opcode to the hardware.
+ *
+ * See struct cxl_command_info.
+ */
+struct cxl_mem_command {
+	struct cxl_command_info info;
+	enum cxl_opcode opcode;
+	u32 flags;
+#define CXL_CMD_FLAG_NONE 0
+#define CXL_CMD_FLAG_FORCE_ENABLE BIT(0)
+};
+
+int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, void *in,
+			  size_t in_size, void *out, size_t out_size);
+int cxl_mem_identify(struct cxl_mem *cxlm);
+int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm);
+int cxl_mem_create_range_info(struct cxl_mem *cxlm);
+struct cxl_mem *cxl_mem_create(struct device *dev);
 #endif /* __CXL_MEM_H__ */
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 27b8c40c9685..b8075b941a3a 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -1,17 +1,12 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
-#include <uapi/linux/cxl_mem.h>
-#include <linux/security.h>
-#include <linux/debugfs.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
 #include <linux/module.h>
 #include <linux/sizes.h>
 #include <linux/mutex.h>
 #include <linux/list.h>
-#include <linux/cdev.h>
-#include <linux/idr.h>
 #include <linux/pci.h>
 #include <linux/io.h>
-#include <linux/io-64-nonatomic-lo-hi.h>
 #include "cxlmem.h"
 #include "pci.h"
 #include "cxl.h"
@@ -38,162 +33,6 @@
 /* CXL 2.0 - 8.2.8.4 */
 #define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
 
-enum opcode {
-	CXL_MBOX_OP_INVALID		= 0x0000,
-	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
-	CXL_MBOX_OP_GET_FW_INFO		= 0x0200,
-	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
-	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
-	CXL_MBOX_OP_GET_LOG		= 0x0401,
-	CXL_MBOX_OP_IDENTIFY		= 0x4000,
-	CXL_MBOX_OP_GET_PARTITION_INFO	= 0x4100,
-	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
-	CXL_MBOX_OP_GET_LSA		= 0x4102,
-	CXL_MBOX_OP_SET_LSA		= 0x4103,
-	CXL_MBOX_OP_GET_HEALTH_INFO	= 0x4200,
-	CXL_MBOX_OP_GET_ALERT_CONFIG	= 0x4201,
-	CXL_MBOX_OP_SET_ALERT_CONFIG	= 0x4202,
-	CXL_MBOX_OP_GET_SHUTDOWN_STATE	= 0x4203,
-	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
-	CXL_MBOX_OP_GET_POISON		= 0x4300,
-	CXL_MBOX_OP_INJECT_POISON	= 0x4301,
-	CXL_MBOX_OP_CLEAR_POISON	= 0x4302,
-	CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS	= 0x4303,
-	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
-	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
-	CXL_MBOX_OP_MAX			= 0x10000
-};
-
-static DECLARE_RWSEM(cxl_memdev_rwsem);
-static struct dentry *cxl_debugfs;
-static bool cxl_raw_allow_all;
-
-enum {
-	CEL_UUID,
-	VENDOR_DEBUG_UUID,
-};
-
-/* See CXL 2.0 Table 170. Get Log Input Payload */
-static const uuid_t log_uuid[] = {
-	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
-			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
-	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
-					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86),
-};
-
-/**
- * struct cxl_mem_command - Driver representation of a memory device command
- * @info: Command information as it exists for the UAPI
- * @opcode: The actual bits used for the mailbox protocol
- * @flags: Set of flags effecting driver behavior.
- *
- *  * %CXL_CMD_FLAG_FORCE_ENABLE: In cases of error, commands with this flag
- *    will be enabled by the driver regardless of what hardware may have
- *    advertised.
- *
- * The cxl_mem_command is the driver's internal representation of commands that
- * are supported by the driver. Some of these commands may not be supported by
- * the hardware. The driver will use @info to validate the fields passed in by
- * the user then submit the @opcode to the hardware.
- *
- * See struct cxl_command_info.
- */
-struct cxl_mem_command {
-	struct cxl_command_info info;
-	enum opcode opcode;
-	u32 flags;
-#define CXL_CMD_FLAG_NONE 0
-#define CXL_CMD_FLAG_FORCE_ENABLE BIT(0)
-};
-
-#define CXL_CMD(_id, sin, sout, _flags)                                        \
-	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
-	.info =	{                                                              \
-			.id = CXL_MEM_COMMAND_ID_##_id,                        \
-			.size_in = sin,                                        \
-			.size_out = sout,                                      \
-		},                                                             \
-	.opcode = CXL_MBOX_OP_##_id,                                           \
-	.flags = _flags,                                                       \
-	}
-
-/*
- * This table defines the supported mailbox commands for the driver. This table
- * is made up of a UAPI structure. Non-negative values as parameters in the
- * table will be validated against the user's input. For example, if size_in is
- * 0, and the user passed in 1, it is an error.
- */
-static struct cxl_mem_command mem_commands[CXL_MEM_COMMAND_ID_MAX] = {
-	CXL_CMD(IDENTIFY, 0, 0x43, CXL_CMD_FLAG_FORCE_ENABLE),
-#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
-	CXL_CMD(RAW, ~0, ~0, 0),
-#endif
-	CXL_CMD(GET_SUPPORTED_LOGS, 0, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
-	CXL_CMD(GET_FW_INFO, 0, 0x50, 0),
-	CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0),
-	CXL_CMD(GET_LSA, 0x8, ~0, 0),
-	CXL_CMD(GET_HEALTH_INFO, 0, 0x12, 0),
-	CXL_CMD(GET_LOG, 0x18, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
-	CXL_CMD(SET_PARTITION_INFO, 0x0a, 0, 0),
-	CXL_CMD(SET_LSA, ~0, 0, 0),
-	CXL_CMD(GET_ALERT_CONFIG, 0, 0x10, 0),
-	CXL_CMD(SET_ALERT_CONFIG, 0xc, 0, 0),
-	CXL_CMD(GET_SHUTDOWN_STATE, 0, 0x1, 0),
-	CXL_CMD(SET_SHUTDOWN_STATE, 0x1, 0, 0),
-	CXL_CMD(GET_POISON, 0x10, ~0, 0),
-	CXL_CMD(INJECT_POISON, 0x8, 0, 0),
-	CXL_CMD(CLEAR_POISON, 0x48, 0, 0),
-	CXL_CMD(GET_SCAN_MEDIA_CAPS, 0x10, 0x4, 0),
-	CXL_CMD(SCAN_MEDIA, 0x11, 0, 0),
-	CXL_CMD(GET_SCAN_MEDIA, 0, ~0, 0),
-};
-
-/*
- * Commands that RAW doesn't permit. The rationale for each:
- *
- * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
- * coordination of transaction timeout values at the root bridge level.
- *
- * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
- * and needs to be coordinated with HDM updates.
- *
- * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
- * driver and any writes from userspace invalidates those contents.
- *
- * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
- * to the device after it is marked clean, userspace can not make that
- * assertion.
- *
- * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
- * is kept up to date with patrol notifications and error management.
- */
-static u16 cxl_disabled_raw_commands[] = {
-	CXL_MBOX_OP_ACTIVATE_FW,
-	CXL_MBOX_OP_SET_PARTITION_INFO,
-	CXL_MBOX_OP_SET_LSA,
-	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
-	CXL_MBOX_OP_SCAN_MEDIA,
-	CXL_MBOX_OP_GET_SCAN_MEDIA,
-};
-
-/*
- * Command sets that RAW doesn't permit. All opcodes in this set are
- * disabled because they pass plain text security payloads over the
- * user/kernel boundary. This functionality is intended to be wrapped
- * behind the keys ABI which allows for encrypted payloads in the UAPI
- */
-static u8 security_command_sets[] = {
-	0x44, /* Sanitize */
-	0x45, /* Persistent Memory Data-at-rest Security */
-	0x46, /* Security Passthrough */
-};
-
-#define cxl_for_each_cmd(cmd)                                                  \
-	for ((cmd) = &mem_commands[0];                                         \
-	     ((cmd) - mem_commands) < ARRAY_SIZE(mem_commands); (cmd)++)
-
-#define cxl_cmd_count ARRAY_SIZE(mem_commands)
-
 static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 {
 	const unsigned long start = jiffies;
@@ -216,16 +55,6 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 	return 0;
 }
 
-static bool cxl_is_security_command(u16 opcode)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
-		if (security_command_sets[i] == (opcode >> 8))
-			return true;
-	return false;
-}
-
 static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 				 struct cxl_mbox_cmd *mbox_cmd)
 {
@@ -447,433 +276,6 @@ static int cxl_pci_mbox_send(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
 	return rc;
 }
 
-/**
- * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace.
- * @cxlm: The CXL memory device to communicate with.
- * @cmd: The validated command.
- * @in_payload: Pointer to userspace's input payload.
- * @out_payload: Pointer to userspace's output payload.
- * @size_out: (Input) Max payload size to copy out.
- *            (Output) Payload size hardware generated.
- * @retval: Hardware generated return code from the operation.
- *
- * Return:
- *  * %0	- Mailbox transaction succeeded. This implies the mailbox
- *		  protocol completed successfully not that the operation itself
- *		  was successful.
- *  * %-ENOMEM  - Couldn't allocate a bounce buffer.
- *  * %-EFAULT	- Something happened with copy_to/from_user.
- *  * %-EINTR	- Mailbox acquisition interrupted.
- *  * %-EXXX	- Transaction level failures.
- *
- * Creates the appropriate mailbox command and dispatches it on behalf of a
- * userspace request. The input and output payloads are copied between
- * userspace.
- *
- * See cxl_send_cmd().
- */
-static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
-					const struct cxl_mem_command *cmd,
-					u64 in_payload, u64 out_payload,
-					s32 *size_out, u32 *retval)
-{
-	struct device *dev = cxlm->dev;
-	struct cxl_mbox_cmd mbox_cmd = {
-		.opcode = cmd->opcode,
-		.size_in = cmd->info.size_in,
-		.size_out = cmd->info.size_out,
-	};
-	int rc;
-
-	if (cmd->info.size_out) {
-		mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL);
-		if (!mbox_cmd.payload_out)
-			return -ENOMEM;
-	}
-
-	if (cmd->info.size_in) {
-		mbox_cmd.payload_in = vmemdup_user(u64_to_user_ptr(in_payload),
-						   cmd->info.size_in);
-		if (IS_ERR(mbox_cmd.payload_in)) {
-			kvfree(mbox_cmd.payload_out);
-			return PTR_ERR(mbox_cmd.payload_in);
-		}
-	}
-
-	dev_dbg(dev,
-		"Submitting %s command for user\n"
-		"\topcode: %x\n"
-		"\tsize: %ub\n",
-		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
-		cmd->info.size_in);
-
-	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
-		      "raw command path used\n");
-
-	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
-	if (rc)
-		goto out;
-
-	/*
-	 * @size_out contains the max size that's allowed to be written back out
-	 * to userspace. While the payload may have written more output than
-	 * this it will have to be ignored.
-	 */
-	if (mbox_cmd.size_out) {
-		dev_WARN_ONCE(dev, mbox_cmd.size_out > *size_out,
-			      "Invalid return size\n");
-		if (copy_to_user(u64_to_user_ptr(out_payload),
-				 mbox_cmd.payload_out, mbox_cmd.size_out)) {
-			rc = -EFAULT;
-			goto out;
-		}
-	}
-
-	*size_out = mbox_cmd.size_out;
-	*retval = mbox_cmd.return_code;
-
-out:
-	kvfree(mbox_cmd.payload_in);
-	kvfree(mbox_cmd.payload_out);
-	return rc;
-}
-
-static bool cxl_mem_raw_command_allowed(u16 opcode)
-{
-	int i;
-
-	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
-		return false;
-
-	if (security_locked_down(LOCKDOWN_NONE))
-		return false;
-
-	if (cxl_raw_allow_all)
-		return true;
-
-	if (cxl_is_security_command(opcode))
-		return false;
-
-	for (i = 0; i < ARRAY_SIZE(cxl_disabled_raw_commands); i++)
-		if (cxl_disabled_raw_commands[i] == opcode)
-			return false;
-
-	return true;
-}
-
-/**
- * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
- * @cxlm: &struct cxl_mem device whose mailbox will be used.
- * @send_cmd: &struct cxl_send_command copied in from userspace.
- * @out_cmd: Sanitized and populated &struct cxl_mem_command.
- *
- * Return:
- *  * %0	- @out_cmd is ready to send.
- *  * %-ENOTTY	- Invalid command specified.
- *  * %-EINVAL	- Reserved fields or invalid values were used.
- *  * %-ENOMEM	- Input or output buffer wasn't sized properly.
- *  * %-EPERM	- Attempted to use a protected command.
- *
- * The result of this command is a fully validated command in @out_cmd that is
- * safe to send to the hardware.
- *
- * See handle_mailbox_cmd_from_user()
- */
-static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
-				      const struct cxl_send_command *send_cmd,
-				      struct cxl_mem_command *out_cmd)
-{
-	const struct cxl_command_info *info;
-	struct cxl_mem_command *c;
-
-	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
-		return -ENOTTY;
-
-	/*
-	 * The user can never specify an input payload larger than what hardware
-	 * supports, but output can be arbitrarily large (simply write out as
-	 * much data as the hardware provides).
-	 */
-	if (send_cmd->in.size > cxlm->payload_size)
-		return -EINVAL;
-
-	/*
-	 * Checks are bypassed for raw commands but a WARN/taint will occur
-	 * later in the callchain
-	 */
-	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
-		const struct cxl_mem_command temp = {
-			.info = {
-				.id = CXL_MEM_COMMAND_ID_RAW,
-				.flags = 0,
-				.size_in = send_cmd->in.size,
-				.size_out = send_cmd->out.size,
-			},
-			.opcode = send_cmd->raw.opcode
-		};
-
-		if (send_cmd->raw.rsvd)
-			return -EINVAL;
-
-		/*
-		 * Unlike supported commands, the output size of RAW commands
-		 * gets passed along without further checking, so it must be
-		 * validated here.
-		 */
-		if (send_cmd->out.size > cxlm->payload_size)
-			return -EINVAL;
-
-		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
-			return -EPERM;
-
-		memcpy(out_cmd, &temp, sizeof(temp));
-
-		return 0;
-	}
-
-	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
-		return -EINVAL;
-
-	if (send_cmd->rsvd)
-		return -EINVAL;
-
-	if (send_cmd->in.rsvd || send_cmd->out.rsvd)
-		return -EINVAL;
-
-	/* Convert user's command into the internal representation */
-	c = &mem_commands[send_cmd->id];
-	info = &c->info;
-
-	/* Check that the command is enabled for hardware */
-	if (!test_bit(info->id, cxlm->enabled_cmds))
-		return -ENOTTY;
-
-	/* Check the input buffer is the expected size */
-	if (info->size_in >= 0 && info->size_in != send_cmd->in.size)
-		return -ENOMEM;
-
-	/* Check the output buffer is at least large enough */
-	if (info->size_out >= 0 && send_cmd->out.size < info->size_out)
-		return -ENOMEM;
-
-	memcpy(out_cmd, c, sizeof(*c));
-	out_cmd->info.size_in = send_cmd->in.size;
-	/*
-	 * XXX: out_cmd->info.size_out will be controlled by the driver, and the
-	 * specified number of bytes @send_cmd->out.size will be copied back out
-	 * to userspace.
-	 */
-
-	return 0;
-}
-
-static int cxl_query_cmd(struct cxl_memdev *cxlmd,
-			 struct cxl_mem_query_commands __user *q)
-{
-	struct device *dev = &cxlmd->dev;
-	struct cxl_mem_command *cmd;
-	u32 n_commands;
-	int j = 0;
-
-	dev_dbg(dev, "Query IOCTL\n");
-
-	if (get_user(n_commands, &q->n_commands))
-		return -EFAULT;
-
-	/* returns the total number if 0 elements are requested. */
-	if (n_commands == 0)
-		return put_user(cxl_cmd_count, &q->n_commands);
-
-	/*
-	 * otherwise, return max(n_commands, total commands) cxl_command_info
-	 * structures.
-	 */
-	cxl_for_each_cmd(cmd) {
-		const struct cxl_command_info *info = &cmd->info;
-
-		if (copy_to_user(&q->commands[j++], info, sizeof(*info)))
-			return -EFAULT;
-
-		if (j == n_commands)
-			break;
-	}
-
-	return 0;
-}
-
-static int cxl_send_cmd(struct cxl_memdev *cxlmd,
-			struct cxl_send_command __user *s)
-{
-	struct cxl_mem *cxlm = cxlmd->cxlm;
-	struct device *dev = &cxlmd->dev;
-	struct cxl_send_command send;
-	struct cxl_mem_command c;
-	int rc;
-
-	dev_dbg(dev, "Send IOCTL\n");
-
-	if (copy_from_user(&send, s, sizeof(send)))
-		return -EFAULT;
-
-	rc = cxl_validate_cmd_from_user(cxlmd->cxlm, &send, &c);
-	if (rc)
-		return rc;
-
-	/* Prepare to handle a full payload for variable sized output */
-	if (c.info.size_out < 0)
-		c.info.size_out = cxlm->payload_size;
-
-	rc = handle_mailbox_cmd_from_user(cxlm, &c, send.in.payload,
-					  send.out.payload, &send.out.size,
-					  &send.retval);
-	if (rc)
-		return rc;
-
-	if (copy_to_user(s, &send, sizeof(send)))
-		return -EFAULT;
-
-	return 0;
-}
-
-static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd,
-			       unsigned long arg)
-{
-	switch (cmd) {
-	case CXL_MEM_QUERY_COMMANDS:
-		return cxl_query_cmd(cxlmd, (void __user *)arg);
-	case CXL_MEM_SEND_COMMAND:
-		return cxl_send_cmd(cxlmd, (void __user *)arg);
-	default:
-		return -ENOTTY;
-	}
-}
-
-static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
-			     unsigned long arg)
-{
-	struct cxl_memdev *cxlmd = file->private_data;
-	int rc = -ENXIO;
-
-	down_read(&cxl_memdev_rwsem);
-	if (cxlmd->cxlm)
-		rc = __cxl_memdev_ioctl(cxlmd, cmd, arg);
-	up_read(&cxl_memdev_rwsem);
-
-	return rc;
-}
-
-static int cxl_memdev_open(struct inode *inode, struct file *file)
-{
-	struct cxl_memdev *cxlmd =
-		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
-
-	get_device(&cxlmd->dev);
-	file->private_data = cxlmd;
-
-	return 0;
-}
-
-static int cxl_memdev_release_file(struct inode *inode, struct file *file)
-{
-	struct cxl_memdev *cxlmd =
-		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
-
-	put_device(&cxlmd->dev);
-
-	return 0;
-}
-
-static void cxl_memdev_shutdown(struct device *dev)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-
-	down_write(&cxl_memdev_rwsem);
-	cxlmd->cxlm = NULL;
-	up_write(&cxl_memdev_rwsem);
-}
-
-static const struct cdevm_file_operations cxl_memdev_fops = {
-	.fops = {
-		.owner = THIS_MODULE,
-		.unlocked_ioctl = cxl_memdev_ioctl,
-		.open = cxl_memdev_open,
-		.release = cxl_memdev_release_file,
-		.compat_ioctl = compat_ptr_ioctl,
-		.llseek = noop_llseek,
-	},
-	.shutdown = cxl_memdev_shutdown,
-};
-
-static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
-{
-	struct cxl_mem_command *c;
-
-	cxl_for_each_cmd(c)
-		if (c->opcode == opcode)
-			return c;
-
-	return NULL;
-}
-
-/**
- * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
- * @cxlm: The CXL memory device to communicate with.
- * @opcode: Opcode for the mailbox command.
- * @in: The input payload for the mailbox command.
- * @in_size: The length of the input payload
- * @out: Caller allocated buffer for the output.
- * @out_size: Expected size of output.
- *
- * Context: Any context. Will acquire and release mbox_mutex.
- * Return:
- *  * %>=0	- Number of bytes returned in @out.
- *  * %-E2BIG	- Payload is too large for hardware.
- *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
- *  * %-EFAULT	- Hardware error occurred.
- *  * %-ENXIO	- Command completed, but device reported an error.
- *  * %-EIO	- Unexpected output size.
- *
- * Mailbox commands may execute successfully yet the device itself reported an
- * error. While this distinction can be useful for commands from userspace, the
- * kernel will only be able to use results when both are successful.
- *
- * See __cxl_mem_mbox_send_cmd()
- */
-static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode,
-				 void *in, size_t in_size,
-				 void *out, size_t out_size)
-{
-	const struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
-	struct cxl_mbox_cmd mbox_cmd = {
-		.opcode = opcode,
-		.payload_in = in,
-		.size_in = in_size,
-		.size_out = out_size,
-		.payload_out = out,
-	};
-	int rc;
-
-	if (out_size > cxlm->payload_size)
-		return -E2BIG;
-
-	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
-	if (rc)
-		return rc;
-
-	/* TODO: Map return code to proper kernel style errno */
-	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
-		return -ENXIO;
-
-	/*
-	 * Variable sized commands can't be validated and so it's up to the
-	 * caller to do that if they wish.
-	 */
-	if (cmd->info.size_out >= 0 && mbox_cmd.size_out != out_size)
-		return -EIO;
-
-	return 0;
-}
-
 static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
 {
 	const int cap = readl(cxlm->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
@@ -902,31 +304,6 @@ static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
 	return 0;
 }
 
-static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev)
-{
-	struct device *dev = &pdev->dev;
-	struct cxl_mem *cxlm;
-
-	cxlm = devm_kzalloc(dev, sizeof(*cxlm), GFP_KERNEL);
-	if (!cxlm) {
-		dev_err(dev, "No memory available\n");
-		return ERR_PTR(-ENOMEM);
-	}
-
-	mutex_init(&cxlm->mbox_mutex);
-	cxlm->dev = dev;
-	cxlm->enabled_cmds =
-		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
-				   sizeof(unsigned long),
-				   GFP_KERNEL | __GFP_ZERO);
-	if (!cxlm->enabled_cmds) {
-		dev_err(dev, "No memory available for bitmap\n");
-		return ERR_PTR(-ENOMEM);
-	}
-
-	return cxlm;
-}
-
 static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
 					  u8 bar, u64 offset)
 {
@@ -1136,313 +513,6 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 	return ret;
 }
 
-static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
-{
-	u32 remaining = size;
-	u32 offset = 0;
-
-	while (remaining) {
-		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
-		struct cxl_mbox_get_log {
-			uuid_t uuid;
-			__le32 offset;
-			__le32 length;
-		} __packed log = {
-			.uuid = *uuid,
-			.offset = cpu_to_le32(offset),
-			.length = cpu_to_le32(xfer_size)
-		};
-		int rc;
-
-		rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_LOG, &log,
-					   sizeof(log), out, xfer_size);
-		if (rc < 0)
-			return rc;
-
-		out += xfer_size;
-		remaining -= xfer_size;
-		offset += xfer_size;
-	}
-
-	return 0;
-}
-
-/**
- * cxl_walk_cel() - Walk through the Command Effects Log.
- * @cxlm: Device.
- * @size: Length of the Command Effects Log.
- * @cel: CEL
- *
- * Iterate over each entry in the CEL and determine if the driver supports the
- * command. If so, the command is enabled for the device and can be used later.
- */
-static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
-{
-	struct cel_entry {
-		__le16 opcode;
-		__le16 effect;
-	} __packed * cel_entry;
-	const int cel_entries = size / sizeof(*cel_entry);
-	int i;
-
-	cel_entry = (struct cel_entry *)cel;
-
-	for (i = 0; i < cel_entries; i++) {
-		u16 opcode = le16_to_cpu(cel_entry[i].opcode);
-		struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
-
-		if (!cmd) {
-			dev_dbg(cxlm->dev,
-				"Opcode 0x%04x unsupported by driver", opcode);
-			continue;
-		}
-
-		set_bit(cmd->info.id, cxlm->enabled_cmds);
-	}
-}
-
-struct cxl_mbox_get_supported_logs {
-	__le16 entries;
-	u8 rsvd[6];
-	struct gsl_entry {
-		uuid_t uuid;
-		__le32 size;
-	} __packed entry[];
-} __packed;
-
-static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_mem *cxlm)
-{
-	struct cxl_mbox_get_supported_logs *ret;
-	int rc;
-
-	ret = kvmalloc(cxlm->payload_size, GFP_KERNEL);
-	if (!ret)
-		return ERR_PTR(-ENOMEM);
-
-	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_SUPPORTED_LOGS, NULL,
-				   0, ret, cxlm->payload_size);
-	if (rc < 0) {
-		kvfree(ret);
-		return ERR_PTR(rc);
-	}
-
-	return ret;
-}
-
-/**
- * cxl_mem_get_partition_info - Get partition info
- * @cxlm: The device to act on
- * @active_volatile_bytes: returned active volatile capacity; in bytes
- * @active_persistent_bytes: returned active persistent capacity; in bytes
- * @next_volatile_bytes: return next volatile capacity; in bytes
- * @next_persistent_bytes: return next persistent capacity; in bytes
- *
- * Retrieve the current partition info for the device specified.  The active
- * values are the current capacity in bytes.  If not 0, the 'next' values are
- * the pending values, in bytes, which take affect on next cold reset.
- *
- * Return: 0 if no error: or the result of the mailbox command.
- *
- * See CXL @8.2.9.5.2.1 Get Partition Info
- */
-static int cxl_mem_get_partition_info(struct cxl_mem *cxlm,
-				      u64 *active_volatile_bytes,
-				      u64 *active_persistent_bytes,
-				      u64 *next_volatile_bytes,
-				      u64 *next_persistent_bytes)
-{
-	struct cxl_mbox_get_partition_info {
-		u64 active_volatile_cap;
-		u64 active_persistent_cap;
-		u64 next_volatile_cap;
-		u64 next_persistent_cap;
-	} __packed pi;
-	int rc;
-
-	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_PARTITION_INFO,
-				   NULL, 0, &pi, sizeof(pi));
-
-	if (rc)
-		return rc;
-
-	*active_volatile_bytes = le64_to_cpu(pi.active_volatile_cap);
-	*active_persistent_bytes = le64_to_cpu(pi.active_persistent_cap);
-	*next_volatile_bytes = le64_to_cpu(pi.next_volatile_cap);
-	*next_persistent_bytes = le64_to_cpu(pi.next_volatile_cap);
-
-	*active_volatile_bytes *= CXL_CAPACITY_MULTIPLIER;
-	*active_persistent_bytes *= CXL_CAPACITY_MULTIPLIER;
-	*next_volatile_bytes *= CXL_CAPACITY_MULTIPLIER;
-	*next_persistent_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	return 0;
-}
-
-/**
- * cxl_mem_enumerate_cmds() - Enumerate commands for a device.
- * @cxlm: The device.
- *
- * Returns 0 if enumerate completed successfully.
- *
- * CXL devices have optional support for certain commands. This function will
- * determine the set of supported commands for the hardware and update the
- * enabled_cmds bitmap in the @cxlm.
- */
-static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
-{
-	struct cxl_mbox_get_supported_logs *gsl;
-	struct device *dev = cxlm->dev;
-	struct cxl_mem_command *cmd;
-	int i, rc;
-
-	gsl = cxl_get_gsl(cxlm);
-	if (IS_ERR(gsl))
-		return PTR_ERR(gsl);
-
-	rc = -ENOENT;
-	for (i = 0; i < le16_to_cpu(gsl->entries); i++) {
-		u32 size = le32_to_cpu(gsl->entry[i].size);
-		uuid_t uuid = gsl->entry[i].uuid;
-		u8 *log;
-
-		dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size);
-
-		if (!uuid_equal(&uuid, &log_uuid[CEL_UUID]))
-			continue;
-
-		log = kvmalloc(size, GFP_KERNEL);
-		if (!log) {
-			rc = -ENOMEM;
-			goto out;
-		}
-
-		rc = cxl_xfer_log(cxlm, &uuid, size, log);
-		if (rc) {
-			kvfree(log);
-			goto out;
-		}
-
-		cxl_walk_cel(cxlm, size, log);
-		kvfree(log);
-
-		/* In case CEL was bogus, enable some default commands. */
-		cxl_for_each_cmd(cmd)
-			if (cmd->flags & CXL_CMD_FLAG_FORCE_ENABLE)
-				set_bit(cmd->info.id, cxlm->enabled_cmds);
-
-		/* Found the required CEL */
-		rc = 0;
-	}
-
-out:
-	kvfree(gsl);
-	return rc;
-}
-
-/**
- * cxl_mem_identify() - Send the IDENTIFY command to the device.
- * @cxlm: The device to identify.
- *
- * Return: 0 if identify was executed successfully.
- *
- * This will dispatch the identify command to the device and on success populate
- * structures to be exported to sysfs.
- */
-static int cxl_mem_identify(struct cxl_mem *cxlm)
-{
-	/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
-	struct cxl_mbox_identify {
-		char fw_revision[0x10];
-		__le64 total_capacity;
-		__le64 volatile_capacity;
-		__le64 persistent_capacity;
-		__le64 partition_align;
-		__le16 info_event_log_size;
-		__le16 warning_event_log_size;
-		__le16 failure_event_log_size;
-		__le16 fatal_event_log_size;
-		__le32 lsa_size;
-		u8 poison_list_max_mer[3];
-		__le16 inject_poison_limit;
-		u8 poison_caps;
-		u8 qos_telemetry_caps;
-	} __packed id;
-	int rc;
-
-	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0, &id,
-				   sizeof(id));
-	if (rc < 0)
-		return rc;
-
-	cxlm->total_bytes = le64_to_cpu(id.total_capacity);
-	cxlm->total_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	cxlm->volatile_only_bytes = le64_to_cpu(id.volatile_capacity);
-	cxlm->volatile_only_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	cxlm->persistent_only_bytes = le64_to_cpu(id.persistent_capacity);
-	cxlm->persistent_only_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	cxlm->partition_align_bytes = le64_to_cpu(id.partition_align);
-	cxlm->partition_align_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	dev_dbg(cxlm->dev,
-		"Identify Memory Device\n"
-		"     total_bytes = %#llx\n"
-		"     volatile_only_bytes = %#llx\n"
-		"     persistent_only_bytes = %#llx\n"
-		"     partition_align_bytes = %#llx\n",
-		cxlm->total_bytes, cxlm->volatile_only_bytes,
-		cxlm->persistent_only_bytes, cxlm->partition_align_bytes);
-
-	cxlm->lsa_size = le32_to_cpu(id.lsa_size);
-	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
-
-	return 0;
-}
-
-static int cxl_mem_create_range_info(struct cxl_mem *cxlm)
-{
-	int rc;
-
-	if (cxlm->partition_align_bytes == 0) {
-		cxlm->ram_range.start = 0;
-		cxlm->ram_range.end = cxlm->volatile_only_bytes - 1;
-		cxlm->pmem_range.start = cxlm->volatile_only_bytes;
-		cxlm->pmem_range.end = cxlm->volatile_only_bytes +
-					cxlm->persistent_only_bytes - 1;
-		return 0;
-	}
-
-	rc = cxl_mem_get_partition_info(cxlm,
-					&cxlm->active_volatile_bytes,
-					&cxlm->active_persistent_bytes,
-					&cxlm->next_volatile_bytes,
-					&cxlm->next_persistent_bytes);
-	if (rc < 0) {
-		dev_err(cxlm->dev, "Failed to query partition information\n");
-		return rc;
-	}
-
-	dev_dbg(cxlm->dev,
-		"Get Partition Info\n"
-		"     active_volatile_bytes = %#llx\n"
-		"     active_persistent_bytes = %#llx\n"
-		"     next_volatile_bytes = %#llx\n"
-		"     next_persistent_bytes = %#llx\n",
-		cxlm->active_volatile_bytes, cxlm->active_persistent_bytes,
-		cxlm->next_volatile_bytes, cxlm->next_persistent_bytes);
-
-	cxlm->ram_range.start = 0;
-	cxlm->ram_range.end = cxlm->active_volatile_bytes - 1;
-
-	cxlm->pmem_range.start = cxlm->active_volatile_bytes;
-	cxlm->pmem_range.end = cxlm->active_volatile_bytes +
-				cxlm->active_persistent_bytes - 1;
-
-	return 0;
-}
-
 static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct cxl_memdev *cxlmd;
@@ -1453,7 +523,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
-	cxlm = cxl_mem_create(pdev);
+	cxlm = cxl_mem_create(&pdev->dev);
 	if (IS_ERR(cxlm))
 		return PTR_ERR(cxlm);
 
@@ -1477,7 +547,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
-	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm, &cxl_memdev_fops);
+	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm);
 	if (IS_ERR(cxlmd))
 		return PTR_ERR(cxlmd);
 
@@ -1505,7 +575,6 @@ static struct pci_driver cxl_mem_driver = {
 
 static __init int cxl_mem_init(void)
 {
-	struct dentry *mbox_debugfs;
 	int rc;
 
 	/* Double check the anonymous union trickery in struct cxl_regs */
@@ -1516,17 +585,11 @@ static __init int cxl_mem_init(void)
 	if (rc)
 		return rc;
 
-	cxl_debugfs = debugfs_create_dir("cxl", NULL);
-	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
-	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
-			    &cxl_raw_allow_all);
-
 	return 0;
 }
 
 static __exit void cxl_mem_exit(void)
 {
-	debugfs_remove_recursive(cxl_debugfs);
 	pci_unregister_driver(&cxl_mem_driver);
 }
 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 15/23] cxl/pci: Use module_pci_driver
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (13 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 14/23] cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-09 22:29 ` [PATCH 16/23] cxl/mbox: Convert 'enabled_cmds' to DECLARE_BITMAP Dan Williams
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

Now that cxl_mem_{init,exit} no longer need to manage debugfs, switch
back to the smaller module_pci_driver() form of the boilerplate.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/pci.c |   30 ++++++++----------------------
 1 file changed, 8 insertions(+), 22 deletions(-)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index b8075b941a3a..425e821160b5 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -519,6 +519,13 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	struct cxl_mem *cxlm;
 	int rc;
 
+	/*
+	 * Double check the anonymous union trickery in struct cxl_regs
+	 * FIXME switch to struct_group()
+	 */
+	BUILD_BUG_ON(offsetof(struct cxl_regs, memdev) !=
+		     offsetof(struct cxl_regs, device_regs.memdev));
+
 	rc = pcim_enable_device(pdev);
 	if (rc)
 		return rc;
@@ -573,27 +580,6 @@ static struct pci_driver cxl_mem_driver = {
 	},
 };
 
-static __init int cxl_mem_init(void)
-{
-	int rc;
-
-	/* Double check the anonymous union trickery in struct cxl_regs */
-	BUILD_BUG_ON(offsetof(struct cxl_regs, memdev) !=
-		     offsetof(struct cxl_regs, device_regs.memdev));
-
-	rc = pci_register_driver(&cxl_mem_driver);
-	if (rc)
-		return rc;
-
-	return 0;
-}
-
-static __exit void cxl_mem_exit(void)
-{
-	pci_unregister_driver(&cxl_mem_driver);
-}
-
 MODULE_LICENSE("GPL v2");
-module_init(cxl_mem_init);
-module_exit(cxl_mem_exit);
+module_pci_driver(cxl_mem_driver);
 MODULE_IMPORT_NS(CXL);


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 16/23] cxl/mbox: Convert 'enabled_cmds' to DECLARE_BITMAP
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (14 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 15/23] cxl/pci: Use module_pci_driver Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-09 22:29 ` [PATCH 17/23] cxl/mbox: Add exclusive kernel command support Dan Williams
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

Define enabled_cmds as an embedded member of 'struct cxl_mem' rather
than a pointer to another dynamic allocation.

As this leaves only one user of cxl_cmd_count, just open-code it and
delete the helper.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/mbox.c |   10 +---------
 drivers/cxl/cxlmem.h    |    2 +-
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 40f051956990..23100231e246 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -322,8 +322,6 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
 	return 0;
 }
 
-#define cxl_cmd_count ARRAY_SIZE(cxl_mem_commands)
-
 int cxl_query_cmd(struct cxl_memdev *cxlmd,
 		  struct cxl_mem_query_commands __user *q)
 {
@@ -339,7 +337,7 @@ int cxl_query_cmd(struct cxl_memdev *cxlmd,
 
 	/* returns the total number if 0 elements are requested. */
 	if (n_commands == 0)
-		return put_user(cxl_cmd_count, &q->n_commands);
+		return put_user(ARRAY_SIZE(cxl_mem_commands), &q->n_commands);
 
 	/*
 	 * otherwise, return max(n_commands, total commands) cxl_command_info
@@ -803,12 +801,6 @@ struct cxl_mem *cxl_mem_create(struct device *dev)
 
 	mutex_init(&cxlm->mbox_mutex);
 	cxlm->dev = dev;
-	cxlm->enabled_cmds =
-		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
-				   sizeof(unsigned long),
-				   GFP_KERNEL | __GFP_ZERO);
-	if (!cxlm->enabled_cmds)
-		return ERR_PTR(-ENOMEM);
 
 	return cxlm;
 }
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index b7122ded3a04..df4f3636a999 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -116,7 +116,7 @@ struct cxl_mem {
 	size_t lsa_size;
 	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
 	char firmware_version[0x10];
-	unsigned long *enabled_cmds;
+	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
 
 	struct range pmem_range;
 	struct range ram_range;


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 17/23] cxl/mbox: Add exclusive kernel command support
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (15 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 16/23] cxl/mbox: Convert 'enabled_cmds' to DECLARE_BITMAP Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-10 21:34   ` Ben Widawsky
  2021-08-09 22:29 ` [PATCH 18/23] cxl/pmem: Translate NVDIMM label commands to CXL label commands Dan Williams
                   ` (6 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

The CXL_PMEM driver expects exclusive control of the label storage area
(LSA). Similar to the LIBNVDIMM expectation that the label storage area
is only writable from userspace when the corresponding memory device is
not active in any region, the expectation is that the native CXL_PCI
UAPI path is disabled while the cxl_nvdimm for a given cxl_memdev
device is active in LIBNVDIMM.

Add the ability to toggle the availability of a given command for the
UAPI path. Use that new capability to shut down changes to partitions
and the label storage area while the cxl_nvdimm device is actively
proxying commands for LIBNVDIMM.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/mbox.c |    5 +++++
 drivers/cxl/cxlmem.h    |    2 ++
 drivers/cxl/pmem.c      |   35 +++++++++++++++++++++++++++++------
 3 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 23100231e246..f26962d7cb65 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -409,6 +409,11 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
 		}
 	}
 
+	if (test_bit(cmd->info.id, cxlm->exclusive_cmds)) {
+		rc = -EBUSY;
+		goto out;
+	}
+
 	dev_dbg(dev,
 		"Submitting %s command for user\n"
 		"\topcode: %x\n"
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index df4f3636a999..f6cfe84a064c 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -102,6 +102,7 @@ struct cxl_mbox_cmd {
  * @mbox_mutex: Mutex to synchronize mailbox access.
  * @firmware_version: Firmware version for the memory device.
  * @enabled_cmds: Hardware commands found enabled in CEL.
+ * @exclusive_cmds: Commands that are kernel-internal only
  * @pmem_range: Persistent memory capacity information.
  * @ram_range: Volatile memory capacity information.
  * @mbox_send: @dev specific transport for transmitting mailbox commands
@@ -117,6 +118,7 @@ struct cxl_mem {
 	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
 	char firmware_version[0x10];
 	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
+	DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
 
 	struct range pmem_range;
 	struct range ram_range;
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 9652c3ee41e7..11410df77444 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -16,9 +16,23 @@
  */
 static struct workqueue_struct *cxl_pmem_wq;
 
-static void unregister_nvdimm(void *nvdimm)
+static void unregister_nvdimm(void *_cxl_nvd)
 {
-	nvdimm_delete(nvdimm);
+	struct cxl_nvdimm *cxl_nvd = _cxl_nvd;
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	struct device *dev = &cxl_nvd->dev;
+	struct nvdimm *nvdimm;
+
+	nvdimm = dev_get_drvdata(dev);
+	if (nvdimm)
+		nvdimm_delete(nvdimm);
+
+	mutex_lock(&cxlm->mbox_mutex);
+	clear_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
+	clear_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
+	clear_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
+	mutex_unlock(&cxlm->mbox_mutex);
 }
 
 static int match_nvdimm_bridge(struct device *dev, const void *data)
@@ -39,6 +53,8 @@ static struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
 static int cxl_nvdimm_probe(struct device *dev)
 {
 	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_mem *cxlm = cxlmd->cxlm;
 	struct cxl_nvdimm_bridge *cxl_nvb;
 	unsigned long flags = 0;
 	struct nvdimm *nvdimm;
@@ -52,17 +68,24 @@ static int cxl_nvdimm_probe(struct device *dev)
 	if (!cxl_nvb->nvdimm_bus)
 		goto out;
 
+	mutex_lock(&cxlm->mbox_mutex);
+	set_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
+	set_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
+	set_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
+	mutex_unlock(&cxlm->mbox_mutex);
+
 	set_bit(NDD_LABELING, &flags);
 	nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags, 0, 0,
 			       NULL);
-	if (!nvdimm)
-		goto out;
-
-	rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
+	dev_set_drvdata(dev, nvdimm);
+	rc = devm_add_action_or_reset(dev, unregister_nvdimm, cxl_nvd);
 out:
 	device_unlock(&cxl_nvb->dev);
 	put_device(&cxl_nvb->dev);
 
+	if (!nvdimm && rc == 0)
+		rc = -ENOMEM;
+
 	return rc;
 }
 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 18/23] cxl/pmem: Translate NVDIMM label commands to CXL label commands
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (16 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 17/23] cxl/mbox: Add exclusive kernel command support Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-09 22:29 ` [PATCH 19/23] cxl/pmem: Add support for multiple nvdimm-bridge objects Dan Williams
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

The LIBNVDIMM IOCTL UAPI calls back to the nvdimm-bus provider to
translate the Linux command payload into the device-native command
format. The LIBNVDIMM commands get-config-size, get-config-data, and
set-config-data map to the CXL memory device commands device-identify,
get-lsa, and set-lsa. Recall that the label storage area (LSA) on an
NVDIMM device arranges for the provisioning of namespaces; for CXL, the
LSA additionally provisions regions.

The data from device-identify is already cached in the 'struct cxl_mem'
instance associated with @cxl_nvd, so that payload is simply crafted
from the cache and no CXL command is issued. The conversion for get-lsa
is straightforward, but the conversion for set-lsa requires an
allocation to prepend the set-lsa header to the payload.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/pmem.c |  121 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 117 insertions(+), 4 deletions(-)

diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 11410df77444..3f2b185ff89f 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -54,9 +54,9 @@ static int cxl_nvdimm_probe(struct device *dev)
 {
 	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
 	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	unsigned long flags = 0, cmd_mask = 0;
 	struct cxl_mem *cxlm = cxlmd->cxlm;
 	struct cxl_nvdimm_bridge *cxl_nvb;
-	unsigned long flags = 0;
 	struct nvdimm *nvdimm;
 	int rc = -ENXIO;
 
@@ -75,8 +75,11 @@ static int cxl_nvdimm_probe(struct device *dev)
 	mutex_unlock(&cxlm->mbox_mutex);
 
 	set_bit(NDD_LABELING, &flags);
-	nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags, 0, 0,
-			       NULL);
+	set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
+	set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
+	set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
+	nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags,
+			       cmd_mask, 0, NULL);
 	dev_set_drvdata(dev, nvdimm);
 	rc = devm_add_action_or_reset(dev, unregister_nvdimm, cxl_nvd);
 out:
@@ -95,11 +98,121 @@ static struct cxl_driver cxl_nvdimm_driver = {
 	.id = CXL_DEVICE_NVDIMM,
 };
 
+static int cxl_pmem_get_config_size(struct cxl_mem *cxlm,
+				    struct nd_cmd_get_config_size *cmd,
+				    unsigned int buf_len, int *cmd_rc)
+{
+	if (sizeof(*cmd) > buf_len)
+		return -EINVAL;
+
+	*cmd = (struct nd_cmd_get_config_size) {
+		 .config_size = cxlm->lsa_size,
+		 .max_xfer = cxlm->payload_size,
+	};
+	*cmd_rc = 0;
+
+	return 0;
+}
+
+static int cxl_pmem_get_config_data(struct cxl_mem *cxlm,
+				    struct nd_cmd_get_config_data_hdr *cmd,
+				    unsigned int buf_len, int *cmd_rc)
+{
+	struct cxl_mbox_get_lsa {
+		u32 offset;
+		u32 length;
+	} get_lsa;
+	int rc;
+
+	if (sizeof(*cmd) > buf_len)
+		return -EINVAL;
+	if (struct_size(cmd, out_buf, cmd->in_length) > buf_len)
+		return -EINVAL;
+
+	get_lsa = (struct cxl_mbox_get_lsa) {
+		.offset = cmd->in_offset,
+		.length = cmd->in_length,
+	};
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_LSA, &get_lsa,
+				   sizeof(get_lsa), cmd->out_buf,
+				   cmd->in_length);
+	cmd->status = 0;
+	*cmd_rc = 0;
+
+	return rc;
+}
+
+static int cxl_pmem_set_config_data(struct cxl_mem *cxlm,
+				    struct nd_cmd_set_config_hdr *cmd,
+				    unsigned int buf_len, int *cmd_rc)
+{
+	struct cxl_mbox_set_lsa {
+		u32 offset;
+		u32 reserved;
+		u8 data[];
+	} *set_lsa;
+	int rc;
+
+	if (sizeof(*cmd) > buf_len)
+		return -EINVAL;
+
+	/* 4-byte status follows the input data in the payload */
+	if (struct_size(cmd, in_buf, cmd->in_length) + 4 > buf_len)
+		return -EINVAL;
+
+	set_lsa =
+		kvzalloc(struct_size(set_lsa, data, cmd->in_length), GFP_KERNEL);
+	if (!set_lsa)
+		return -ENOMEM;
+
+	*set_lsa = (struct cxl_mbox_set_lsa) {
+		.offset = cmd->in_offset,
+	};
+	memcpy(set_lsa->data, cmd->in_buf, cmd->in_length);
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_SET_LSA, set_lsa,
+				   struct_size(set_lsa, data, cmd->in_length),
+				   NULL, 0);
+
+	/* set "firmware" status */
+	*(u32 *) &cmd->in_buf[cmd->in_length] = 0;
+	*cmd_rc = 0;
+	kvfree(set_lsa);
+
+	return rc;
+}
+
+static int cxl_pmem_nvdimm_ctl(struct nvdimm *nvdimm, unsigned int cmd,
+			       void *buf, unsigned int buf_len, int *cmd_rc)
+{
+	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+	unsigned long cmd_mask = nvdimm_cmd_mask(nvdimm);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+
+	if (!test_bit(cmd, &cmd_mask))
+		return -ENOTTY;
+
+	switch (cmd) {
+	case ND_CMD_GET_CONFIG_SIZE:
+		return cxl_pmem_get_config_size(cxlm, buf, buf_len, cmd_rc);
+	case ND_CMD_GET_CONFIG_DATA:
+		return cxl_pmem_get_config_data(cxlm, buf, buf_len, cmd_rc);
+	case ND_CMD_SET_CONFIG_DATA:
+		return cxl_pmem_set_config_data(cxlm, buf, buf_len, cmd_rc);
+	default:
+		return -ENOTTY;
+	}
+}
+
 static int cxl_pmem_ctl(struct nvdimm_bus_descriptor *nd_desc,
 			struct nvdimm *nvdimm, unsigned int cmd, void *buf,
 			unsigned int buf_len, int *cmd_rc)
 {
-	return -ENOTTY;
+	if (!nvdimm)
+		return -ENOTTY;
+	return cxl_pmem_nvdimm_ctl(nvdimm, cmd, buf, buf_len, cmd_rc);
 }
 
 static bool online_nvdimm_bus(struct cxl_nvdimm_bridge *cxl_nvb)


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 19/23] cxl/pmem: Add support for multiple nvdimm-bridge objects
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (17 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 18/23] cxl/pmem: Translate NVDIMM label commands to CXL label commands Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-09 22:29 ` [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy Dan Williams
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for a mocked unit test environment for CXL objects, allow
for multiple unique nvdimm-bridge objects.

For now, just allow multiple bridges to be registered. Later, when
multiple bridges are present, cxl_find_nvdimm_bridge() will need
further updates to identify which bridge is associated with which CXL
hierarchy for nvdimm registration.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/pmem.c |   32 +++++++++++++++++++++++++++++++-
 drivers/cxl/cxl.h       |    2 ++
 drivers/cxl/pmem.c      |   15 ---------------
 3 files changed, 33 insertions(+), 16 deletions(-)

diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c
index 69c97cc0d945..ec3e4c642fca 100644
--- a/drivers/cxl/core/pmem.c
+++ b/drivers/cxl/core/pmem.c
@@ -3,15 +3,19 @@
 
 #include <linux/device.h>
 #include <linux/slab.h>
+#include <linux/idr.h>
 #include <cxlmem.h>
 #include <cxl.h>
 
 #include "core.h"
 
+static DEFINE_IDA(cxl_nvdimm_bridge_ida);
+
 static void cxl_nvdimm_bridge_release(struct device *dev)
 {
 	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);
 
+	ida_free(&cxl_nvdimm_bridge_ida, cxl_nvb->id);
 	kfree(cxl_nvb);
 }
 
@@ -35,16 +39,38 @@ struct cxl_nvdimm_bridge *to_cxl_nvdimm_bridge(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(to_cxl_nvdimm_bridge);
 
+static int match_nvdimm_bridge(struct device *dev, const void *data)
+{
+	return dev->type == &cxl_nvdimm_bridge_type;
+}
+
+struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
+{
+	struct device *dev;
+
+	dev = bus_find_device(&cxl_bus_type, NULL, NULL, match_nvdimm_bridge);
+	if (!dev)
+		return NULL;
+	return to_cxl_nvdimm_bridge(dev);
+}
+EXPORT_SYMBOL_GPL(cxl_find_nvdimm_bridge);
+
 static struct cxl_nvdimm_bridge *
 cxl_nvdimm_bridge_alloc(struct cxl_port *port)
 {
 	struct cxl_nvdimm_bridge *cxl_nvb;
 	struct device *dev;
+	int rc;
 
 	cxl_nvb = kzalloc(sizeof(*cxl_nvb), GFP_KERNEL);
 	if (!cxl_nvb)
 		return ERR_PTR(-ENOMEM);
 
+	rc = ida_alloc(&cxl_nvdimm_bridge_ida, GFP_KERNEL);
+	if (rc < 0)
+		goto err;
+	cxl_nvb->id = rc;
+
 	dev = &cxl_nvb->dev;
 	cxl_nvb->port = port;
 	cxl_nvb->state = CXL_NVB_NEW;
@@ -55,6 +81,10 @@ cxl_nvdimm_bridge_alloc(struct cxl_port *port)
 	dev->type = &cxl_nvdimm_bridge_type;
 
 	return cxl_nvb;
+
+err:
+	kfree(cxl_nvb);
+	return ERR_PTR(rc);
 }
 
 static void unregister_nvb(void *_cxl_nvb)
@@ -100,7 +130,7 @@ struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
 		return cxl_nvb;
 
 	dev = &cxl_nvb->dev;
-	rc = dev_set_name(dev, "nvdimm-bridge");
+	rc = dev_set_name(dev, "nvdimm-bridge%d", cxl_nvb->id);
 	if (rc)
 		goto err;
 
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 53927f9fa77e..1b2e816e061e 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -211,6 +211,7 @@ enum cxl_nvdimm_brige_state {
 };
 
 struct cxl_nvdimm_bridge {
+	int id;
 	struct device dev;
 	struct cxl_port *port;
 	struct nvdimm_bus *nvdimm_bus;
@@ -323,4 +324,5 @@ struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
 struct cxl_nvdimm *to_cxl_nvdimm(struct device *dev);
 bool is_cxl_nvdimm(struct device *dev);
 int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd);
+struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void);
 #endif /* __CXL_H__ */
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 3f2b185ff89f..3e3b082478f2 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -35,21 +35,6 @@ static void unregister_nvdimm(void *_cxl_nvd)
 	mutex_unlock(&cxlm->mbox_mutex);
 }
 
-static int match_nvdimm_bridge(struct device *dev, const void *data)
-{
-	return strcmp(dev_name(dev), "nvdimm-bridge") == 0;
-}
-
-static struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
-{
-	struct device *dev;
-
-	dev = bus_find_device(&cxl_bus_type, NULL, NULL, match_nvdimm_bridge);
-	if (!dev)
-		return NULL;
-	return to_cxl_nvdimm_bridge(dev);
-}
-
 static int cxl_nvdimm_probe(struct device *dev)
 {
 	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (18 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 19/23] cxl/pmem: Add support for multiple nvdimm-bridge objects Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-10 21:57   ` Ben Widawsky
  2021-08-09 22:29 ` [PATCH 21/23] cxl/bus: Populate the target list at decoder create Dan Williams
                   ` (3 subsequent siblings)
  23 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

Create an environment for CXL plumbing unit tests. Especially when it
comes to an algorithm for HDM Decoder (Host-managed Device Memory
Decoder) programming, an in-kernel-tree emulation environment for CXL
configuration complexity and corner cases speeds development and deters
regressions.

The approach taken mirrors what was done for tools/testing/nvdimm/. I.e.
an external module, cxl_test.ko built out of the tools/testing/cxl/
directory, provides mock implementations of kernel APIs and kernel
objects to simulate a real world device hierarchy.

One piece of feedback on the tools/testing/nvdimm/ proposal was "why
not do this in QEMU?". In fact, the CXL development community has
developed a QEMU model for CXL [1]. However, a few blocking issues keep
QEMU from being a tight fit for topology + provisioning unit tests:

1/ The QEMU community has yet to show interest in merging any of this
   support that has had patches on the list since November 2020. So,
   testing CXL to date involves building custom QEMU with out-of-tree
   patches.

2/ CXL mechanisms like cross-host-bridge interleave do not have a clear
   path to be emulated by QEMU without major infrastructure work. This
   is easier to achieve with the alloc_mock_res() approach taken in this
   patch to shortcut-define emulated system physical address ranges with
   interleave behavior.

The QEMU enabling has been critical to getting the driver off the
ground, and may still move forward, but it does not address the ongoing
needs of a regression-testing environment and test-driven development.

This patch adds an ACPI CXL Platform definition with emulated CXL
multi-ported host-bridges. A follow on patch adds emulated memory
expander devices.

Link: https://lore.kernel.org/r/20210202005948.241655-1-ben.widawsky@intel.com [1]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/acpi.c            |   52 +++-
 drivers/cxl/cxl.h             |    8 +
 tools/testing/cxl/Kbuild      |   27 ++
 tools/testing/cxl/mock_acpi.c |  105 ++++++++
 tools/testing/cxl/test/Kbuild |    6 
 tools/testing/cxl/test/cxl.c  |  508 +++++++++++++++++++++++++++++++++++++++++
 tools/testing/cxl/test/mock.c |  155 +++++++++++++
 tools/testing/cxl/test/mock.h |   26 ++
 8 files changed, 866 insertions(+), 21 deletions(-)
 create mode 100644 tools/testing/cxl/Kbuild
 create mode 100644 tools/testing/cxl/mock_acpi.c
 create mode 100644 tools/testing/cxl/test/Kbuild
 create mode 100644 tools/testing/cxl/test/cxl.c
 create mode 100644 tools/testing/cxl/test/mock.c
 create mode 100644 tools/testing/cxl/test/mock.h

diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
index 8ae89273f58e..e0cd9df85ca5 100644
--- a/drivers/cxl/acpi.c
+++ b/drivers/cxl/acpi.c
@@ -182,15 +182,7 @@ static resource_size_t get_chbcr(struct acpi_cedt_chbs *chbs)
 	return IS_ERR(chbs) ? CXL_RESOURCE_NONE : chbs->base;
 }
 
-struct cxl_walk_context {
-	struct device *dev;
-	struct pci_bus *root;
-	struct cxl_port *port;
-	int error;
-	int count;
-};
-
-static int match_add_root_ports(struct pci_dev *pdev, void *data)
+__weak int match_add_root_ports(struct pci_dev *pdev, void *data)
 {
 	struct cxl_walk_context *ctx = data;
 	struct pci_bus *root_bus = ctx->root;
@@ -214,6 +206,8 @@ static int match_add_root_ports(struct pci_dev *pdev, void *data)
 	port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
 	rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
 	if (rc) {
+		dev_err(dev, "failed to add dport: %s (%d)\n",
+			dev_name(&pdev->dev), rc);
 		ctx->error = rc;
 		return rc;
 	}
@@ -239,12 +233,15 @@ static struct cxl_dport *find_dport_by_dev(struct cxl_port *port, struct device
 	return NULL;
 }
 
-static struct acpi_device *to_cxl_host_bridge(struct device *dev)
+__weak struct acpi_device *to_cxl_host_bridge(struct device *host,
+					      struct device *dev)
 {
 	struct acpi_device *adev = to_acpi_device(dev);
 
-	if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0)
+	if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0) {
+		dev_dbg(host, "found host bridge %s\n", dev_name(&adev->dev));
 		return adev;
+	}
 	return NULL;
 }
 
@@ -254,14 +251,14 @@ static struct acpi_device *to_cxl_host_bridge(struct device *dev)
  */
 static int add_host_bridge_uport(struct device *match, void *arg)
 {
-	struct acpi_device *bridge = to_cxl_host_bridge(match);
+	struct cxl_port *port;
+	struct cxl_dport *dport;
+	struct cxl_decoder *cxld;
+	struct cxl_walk_context ctx;
+	struct acpi_pci_root *pci_root;
 	struct cxl_port *root_port = arg;
 	struct device *host = root_port->dev.parent;
-	struct acpi_pci_root *pci_root;
-	struct cxl_walk_context ctx;
-	struct cxl_decoder *cxld;
-	struct cxl_dport *dport;
-	struct cxl_port *port;
+	struct acpi_device *bridge = to_cxl_host_bridge(host, match);
 
 	if (!bridge)
 		return 0;
@@ -319,7 +316,7 @@ static int add_host_bridge_dport(struct device *match, void *arg)
 	struct acpi_cedt_chbs *chbs;
 	struct cxl_port *root_port = arg;
 	struct device *host = root_port->dev.parent;
-	struct acpi_device *bridge = to_cxl_host_bridge(match);
+	struct acpi_device *bridge = to_cxl_host_bridge(host, match);
 
 	if (!bridge)
 		return 0;
@@ -371,6 +368,17 @@ static int add_root_nvdimm_bridge(struct device *match, void *data)
 	return 1;
 }
 
+static u32 cedt_instance(struct platform_device *pdev)
+{
+	const bool *native_acpi0017 = acpi_device_get_match_data(&pdev->dev);
+
+	if (native_acpi0017 && *native_acpi0017)
+		return 0;
+
+	/* for cxl_test request a non-canonical instance */
+	return U32_MAX;
+}
+
 static int cxl_acpi_probe(struct platform_device *pdev)
 {
 	int rc;
@@ -384,7 +392,7 @@ static int cxl_acpi_probe(struct platform_device *pdev)
 		return PTR_ERR(root_port);
 	dev_dbg(host, "add: %s\n", dev_name(&root_port->dev));
 
-	status = acpi_get_table(ACPI_SIG_CEDT, 0, &acpi_cedt);
+	status = acpi_get_table(ACPI_SIG_CEDT, cedt_instance(pdev), &acpi_cedt);
 	if (ACPI_FAILURE(status))
 		return -ENXIO;
 
@@ -415,9 +423,11 @@ static int cxl_acpi_probe(struct platform_device *pdev)
 	return 0;
 }
 
+static bool native_acpi0017 = true;
+
 static const struct acpi_device_id cxl_acpi_ids[] = {
-	{ "ACPI0017", 0 },
-	{ "", 0 },
+	{ "ACPI0017", (unsigned long) &native_acpi0017 },
+	{ },
 };
 MODULE_DEVICE_TABLE(acpi, cxl_acpi_ids);
 
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 1b2e816e061e..09c81cf8b800 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -226,6 +226,14 @@ struct cxl_nvdimm {
 	struct nvdimm *nvdimm;
 };
 
+struct cxl_walk_context {
+	struct device *dev;
+	struct pci_bus *root;
+	struct cxl_port *port;
+	int error;
+	int count;
+};
+
 /**
  * struct cxl_port - logical collection of upstream port devices and
  *		     downstream port devices to construct a CXL memory
diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
new file mode 100644
index 000000000000..6ea0c7df36f0
--- /dev/null
+++ b/tools/testing/cxl/Kbuild
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: GPL-2.0
+ldflags-y += --wrap=is_acpi_device_node
+ldflags-y += --wrap=acpi_get_table
+ldflags-y += --wrap=acpi_put_table
+ldflags-y += --wrap=acpi_evaluate_integer
+ldflags-y += --wrap=acpi_pci_find_root
+ldflags-y += --wrap=pci_walk_bus
+
+DRIVERS := ../../../drivers
+CXL_SRC := $(DRIVERS)/cxl
+CXL_CORE_SRC := $(DRIVERS)/cxl/core
+ccflags-y := -I$(srctree)/drivers/cxl/
+
+obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
+
+cxl_acpi-y := $(CXL_SRC)/acpi.o
+cxl_acpi-y += mock_acpi.o
+
+obj-$(CONFIG_CXL_BUS) += cxl_core.o
+
+cxl_core-y := $(CXL_CORE_SRC)/bus.o
+cxl_core-y += $(CXL_CORE_SRC)/pmem.o
+cxl_core-y += $(CXL_CORE_SRC)/regs.o
+cxl_core-y += $(CXL_CORE_SRC)/memdev.o
+cxl_core-y += $(CXL_CORE_SRC)/mbox.o
+
+obj-m += test/
diff --git a/tools/testing/cxl/mock_acpi.c b/tools/testing/cxl/mock_acpi.c
new file mode 100644
index 000000000000..256bdf9e1ce8
--- /dev/null
+++ b/tools/testing/cxl/mock_acpi.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
+
+#include <linux/platform_device.h>
+#include <linux/device.h>
+#include <linux/acpi.h>
+#include <linux/pci.h>
+#include <cxl.h>
+#include "test/mock.h"
+
+struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev)
+{
+	int index;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+	struct acpi_device *adev = NULL;
+
+	if (ops && ops->is_mock_bridge(dev)) {
+		adev = ACPI_COMPANION(dev);
+		goto out;
+	}
+
+	if (dev->bus == &platform_bus_type)
+		goto out;
+
+	if (strcmp(acpi_device_hid(to_acpi_device(dev)), "ACPI0016") == 0) {
+		adev = to_acpi_device(dev);
+		dev_dbg(host, "found host bridge %s\n", dev_name(&adev->dev));
+	}
+out:
+	put_cxl_mock_ops(index);
+	return adev;
+}
+
+static int match_add_root_port(struct pci_dev *pdev, void *data)
+{
+	struct cxl_walk_context *ctx = data;
+	struct pci_bus *root_bus = ctx->root;
+	struct cxl_port *port = ctx->port;
+	int type = pci_pcie_type(pdev);
+	struct device *dev = ctx->dev;
+	u32 lnkcap, port_num;
+	int rc;
+
+	if (pdev->bus != root_bus)
+		return 0;
+	if (!pci_is_pcie(pdev))
+		return 0;
+	if (type != PCI_EXP_TYPE_ROOT_PORT)
+		return 0;
+	if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
+				  &lnkcap) != PCIBIOS_SUCCESSFUL)
+		return 0;
+
+	/* TODO walk DVSEC to find component register base */
+	port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
+	rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
+	if (rc) {
+		dev_err(dev, "failed to add dport: %s (%d)\n",
+			dev_name(&pdev->dev), rc);
+		ctx->error = rc;
+		return rc;
+	}
+	ctx->count++;
+
+	dev_dbg(dev, "add dport%d: %s\n", port_num, dev_name(&pdev->dev));
+
+	return 0;
+}
+
+static int mock_add_root_port(struct platform_device *pdev, void *data)
+{
+	struct cxl_walk_context *ctx = data;
+	struct cxl_port *port = ctx->port;
+	struct device *dev = ctx->dev;
+	int rc;
+
+	rc = cxl_add_dport(port, &pdev->dev, pdev->id, CXL_RESOURCE_NONE);
+	if (rc) {
+		dev_err(dev, "failed to add dport: %s (%d)\n",
+			dev_name(&pdev->dev), rc);
+		ctx->error = rc;
+		return rc;
+	}
+	ctx->count++;
+
+	dev_dbg(dev, "add dport%d: %s\n", pdev->id, dev_name(&pdev->dev));
+
+	return 0;
+}
+
+int match_add_root_ports(struct pci_dev *dev, void *data)
+{
+	int index, rc;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+	struct platform_device *pdev = (struct platform_device *) dev;
+
+	if (ops && ops->is_mock_port(pdev))
+		rc = mock_add_root_port(pdev, data);
+	else
+		rc = match_add_root_port(dev, data);
+
+	put_cxl_mock_ops(index);
+
+	return rc;
+}
diff --git a/tools/testing/cxl/test/Kbuild b/tools/testing/cxl/test/Kbuild
new file mode 100644
index 000000000000..7de4ddecfd21
--- /dev/null
+++ b/tools/testing/cxl/test/Kbuild
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-m += cxl_test.o
+obj-m += cxl_mock.o
+
+cxl_test-y := cxl.o
+cxl_mock-y := mock.o
diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
new file mode 100644
index 000000000000..5213d6e23dde
--- /dev/null
+++ b/tools/testing/cxl/test/cxl.c
@@ -0,0 +1,508 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// Copyright(c) 2021 Intel Corporation. All rights reserved.
+
+#include <linux/platform_device.h>
+#include <linux/genalloc.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/acpi.h>
+#include <linux/pci.h>
+#include <linux/mm.h>
+#include "mock.h"
+
+#define NR_CXL_HOST_BRIDGES 4
+#define NR_CXL_ROOT_PORTS 2
+
+static struct platform_device *cxl_acpi;
+static struct platform_device *cxl_host_bridge[NR_CXL_HOST_BRIDGES];
+static struct platform_device
+	*cxl_root_port[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];
+
+static struct acpi_device acpi0017_mock;
+static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES] = {
+	[0] = {
+		.handle = &host_bridge[0],
+	},
+	[1] = {
+		.handle = &host_bridge[1],
+	},
+	[2] = {
+		.handle = &host_bridge[2],
+	},
+	[3] = {
+		.handle = &host_bridge[3],
+	},
+};
+
+static bool is_mock_adev(struct acpi_device *adev)
+{
+	int i;
+
+	if (adev == &acpi0017_mock)
+		return true;
+
+	for (i = 0; i < ARRAY_SIZE(host_bridge); i++)
+		if (adev == &host_bridge[i])
+			return true;
+
+	return false;
+}
+
+static struct {
+	struct acpi_table_cedt cedt;
+	struct acpi_cedt_chbs chbs[NR_CXL_HOST_BRIDGES];
+	struct {
+		struct acpi_cedt_cfmws cfmws;
+		u32 target[1];
+	} cfmws0;
+	struct {
+		struct acpi_cedt_cfmws cfmws;
+		u32 target[4];
+	} cfmws1;
+	struct {
+		struct acpi_cedt_cfmws cfmws;
+		u32 target[1];
+	} cfmws2;
+	struct {
+		struct acpi_cedt_cfmws cfmws;
+		u32 target[4];
+	} cfmws3;
+} __packed mock_cedt = {
+	.cedt = {
+		.header = {
+			.signature = "CEDT",
+			.length = sizeof(mock_cedt),
+			.revision = 1,
+		},
+	},
+	.chbs[0] = {
+		.header = {
+			.type = ACPI_CEDT_TYPE_CHBS,
+			.length = sizeof(mock_cedt.chbs[0]),
+		},
+		.uid = 0,
+		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
+	},
+	.chbs[1] = {
+		.header = {
+			.type = ACPI_CEDT_TYPE_CHBS,
+			.length = sizeof(mock_cedt.chbs[0]),
+		},
+		.uid = 1,
+		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
+	},
+	.chbs[2] = {
+		.header = {
+			.type = ACPI_CEDT_TYPE_CHBS,
+			.length = sizeof(mock_cedt.chbs[0]),
+		},
+		.uid = 2,
+		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
+	},
+	.chbs[3] = {
+		.header = {
+			.type = ACPI_CEDT_TYPE_CHBS,
+			.length = sizeof(mock_cedt.chbs[0]),
+		},
+		.uid = 3,
+		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
+	},
+	.cfmws0 = {
+		.cfmws = {
+			.header = {
+				.type = ACPI_CEDT_TYPE_CFMWS,
+				.length = sizeof(mock_cedt.cfmws0),
+			},
+			.interleave_ways = 0,
+			.granularity = 4,
+			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
+					ACPI_CEDT_CFMWS_RESTRICT_VOLATILE,
+			.qtg_id = 0,
+		},
+		.target = { 0 },
+	},
+	.cfmws1 = {
+		.cfmws = {
+			.header = {
+				.type = ACPI_CEDT_TYPE_CFMWS,
+				.length = sizeof(mock_cedt.cfmws1),
+			},
+			.interleave_ways = 2,
+			.granularity = 4,
+			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
+					ACPI_CEDT_CFMWS_RESTRICT_VOLATILE,
+			.qtg_id = 1,
+		},
+		.target = { 0, 1, 2, 3 },
+	},
+	.cfmws2 = {
+		.cfmws = {
+			.header = {
+				.type = ACPI_CEDT_TYPE_CFMWS,
+				.length = sizeof(mock_cedt.cfmws2),
+			},
+			.interleave_ways = 0,
+			.granularity = 4,
+			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
+					ACPI_CEDT_CFMWS_RESTRICT_PMEM,
+			.qtg_id = 2,
+		},
+		.target = { 0 },
+	},
+	.cfmws3 = {
+		.cfmws = {
+			.header = {
+				.type = ACPI_CEDT_TYPE_CFMWS,
+				.length = sizeof(mock_cedt.cfmws3),
+			},
+			.interleave_ways = 2,
+			.granularity = 4,
+			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
+					ACPI_CEDT_CFMWS_RESTRICT_PMEM,
+			.qtg_id = 3,
+		},
+		.target = { 0, 1, 2, 3 },
+	},
+};
+
+struct cxl_mock_res {
+	struct list_head list;
+	struct range range;
+};
+
+static LIST_HEAD(mock_res);
+static DEFINE_MUTEX(mock_res_lock);
+static struct gen_pool *cxl_mock_pool;
+
+static void free_mock_res(void)
+{
+	struct cxl_mock_res *res, *_res;
+
+	mutex_lock(&mock_res_lock);
+	list_for_each_entry_safe(res, _res, &mock_res, list) {
+		gen_pool_free(cxl_mock_pool, res->range.start,
+			      range_len(&res->range));
+		list_del(&res->list);
+		kfree(res);
+	}
+	mutex_unlock(&mock_res_lock);
+}
+
+static struct cxl_mock_res *alloc_mock_res(resource_size_t size)
+{
+	struct cxl_mock_res *res = kzalloc(sizeof(*res), GFP_KERNEL);
+	struct genpool_data_align data = {
+		.align = SZ_256M,
+	};
+	unsigned long phys;
+
+	INIT_LIST_HEAD(&res->list);
+	phys = gen_pool_alloc_algo(cxl_mock_pool, size,
+				   gen_pool_first_fit_align, &data);
+	if (!phys)
+		return NULL;
+
+	res->range = (struct range) {
+		.start = phys,
+		.end = phys + size - 1,
+	};
+	mutex_lock(&mock_res_lock);
+	list_add(&res->list, &mock_res);
+	mutex_unlock(&mock_res_lock);
+
+	return res;
+}
+
+static int populate_cedt(void)
+{
+	struct acpi_cedt_cfmws *cfmws[4] = {
+		[0] = &mock_cedt.cfmws0.cfmws,
+		[1] = &mock_cedt.cfmws1.cfmws,
+		[2] = &mock_cedt.cfmws2.cfmws,
+		[3] = &mock_cedt.cfmws3.cfmws,
+	};
+	struct cxl_mock_res *res;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(mock_cedt.chbs); i++) {
+		struct acpi_cedt_chbs *chbs = &mock_cedt.chbs[i];
+		resource_size_t size;
+
+		if (chbs->cxl_version == ACPI_CEDT_CHBS_VERSION_CXL20)
+			size = ACPI_CEDT_CHBS_LENGTH_CXL20;
+		else
+			size = ACPI_CEDT_CHBS_LENGTH_CXL11;
+
+		res = alloc_mock_res(size);
+		if (!res)
+			return -ENOMEM;
+		chbs->base = res->range.start;
+		chbs->length = size;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(cfmws); i++) {
+		struct acpi_cedt_cfmws *window = cfmws[i];
+		int ways = 1 << window->interleave_ways;
+
+		res = alloc_mock_res(SZ_256M * ways);
+		if (!res)
+			return -ENOMEM;
+		window->base_hpa = res->range.start;
+		window->window_size = range_len(&res->range);
+	}
+
+	return 0;
+}
+
+static acpi_status mock_acpi_get_table(char *signature, u32 instance,
+				       struct acpi_table_header **out_table)
+{
+	if (instance < U32_MAX || strcmp(signature, ACPI_SIG_CEDT) != 0)
+		return acpi_get_table(signature, instance, out_table);
+
+	*out_table = (struct acpi_table_header *) &mock_cedt;
+	return AE_OK;
+}
+
+static void mock_acpi_put_table(struct acpi_table_header *table)
+{
+	if (table == (struct acpi_table_header *) &mock_cedt)
+		return;
+	acpi_put_table(table);
+}
+
+static bool is_mock_bridge(struct device *dev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++)
+		if (dev == &cxl_host_bridge[i]->dev)
+			return true;
+
+	return false;
+}
+
+static int host_bridge_index(struct acpi_device *adev)
+{
+	return adev - host_bridge;
+}
+
+static struct acpi_device *find_host_bridge(acpi_handle handle)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(host_bridge); i++)
+		if (handle == host_bridge[i].handle)
+			return &host_bridge[i];
+	return NULL;
+}
+
+static acpi_status
+mock_acpi_evaluate_integer(acpi_handle handle, acpi_string pathname,
+			   struct acpi_object_list *arguments,
+			   unsigned long long *data)
+{
+	struct acpi_device *adev = find_host_bridge(handle);
+
+	if (!adev || strcmp(pathname, METHOD_NAME__UID) != 0)
+		return acpi_evaluate_integer(handle, pathname, arguments, data);
+
+	*data = host_bridge_index(adev);
+	return AE_OK;
+}
+
+static struct pci_bus mock_pci_bus[NR_CXL_HOST_BRIDGES];
+static struct acpi_pci_root mock_pci_root[NR_CXL_HOST_BRIDGES] = {
+	[0] = {
+		.bus = &mock_pci_bus[0],
+	},
+	[1] = {
+		.bus = &mock_pci_bus[1],
+	},
+	[2] = {
+		.bus = &mock_pci_bus[2],
+	},
+	[3] = {
+		.bus = &mock_pci_bus[3],
+	},
+};
+
+static struct platform_device *mock_cxl_root_port(struct pci_bus *bus, int index)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(mock_pci_bus); i++)
+		if (bus == &mock_pci_bus[i])
+			return cxl_root_port[index + i * NR_CXL_ROOT_PORTS];
+	return NULL;
+}
+
+static bool is_mock_port(struct platform_device *pdev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(cxl_root_port); i++)
+		if (pdev == cxl_root_port[i])
+			return true;
+	return false;
+}
+
+static bool is_mock_bus(struct pci_bus *bus)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(mock_pci_bus); i++)
+		if (bus == &mock_pci_bus[i])
+			return true;
+	return false;
+}
+
+static struct acpi_pci_root *mock_acpi_pci_find_root(acpi_handle handle)
+{
+	struct acpi_device *adev = find_host_bridge(handle);
+
+	if (!adev)
+		return acpi_pci_find_root(handle);
+	return &mock_pci_root[host_bridge_index(adev)];
+}
+
+static struct cxl_mock_ops cxl_mock_ops = {
+	.is_mock_adev = is_mock_adev,
+	.is_mock_bridge = is_mock_bridge,
+	.is_mock_bus = is_mock_bus,
+	.is_mock_port = is_mock_port,
+	.mock_port = mock_cxl_root_port,
+	.acpi_get_table = mock_acpi_get_table,
+	.acpi_put_table = mock_acpi_put_table,
+	.acpi_evaluate_integer = mock_acpi_evaluate_integer,
+	.acpi_pci_find_root = mock_acpi_pci_find_root,
+	.list = LIST_HEAD_INIT(cxl_mock_ops.list),
+};
+
+static void mock_companion(struct acpi_device *adev, struct device *dev)
+{
+	device_initialize(&adev->dev);
+	fwnode_init(&adev->fwnode, NULL);
+	dev->fwnode = &adev->fwnode;
+	adev->fwnode.dev = dev;
+}
+
+#ifndef SZ_64G
+#define SZ_64G (SZ_32G * 2)
+#endif
+
+#ifndef SZ_512G
+#define SZ_512G (SZ_64G * 8)
+#endif
+
+static __init int cxl_test_init(void)
+{
+	int rc, i;
+
+	register_cxl_mock_ops(&cxl_mock_ops);
+
+	cxl_mock_pool = gen_pool_create(ilog2(SZ_2M), NUMA_NO_NODE);
+	if (!cxl_mock_pool) {
+		rc = -ENOMEM;
+		goto err_gen_pool_create;
+	}
+
+	rc = gen_pool_add(cxl_mock_pool, SZ_512G, SZ_64G, NUMA_NO_NODE);
+	if (rc)
+		goto err_gen_pool_add;
+
+	rc = populate_cedt();
+	if (rc)
+		goto err_populate;
+
+	for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++) {
+		struct acpi_device *adev = &host_bridge[i];
+		struct platform_device *pdev;
+
+		pdev = platform_device_alloc("cxl_host_bridge", i);
+		if (!pdev)
+			goto err_bridge;
+
+		mock_companion(adev, &pdev->dev);
+		rc = platform_device_add(pdev);
+		if (rc) {
+			platform_device_put(pdev);
+			goto err_bridge;
+		}
+		cxl_host_bridge[i] = pdev;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(cxl_root_port); i++) {
+		struct platform_device *bridge =
+			cxl_host_bridge[i / NR_CXL_ROOT_PORTS];
+		struct platform_device *pdev;
+
+		pdev = platform_device_alloc("cxl_root_port", i);
+		if (!pdev)
+			goto err_port;
+		pdev->dev.parent = &bridge->dev;
+
+		rc = platform_device_add(pdev);
+		if (rc) {
+			platform_device_put(pdev);
+			goto err_port;
+		}
+		cxl_root_port[i] = pdev;
+	}
+
+	cxl_acpi = platform_device_alloc("cxl_acpi", 0);
+	if (!cxl_acpi)
+		goto err_port;
+
+	mock_companion(&acpi0017_mock, &cxl_acpi->dev);
+	acpi0017_mock.dev.bus = &platform_bus_type;
+
+	rc = platform_device_add(cxl_acpi);
+	if (rc)
+		goto err_add;
+
+	return 0;
+
+err_add:
+	platform_device_put(cxl_acpi);
+err_port:
+	for (i = ARRAY_SIZE(cxl_root_port) - 1; i >= 0; i--) {
+		platform_device_del(cxl_root_port[i]);
+		platform_device_put(cxl_root_port[i]);
+	}
+err_bridge:
+	for (i = ARRAY_SIZE(cxl_host_bridge) - 1; i >= 0; i--) {
+		platform_device_del(cxl_host_bridge[i]);
+		platform_device_put(cxl_host_bridge[i]);
+	}
+err_populate:
+	free_mock_res();
+err_gen_pool_add:
+	gen_pool_destroy(cxl_mock_pool);
+err_gen_pool_create:
+	unregister_cxl_mock_ops(&cxl_mock_ops);
+	return rc;
+}
+
+static __exit void cxl_test_exit(void)
+{
+	int i;
+
+	platform_device_del(cxl_acpi);
+	platform_device_put(cxl_acpi);
+	for (i = ARRAY_SIZE(cxl_root_port) - 1; i >= 0; i--) {
+		platform_device_del(cxl_root_port[i]);
+		platform_device_put(cxl_root_port[i]);
+	}
+	for (i = ARRAY_SIZE(cxl_host_bridge) - 1; i >= 0; i--) {
+		platform_device_del(cxl_host_bridge[i]);
+		platform_device_put(cxl_host_bridge[i]);
+	}
+	free_mock_res();
+	gen_pool_destroy(cxl_mock_pool);
+	unregister_cxl_mock_ops(&cxl_mock_ops);
+}
+
+module_init(cxl_test_init);
+module_exit(cxl_test_exit);
+MODULE_LICENSE("GPL v2");
diff --git a/tools/testing/cxl/test/mock.c b/tools/testing/cxl/test/mock.c
new file mode 100644
index 000000000000..5b61373a4f1d
--- /dev/null
+++ b/tools/testing/cxl/test/mock.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// Copyright(c) 2021 Intel Corporation. All rights reserved.
+
+#include <linux/rculist.h>
+#include <linux/device.h>
+#include <linux/export.h>
+#include <linux/acpi.h>
+#include <linux/pci.h>
+#include "mock.h"
+
+static LIST_HEAD(mock);
+
+void register_cxl_mock_ops(struct cxl_mock_ops *ops)
+{
+	list_add_rcu(&ops->list, &mock);
+}
+EXPORT_SYMBOL_GPL(register_cxl_mock_ops);
+
+static DEFINE_SRCU(cxl_mock_srcu);
+
+void unregister_cxl_mock_ops(struct cxl_mock_ops *ops)
+{
+	list_del_rcu(&ops->list);
+	synchronize_srcu(&cxl_mock_srcu);
+}
+EXPORT_SYMBOL_GPL(unregister_cxl_mock_ops);
+
+struct cxl_mock_ops *get_cxl_mock_ops(int *index)
+{
+	*index = srcu_read_lock(&cxl_mock_srcu);
+	return list_first_or_null_rcu(&mock, struct cxl_mock_ops, list);
+}
+EXPORT_SYMBOL_GPL(get_cxl_mock_ops);
+
+void put_cxl_mock_ops(int index)
+{
+	srcu_read_unlock(&cxl_mock_srcu, index);
+}
+EXPORT_SYMBOL_GPL(put_cxl_mock_ops);
+
+bool __wrap_is_acpi_device_node(const struct fwnode_handle *fwnode)
+{
+	struct acpi_device *adev =
+		container_of(fwnode, struct acpi_device, fwnode);
+	int index;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+	bool retval = false;
+
+	if (ops)
+		retval = ops->is_mock_adev(adev);
+
+	if (!retval)
+		retval = is_acpi_device_node(fwnode);
+
+	put_cxl_mock_ops(index);
+	return retval;
+}
+EXPORT_SYMBOL(__wrap_is_acpi_device_node);
+
+acpi_status __wrap_acpi_get_table(char *signature, u32 instance,
+				  struct acpi_table_header **out_table)
+{
+	int index;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+	acpi_status status;
+
+	if (ops)
+		status = ops->acpi_get_table(signature, instance, out_table);
+	else
+		status = acpi_get_table(signature, instance, out_table);
+
+	put_cxl_mock_ops(index);
+
+	return status;
+}
+EXPORT_SYMBOL(__wrap_acpi_get_table);
+
+void __wrap_acpi_put_table(struct acpi_table_header *table)
+{
+	int index;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+
+	if (ops)
+		ops->acpi_put_table(table);
+	else
+		acpi_put_table(table);
+	put_cxl_mock_ops(index);
+}
+EXPORT_SYMBOL(__wrap_acpi_put_table);
+
+acpi_status __wrap_acpi_evaluate_integer(acpi_handle handle,
+					 acpi_string pathname,
+					 struct acpi_object_list *arguments,
+					 unsigned long long *data)
+{
+	int index;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+	acpi_status status;
+
+	if (ops)
+		status = ops->acpi_evaluate_integer(handle, pathname, arguments,
+						    data);
+	else
+		status = acpi_evaluate_integer(handle, pathname, arguments,
+					       data);
+	put_cxl_mock_ops(index);
+
+	return status;
+}
+EXPORT_SYMBOL(__wrap_acpi_evaluate_integer);
+
+struct acpi_pci_root *__wrap_acpi_pci_find_root(acpi_handle handle)
+{
+	int index;
+	struct acpi_pci_root *root;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+
+	if (ops)
+		root = ops->acpi_pci_find_root(handle);
+	else
+		root = acpi_pci_find_root(handle);
+
+	put_cxl_mock_ops(index);
+
+	return root;
+}
+EXPORT_SYMBOL_GPL(__wrap_acpi_pci_find_root);
+
+void __wrap_pci_walk_bus(struct pci_bus *bus,
+			 int (*cb)(struct pci_dev *, void *), void *userdata)
+{
+	int index;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+
+	if (ops && ops->is_mock_bus(bus)) {
+		int rc, i;
+
+		/*
+		 * Simulate 2 root ports per host-bridge and no
+		 * depth recursion.
+		 */
+		for (i = 0; i < 2; i++) {
+			rc = cb((struct pci_dev *) ops->mock_port(bus, i),
+				userdata);
+			if (rc)
+				break;
+		}
+	} else
+		pci_walk_bus(bus, cb, userdata);
+
+	put_cxl_mock_ops(index);
+}
+EXPORT_SYMBOL_GPL(__wrap_pci_walk_bus);
+
+MODULE_LICENSE("GPL v2");
diff --git a/tools/testing/cxl/test/mock.h b/tools/testing/cxl/test/mock.h
new file mode 100644
index 000000000000..7d3b3fa6ffec
--- /dev/null
+++ b/tools/testing/cxl/test/mock.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/list.h>
+#include <linux/acpi.h>
+
+struct cxl_mock_ops {
+	struct list_head list;
+	bool (*is_mock_adev)(struct acpi_device *dev);
+	acpi_status (*acpi_get_table)(char *signature, u32 instance,
+				      struct acpi_table_header **out_table);
+	void (*acpi_put_table)(struct acpi_table_header *table);
+	bool (*is_mock_bridge)(struct device *dev);
+	acpi_status (*acpi_evaluate_integer)(acpi_handle handle,
+					     acpi_string pathname,
+					     struct acpi_object_list *arguments,
+					     unsigned long long *data);
+	struct acpi_pci_root *(*acpi_pci_find_root)(acpi_handle handle);
+	struct platform_device *(*mock_port)(struct pci_bus *bus, int index);
+	bool (*is_mock_bus)(struct pci_bus *bus);
+	bool (*is_mock_port)(struct platform_device *pdev);
+};
+
+void register_cxl_mock_ops(struct cxl_mock_ops *ops);
+void unregister_cxl_mock_ops(struct cxl_mock_ops *ops);
+struct cxl_mock_ops *get_cxl_mock_ops(int *index);
+void put_cxl_mock_ops(int index);


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 21/23] cxl/bus: Populate the target list at decoder create
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (19 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-09 22:29 ` [PATCH 22/23] cxl/mbox: Move command definitions to common location Dan Williams
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

As found by cxl_test, the implementation only populated the target_list
for the exceptional single-dport case; it missed populating the
target_list for the typical multi-dport case.

Walk the hosting port's dport list and populate the target_list based
on the passed-in map.

Move devm_cxl_add_passthrough_decoder() out of line now that it does the
work of generating a target_map.

Before:
$ cat /sys/bus/cxl/devices/root2/decoder*/target_list
0

0


After:
$ cat /sys/bus/cxl/devices/root2/decoder*/target_list
0
0,1,2,3
0
0,1,2,3

Where root2 is a CXL topology root object generated by 'cxl_test'.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/acpi.c     |   13 +++++++++-
 drivers/cxl/core/bus.c |   65 +++++++++++++++++++++++++++++++++++++++---------
 drivers/cxl/cxl.h      |   25 +++++++-----------
 3 files changed, 75 insertions(+), 28 deletions(-)

diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
index e0cd9df85ca5..ab0ede9a526c 100644
--- a/drivers/cxl/acpi.c
+++ b/drivers/cxl/acpi.c
@@ -52,6 +52,12 @@ static int cxl_acpi_cfmws_verify(struct device *dev,
 		return -EINVAL;
 	}
 
+	if (CFMWS_INTERLEAVE_WAYS(cfmws) > CXL_DECODER_MAX_INTERLEAVE) {
+		dev_err(dev, "CFMWS Interleave Ways (%d) too large\n",
+			CFMWS_INTERLEAVE_WAYS(cfmws));
+		return -EINVAL;
+	}
+
 	expected_len = struct_size((cfmws), interleave_targets,
 				   CFMWS_INTERLEAVE_WAYS(cfmws));
 
@@ -71,6 +77,7 @@ static int cxl_acpi_cfmws_verify(struct device *dev,
 static void cxl_add_cfmws_decoders(struct device *dev,
 				   struct cxl_port *root_port)
 {
+	int target_map[CXL_DECODER_MAX_INTERLEAVE];
 	struct acpi_cedt_cfmws *cfmws;
 	struct cxl_decoder *cxld;
 	acpi_size len, cur = 0;
@@ -83,6 +90,7 @@ static void cxl_add_cfmws_decoders(struct device *dev,
 
 	while (cur < len) {
 		struct acpi_cedt_header *c = cedt_subtable + cur;
+		int i;
 
 		if (c->type != ACPI_CEDT_TYPE_CFMWS) {
 			cur += c->length;
@@ -108,6 +116,9 @@ static void cxl_add_cfmws_decoders(struct device *dev,
 			continue;
 		}
 
+		for (i = 0; i < CFMWS_INTERLEAVE_WAYS(cfmws); i++)
+			target_map[i] = cfmws->interleave_targets[i];
+
 		flags = cfmws_to_decoder_flags(cfmws->restrictions);
 		cxld = devm_cxl_add_decoder(dev, root_port,
 					    CFMWS_INTERLEAVE_WAYS(cfmws),
@@ -115,7 +126,7 @@ static void cxl_add_cfmws_decoders(struct device *dev,
 					    CFMWS_INTERLEAVE_WAYS(cfmws),
 					    CFMWS_INTERLEAVE_GRANULARITY(cfmws),
 					    CXL_DECODER_EXPANDER,
-					    flags);
+					    flags, target_map);
 
 		if (IS_ERR(cxld)) {
 			dev_err(dev, "Failed to add decoder for %#llx-%#llx\n",
diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 8073354ba232..9a755a37eadf 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -454,14 +454,15 @@ int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id,
 EXPORT_SYMBOL_GPL(cxl_add_dport);
 
 static struct cxl_decoder *
-cxl_decoder_alloc(struct cxl_port *port, int nr_targets, resource_size_t base,
-		  resource_size_t len, int interleave_ways,
-		  int interleave_granularity, enum cxl_decoder_type type,
-		  unsigned long flags)
+cxl_decoder_alloc(struct device *host, struct cxl_port *port, int nr_targets,
+		  resource_size_t base, resource_size_t len,
+		  int interleave_ways, int interleave_granularity,
+		  enum cxl_decoder_type type, unsigned long flags,
+		  int *target_map)
 {
 	struct cxl_decoder *cxld;
 	struct device *dev;
-	int rc = 0;
+	int rc = 0, i;
 
 	if (interleave_ways < 1)
 		return ERR_PTR(-EINVAL);
@@ -493,10 +494,19 @@ cxl_decoder_alloc(struct cxl_port *port, int nr_targets, resource_size_t base,
 		.target_type = type,
 	};
 
-	/* handle implied target_list */
-	if (interleave_ways == 1)
-		cxld->target[0] =
-			list_first_entry(&port->dports, struct cxl_dport, list);
+	device_lock(&port->dev);
+	for (i = 0; target_map && i < nr_targets; i++) {
+		struct cxl_dport *dport = find_dport(port, target_map[i]);
+
+		if (!dport) {
+			rc = -ENXIO;
+			goto err;
+		}
+		dev_dbg(host, "%s: target: %d\n", dev_name(dport->dport), i);
+		cxld->target[i] = dport;
+	}
+	device_unlock(&port->dev);
+
 	dev = &cxld->dev;
 	device_initialize(dev);
 	device_set_pm_not_required(dev);
@@ -519,14 +529,19 @@ struct cxl_decoder *
 devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
 		     resource_size_t base, resource_size_t len,
 		     int interleave_ways, int interleave_granularity,
-		     enum cxl_decoder_type type, unsigned long flags)
+		     enum cxl_decoder_type type, unsigned long flags,
+		     int *target_map)
 {
 	struct cxl_decoder *cxld;
 	struct device *dev;
 	int rc;
 
-	cxld = cxl_decoder_alloc(port, nr_targets, base, len, interleave_ways,
-				 interleave_granularity, type, flags);
+	if (nr_targets > CXL_DECODER_MAX_INTERLEAVE)
+		return ERR_PTR(-EINVAL);
+
+	cxld = cxl_decoder_alloc(host, port, nr_targets, base, len,
+				 interleave_ways, interleave_granularity, type,
+				 flags, target_map);
 	if (IS_ERR(cxld))
 		return cxld;
 
@@ -550,6 +565,32 @@ devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
 }
 EXPORT_SYMBOL_GPL(devm_cxl_add_decoder);
 
+/*
+ * Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability Structure)
+ * single ported host-bridges need not publish a decoder capability when a
+ * passthrough decode can be assumed, i.e. all transactions that the uport sees
+ * are claimed and passed to the single dport. Default the decode to a
+ * 0-base, 0-length range until the first CXL region is activated.
+ */
+struct cxl_decoder *devm_cxl_add_passthrough_decoder(struct device *host,
+						     struct cxl_port *port)
+{
+	struct cxl_dport *dport;
+	int target_map[1];
+
+	device_lock(&port->dev);
+	dport = list_first_entry_or_null(&port->dports, typeof(*dport), list);
+	device_unlock(&port->dev);
+
+	if (!dport)
+		return ERR_PTR(-ENXIO);
+
+	target_map[0] = dport->port_id;
+	return devm_cxl_add_decoder(host, port, 1, 0, 0, 1, PAGE_SIZE,
+				    CXL_DECODER_EXPANDER, 0, target_map);
+}
+EXPORT_SYMBOL_GPL(devm_cxl_add_passthrough_decoder);
+
 /**
  * __cxl_driver_register - register a driver for the cxl bus
  * @cxl_drv: cxl driver structure to attach
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 09c81cf8b800..482b70566742 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -180,6 +180,12 @@ enum cxl_decoder_type {
        CXL_DECODER_EXPANDER = 3,
 };
 
+/*
+ * The current specification goes up to 8; double that seems a
+ * reasonable software max for the foreseeable future
+ */
+#define CXL_DECODER_MAX_INTERLEAVE 16
+
 /**
  * struct cxl_decoder - CXL address range decode configuration
  * @dev: this decoder's device
@@ -284,22 +290,11 @@ struct cxl_decoder *
 devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
 		     resource_size_t base, resource_size_t len,
 		     int interleave_ways, int interleave_granularity,
-		     enum cxl_decoder_type type, unsigned long flags);
-
-/*
- * Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability Structure)
- * single ported host-bridges need not publish a decoder capability when a
- * passthrough decode can be assumed, i.e. all transactions that the uport sees
- * are claimed and passed to the single dport. Default the range a 0-base
- * 0-length until the first CXL region is activated.
- */
-static inline struct cxl_decoder *
-devm_cxl_add_passthrough_decoder(struct device *host, struct cxl_port *port)
-{
-	return devm_cxl_add_decoder(host, port, 1, 0, 0, 1, PAGE_SIZE,
-				    CXL_DECODER_EXPANDER, 0);
-}
+		     enum cxl_decoder_type type, unsigned long flags,
+		     int *target_map);
 
+struct cxl_decoder *devm_cxl_add_passthrough_decoder(struct device *host,
+						     struct cxl_port *port);
 extern struct bus_type cxl_bus_type;
 
 struct cxl_driver {



* [PATCH 22/23] cxl/mbox: Move command definitions to common location
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (20 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 21/23] cxl/bus: Populate the target list at decoder create Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-09 22:29 ` [PATCH 23/23] tools/testing/cxl: Introduce a mock memory device + driver Dan Williams
  2021-08-10 22:10 ` [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Ben Widawsky
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

In preparation for cxl_test to mock responses to mailbox command
requests, move some definitions from core/mbox.c to cxlmem.h.

No functional changes intended.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/mbox.c |   45 +++++--------------------------------
 drivers/cxl/cxlmem.h    |   57 +++++++++++++++++++++++++++++++++++++++++++++++
 drivers/cxl/pmem.c      |   11 ++-------
 3 files changed, 65 insertions(+), 48 deletions(-)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index f26962d7cb65..f9af1743212b 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -492,11 +492,7 @@ static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
 
 	while (remaining) {
 		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
-		struct cxl_mbox_get_log {
-			uuid_t uuid;
-			__le32 offset;
-			__le32 length;
-		} __packed log = {
+		struct cxl_mbox_get_log log = {
 			.uuid = *uuid,
 			.offset = cpu_to_le32(offset),
 			.length = cpu_to_le32(xfer_size)
@@ -527,14 +523,11 @@ static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
  */
 static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
 {
-	struct cel_entry {
-		__le16 opcode;
-		__le16 effect;
-	} __packed * cel_entry;
+	struct cxl_cel_entry *cel_entry;
 	const int cel_entries = size / sizeof(*cel_entry);
 	int i;
 
-	cel_entry = (struct cel_entry *)cel;
+	cel_entry = (struct cxl_cel_entry *) cel;
 
 	for (i = 0; i < cel_entries; i++) {
 		u16 opcode = le16_to_cpu(cel_entry[i].opcode);
@@ -550,15 +543,6 @@ static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
 	}
 }
 
-struct cxl_mbox_get_supported_logs {
-	__le16 entries;
-	u8 rsvd[6];
-	struct gsl_entry {
-		uuid_t uuid;
-		__le32 size;
-	} __packed entry[];
-} __packed;
-
 static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_mem *cxlm)
 {
 	struct cxl_mbox_get_supported_logs *ret;
@@ -585,10 +569,8 @@ enum {
 
 /* See CXL 2.0 Table 170. Get Log Input Payload */
 static const uuid_t log_uuid[] = {
-	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
-			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
-	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
-					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86),
+	[CEL_UUID] = DEFINE_CXL_CEL_UUID,
+	[VENDOR_DEBUG_UUID] = DEFINE_CXL_VENDOR_DEBUG_UUID,
 };
 
 /**
@@ -709,22 +691,7 @@ static int cxl_mem_get_partition_info(struct cxl_mem *cxlm)
 int cxl_mem_identify(struct cxl_mem *cxlm)
 {
 	/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
-	struct cxl_mbox_identify {
-		char fw_revision[0x10];
-		__le64 total_capacity;
-		__le64 volatile_capacity;
-		__le64 persistent_capacity;
-		__le64 partition_align;
-		__le16 info_event_log_size;
-		__le16 warning_event_log_size;
-		__le16 failure_event_log_size;
-		__le16 fatal_event_log_size;
-		__le32 lsa_size;
-		u8 poison_list_max_mer[3];
-		__le16 inject_poison_limit;
-		u8 poison_caps;
-		u8 qos_telemetry_caps;
-	} __packed id;
+	struct cxl_mbox_identify id;
 	int rc;
 
 	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0, &id,
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index f6cfe84a064c..271c2dc80c42 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -161,6 +161,63 @@ enum cxl_opcode {
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
+#define DEFINE_CXL_CEL_UUID                                                    \
+	UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96, 0xb1, 0x62,     \
+		  0x3b, 0x3f, 0x17)
+
+#define DEFINE_CXL_VENDOR_DEBUG_UUID                                           \
+	UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f, 0xd6, 0x07, 0x19,     \
+		  0x40, 0x3d, 0x86)
+
+struct cxl_mbox_get_supported_logs {
+	__le16 entries;
+	u8 rsvd[6];
+	struct cxl_gsl_entry {
+		uuid_t uuid;
+		__le32 size;
+	} __packed entry[];
+}  __packed;
+
+struct cxl_cel_entry {
+	__le16 opcode;
+	__le16 effect;
+} __packed;
+
+struct cxl_mbox_get_log {
+	uuid_t uuid;
+	__le32 offset;
+	__le32 length;
+} __packed;
+
+/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
+struct cxl_mbox_identify {
+	char fw_revision[0x10];
+	__le64 total_capacity;
+	__le64 volatile_capacity;
+	__le64 persistent_capacity;
+	__le64 partition_align;
+	__le16 info_event_log_size;
+	__le16 warning_event_log_size;
+	__le16 failure_event_log_size;
+	__le16 fatal_event_log_size;
+	__le32 lsa_size;
+	u8 poison_list_max_mer[3];
+	__le16 inject_poison_limit;
+	u8 poison_caps;
+	u8 qos_telemetry_caps;
+} __packed;
+
+struct cxl_mbox_get_lsa {
+	u32 offset;
+	u32 length;
+} __packed;
+
+struct cxl_mbox_set_lsa {
+	u32 offset;
+	u32 reserved;
+	u8 data[];
+} __packed;
+
 /**
  * struct cxl_mem_command - Driver representation of a memory device command
  * @info: Command information as it exists for the UAPI
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 3e3b082478f2..b767250e076f 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -103,10 +103,7 @@ static int cxl_pmem_get_config_data(struct cxl_mem *cxlm,
 				    struct nd_cmd_get_config_data_hdr *cmd,
 				    unsigned int buf_len, int *cmd_rc)
 {
-	struct cxl_mbox_get_lsa {
-		u32 offset;
-		u32 length;
-	} get_lsa;
+	struct cxl_mbox_get_lsa get_lsa;
 	int rc;
 
 	if (sizeof(*cmd) > buf_len)
@@ -132,11 +129,7 @@ static int cxl_pmem_set_config_data(struct cxl_mem *cxlm,
 				    struct nd_cmd_set_config_hdr *cmd,
 				    unsigned int buf_len, int *cmd_rc)
 {
-	struct cxl_mbox_set_lsa {
-		u32 offset;
-		u32 reserved;
-		u8 data[];
-	} *set_lsa;
+	struct cxl_mbox_set_lsa *set_lsa;
 	int rc;
 
 	if (sizeof(*cmd) > buf_len)


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 23/23] tools/testing/cxl: Introduce a mock memory device + driver
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (21 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 22/23] cxl/mbox: Move command definitions to common location Dan Williams
@ 2021-08-09 22:29 ` Dan Williams
  2021-08-10 22:10 ` [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Ben Widawsky
  23 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-09 22:29 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

Introduce an emulated device-set plus driver to register CXL memory
devices, 'struct cxl_memdev' instances, in the mock cxl_test topology.
This enables development of the HDM Decoder (Host-managed Device Memory
Decoder) programming flow (region provisioning) in an environment that
can be updated alongside the kernel as it gains more functionality.

Whereas the cxl_pci module looks for CXL memory expanders on the 'pci'
bus, the cxl_mock_mem module attaches to CXL expanders on the platform
bus emitted by cxl_test.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/pmem.c       |    6 -
 drivers/cxl/cxl.h             |    2 
 drivers/cxl/pmem.c            |    2 
 tools/testing/cxl/Kbuild      |    2 
 tools/testing/cxl/mock_pmem.c |   24 ++++
 tools/testing/cxl/test/Kbuild |    4 +
 tools/testing/cxl/test/cxl.c  |   81 +++++++++++++
 tools/testing/cxl/test/mem.c  |  255 +++++++++++++++++++++++++++++++++++++++++
 tools/testing/cxl/test/mock.h |    1 
 9 files changed, 371 insertions(+), 6 deletions(-)
 create mode 100644 tools/testing/cxl/mock_pmem.c
 create mode 100644 tools/testing/cxl/test/mem.c

diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c
index ec3e4c642fca..64ad04b6f8f2 100644
--- a/drivers/cxl/core/pmem.c
+++ b/drivers/cxl/core/pmem.c
@@ -39,16 +39,16 @@ struct cxl_nvdimm_bridge *to_cxl_nvdimm_bridge(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(to_cxl_nvdimm_bridge);
 
-static int match_nvdimm_bridge(struct device *dev, const void *data)
+__weak int match_nvdimm_bridge(struct device *dev, const void *data)
 {
 	return dev->type == &cxl_nvdimm_bridge_type;
 }
 
-struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
+struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_nvdimm *cxl_nvd)
 {
 	struct device *dev;
 
-	dev = bus_find_device(&cxl_bus_type, NULL, NULL, match_nvdimm_bridge);
+	dev = bus_find_device(&cxl_bus_type, NULL, cxl_nvd, match_nvdimm_bridge);
 	if (!dev)
 		return NULL;
 	return to_cxl_nvdimm_bridge(dev);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 482b70566742..d4a1470ecc8d 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -327,5 +327,5 @@ struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
 struct cxl_nvdimm *to_cxl_nvdimm(struct device *dev);
 bool is_cxl_nvdimm(struct device *dev);
 int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd);
-struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void);
+struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_nvdimm *cxl_nvd);
 #endif /* __CXL_H__ */
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index b767250e076f..118a8e23a819 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -45,7 +45,7 @@ static int cxl_nvdimm_probe(struct device *dev)
 	struct nvdimm *nvdimm;
 	int rc = -ENXIO;
 
-	cxl_nvb = cxl_find_nvdimm_bridge();
+	cxl_nvb = cxl_find_nvdimm_bridge(cxl_nvd);
 	if (!cxl_nvb)
 		return -ENXIO;
 
diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
index 6ea0c7df36f0..ff9bb6c25a39 100644
--- a/tools/testing/cxl/Kbuild
+++ b/tools/testing/cxl/Kbuild
@@ -24,4 +24,6 @@ cxl_core-y += $(CXL_CORE_SRC)/regs.o
 cxl_core-y += $(CXL_CORE_SRC)/memdev.o
 cxl_core-y += $(CXL_CORE_SRC)/mbox.o
 
+cxl_core-y += mock_pmem.o
+
 obj-m += test/
diff --git a/tools/testing/cxl/mock_pmem.c b/tools/testing/cxl/mock_pmem.c
new file mode 100644
index 000000000000..f7315e6f52c0
--- /dev/null
+++ b/tools/testing/cxl/mock_pmem.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
+#include <cxl.h>
+#include "test/mock.h"
+#include <core/core.h>
+
+int match_nvdimm_bridge(struct device *dev, const void *data)
+{
+	int index, rc = 0;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+	const struct cxl_nvdimm *cxl_nvd = data;
+
+	if (ops) {
+		if (dev->type == &cxl_nvdimm_bridge_type &&
+		    (ops->is_mock_dev(dev->parent->parent) ==
+		     ops->is_mock_dev(cxl_nvd->dev.parent->parent)))
+			rc = 1;
+	} else
+		rc = dev->type == &cxl_nvdimm_bridge_type;
+
+	put_cxl_mock_ops(index);
+
+	return rc;
+}
diff --git a/tools/testing/cxl/test/Kbuild b/tools/testing/cxl/test/Kbuild
index 7de4ddecfd21..4e59e2c911f6 100644
--- a/tools/testing/cxl/test/Kbuild
+++ b/tools/testing/cxl/test/Kbuild
@@ -1,6 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
+ccflags-y := -I$(srctree)/drivers/cxl/
+
 obj-m += cxl_test.o
 obj-m += cxl_mock.o
+obj-m += cxl_mock_mem.o
 
 cxl_test-y := cxl.o
 cxl_mock-y := mock.o
+cxl_mock_mem-y := mem.o
diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
index 5213d6e23dde..314b09d40333 100644
--- a/tools/testing/cxl/test/cxl.c
+++ b/tools/testing/cxl/test/cxl.c
@@ -17,6 +17,7 @@ static struct platform_device *cxl_acpi;
 static struct platform_device *cxl_host_bridge[NR_CXL_HOST_BRIDGES];
 static struct platform_device
 	*cxl_root_port[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];
+struct platform_device *cxl_mem[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];
 
 static struct acpi_device acpi0017_mock;
 static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES] = {
@@ -34,6 +35,18 @@ static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES] = {
 	},
 };
 
+static bool is_mock_dev(struct device *dev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(cxl_mem); i++)
+		if (dev == &cxl_mem[i]->dev)
+			return true;
+	if (dev == &cxl_acpi->dev)
+		return true;
+	return false;
+}
+
 static bool is_mock_adev(struct acpi_device *adev)
 {
 	int i;
@@ -371,6 +384,7 @@ static struct cxl_mock_ops cxl_mock_ops = {
 	.is_mock_bridge = is_mock_bridge,
 	.is_mock_bus = is_mock_bus,
 	.is_mock_port = is_mock_port,
+	.is_mock_dev = is_mock_dev,
 	.mock_port = mock_cxl_root_port,
 	.acpi_get_table = mock_acpi_get_table,
 	.acpi_put_table = mock_acpi_put_table,
@@ -395,6 +409,44 @@ static void mock_companion(struct acpi_device *adev, struct device *dev)
 #define SZ_512G (SZ_64G * 8)
 #endif
 
+static struct platform_device *alloc_memdev(int id)
+{
+	struct resource res[] = {
+		[0] = {
+			.flags = IORESOURCE_MEM,
+		},
+		[1] = {
+			.flags = IORESOURCE_MEM,
+			.desc = IORES_DESC_PERSISTENT_MEMORY,
+		},
+	};
+	struct platform_device *pdev;
+	int i, rc;
+
+	for (i = 0; i < ARRAY_SIZE(res); i++) {
+		struct cxl_mock_res *r = alloc_mock_res(SZ_256M);
+
+		if (!r)
+			return NULL;
+		res[i].start = r->range.start;
+		res[i].end = r->range.end;
+	}
+
+	pdev = platform_device_alloc("cxl_mem", id);
+	if (!pdev)
+		return NULL;
+
+	rc = platform_device_add_resources(pdev, res, ARRAY_SIZE(res));
+	if (rc)
+		goto err;
+
+	return pdev;
+
+err:
+	platform_device_put(pdev);
+	return NULL;
+}
+
 static __init int cxl_test_init(void)
 {
 	int rc, i;
@@ -450,9 +502,27 @@ static __init int cxl_test_init(void)
 		cxl_root_port[i] = pdev;
 	}
 
+	BUILD_BUG_ON(ARRAY_SIZE(cxl_mem) != ARRAY_SIZE(cxl_root_port));
+	for (i = 0; i < ARRAY_SIZE(cxl_mem); i++) {
+		struct platform_device *port = cxl_root_port[i];
+		struct platform_device *pdev;
+
+		pdev = alloc_memdev(i);
+		if (!pdev)
+			goto err_mem;
+		pdev->dev.parent = &port->dev;
+
+		rc = platform_device_add(pdev);
+		if (rc) {
+			platform_device_put(pdev);
+			goto err_mem;
+		}
+		cxl_mem[i] = pdev;
+	}
+
 	cxl_acpi = platform_device_alloc("cxl_acpi", 0);
 	if (!cxl_acpi)
-		goto err_port;
+		goto err_mem;
 
 	mock_companion(&acpi0017_mock, &cxl_acpi->dev);
 	acpi0017_mock.dev.bus = &platform_bus_type;
@@ -465,6 +535,11 @@ static __init int cxl_test_init(void)
 
 err_add:
 	platform_device_put(cxl_acpi);
+err_mem:
+	for (i = ARRAY_SIZE(cxl_mem) - 1; i >= 0; i--) {
+		platform_device_del(cxl_mem[i]);
+		platform_device_put(cxl_mem[i]);
+	}
 err_port:
 	for (i = ARRAY_SIZE(cxl_root_port) - 1; i >= 0; i--) {
 		platform_device_del(cxl_root_port[i]);
@@ -490,6 +565,10 @@ static __exit void cxl_test_exit(void)
 
 	platform_device_del(cxl_acpi);
 	platform_device_put(cxl_acpi);
+	for (i = ARRAY_SIZE(cxl_mem) - 1; i >= 0; i--) {
+		platform_device_del(cxl_mem[i]);
+		platform_device_put(cxl_mem[i]);
+	}
 	for (i = ARRAY_SIZE(cxl_root_port) - 1; i >= 0; i--) {
 		platform_device_del(cxl_root_port[i]);
 		platform_device_put(cxl_root_port[i]);
diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
new file mode 100644
index 000000000000..3ce02c2783d5
--- /dev/null
+++ b/tools/testing/cxl/test/mem.c
@@ -0,0 +1,255 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// Copyright(c) 2021 Intel Corporation. All rights reserved.
+
+#include <linux/platform_device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/sizes.h>
+#include <linux/bits.h>
+#include <cxlmem.h>
+
+#define LSA_SIZE SZ_128K
+#define EFFECT(x) (1U << x)
+
+static struct cxl_cel_entry mock_cel[] = {
+	{
+		.opcode = cpu_to_le16(CXL_MBOX_OP_GET_SUPPORTED_LOGS),
+		.effect = cpu_to_le16(0),
+	},
+	{
+		.opcode = cpu_to_le16(CXL_MBOX_OP_IDENTIFY),
+		.effect = cpu_to_le16(0),
+	},
+	{
+		.opcode = cpu_to_le16(CXL_MBOX_OP_GET_LSA),
+		.effect = cpu_to_le16(0),
+	},
+	{
+		.opcode = cpu_to_le16(CXL_MBOX_OP_SET_LSA),
+		.effect = cpu_to_le16(EFFECT(1) | EFFECT(2)),
+	},
+};
+
+static struct {
+	struct cxl_mbox_get_supported_logs gsl;
+	struct cxl_gsl_entry entry;
+} mock_gsl_payload = {
+	.gsl = {
+		.entries = cpu_to_le16(1),
+	},
+	.entry = {
+		.uuid = DEFINE_CXL_CEL_UUID,
+		.size = cpu_to_le32(sizeof(mock_cel)),
+	},
+};
+
+static int mock_gsl(struct cxl_mbox_cmd *cmd)
+{
+	if (cmd->size_out < sizeof(mock_gsl_payload))
+		return -EINVAL;
+
+	memcpy(cmd->payload_out, &mock_gsl_payload, sizeof(mock_gsl_payload));
+	cmd->size_out = sizeof(mock_gsl_payload);
+
+	return 0;
+}
+
+static int mock_get_log(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
+{
+	struct cxl_mbox_get_log *gl = cmd->payload_in;
+	u32 offset = le32_to_cpu(gl->offset);
+	u32 length = le32_to_cpu(gl->length);
+	uuid_t uuid = DEFINE_CXL_CEL_UUID;
+	void *data = &mock_cel;
+
+	if (cmd->size_in < sizeof(*gl))
+		return -EINVAL;
+	if (offset + length >
+	    (min_t(size_t, cxlm->payload_size, sizeof(mock_cel))))
+		return -EINVAL;
+	if (!uuid_equal(&gl->uuid, &uuid))
+		return -EINVAL;
+	if (length > cmd->size_out)
+		return -EINVAL;
+
+	memcpy(cmd->payload_out, data + offset, length);
+
+	return 0;
+}
+
+static int mock_id(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
+{
+	struct platform_device *pdev = to_platform_device(cxlm->dev);
+	struct cxl_mbox_identify id = {
+		.fw_revision = { "mock fw v1 " },
+		.lsa_size = cpu_to_le32(LSA_SIZE),
+		/* FIXME: Add partition support */
+		.partition_align = cpu_to_le64(0),
+	};
+	u64 capacity = 0;
+	int i;
+
+	if (cmd->size_out < sizeof(id))
+		return -EINVAL;
+
+	for (i = 0; i < 2; i++) {
+		struct resource *res;
+
+		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+		if (!res)
+			break;
+
+		capacity += resource_size(res) / CXL_CAPACITY_MULTIPLIER;
+
+		if (le64_to_cpu(id.partition_align))
+			continue;
+
+		if (res->desc == IORES_DESC_PERSISTENT_MEMORY)
+			id.persistent_capacity = cpu_to_le64(
+				resource_size(res) / CXL_CAPACITY_MULTIPLIER);
+		else
+			id.volatile_capacity = cpu_to_le64(
+				resource_size(res) / CXL_CAPACITY_MULTIPLIER);
+	}
+
+	id.total_capacity = cpu_to_le64(capacity);
+
+	memcpy(cmd->payload_out, &id, sizeof(id));
+
+	return 0;
+}
+
+static int mock_get_lsa(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
+{
+	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
+	void *lsa = dev_get_drvdata(cxlm->dev);
+	u32 offset, length;
+
+	if (sizeof(*get_lsa) > cmd->size_in)
+		return -EINVAL;
+	offset = le32_to_cpu(get_lsa->offset);
+	length = le32_to_cpu(get_lsa->length);
+	if (offset + length > LSA_SIZE)
+		return -EINVAL;
+	if (length > cmd->size_out)
+		return -EINVAL;
+
+	memcpy(cmd->payload_out, lsa + offset, length);
+	return 0;
+}
+
+static int mock_set_lsa(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
+{
+	struct cxl_mbox_set_lsa *set_lsa = cmd->payload_in;
+	void *lsa = dev_get_drvdata(cxlm->dev);
+	u32 offset, length;
+
+	if (sizeof(*set_lsa) > cmd->size_in)
+		return -EINVAL;
+	offset = le32_to_cpu(set_lsa->offset);
+	length = cmd->size_in - sizeof(*set_lsa);
+	if (offset + length > LSA_SIZE)
+		return -EINVAL;
+
+	memcpy(lsa + offset, &set_lsa->data[0], length);
+	return 0;
+}
+
+static int cxl_mock_mbox_send(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
+{
+	struct device *dev = cxlm->dev;
+	int rc = -EIO;
+
+	switch (cmd->opcode) {
+	case CXL_MBOX_OP_GET_SUPPORTED_LOGS:
+		rc = mock_gsl(cmd);
+		break;
+	case CXL_MBOX_OP_GET_LOG:
+		rc = mock_get_log(cxlm, cmd);
+		break;
+	case CXL_MBOX_OP_IDENTIFY:
+		rc = mock_id(cxlm, cmd);
+		break;
+	case CXL_MBOX_OP_GET_LSA:
+		rc = mock_get_lsa(cxlm, cmd);
+		break;
+	case CXL_MBOX_OP_SET_LSA:
+		rc = mock_set_lsa(cxlm, cmd);
+		break;
+	default:
+		break;
+	}
+
+	dev_dbg(dev, "opcode: %#x sz_in: %zd sz_out: %zd rc: %d\n", cmd->opcode,
+		cmd->size_in, cmd->size_out, rc);
+
+	return rc;
+}
+
+static void label_area_release(void *lsa)
+{
+	vfree(lsa);
+}
+
+static int cxl_mock_mem_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct cxl_memdev *cxlmd;
+	struct cxl_mem *cxlm;
+	void *lsa;
+	int rc;
+
+	lsa = vmalloc(LSA_SIZE);
+	if (!lsa)
+		return -ENOMEM;
+	rc = devm_add_action_or_reset(dev, label_area_release, lsa);
+	if (rc)
+		return rc;
+	dev_set_drvdata(dev, lsa);
+
+	cxlm = cxl_mem_create(dev);
+	if (IS_ERR(cxlm))
+		return PTR_ERR(cxlm);
+
+	cxlm->mbox_send = cxl_mock_mbox_send;
+	cxlm->payload_size = SZ_4K;
+
+	rc = cxl_mem_enumerate_cmds(cxlm);
+	if (rc)
+		return rc;
+
+	rc = cxl_mem_identify(cxlm);
+	if (rc)
+		return rc;
+
+	rc = cxl_mem_create_range_info(cxlm);
+	if (rc)
+		return rc;
+
+	cxlmd = devm_cxl_add_memdev(dev, cxlm);
+	if (IS_ERR(cxlmd))
+		return PTR_ERR(cxlmd);
+
+	if (range_len(&cxlm->pmem_range) && IS_ENABLED(CONFIG_CXL_PMEM))
+		rc = devm_cxl_add_nvdimm(dev, cxlmd);
+
+	return 0;
+}
+
+static const struct platform_device_id cxl_mock_mem_ids[] = {
+	{ .name = "cxl_mem", },
+	{ },
+};
+MODULE_DEVICE_TABLE(platform, cxl_mock_mem_ids);
+
+static struct platform_driver cxl_mock_mem_driver = {
+	.probe = cxl_mock_mem_probe,
+	.id_table = cxl_mock_mem_ids,
+	.driver = {
+		.name = KBUILD_MODNAME,
+	},
+};
+
+module_platform_driver(cxl_mock_mem_driver);
+MODULE_LICENSE("GPL v2");
+MODULE_IMPORT_NS(CXL);
diff --git a/tools/testing/cxl/test/mock.h b/tools/testing/cxl/test/mock.h
index 7d3b3fa6ffec..805a94cb3fbe 100644
--- a/tools/testing/cxl/test/mock.h
+++ b/tools/testing/cxl/test/mock.h
@@ -18,6 +18,7 @@ struct cxl_mock_ops {
 	struct platform_device *(*mock_port)(struct pci_bus *bus, int index);
 	bool (*is_mock_bus)(struct pci_bus *bus);
 	bool (*is_mock_port)(struct platform_device *pdev);
+	bool (*is_mock_dev)(struct device *dev);
 };
 
 void register_cxl_mock_ops(struct cxl_mock_ops *ops);



* Re: [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields
  2021-08-09 22:27 ` [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields Dan Williams
@ 2021-08-10 20:48   ` Ben Widawsky
  2021-08-10 21:58     ` Dan Williams
  2021-08-11 18:44   ` Jonathan Cameron
  1 sibling, 1 reply; 61+ messages in thread
From: Ben Widawsky @ 2021-08-10 20:48 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, Jonathan.Cameron, vishal.l.verma,
	alison.schofield, ira.weiny

On 21-08-09 15:27:52, Dan Williams wrote:
> In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> helpers that abstract the label type from the implementation. The CXL
> label format is mostly similar to the EFI label format with concepts /
> fields added, like dynamic region creation and label type guids, and
> other concepts removed like BLK-mode and interleave-set-cookie ids.
> 
> In addition to nsl_get_* helpers there is the nsl_ref_name() helper that
> returns a pointer to a label field rather than copying the data.
> 
> Where changes touch the old whitespace style, update to clang-format
> expectations.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/nvdimm/label.c          |   20 ++++++-----
>  drivers/nvdimm/namespace_devs.c |   70 +++++++++++++++++++--------------------
>  drivers/nvdimm/nd.h             |   66 +++++++++++++++++++++++++++++++++++++
>  3 files changed, 110 insertions(+), 46 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index 9251441fd8a3..b6d845cfb70e 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -350,14 +350,14 @@ static bool slot_valid(struct nvdimm_drvdata *ndd,
>  		struct nd_namespace_label *nd_label, u32 slot)
>  {
>  	/* check that we are written where we expect to be written */
> -	if (slot != __le32_to_cpu(nd_label->slot))
> +	if (slot != nsl_get_slot(ndd, nd_label))
>  		return false;
>  
>  	/* check checksum */
>  	if (namespace_label_has(ndd, checksum)) {
>  		u64 sum, sum_save;
>  
> -		sum_save = __le64_to_cpu(nd_label->checksum);
> +		sum_save = nsl_get_checksum(ndd, nd_label);
>  		nd_label->checksum = __cpu_to_le64(0);
>  		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
>  		nd_label->checksum = __cpu_to_le64(sum_save);
> @@ -395,13 +395,13 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
>  			continue;
>  
>  		memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -		flags = __le32_to_cpu(nd_label->flags);
> +		flags = nsl_get_flags(ndd, nd_label);
>  		if (test_bit(NDD_NOBLK, &nvdimm->flags))

Lazy review (didn't check NDD_NOBLK), should this be test_bit(NDD_NOBLK, &flags)?

>  			flags &= ~NSLABEL_FLAG_LOCAL;
>  		nd_label_gen_id(&label_id, label_uuid, flags);
>  		res = nvdimm_allocate_dpa(ndd, &label_id,
> -				__le64_to_cpu(nd_label->dpa),
> -				__le64_to_cpu(nd_label->rawsize));
> +					  nsl_get_dpa(ndd, nd_label),
> +					  nsl_get_rawsize(ndd, nd_label));
>  		nd_dbg_dpa(nd_region, ndd, res, "reserve\n");
>  		if (!res)
>  			return -EBUSY;
> @@ -548,9 +548,9 @@ int nd_label_active_count(struct nvdimm_drvdata *ndd)
>  		nd_label = to_label(ndd, slot);
>  
>  		if (!slot_valid(ndd, nd_label, slot)) {
> -			u32 label_slot = __le32_to_cpu(nd_label->slot);
> -			u64 size = __le64_to_cpu(nd_label->rawsize);
> -			u64 dpa = __le64_to_cpu(nd_label->dpa);
> +			u32 label_slot = nsl_get_slot(ndd, nd_label);
> +			u64 size = nsl_get_rawsize(ndd, nd_label);
> +			u64 dpa = nsl_get_dpa(ndd, nd_label);
>  
>  			dev_dbg(ndd->dev,
>  				"slot%d invalid slot: %d dpa: %llx size: %llx\n",
> @@ -879,9 +879,9 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
>  	struct resource *res;
>  
>  	for_each_dpa_resource(ndd, res) {
> -		if (res->start != __le64_to_cpu(nd_label->dpa))
> +		if (res->start != nsl_get_dpa(ndd, nd_label))
>  			continue;
> -		if (resource_size(res) != __le64_to_cpu(nd_label->rawsize))
> +		if (resource_size(res) != nsl_get_rawsize(ndd, nd_label))
>  			continue;
>  		return res;
>  	}
> diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> index 2403b71b601e..94da804372bf 100644
> --- a/drivers/nvdimm/namespace_devs.c
> +++ b/drivers/nvdimm/namespace_devs.c
> @@ -1235,7 +1235,7 @@ static int namespace_update_uuid(struct nd_region *nd_region,
>  			if (!nd_label)
>  				continue;
>  			nd_label_gen_id(&label_id, nd_label->uuid,
> -					__le32_to_cpu(nd_label->flags));
> +					nsl_get_flags(ndd, nd_label));
>  			if (strcmp(old_label_id.id, label_id.id) == 0)
>  				set_bit(ND_LABEL_REAP, &label_ent->flags);
>  		}
> @@ -1851,9 +1851,9 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
>  
>  			if (!nd_label)
>  				continue;
> -			isetcookie = __le64_to_cpu(nd_label->isetcookie);
> -			position = __le16_to_cpu(nd_label->position);
> -			nlabel = __le16_to_cpu(nd_label->nlabel);
> +			isetcookie = nsl_get_isetcookie(ndd, nd_label);
> +			position = nsl_get_position(ndd, nd_label);
> +			nlabel = nsl_get_nlabel(ndd, nd_label);
>  
>  			if (isetcookie != cookie)
>  				continue;
> @@ -1923,8 +1923,8 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
>  		 */
>  		hw_start = nd_mapping->start;
>  		hw_end = hw_start + nd_mapping->size;
> -		pmem_start = __le64_to_cpu(nd_label->dpa);
> -		pmem_end = pmem_start + __le64_to_cpu(nd_label->rawsize);
> +		pmem_start = nsl_get_dpa(ndd, nd_label);
> +		pmem_end = pmem_start + nsl_get_rawsize(ndd, nd_label);
>  		if (pmem_start >= hw_start && pmem_start < hw_end
>  				&& pmem_end <= hw_end && pmem_end > hw_start)
>  			/* pass */;
> @@ -1947,14 +1947,16 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
>   * @nd_label: target pmem namespace label to evaluate
>   */
>  static struct device *create_namespace_pmem(struct nd_region *nd_region,
> -		struct nd_namespace_index *nsindex,
> -		struct nd_namespace_label *nd_label)
> +					    struct nd_mapping *nd_mapping,
> +					    struct nd_namespace_label *nd_label)
>  {
> +	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
> +	struct nd_namespace_index *nsindex =
> +		to_namespace_index(ndd, ndd->ns_current);
>  	u64 cookie = nd_region_interleave_set_cookie(nd_region, nsindex);
>  	u64 altcookie = nd_region_interleave_set_altcookie(nd_region);
>  	struct nd_label_ent *label_ent;
>  	struct nd_namespace_pmem *nspm;
> -	struct nd_mapping *nd_mapping;
>  	resource_size_t size = 0;
>  	struct resource *res;
>  	struct device *dev;
> @@ -1966,10 +1968,10 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  		return ERR_PTR(-ENXIO);
>  	}
>  
> -	if (__le64_to_cpu(nd_label->isetcookie) != cookie) {
> +	if (nsl_get_isetcookie(ndd, nd_label) != cookie) {
>  		dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
>  				nd_label->uuid);
> -		if (__le64_to_cpu(nd_label->isetcookie) != altcookie)
> +		if (nsl_get_isetcookie(ndd, nd_label) != altcookie)
>  			return ERR_PTR(-EAGAIN);
>  
>  		dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
> @@ -2037,16 +2039,16 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  			continue;
>  		}
>  
> -		size += __le64_to_cpu(label0->rawsize);
> -		if (__le16_to_cpu(label0->position) != 0)
> +		ndd = to_ndd(nd_mapping);
> +		size += nsl_get_rawsize(ndd, label0);
> +		if (nsl_get_position(ndd, label0) != 0)
>  			continue;
>  		WARN_ON(nspm->alt_name || nspm->uuid);
> -		nspm->alt_name = kmemdup((void __force *) label0->name,
> -				NSLABEL_NAME_LEN, GFP_KERNEL);
> +		nspm->alt_name = kmemdup(nsl_ref_name(ndd, label0),
> +					 NSLABEL_NAME_LEN, GFP_KERNEL);
>  		nspm->uuid = kmemdup((void __force *) label0->uuid,
>  				NSLABEL_UUID_LEN, GFP_KERNEL);
> -		nspm->lbasize = __le64_to_cpu(label0->lbasize);
> -		ndd = to_ndd(nd_mapping);
> +		nspm->lbasize = nsl_get_lbasize(ndd, label0);
>  		if (namespace_label_has(ndd, abstraction_guid))
>  			nspm->nsio.common.claim_class
>  				= to_nvdimm_cclass(&label0->abstraction_guid);
> @@ -2237,7 +2239,7 @@ static int add_namespace_resource(struct nd_region *nd_region,
>  		if (is_namespace_blk(devs[i])) {
>  			res = nsblk_add_resource(nd_region, ndd,
>  					to_nd_namespace_blk(devs[i]),
> -					__le64_to_cpu(nd_label->dpa));
> +					nsl_get_dpa(ndd, nd_label));
>  			if (!res)
>  				return -ENXIO;
>  			nd_dbg_dpa(nd_region, ndd, res, "%d assign\n", count);
> @@ -2276,7 +2278,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  		if (nd_label->isetcookie != __cpu_to_le64(nd_set->cookie2)) {
>  			dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n",
>  					nd_set->cookie2,
> -					__le64_to_cpu(nd_label->isetcookie));
> +					nsl_get_isetcookie(ndd, nd_label));
>  			return ERR_PTR(-EAGAIN);
>  		}
>  	}
> @@ -2288,7 +2290,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  	dev->type = &namespace_blk_device_type;
>  	dev->parent = &nd_region->dev;
>  	nsblk->id = -1;
> -	nsblk->lbasize = __le64_to_cpu(nd_label->lbasize);
> +	nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
>  	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN,
>  			GFP_KERNEL);
>  	if (namespace_label_has(ndd, abstraction_guid))
> @@ -2296,15 +2298,14 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  			= to_nvdimm_cclass(&nd_label->abstraction_guid);
>  	if (!nsblk->uuid)
>  		goto blk_err;
> -	memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
> +	nsl_get_name(ndd, nd_label, name);
>  	if (name[0]) {
> -		nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN,
> -				GFP_KERNEL);
> +		nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN, GFP_KERNEL);
>  		if (!nsblk->alt_name)
>  			goto blk_err;
>  	}
>  	res = nsblk_add_resource(nd_region, ndd, nsblk,
> -			__le64_to_cpu(nd_label->dpa));
> +			nsl_get_dpa(ndd, nd_label));
>  	if (!res)
>  		goto blk_err;
>  	nd_dbg_dpa(nd_region, ndd, res, "%d: assign\n", count);
> @@ -2345,6 +2346,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  	struct device *dev, **devs = NULL;
>  	struct nd_label_ent *label_ent, *e;
>  	struct nd_mapping *nd_mapping = &nd_region->mapping[0];
> +	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
>  	resource_size_t map_end = nd_mapping->start + nd_mapping->size - 1;
>  
>  	/* "safe" because create_namespace_pmem() might list_move() label_ent */
> @@ -2355,7 +2357,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  
>  		if (!nd_label)
>  			continue;
> -		flags = __le32_to_cpu(nd_label->flags);
> +		flags = nsl_get_flags(ndd, nd_label);
>  		if (is_nd_blk(&nd_region->dev)
>  				== !!(flags & NSLABEL_FLAG_LOCAL))
>  			/* pass, region matches label type */;
> @@ -2363,9 +2365,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  			continue;
>  
>  		/* skip labels that describe extents outside of the region */
> -		if (__le64_to_cpu(nd_label->dpa) < nd_mapping->start ||
> -		    __le64_to_cpu(nd_label->dpa) > map_end)
> -				continue;
> +		if (nsl_get_dpa(ndd, nd_label) < nd_mapping->start ||
> +		    nsl_get_dpa(ndd, nd_label) > map_end)
> +			continue;
>  
>  		i = add_namespace_resource(nd_region, nd_label, devs, count);
>  		if (i < 0)
> @@ -2381,13 +2383,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  
>  		if (is_nd_blk(&nd_region->dev))
>  			dev = create_namespace_blk(nd_region, nd_label, count);
> -		else {
> -			struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
> -			struct nd_namespace_index *nsindex;
> -
> -			nsindex = to_namespace_index(ndd, ndd->ns_current);
> -			dev = create_namespace_pmem(nd_region, nsindex, nd_label);
> -		}
> +		else
> +			dev = create_namespace_pmem(nd_region, nd_mapping,
> +						    nd_label);
>  
>  		if (IS_ERR(dev)) {
>  			switch (PTR_ERR(dev)) {
> @@ -2570,7 +2568,7 @@ static int init_active_labels(struct nd_region *nd_region)
>  				break;
>  			label = nd_label_active(ndd, j);
>  			if (test_bit(NDD_NOBLK, &nvdimm->flags)) {
> -				u32 flags = __le32_to_cpu(label->flags);
> +				u32 flags = nsl_get_flags(ndd, label);
>  
>  				flags &= ~NSLABEL_FLAG_LOCAL;
>  				label->flags = __cpu_to_le32(flags);

Does it make sense to introduce nsl_set_bit(), nsl_clear_bit() or some such to
avoid this swapping between endianness?

> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index 696b55556d4d..61f43f0edabf 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -35,6 +35,72 @@ struct nvdimm_drvdata {
>  	struct kref kref;
>  };
>  
> +static inline const u8 *nsl_ref_name(struct nvdimm_drvdata *ndd,
> +				     struct nd_namespace_label *nd_label)
> +{
> +	return nd_label->name;
> +}
> +
> +static inline u8 *nsl_get_name(struct nvdimm_drvdata *ndd,
> +			       struct nd_namespace_label *nd_label, u8 *name)
> +{
> +	return memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
> +}
> +
> +static inline u32 nsl_get_slot(struct nvdimm_drvdata *ndd,
> +			       struct nd_namespace_label *nd_label)
> +{
> +	return __le32_to_cpu(nd_label->slot);
> +}
> +
> +static inline u64 nsl_get_checksum(struct nvdimm_drvdata *ndd,
> +				   struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->checksum);
> +}
> +
> +static inline u32 nsl_get_flags(struct nvdimm_drvdata *ndd,
> +				struct nd_namespace_label *nd_label)
> +{
> +	return __le32_to_cpu(nd_label->flags);
> +}
> +
> +static inline u64 nsl_get_dpa(struct nvdimm_drvdata *ndd,
> +			      struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->dpa);
> +}
> +
> +static inline u64 nsl_get_rawsize(struct nvdimm_drvdata *ndd,
> +				  struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->rawsize);
> +}
> +
> +static inline u64 nsl_get_isetcookie(struct nvdimm_drvdata *ndd,
> +				     struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->isetcookie);
> +}
> +
> +static inline u16 nsl_get_position(struct nvdimm_drvdata *ndd,
> +				   struct nd_namespace_label *nd_label)
> +{
> +	return __le16_to_cpu(nd_label->position);
> +}
> +
> +static inline u16 nsl_get_nlabel(struct nvdimm_drvdata *ndd,
> +				 struct nd_namespace_label *nd_label)
> +{
> +	return __le16_to_cpu(nd_label->nlabel);
> +}
> +
> +static inline u64 nsl_get_lbasize(struct nvdimm_drvdata *ndd,
> +				  struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->lbasize);
> +}
> +
>  struct nd_region_data {
>  	int ns_count;
>  	int ns_active;
> 


* Re: [PATCH 17/23] cxl/mbox: Add exclusive kernel command support
  2021-08-09 22:29 ` [PATCH 17/23] cxl/mbox: Add exclusive kernel command support Dan Williams
@ 2021-08-10 21:34   ` Ben Widawsky
  2021-08-10 21:52     ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Ben Widawsky @ 2021-08-10 21:34 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, Jonathan.Cameron, vishal.l.verma,
	alison.schofield, ira.weiny

On 21-08-09 15:29:18, Dan Williams wrote:
> The CXL_PMEM driver expects exclusive control of the label storage area
> space. Similar to the LIBNVDIMM expectation that the label storage area
> is only writable from userspace when the corresponding memory device is
> not active in any region, the expectation is the native CXL_PCI UAPI
> path is disabled while the cxl_nvdimm for a given cxl_memdev device is
> active in LIBNVDIMM.
> 
> Add the ability to toggle the availability of a given command for the
> UAPI path. Use that new capability to shutdown changes to partitions and
> the label storage area while the cxl_nvdimm device is actively proxying
> commands for LIBNVDIMM.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/core/mbox.c |    5 +++++
>  drivers/cxl/cxlmem.h    |    2 ++
>  drivers/cxl/pmem.c      |   35 +++++++++++++++++++++++++++++------
>  3 files changed, 36 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 23100231e246..f26962d7cb65 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> @@ -409,6 +409,11 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
>  		}
>  	}
>  
> +	if (test_bit(cmd->info.id, cxlm->exclusive_cmds)) {
> +		rc = -EBUSY;
> +		goto out;
> +	}
> +

This breaks our current definition for cxl_raw_allow_all. All the test machinery
for deciding whether a command can be submitted was supposed to live in
cxl_validate_cmd_from_user(). Various versions of the original patches made
cxl_mem_raw_command_allowed() grow more intelligence (i.e. more than just the
opcode). I think this check belongs there, with that extra intelligence.

I don't love the EBUSY because it already had a meaning for concurrent use of
the mailbox, but I can't think of a better errno.

>  	dev_dbg(dev,
>  		"Submitting %s command for user\n"
>  		"\topcode: %x\n"
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index df4f3636a999..f6cfe84a064c 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -102,6 +102,7 @@ struct cxl_mbox_cmd {
>   * @mbox_mutex: Mutex to synchronize mailbox access.
>   * @firmware_version: Firmware version for the memory device.
>   * @enabled_cmds: Hardware commands found enabled in CEL.
> + * @exclusive_cmds: Commands that are kernel-internal only
>   * @pmem_range: Persistent memory capacity information.
>   * @ram_range: Volatile memory capacity information.
>   * @mbox_send: @dev specific transport for transmitting mailbox commands
> @@ -117,6 +118,7 @@ struct cxl_mem {
>  	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
>  	char firmware_version[0x10];
>  	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
> +	DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
>  
>  	struct range pmem_range;
>  	struct range ram_range;
> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> index 9652c3ee41e7..11410df77444 100644
> --- a/drivers/cxl/pmem.c
> +++ b/drivers/cxl/pmem.c
> @@ -16,9 +16,23 @@
>   */
>  static struct workqueue_struct *cxl_pmem_wq;
>  
> -static void unregister_nvdimm(void *nvdimm)
> +static void unregister_nvdimm(void *_cxl_nvd)
>  {
> -	nvdimm_delete(nvdimm);
> +	struct cxl_nvdimm *cxl_nvd = _cxl_nvd;
> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> +	struct cxl_mem *cxlm = cxlmd->cxlm;
> +	struct device *dev = &cxl_nvd->dev;
> +	struct nvdimm *nvdimm;
> +
> +	nvdimm = dev_get_drvdata(dev);
> +	if (nvdimm)
> +		nvdimm_delete(nvdimm);
> +
> +	mutex_lock(&cxlm->mbox_mutex);
> +	clear_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> +	clear_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> +	clear_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> +	mutex_unlock(&cxlm->mbox_mutex);
>  }
>  
>  static int match_nvdimm_bridge(struct device *dev, const void *data)
> @@ -39,6 +53,8 @@ static struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
>  static int cxl_nvdimm_probe(struct device *dev)
>  {
>  	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> +	struct cxl_mem *cxlm = cxlmd->cxlm;
>  	struct cxl_nvdimm_bridge *cxl_nvb;
>  	unsigned long flags = 0;
>  	struct nvdimm *nvdimm;
> @@ -52,17 +68,24 @@ static int cxl_nvdimm_probe(struct device *dev)
>  	if (!cxl_nvb->nvdimm_bus)
>  		goto out;
>  
> +	mutex_lock(&cxlm->mbox_mutex);
> +	set_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> +	set_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> +	set_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> +	mutex_unlock(&cxlm->mbox_mutex);
> +

What's the concurrency this lock is trying to protect against?

>  	set_bit(NDD_LABELING, &flags);
>  	nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags, 0, 0,
>  			       NULL);
> -	if (!nvdimm)
> -		goto out;
> -
> -	rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
> +	dev_set_drvdata(dev, nvdimm);
> +	rc = devm_add_action_or_reset(dev, unregister_nvdimm, cxl_nvd);
>  out:
>  	device_unlock(&cxl_nvb->dev);
>  	put_device(&cxl_nvb->dev);
>  
> +	if (!nvdimm && rc == 0)
> +		rc = -ENOMEM;
> +
>  	return rc;
>  }
>  
> 


* Re: [PATCH 17/23] cxl/mbox: Add exclusive kernel command support
  2021-08-10 21:34   ` Ben Widawsky
@ 2021-08-10 21:52     ` Dan Williams
  2021-08-10 22:06       ` Ben Widawsky
  0 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-10 21:52 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma,
	Schofield, Alison, Weiny, Ira

On Tue, Aug 10, 2021 at 2:35 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> On 21-08-09 15:29:18, Dan Williams wrote:
> > The CXL_PMEM driver expects exclusive control of the label storage area
> > space. Similar to the LIBNVDIMM expectation that the label storage area
> > is only writable from userspace when the corresponding memory device is
> > not active in any region, the expectation is the native CXL_PCI UAPI
> > path is disabled while the cxl_nvdimm for a given cxl_memdev device is
> > active in LIBNVDIMM.
> >
> > Add the ability to toggle the availability of a given command for the
> > UAPI path. Use that new capability to shutdown changes to partitions and
> > the label storage area while the cxl_nvdimm device is actively proxying
> > commands for LIBNVDIMM.
> >
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/cxl/core/mbox.c |    5 +++++
> >  drivers/cxl/cxlmem.h    |    2 ++
> >  drivers/cxl/pmem.c      |   35 +++++++++++++++++++++++++++++------
> >  3 files changed, 36 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> > index 23100231e246..f26962d7cb65 100644
> > --- a/drivers/cxl/core/mbox.c
> > +++ b/drivers/cxl/core/mbox.c
> > @@ -409,6 +409,11 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
> >               }
> >       }
> >
> > +     if (test_bit(cmd->info.id, cxlm->exclusive_cmds)) {
> > +             rc = -EBUSY;
> > +             goto out;
> > +     }
> > +
>
> This breaks our current definition for cxl_raw_allow_all. All the test machinery

That's deliberate; this exclusion sits outside of the raw policy. I
don't think raw_allow_all should override the kernel's self-protection
of data structures, like labels, whose consistency it needs to maintain.
If userspace wants to use raw_allow_all to send LSA manipulation
commands it must do so while the device is not active on the nvdimm
side of the house. You'll see that:

ndctl disable-region all
<mutate labels>
ndctl enable-region all

...is a common pattern from custom label update flows.
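The ordering being argued about here can be sketched in plain C. This is a
simplified stand-in, not the driver code: the names (submit_from_user,
mock_cxl_mem, CMD_*) are hypothetical, and the real kernel path uses a
bitmap and errno constants rather than these literals. The point it
illustrates is that the exclusivity check runs after validation, so it
applies even to commands admitted by the raw policy:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for CXL_MEM_COMMAND_ID_* values */
enum { CMD_GET_LSA, CMD_SET_LSA, CMD_SET_PARTITION_INFO, CMD_NR };

struct mock_cxl_mem {
	bool raw_allow_all;     /* stand-in for the raw-command policy knob */
	bool exclusive[CMD_NR]; /* stand-in for the exclusive_cmds bitmap */
};

/* Returns 0 on success, a negative errno-style value on failure. */
static int submit_from_user(struct mock_cxl_mem *m, int cmd, bool is_raw)
{
	/* validation step: raw commands pass only when policy allows */
	if (is_raw && !m->raw_allow_all)
		return -1; /* rejected by the raw policy */

	/* exclusivity check comes after validation, so it still gates
	 * raw commands while the nvdimm side owns the device */
	if (cmd >= 0 && cmd < CMD_NR && m->exclusive[cmd])
		return -16; /* -EBUSY */

	return 0; /* would be handed to the mailbox */
}
```

Under this sketch, raw_allow_all still lets raw commands through in
general; only the kernel-claimed commands bounce with -EBUSY until the
region is disabled again.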

> for whether a command can be submitted was supposed to happen in
> cxl_validate_cmd_from_user(). Various versions of the original patches made
> cxl_mem_raw_command_allowed() grow more intelligence (ie. more than just the
> opcode). I think this check belongs there with more intelligence.
>
> I don't love the EBUSY because it already had a meaning for concurrent use of
> the mailbox, but I can't think of a better errno.

It's the existing errno that happens from nvdimm land when the kernel
owns the label area, so it would be confusing to invent a new one for
the same behavior now:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/nvdimm/bus.c#n1013

>
> >       dev_dbg(dev,
> >               "Submitting %s command for user\n"
> >               "\topcode: %x\n"
> > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > index df4f3636a999..f6cfe84a064c 100644
> > --- a/drivers/cxl/cxlmem.h
> > +++ b/drivers/cxl/cxlmem.h
> > @@ -102,6 +102,7 @@ struct cxl_mbox_cmd {
> >   * @mbox_mutex: Mutex to synchronize mailbox access.
> >   * @firmware_version: Firmware version for the memory device.
> >   * @enabled_cmds: Hardware commands found enabled in CEL.
> > + * @exclusive_cmds: Commands that are kernel-internal only
> >   * @pmem_range: Persistent memory capacity information.
> >   * @ram_range: Volatile memory capacity information.
> >   * @mbox_send: @dev specific transport for transmitting mailbox commands
> > @@ -117,6 +118,7 @@ struct cxl_mem {
> >       struct mutex mbox_mutex; /* Protects device mailbox and firmware */
> >       char firmware_version[0x10];
> >       DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
> > +     DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
> >
> >       struct range pmem_range;
> >       struct range ram_range;
> > diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> > index 9652c3ee41e7..11410df77444 100644
> > --- a/drivers/cxl/pmem.c
> > +++ b/drivers/cxl/pmem.c
> > @@ -16,9 +16,23 @@
> >   */
> >  static struct workqueue_struct *cxl_pmem_wq;
> >
> > -static void unregister_nvdimm(void *nvdimm)
> > +static void unregister_nvdimm(void *_cxl_nvd)
> >  {
> > -     nvdimm_delete(nvdimm);
> > +     struct cxl_nvdimm *cxl_nvd = _cxl_nvd;
> > +     struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > +     struct cxl_mem *cxlm = cxlmd->cxlm;
> > +     struct device *dev = &cxl_nvd->dev;
> > +     struct nvdimm *nvdimm;
> > +
> > +     nvdimm = dev_get_drvdata(dev);
> > +     if (nvdimm)
> > +             nvdimm_delete(nvdimm);
> > +
> > +     mutex_lock(&cxlm->mbox_mutex);
> > +     clear_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> > +     clear_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> > +     clear_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> > +     mutex_unlock(&cxlm->mbox_mutex);
> >  }
> >
> >  static int match_nvdimm_bridge(struct device *dev, const void *data)
> > @@ -39,6 +53,8 @@ static struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
> >  static int cxl_nvdimm_probe(struct device *dev)
> >  {
> >       struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> > +     struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > +     struct cxl_mem *cxlm = cxlmd->cxlm;
> >       struct cxl_nvdimm_bridge *cxl_nvb;
> >       unsigned long flags = 0;
> >       struct nvdimm *nvdimm;
> > @@ -52,17 +68,24 @@ static int cxl_nvdimm_probe(struct device *dev)
> >       if (!cxl_nvb->nvdimm_bus)
> >               goto out;
> >
> > +     mutex_lock(&cxlm->mbox_mutex);
> > +     set_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> > +     set_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> > +     set_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> > +     mutex_unlock(&cxlm->mbox_mutex);
> > +
>
> What's the concurrency this lock is trying to protect against?

I can add a comment. It synchronizes against in-flight ioctl users to
make sure that any outstanding requests have completed before the policy
changes. I.e. userspace must not be allowed to race the nvdimm
subsystem's attach, so that LIBNVDIMM sees a consistent state of the
persistent memory configuration.
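A minimal userspace sketch of that handshake, using pthreads in place of
the kernel mutex (the names user_send_set_lsa and nvdimm_probe_claim are
hypothetical, standing in for the ioctl path and cxl_nvdimm_probe()):
because the user path executes each command while holding the mutex,
taking the same mutex around the policy flip guarantees any in-flight
command has drained before the bit is observed set.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t mbox_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool set_lsa_exclusive; /* stand-in for one exclusive_cmds bit */

/* user ioctl path: policy check and command execution are atomic */
static int user_send_set_lsa(void)
{
	int rc = 0;

	pthread_mutex_lock(&mbox_mutex);
	if (set_lsa_exclusive)
		rc = -16; /* -EBUSY */
	/* ...otherwise the mailbox command would execute here... */
	pthread_mutex_unlock(&mbox_mutex);
	return rc;
}

/* probe path: the lock acquisition cannot succeed until any in-flight
 * user command above has finished and dropped the mutex */
static void nvdimm_probe_claim(void)
{
	pthread_mutex_lock(&mbox_mutex);
	set_lsa_exclusive = true;
	pthread_mutex_unlock(&mbox_mutex);
}
```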

>
> >       set_bit(NDD_LABELING, &flags);
> >       nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags, 0, 0,
> >                              NULL);
> > -     if (!nvdimm)
> > -             goto out;
> > -
> > -     rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
> > +     dev_set_drvdata(dev, nvdimm);
> > +     rc = devm_add_action_or_reset(dev, unregister_nvdimm, cxl_nvd);
> >  out:
> >       device_unlock(&cxl_nvb->dev);
> >       put_device(&cxl_nvb->dev);
> >
> > +     if (!nvdimm && rc == 0)
> > +             rc = -ENOMEM;
> > +
> >       return rc;
> >  }
> >
> >


* Re: [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy
  2021-08-09 22:29 ` [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy Dan Williams
@ 2021-08-10 21:57   ` Ben Widawsky
  2021-08-10 22:40     ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Ben Widawsky @ 2021-08-10 21:57 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, Jonathan.Cameron, vishal.l.verma,
	alison.schofield, ira.weiny

On 21-08-09 15:29:33, Dan Williams wrote:
> Create an environment for CXL plumbing unit tests. Especially when it
> comes to an algorithm for HDM Decoder (Host-managed Device Memory
> Decoder) programming, the availability of an in-kernel-tree emulation
> environment for CXL configuration complexity and corner cases speeds
> development and deters regressions.
> 
> The approach taken mirrors what was done for tools/testing/nvdimm/. I.e.
> an external module, cxl_test.ko built out of the tools/testing/cxl/
> directory, provides mock implementations of kernel APIs and kernel
> objects to simulate a real world device hierarchy.
> 
> One feedback for the tools/testing/nvdimm/ proposal was "why not do this
> in QEMU?". In fact, the CXL development community has developed a QEMU
> model for CXL [1]. However, there are a few blocking issues that keep
> QEMU from being a tight fit for topology + provisioning unit tests:
> 
> 1/ The QEMU community has yet to show interest in merging any of this
>    support that has had patches on the list since November 2020. So,
>    testing CXL to date involves building custom QEMU with out-of-tree
>    patches.
> 
> 2/ CXL mechanisms like cross-host-bridge interleave do not have a clear
>    path to be emulated by QEMU without major infrastructure work. This
>    is easier to achieve with the alloc_mock_res() approach taken in this
>    patch to shortcut-define emulated system physical address ranges with
>    interleave behavior.

I just want to say that this was discussed on the mailing list, and I think
there is a reasonable plan (albeit a lot of work). However, #1 is the true
blocker IMHO.

> 
> The QEMU enabling has been critical to get the driver off the ground,
> and may still move forward, but it does not address the ongoing needs of
> a regression testing environment and test driven development.
> 

The really nice thing QEMU provides over this (assuming one implemented
interleaving properly) is a programmatic (via command line) way to test an
effectively unlimited set of topologies, configurations, and hotplug scenarios.
So I disagree here: I think QEMU is the better theoretical vehicle for
regression testing and test-driven development. However, my unfinished branch,
with no upstream interest in sight, is problematic at best for the longer term.

I didn't look super closely, but I have one comment/question below. Otherwise,
LGTM.

> This patch adds an ACPI CXL Platform definition with emulated CXL
> multi-ported host-bridges. A follow on patch adds emulated memory
> expander devices.
> 
> Link: https://lore.kernel.org/r/20210202005948.241655-1-ben.widawsky@intel.com [1]
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/acpi.c            |   52 +++-
>  drivers/cxl/cxl.h             |    8 +
>  tools/testing/cxl/Kbuild      |   27 ++
>  tools/testing/cxl/mock_acpi.c |  105 ++++++++
>  tools/testing/cxl/test/Kbuild |    6 
>  tools/testing/cxl/test/cxl.c  |  508 +++++++++++++++++++++++++++++++++++++++++
>  tools/testing/cxl/test/mock.c |  155 +++++++++++++
>  tools/testing/cxl/test/mock.h |   26 ++
>  8 files changed, 866 insertions(+), 21 deletions(-)
>  create mode 100644 tools/testing/cxl/Kbuild
>  create mode 100644 tools/testing/cxl/mock_acpi.c
>  create mode 100644 tools/testing/cxl/test/Kbuild
>  create mode 100644 tools/testing/cxl/test/cxl.c
>  create mode 100644 tools/testing/cxl/test/mock.c
>  create mode 100644 tools/testing/cxl/test/mock.h
> 
> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index 8ae89273f58e..e0cd9df85ca5 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -182,15 +182,7 @@ static resource_size_t get_chbcr(struct acpi_cedt_chbs *chbs)
>  	return IS_ERR(chbs) ? CXL_RESOURCE_NONE : chbs->base;
>  }
>  
> -struct cxl_walk_context {
> -	struct device *dev;
> -	struct pci_bus *root;
> -	struct cxl_port *port;
> -	int error;
> -	int count;
> -};
> -
> -static int match_add_root_ports(struct pci_dev *pdev, void *data)
> +__weak int match_add_root_ports(struct pci_dev *pdev, void *data)
>  {
>  	struct cxl_walk_context *ctx = data;
>  	struct pci_bus *root_bus = ctx->root;
> @@ -214,6 +206,8 @@ static int match_add_root_ports(struct pci_dev *pdev, void *data)
>  	port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
>  	rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
>  	if (rc) {
> +		dev_err(dev, "failed to add dport: %s (%d)\n",
> +			dev_name(&pdev->dev), rc);
>  		ctx->error = rc;
>  		return rc;
>  	}
> @@ -239,12 +233,15 @@ static struct cxl_dport *find_dport_by_dev(struct cxl_port *port, struct device
>  	return NULL;
>  }
>  
> -static struct acpi_device *to_cxl_host_bridge(struct device *dev)
> +__weak struct acpi_device *to_cxl_host_bridge(struct device *host,
> +					      struct device *dev)
>  {
>  	struct acpi_device *adev = to_acpi_device(dev);
>  
> -	if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0)
> +	if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0) {
> +		dev_dbg(host, "found host bridge %s\n", dev_name(&adev->dev));
>  		return adev;
> +	}
>  	return NULL;
>  }
>  
> @@ -254,14 +251,14 @@ static struct acpi_device *to_cxl_host_bridge(struct device *dev)
>   */
>  static int add_host_bridge_uport(struct device *match, void *arg)
>  {
> -	struct acpi_device *bridge = to_cxl_host_bridge(match);
> +	struct cxl_port *port;
> +	struct cxl_dport *dport;
> +	struct cxl_decoder *cxld;
> +	struct cxl_walk_context ctx;
> +	struct acpi_pci_root *pci_root;
>  	struct cxl_port *root_port = arg;
>  	struct device *host = root_port->dev.parent;
> -	struct acpi_pci_root *pci_root;
> -	struct cxl_walk_context ctx;
> -	struct cxl_decoder *cxld;
> -	struct cxl_dport *dport;
> -	struct cxl_port *port;
> +	struct acpi_device *bridge = to_cxl_host_bridge(host, match);
>  
>  	if (!bridge)
>  		return 0;
> @@ -319,7 +316,7 @@ static int add_host_bridge_dport(struct device *match, void *arg)
>  	struct acpi_cedt_chbs *chbs;
>  	struct cxl_port *root_port = arg;
>  	struct device *host = root_port->dev.parent;
> -	struct acpi_device *bridge = to_cxl_host_bridge(match);
> +	struct acpi_device *bridge = to_cxl_host_bridge(host, match);
>  
>  	if (!bridge)
>  		return 0;
> @@ -371,6 +368,17 @@ static int add_root_nvdimm_bridge(struct device *match, void *data)
>  	return 1;
>  }
>  
> +static u32 cedt_instance(struct platform_device *pdev)
> +{
> +	const bool *native_acpi0017 = acpi_device_get_match_data(&pdev->dev);
> +
> +	if (native_acpi0017 && *native_acpi0017)
> +		return 0;
> +
> +	/* for cxl_test request a non-canonical instance */
> +	return U32_MAX;
> +}
> +
>  static int cxl_acpi_probe(struct platform_device *pdev)
>  {
>  	int rc;
> @@ -384,7 +392,7 @@ static int cxl_acpi_probe(struct platform_device *pdev)
>  		return PTR_ERR(root_port);
>  	dev_dbg(host, "add: %s\n", dev_name(&root_port->dev));
>  
> -	status = acpi_get_table(ACPI_SIG_CEDT, 0, &acpi_cedt);
> +	status = acpi_get_table(ACPI_SIG_CEDT, cedt_instance(pdev), &acpi_cedt);
>  	if (ACPI_FAILURE(status))
>  		return -ENXIO;
>  
> @@ -415,9 +423,11 @@ static int cxl_acpi_probe(struct platform_device *pdev)
>  	return 0;
>  }
>  
> +static bool native_acpi0017 = true;
> +
>  static const struct acpi_device_id cxl_acpi_ids[] = {
> -	{ "ACPI0017", 0 },
> -	{ "", 0 },
> +	{ "ACPI0017", (unsigned long) &native_acpi0017 },
> +	{ },
>  };
>  MODULE_DEVICE_TABLE(acpi, cxl_acpi_ids);
>  
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 1b2e816e061e..09c81cf8b800 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -226,6 +226,14 @@ struct cxl_nvdimm {
>  	struct nvdimm *nvdimm;
>  };
>  
> +struct cxl_walk_context {
> +	struct device *dev;
> +	struct pci_bus *root;
> +	struct cxl_port *port;
> +	int error;
> +	int count;
> +};
> +
>  /**
>   * struct cxl_port - logical collection of upstream port devices and
>   *		     downstream port devices to construct a CXL memory
> diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
> new file mode 100644
> index 000000000000..6ea0c7df36f0
> --- /dev/null
> +++ b/tools/testing/cxl/Kbuild
> @@ -0,0 +1,27 @@
> +# SPDX-License-Identifier: GPL-2.0
> +ldflags-y += --wrap=is_acpi_device_node
> +ldflags-y += --wrap=acpi_get_table
> +ldflags-y += --wrap=acpi_put_table
> +ldflags-y += --wrap=acpi_evaluate_integer
> +ldflags-y += --wrap=acpi_pci_find_root
> +ldflags-y += --wrap=pci_walk_bus
> +
> +DRIVERS := ../../../drivers
> +CXL_SRC := $(DRIVERS)/cxl
> +CXL_CORE_SRC := $(DRIVERS)/cxl/core
> +ccflags-y := -I$(srctree)/drivers/cxl/
> +
> +obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
> +
> +cxl_acpi-y := $(CXL_SRC)/acpi.o
> +cxl_acpi-y += mock_acpi.o
> +
> +obj-$(CONFIG_CXL_BUS) += cxl_core.o
> +
> +cxl_core-y := $(CXL_CORE_SRC)/bus.o
> +cxl_core-y += $(CXL_CORE_SRC)/pmem.o
> +cxl_core-y += $(CXL_CORE_SRC)/regs.o
> +cxl_core-y += $(CXL_CORE_SRC)/memdev.o
> +cxl_core-y += $(CXL_CORE_SRC)/mbox.o
> +
> +obj-m += test/
> diff --git a/tools/testing/cxl/mock_acpi.c b/tools/testing/cxl/mock_acpi.c
> new file mode 100644
> index 000000000000..256bdf9e1ce8
> --- /dev/null
> +++ b/tools/testing/cxl/mock_acpi.c
> @@ -0,0 +1,105 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
> +
> +#include <linux/platform_device.h>
> +#include <linux/device.h>
> +#include <linux/acpi.h>
> +#include <linux/pci.h>
> +#include <cxl.h>
> +#include "test/mock.h"
> +
> +struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev)
> +{
> +	int index;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +	struct acpi_device *adev = NULL;
> +
> +	if (ops && ops->is_mock_bridge(dev)) {
> +		adev = ACPI_COMPANION(dev);
> +		goto out;
> +	}

Here, and below in ops->is_mock_port()... I'm a bit confused why a mock driver
would ever attempt to do anything with real hardware. I.e., why not
> +
> +	if (dev->bus == &platform_bus_type)
> +		goto out;
> +
> +	if (strcmp(acpi_device_hid(to_acpi_device(dev)), "ACPI0016") == 0) {
> +		adev = to_acpi_device(dev);
> +		dev_dbg(host, "found host bridge %s\n", dev_name(&adev->dev));
> +	}
> +out:
> +	put_cxl_mock_ops(index);
> +	return adev;
> +}
> +
> +static int match_add_root_port(struct pci_dev *pdev, void *data)
> +{
> +	struct cxl_walk_context *ctx = data;
> +	struct pci_bus *root_bus = ctx->root;
> +	struct cxl_port *port = ctx->port;
> +	int type = pci_pcie_type(pdev);
> +	struct device *dev = ctx->dev;
> +	u32 lnkcap, port_num;
> +	int rc;
> +
> +	if (pdev->bus != root_bus)
> +		return 0;
> +	if (!pci_is_pcie(pdev))
> +		return 0;
> +	if (type != PCI_EXP_TYPE_ROOT_PORT)
> +		return 0;
> +	if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
> +				  &lnkcap) != PCIBIOS_SUCCESSFUL)
> +		return 0;
> +
> +	/* TODO walk DVSEC to find component register base */
> +	port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
> +	rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
> +	if (rc) {
> +		dev_err(dev, "failed to add dport: %s (%d)\n",
> +			dev_name(&pdev->dev), rc);
> +		ctx->error = rc;
> +		return rc;
> +	}
> +	ctx->count++;
> +
> +	dev_dbg(dev, "add dport%d: %s\n", port_num, dev_name(&pdev->dev));
> +
> +	return 0;
> +}
> +
> +static int mock_add_root_port(struct platform_device *pdev, void *data)
> +{
> +	struct cxl_walk_context *ctx = data;
> +	struct cxl_port *port = ctx->port;
> +	struct device *dev = ctx->dev;
> +	int rc;
> +
> +	rc = cxl_add_dport(port, &pdev->dev, pdev->id, CXL_RESOURCE_NONE);
> +	if (rc) {
> +		dev_err(dev, "failed to add dport: %s (%d)\n",
> +			dev_name(&pdev->dev), rc);
> +		ctx->error = rc;
> +		return rc;
> +	}
> +	ctx->count++;
> +
> +	dev_dbg(dev, "add dport%d: %s\n", pdev->id, dev_name(&pdev->dev));
> +
> +	return 0;
> +}
> +
> +int match_add_root_ports(struct pci_dev *dev, void *data)
> +{
> +	int index, rc;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +	struct platform_device *pdev = (struct platform_device *) dev;
> +
> +	if (ops && ops->is_mock_port(pdev))
> +		rc = mock_add_root_port(pdev, data);
> +	else
> +		rc = match_add_root_port(dev, data);
> +
> +	put_cxl_mock_ops(index);
> +
> +	return rc;
> +}
> diff --git a/tools/testing/cxl/test/Kbuild b/tools/testing/cxl/test/Kbuild
> new file mode 100644
> index 000000000000..7de4ddecfd21
> --- /dev/null
> +++ b/tools/testing/cxl/test/Kbuild
> @@ -0,0 +1,6 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-m += cxl_test.o
> +obj-m += cxl_mock.o
> +
> +cxl_test-y := cxl.o
> +cxl_mock-y := mock.o
> diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
> new file mode 100644
> index 000000000000..5213d6e23dde
> --- /dev/null
> +++ b/tools/testing/cxl/test/cxl.c
> @@ -0,0 +1,508 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +// Copyright(c) 2021 Intel Corporation. All rights reserved.
> +
> +#include <linux/platform_device.h>
> +#include <linux/genalloc.h>
> +#include <linux/module.h>
> +#include <linux/mutex.h>
> +#include <linux/acpi.h>
> +#include <linux/pci.h>
> +#include <linux/mm.h>
> +#include "mock.h"
> +
> +#define NR_CXL_HOST_BRIDGES 4
> +#define NR_CXL_ROOT_PORTS 2
> +
> +static struct platform_device *cxl_acpi;
> +static struct platform_device *cxl_host_bridge[NR_CXL_HOST_BRIDGES];
> +static struct platform_device
> +	*cxl_root_port[NR_CXL_HOST_BRIDGES * NR_CXL_ROOT_PORTS];
> +
> +static struct acpi_device acpi0017_mock;
> +static struct acpi_device host_bridge[NR_CXL_HOST_BRIDGES] = {
> +	[0] = {
> +		.handle = &host_bridge[0],
> +	},
> +	[1] = {
> +		.handle = &host_bridge[1],
> +	},
> +	[2] = {
> +		.handle = &host_bridge[2],
> +	},
> +	[3] = {
> +		.handle = &host_bridge[3],
> +	},
> +};
> +
> +static bool is_mock_adev(struct acpi_device *adev)
> +{
> +	int i;
> +
> +	if (adev == &acpi0017_mock)
> +		return true;
> +
> +	for (i = 0; i < ARRAY_SIZE(host_bridge); i++)
> +		if (adev == &host_bridge[i])
> +			return true;
> +
> +	return false;
> +}
> +
> +static struct {
> +	struct acpi_table_cedt cedt;
> +	struct acpi_cedt_chbs chbs[NR_CXL_HOST_BRIDGES];
> +	struct {
> +		struct acpi_cedt_cfmws cfmws;
> +		u32 target[1];
> +	} cfmws0;
> +	struct {
> +		struct acpi_cedt_cfmws cfmws;
> +		u32 target[4];
> +	} cfmws1;
> +	struct {
> +		struct acpi_cedt_cfmws cfmws;
> +		u32 target[1];
> +	} cfmws2;
> +	struct {
> +		struct acpi_cedt_cfmws cfmws;
> +		u32 target[4];
> +	} cfmws3;
> +} __packed mock_cedt = {
> +	.cedt = {
> +		.header = {
> +			.signature = "CEDT",
> +			.length = sizeof(mock_cedt),
> +			.revision = 1,
> +		},
> +	},
> +	.chbs[0] = {
> +		.header = {
> +			.type = ACPI_CEDT_TYPE_CHBS,
> +			.length = sizeof(mock_cedt.chbs[0]),
> +		},
> +		.uid = 0,
> +		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
> +	},
> +	.chbs[1] = {
> +		.header = {
> +			.type = ACPI_CEDT_TYPE_CHBS,
> +			.length = sizeof(mock_cedt.chbs[0]),
> +		},
> +		.uid = 1,
> +		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
> +	},
> +	.chbs[2] = {
> +		.header = {
> +			.type = ACPI_CEDT_TYPE_CHBS,
> +			.length = sizeof(mock_cedt.chbs[0]),
> +		},
> +		.uid = 2,
> +		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
> +	},
> +	.chbs[3] = {
> +		.header = {
> +			.type = ACPI_CEDT_TYPE_CHBS,
> +			.length = sizeof(mock_cedt.chbs[0]),
> +		},
> +		.uid = 3,
> +		.cxl_version = ACPI_CEDT_CHBS_VERSION_CXL20,
> +	},
> +	.cfmws0 = {
> +		.cfmws = {
> +			.header = {
> +				.type = ACPI_CEDT_TYPE_CFMWS,
> +				.length = sizeof(mock_cedt.cfmws0),
> +			},
> +			.interleave_ways = 0,
> +			.granularity = 4,
> +			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
> +					ACPI_CEDT_CFMWS_RESTRICT_VOLATILE,
> +			.qtg_id = 0,
> +		},
> +		.target = { 0 },
> +	},
> +	.cfmws1 = {
> +		.cfmws = {
> +			.header = {
> +				.type = ACPI_CEDT_TYPE_CFMWS,
> +				.length = sizeof(mock_cedt.cfmws1),
> +			},
> +			.interleave_ways = 2,
> +			.granularity = 4,
> +			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
> +					ACPI_CEDT_CFMWS_RESTRICT_VOLATILE,
> +			.qtg_id = 1,
> +		},
> +		.target = { 0, 1, 2, 3 },
> +	},
> +	.cfmws2 = {
> +		.cfmws = {
> +			.header = {
> +				.type = ACPI_CEDT_TYPE_CFMWS,
> +				.length = sizeof(mock_cedt.cfmws2),
> +			},
> +			.interleave_ways = 0,
> +			.granularity = 4,
> +			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
> +					ACPI_CEDT_CFMWS_RESTRICT_PMEM,
> +			.qtg_id = 2,
> +		},
> +		.target = { 0 },
> +	},
> +	.cfmws3 = {
> +		.cfmws = {
> +			.header = {
> +				.type = ACPI_CEDT_TYPE_CFMWS,
> +				.length = sizeof(mock_cedt.cfmws3),
> +			},
> +			.interleave_ways = 2,
> +			.granularity = 4,
> +			.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 |
> +					ACPI_CEDT_CFMWS_RESTRICT_PMEM,
> +			.qtg_id = 3,
> +		},
> +		.target = { 0, 1, 2, 3 },
> +	},
> +};
> +
> +struct cxl_mock_res {
> +	struct list_head list;
> +	struct range range;
> +};
> +
> +static LIST_HEAD(mock_res);
> +static DEFINE_MUTEX(mock_res_lock);
> +static struct gen_pool *cxl_mock_pool;
> +
> +static void free_mock_res(void)
> +{
> +	struct cxl_mock_res *res, *_res;
> +
> +	mutex_lock(&mock_res_lock);
> +	list_for_each_entry_safe(res, _res, &mock_res, list) {
> +		gen_pool_free(cxl_mock_pool, res->range.start,
> +			      range_len(&res->range));
> +		list_del(&res->list);
> +		kfree(res);
> +	}
> +	mutex_unlock(&mock_res_lock);
> +}
> +
> +static struct cxl_mock_res *alloc_mock_res(resource_size_t size)
> +{
> +	struct cxl_mock_res *res = kzalloc(sizeof(*res), GFP_KERNEL);
> +	struct genpool_data_align data = {
> +		.align = SZ_256M,
> +	};
> +	unsigned long phys;
> +
> +	INIT_LIST_HEAD(&res->list);
> +	phys = gen_pool_alloc_algo(cxl_mock_pool, size,
> +				   gen_pool_first_fit_align, &data);
> +	if (!phys)
> +		return NULL;
> +
> +	res->range = (struct range) {
> +		.start = phys,
> +		.end = phys + size - 1,
> +	};
> +	mutex_lock(&mock_res_lock);
> +	list_add(&res->list, &mock_res);
> +	mutex_unlock(&mock_res_lock);
> +
> +	return res;
> +}
> +
> +static int populate_cedt(void)
> +{
> +	struct acpi_cedt_cfmws *cfmws[4] = {
> +		[0] = &mock_cedt.cfmws0.cfmws,
> +		[1] = &mock_cedt.cfmws1.cfmws,
> +		[2] = &mock_cedt.cfmws2.cfmws,
> +		[3] = &mock_cedt.cfmws3.cfmws,
> +	};
> +	struct cxl_mock_res *res;
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(mock_cedt.chbs); i++) {
> +		struct acpi_cedt_chbs *chbs = &mock_cedt.chbs[i];
> +		resource_size_t size;
> +
> +		if (chbs->cxl_version == ACPI_CEDT_CHBS_VERSION_CXL20)
> +			size = ACPI_CEDT_CHBS_LENGTH_CXL20;
> +		else
> +			size = ACPI_CEDT_CHBS_LENGTH_CXL11;
> +
> +		res = alloc_mock_res(size);
> +		if (!res)
> +			return -ENOMEM;
> +		chbs->base = res->range.start;
> +		chbs->length = size;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(cfmws); i++) {
> +		struct acpi_cedt_cfmws *window = cfmws[i];
> +		int ways = 1 << window->interleave_ways;
> +
> +		res = alloc_mock_res(SZ_256M * ways);
> +		if (!res)
> +			return -ENOMEM;
> +		window->base_hpa = res->range.start;
> +		window->window_size = range_len(&res->range);
> +	}
> +
> +	return 0;
> +}
> +
> +static acpi_status mock_acpi_get_table(char *signature, u32 instance,
> +				       struct acpi_table_header **out_table)
> +{
> +	if (instance < U32_MAX || strcmp(signature, ACPI_SIG_CEDT) != 0)
> +		return acpi_get_table(signature, instance, out_table);
> +
> +	*out_table = (struct acpi_table_header *) &mock_cedt;
> +	return AE_OK;
> +}
> +
> +static void mock_acpi_put_table(struct acpi_table_header *table)
> +{
> +	if (table == (struct acpi_table_header *) &mock_cedt)
> +		return;
> +	acpi_put_table(table);
> +}
> +
> +static bool is_mock_bridge(struct device *dev)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++)
> +		if (dev == &cxl_host_bridge[i]->dev)
> +			return true;
> +
> +	return false;
> +}
> +
> +static int host_bridge_index(struct acpi_device *adev)
> +{
> +	return adev - host_bridge;
> +}
> +
> +static struct acpi_device *find_host_bridge(acpi_handle handle)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(host_bridge); i++)
> +		if (handle == host_bridge[i].handle)
> +			return &host_bridge[i];
> +	return NULL;
> +}
> +
> +static acpi_status
> +mock_acpi_evaluate_integer(acpi_handle handle, acpi_string pathname,
> +			   struct acpi_object_list *arguments,
> +			   unsigned long long *data)
> +{
> +	struct acpi_device *adev = find_host_bridge(handle);
> +
> +	if (!adev || strcmp(pathname, METHOD_NAME__UID) != 0)
> +		return acpi_evaluate_integer(handle, pathname, arguments, data);
> +
> +	*data = host_bridge_index(adev);
> +	return AE_OK;
> +}
> +
> +static struct pci_bus mock_pci_bus[NR_CXL_HOST_BRIDGES];
> +static struct acpi_pci_root mock_pci_root[NR_CXL_HOST_BRIDGES] = {
> +	[0] = {
> +		.bus = &mock_pci_bus[0],
> +	},
> +	[1] = {
> +		.bus = &mock_pci_bus[1],
> +	},
> +	[2] = {
> +		.bus = &mock_pci_bus[2],
> +	},
> +	[3] = {
> +		.bus = &mock_pci_bus[3],
> +	},
> +};
> +
> +static struct platform_device *mock_cxl_root_port(struct pci_bus *bus, int index)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(mock_pci_bus); i++)
> +		if (bus == &mock_pci_bus[i])
> +			return cxl_root_port[index + i * NR_CXL_ROOT_PORTS];
> +	return NULL;
> +}
> +
> +static bool is_mock_port(struct platform_device *pdev)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(cxl_root_port); i++)
> +		if (pdev == cxl_root_port[i])
> +			return true;
> +	return false;
> +}
> +
> +static bool is_mock_bus(struct pci_bus *bus)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(mock_pci_bus); i++)
> +		if (bus == &mock_pci_bus[i])
> +			return true;
> +	return false;
> +}
> +
> +static struct acpi_pci_root *mock_acpi_pci_find_root(acpi_handle handle)
> +{
> +	struct acpi_device *adev = find_host_bridge(handle);
> +
> +	if (!adev)
> +		return acpi_pci_find_root(handle);
> +	return &mock_pci_root[host_bridge_index(adev)];
> +}
> +
> +static struct cxl_mock_ops cxl_mock_ops = {
> +	.is_mock_adev = is_mock_adev,
> +	.is_mock_bridge = is_mock_bridge,
> +	.is_mock_bus = is_mock_bus,
> +	.is_mock_port = is_mock_port,
> +	.mock_port = mock_cxl_root_port,
> +	.acpi_get_table = mock_acpi_get_table,
> +	.acpi_put_table = mock_acpi_put_table,
> +	.acpi_evaluate_integer = mock_acpi_evaluate_integer,
> +	.acpi_pci_find_root = mock_acpi_pci_find_root,
> +	.list = LIST_HEAD_INIT(cxl_mock_ops.list),
> +};
> +
> +static void mock_companion(struct acpi_device *adev, struct device *dev)
> +{
> +	device_initialize(&adev->dev);
> +	fwnode_init(&adev->fwnode, NULL);
> +	dev->fwnode = &adev->fwnode;
> +	adev->fwnode.dev = dev;
> +}
> +
> +#ifndef SZ_64G
> +#define SZ_64G (SZ_32G * 2)
> +#endif
> +
> +#ifndef SZ_512G
> +#define SZ_512G (SZ_64G * 8)
> +#endif
> +
> +static __init int cxl_test_init(void)
> +{
> +	int rc, i;
> +
> +	register_cxl_mock_ops(&cxl_mock_ops);
> +
> +	cxl_mock_pool = gen_pool_create(ilog2(SZ_2M), NUMA_NO_NODE);
> +	if (!cxl_mock_pool) {
> +		rc = -ENOMEM;
> +		goto err_gen_pool_create;
> +	}
> +
> +	rc = gen_pool_add(cxl_mock_pool, SZ_512G, SZ_64G, NUMA_NO_NODE);
> +	if (rc)
> +		goto err_gen_pool_add;
> +
> +	rc = populate_cedt();
> +	if (rc)
> +		goto err_populate;
> +
> +	for (i = 0; i < ARRAY_SIZE(cxl_host_bridge); i++) {
> +		struct acpi_device *adev = &host_bridge[i];
> +		struct platform_device *pdev;
> +
> +		pdev = platform_device_alloc("cxl_host_bridge", i);
> +		if (!pdev) {
> +			rc = -ENOMEM;
> +			goto err_bridge;
> +		}
> +
> +		mock_companion(adev, &pdev->dev);
> +		rc = platform_device_add(pdev);
> +		if (rc) {
> +			platform_device_put(pdev);
> +			goto err_bridge;
> +		}
> +		cxl_host_bridge[i] = pdev;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(cxl_root_port); i++) {
> +		struct platform_device *bridge =
> +			cxl_host_bridge[i / NR_CXL_ROOT_PORTS];
> +		struct platform_device *pdev;
> +
> +		pdev = platform_device_alloc("cxl_root_port", i);
> +		if (!pdev) {
> +			rc = -ENOMEM;
> +			goto err_port;
> +		}
> +		pdev->dev.parent = &bridge->dev;
> +
> +		rc = platform_device_add(pdev);
> +		if (rc) {
> +			platform_device_put(pdev);
> +			goto err_port;
> +		}
> +		cxl_root_port[i] = pdev;
> +	}
> +
> +	cxl_acpi = platform_device_alloc("cxl_acpi", 0);
> +	if (!cxl_acpi) {
> +		rc = -ENOMEM;
> +		goto err_port;
> +	}
> +
> +	mock_companion(&acpi0017_mock, &cxl_acpi->dev);
> +	acpi0017_mock.dev.bus = &platform_bus_type;
> +
> +	rc = platform_device_add(cxl_acpi);
> +	if (rc)
> +		goto err_add;
> +
> +	return 0;
> +
> +err_add:
> +	platform_device_put(cxl_acpi);
> +err_port:
> +	for (i = ARRAY_SIZE(cxl_root_port) - 1; i >= 0; i--) {
> +		platform_device_del(cxl_root_port[i]);
> +		platform_device_put(cxl_root_port[i]);
> +	}
> +err_bridge:
> +	for (i = ARRAY_SIZE(cxl_host_bridge) - 1; i >= 0; i--) {
> +		platform_device_del(cxl_host_bridge[i]);
> +		platform_device_put(cxl_host_bridge[i]);
> +	}
> +err_populate:
> +	free_mock_res();
> +err_gen_pool_add:
> +	gen_pool_destroy(cxl_mock_pool);
> +err_gen_pool_create:
> +	unregister_cxl_mock_ops(&cxl_mock_ops);
> +	return rc;
> +}
> +
> +static __exit void cxl_test_exit(void)
> +{
> +	int i;
> +
> +	platform_device_del(cxl_acpi);
> +	platform_device_put(cxl_acpi);
> +	for (i = ARRAY_SIZE(cxl_root_port) - 1; i >= 0; i--) {
> +		platform_device_del(cxl_root_port[i]);
> +		platform_device_put(cxl_root_port[i]);
> +	}
> +	for (i = ARRAY_SIZE(cxl_host_bridge) - 1; i >= 0; i--) {
> +		platform_device_del(cxl_host_bridge[i]);
> +		platform_device_put(cxl_host_bridge[i]);
> +	}
> +	free_mock_res();
> +	gen_pool_destroy(cxl_mock_pool);
> +	unregister_cxl_mock_ops(&cxl_mock_ops);
> +}
> +
> +module_init(cxl_test_init);
> +module_exit(cxl_test_exit);
> +MODULE_LICENSE("GPL v2");
> diff --git a/tools/testing/cxl/test/mock.c b/tools/testing/cxl/test/mock.c
> new file mode 100644
> index 000000000000..5b61373a4f1d
> --- /dev/null
> +++ b/tools/testing/cxl/test/mock.c
> @@ -0,0 +1,155 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +// Copyright(c) 2021 Intel Corporation. All rights reserved.
> +
> +#include <linux/rculist.h>
> +#include <linux/device.h>
> +#include <linux/export.h>
> +#include <linux/acpi.h>
> +#include <linux/pci.h>
> +#include "mock.h"
> +
> +static LIST_HEAD(mock);
> +
> +void register_cxl_mock_ops(struct cxl_mock_ops *ops)
> +{
> +	list_add_rcu(&ops->list, &mock);
> +}
> +EXPORT_SYMBOL_GPL(register_cxl_mock_ops);
> +
> +static DEFINE_SRCU(cxl_mock_srcu);
> +
> +void unregister_cxl_mock_ops(struct cxl_mock_ops *ops)
> +{
> +	list_del_rcu(&ops->list);
> +	synchronize_srcu(&cxl_mock_srcu);
> +}
> +EXPORT_SYMBOL_GPL(unregister_cxl_mock_ops);
> +
> +struct cxl_mock_ops *get_cxl_mock_ops(int *index)
> +{
> +	*index = srcu_read_lock(&cxl_mock_srcu);
> +	return list_first_or_null_rcu(&mock, struct cxl_mock_ops, list);
> +}
> +EXPORT_SYMBOL_GPL(get_cxl_mock_ops);
> +
> +void put_cxl_mock_ops(int index)
> +{
> +	srcu_read_unlock(&cxl_mock_srcu, index);
> +}
> +EXPORT_SYMBOL_GPL(put_cxl_mock_ops);
> +
> +bool __wrap_is_acpi_device_node(const struct fwnode_handle *fwnode)
> +{
> +	struct acpi_device *adev =
> +		container_of(fwnode, struct acpi_device, fwnode);
> +	int index;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +	bool retval = false;
> +
> +	if (ops)
> +		retval = ops->is_mock_adev(adev);
> +
> +	if (!retval)
> +		retval = is_acpi_device_node(fwnode);
> +
> +	put_cxl_mock_ops(index);
> +	return retval;
> +}
> +EXPORT_SYMBOL(__wrap_is_acpi_device_node);
> +
> +acpi_status __wrap_acpi_get_table(char *signature, u32 instance,
> +				  struct acpi_table_header **out_table)
> +{
> +	int index;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +	acpi_status status;
> +
> +	if (ops)
> +		status = ops->acpi_get_table(signature, instance, out_table);
> +	else
> +		status = acpi_get_table(signature, instance, out_table);
> +
> +	put_cxl_mock_ops(index);
> +
> +	return status;
> +}
> +EXPORT_SYMBOL(__wrap_acpi_get_table);
> +
> +void __wrap_acpi_put_table(struct acpi_table_header *table)
> +{
> +	int index;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +
> +	if (ops)
> +		ops->acpi_put_table(table);
> +	else
> +		acpi_put_table(table);
> +	put_cxl_mock_ops(index);
> +}
> +EXPORT_SYMBOL(__wrap_acpi_put_table);
> +
> +acpi_status __wrap_acpi_evaluate_integer(acpi_handle handle,
> +					 acpi_string pathname,
> +					 struct acpi_object_list *arguments,
> +					 unsigned long long *data)
> +{
> +	int index;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +	acpi_status status;
> +
> +	if (ops)
> +		status = ops->acpi_evaluate_integer(handle, pathname, arguments,
> +						    data);
> +	else
> +		status = acpi_evaluate_integer(handle, pathname, arguments,
> +					       data);
> +	put_cxl_mock_ops(index);
> +
> +	return status;
> +}
> +EXPORT_SYMBOL(__wrap_acpi_evaluate_integer);
> +
> +struct acpi_pci_root *__wrap_acpi_pci_find_root(acpi_handle handle)
> +{
> +	int index;
> +	struct acpi_pci_root *root;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +
> +	if (ops)
> +		root = ops->acpi_pci_find_root(handle);
> +	else
> +		root = acpi_pci_find_root(handle);
> +
> +	put_cxl_mock_ops(index);
> +
> +	return root;
> +}
> +EXPORT_SYMBOL_GPL(__wrap_acpi_pci_find_root);
> +
> +void __wrap_pci_walk_bus(struct pci_bus *bus,
> +			 int (*cb)(struct pci_dev *, void *), void *userdata)
> +{
> +	int index;
> +	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +
> +	if (ops && ops->is_mock_bus(bus)) {
> +		int rc, i;
> +
> +		/*
> +		 * Simulate 2 root ports per host-bridge and no
> +		 * depth recursion.
> +		 */
> +		for (i = 0; i < 2; i++) {
> +			rc = cb((struct pci_dev *) ops->mock_port(bus, i),
> +				userdata);
> +			if (rc)
> +				break;
> +		}
> +	} else {
> +		pci_walk_bus(bus, cb, userdata);
> +	}
> +
> +	put_cxl_mock_ops(index);
> +}
> +EXPORT_SYMBOL_GPL(__wrap_pci_walk_bus);
> +
> +MODULE_LICENSE("GPL v2");
> diff --git a/tools/testing/cxl/test/mock.h b/tools/testing/cxl/test/mock.h
> new file mode 100644
> index 000000000000..7d3b3fa6ffec
> --- /dev/null
> +++ b/tools/testing/cxl/test/mock.h
> @@ -0,0 +1,26 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <linux/list.h>
> +#include <linux/acpi.h>
> +
> +struct cxl_mock_ops {
> +	struct list_head list;
> +	bool (*is_mock_adev)(struct acpi_device *dev);
> +	acpi_status (*acpi_get_table)(char *signature, u32 instance,
> +				      struct acpi_table_header **out_table);
> +	void (*acpi_put_table)(struct acpi_table_header *table);
> +	bool (*is_mock_bridge)(struct device *dev);
> +	acpi_status (*acpi_evaluate_integer)(acpi_handle handle,
> +					     acpi_string pathname,
> +					     struct acpi_object_list *arguments,
> +					     unsigned long long *data);
> +	struct acpi_pci_root *(*acpi_pci_find_root)(acpi_handle handle);
> +	struct platform_device *(*mock_port)(struct pci_bus *bus, int index);
> +	bool (*is_mock_bus)(struct pci_bus *bus);
> +	bool (*is_mock_port)(struct platform_device *pdev);
> +};
> +
> +void register_cxl_mock_ops(struct cxl_mock_ops *ops);
> +void unregister_cxl_mock_ops(struct cxl_mock_ops *ops);
> +struct cxl_mock_ops *get_cxl_mock_ops(int *index);
> +void put_cxl_mock_ops(int index);
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields
  2021-08-10 20:48   ` Ben Widawsky
@ 2021-08-10 21:58     ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-10 21:58 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma,
	Schofield, Alison, Weiny, Ira

On Tue, Aug 10, 2021 at 1:49 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> On 21-08-09 15:27:52, Dan Williams wrote:
> > In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> > helpers that abstract the label type from the implementation. The CXL
> > label format is mostly similar to the EFI label format with concepts /
> > fields added, like dynamic region creation and label type guids, and
> > other concepts removed like BLK-mode and interleave-set-cookie ids.
> >
> > In addition to nsl_get_* helpers there is the nsl_ref_name() helper that
> > returns a pointer to a label field rather than copying the data.
> >
> > Where changes touch the old whitespace style, update to clang-format
> > expectations.
> >
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/nvdimm/label.c          |   20 ++++++-----
> >  drivers/nvdimm/namespace_devs.c |   70 +++++++++++++++++++--------------------
> >  drivers/nvdimm/nd.h             |   66 +++++++++++++++++++++++++++++++++++++
> >  3 files changed, 110 insertions(+), 46 deletions(-)
> >
> > diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> > index 9251441fd8a3..b6d845cfb70e 100644
> > --- a/drivers/nvdimm/label.c
> > +++ b/drivers/nvdimm/label.c
> > @@ -350,14 +350,14 @@ static bool slot_valid(struct nvdimm_drvdata *ndd,
> >               struct nd_namespace_label *nd_label, u32 slot)
> >  {
> >       /* check that we are written where we expect to be written */
> > -     if (slot != __le32_to_cpu(nd_label->slot))
> > +     if (slot != nsl_get_slot(ndd, nd_label))
> >               return false;
> >
> >       /* check checksum */
> >       if (namespace_label_has(ndd, checksum)) {
> >               u64 sum, sum_save;
> >
> > -             sum_save = __le64_to_cpu(nd_label->checksum);
> > +             sum_save = nsl_get_checksum(ndd, nd_label);
> >               nd_label->checksum = __cpu_to_le64(0);
> >               sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
> >               nd_label->checksum = __cpu_to_le64(sum_save);
> > @@ -395,13 +395,13 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
> >                       continue;
> >
> >               memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> > -             flags = __le32_to_cpu(nd_label->flags);
> > +             flags = nsl_get_flags(ndd, nd_label);
> >               if (test_bit(NDD_NOBLK, &nvdimm->flags))
>
> Lazy review (didn't check NDD_NOBLK), should this be test_bit(NDD_NOBLK, &flags)?

No, there are two flags in play here: the label flags and the device
flags. The NDD_NOBLK device flag filters consideration of the
NSLABEL_FLAG_LOCAL label flag.

>
> >                       flags &= ~NSLABEL_FLAG_LOCAL;
> >               nd_label_gen_id(&label_id, label_uuid, flags);
> >               res = nvdimm_allocate_dpa(ndd, &label_id,
> > -                             __le64_to_cpu(nd_label->dpa),
> > -                             __le64_to_cpu(nd_label->rawsize));
> > +                                       nsl_get_dpa(ndd, nd_label),
> > +                                       nsl_get_rawsize(ndd, nd_label));
> >               nd_dbg_dpa(nd_region, ndd, res, "reserve\n");
> >               if (!res)
> >                       return -EBUSY;
> > @@ -548,9 +548,9 @@ int nd_label_active_count(struct nvdimm_drvdata *ndd)
> >               nd_label = to_label(ndd, slot);
> >
> >               if (!slot_valid(ndd, nd_label, slot)) {
> > -                     u32 label_slot = __le32_to_cpu(nd_label->slot);
> > -                     u64 size = __le64_to_cpu(nd_label->rawsize);
> > -                     u64 dpa = __le64_to_cpu(nd_label->dpa);
> > +                     u32 label_slot = nsl_get_slot(ndd, nd_label);
> > +                     u64 size = nsl_get_rawsize(ndd, nd_label);
> > +                     u64 dpa = nsl_get_dpa(ndd, nd_label);
> >
> >                       dev_dbg(ndd->dev,
> >                               "slot%d invalid slot: %d dpa: %llx size: %llx\n",
> > @@ -879,9 +879,9 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
> >       struct resource *res;
> >
> >       for_each_dpa_resource(ndd, res) {
> > -             if (res->start != __le64_to_cpu(nd_label->dpa))
> > +             if (res->start != nsl_get_dpa(ndd, nd_label))
> >                       continue;
> > -             if (resource_size(res) != __le64_to_cpu(nd_label->rawsize))
> > +             if (resource_size(res) != nsl_get_rawsize(ndd, nd_label))
> >                       continue;
> >               return res;
> >       }
> > diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> > index 2403b71b601e..94da804372bf 100644
> > --- a/drivers/nvdimm/namespace_devs.c
> > +++ b/drivers/nvdimm/namespace_devs.c
> > @@ -1235,7 +1235,7 @@ static int namespace_update_uuid(struct nd_region *nd_region,
> >                       if (!nd_label)
> >                               continue;
> >                       nd_label_gen_id(&label_id, nd_label->uuid,
> > -                                     __le32_to_cpu(nd_label->flags));
> > +                                     nsl_get_flags(ndd, nd_label));
> >                       if (strcmp(old_label_id.id, label_id.id) == 0)
> >                               set_bit(ND_LABEL_REAP, &label_ent->flags);
> >               }
> > @@ -1851,9 +1851,9 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
> >
> >                       if (!nd_label)
> >                               continue;
> > -                     isetcookie = __le64_to_cpu(nd_label->isetcookie);
> > -                     position = __le16_to_cpu(nd_label->position);
> > -                     nlabel = __le16_to_cpu(nd_label->nlabel);
> > +                     isetcookie = nsl_get_isetcookie(ndd, nd_label);
> > +                     position = nsl_get_position(ndd, nd_label);
> > +                     nlabel = nsl_get_nlabel(ndd, nd_label);
> >
> >                       if (isetcookie != cookie)
> >                               continue;
> > @@ -1923,8 +1923,8 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
> >                */
> >               hw_start = nd_mapping->start;
> >               hw_end = hw_start + nd_mapping->size;
> > -             pmem_start = __le64_to_cpu(nd_label->dpa);
> > -             pmem_end = pmem_start + __le64_to_cpu(nd_label->rawsize);
> > +             pmem_start = nsl_get_dpa(ndd, nd_label);
> > +             pmem_end = pmem_start + nsl_get_rawsize(ndd, nd_label);
> >               if (pmem_start >= hw_start && pmem_start < hw_end
> >                               && pmem_end <= hw_end && pmem_end > hw_start)
> >                       /* pass */;
> > @@ -1947,14 +1947,16 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
> >   * @nd_label: target pmem namespace label to evaluate
> >   */
> >  static struct device *create_namespace_pmem(struct nd_region *nd_region,
> > -             struct nd_namespace_index *nsindex,
> > -             struct nd_namespace_label *nd_label)
> > +                                         struct nd_mapping *nd_mapping,
> > +                                         struct nd_namespace_label *nd_label)
> >  {
> > +     struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
> > +     struct nd_namespace_index *nsindex =
> > +             to_namespace_index(ndd, ndd->ns_current);
> >       u64 cookie = nd_region_interleave_set_cookie(nd_region, nsindex);
> >       u64 altcookie = nd_region_interleave_set_altcookie(nd_region);
> >       struct nd_label_ent *label_ent;
> >       struct nd_namespace_pmem *nspm;
> > -     struct nd_mapping *nd_mapping;
> >       resource_size_t size = 0;
> >       struct resource *res;
> >       struct device *dev;
> > @@ -1966,10 +1968,10 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
> >               return ERR_PTR(-ENXIO);
> >       }
> >
> > -     if (__le64_to_cpu(nd_label->isetcookie) != cookie) {
> > +     if (nsl_get_isetcookie(ndd, nd_label) != cookie) {
> >               dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
> >                               nd_label->uuid);
> > -             if (__le64_to_cpu(nd_label->isetcookie) != altcookie)
> > +             if (nsl_get_isetcookie(ndd, nd_label) != altcookie)
> >                       return ERR_PTR(-EAGAIN);
> >
> >               dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
> > @@ -2037,16 +2039,16 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
> >                       continue;
> >               }
> >
> > -             size += __le64_to_cpu(label0->rawsize);
> > -             if (__le16_to_cpu(label0->position) != 0)
> > +             ndd = to_ndd(nd_mapping);
> > +             size += nsl_get_rawsize(ndd, label0);
> > +             if (nsl_get_position(ndd, label0) != 0)
> >                       continue;
> >               WARN_ON(nspm->alt_name || nspm->uuid);
> > -             nspm->alt_name = kmemdup((void __force *) label0->name,
> > -                             NSLABEL_NAME_LEN, GFP_KERNEL);
> > +             nspm->alt_name = kmemdup(nsl_ref_name(ndd, label0),
> > +                                      NSLABEL_NAME_LEN, GFP_KERNEL);
> >               nspm->uuid = kmemdup((void __force *) label0->uuid,
> >                               NSLABEL_UUID_LEN, GFP_KERNEL);
> > -             nspm->lbasize = __le64_to_cpu(label0->lbasize);
> > -             ndd = to_ndd(nd_mapping);
> > +             nspm->lbasize = nsl_get_lbasize(ndd, label0);
> >               if (namespace_label_has(ndd, abstraction_guid))
> >                       nspm->nsio.common.claim_class
> >                               = to_nvdimm_cclass(&label0->abstraction_guid);
> > @@ -2237,7 +2239,7 @@ static int add_namespace_resource(struct nd_region *nd_region,
> >               if (is_namespace_blk(devs[i])) {
> >                       res = nsblk_add_resource(nd_region, ndd,
> >                                       to_nd_namespace_blk(devs[i]),
> > -                                     __le64_to_cpu(nd_label->dpa));
> > +                                     nsl_get_dpa(ndd, nd_label));
> >                       if (!res)
> >                               return -ENXIO;
> >                       nd_dbg_dpa(nd_region, ndd, res, "%d assign\n", count);
> > @@ -2276,7 +2278,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
> >               if (nd_label->isetcookie != __cpu_to_le64(nd_set->cookie2)) {
> >                       dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n",
> >                                       nd_set->cookie2,
> > -                                     __le64_to_cpu(nd_label->isetcookie));
> > +                                     nsl_get_isetcookie(ndd, nd_label));
> >                       return ERR_PTR(-EAGAIN);
> >               }
> >       }
> > @@ -2288,7 +2290,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
> >       dev->type = &namespace_blk_device_type;
> >       dev->parent = &nd_region->dev;
> >       nsblk->id = -1;
> > -     nsblk->lbasize = __le64_to_cpu(nd_label->lbasize);
> > +     nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
> >       nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN,
> >                       GFP_KERNEL);
> >       if (namespace_label_has(ndd, abstraction_guid))
> > @@ -2296,15 +2298,14 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
> >                       = to_nvdimm_cclass(&nd_label->abstraction_guid);
> >       if (!nsblk->uuid)
> >               goto blk_err;
> > -     memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
> > +     nsl_get_name(ndd, nd_label, name);
> >       if (name[0]) {
> > -             nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN,
> > -                             GFP_KERNEL);
> > +             nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN, GFP_KERNEL);
> >               if (!nsblk->alt_name)
> >                       goto blk_err;
> >       }
> >       res = nsblk_add_resource(nd_region, ndd, nsblk,
> > -                     __le64_to_cpu(nd_label->dpa));
> > +                     nsl_get_dpa(ndd, nd_label));
> >       if (!res)
> >               goto blk_err;
> >       nd_dbg_dpa(nd_region, ndd, res, "%d: assign\n", count);
> > @@ -2345,6 +2346,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
> >       struct device *dev, **devs = NULL;
> >       struct nd_label_ent *label_ent, *e;
> >       struct nd_mapping *nd_mapping = &nd_region->mapping[0];
> > +     struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
> >       resource_size_t map_end = nd_mapping->start + nd_mapping->size - 1;
> >
> >       /* "safe" because create_namespace_pmem() might list_move() label_ent */
> > @@ -2355,7 +2357,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
> >
> >               if (!nd_label)
> >                       continue;
> > -             flags = __le32_to_cpu(nd_label->flags);
> > +             flags = nsl_get_flags(ndd, nd_label);
> >               if (is_nd_blk(&nd_region->dev)
> >                               == !!(flags & NSLABEL_FLAG_LOCAL))
> >                       /* pass, region matches label type */;
> > @@ -2363,9 +2365,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
> >                       continue;
> >
> >               /* skip labels that describe extents outside of the region */
> > -             if (__le64_to_cpu(nd_label->dpa) < nd_mapping->start ||
> > -                 __le64_to_cpu(nd_label->dpa) > map_end)
> > -                             continue;
> > +             if (nsl_get_dpa(ndd, nd_label) < nd_mapping->start ||
> > +                 nsl_get_dpa(ndd, nd_label) > map_end)
> > +                     continue;
> >
> >               i = add_namespace_resource(nd_region, nd_label, devs, count);
> >               if (i < 0)
> > @@ -2381,13 +2383,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
> >
> >               if (is_nd_blk(&nd_region->dev))
> >                       dev = create_namespace_blk(nd_region, nd_label, count);
> > -             else {
> > -                     struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
> > -                     struct nd_namespace_index *nsindex;
> > -
> > -                     nsindex = to_namespace_index(ndd, ndd->ns_current);
> > -                     dev = create_namespace_pmem(nd_region, nsindex, nd_label);
> > -             }
> > +             else
> > +                     dev = create_namespace_pmem(nd_region, nd_mapping,
> > +                                                 nd_label);
> >
> >               if (IS_ERR(dev)) {
> >                       switch (PTR_ERR(dev)) {
> > @@ -2570,7 +2568,7 @@ static int init_active_labels(struct nd_region *nd_region)
> >                               break;
> >                       label = nd_label_active(ndd, j);
> >                       if (test_bit(NDD_NOBLK, &nvdimm->flags)) {
> > -                             u32 flags = __le32_to_cpu(label->flags);
> > +                             u32 flags = nsl_get_flags(ndd, label);
> >
> >                               flags &= ~NSLABEL_FLAG_LOCAL;
> >                               label->flags = __cpu_to_le32(flags);
>
> Does it make sense to introduce nsl_set_bit(), nsl_clear_bit() or some such to
> avoid this swapping between endianess?

Maybe, but that would be an independent follow-on; it's not a common
operation, and CXL has no concept of BLK mode. It would just be a
cleanup for legacy code.


* Re: [PATCH 17/23] cxl/mbox: Add exclusive kernel command support
  2021-08-10 21:52     ` Dan Williams
@ 2021-08-10 22:06       ` Ben Widawsky
  2021-08-11  1:22         ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Ben Widawsky @ 2021-08-10 22:06 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma,
	Schofield, Alison, Weiny, Ira

On 21-08-10 14:52:18, Dan Williams wrote:
> On Tue, Aug 10, 2021 at 2:35 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
> >
> > On 21-08-09 15:29:18, Dan Williams wrote:
> > > The CXL_PMEM driver expects exclusive control of the label storage area
> > > space. Similar to the LIBNVDIMM expectation that the label storage area
> > > is only writable from userspace when the corresponding memory device is
> > > not active in any region, the expectation is the native CXL_PCI UAPI
> > > path is disabled while the cxl_nvdimm for a given cxl_memdev device is
> > > active in LIBNVDIMM.
> > >
> > > Add the ability to toggle the availability of a given command for the
> > > UAPI path. Use that new capability to shutdown changes to partitions and
> > > the label storage area while the cxl_nvdimm device is actively proxying
> > > commands for LIBNVDIMM.
> > >
> > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > > ---
> > >  drivers/cxl/core/mbox.c |    5 +++++
> > >  drivers/cxl/cxlmem.h    |    2 ++
> > >  drivers/cxl/pmem.c      |   35 +++++++++++++++++++++++++++++------
> > >  3 files changed, 36 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> > > index 23100231e246..f26962d7cb65 100644
> > > --- a/drivers/cxl/core/mbox.c
> > > +++ b/drivers/cxl/core/mbox.c
> > > @@ -409,6 +409,11 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
> > >               }
> > >       }
> > >
> > > +     if (test_bit(cmd->info.id, cxlm->exclusive_cmds)) {
> > > +             rc = -EBUSY;
> > > +             goto out;
> > > +     }
> > > +
> >
> > This breaks our current definition for cxl_raw_allow_all. All the test machinery
> 
> That's deliberate; this exclusion is outside of the raw policy. I
> don't think raw_allow_all should override kernel self protection of
> data structures, like labels, that it needs to maintain consistency.
> If userspace wants to use raw_allow_all to send LSA manipulation
> commands it must do so while the device is not active on the nvdimm
> side of the house. You'll see that:
> 
> ndctl disable-region all
> <mutate labels>
> ndctl enable-region all
> 
> ...is a common pattern from custom label update flows.
> 

I won't argue about raw_allow_all since we never documented its
debugfs meaning; my intention, though, was always to let userspace
trump the kernel, which is why we tainted.

Either way, could you please move the actual check to
cxl_validate_cmd_from_user() instead of handle...(). Validate is the main
function to determine whether a command is allowed to be sent on behalf of the
user.  I think just putting it next to the enabled cmd check would make a lot
more sense. And please add the EBUSY meaning to the kdocs.

> > for whether a command can be submitted was supposed to happen in
> > cxl_validate_cmd_from_user(). Various versions of the original patches made
> > cxl_mem_raw_command_allowed() grow more intelligence (ie. more than just the
> > opcode). I think this check belongs there with more intelligence.
> >
> > I don't love the EBUSY because it already had a meaning for concurrent use of
> > the mailbox, but I can't think of a better errno.
> 
> It's the existing errno that happens from nvdimm land when the kernel
> owns the label area, so it would be confusing to invent a new one for
> the same behavior now:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/nvdimm/bus.c#n1013
> 
> >
> > >       dev_dbg(dev,
> > >               "Submitting %s command for user\n"
> > >               "\topcode: %x\n"
> > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > > index df4f3636a999..f6cfe84a064c 100644
> > > --- a/drivers/cxl/cxlmem.h
> > > +++ b/drivers/cxl/cxlmem.h
> > > @@ -102,6 +102,7 @@ struct cxl_mbox_cmd {
> > >   * @mbox_mutex: Mutex to synchronize mailbox access.
> > >   * @firmware_version: Firmware version for the memory device.
> > >   * @enabled_cmds: Hardware commands found enabled in CEL.
> > > + * @exclusive_cmds: Commands that are kernel-internal only
> > >   * @pmem_range: Persistent memory capacity information.
> > >   * @ram_range: Volatile memory capacity information.
> > >   * @mbox_send: @dev specific transport for transmitting mailbox commands
> > > @@ -117,6 +118,7 @@ struct cxl_mem {
> > >       struct mutex mbox_mutex; /* Protects device mailbox and firmware */
> > >       char firmware_version[0x10];
> > >       DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
> > > +     DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
> > >
> > >       struct range pmem_range;
> > >       struct range ram_range;
> > > diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> > > index 9652c3ee41e7..11410df77444 100644
> > > --- a/drivers/cxl/pmem.c
> > > +++ b/drivers/cxl/pmem.c
> > > @@ -16,9 +16,23 @@
> > >   */
> > >  static struct workqueue_struct *cxl_pmem_wq;
> > >
> > > -static void unregister_nvdimm(void *nvdimm)
> > > +static void unregister_nvdimm(void *_cxl_nvd)
> > >  {
> > > -     nvdimm_delete(nvdimm);
> > > +     struct cxl_nvdimm *cxl_nvd = _cxl_nvd;
> > > +     struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > > +     struct cxl_mem *cxlm = cxlmd->cxlm;
> > > +     struct device *dev = &cxl_nvd->dev;
> > > +     struct nvdimm *nvdimm;
> > > +
> > > +     nvdimm = dev_get_drvdata(dev);
> > > +     if (nvdimm)
> > > +             nvdimm_delete(nvdimm);
> > > +
> > > +     mutex_lock(&cxlm->mbox_mutex);
> > > +     clear_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> > > +     clear_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> > > +     clear_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> > > +     mutex_unlock(&cxlm->mbox_mutex);
> > >  }
> > >
> > >  static int match_nvdimm_bridge(struct device *dev, const void *data)
> > > @@ -39,6 +53,8 @@ static struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
> > >  static int cxl_nvdimm_probe(struct device *dev)
> > >  {
> > >       struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> > > +     struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > > +     struct cxl_mem *cxlm = cxlmd->cxlm;
> > >       struct cxl_nvdimm_bridge *cxl_nvb;
> > >       unsigned long flags = 0;
> > >       struct nvdimm *nvdimm;
> > > @@ -52,17 +68,24 @@ static int cxl_nvdimm_probe(struct device *dev)
> > >       if (!cxl_nvb->nvdimm_bus)
> > >               goto out;
> > >
> > > +     mutex_lock(&cxlm->mbox_mutex);
> > > +     set_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> > > +     set_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> > > +     set_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> > > +     mutex_unlock(&cxlm->mbox_mutex);
> > > +
> >
> > What's the concurrency this lock is trying to protect against?
> 
> I can add a comment. It synchronizes against in-flight ioctl users to
> make sure that any requests have completed before the policy changes.
> I.e. do not allow userspace to race the nvdimm subsystem attaching to
> get a consistent state of the persistent memory configuration.
> 

Ah, so the expectation is that these bits will be set at times other than
probe/unregister()? I would have assumed an ioctl couldn't happen while
probe/unregister is in flight.

> >
> > >       set_bit(NDD_LABELING, &flags);
> > >       nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags, 0, 0,
> > >                              NULL);
> > > -     if (!nvdimm)
> > > -             goto out;
> > > -
> > > -     rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
> > > +     dev_set_drvdata(dev, nvdimm);
> > > +     rc = devm_add_action_or_reset(dev, unregister_nvdimm, cxl_nvd);
> > >  out:
> > >       device_unlock(&cxl_nvb->dev);
> > >       put_device(&cxl_nvb->dev);
> > >
> > > +     if (!nvdimm && rc == 0)
> > > +             rc = -ENOMEM;
> > > +
> > >       return rc;
> > >  }
> > >
> > >

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests
  2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
                   ` (22 preceding siblings ...)
  2021-08-09 22:29 ` [PATCH 23/23] tools/testing/cxl: Introduce a mock memory device + driver Dan Williams
@ 2021-08-10 22:10 ` Ben Widawsky
  2021-08-10 22:58   ` Dan Williams
  23 siblings, 1 reply; 61+ messages in thread
From: Ben Widawsky @ 2021-08-10 22:10 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, Andy Shevchenko, nvdimm, Jonathan.Cameron,
	vishal.l.verma, alison.schofield, ira.weiny

On 21-08-09 15:27:47, Dan Williams wrote:
> As mentioned in patch 20 in this series the response of upstream QEMU
> community to CXL device emulation has been underwhelming to date. Even
> if that picked up it still results in a situation where new driver
> features and new test capabilities for those features are split across
> multiple repositories.
> 
> The "nfit_test" approach of mocking up platform resources via an
> external test module continues to yield positive results catching
> regressions early and often. Repeat that success with a "cxl_test"
> module to inject custom crafted topologies and command responses into
> the CXL subsystem's sysfs and ioctl UAPIs.
> 
> The first target for cxl_test to verify is the integration of CXL with
> LIBNVDIMM and the new support for the CXL namespace label + region-label
> format. The first 11 patches introduce support for the new label format.
> 
> The next 9 patches rework the CXL PCI driver to move more common
> infrastructure into the core for the unit test environment to reuse. The
> largest change here is disconnecting the mailbox command processing
> infrastructure from the PCI specific transport. The unit test
> environment replaces the PCI transport with a custom backend with mocked
> responses to command requests.
> 
> Patch 20 introduces just enough mocked functionality for the cxl_acpi
> driver to load against cxl_test resources. Patch 21 fixes the first bug
> discovered by this framework, namely that HDM decoder target list maps
> were not being filled out.
> 
> Finally patches 22 and 23 introduce a cxl_test representation of memory
> expander devices. In this initial implementation these memory expander
> targets implement just enough command support to pass the basic driver
> init sequence and enable label command passthrough to LIBNVDIMM.
> 
> The topology of cxl_test includes:
> - (4) platform fixed memory windows. One each of a x1-volatile,
>   x4-volatile, x1-persistent, and x4-persistent.
> - (4) Host bridges each with (2) root ports
> - (8) CXL memory expanders, one for each root port
> - Each memory expander device supports the GET_SUPPORTED_LOGS, GET_LOG,
>   IDENTIFY, GET_LSA, and SET_LSA commands.
> 
> Going forward the expectation is that where possible new UAPI visible
> subsystem functionality comes with cxl_test emulation of the same.
> 
> The build process for cxl_test is:
> 
>     make M=tools/testing/cxl
>     make M=tools/testing/cxl modules_install
> 
> The implementation methodology of the test module is the same as
> nfit_test where the bulk of the emulation comes from replacing symbols
> that cxl_acpi and the cxl_core import with mocked implementation of
> those symbols. See the "--wrap=" lines in tools/testing/cxl/Kbuild. Some
> symbols need to be replaced, but are local to the modules like
> match_add_root_ports(). In those cases the local symbol is marked __weak
> with a strong implementation coming from tools/testing/cxl/. The goal
> being to be minimally invasive to production code paths.

I went through everything except the very last patch, which I'll try to get to
tomorrow when my brain is working a bit better. It looks fine to me overall. I'd
like it if we could remove the code duplication in the mock driver, but perhaps
that's the nature of the beast here.

> 
> ---
> 
> Dan Williams (23):
>       libnvdimm/labels: Introduce getters for namespace label fields
>       libnvdimm/labels: Add isetcookie validation helper
>       libnvdimm/labels: Introduce label setter helpers
>       libnvdimm/labels: Add a checksum calculation helper
>       libnvdimm/labels: Add blk isetcookie set / validation helpers
>       libnvdimm/labels: Add blk special cases for nlabel and position helpers
>       libnvdimm/labels: Add type-guid helpers
>       libnvdimm/labels: Add claim class helpers
>       libnvdimm/labels: Add address-abstraction uuid definitions
>       libnvdimm/labels: Add uuid helpers
>       libnvdimm/labels: Introduce CXL labels
>       cxl/pci: Make 'struct cxl_mem' device type generic
>       cxl/mbox: Introduce the mbox_send operation
>       cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core
>       cxl/pci: Use module_pci_driver
>       cxl/mbox: Convert 'enabled_cmds' to DECLARE_BITMAP
>       cxl/mbox: Add exclusive kernel command support
>       cxl/pmem: Translate NVDIMM label commands to CXL label commands
>       cxl/pmem: Add support for multiple nvdimm-bridge objects
>       tools/testing/cxl: Introduce a mocked-up CXL port hierarchy
>       cxl/bus: Populate the target list at decoder create
>       cxl/mbox: Move command definitions to common location
>       tools/testing/cxl: Introduce a mock memory device + driver
> 
> 
>  Documentation/driver-api/cxl/memory-devices.rst |    3 
>  drivers/cxl/acpi.c                              |   65 +
>  drivers/cxl/core/Makefile                       |    1 
>  drivers/cxl/core/bus.c                          |   69 +-
>  drivers/cxl/core/core.h                         |    8 
>  drivers/cxl/core/mbox.c                         |  796 +++++++++++++++++
>  drivers/cxl/core/memdev.c                       |   84 ++
>  drivers/cxl/core/pmem.c                         |   32 +
>  drivers/cxl/cxl.h                               |   35 -
>  drivers/cxl/cxlmem.h                            |  186 ++++
>  drivers/cxl/pci.c                               | 1053 +----------------------
>  drivers/cxl/pmem.c                              |  162 +++-
>  drivers/nvdimm/btt.c                            |   11 
>  drivers/nvdimm/btt.h                            |    4 
>  drivers/nvdimm/btt_devs.c                       |   12 
>  drivers/nvdimm/core.c                           |   40 -
>  drivers/nvdimm/label.c                          |  354 +++++---
>  drivers/nvdimm/label.h                          |   96 +-
>  drivers/nvdimm/namespace_devs.c                 |  194 ++--
>  drivers/nvdimm/nd-core.h                        |    5 
>  drivers/nvdimm/nd.h                             |  263 ++++++
>  drivers/nvdimm/pfn_devs.c                       |    2 
>  include/linux/nd.h                              |    4 
>  tools/testing/cxl/Kbuild                        |   29 +
>  tools/testing/cxl/mock_acpi.c                   |  105 ++
>  tools/testing/cxl/mock_pmem.c                   |   24 +
>  tools/testing/cxl/test/Kbuild                   |   10 
>  tools/testing/cxl/test/cxl.c                    |  587 +++++++++++++
>  tools/testing/cxl/test/mem.c                    |  255 ++++++
>  tools/testing/cxl/test/mock.c                   |  155 +++
>  tools/testing/cxl/test/mock.h                   |   27 +
>  31 files changed, 3234 insertions(+), 1437 deletions(-)
>  create mode 100644 drivers/cxl/core/mbox.c
>  create mode 100644 tools/testing/cxl/Kbuild
>  create mode 100644 tools/testing/cxl/mock_acpi.c
>  create mode 100644 tools/testing/cxl/mock_pmem.c
>  create mode 100644 tools/testing/cxl/test/Kbuild
>  create mode 100644 tools/testing/cxl/test/cxl.c
>  create mode 100644 tools/testing/cxl/test/mem.c
>  create mode 100644 tools/testing/cxl/test/mock.c
>  create mode 100644 tools/testing/cxl/test/mock.h
> 
> base-commit: 427832674f6e2413c21ca2271ec945a720608ff2
> 
> (cxl.git#pending as of August 9th, 2021)

* Re: [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy
  2021-08-10 21:57   ` Ben Widawsky
@ 2021-08-10 22:40     ` Dan Williams
  2021-08-11 15:18       ` Ben Widawsky
       [not found]       ` <xp0k4.l2r85dw1p7do@intel.com>
  0 siblings, 2 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-10 22:40 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma,
	Schofield, Alison, Weiny, Ira

On Tue, Aug 10, 2021 at 2:57 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> On 21-08-09 15:29:33, Dan Williams wrote:
> > Create an environment for CXL plumbing unit tests. Especially when it
> > comes to an algorithm for HDM Decoder (Host-managed Device Memory
> > Decoder) programming, the availability of an in-kernel-tree emulation
> > environment for CXL configuration complexity and corner cases speeds
> > development and deters regressions.
> >
> > The approach taken mirrors what was done for tools/testing/nvdimm/. I.e.
> > an external module, cxl_test.ko built out of the tools/testing/cxl/
> > directory, provides mock implementations of kernel APIs and kernel
> > objects to simulate a real world device hierarchy.
> >
> > One feedback for the tools/testing/nvdimm/ proposal was "why not do this
> > in QEMU?". In fact, the CXL development community has developed a QEMU
> > model for CXL [1]. However, there are a few blocking issues that keep
> > QEMU from being a tight fit for topology + provisioning unit tests:
> >
> > 1/ The QEMU community has yet to show interest in merging any of this
> >    support that has had patches on the list since November 2020. So,
> >    testing CXL to date involves building custom QEMU with out-of-tree
> >    patches.
> >
> > 2/ CXL mechanisms like cross-host-bridge interleave do not have a clear
> >    path to be emulated by QEMU without major infrastructure work. This
> >    is easier to achieve with the alloc_mock_res() approach taken in this
> >    patch to shortcut-define emulated system physical address ranges with
> >    interleave behavior.
>
> I just want to say that this was discussed on the mailing list, and I think
> there is a reasonable plan (albeit a lot of work). However, #1 is the true
> blocker IMHO.
>
> >
> > The QEMU enabling has been critical to get the driver off the ground,
> > and may still move forward, but it does not address the ongoing needs of
> > a regression testing environment and test driven development.
> >
>
> The really nice thing QEMU provides over this (assuming one implemented
> interleaving properly) is that it allows a programmatic (via command line) way
> to test an infinite set of topologies, configurations, and hotplug scenarios. I
> therefore disagree here: I think QEMU is a better theoretical vehicle for
> regression testing and test-driven development. However, my unfinished branch
> with no upstream interest in sight is problematic at best for the longer term.

The "infinite" is what I don't think QEMU will sign up to support.
There are going to be degenerate error-handling scenarios that we want
to test that QEMU will have no interest in supporting, because QEMU is
primarily targeted at faithfully emulating well-behaved hardware. At
the same time, cxl_test does not preclude QEMU support, which will
remain super useful. You will notice that the ndctl unit tests have
some tests that run against nfit_test and some that run against "real"
topologies, where the "real" stuff is usually the QEMU NVDIMM model. So
it's not "either/or"; it's "QEMU and cxl_test".

>
> I didn't look super closely, but I have one comment/question below. Otherwise,
> LGTM.
>
> > This patch adds an ACPI CXL Platform definition with emulated CXL
> > multi-ported host-bridges. A follow on patch adds emulated memory
> > expander devices.
> >
> > Link: https://lore.kernel.org/r/20210202005948.241655-1-ben.widawsky@intel.com [1]
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/cxl/acpi.c            |   52 +++-
> >  drivers/cxl/cxl.h             |    8 +
> >  tools/testing/cxl/Kbuild      |   27 ++
> >  tools/testing/cxl/mock_acpi.c |  105 ++++++++
> >  tools/testing/cxl/test/Kbuild |    6
> >  tools/testing/cxl/test/cxl.c  |  508 +++++++++++++++++++++++++++++++++++++++++
> >  tools/testing/cxl/test/mock.c |  155 +++++++++++++
> >  tools/testing/cxl/test/mock.h |   26 ++
> >  8 files changed, 866 insertions(+), 21 deletions(-)
> >  create mode 100644 tools/testing/cxl/Kbuild
> >  create mode 100644 tools/testing/cxl/mock_acpi.c
> >  create mode 100644 tools/testing/cxl/test/Kbuild
> >  create mode 100644 tools/testing/cxl/test/cxl.c
> >  create mode 100644 tools/testing/cxl/test/mock.c
> >  create mode 100644 tools/testing/cxl/test/mock.h
> >
> > diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> > index 8ae89273f58e..e0cd9df85ca5 100644
> > --- a/drivers/cxl/acpi.c
> > +++ b/drivers/cxl/acpi.c
> > @@ -182,15 +182,7 @@ static resource_size_t get_chbcr(struct acpi_cedt_chbs *chbs)
> >       return IS_ERR(chbs) ? CXL_RESOURCE_NONE : chbs->base;
> >  }
> >
> > -struct cxl_walk_context {
> > -     struct device *dev;
> > -     struct pci_bus *root;
> > -     struct cxl_port *port;
> > -     int error;
> > -     int count;
> > -};
> > -
> > -static int match_add_root_ports(struct pci_dev *pdev, void *data)
> > +__weak int match_add_root_ports(struct pci_dev *pdev, void *data)
> >  {
> >       struct cxl_walk_context *ctx = data;
> >       struct pci_bus *root_bus = ctx->root;
> > @@ -214,6 +206,8 @@ static int match_add_root_ports(struct pci_dev *pdev, void *data)
> >       port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
> >       rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
> >       if (rc) {
> > +             dev_err(dev, "failed to add dport: %s (%d)\n",
> > +                     dev_name(&pdev->dev), rc);
> >               ctx->error = rc;
> >               return rc;
> >       }
> > @@ -239,12 +233,15 @@ static struct cxl_dport *find_dport_by_dev(struct cxl_port *port, struct device
> >       return NULL;
> >  }
> >
> > -static struct acpi_device *to_cxl_host_bridge(struct device *dev)
> > +__weak struct acpi_device *to_cxl_host_bridge(struct device *host,
> > +                                           struct device *dev)
> >  {
> >       struct acpi_device *adev = to_acpi_device(dev);
> >
> > -     if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0)
> > +     if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0) {
> > +             dev_dbg(host, "found host bridge %s\n", dev_name(&adev->dev));
> >               return adev;
> > +     }
> >       return NULL;
> >  }
> >
> > @@ -254,14 +251,14 @@ static struct acpi_device *to_cxl_host_bridge(struct device *dev)
> >   */
> >  static int add_host_bridge_uport(struct device *match, void *arg)
> >  {
> > -     struct acpi_device *bridge = to_cxl_host_bridge(match);
> > +     struct cxl_port *port;
> > +     struct cxl_dport *dport;
> > +     struct cxl_decoder *cxld;
> > +     struct cxl_walk_context ctx;
> > +     struct acpi_pci_root *pci_root;
> >       struct cxl_port *root_port = arg;
> >       struct device *host = root_port->dev.parent;
> > -     struct acpi_pci_root *pci_root;
> > -     struct cxl_walk_context ctx;
> > -     struct cxl_decoder *cxld;
> > -     struct cxl_dport *dport;
> > -     struct cxl_port *port;
> > +     struct acpi_device *bridge = to_cxl_host_bridge(host, match);
> >
> >       if (!bridge)
> >               return 0;
> > @@ -319,7 +316,7 @@ static int add_host_bridge_dport(struct device *match, void *arg)
> >       struct acpi_cedt_chbs *chbs;
> >       struct cxl_port *root_port = arg;
> >       struct device *host = root_port->dev.parent;
> > -     struct acpi_device *bridge = to_cxl_host_bridge(match);
> > +     struct acpi_device *bridge = to_cxl_host_bridge(host, match);
> >
> >       if (!bridge)
> >               return 0;
> > @@ -371,6 +368,17 @@ static int add_root_nvdimm_bridge(struct device *match, void *data)
> >       return 1;
> >  }
> >
> > +static u32 cedt_instance(struct platform_device *pdev)
> > +{
> > +     const bool *native_acpi0017 = acpi_device_get_match_data(&pdev->dev);
> > +
> > +     if (native_acpi0017 && *native_acpi0017)
> > +             return 0;
> > +
> > +     /* for cxl_test request a non-canonical instance */
> > +     return U32_MAX;
> > +}
> > +
> >  static int cxl_acpi_probe(struct platform_device *pdev)
> >  {
> >       int rc;
> > @@ -384,7 +392,7 @@ static int cxl_acpi_probe(struct platform_device *pdev)
> >               return PTR_ERR(root_port);
> >       dev_dbg(host, "add: %s\n", dev_name(&root_port->dev));
> >
> > -     status = acpi_get_table(ACPI_SIG_CEDT, 0, &acpi_cedt);
> > +     status = acpi_get_table(ACPI_SIG_CEDT, cedt_instance(pdev), &acpi_cedt);
> >       if (ACPI_FAILURE(status))
> >               return -ENXIO;
> >
> > @@ -415,9 +423,11 @@ static int cxl_acpi_probe(struct platform_device *pdev)
> >       return 0;
> >  }
> >
> > +static bool native_acpi0017 = true;
> > +
> >  static const struct acpi_device_id cxl_acpi_ids[] = {
> > -     { "ACPI0017", 0 },
> > -     { "", 0 },
> > +     { "ACPI0017", (unsigned long) &native_acpi0017 },
> > +     { },
> >  };
> >  MODULE_DEVICE_TABLE(acpi, cxl_acpi_ids);
> >
> > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> > index 1b2e816e061e..09c81cf8b800 100644
> > --- a/drivers/cxl/cxl.h
> > +++ b/drivers/cxl/cxl.h
> > @@ -226,6 +226,14 @@ struct cxl_nvdimm {
> >       struct nvdimm *nvdimm;
> >  };
> >
> > +struct cxl_walk_context {
> > +     struct device *dev;
> > +     struct pci_bus *root;
> > +     struct cxl_port *port;
> > +     int error;
> > +     int count;
> > +};
> > +
> >  /**
> >   * struct cxl_port - logical collection of upstream port devices and
> >   *                downstream port devices to construct a CXL memory
> > diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
> > new file mode 100644
> > index 000000000000..6ea0c7df36f0
> > --- /dev/null
> > +++ b/tools/testing/cxl/Kbuild
> > @@ -0,0 +1,27 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +ldflags-y += --wrap=is_acpi_device_node
> > +ldflags-y += --wrap=acpi_get_table
> > +ldflags-y += --wrap=acpi_put_table
> > +ldflags-y += --wrap=acpi_evaluate_integer
> > +ldflags-y += --wrap=acpi_pci_find_root
> > +ldflags-y += --wrap=pci_walk_bus
> > +
> > +DRIVERS := ../../../drivers
> > +CXL_SRC := $(DRIVERS)/cxl
> > +CXL_CORE_SRC := $(DRIVERS)/cxl/core
> > +ccflags-y := -I$(srctree)/drivers/cxl/
> > +
> > +obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
> > +
> > +cxl_acpi-y := $(CXL_SRC)/acpi.o
> > +cxl_acpi-y += mock_acpi.o
> > +
> > +obj-$(CONFIG_CXL_BUS) += cxl_core.o
> > +
> > +cxl_core-y := $(CXL_CORE_SRC)/bus.o
> > +cxl_core-y += $(CXL_CORE_SRC)/pmem.o
> > +cxl_core-y += $(CXL_CORE_SRC)/regs.o
> > +cxl_core-y += $(CXL_CORE_SRC)/memdev.o
> > +cxl_core-y += $(CXL_CORE_SRC)/mbox.o
> > +
> > +obj-m += test/
> > diff --git a/tools/testing/cxl/mock_acpi.c b/tools/testing/cxl/mock_acpi.c
> > new file mode 100644
> > index 000000000000..256bdf9e1ce8
> > --- /dev/null
> > +++ b/tools/testing/cxl/mock_acpi.c
> > @@ -0,0 +1,105 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
> > +
> > +#include <linux/platform_device.h>
> > +#include <linux/device.h>
> > +#include <linux/acpi.h>
> > +#include <linux/pci.h>
> > +#include <cxl.h>
> > +#include "test/mock.h"
> > +
> > +struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev)
> > +{
> > +     int index;
> > +     struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> > +     struct acpi_device *adev = NULL;
> > +
> > +     if (ops && ops->is_mock_bridge(dev)) {
> > +             adev = ACPI_COMPANION(dev);
> > +             goto out;
> > +     }
>
> Here, and below ops->is_mock_port()... I'm a bit confused why a mock driver
> would ever attempt to do anything with real hardware. ie, why not

The rationale is to be able to run cxl_test on a system that might
also have real CXL. For example I run this alongside the current QEMU
CXL model, and that results in the cxl_acpi driver attaching to 2
devices:

# tree /sys/bus/platform/drivers/cxl_acpi
/sys/bus/platform/drivers/cxl_acpi
├── ACPI0017:00 -> ../../../../devices/platform/ACPI0017:00
├── bind
├── cxl_acpi.0 -> ../../../../devices/platform/cxl_acpi.0
├── module -> ../../../../module/cxl_acpi
├── uevent
└── unbind

When the device is ACPI0017 this code is walking the ACPI bus looking
for ACPI0016 devices. A real ACPI0016 will fall through
is_mock_port() to the original to_cxl_host_bridge() logic that just
reads the ACPI device HID. In the mock case the cxl_acpi driver has
instead been tricked into walking the platform bus, which has real
platform devices as well as the fake cxl_test ones:

/sys/bus/platform/devices/
├── ACPI0012:00 -> ../../../devices/platform/ACPI0012:00
├── ACPI0017:00 -> ../../../devices/platform/ACPI0017:00
├── alarmtimer.0.auto -> ../../../devices/pnp0/00:04/rtc/rtc0/alarmtimer.0.auto
├── cxl_acpi.0 -> ../../../devices/platform/cxl_acpi.0
├── cxl_host_bridge.0 -> ../../../devices/platform/cxl_host_bridge.0
├── cxl_host_bridge.1 -> ../../../devices/platform/cxl_host_bridge.1
├── cxl_host_bridge.2 -> ../../../devices/platform/cxl_host_bridge.2
├── cxl_host_bridge.3 -> ../../../devices/platform/cxl_host_bridge.3
├── e820_pmem -> ../../../devices/platform/e820_pmem
├── efi-framebuffer.0 -> ../../../devices/platform/efi-framebuffer.0
├── efivars.0 -> ../../../devices/platform/efivars.0
├── Fixed MDIO bus.0 -> ../../../devices/platform/Fixed MDIO bus.0
├── i8042 -> ../../../devices/platform/i8042
├── iTCO_wdt.1.auto -> ../../../devices/pci0000:00/0000:00:1f.0/iTCO_wdt.1.auto
├── kgdboc -> ../../../devices/platform/kgdboc
├── pcspkr -> ../../../devices/platform/pcspkr
├── PNP0103:00 -> ../../../devices/platform/PNP0103:00
├── QEMU0002:00 -> ../../../devices/pci0000:00/QEMU0002:00
├── rtc-efi.0 -> ../../../devices/platform/rtc-efi.0
└── serial8250 -> ../../../devices/platform/serial8250

...where is_mock_port() filters out those real platform devices. Note
that ACPI devices are atypical in that they get registered on the ACPI
bus and some get a companion device with the same name registered on
the platform bus.

* Re: [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests
  2021-08-10 22:10 ` [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Ben Widawsky
@ 2021-08-10 22:58   ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-10 22:58 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, Andy Shevchenko, Linux NVDIMM, Jonathan Cameron,
	Vishal L Verma, Schofield, Alison, Weiny, Ira

On Tue, Aug 10, 2021 at 3:10 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> On 21-08-09 15:27:47, Dan Williams wrote:
> > As mentioned in patch 20 in this series the response of upstream QEMU
> > community to CXL device emulation has been underwhelming to date. Even
> > if that picked up it still results in a situation where new driver
> > features and new test capabilities for those features are split across
> > multiple repositories.
> >
> > The "nfit_test" approach of mocking up platform resources via an
> > external test module continues to yield positive results catching
> > regressions early and often. Repeat that success with a "cxl_test"
> > module to inject custom crafted topologies and command responses into
> > the CXL subsystem's sysfs and ioctl UAPIs.
> >
> > The first target for cxl_test to verify is the integration of CXL with
> > LIBNVDIMM and the new support for the CXL namespace label + region-label
> > format. The first 11 patches introduce support for the new label format.
> >
> > The next 9 patches rework the CXL PCI driver to move more common
> > infrastructure into the core for the unit test environment to reuse. The
> > largest change here is disconnecting the mailbox command processing
> > infrastructure from the PCI specific transport. The unit test
> > environment replaces the PCI transport with a custom backend with mocked
> > responses to command requests.
> >
> > Patch 20 introduces just enough mocked functionality for the cxl_acpi
> > driver to load against cxl_test resources. Patch 21 fixes the first bug
> > discovered by this framework, namely that HDM decoder target list maps
> > were not being filled out.
> >
> > Finally patches 22 and 23 introduce a cxl_test representation of memory
> > expander devices. In this initial implementation these memory expander
> > targets implement just enough command support to pass the basic driver
> > init sequence and enable label command passthrough to LIBNVDIMM.
> >
> > The topology of cxl_test includes:
> > - (4) platform fixed memory windows. One each of a x1-volatile,
> >   x4-volatile, x1-persistent, and x4-persistent.
> > - (4) Host bridges each with (2) root ports
> > - (8) CXL memory expanders, one for each root port
> > - Each memory expander device supports the GET_SUPPORTED_LOGS, GET_LOG,
> >   IDENTIFY, GET_LSA, and SET_LSA commands.
> >
> > Going forward the expectation is that where possible new UAPI visible
> > subsystem functionality comes with cxl_test emulation of the same.
> >
> > The build process for cxl_test is:
> >
> >     make M=tools/testing/cxl
> >     make M=tools/testing/cxl modules_install
> >
> > The implementation methodology of the test module is the same as
> > nfit_test where the bulk of the emulation comes from replacing symbols
> > that cxl_acpi and the cxl_core import with mocked implementation of
> > those symbols. See the "--wrap=" lines in tools/testing/cxl/Kbuild. Some
> > symbols need to be replaced, but are local to the modules like
> > match_add_root_ports(). In those cases the local symbol is marked __weak
> > with a strong implementation coming from tools/testing/cxl/. The goal
> > being to be minimally invasive to production code paths.
>
> I went through everything except the very last patch, which I'll try to get to
> tomorrow when my brain is working a bit better. It looks fine to me overall. I'd
> like it if we could remove the code duplication in the mock driver, but perhaps
> that's the nature of the beast here.

Well, maybe not. I.e. I don't think it would be out of the question to
wrap this common sequence into a helper that both cxl_pci and
cxl_mock_mem share:

        rc = cxl_mem_enumerate_cmds(cxlm);
        if (rc)
                return rc;

        rc = cxl_mem_identify(cxlm);
        if (rc)
                return rc;

        rc = cxl_mem_create_range_info(cxlm);
        if (rc)
                return rc;

        cxlmd = devm_cxl_add_memdev(dev, cxlm);
        if (IS_ERR(cxlmd))
                return PTR_ERR(cxlmd);

        if (range_len(&cxlm->pmem_range) && IS_ENABLED(CONFIG_CXL_PMEM))
                rc = devm_cxl_add_nvdimm(dev, cxlmd);

...or are you thinking of a different place where there's duplication?

* Re: [PATCH 17/23] cxl/mbox: Add exclusive kernel command support
  2021-08-10 22:06       ` Ben Widawsky
@ 2021-08-11  1:22         ` Dan Williams
  2021-08-11  2:14           ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-11  1:22 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma,
	Schofield, Alison, Weiny, Ira

On Tue, Aug 10, 2021 at 3:07 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> On 21-08-10 14:52:18, Dan Williams wrote:
> > On Tue, Aug 10, 2021 at 2:35 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
> > >
> > > On 21-08-09 15:29:18, Dan Williams wrote:
> > > > The CXL_PMEM driver expects exclusive control of the label storage area
> > > > space. Similar to the LIBNVDIMM expectation that the label storage area
> > > > is only writable from userspace when the corresponding memory device is
> > > > not active in any region, the expectation is the native CXL_PCI UAPI
> > > > path is disabled while the cxl_nvdimm for a given cxl_memdev device is
> > > > active in LIBNVDIMM.
> > > >
> > > > Add the ability to toggle the availability of a given command for the
> > > > UAPI path. Use that new capability to shutdown changes to partitions and
> > > > the label storage area while the cxl_nvdimm device is actively proxying
> > > > commands for LIBNVDIMM.
> > > >
> > > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > > > ---
> > > >  drivers/cxl/core/mbox.c |    5 +++++
> > > >  drivers/cxl/cxlmem.h    |    2 ++
> > > >  drivers/cxl/pmem.c      |   35 +++++++++++++++++++++++++++++------
> > > >  3 files changed, 36 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> > > > index 23100231e246..f26962d7cb65 100644
> > > > --- a/drivers/cxl/core/mbox.c
> > > > +++ b/drivers/cxl/core/mbox.c
> > > > @@ -409,6 +409,11 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
> > > >               }
> > > >       }
> > > >
> > > > +     if (test_bit(cmd->info.id, cxlm->exclusive_cmds)) {
> > > > +             rc = -EBUSY;
> > > > +             goto out;
> > > > +     }
> > > > +
> > >
> > > This breaks our current definition for cxl_raw_allow_all. All the test machinery
> >
> > That's deliberate; this exclusion is outside of the raw policy. I
> > don't think raw_allow_all should override kernel self protection of
> > data structures, like labels, that it needs to maintain consistency.
> > If userspace wants to use raw_allow_all to send LSA manipulation
> > commands it must do so while the device is not active on the nvdimm
> > side of the house. You'll see that:
> >
> > ndctl disable-region all
> > <mutate labels>
> > ndctl enable-region all
> >
> > ...is a common pattern from custom label update flows.
> >
>
> I won't argue about raw_allow_all since we never did document its debugfs
> meaning (however, my intention was always to let userspace trump the kernel
> (which was why we tainted)).

Yeah, we should document it, because the taint in my mind was for the
possibility of passing commands completely unknown to the kernel. If
someone really wants to subvert the kernel's label area coherency they
could simply have a vendor specific command that writes the labels.
Instead, if the kernel knows the opcode it is free to apply policy to
it as it sees fit, and if the opcode is unknown to the kernel then
raw_allow_all policy lets it through. We already have security
commands as another case of opcode that the kernel knows about and
thinks is a good idea to block. This is a dynamic version of the same.

> Either way, could you please move the actual check to
> cxl_validate_cmd_from_user() instead of handle...(). Validate is the main
> function to determine whether a command is allowed to be sent on behalf of the
> user.  I think just putting it next to the enabled cmd check would make a lot
> more sense. And please add the EBUSY meaning to the kdocs.

Sure, sounds good.

>
> > > for whether a command can be submitted was supposed to happen in
> > > cxl_validate_cmd_from_user(). Various versions of the original patches made
> > > cxl_mem_raw_command_allowed() grow more intelligence (ie. more than just the
> > > opcode). I think this check belongs there with more intelligence.
> > >
> > > I don't love the EBUSY because it already had a meaning for concurrent use of
> > > the mailbox, but I can't think of a better errno.
> >
> > It's the existing errno that happens from nvdimm land when the kernel
> > owns the label area, so it would be confusing to invent a new one for
> > the same behavior now:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/nvdimm/bus.c#n1013
> >
> > >
> > > >       dev_dbg(dev,
> > > >               "Submitting %s command for user\n"
> > > >               "\topcode: %x\n"
> > > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > > > index df4f3636a999..f6cfe84a064c 100644
> > > > --- a/drivers/cxl/cxlmem.h
> > > > +++ b/drivers/cxl/cxlmem.h
> > > > @@ -102,6 +102,7 @@ struct cxl_mbox_cmd {
> > > >   * @mbox_mutex: Mutex to synchronize mailbox access.
> > > >   * @firmware_version: Firmware version for the memory device.
> > > >   * @enabled_cmds: Hardware commands found enabled in CEL.
> > > > + * @exclusive_cmds: Commands that are kernel-internal only
> > > >   * @pmem_range: Persistent memory capacity information.
> > > >   * @ram_range: Volatile memory capacity information.
> > > >   * @mbox_send: @dev specific transport for transmitting mailbox commands
> > > > @@ -117,6 +118,7 @@ struct cxl_mem {
> > > >       struct mutex mbox_mutex; /* Protects device mailbox and firmware */
> > > >       char firmware_version[0x10];
> > > >       DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
> > > > +     DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
> > > >
> > > >       struct range pmem_range;
> > > >       struct range ram_range;
> > > > diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> > > > index 9652c3ee41e7..11410df77444 100644
> > > > --- a/drivers/cxl/pmem.c
> > > > +++ b/drivers/cxl/pmem.c
> > > > @@ -16,9 +16,23 @@
> > > >   */
> > > >  static struct workqueue_struct *cxl_pmem_wq;
> > > >
> > > > -static void unregister_nvdimm(void *nvdimm)
> > > > +static void unregister_nvdimm(void *_cxl_nvd)
> > > >  {
> > > > -     nvdimm_delete(nvdimm);
> > > > +     struct cxl_nvdimm *cxl_nvd = _cxl_nvd;
> > > > +     struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > > > +     struct cxl_mem *cxlm = cxlmd->cxlm;
> > > > +     struct device *dev = &cxl_nvd->dev;
> > > > +     struct nvdimm *nvdimm;
> > > > +
> > > > +     nvdimm = dev_get_drvdata(dev);
> > > > +     if (nvdimm)
> > > > +             nvdimm_delete(nvdimm);
> > > > +
> > > > +     mutex_lock(&cxlm->mbox_mutex);
> > > > +     clear_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> > > > +     clear_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> > > > +     clear_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> > > > +     mutex_unlock(&cxlm->mbox_mutex);
> > > >  }
> > > >
> > > >  static int match_nvdimm_bridge(struct device *dev, const void *data)
> > > > @@ -39,6 +53,8 @@ static struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
> > > >  static int cxl_nvdimm_probe(struct device *dev)
> > > >  {
> > > >       struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> > > > +     struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > > > +     struct cxl_mem *cxlm = cxlmd->cxlm;
> > > >       struct cxl_nvdimm_bridge *cxl_nvb;
> > > >       unsigned long flags = 0;
> > > >       struct nvdimm *nvdimm;
> > > > @@ -52,17 +68,24 @@ static int cxl_nvdimm_probe(struct device *dev)
> > > >       if (!cxl_nvb->nvdimm_bus)
> > > >               goto out;
> > > >
> > > > +     mutex_lock(&cxlm->mbox_mutex);
> > > > +     set_bit(CXL_MEM_COMMAND_ID_SET_PARTITION_INFO, cxlm->exclusive_cmds);
> > > > +     set_bit(CXL_MEM_COMMAND_ID_SET_SHUTDOWN_STATE, cxlm->exclusive_cmds);
> > > > +     set_bit(CXL_MEM_COMMAND_ID_SET_LSA, cxlm->exclusive_cmds);
> > > > +     mutex_unlock(&cxlm->mbox_mutex);
> > > > +
> > >
> > > What's the concurrency this lock is trying to protect against?
> >
> > I can add a comment. It synchronizes against in-flight ioctl users to
> > make sure that any requests have completed before the policy changes.
> > I.e. do not allow userspace to race the nvdimm subsystem attaching to
> > get a consistent state of the persistent memory configuration.
> >
>
> Ah, so the expectation is that these things will be set not just on
> probe/unregister()? I would assume an IOCTL couldn't happen while
> probe/unregister is happening.

The ioctl is going through the cxl_pci driver. That driver has
finished probe and published the ioctl before this lockout can run in
cxl_nvdimm_probe(), so it's entirely possible that label writing
ioctls are in progress when cxl_nvdimm_probe() eventually fires.

The current policy for /sys/bus/nd/devices/nmemX devices is that
label writes are allowed as long as the nmemX device is not active in
any region. I was thinking the CXL policy is coarser. Label writes via
/sys/bus/cxl/devices/memX ioctls are disallowed as long as the bridge
for that device into the nvdimm subsystem is active.


* Re: [PATCH 17/23] cxl/mbox: Add exclusive kernel command support
  2021-08-11  1:22         ` Dan Williams
@ 2021-08-11  2:14           ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-11  2:14 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma,
	Schofield, Alison, Weiny, Ira

On Tue, Aug 10, 2021 at 6:22 PM Dan Williams <dan.j.williams@intel.com> wrote:
[..]
> > > > What's the concurrency this lock is trying to protect against?
> > >
> > > I can add a comment. It synchronizes against in-flight ioctl users to
> > > make sure that any requests have completed before the policy changes.
> > > I.e. do not allow userspace to race the nvdimm subsystem attaching to
> > > get a consistent state of the persistent memory configuration.
> > >
> >
> > Ah, so the expectation is that these things will be set not just on
> > probe/unregister()? I would assume an IOCTL couldn't happen while
> > probe/unregister is happening.
>
> The ioctl is going through the cxl_pci driver. That driver has
> finished probe and published the ioctl before this lockout can run in
> cxl_nvdimm_probe(), so it's entirely possible that label writing
> ioctls are in progress when cxl_nvdimm_probe() eventually fires.
>
> The current policy for /sys/bus/nd/devices/nmemX devices is that
> label writes are allowed as long as the nmemX device is not active in
> any region. I was thinking the CXL policy is coarser. Label writes via
> /sys/bus/cxl/devices/memX ioctls are disallowed as long as the bridge
> for that device into the nvdimm subsystem is active.

Oh, whoops, the mbox_mutex is not taken until we're deep inside
mbox_send. So this synchronization needs to move to the cxl_memdev
rwsem. Thanks for the nudge, I missed that.


* [PATCH v2 14/23] cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core
  2021-08-09 22:29 ` [PATCH 14/23] cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core Dan Williams
@ 2021-08-11  6:11   ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-11  6:11 UTC (permalink / raw)
  To: linux-cxl
  Cc: nvdimm, Jonathan.Cameron, ira.weiny, ben.widawsky,
	vishal.l.verma, alison.schofield

Now that the internals of mailbox operations are abstracted from the PCI
specifics, the bulk of the infrastructure can move to the core.

The CXL_PMEM driver intends to proxy LIBNVDIMM UAPI and driver requests
to the equivalent functionality provided by the CXL hardware mailbox
interface. In support of that intent, move the mailbox implementation to
a shared location for the CXL_PCI driver native IOCTL path and CXL_PMEM
nvdimm command proxy path to share.

A unit test framework seeks to implement a unit test backend transport
for mailbox commands to communicate mocked up payloads. It can reuse all
of the mailbox infrastructure minus the PCI specifics, so that also gets
moved to the core.

Finally, with the mailbox infrastructure and ioctl handling being
transport generic, there is no longer any need to pass
file_operations to devm_cxl_add_memdev(). That allows all the ioctl
boilerplate to move into the core for unit test reuse.

No functional change intended, just code movement.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
Changes since v1:
- Rebased on the new partition info payload definition with proper
  endian types.

 Documentation/driver-api/cxl/memory-devices.rst |    3 
 drivers/cxl/core/Makefile                       |    1 
 drivers/cxl/core/bus.c                          |    4 
 drivers/cxl/core/core.h                         |    8 
 drivers/cxl/core/mbox.c                         |  832 ++++++++++++++++++++
 drivers/cxl/core/memdev.c                       |   81 ++
 drivers/cxl/cxlmem.h                            |   81 ++
 drivers/cxl/pci.c                               |  941 -----------------------
 8 files changed, 985 insertions(+), 966 deletions(-)
 create mode 100644 drivers/cxl/core/mbox.c

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index 46847d8c70a0..356f70d28316 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -45,6 +45,9 @@ CXL Core
 .. kernel-doc:: drivers/cxl/core/regs.c
    :internal:
 
+.. kernel-doc:: drivers/cxl/core/mbox.c
+   :doc: cxl mbox
+
 External Interfaces
 ===================
 
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 0fdbf3c6ac1a..07eb8e1fb8a6 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -6,3 +6,4 @@ cxl_core-y := bus.o
 cxl_core-y += pmem.o
 cxl_core-y += regs.o
 cxl_core-y += memdev.o
+cxl_core-y += mbox.o
diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 37b87adaa33f..8073354ba232 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -636,6 +636,8 @@ static __init int cxl_core_init(void)
 {
 	int rc;
 
+	cxl_mbox_init();
+
 	rc = cxl_memdev_init();
 	if (rc)
 		return rc;
@@ -647,6 +649,7 @@ static __init int cxl_core_init(void)
 
 err:
 	cxl_memdev_exit();
+	cxl_mbox_exit();
 	return rc;
 }
 
@@ -654,6 +657,7 @@ static void cxl_core_exit(void)
 {
 	bus_unregister(&cxl_bus_type);
 	cxl_memdev_exit();
+	cxl_mbox_exit();
 }
 
 module_init(cxl_core_init);
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 036a3c8106b4..c85b7fbad02d 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -14,7 +14,15 @@ static inline void unregister_cxl_dev(void *dev)
 	device_unregister(dev);
 }
 
+struct cxl_send_command;
+struct cxl_mem_query_commands;
+int cxl_query_cmd(struct cxl_memdev *cxlmd,
+		  struct cxl_mem_query_commands __user *q);
+int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s);
+
 int cxl_memdev_init(void);
 void cxl_memdev_exit(void);
+void cxl_mbox_init(void);
+void cxl_mbox_exit(void);
 
 #endif /* __CXL_CORE_H__ */
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
new file mode 100644
index 000000000000..bbc49f245ac9
--- /dev/null
+++ b/drivers/cxl/core/mbox.c
@@ -0,0 +1,832 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/security.h>
+#include <linux/debugfs.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <cxlmem.h>
+#include <cxl.h>
+
+static bool cxl_raw_allow_all;
+
+/**
+ * DOC: cxl mbox
+ *
+ * Core implementation of the CXL 2.0 Type-3 Memory Device Mailbox. The
+ * implementation is used by the cxl_pci driver to initialize the device
+ * and implement the cxl_mem.h IOCTL UAPI. It also implements the
+ * backend of the cxl_pmem_ctl() transport for LIBNVDIMM.
+ *
+ */
+
+#define cxl_for_each_cmd(cmd)                                                  \
+	for ((cmd) = &cxl_mem_commands[0];                                     \
+	     ((cmd)-cxl_mem_commands) < ARRAY_SIZE(cxl_mem_commands); (cmd)++)
+
+#define cxl_doorbell_busy(cxlm)                                                \
+	(readl((cxlm)->regs.mbox + CXLDEV_MBOX_CTRL_OFFSET) &                  \
+	 CXLDEV_MBOX_CTRL_DOORBELL)
+
+/* CXL 2.0 - 8.2.8.4 */
+#define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
+
+#define CXL_CMD(_id, sin, sout, _flags)                                        \
+	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
+	.info =	{                                                              \
+			.id = CXL_MEM_COMMAND_ID_##_id,                        \
+			.size_in = sin,                                        \
+			.size_out = sout,                                      \
+		},                                                             \
+	.opcode = CXL_MBOX_OP_##_id,                                           \
+	.flags = _flags,                                                       \
+	}
+
+/*
+ * This table defines the supported mailbox commands for the driver. This table
+ * is made up of a UAPI structure. Non-negative values as parameters in the
+ * table will be validated against the user's input. For example, if size_in is
+ * 0, and the user passed in 1, it is an error.
+ */
+static struct cxl_mem_command cxl_mem_commands[CXL_MEM_COMMAND_ID_MAX] = {
+	CXL_CMD(IDENTIFY, 0, 0x43, CXL_CMD_FLAG_FORCE_ENABLE),
+#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
+	CXL_CMD(RAW, ~0, ~0, 0),
+#endif
+	CXL_CMD(GET_SUPPORTED_LOGS, 0, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
+	CXL_CMD(GET_FW_INFO, 0, 0x50, 0),
+	CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0),
+	CXL_CMD(GET_LSA, 0x8, ~0, 0),
+	CXL_CMD(GET_HEALTH_INFO, 0, 0x12, 0),
+	CXL_CMD(GET_LOG, 0x18, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
+	CXL_CMD(SET_PARTITION_INFO, 0x0a, 0, 0),
+	CXL_CMD(SET_LSA, ~0, 0, 0),
+	CXL_CMD(GET_ALERT_CONFIG, 0, 0x10, 0),
+	CXL_CMD(SET_ALERT_CONFIG, 0xc, 0, 0),
+	CXL_CMD(GET_SHUTDOWN_STATE, 0, 0x1, 0),
+	CXL_CMD(SET_SHUTDOWN_STATE, 0x1, 0, 0),
+	CXL_CMD(GET_POISON, 0x10, ~0, 0),
+	CXL_CMD(INJECT_POISON, 0x8, 0, 0),
+	CXL_CMD(CLEAR_POISON, 0x48, 0, 0),
+	CXL_CMD(GET_SCAN_MEDIA_CAPS, 0x10, 0x4, 0),
+	CXL_CMD(SCAN_MEDIA, 0x11, 0, 0),
+	CXL_CMD(GET_SCAN_MEDIA, 0, ~0, 0),
+};
+
+/*
+ * Commands that RAW doesn't permit. The rationale for each:
+ *
+ * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
+ * coordination of transaction timeout values at the root bridge level.
+ *
+ * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
+ * and needs to be coordinated with HDM updates.
+ *
+ * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
+ * driver and any writes from userspace invalidates those contents.
+ *
+ * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
+ * to the device after it is marked clean, userspace can not make that
+ * assertion.
+ *
+ * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
+ * is kept up to date with patrol notifications and error management.
+ */
+static u16 cxl_disabled_raw_commands[] = {
+	CXL_MBOX_OP_ACTIVATE_FW,
+	CXL_MBOX_OP_SET_PARTITION_INFO,
+	CXL_MBOX_OP_SET_LSA,
+	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
+	CXL_MBOX_OP_SCAN_MEDIA,
+	CXL_MBOX_OP_GET_SCAN_MEDIA,
+};
+
+/*
+ * Command sets that RAW doesn't permit. All opcodes in this set are
+ * disabled because they pass plain text security payloads over the
+ * user/kernel boundary. This functionality is intended to be wrapped
+ * behind the keys ABI which allows for encrypted payloads in the UAPI
+ */
+static u8 security_command_sets[] = {
+	0x44, /* Sanitize */
+	0x45, /* Persistent Memory Data-at-rest Security */
+	0x46, /* Security Passthrough */
+};
+
+static bool cxl_is_security_command(u16 opcode)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
+		if (security_command_sets[i] == (opcode >> 8))
+			return true;
+	return false;
+}
+
+static struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
+{
+	struct cxl_mem_command *c;
+
+	cxl_for_each_cmd(c)
+		if (c->opcode == opcode)
+			return c;
+
+	return NULL;
+}
+
+/**
+ * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
+ * @cxlm: The CXL memory device to communicate with.
+ * @opcode: Opcode for the mailbox command.
+ * @in: The input payload for the mailbox command.
+ * @in_size: The length of the input payload
+ * @out: Caller allocated buffer for the output.
+ * @out_size: Expected size of output.
+ *
+ * Context: Any context. Will acquire and release mbox_mutex.
+ * Return:
+ *  * %>=0	- Number of bytes returned in @out.
+ *  * %-E2BIG	- Payload is too large for hardware.
+ *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
+ *  * %-EFAULT	- Hardware error occurred.
+ *  * %-ENXIO	- Command completed, but device reported an error.
+ *  * %-EIO	- Unexpected output size.
+ *
+ * Mailbox commands may execute successfully yet the device itself reported an
+ * error. While this distinction can be useful for commands from userspace, the
+ * kernel will only be able to use results when both are successful.
+ *
+ * See __cxl_mem_mbox_send_cmd()
+ */
+int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, void *in,
+			  size_t in_size, void *out, size_t out_size)
+{
+	const struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
+	struct cxl_mbox_cmd mbox_cmd = {
+		.opcode = opcode,
+		.payload_in = in,
+		.size_in = in_size,
+		.size_out = out_size,
+		.payload_out = out,
+	};
+	int rc;
+
+	if (out_size > cxlm->payload_size)
+		return -E2BIG;
+
+	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
+	if (rc)
+		return rc;
+
+	/* TODO: Map return code to proper kernel style errno */
+	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
+		return -ENXIO;
+
+	/*
+	 * Variable sized commands can't be validated and so it's up to the
+	 * caller to do that if they wish.
+	 */
+	if (cmd->info.size_out >= 0 && mbox_cmd.size_out != out_size)
+		return -EIO;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_mbox_send_cmd);
+
+static bool cxl_mem_raw_command_allowed(u16 opcode)
+{
+	int i;
+
+	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
+		return false;
+
+	if (security_locked_down(LOCKDOWN_NONE))
+		return false;
+
+	if (cxl_raw_allow_all)
+		return true;
+
+	if (cxl_is_security_command(opcode))
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(cxl_disabled_raw_commands); i++)
+		if (cxl_disabled_raw_commands[i] == opcode)
+			return false;
+
+	return true;
+}
+
+/**
+ * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
+ * @cxlm: &struct cxl_mem device whose mailbox will be used.
+ * @send_cmd: &struct cxl_send_command copied in from userspace.
+ * @out_cmd: Sanitized and populated &struct cxl_mem_command.
+ *
+ * Return:
+ *  * %0	- @out_cmd is ready to send.
+ *  * %-ENOTTY	- Invalid command specified.
+ *  * %-EINVAL	- Reserved fields or invalid values were used.
+ *  * %-ENOMEM	- Input or output buffer wasn't sized properly.
+ *  * %-EPERM	- Attempted to use a protected command.
+ *
+ * The result of this command is a fully validated command in @out_cmd that is
+ * safe to send to the hardware.
+ *
+ * See handle_mailbox_cmd_from_user()
+ */
+static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
+				      const struct cxl_send_command *send_cmd,
+				      struct cxl_mem_command *out_cmd)
+{
+	const struct cxl_command_info *info;
+	struct cxl_mem_command *c;
+
+	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
+		return -ENOTTY;
+
+	/*
+	 * The user can never specify an input payload larger than what hardware
+	 * supports, but output can be arbitrarily large (simply write out as
+	 * much data as the hardware provides).
+	 */
+	if (send_cmd->in.size > cxlm->payload_size)
+		return -EINVAL;
+
+	/*
+	 * Checks are bypassed for raw commands but a WARN/taint will occur
+	 * later in the callchain
+	 */
+	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
+		const struct cxl_mem_command temp = {
+			.info = {
+				.id = CXL_MEM_COMMAND_ID_RAW,
+				.flags = 0,
+				.size_in = send_cmd->in.size,
+				.size_out = send_cmd->out.size,
+			},
+			.opcode = send_cmd->raw.opcode
+		};
+
+		if (send_cmd->raw.rsvd)
+			return -EINVAL;
+
+		/*
+		 * Unlike supported commands, the output size of RAW commands
+		 * gets passed along without further checking, so it must be
+		 * validated here.
+		 */
+		if (send_cmd->out.size > cxlm->payload_size)
+			return -EINVAL;
+
+		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
+			return -EPERM;
+
+		memcpy(out_cmd, &temp, sizeof(temp));
+
+		return 0;
+	}
+
+	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
+		return -EINVAL;
+
+	if (send_cmd->rsvd)
+		return -EINVAL;
+
+	if (send_cmd->in.rsvd || send_cmd->out.rsvd)
+		return -EINVAL;
+
+	/* Convert user's command into the internal representation */
+	c = &cxl_mem_commands[send_cmd->id];
+	info = &c->info;
+
+	/* Check that the command is enabled for hardware */
+	if (!test_bit(info->id, cxlm->enabled_cmds))
+		return -ENOTTY;
+
+	/* Check the input buffer is the expected size */
+	if (info->size_in >= 0 && info->size_in != send_cmd->in.size)
+		return -ENOMEM;
+
+	/* Check the output buffer is at least large enough */
+	if (info->size_out >= 0 && send_cmd->out.size < info->size_out)
+		return -ENOMEM;
+
+	memcpy(out_cmd, c, sizeof(*c));
+	out_cmd->info.size_in = send_cmd->in.size;
+	/*
+	 * XXX: out_cmd->info.size_out will be controlled by the driver, and the
+	 * specified number of bytes @send_cmd->out.size will be copied back out
+	 * to userspace.
+	 */
+
+	return 0;
+}
+
+#define cxl_cmd_count ARRAY_SIZE(cxl_mem_commands)
+
+int cxl_query_cmd(struct cxl_memdev *cxlmd,
+		  struct cxl_mem_query_commands __user *q)
+{
+	struct device *dev = &cxlmd->dev;
+	struct cxl_mem_command *cmd;
+	u32 n_commands;
+	int j = 0;
+
+	dev_dbg(dev, "Query IOCTL\n");
+
+	if (get_user(n_commands, &q->n_commands))
+		return -EFAULT;
+
+	/* returns the total number if 0 elements are requested. */
+	if (n_commands == 0)
+		return put_user(cxl_cmd_count, &q->n_commands);
+
+	/*
+	 * otherwise, return min(n_commands, total commands) cxl_command_info
+	 * structures.
+	 */
+	cxl_for_each_cmd(cmd) {
+		const struct cxl_command_info *info = &cmd->info;
+
+		if (copy_to_user(&q->commands[j++], info, sizeof(*info)))
+			return -EFAULT;
+
+		if (j == n_commands)
+			break;
+	}
+
+	return 0;
+}
+
+/**
+ * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace.
+ * @cxlm: The CXL memory device to communicate with.
+ * @cmd: The validated command.
+ * @in_payload: Pointer to userspace's input payload.
+ * @out_payload: Pointer to userspace's output payload.
+ * @size_out: (Input) Max payload size to copy out.
+ *            (Output) Payload size hardware generated.
+ * @retval: Hardware generated return code from the operation.
+ *
+ * Return:
+ *  * %0	- Mailbox transaction succeeded. This implies the mailbox
+ *		  protocol completed successfully not that the operation itself
+ *		  was successful.
+ *  * %-ENOMEM  - Couldn't allocate a bounce buffer.
+ *  * %-EFAULT	- Something happened with copy_to/from_user.
+ *  * %-EINTR	- Mailbox acquisition interrupted.
+ *  * %-EXXX	- Transaction level failures.
+ *
+ * Creates the appropriate mailbox command and dispatches it on behalf of a
+ * userspace request. The input and output payloads are copied between
+ * userspace.
+ *
+ * See cxl_send_cmd().
+ */
+static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
+					const struct cxl_mem_command *cmd,
+					u64 in_payload, u64 out_payload,
+					s32 *size_out, u32 *retval)
+{
+	struct device *dev = cxlm->dev;
+	struct cxl_mbox_cmd mbox_cmd = {
+		.opcode = cmd->opcode,
+		.size_in = cmd->info.size_in,
+		.size_out = cmd->info.size_out,
+	};
+	int rc;
+
+	if (cmd->info.size_out) {
+		mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL);
+		if (!mbox_cmd.payload_out)
+			return -ENOMEM;
+	}
+
+	if (cmd->info.size_in) {
+		mbox_cmd.payload_in = vmemdup_user(u64_to_user_ptr(in_payload),
+						   cmd->info.size_in);
+		if (IS_ERR(mbox_cmd.payload_in)) {
+			kvfree(mbox_cmd.payload_out);
+			return PTR_ERR(mbox_cmd.payload_in);
+		}
+	}
+
+	dev_dbg(dev,
+		"Submitting %s command for user\n"
+		"\topcode: %x\n"
+		"\tsize: %ub\n",
+		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
+		cmd->info.size_in);
+
+	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
+		      "raw command path used\n");
+
+	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
+	if (rc)
+		goto out;
+
+	/*
+	 * @size_out contains the max size that's allowed to be written back out
+	 * to userspace. While the payload may have written more output than
+	 * this it will have to be ignored.
+	 */
+	if (mbox_cmd.size_out) {
+		dev_WARN_ONCE(dev, mbox_cmd.size_out > *size_out,
+			      "Invalid return size\n");
+		if (copy_to_user(u64_to_user_ptr(out_payload),
+				 mbox_cmd.payload_out, mbox_cmd.size_out)) {
+			rc = -EFAULT;
+			goto out;
+		}
+	}
+
+	*size_out = mbox_cmd.size_out;
+	*retval = mbox_cmd.return_code;
+
+out:
+	kvfree(mbox_cmd.payload_in);
+	kvfree(mbox_cmd.payload_out);
+	return rc;
+}
+
+int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
+{
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	struct device *dev = &cxlmd->dev;
+	struct cxl_send_command send;
+	struct cxl_mem_command c;
+	int rc;
+
+	dev_dbg(dev, "Send IOCTL\n");
+
+	if (copy_from_user(&send, s, sizeof(send)))
+		return -EFAULT;
+
+	rc = cxl_validate_cmd_from_user(cxlmd->cxlm, &send, &c);
+	if (rc)
+		return rc;
+
+	/* Prepare to handle a full payload for variable sized output */
+	if (c.info.size_out < 0)
+		c.info.size_out = cxlm->payload_size;
+
+	rc = handle_mailbox_cmd_from_user(cxlm, &c, send.in.payload,
+					  send.out.payload, &send.out.size,
+					  &send.retval);
+	if (rc)
+		return rc;
+
+	if (copy_to_user(s, &send, sizeof(send)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
+{
+	u32 remaining = size;
+	u32 offset = 0;
+
+	while (remaining) {
+		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
+		struct cxl_mbox_get_log {
+			uuid_t uuid;
+			__le32 offset;
+			__le32 length;
+		} __packed log = {
+			.uuid = *uuid,
+			.offset = cpu_to_le32(offset),
+			.length = cpu_to_le32(xfer_size)
+		};
+		int rc;
+
+		rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_LOG, &log,
+					   sizeof(log), out, xfer_size);
+		if (rc < 0)
+			return rc;
+
+		out += xfer_size;
+		remaining -= xfer_size;
+		offset += xfer_size;
+	}
+
+	return 0;
+}
+
+/**
+ * cxl_walk_cel() - Walk through the Command Effects Log.
+ * @cxlm: Device.
+ * @size: Length of the Command Effects Log.
+ * @cel: CEL
+ *
+ * Iterate over each entry in the CEL and determine if the driver supports the
+ * command. If so, the command is enabled for the device and can be used later.
+ */
+static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
+{
+	struct cel_entry {
+		__le16 opcode;
+		__le16 effect;
+	} __packed * cel_entry;
+	const int cel_entries = size / sizeof(*cel_entry);
+	int i;
+
+	cel_entry = (struct cel_entry *)cel;
+
+	for (i = 0; i < cel_entries; i++) {
+		u16 opcode = le16_to_cpu(cel_entry[i].opcode);
+		struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
+
+		if (!cmd) {
+			dev_dbg(cxlm->dev,
+				"Opcode 0x%04x unsupported by driver", opcode);
+			continue;
+		}
+
+		set_bit(cmd->info.id, cxlm->enabled_cmds);
+	}
+}
+
+struct cxl_mbox_get_supported_logs {
+	__le16 entries;
+	u8 rsvd[6];
+	struct gsl_entry {
+		uuid_t uuid;
+		__le32 size;
+	} __packed entry[];
+} __packed;
+
+static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_mem *cxlm)
+{
+	struct cxl_mbox_get_supported_logs *ret;
+	int rc;
+
+	ret = kvmalloc(cxlm->payload_size, GFP_KERNEL);
+	if (!ret)
+		return ERR_PTR(-ENOMEM);
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_SUPPORTED_LOGS, NULL,
+				   0, ret, cxlm->payload_size);
+	if (rc < 0) {
+		kvfree(ret);
+		return ERR_PTR(rc);
+	}
+
+	return ret;
+}
+
+enum {
+	CEL_UUID,
+	VENDOR_DEBUG_UUID,
+};
+
+/* See CXL 2.0 Table 170. Get Log Input Payload */
+static const uuid_t log_uuid[] = {
+	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
+			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
+	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
+					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86),
+};
+
+/**
+ * cxl_mem_enumerate_cmds() - Enumerate commands for a device.
+ * @cxlm: The device.
+ *
+ * Return: 0 if enumeration completed successfully.
+ *
+ * CXL devices have optional support for certain commands. This function will
+ * determine the set of supported commands for the hardware and update the
+ * enabled_cmds bitmap in the @cxlm.
+ */
+int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
+{
+	struct cxl_mbox_get_supported_logs *gsl;
+	struct device *dev = cxlm->dev;
+	struct cxl_mem_command *cmd;
+	int i, rc;
+
+	gsl = cxl_get_gsl(cxlm);
+	if (IS_ERR(gsl))
+		return PTR_ERR(gsl);
+
+	rc = -ENOENT;
+	for (i = 0; i < le16_to_cpu(gsl->entries); i++) {
+		u32 size = le32_to_cpu(gsl->entry[i].size);
+		uuid_t uuid = gsl->entry[i].uuid;
+		u8 *log;
+
+		dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size);
+
+		if (!uuid_equal(&uuid, &log_uuid[CEL_UUID]))
+			continue;
+
+		log = kvmalloc(size, GFP_KERNEL);
+		if (!log) {
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		rc = cxl_xfer_log(cxlm, &uuid, size, log);
+		if (rc) {
+			kvfree(log);
+			goto out;
+		}
+
+		cxl_walk_cel(cxlm, size, log);
+		kvfree(log);
+
+		/* In case CEL was bogus, enable some default commands. */
+		cxl_for_each_cmd(cmd)
+			if (cmd->flags & CXL_CMD_FLAG_FORCE_ENABLE)
+				set_bit(cmd->info.id, cxlm->enabled_cmds);
+
+		/* Found the required CEL */
+		rc = 0;
+	}
+
+out:
+	kvfree(gsl);
+	return rc;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_enumerate_cmds);
+
+/**
+ * cxl_mem_get_partition_info - Get partition info
+ * @cxlm: The device to act on
+ *
+ * Retrieve the current partition info for the device and update the
+ * active/next volatile and persistent byte counts cached in @cxlm. The
+ * 'active' values are the current capacity in bytes. If not 0, the
+ * 'next' values are the pending capacities, in bytes, which take effect
+ * on the next cold reset.
+ *
+ * Return: 0 on success, or the result of the mailbox command.
+ *
+ * See CXL 2.0 8.2.9.5.2.1 Get Partition Info
+ */
+static int cxl_mem_get_partition_info(struct cxl_mem *cxlm)
+{
+	struct cxl_mbox_get_partition_info {
+		__le64 active_volatile_cap;
+		__le64 active_persistent_cap;
+		__le64 next_volatile_cap;
+		__le64 next_persistent_cap;
+	} __packed pi;
+	int rc;
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_PARTITION_INFO,
+				   NULL, 0, &pi, sizeof(pi));
+
+	if (rc)
+		return rc;
+
+	cxlm->active_volatile_bytes =
+		le64_to_cpu(pi.active_volatile_cap) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->active_persistent_bytes =
+		le64_to_cpu(pi.active_persistent_cap) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->next_volatile_bytes =
+		le64_to_cpu(pi.next_volatile_cap) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->next_persistent_bytes =
+		le64_to_cpu(pi.next_persistent_cap) * CXL_CAPACITY_MULTIPLIER;
+
+	return 0;
+}
+
+/**
+ * cxl_mem_identify() - Send the IDENTIFY command to the device.
+ * @cxlm: The device to identify.
+ *
+ * Return: 0 if identify was executed successfully.
+ *
+ * This will dispatch the identify command to the device and on success populate
+ * structures to be exported to sysfs.
+ */
+int cxl_mem_identify(struct cxl_mem *cxlm)
+{
+	/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
+	struct cxl_mbox_identify {
+		char fw_revision[0x10];
+		__le64 total_capacity;
+		__le64 volatile_capacity;
+		__le64 persistent_capacity;
+		__le64 partition_align;
+		__le16 info_event_log_size;
+		__le16 warning_event_log_size;
+		__le16 failure_event_log_size;
+		__le16 fatal_event_log_size;
+		__le32 lsa_size;
+		u8 poison_list_max_mer[3];
+		__le16 inject_poison_limit;
+		u8 poison_caps;
+		u8 qos_telemetry_caps;
+	} __packed id;
+	int rc;
+
+	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0, &id,
+				   sizeof(id));
+	if (rc < 0)
+		return rc;
+
+	cxlm->total_bytes =
+		le64_to_cpu(id.total_capacity) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->volatile_only_bytes =
+		le64_to_cpu(id.volatile_capacity) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->persistent_only_bytes =
+		le64_to_cpu(id.persistent_capacity) * CXL_CAPACITY_MULTIPLIER;
+	cxlm->partition_align_bytes =
+		le64_to_cpu(id.partition_align) * CXL_CAPACITY_MULTIPLIER;
+
+	dev_dbg(cxlm->dev,
+		"Identify Memory Device\n"
+		"     total_bytes = %#llx\n"
+		"     volatile_only_bytes = %#llx\n"
+		"     persistent_only_bytes = %#llx\n"
+		"     partition_align_bytes = %#llx\n",
+		cxlm->total_bytes, cxlm->volatile_only_bytes,
+		cxlm->persistent_only_bytes, cxlm->partition_align_bytes);
+
+	cxlm->lsa_size = le32_to_cpu(id.lsa_size);
+	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_identify);
+
+int cxl_mem_create_range_info(struct cxl_mem *cxlm)
+{
+	int rc;
+
+	if (cxlm->partition_align_bytes == 0) {
+		cxlm->ram_range.start = 0;
+		cxlm->ram_range.end = cxlm->volatile_only_bytes - 1;
+		cxlm->pmem_range.start = cxlm->volatile_only_bytes;
+		cxlm->pmem_range.end = cxlm->volatile_only_bytes +
+				       cxlm->persistent_only_bytes - 1;
+		return 0;
+	}
+
+	rc = cxl_mem_get_partition_info(cxlm);
+	if (rc) {
+		dev_err(cxlm->dev, "Failed to query partition information\n");
+		return rc;
+	}
+
+	dev_dbg(cxlm->dev,
+		"Get Partition Info\n"
+		"     active_volatile_bytes = %#llx\n"
+		"     active_persistent_bytes = %#llx\n"
+		"     next_volatile_bytes = %#llx\n"
+		"     next_persistent_bytes = %#llx\n",
+		cxlm->active_volatile_bytes, cxlm->active_persistent_bytes,
+		cxlm->next_volatile_bytes, cxlm->next_persistent_bytes);
+
+	cxlm->ram_range.start = 0;
+	cxlm->ram_range.end = cxlm->active_volatile_bytes - 1;
+
+	cxlm->pmem_range.start = cxlm->active_volatile_bytes;
+	cxlm->pmem_range.end =
+		cxlm->active_volatile_bytes + cxlm->active_persistent_bytes - 1;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_create_range_info);
+
+struct cxl_mem *cxl_mem_create(struct device *dev)
+{
+	struct cxl_mem *cxlm;
+
+	cxlm = devm_kzalloc(dev, sizeof(*cxlm), GFP_KERNEL);
+	if (!cxlm)
+		return ERR_PTR(-ENOMEM);
+
+	mutex_init(&cxlm->mbox_mutex);
+	cxlm->dev = dev;
+	cxlm->enabled_cmds =
+		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
+				   sizeof(unsigned long),
+				   GFP_KERNEL | __GFP_ZERO);
+	if (!cxlm->enabled_cmds)
+		return ERR_PTR(-ENOMEM);
+
+	return cxlm;
+}
+EXPORT_SYMBOL_GPL(cxl_mem_create);
+
+static struct dentry *cxl_debugfs;
+
+void __init cxl_mbox_init(void)
+{
+	struct dentry *mbox_debugfs;
+
+	cxl_debugfs = debugfs_create_dir("cxl", NULL);
+	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
+	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
+			    &cxl_raw_allow_all);
+}
+
+void cxl_mbox_exit(void)
+{
+	debugfs_remove_recursive(cxl_debugfs);
+}
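The cxl_walk_cel() loop above decodes fixed-size little-endian CEL entries and sets a bit per recognized opcode. A minimal userspace model of that walk, with hypothetical names (this is an illustrative sketch, not the in-kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One CEL entry: a little-endian 16-bit opcode followed by a
 * 16-bit effects field, mirroring the struct in cxl_walk_cel(). */
struct cel_entry_model {
	uint8_t opcode_le[2];
	uint8_t effect_le[2];
};

uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
}

/* Walk a raw CEL buffer and mark each opcode the "driver" knows
 * about in a bitmap, one bit per index in @known, mirroring the
 * cxl_mem_find_command()/set_bit() loop in the driver. */
void walk_cel_model(const uint8_t *cel, size_t size,
		    const uint16_t *known, size_t nknown,
		    unsigned long *enabled)
{
	size_t i, j, entries = size / sizeof(struct cel_entry_model);

	for (i = 0; i < entries; i++) {
		uint16_t opcode =
			get_le16(cel + i * sizeof(struct cel_entry_model));

		for (j = 0; j < nknown; j++)
			if (known[j] == opcode)
				*enabled |= 1UL << j;
	}
}
```

Unknown opcodes are simply skipped, which is why the driver separately force-enables a baseline command set in case the CEL is bogus.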
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 40789558f8c2..a2a9691568af 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -8,6 +8,8 @@
 #include <cxlmem.h>
 #include "core.h"
 
+static DECLARE_RWSEM(cxl_memdev_rwsem);
+
 /*
  * An entire PCI topology full of devices should be enough for any
  * config
@@ -132,16 +134,21 @@ static const struct device_type cxl_memdev_type = {
 	.groups = cxl_memdev_attribute_groups,
 };
 
+static void cxl_memdev_shutdown(struct device *dev)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+
+	down_write(&cxl_memdev_rwsem);
+	cxlmd->cxlm = NULL;
+	up_write(&cxl_memdev_rwsem);
+}
+
 static void cxl_memdev_unregister(void *_cxlmd)
 {
 	struct cxl_memdev *cxlmd = _cxlmd;
 	struct device *dev = &cxlmd->dev;
-	struct cdev *cdev = &cxlmd->cdev;
-	const struct cdevm_file_operations *cdevm_fops;
-
-	cdevm_fops = container_of(cdev->ops, typeof(*cdevm_fops), fops);
-	cdevm_fops->shutdown(dev);
 
+	cxl_memdev_shutdown(dev);
 	cdev_device_del(&cxlmd->cdev, dev);
 	put_device(dev);
 }
@@ -180,16 +187,72 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
 	return ERR_PTR(rc);
 }
 
+static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd,
+			       unsigned long arg)
+{
+	switch (cmd) {
+	case CXL_MEM_QUERY_COMMANDS:
+		return cxl_query_cmd(cxlmd, (void __user *)arg);
+	case CXL_MEM_SEND_COMMAND:
+		return cxl_send_cmd(cxlmd, (void __user *)arg);
+	default:
+		return -ENOTTY;
+	}
+}
+
+static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
+			     unsigned long arg)
+{
+	struct cxl_memdev *cxlmd = file->private_data;
+	int rc = -ENXIO;
+
+	down_read(&cxl_memdev_rwsem);
+	if (cxlmd->cxlm)
+		rc = __cxl_memdev_ioctl(cxlmd, cmd, arg);
+	up_read(&cxl_memdev_rwsem);
+
+	return rc;
+}
+
+static int cxl_memdev_open(struct inode *inode, struct file *file)
+{
+	struct cxl_memdev *cxlmd =
+		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
+
+	get_device(&cxlmd->dev);
+	file->private_data = cxlmd;
+
+	return 0;
+}
+
+static int cxl_memdev_release_file(struct inode *inode, struct file *file)
+{
+	struct cxl_memdev *cxlmd =
+		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
+
+	put_device(&cxlmd->dev);
+
+	return 0;
+}
+
+static const struct file_operations cxl_memdev_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = cxl_memdev_ioctl,
+	.open = cxl_memdev_open,
+	.release = cxl_memdev_release_file,
+	.compat_ioctl = compat_ptr_ioctl,
+	.llseek = noop_llseek,
+};
+
 struct cxl_memdev *
-devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
-		    const struct cdevm_file_operations *cdevm_fops)
+devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm)
 {
 	struct cxl_memdev *cxlmd;
 	struct device *dev;
 	struct cdev *cdev;
 	int rc;
 
-	cxlmd = cxl_memdev_alloc(cxlm, &cdevm_fops->fops);
+	cxlmd = cxl_memdev_alloc(cxlm, &cxl_memdev_fops);
 	if (IS_ERR(cxlmd))
 		return cxlmd;
 
@@ -219,7 +282,7 @@ devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
 	 * The cdev was briefly live, shutdown any ioctl operations that
 	 * saw that state.
 	 */
-	cdevm_fops->shutdown(dev);
+	cxl_memdev_shutdown(dev);
 	put_device(dev);
 	return ERR_PTR(rc);
 }
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index a56d8f26a157..b7122ded3a04 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -2,6 +2,7 @@
 /* Copyright(c) 2020-2021 Intel Corporation. */
 #ifndef __CXL_MEM_H__
 #define __CXL_MEM_H__
+#include <uapi/linux/cxl_mem.h>
 #include <linux/cdev.h>
 #include "cxl.h"
 
@@ -28,21 +29,6 @@
 	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
 	 CXLMDEV_RESET_NEEDED_NOT)
 
-/**
- * struct cdevm_file_operations - devm coordinated cdev file operations
- * @fops: file operations that are synchronized against @shutdown
- * @shutdown: disconnect driver data
- *
- * @shutdown is invoked in the devres release path to disconnect any
- * driver instance data from @dev. It assumes synchronization with any
- * fops operation that requires driver data. After @shutdown an
- * operation may only reference @device data.
- */
-struct cdevm_file_operations {
-	struct file_operations fops;
-	void (*shutdown)(struct device *dev);
-};
-
 /**
  * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
  * @dev: driver core device object
@@ -62,12 +48,11 @@ static inline struct cxl_memdev *to_cxl_memdev(struct device *dev)
 	return container_of(dev, struct cxl_memdev, dev);
 }
 
-struct cxl_memdev *
-devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
-		    const struct cdevm_file_operations *cdevm_fops);
+struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
+				       struct cxl_mem *cxlm);
 
 /**
- * struct mbox_cmd - A command to be submitted to hardware.
+ * struct cxl_mbox_cmd - A command to be submitted to hardware.
  * @opcode: (input) The command set and command submitted to hardware.
  * @payload_in: (input) Pointer to the input payload.
  * @payload_out: (output) Pointer to the output payload. Must be allocated by
@@ -147,4 +132,62 @@ struct cxl_mem {
 
 	int (*mbox_send)(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd);
 };
+
+enum cxl_opcode {
+	CXL_MBOX_OP_INVALID		= 0x0000,
+	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
+	CXL_MBOX_OP_GET_FW_INFO		= 0x0200,
+	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
+	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
+	CXL_MBOX_OP_GET_LOG		= 0x0401,
+	CXL_MBOX_OP_IDENTIFY		= 0x4000,
+	CXL_MBOX_OP_GET_PARTITION_INFO	= 0x4100,
+	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
+	CXL_MBOX_OP_GET_LSA		= 0x4102,
+	CXL_MBOX_OP_SET_LSA		= 0x4103,
+	CXL_MBOX_OP_GET_HEALTH_INFO	= 0x4200,
+	CXL_MBOX_OP_GET_ALERT_CONFIG	= 0x4201,
+	CXL_MBOX_OP_SET_ALERT_CONFIG	= 0x4202,
+	CXL_MBOX_OP_GET_SHUTDOWN_STATE	= 0x4203,
+	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
+	CXL_MBOX_OP_GET_POISON		= 0x4300,
+	CXL_MBOX_OP_INJECT_POISON	= 0x4301,
+	CXL_MBOX_OP_CLEAR_POISON	= 0x4302,
+	CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS	= 0x4303,
+	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
+	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
+	CXL_MBOX_OP_MAX			= 0x10000
+};
+
+/**
+ * struct cxl_mem_command - Driver representation of a memory device command
+ * @info: Command information as it exists for the UAPI
+ * @opcode: The actual bits used for the mailbox protocol
+ * @flags: Set of flags affecting driver behavior.
+ *
+ *  * %CXL_CMD_FLAG_FORCE_ENABLE: In cases of error, commands with this flag
+ *    will be enabled by the driver regardless of what hardware may have
+ *    advertised.
+ *
+ * The cxl_mem_command is the driver's internal representation of commands that
+ * are supported by the driver. Some of these commands may not be supported by
+ * the hardware. The driver will use @info to validate the fields passed in by
+ * the user then submit the @opcode to the hardware.
+ *
+ * See struct cxl_command_info.
+ */
+struct cxl_mem_command {
+	struct cxl_command_info info;
+	enum cxl_opcode opcode;
+	u32 flags;
+#define CXL_CMD_FLAG_NONE 0
+#define CXL_CMD_FLAG_FORCE_ENABLE BIT(0)
+};
+
+int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, void *in,
+			  size_t in_size, void *out, size_t out_size);
+int cxl_mem_identify(struct cxl_mem *cxlm);
+int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm);
+int cxl_mem_create_range_info(struct cxl_mem *cxlm);
+struct cxl_mem *cxl_mem_create(struct device *dev);
 #endif /* __CXL_MEM_H__ */
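The command table moved out of pci.c below encodes "variable size" as ~0, which reads back as -1 through the s32 size fields of struct cxl_command_info. The user-input checks in cxl_validate_cmd_from_user() then treat fixed input sizes as exact-match and fixed output sizes as a minimum. A small model of those two checks (helper names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* A negative expected size (~0 in the table) means "variable";
 * otherwise the user's input payload size must match exactly,
 * as in the info->size_in check. */
int input_size_ok(int32_t expected, uint32_t user_size)
{
	return expected < 0 || (uint32_t)expected == user_size;
}

/* For output, a fixed expected size is a lower bound: the user's
 * buffer merely needs to be large enough, as in the
 * info->size_out check. */
int output_size_ok(int32_t expected, uint32_t user_size)
{
	return expected < 0 || user_size >= (uint32_t)expected;
}
```

This is why GET_LSA can declare `0x8, ~0`: an 8-byte input is mandatory while the output length depends on the request.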
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index a211b35af4be..b8075b941a3a 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -1,17 +1,12 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
-#include <uapi/linux/cxl_mem.h>
-#include <linux/security.h>
-#include <linux/debugfs.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
 #include <linux/module.h>
 #include <linux/sizes.h>
 #include <linux/mutex.h>
 #include <linux/list.h>
-#include <linux/cdev.h>
-#include <linux/idr.h>
 #include <linux/pci.h>
 #include <linux/io.h>
-#include <linux/io-64-nonatomic-lo-hi.h>
 #include "cxlmem.h"
 #include "pci.h"
 #include "cxl.h"
@@ -38,162 +33,6 @@
 /* CXL 2.0 - 8.2.8.4 */
 #define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
 
-enum opcode {
-	CXL_MBOX_OP_INVALID		= 0x0000,
-	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
-	CXL_MBOX_OP_GET_FW_INFO		= 0x0200,
-	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
-	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
-	CXL_MBOX_OP_GET_LOG		= 0x0401,
-	CXL_MBOX_OP_IDENTIFY		= 0x4000,
-	CXL_MBOX_OP_GET_PARTITION_INFO	= 0x4100,
-	CXL_MBOX_OP_SET_PARTITION_INFO	= 0x4101,
-	CXL_MBOX_OP_GET_LSA		= 0x4102,
-	CXL_MBOX_OP_SET_LSA		= 0x4103,
-	CXL_MBOX_OP_GET_HEALTH_INFO	= 0x4200,
-	CXL_MBOX_OP_GET_ALERT_CONFIG	= 0x4201,
-	CXL_MBOX_OP_SET_ALERT_CONFIG	= 0x4202,
-	CXL_MBOX_OP_GET_SHUTDOWN_STATE	= 0x4203,
-	CXL_MBOX_OP_SET_SHUTDOWN_STATE	= 0x4204,
-	CXL_MBOX_OP_GET_POISON		= 0x4300,
-	CXL_MBOX_OP_INJECT_POISON	= 0x4301,
-	CXL_MBOX_OP_CLEAR_POISON	= 0x4302,
-	CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS	= 0x4303,
-	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
-	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
-	CXL_MBOX_OP_MAX			= 0x10000
-};
-
-static DECLARE_RWSEM(cxl_memdev_rwsem);
-static struct dentry *cxl_debugfs;
-static bool cxl_raw_allow_all;
-
-enum {
-	CEL_UUID,
-	VENDOR_DEBUG_UUID,
-};
-
-/* See CXL 2.0 Table 170. Get Log Input Payload */
-static const uuid_t log_uuid[] = {
-	[CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96,
-			       0xb1, 0x62, 0x3b, 0x3f, 0x17),
-	[VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f,
-					0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86),
-};
-
-/**
- * struct cxl_mem_command - Driver representation of a memory device command
- * @info: Command information as it exists for the UAPI
- * @opcode: The actual bits used for the mailbox protocol
- * @flags: Set of flags effecting driver behavior.
- *
- *  * %CXL_CMD_FLAG_FORCE_ENABLE: In cases of error, commands with this flag
- *    will be enabled by the driver regardless of what hardware may have
- *    advertised.
- *
- * The cxl_mem_command is the driver's internal representation of commands that
- * are supported by the driver. Some of these commands may not be supported by
- * the hardware. The driver will use @info to validate the fields passed in by
- * the user then submit the @opcode to the hardware.
- *
- * See struct cxl_command_info.
- */
-struct cxl_mem_command {
-	struct cxl_command_info info;
-	enum opcode opcode;
-	u32 flags;
-#define CXL_CMD_FLAG_NONE 0
-#define CXL_CMD_FLAG_FORCE_ENABLE BIT(0)
-};
-
-#define CXL_CMD(_id, sin, sout, _flags)                                        \
-	[CXL_MEM_COMMAND_ID_##_id] = {                                         \
-	.info =	{                                                              \
-			.id = CXL_MEM_COMMAND_ID_##_id,                        \
-			.size_in = sin,                                        \
-			.size_out = sout,                                      \
-		},                                                             \
-	.opcode = CXL_MBOX_OP_##_id,                                           \
-	.flags = _flags,                                                       \
-	}
-
-/*
- * This table defines the supported mailbox commands for the driver. This table
- * is made up of a UAPI structure. Non-negative values as parameters in the
- * table will be validated against the user's input. For example, if size_in is
- * 0, and the user passed in 1, it is an error.
- */
-static struct cxl_mem_command mem_commands[CXL_MEM_COMMAND_ID_MAX] = {
-	CXL_CMD(IDENTIFY, 0, 0x43, CXL_CMD_FLAG_FORCE_ENABLE),
-#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
-	CXL_CMD(RAW, ~0, ~0, 0),
-#endif
-	CXL_CMD(GET_SUPPORTED_LOGS, 0, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
-	CXL_CMD(GET_FW_INFO, 0, 0x50, 0),
-	CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0),
-	CXL_CMD(GET_LSA, 0x8, ~0, 0),
-	CXL_CMD(GET_HEALTH_INFO, 0, 0x12, 0),
-	CXL_CMD(GET_LOG, 0x18, ~0, CXL_CMD_FLAG_FORCE_ENABLE),
-	CXL_CMD(SET_PARTITION_INFO, 0x0a, 0, 0),
-	CXL_CMD(SET_LSA, ~0, 0, 0),
-	CXL_CMD(GET_ALERT_CONFIG, 0, 0x10, 0),
-	CXL_CMD(SET_ALERT_CONFIG, 0xc, 0, 0),
-	CXL_CMD(GET_SHUTDOWN_STATE, 0, 0x1, 0),
-	CXL_CMD(SET_SHUTDOWN_STATE, 0x1, 0, 0),
-	CXL_CMD(GET_POISON, 0x10, ~0, 0),
-	CXL_CMD(INJECT_POISON, 0x8, 0, 0),
-	CXL_CMD(CLEAR_POISON, 0x48, 0, 0),
-	CXL_CMD(GET_SCAN_MEDIA_CAPS, 0x10, 0x4, 0),
-	CXL_CMD(SCAN_MEDIA, 0x11, 0, 0),
-	CXL_CMD(GET_SCAN_MEDIA, 0, ~0, 0),
-};
-
-/*
- * Commands that RAW doesn't permit. The rationale for each:
- *
- * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
- * coordination of transaction timeout values at the root bridge level.
- *
- * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
- * and needs to be coordinated with HDM updates.
- *
- * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
- * driver and any writes from userspace invalidates those contents.
- *
- * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
- * to the device after it is marked clean, userspace can not make that
- * assertion.
- *
- * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
- * is kept up to date with patrol notifications and error management.
- */
-static u16 cxl_disabled_raw_commands[] = {
-	CXL_MBOX_OP_ACTIVATE_FW,
-	CXL_MBOX_OP_SET_PARTITION_INFO,
-	CXL_MBOX_OP_SET_LSA,
-	CXL_MBOX_OP_SET_SHUTDOWN_STATE,
-	CXL_MBOX_OP_SCAN_MEDIA,
-	CXL_MBOX_OP_GET_SCAN_MEDIA,
-};
-
-/*
- * Command sets that RAW doesn't permit. All opcodes in this set are
- * disabled because they pass plain text security payloads over the
- * user/kernel boundary. This functionality is intended to be wrapped
- * behind the keys ABI which allows for encrypted payloads in the UAPI
- */
-static u8 security_command_sets[] = {
-	0x44, /* Sanitize */
-	0x45, /* Persistent Memory Data-at-rest Security */
-	0x46, /* Security Passthrough */
-};
-
-#define cxl_for_each_cmd(cmd)                                                  \
-	for ((cmd) = &mem_commands[0];                                         \
-	     ((cmd) - mem_commands) < ARRAY_SIZE(mem_commands); (cmd)++)
-
-#define cxl_cmd_count ARRAY_SIZE(mem_commands)
-
 static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 {
 	const unsigned long start = jiffies;
@@ -216,16 +55,6 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 	return 0;
 }
 
-static bool cxl_is_security_command(u16 opcode)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
-		if (security_command_sets[i] == (opcode >> 8))
-			return true;
-	return false;
-}
-
 static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 				 struct cxl_mbox_cmd *mbox_cmd)
 {
@@ -447,433 +276,6 @@ static int cxl_pci_mbox_send(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
 	return rc;
 }
 
-/**
- * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace.
- * @cxlm: The CXL memory device to communicate with.
- * @cmd: The validated command.
- * @in_payload: Pointer to userspace's input payload.
- * @out_payload: Pointer to userspace's output payload.
- * @size_out: (Input) Max payload size to copy out.
- *            (Output) Payload size hardware generated.
- * @retval: Hardware generated return code from the operation.
- *
- * Return:
- *  * %0	- Mailbox transaction succeeded. This implies the mailbox
- *		  protocol completed successfully not that the operation itself
- *		  was successful.
- *  * %-ENOMEM  - Couldn't allocate a bounce buffer.
- *  * %-EFAULT	- Something happened with copy_to/from_user.
- *  * %-EINTR	- Mailbox acquisition interrupted.
- *  * %-EXXX	- Transaction level failures.
- *
- * Creates the appropriate mailbox command and dispatches it on behalf of a
- * userspace request. The input and output payloads are copied between
- * userspace.
- *
- * See cxl_send_cmd().
- */
-static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm,
-					const struct cxl_mem_command *cmd,
-					u64 in_payload, u64 out_payload,
-					s32 *size_out, u32 *retval)
-{
-	struct device *dev = cxlm->dev;
-	struct cxl_mbox_cmd mbox_cmd = {
-		.opcode = cmd->opcode,
-		.size_in = cmd->info.size_in,
-		.size_out = cmd->info.size_out,
-	};
-	int rc;
-
-	if (cmd->info.size_out) {
-		mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL);
-		if (!mbox_cmd.payload_out)
-			return -ENOMEM;
-	}
-
-	if (cmd->info.size_in) {
-		mbox_cmd.payload_in = vmemdup_user(u64_to_user_ptr(in_payload),
-						   cmd->info.size_in);
-		if (IS_ERR(mbox_cmd.payload_in)) {
-			kvfree(mbox_cmd.payload_out);
-			return PTR_ERR(mbox_cmd.payload_in);
-		}
-	}
-
-	dev_dbg(dev,
-		"Submitting %s command for user\n"
-		"\topcode: %x\n"
-		"\tsize: %ub\n",
-		cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
-		cmd->info.size_in);
-
-	dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
-		      "raw command path used\n");
-
-	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
-	if (rc)
-		goto out;
-
-	/*
-	 * @size_out contains the max size that's allowed to be written back out
-	 * to userspace. While the payload may have written more output than
-	 * this it will have to be ignored.
-	 */
-	if (mbox_cmd.size_out) {
-		dev_WARN_ONCE(dev, mbox_cmd.size_out > *size_out,
-			      "Invalid return size\n");
-		if (copy_to_user(u64_to_user_ptr(out_payload),
-				 mbox_cmd.payload_out, mbox_cmd.size_out)) {
-			rc = -EFAULT;
-			goto out;
-		}
-	}
-
-	*size_out = mbox_cmd.size_out;
-	*retval = mbox_cmd.return_code;
-
-out:
-	kvfree(mbox_cmd.payload_in);
-	kvfree(mbox_cmd.payload_out);
-	return rc;
-}
-
-static bool cxl_mem_raw_command_allowed(u16 opcode)
-{
-	int i;
-
-	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
-		return false;
-
-	if (security_locked_down(LOCKDOWN_NONE))
-		return false;
-
-	if (cxl_raw_allow_all)
-		return true;
-
-	if (cxl_is_security_command(opcode))
-		return false;
-
-	for (i = 0; i < ARRAY_SIZE(cxl_disabled_raw_commands); i++)
-		if (cxl_disabled_raw_commands[i] == opcode)
-			return false;
-
-	return true;
-}
-
-/**
- * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
- * @cxlm: &struct cxl_mem device whose mailbox will be used.
- * @send_cmd: &struct cxl_send_command copied in from userspace.
- * @out_cmd: Sanitized and populated &struct cxl_mem_command.
- *
- * Return:
- *  * %0	- @out_cmd is ready to send.
- *  * %-ENOTTY	- Invalid command specified.
- *  * %-EINVAL	- Reserved fields or invalid values were used.
- *  * %-ENOMEM	- Input or output buffer wasn't sized properly.
- *  * %-EPERM	- Attempted to use a protected command.
- *
- * The result of this command is a fully validated command in @out_cmd that is
- * safe to send to the hardware.
- *
- * See handle_mailbox_cmd_from_user()
- */
-static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
-				      const struct cxl_send_command *send_cmd,
-				      struct cxl_mem_command *out_cmd)
-{
-	const struct cxl_command_info *info;
-	struct cxl_mem_command *c;
-
-	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
-		return -ENOTTY;
-
-	/*
-	 * The user can never specify an input payload larger than what hardware
-	 * supports, but output can be arbitrarily large (simply write out as
-	 * much data as the hardware provides).
-	 */
-	if (send_cmd->in.size > cxlm->payload_size)
-		return -EINVAL;
-
-	/*
-	 * Checks are bypassed for raw commands but a WARN/taint will occur
-	 * later in the callchain
-	 */
-	if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
-		const struct cxl_mem_command temp = {
-			.info = {
-				.id = CXL_MEM_COMMAND_ID_RAW,
-				.flags = 0,
-				.size_in = send_cmd->in.size,
-				.size_out = send_cmd->out.size,
-			},
-			.opcode = send_cmd->raw.opcode
-		};
-
-		if (send_cmd->raw.rsvd)
-			return -EINVAL;
-
-		/*
-		 * Unlike supported commands, the output size of RAW commands
-		 * gets passed along without further checking, so it must be
-		 * validated here.
-		 */
-		if (send_cmd->out.size > cxlm->payload_size)
-			return -EINVAL;
-
-		if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
-			return -EPERM;
-
-		memcpy(out_cmd, &temp, sizeof(temp));
-
-		return 0;
-	}
-
-	if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
-		return -EINVAL;
-
-	if (send_cmd->rsvd)
-		return -EINVAL;
-
-	if (send_cmd->in.rsvd || send_cmd->out.rsvd)
-		return -EINVAL;
-
-	/* Convert user's command into the internal representation */
-	c = &mem_commands[send_cmd->id];
-	info = &c->info;
-
-	/* Check that the command is enabled for hardware */
-	if (!test_bit(info->id, cxlm->enabled_cmds))
-		return -ENOTTY;
-
-	/* Check the input buffer is the expected size */
-	if (info->size_in >= 0 && info->size_in != send_cmd->in.size)
-		return -ENOMEM;
-
-	/* Check the output buffer is at least large enough */
-	if (info->size_out >= 0 && send_cmd->out.size < info->size_out)
-		return -ENOMEM;
-
-	memcpy(out_cmd, c, sizeof(*c));
-	out_cmd->info.size_in = send_cmd->in.size;
-	/*
-	 * XXX: out_cmd->info.size_out will be controlled by the driver, and the
-	 * specified number of bytes @send_cmd->out.size will be copied back out
-	 * to userspace.
-	 */
-
-	return 0;
-}
-
-static int cxl_query_cmd(struct cxl_memdev *cxlmd,
-			 struct cxl_mem_query_commands __user *q)
-{
-	struct device *dev = &cxlmd->dev;
-	struct cxl_mem_command *cmd;
-	u32 n_commands;
-	int j = 0;
-
-	dev_dbg(dev, "Query IOCTL\n");
-
-	if (get_user(n_commands, &q->n_commands))
-		return -EFAULT;
-
-	/* returns the total number if 0 elements are requested. */
-	if (n_commands == 0)
-		return put_user(cxl_cmd_count, &q->n_commands);
-
-	/*
-	 * otherwise, return max(n_commands, total commands) cxl_command_info
-	 * structures.
-	 */
-	cxl_for_each_cmd(cmd) {
-		const struct cxl_command_info *info = &cmd->info;
-
-		if (copy_to_user(&q->commands[j++], info, sizeof(*info)))
-			return -EFAULT;
-
-		if (j == n_commands)
-			break;
-	}
-
-	return 0;
-}
-
-static int cxl_send_cmd(struct cxl_memdev *cxlmd,
-			struct cxl_send_command __user *s)
-{
-	struct cxl_mem *cxlm = cxlmd->cxlm;
-	struct device *dev = &cxlmd->dev;
-	struct cxl_send_command send;
-	struct cxl_mem_command c;
-	int rc;
-
-	dev_dbg(dev, "Send IOCTL\n");
-
-	if (copy_from_user(&send, s, sizeof(send)))
-		return -EFAULT;
-
-	rc = cxl_validate_cmd_from_user(cxlmd->cxlm, &send, &c);
-	if (rc)
-		return rc;
-
-	/* Prepare to handle a full payload for variable sized output */
-	if (c.info.size_out < 0)
-		c.info.size_out = cxlm->payload_size;
-
-	rc = handle_mailbox_cmd_from_user(cxlm, &c, send.in.payload,
-					  send.out.payload, &send.out.size,
-					  &send.retval);
-	if (rc)
-		return rc;
-
-	if (copy_to_user(s, &send, sizeof(send)))
-		return -EFAULT;
-
-	return 0;
-}
-
-static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd,
-			       unsigned long arg)
-{
-	switch (cmd) {
-	case CXL_MEM_QUERY_COMMANDS:
-		return cxl_query_cmd(cxlmd, (void __user *)arg);
-	case CXL_MEM_SEND_COMMAND:
-		return cxl_send_cmd(cxlmd, (void __user *)arg);
-	default:
-		return -ENOTTY;
-	}
-}
-
-static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
-			     unsigned long arg)
-{
-	struct cxl_memdev *cxlmd = file->private_data;
-	int rc = -ENXIO;
-
-	down_read(&cxl_memdev_rwsem);
-	if (cxlmd->cxlm)
-		rc = __cxl_memdev_ioctl(cxlmd, cmd, arg);
-	up_read(&cxl_memdev_rwsem);
-
-	return rc;
-}
-
-static int cxl_memdev_open(struct inode *inode, struct file *file)
-{
-	struct cxl_memdev *cxlmd =
-		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
-
-	get_device(&cxlmd->dev);
-	file->private_data = cxlmd;
-
-	return 0;
-}
-
-static int cxl_memdev_release_file(struct inode *inode, struct file *file)
-{
-	struct cxl_memdev *cxlmd =
-		container_of(inode->i_cdev, typeof(*cxlmd), cdev);
-
-	put_device(&cxlmd->dev);
-
-	return 0;
-}
-
-static void cxl_memdev_shutdown(struct device *dev)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-
-	down_write(&cxl_memdev_rwsem);
-	cxlmd->cxlm = NULL;
-	up_write(&cxl_memdev_rwsem);
-}
-
-static const struct cdevm_file_operations cxl_memdev_fops = {
-	.fops = {
-		.owner = THIS_MODULE,
-		.unlocked_ioctl = cxl_memdev_ioctl,
-		.open = cxl_memdev_open,
-		.release = cxl_memdev_release_file,
-		.compat_ioctl = compat_ptr_ioctl,
-		.llseek = noop_llseek,
-	},
-	.shutdown = cxl_memdev_shutdown,
-};
-
-static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
-{
-	struct cxl_mem_command *c;
-
-	cxl_for_each_cmd(c)
-		if (c->opcode == opcode)
-			return c;
-
-	return NULL;
-}
-
-/**
- * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
- * @cxlm: The CXL memory device to communicate with.
- * @opcode: Opcode for the mailbox command.
- * @in: The input payload for the mailbox command.
- * @in_size: The length of the input payload
- * @out: Caller allocated buffer for the output.
- * @out_size: Expected size of output.
- *
- * Context: Any context. Will acquire and release mbox_mutex.
- * Return:
- *  * %>=0	- Number of bytes returned in @out.
- *  * %-E2BIG	- Payload is too large for hardware.
- *  * %-EBUSY	- Couldn't acquire exclusive mailbox access.
- *  * %-EFAULT	- Hardware error occurred.
- *  * %-ENXIO	- Command completed, but device reported an error.
- *  * %-EIO	- Unexpected output size.
- *
- * Mailbox commands may execute successfully yet the device itself reported an
- * error. While this distinction can be useful for commands from userspace, the
- * kernel will only be able to use results when both are successful.
- *
- * See __cxl_mem_mbox_send_cmd()
- */
-static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode,
-				 void *in, size_t in_size,
-				 void *out, size_t out_size)
-{
-	const struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
-	struct cxl_mbox_cmd mbox_cmd = {
-		.opcode = opcode,
-		.payload_in = in,
-		.size_in = in_size,
-		.size_out = out_size,
-		.payload_out = out,
-	};
-	int rc;
-
-	if (out_size > cxlm->payload_size)
-		return -E2BIG;
-
-	rc = cxlm->mbox_send(cxlm, &mbox_cmd);
-	if (rc)
-		return rc;
-
-	/* TODO: Map return code to proper kernel style errno */
-	if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
-		return -ENXIO;
-
-	/*
-	 * Variable sized commands can't be validated and so it's up to the
-	 * caller to do that if they wish.
-	 */
-	if (cmd->info.size_out >= 0 && mbox_cmd.size_out != out_size)
-		return -EIO;
-
-	return 0;
-}
-
 static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
 {
 	const int cap = readl(cxlm->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
@@ -902,31 +304,6 @@ static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
 	return 0;
 }
 
-static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev)
-{
-	struct device *dev = &pdev->dev;
-	struct cxl_mem *cxlm;
-
-	cxlm = devm_kzalloc(dev, sizeof(*cxlm), GFP_KERNEL);
-	if (!cxlm) {
-		dev_err(dev, "No memory available\n");
-		return ERR_PTR(-ENOMEM);
-	}
-
-	mutex_init(&cxlm->mbox_mutex);
-	cxlm->dev = dev;
-	cxlm->enabled_cmds =
-		devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count),
-				   sizeof(unsigned long),
-				   GFP_KERNEL | __GFP_ZERO);
-	if (!cxlm->enabled_cmds) {
-		dev_err(dev, "No memory available for bitmap\n");
-		return ERR_PTR(-ENOMEM);
-	}
-
-	return cxlm;
-}
-
 static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
 					  u8 bar, u64 offset)
 {
@@ -1136,311 +513,6 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 	return ret;
 }
 
-static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
-{
-	u32 remaining = size;
-	u32 offset = 0;
-
-	while (remaining) {
-		u32 xfer_size = min_t(u32, remaining, cxlm->payload_size);
-		struct cxl_mbox_get_log {
-			uuid_t uuid;
-			__le32 offset;
-			__le32 length;
-		} __packed log = {
-			.uuid = *uuid,
-			.offset = cpu_to_le32(offset),
-			.length = cpu_to_le32(xfer_size)
-		};
-		int rc;
-
-		rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_LOG, &log,
-					   sizeof(log), out, xfer_size);
-		if (rc < 0)
-			return rc;
-
-		out += xfer_size;
-		remaining -= xfer_size;
-		offset += xfer_size;
-	}
-
-	return 0;
-}
-
-/**
- * cxl_walk_cel() - Walk through the Command Effects Log.
- * @cxlm: Device.
- * @size: Length of the Command Effects Log.
- * @cel: CEL
- *
- * Iterate over each entry in the CEL and determine if the driver supports the
- * command. If so, the command is enabled for the device and can be used later.
- */
-static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel)
-{
-	struct cel_entry {
-		__le16 opcode;
-		__le16 effect;
-	} __packed * cel_entry;
-	const int cel_entries = size / sizeof(*cel_entry);
-	int i;
-
-	cel_entry = (struct cel_entry *)cel;
-
-	for (i = 0; i < cel_entries; i++) {
-		u16 opcode = le16_to_cpu(cel_entry[i].opcode);
-		struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
-
-		if (!cmd) {
-			dev_dbg(cxlm->dev,
-				"Opcode 0x%04x unsupported by driver", opcode);
-			continue;
-		}
-
-		set_bit(cmd->info.id, cxlm->enabled_cmds);
-	}
-}
-
-struct cxl_mbox_get_supported_logs {
-	__le16 entries;
-	u8 rsvd[6];
-	struct gsl_entry {
-		uuid_t uuid;
-		__le32 size;
-	} __packed entry[];
-} __packed;
-
-static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_mem *cxlm)
-{
-	struct cxl_mbox_get_supported_logs *ret;
-	int rc;
-
-	ret = kvmalloc(cxlm->payload_size, GFP_KERNEL);
-	if (!ret)
-		return ERR_PTR(-ENOMEM);
-
-	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_SUPPORTED_LOGS, NULL,
-				   0, ret, cxlm->payload_size);
-	if (rc < 0) {
-		kvfree(ret);
-		return ERR_PTR(rc);
-	}
-
-	return ret;
-}
-
-/**
- * cxl_mem_get_partition_info - Get partition info
- * @cxlm: The device to act on
- * @active_volatile_bytes: returned active volatile capacity
- * @active_persistent_bytes: returned active persistent capacity
- * @next_volatile_bytes: return next volatile capacity
- * @next_persistent_bytes: return next persistent capacity
- *
- * Retrieve the current partition info for the device specified.  If not 0, the
- * 'next' values are pending and take effect on next cold reset.
- *
- * Return: 0 if no error; otherwise the result of the mailbox command.
- *
- * See CXL @8.2.9.5.2.1 Get Partition Info
- */
-static int cxl_mem_get_partition_info(struct cxl_mem *cxlm,
-				      u64 *active_volatile_bytes,
-				      u64 *active_persistent_bytes,
-				      u64 *next_volatile_bytes,
-				      u64 *next_persistent_bytes)
-{
-	struct cxl_mbox_get_partition_info {
-		__le64 active_volatile_cap;
-		__le64 active_persistent_cap;
-		__le64 next_volatile_cap;
-		__le64 next_persistent_cap;
-	} __packed pi;
-	int rc;
-
-	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_PARTITION_INFO,
-				   NULL, 0, &pi, sizeof(pi));
-	if (rc)
-		return rc;
-
-	*active_volatile_bytes = le64_to_cpu(pi.active_volatile_cap);
-	*active_persistent_bytes = le64_to_cpu(pi.active_persistent_cap);
-	*next_volatile_bytes = le64_to_cpu(pi.next_volatile_cap);
-	*next_persistent_bytes = le64_to_cpu(pi.next_persistent_cap);
-
-	*active_volatile_bytes *= CXL_CAPACITY_MULTIPLIER;
-	*active_persistent_bytes *= CXL_CAPACITY_MULTIPLIER;
-	*next_volatile_bytes *= CXL_CAPACITY_MULTIPLIER;
-	*next_persistent_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	return 0;
-}
-
-/**
- * cxl_mem_enumerate_cmds() - Enumerate commands for a device.
- * @cxlm: The device.
- *
- * Returns 0 if enumerate completed successfully.
- *
- * CXL devices have optional support for certain commands. This function will
- * determine the set of supported commands for the hardware and update the
- * enabled_cmds bitmap in the @cxlm.
- */
-static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm)
-{
-	struct cxl_mbox_get_supported_logs *gsl;
-	struct device *dev = cxlm->dev;
-	struct cxl_mem_command *cmd;
-	int i, rc;
-
-	gsl = cxl_get_gsl(cxlm);
-	if (IS_ERR(gsl))
-		return PTR_ERR(gsl);
-
-	rc = -ENOENT;
-	for (i = 0; i < le16_to_cpu(gsl->entries); i++) {
-		u32 size = le32_to_cpu(gsl->entry[i].size);
-		uuid_t uuid = gsl->entry[i].uuid;
-		u8 *log;
-
-		dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size);
-
-		if (!uuid_equal(&uuid, &log_uuid[CEL_UUID]))
-			continue;
-
-		log = kvmalloc(size, GFP_KERNEL);
-		if (!log) {
-			rc = -ENOMEM;
-			goto out;
-		}
-
-		rc = cxl_xfer_log(cxlm, &uuid, size, log);
-		if (rc) {
-			kvfree(log);
-			goto out;
-		}
-
-		cxl_walk_cel(cxlm, size, log);
-		kvfree(log);
-
-		/* In case CEL was bogus, enable some default commands. */
-		cxl_for_each_cmd(cmd)
-			if (cmd->flags & CXL_CMD_FLAG_FORCE_ENABLE)
-				set_bit(cmd->info.id, cxlm->enabled_cmds);
-
-		/* Found the required CEL */
-		rc = 0;
-	}
-
-out:
-	kvfree(gsl);
-	return rc;
-}
-
-/**
- * cxl_mem_identify() - Send the IDENTIFY command to the device.
- * @cxlm: The device to identify.
- *
- * Return: 0 if identify was executed successfully.
- *
- * This will dispatch the identify command to the device and on success populate
- * structures to be exported to sysfs.
- */
-static int cxl_mem_identify(struct cxl_mem *cxlm)
-{
-	/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
-	struct cxl_mbox_identify {
-		char fw_revision[0x10];
-		__le64 total_capacity;
-		__le64 volatile_capacity;
-		__le64 persistent_capacity;
-		__le64 partition_align;
-		__le16 info_event_log_size;
-		__le16 warning_event_log_size;
-		__le16 failure_event_log_size;
-		__le16 fatal_event_log_size;
-		__le32 lsa_size;
-		u8 poison_list_max_mer[3];
-		__le16 inject_poison_limit;
-		u8 poison_caps;
-		u8 qos_telemetry_caps;
-	} __packed id;
-	int rc;
-
-	rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0, &id,
-				   sizeof(id));
-	if (rc < 0)
-		return rc;
-
-	cxlm->total_bytes = le64_to_cpu(id.total_capacity);
-	cxlm->total_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	cxlm->volatile_only_bytes = le64_to_cpu(id.volatile_capacity);
-	cxlm->volatile_only_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	cxlm->persistent_only_bytes = le64_to_cpu(id.persistent_capacity);
-	cxlm->persistent_only_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	cxlm->partition_align_bytes = le64_to_cpu(id.partition_align);
-	cxlm->partition_align_bytes *= CXL_CAPACITY_MULTIPLIER;
-
-	dev_dbg(cxlm->dev,
-		"Identify Memory Device\n"
-		"     total_bytes = %#llx\n"
-		"     volatile_only_bytes = %#llx\n"
-		"     persistent_only_bytes = %#llx\n"
-		"     partition_align_bytes = %#llx\n",
-		cxlm->total_bytes, cxlm->volatile_only_bytes,
-		cxlm->persistent_only_bytes, cxlm->partition_align_bytes);
-
-	cxlm->lsa_size = le32_to_cpu(id.lsa_size);
-	memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
-
-	return 0;
-}
-
-static int cxl_mem_create_range_info(struct cxl_mem *cxlm)
-{
-	int rc;
-
-	if (cxlm->partition_align_bytes == 0) {
-		cxlm->ram_range.start = 0;
-		cxlm->ram_range.end = cxlm->volatile_only_bytes - 1;
-		cxlm->pmem_range.start = cxlm->volatile_only_bytes;
-		cxlm->pmem_range.end = cxlm->volatile_only_bytes +
-					cxlm->persistent_only_bytes - 1;
-		return 0;
-	}
-
-	rc = cxl_mem_get_partition_info(cxlm,
-					&cxlm->active_volatile_bytes,
-					&cxlm->active_persistent_bytes,
-					&cxlm->next_volatile_bytes,
-					&cxlm->next_persistent_bytes);
-	if (rc < 0) {
-		dev_err(cxlm->dev, "Failed to query partition information\n");
-		return rc;
-	}
-
-	dev_dbg(cxlm->dev,
-		"Get Partition Info\n"
-		"     active_volatile_bytes = %#llx\n"
-		"     active_persistent_bytes = %#llx\n"
-		"     next_volatile_bytes = %#llx\n"
-		"     next_persistent_bytes = %#llx\n",
-		cxlm->active_volatile_bytes, cxlm->active_persistent_bytes,
-		cxlm->next_volatile_bytes, cxlm->next_persistent_bytes);
-
-	cxlm->ram_range.start = 0;
-	cxlm->ram_range.end = cxlm->active_volatile_bytes - 1;
-
-	cxlm->pmem_range.start = cxlm->active_volatile_bytes;
-	cxlm->pmem_range.end = cxlm->active_volatile_bytes +
-				cxlm->active_persistent_bytes - 1;
-
-	return 0;
-}
-
 static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct cxl_memdev *cxlmd;
@@ -1451,7 +523,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
-	cxlm = cxl_mem_create(pdev);
+	cxlm = cxl_mem_create(&pdev->dev);
 	if (IS_ERR(cxlm))
 		return PTR_ERR(cxlm);
 
@@ -1475,7 +547,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
-	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm, &cxl_memdev_fops);
+	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm);
 	if (IS_ERR(cxlmd))
 		return PTR_ERR(cxlmd);
 
@@ -1503,7 +575,6 @@ static struct pci_driver cxl_mem_driver = {
 
 static __init int cxl_mem_init(void)
 {
-	struct dentry *mbox_debugfs;
 	int rc;
 
 	/* Double check the anonymous union trickery in struct cxl_regs */
@@ -1514,17 +585,11 @@ static __init int cxl_mem_init(void)
 	if (rc)
 		return rc;
 
-	cxl_debugfs = debugfs_create_dir("cxl", NULL);
-	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
-	debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs,
-			    &cxl_raw_allow_all);
-
 	return 0;
 }
 
 static __exit void cxl_mem_exit(void)
 {
-	debugfs_remove_recursive(cxl_debugfs);
 	pci_unregister_driver(&cxl_mem_driver);
 }
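
The cxl_xfer_log() routine moved out of this file shows a pattern worth noting: a Get Log read larger than the device's mailbox payload limit is split into payload-sized transactions, advancing an offset until the whole log has been pulled. A minimal userspace sketch of that chunking loop follows; PAYLOAD_SIZE, mock_get_log() and the fake log buffer are hypothetical stand-ins for cxlm->payload_size and the real mailbox transport, not the driver's API:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for cxlm->payload_size: the largest single
 * mailbox transfer the mock "device" accepts. */
#define PAYLOAD_SIZE 16

static uint8_t device_log[40];		/* fake Command Effects Log */

static void init_device_log(void)
{
	for (unsigned int i = 0; i < sizeof(device_log); i++)
		device_log[i] = (uint8_t)i;
}

/* Mock transport: one bounded transaction, like a single Get Log command. */
static int mock_get_log(uint32_t offset, uint32_t length, uint8_t *out)
{
	if (length > PAYLOAD_SIZE || offset + length > sizeof(device_log))
		return -1;
	memcpy(out, device_log + offset, length);
	return 0;
}

/* The chunking loop from cxl_xfer_log(): walk offset/remaining until
 * the full log has been read through payload-sized transactions. */
static int xfer_log(uint32_t size, uint8_t *out)
{
	uint32_t remaining = size, offset = 0;

	while (remaining) {
		uint32_t xfer = remaining < PAYLOAD_SIZE ? remaining : PAYLOAD_SIZE;

		if (mock_get_log(offset, xfer, out + offset))
			return -1;
		remaining -= xfer;
		offset += xfer;
	}
	return 0;
}
```

A 40-byte log with a 16-byte payload limit takes three transactions (16 + 16 + 8); a request past the end of the log fails on the final chunk.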
 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-09 22:28 ` [PATCH 10/23] libnvdimm/labels: Add uuid helpers Dan Williams
@ 2021-08-11  8:05   ` Andy Shevchenko
  2021-08-11 16:59     ` Andy Shevchenko
  2021-08-11 18:13   ` Jonathan Cameron
  1 sibling, 1 reply; 61+ messages in thread
From: Andy Shevchenko @ 2021-08-11  8:05 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, Jonathan.Cameron, ben.widawsky,
	vishal.l.verma, alison.schofield, ira.weiny

On Mon, Aug 09, 2021 at 03:28:40PM -0700, Dan Williams wrote:
> In preparation for CXL labels that move the uuid to a different offset
> in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> proper uuid_t type. That type definition predated the libnvdimm
> subsystem, so now is as good a time as any to convert all the uuid
> handling in the subsystem to uuid_t to match the helpers.
> 
> As for the whitespace changes, all new code is clang-format compliant.

Thanks, looks good to me!
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
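
For context on what the quoted patch below deletes: the open-coded nd_uuid_parse() accepted two hex digits per byte with an optional '-', ':' or newline separator after each pair, which is looser than the canonical format the generic uuid_parse() enforces. A userspace sketch of that permissive parsing logic, for illustration only (demo_uuid_parse and hex_nibble are hypothetical names, not kernel API):

```c
#include <stdint.h>

/* Sketch of the nd_uuid_parse() logic removed by this patch: two hex
 * digits per byte, optionally separated by '-', ':' or a newline.
 * The patch replaces this with the kernel's stricter uuid_parse(). */
static int is_uuid_sep(char sep)
{
	return sep == '\n' || sep == '-' || sep == ':' || sep == '\0';
}

static int hex_nibble(char c)
{
	if (c >= '0' && c <= '9')
		return c - '0';
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 10;
	if (c >= 'A' && c <= 'F')
		return c - 'A' + 10;
	return -1;
}

static int demo_uuid_parse(const char *str, uint8_t uuid_out[16])
{
	for (int i = 0; i < 16; i++) {
		int hi = hex_nibble(str[0]), lo = hex_nibble(str[1]);

		if (hi < 0 || lo < 0)
			return -1;	/* not two hex digits */
		uuid_out[i] = (uint8_t)((hi << 4) | lo);
		str += 2;
		if (is_uuid_sep(*str))
			str++;		/* skip one optional separator */
	}
	return 0;
}
```

Note the separator is optional and checked per byte, so "9e5156ba…" with or without dashes parses; uuid_parse() instead requires the exact 8-4-4-4-12 layout.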

> Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/nvdimm/btt.c            |   11 +++--
>  drivers/nvdimm/btt.h            |    4 +-
>  drivers/nvdimm/btt_devs.c       |   12 +++---
>  drivers/nvdimm/core.c           |   40 ++-----------------
>  drivers/nvdimm/label.c          |   34 +++++++---------
>  drivers/nvdimm/label.h          |    3 -
>  drivers/nvdimm/namespace_devs.c |   83 ++++++++++++++++++++-------------------
>  drivers/nvdimm/nd-core.h        |    5 +-
>  drivers/nvdimm/nd.h             |   37 ++++++++++++++++-
>  drivers/nvdimm/pfn_devs.c       |    2 -
>  include/linux/nd.h              |    4 +-
>  11 files changed, 115 insertions(+), 120 deletions(-)
> 
> diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
> index 92dec4952297..1cdfbadb7408 100644
> --- a/drivers/nvdimm/btt.c
> +++ b/drivers/nvdimm/btt.c
> @@ -973,7 +973,7 @@ static int btt_arena_write_layout(struct arena_info *arena)
>  	u64 sum;
>  	struct btt_sb *super;
>  	struct nd_btt *nd_btt = arena->nd_btt;
> -	const u8 *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
> +	const uuid_t *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
>  
>  	ret = btt_map_init(arena);
>  	if (ret)
> @@ -988,8 +988,8 @@ static int btt_arena_write_layout(struct arena_info *arena)
>  		return -ENOMEM;
>  
>  	strncpy(super->signature, BTT_SIG, BTT_SIG_LEN);
> -	memcpy(super->uuid, nd_btt->uuid, 16);
> -	memcpy(super->parent_uuid, parent_uuid, 16);
> +	uuid_copy(&super->uuid, nd_btt->uuid);
> +	uuid_copy(&super->parent_uuid, parent_uuid);
>  	super->flags = cpu_to_le32(arena->flags);
>  	super->version_major = cpu_to_le16(arena->version_major);
>  	super->version_minor = cpu_to_le16(arena->version_minor);
> @@ -1575,7 +1575,8 @@ static void btt_blk_cleanup(struct btt *btt)
>   * Pointer to a new struct btt on success, NULL on failure.
>   */
>  static struct btt *btt_init(struct nd_btt *nd_btt, unsigned long long rawsize,
> -		u32 lbasize, u8 *uuid, struct nd_region *nd_region)
> +			    u32 lbasize, uuid_t *uuid,
> +			    struct nd_region *nd_region)
>  {
>  	int ret;
>  	struct btt *btt;
> @@ -1694,7 +1695,7 @@ int nvdimm_namespace_attach_btt(struct nd_namespace_common *ndns)
>  	}
>  	nd_region = to_nd_region(nd_btt->dev.parent);
>  	btt = btt_init(nd_btt, rawsize, nd_btt->lbasize, nd_btt->uuid,
> -			nd_region);
> +		       nd_region);
>  	if (!btt)
>  		return -ENOMEM;
>  	nd_btt->btt = btt;
> diff --git a/drivers/nvdimm/btt.h b/drivers/nvdimm/btt.h
> index 0c76c0333f6e..fc3512d92ae5 100644
> --- a/drivers/nvdimm/btt.h
> +++ b/drivers/nvdimm/btt.h
> @@ -94,8 +94,8 @@ struct log_group {
>  
>  struct btt_sb {
>  	u8 signature[BTT_SIG_LEN];
> -	u8 uuid[16];
> -	u8 parent_uuid[16];
> +	uuid_t uuid;
> +	uuid_t parent_uuid;
>  	__le32 flags;
>  	__le16 version_major;
>  	__le16 version_minor;
> diff --git a/drivers/nvdimm/btt_devs.c b/drivers/nvdimm/btt_devs.c
> index 05feb97e11ce..5ad45e9e48c9 100644
> --- a/drivers/nvdimm/btt_devs.c
> +++ b/drivers/nvdimm/btt_devs.c
> @@ -180,8 +180,8 @@ bool is_nd_btt(struct device *dev)
>  EXPORT_SYMBOL(is_nd_btt);
>  
>  static struct device *__nd_btt_create(struct nd_region *nd_region,
> -		unsigned long lbasize, u8 *uuid,
> -		struct nd_namespace_common *ndns)
> +				      unsigned long lbasize, uuid_t *uuid,
> +				      struct nd_namespace_common *ndns)
>  {
>  	struct nd_btt *nd_btt;
>  	struct device *dev;
> @@ -244,14 +244,14 @@ struct device *nd_btt_create(struct nd_region *nd_region)
>   */
>  bool nd_btt_arena_is_valid(struct nd_btt *nd_btt, struct btt_sb *super)
>  {
> -	const u8 *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
> +	const uuid_t *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
>  	u64 checksum;
>  
>  	if (memcmp(super->signature, BTT_SIG, BTT_SIG_LEN) != 0)
>  		return false;
>  
> -	if (!guid_is_null((guid_t *)&super->parent_uuid))
> -		if (memcmp(super->parent_uuid, parent_uuid, 16) != 0)
> +	if (!uuid_is_null(&super->parent_uuid))
> +		if (!uuid_equal(&super->parent_uuid, parent_uuid))
>  			return false;
>  
>  	checksum = le64_to_cpu(super->checksum);
> @@ -319,7 +319,7 @@ static int __nd_btt_probe(struct nd_btt *nd_btt,
>  		return rc;
>  
>  	nd_btt->lbasize = le32_to_cpu(btt_sb->external_lbasize);
> -	nd_btt->uuid = kmemdup(btt_sb->uuid, 16, GFP_KERNEL);
> +	nd_btt->uuid = kmemdup(&btt_sb->uuid, sizeof(uuid_t), GFP_KERNEL);
>  	if (!nd_btt->uuid)
>  		return -ENOMEM;
>  
> diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c
> index 7de592d7eff4..690152d62bf0 100644
> --- a/drivers/nvdimm/core.c
> +++ b/drivers/nvdimm/core.c
> @@ -206,38 +206,6 @@ struct device *to_nvdimm_bus_dev(struct nvdimm_bus *nvdimm_bus)
>  }
>  EXPORT_SYMBOL_GPL(to_nvdimm_bus_dev);
>  
> -static bool is_uuid_sep(char sep)
> -{
> -	if (sep == '\n' || sep == '-' || sep == ':' || sep == '\0')
> -		return true;
> -	return false;
> -}
> -
> -static int nd_uuid_parse(struct device *dev, u8 *uuid_out, const char *buf,
> -		size_t len)
> -{
> -	const char *str = buf;
> -	u8 uuid[16];
> -	int i;
> -
> -	for (i = 0; i < 16; i++) {
> -		if (!isxdigit(str[0]) || !isxdigit(str[1])) {
> -			dev_dbg(dev, "pos: %d buf[%zd]: %c buf[%zd]: %c\n",
> -					i, str - buf, str[0],
> -					str + 1 - buf, str[1]);
> -			return -EINVAL;
> -		}
> -
> -		uuid[i] = (hex_to_bin(str[0]) << 4) | hex_to_bin(str[1]);
> -		str += 2;
> -		if (is_uuid_sep(*str))
> -			str++;
> -	}
> -
> -	memcpy(uuid_out, uuid, sizeof(uuid));
> -	return 0;
> -}
> -
>  /**
>   * nd_uuid_store: common implementation for writing 'uuid' sysfs attributes
>   * @dev: container device for the uuid property
> @@ -248,21 +216,21 @@ static int nd_uuid_parse(struct device *dev, u8 *uuid_out, const char *buf,
>   * (driver detached)
>   * LOCKING: expects nd_device_lock() is held on entry
>   */
> -int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
> +int nd_uuid_store(struct device *dev, uuid_t **uuid_out, const char *buf,
>  		size_t len)
>  {
> -	u8 uuid[16];
> +	uuid_t uuid;
>  	int rc;
>  
>  	if (dev->driver)
>  		return -EBUSY;
>  
> -	rc = nd_uuid_parse(dev, uuid, buf, len);
> +	rc = uuid_parse(buf, &uuid);
>  	if (rc)
>  		return rc;
>  
>  	kfree(*uuid_out);
> -	*uuid_out = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
> +	*uuid_out = kmemdup(&uuid, sizeof(uuid), GFP_KERNEL);
>  	if (!(*uuid_out))
>  		return -ENOMEM;
>  
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index 2ba31b883b28..99608e6aeaae 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -326,7 +326,8 @@ static bool preamble_index(struct nvdimm_drvdata *ndd, int idx,
>  	return true;
>  }
>  
> -char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags)
> +char *nd_label_gen_id(struct nd_label_id *label_id, const uuid_t *uuid,
> +		      u32 flags)
>  {
>  	if (!label_id || !uuid)
>  		return NULL;
> @@ -405,9 +406,9 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
>  		struct nvdimm *nvdimm = to_nvdimm(ndd->dev);
>  		struct nd_namespace_label *nd_label;
>  		struct nd_region *nd_region = NULL;
> -		u8 label_uuid[NSLABEL_UUID_LEN];
>  		struct nd_label_id label_id;
>  		struct resource *res;
> +		uuid_t label_uuid;
>  		u32 flags;
>  
>  		nd_label = to_label(ndd, slot);
> @@ -415,11 +416,11 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
>  		if (!slot_valid(ndd, nd_label, slot))
>  			continue;
>  
> -		memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> +		nsl_get_uuid(ndd, nd_label, &label_uuid);
>  		flags = nsl_get_flags(ndd, nd_label);
>  		if (test_bit(NDD_NOBLK, &nvdimm->flags))
>  			flags &= ~NSLABEL_FLAG_LOCAL;
> -		nd_label_gen_id(&label_id, label_uuid, flags);
> +		nd_label_gen_id(&label_id, &label_uuid, flags);
>  		res = nvdimm_allocate_dpa(ndd, &label_id,
>  					  nsl_get_dpa(ndd, nd_label),
>  					  nsl_get_rawsize(ndd, nd_label));
> @@ -896,7 +897,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
>  
>  	nd_label = to_label(ndd, slot);
>  	memset(nd_label, 0, sizeof_namespace_label(ndd));
> -	memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
> +	nsl_set_uuid(ndd, nd_label, nspm->uuid);
>  	nsl_set_name(ndd, nd_label, nspm->alt_name);
>  	nsl_set_flags(ndd, nd_label, flags);
>  	nsl_set_nlabel(ndd, nd_label, nd_region->ndr_mappings);
> @@ -923,9 +924,8 @@ static int __pmem_label_update(struct nd_region *nd_region,
>  	list_for_each_entry(label_ent, &nd_mapping->labels, list) {
>  		if (!label_ent->label)
>  			continue;
> -		if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags)
> -				|| memcmp(nspm->uuid, label_ent->label->uuid,
> -					NSLABEL_UUID_LEN) == 0)
> +		if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags) ||
> +		    uuid_equal(nspm->uuid, nsl_ref_uuid(ndd, label_ent->label)))
>  			reap_victim(nd_mapping, label_ent);
>  	}
>  
> @@ -1050,7 +1050,6 @@ static int __blk_label_update(struct nd_region *nd_region,
>  	unsigned long *free, *victim_map = NULL;
>  	struct resource *res, **old_res_list;
>  	struct nd_label_id label_id;
> -	u8 uuid[NSLABEL_UUID_LEN];
>  	int min_dpa_idx = 0;
>  	LIST_HEAD(list);
>  	u32 nslot, slot;
> @@ -1088,8 +1087,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  		/* mark unused labels for garbage collection */
>  		for_each_clear_bit_le(slot, free, nslot) {
>  			nd_label = to_label(ndd, slot);
> -			memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -			if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
> +			if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
>  				continue;
>  			res = to_resource(ndd, nd_label);
>  			if (res && is_old_resource(res, old_res_list,
> @@ -1158,7 +1156,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  
>  		nd_label = to_label(ndd, slot);
>  		memset(nd_label, 0, sizeof_namespace_label(ndd));
> -		memcpy(nd_label->uuid, nsblk->uuid, NSLABEL_UUID_LEN);
> +		nsl_set_uuid(ndd, nd_label, nsblk->uuid);
>  		nsl_set_name(ndd, nd_label, nsblk->alt_name);
>  		nsl_set_flags(ndd, nd_label, NSLABEL_FLAG_LOCAL);
>  
> @@ -1206,8 +1204,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  		if (!nd_label)
>  			continue;
>  		nlabel++;
> -		memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -		if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
> +		if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
>  			continue;
>  		nlabel--;
>  		list_move(&label_ent->list, &list);
> @@ -1237,8 +1234,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  	}
>  	for_each_clear_bit_le(slot, free, nslot) {
>  		nd_label = to_label(ndd, slot);
> -		memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -		if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
> +		if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
>  			continue;
>  		res = to_resource(ndd, nd_label);
>  		res->flags &= ~DPA_RESOURCE_ADJUSTED;
> @@ -1318,12 +1314,11 @@ static int init_labels(struct nd_mapping *nd_mapping, int num_labels)
>  	return max(num_labels, old_num_labels);
>  }
>  
> -static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
> +static int del_labels(struct nd_mapping *nd_mapping, uuid_t *uuid)
>  {
>  	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
>  	struct nd_label_ent *label_ent, *e;
>  	struct nd_namespace_index *nsindex;
> -	u8 label_uuid[NSLABEL_UUID_LEN];
>  	unsigned long *free;
>  	LIST_HEAD(list);
>  	u32 nslot, slot;
> @@ -1343,8 +1338,7 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
>  		if (!nd_label)
>  			continue;
>  		active++;
> -		memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -		if (memcmp(label_uuid, uuid, NSLABEL_UUID_LEN) != 0)
> +		if (!nsl_validate_uuid(ndd, nd_label, uuid))
>  			continue;
>  		active--;
>  		slot = to_slot(ndd, nd_label);
> diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
> index 31f94fad7b92..e6e77691dbec 100644
> --- a/drivers/nvdimm/label.h
> +++ b/drivers/nvdimm/label.h
> @@ -14,7 +14,6 @@ enum {
>  	NSINDEX_SIG_LEN = 16,
>  	NSINDEX_ALIGN = 256,
>  	NSINDEX_SEQ_MASK = 0x3,
> -	NSLABEL_UUID_LEN = 16,
>  	NSLABEL_NAME_LEN = 64,
>  	NSLABEL_FLAG_ROLABEL = 0x1,  /* read-only label */
>  	NSLABEL_FLAG_LOCAL = 0x2,    /* DIMM-local namespace */
> @@ -80,7 +79,7 @@ struct nd_namespace_index {
>   * @unused: must be zero
>   */
>  struct nd_namespace_label {
> -	u8 uuid[NSLABEL_UUID_LEN];
> +	uuid_t uuid;
>  	u8 name[NSLABEL_NAME_LEN];
>  	__le32 flags;
>  	__le16 nlabel;
> diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> index 58c76d74127a..20ea3ccd1f29 100644
> --- a/drivers/nvdimm/namespace_devs.c
> +++ b/drivers/nvdimm/namespace_devs.c
> @@ -51,7 +51,7 @@ static bool is_namespace_io(const struct device *dev);
>  
>  static int is_uuid_busy(struct device *dev, void *data)
>  {
> -	u8 *uuid1 = data, *uuid2 = NULL;
> +	uuid_t *uuid1 = data, *uuid2 = NULL;
>  
>  	if (is_namespace_pmem(dev)) {
>  		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
> @@ -71,7 +71,7 @@ static int is_uuid_busy(struct device *dev, void *data)
>  		uuid2 = nd_pfn->uuid;
>  	}
>  
> -	if (uuid2 && memcmp(uuid1, uuid2, NSLABEL_UUID_LEN) == 0)
> +	if (uuid2 && uuid_equal(uuid1, uuid2))
>  		return -EBUSY;
>  
>  	return 0;
> @@ -89,7 +89,7 @@ static int is_namespace_uuid_busy(struct device *dev, void *data)
>   * @dev: any device on a nvdimm_bus
>   * @uuid: uuid to check
>   */
> -bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
> +bool nd_is_uuid_unique(struct device *dev, uuid_t *uuid)
>  {
>  	struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev);
>  
> @@ -192,12 +192,10 @@ const char *nvdimm_namespace_disk_name(struct nd_namespace_common *ndns,
>  }
>  EXPORT_SYMBOL(nvdimm_namespace_disk_name);
>  
> -const u8 *nd_dev_to_uuid(struct device *dev)
> +const uuid_t *nd_dev_to_uuid(struct device *dev)
>  {
> -	static const u8 null_uuid[16];
> -
>  	if (!dev)
> -		return null_uuid;
> +		return &uuid_null;
>  
>  	if (is_namespace_pmem(dev)) {
>  		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
> @@ -208,7 +206,7 @@ const u8 *nd_dev_to_uuid(struct device *dev)
>  
>  		return nsblk->uuid;
>  	} else
> -		return null_uuid;
> +		return &uuid_null;
>  }
>  EXPORT_SYMBOL(nd_dev_to_uuid);
>  
> @@ -938,7 +936,8 @@ static void nd_namespace_pmem_set_resource(struct nd_region *nd_region,
>  	res->end = res->start + size - 1;
>  }
>  
> -static bool uuid_not_set(const u8 *uuid, struct device *dev, const char *where)
> +static bool uuid_not_set(const uuid_t *uuid, struct device *dev,
> +			 const char *where)
>  {
>  	if (!uuid) {
>  		dev_dbg(dev, "%s: uuid not set\n", where);
> @@ -957,7 +956,7 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
>  	struct nd_label_id label_id;
>  	u32 flags = 0, remainder;
>  	int rc, i, id = -1;
> -	u8 *uuid = NULL;
> +	uuid_t *uuid = NULL;
>  
>  	if (dev->driver || ndns->claim)
>  		return -EBUSY;
> @@ -1050,7 +1049,7 @@ static ssize_t size_store(struct device *dev,
>  {
>  	struct nd_region *nd_region = to_nd_region(dev->parent);
>  	unsigned long long val;
> -	u8 **uuid = NULL;
> +	uuid_t **uuid = NULL;
>  	int rc;
>  
>  	rc = kstrtoull(buf, 0, &val);
> @@ -1147,7 +1146,7 @@ static ssize_t size_show(struct device *dev,
>  }
>  static DEVICE_ATTR(size, 0444, size_show, size_store);
>  
> -static u8 *namespace_to_uuid(struct device *dev)
> +static uuid_t *namespace_to_uuid(struct device *dev)
>  {
>  	if (is_namespace_pmem(dev)) {
>  		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
> @@ -1161,10 +1160,10 @@ static u8 *namespace_to_uuid(struct device *dev)
>  		return ERR_PTR(-ENXIO);
>  }
>  
> -static ssize_t uuid_show(struct device *dev,
> -		struct device_attribute *attr, char *buf)
> +static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
> +			 char *buf)
>  {
> -	u8 *uuid = namespace_to_uuid(dev);
> +	uuid_t *uuid = namespace_to_uuid(dev);
>  
>  	if (IS_ERR(uuid))
>  		return PTR_ERR(uuid);
> @@ -1181,7 +1180,8 @@ static ssize_t uuid_show(struct device *dev,
>   * @old_uuid: reference to the uuid storage location in the namespace object
>   */
>  static int namespace_update_uuid(struct nd_region *nd_region,
> -		struct device *dev, u8 *new_uuid, u8 **old_uuid)
> +				 struct device *dev, uuid_t *new_uuid,
> +				 uuid_t **old_uuid)
>  {
>  	u32 flags = is_namespace_blk(dev) ? NSLABEL_FLAG_LOCAL : 0;
>  	struct nd_label_id old_label_id;
> @@ -1234,7 +1234,7 @@ static int namespace_update_uuid(struct nd_region *nd_region,
>  
>  			if (!nd_label)
>  				continue;
> -			nd_label_gen_id(&label_id, nd_label->uuid,
> +			nd_label_gen_id(&label_id, nsl_ref_uuid(ndd, nd_label),
>  					nsl_get_flags(ndd, nd_label));
>  			if (strcmp(old_label_id.id, label_id.id) == 0)
>  				set_bit(ND_LABEL_REAP, &label_ent->flags);
> @@ -1251,9 +1251,9 @@ static ssize_t uuid_store(struct device *dev,
>  		struct device_attribute *attr, const char *buf, size_t len)
>  {
>  	struct nd_region *nd_region = to_nd_region(dev->parent);
> -	u8 *uuid = NULL;
> +	uuid_t *uuid = NULL;
> +	uuid_t **ns_uuid;
>  	ssize_t rc = 0;
> -	u8 **ns_uuid;
>  
>  	if (is_namespace_pmem(dev)) {
>  		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
> @@ -1378,8 +1378,8 @@ static ssize_t dpa_extents_show(struct device *dev,
>  {
>  	struct nd_region *nd_region = to_nd_region(dev->parent);
>  	struct nd_label_id label_id;
> +	uuid_t *uuid = NULL;
>  	int count = 0, i;
> -	u8 *uuid = NULL;
>  	u32 flags = 0;
>  
>  	nvdimm_bus_lock(dev);
> @@ -1831,8 +1831,8 @@ static struct device **create_namespace_io(struct nd_region *nd_region)
>  	return devs;
>  }
>  
> -static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
> -		u64 cookie, u16 pos)
> +static bool has_uuid_at_pos(struct nd_region *nd_region, const uuid_t *uuid,
> +			    u64 cookie, u16 pos)
>  {
>  	struct nd_namespace_label *found = NULL;
>  	int i;
> @@ -1856,7 +1856,7 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
>  			if (!nsl_validate_isetcookie(ndd, nd_label, cookie))
>  				continue;
>  
> -			if (memcmp(nd_label->uuid, uuid, NSLABEL_UUID_LEN) != 0)
> +			if (!nsl_validate_uuid(ndd, nd_label, uuid))
>  				continue;
>  
>  			if (!nsl_validate_type_guid(ndd, nd_label,
> @@ -1881,7 +1881,7 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
>  	return found != NULL;
>  }
>  
> -static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
> +static int select_pmem_id(struct nd_region *nd_region, const uuid_t *pmem_id)
>  {
>  	int i;
>  
> @@ -1900,7 +1900,7 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
>  			nd_label = label_ent->label;
>  			if (!nd_label)
>  				continue;
> -			if (memcmp(nd_label->uuid, pmem_id, NSLABEL_UUID_LEN) == 0)
> +			if (nsl_validate_uuid(ndd, nd_label, pmem_id))
>  				break;
>  			nd_label = NULL;
>  		}
> @@ -1923,7 +1923,8 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
>  			/* pass */;
>  		else {
>  			dev_dbg(&nd_region->dev, "%s invalid label for %pUb\n",
> -					dev_name(ndd->dev), nd_label->uuid);
> +				dev_name(ndd->dev),
> +				nsl_ref_uuid(ndd, nd_label));
>  			return -EINVAL;
>  		}
>  
> @@ -1963,12 +1964,12 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  
>  	if (!nsl_validate_isetcookie(ndd, nd_label, cookie)) {
>  		dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
> -				nd_label->uuid);
> +			nsl_ref_uuid(ndd, nd_label));
>  		if (!nsl_validate_isetcookie(ndd, nd_label, altcookie))
>  			return ERR_PTR(-EAGAIN);
>  
>  		dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
> -				nd_label->uuid);
> +			nsl_ref_uuid(ndd, nd_label));
>  	}
>  
>  	nspm = kzalloc(sizeof(*nspm), GFP_KERNEL);
> @@ -1984,9 +1985,11 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  	res->flags = IORESOURCE_MEM;
>  
>  	for (i = 0; i < nd_region->ndr_mappings; i++) {
> -		if (has_uuid_at_pos(nd_region, nd_label->uuid, cookie, i))
> +		if (has_uuid_at_pos(nd_region, nsl_ref_uuid(ndd, nd_label),
> +				    cookie, i))
>  			continue;
> -		if (has_uuid_at_pos(nd_region, nd_label->uuid, altcookie, i))
> +		if (has_uuid_at_pos(nd_region, nsl_ref_uuid(ndd, nd_label),
> +				    altcookie, i))
>  			continue;
>  		break;
>  	}
> @@ -2000,7 +2003,7 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  		 * find a dimm with two instances of the same uuid.
>  		 */
>  		dev_err(&nd_region->dev, "%s missing label for %pUb\n",
> -				nvdimm_name(nvdimm), nd_label->uuid);
> +			nvdimm_name(nvdimm), nsl_ref_uuid(ndd, nd_label));
>  		rc = -EINVAL;
>  		goto err;
>  	}
> @@ -2013,7 +2016,7 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  	 * the dimm being enabled (i.e. nd_label_reserve_dpa()
>  	 * succeeded).
>  	 */
> -	rc = select_pmem_id(nd_region, nd_label->uuid);
> +	rc = select_pmem_id(nd_region, nsl_ref_uuid(ndd, nd_label));
>  	if (rc)
>  		goto err;
>  
> @@ -2039,8 +2042,8 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  		WARN_ON(nspm->alt_name || nspm->uuid);
>  		nspm->alt_name = kmemdup(nsl_ref_name(ndd, label0),
>  					 NSLABEL_NAME_LEN, GFP_KERNEL);
> -		nspm->uuid = kmemdup((void __force *) label0->uuid,
> -				NSLABEL_UUID_LEN, GFP_KERNEL);
> +		nspm->uuid = kmemdup(nsl_ref_uuid(ndd, label0), sizeof(uuid_t),
> +				     GFP_KERNEL);
>  		nspm->lbasize = nsl_get_lbasize(ndd, label0);
>  		nspm->nsio.common.claim_class =
>  			nsl_get_claim_class(ndd, label0);
> @@ -2217,15 +2220,15 @@ static int add_namespace_resource(struct nd_region *nd_region,
>  	int i;
>  
>  	for (i = 0; i < count; i++) {
> -		u8 *uuid = namespace_to_uuid(devs[i]);
> +		uuid_t *uuid = namespace_to_uuid(devs[i]);
>  		struct resource *res;
>  
> -		if (IS_ERR_OR_NULL(uuid)) {
> +		if (IS_ERR(uuid)) {
>  			WARN_ON(1);
>  			continue;
>  		}
>  
> -		if (memcmp(uuid, nd_label->uuid, NSLABEL_UUID_LEN) != 0)
> +		if (!nsl_validate_uuid(ndd, nd_label, uuid))
>  			continue;
>  		if (is_namespace_blk(devs[i])) {
>  			res = nsblk_add_resource(nd_region, ndd,
> @@ -2236,8 +2239,8 @@ static int add_namespace_resource(struct nd_region *nd_region,
>  			nd_dbg_dpa(nd_region, ndd, res, "%d assign\n", count);
>  		} else {
>  			dev_err(&nd_region->dev,
> -					"error: conflicting extents for uuid: %pUb\n",
> -					nd_label->uuid);
> +				"error: conflicting extents for uuid: %pUb\n",
> +				uuid);
>  			return -ENXIO;
>  		}
>  		break;
> @@ -2271,7 +2274,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  	dev->parent = &nd_region->dev;
>  	nsblk->id = -1;
>  	nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
> -	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN, GFP_KERNEL);
> +	nsblk->uuid = kmemdup(nsl_ref_uuid(ndd, nd_label), sizeof(uuid_t), GFP_KERNEL);
>  	nsblk->common.claim_class = nsl_get_claim_class(ndd, nd_label);
>  	if (!nsblk->uuid)
>  		goto blk_err;
> diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h
> index 564faa36a3ca..a11850dd475d 100644
> --- a/drivers/nvdimm/nd-core.h
> +++ b/drivers/nvdimm/nd-core.h
> @@ -126,8 +126,9 @@ void nvdimm_bus_destroy_ndctl(struct nvdimm_bus *nvdimm_bus);
>  void nd_synchronize(void);
>  void __nd_device_register(struct device *dev);
>  struct nd_label_id;
> -char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags);
> -bool nd_is_uuid_unique(struct device *dev, u8 *uuid);
> +char *nd_label_gen_id(struct nd_label_id *label_id, const uuid_t *uuid,
> +		      u32 flags);
> +bool nd_is_uuid_unique(struct device *dev, uuid_t *uuid);
>  struct nd_region;
>  struct nvdimm_drvdata;
>  struct nd_mapping;
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index ac80d9680367..132a8021e3ad 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -176,6 +176,35 @@ static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
>  	nd_label->lbasize = __cpu_to_le64(lbasize);
>  }
>  
> +static inline const uuid_t *nsl_get_uuid(struct nvdimm_drvdata *ndd,
> +					 struct nd_namespace_label *nd_label,
> +					 uuid_t *uuid)
> +{
> +	uuid_copy(uuid, &nd_label->uuid);
> +	return uuid;
> +}
> +
> +static inline const uuid_t *nsl_set_uuid(struct nvdimm_drvdata *ndd,
> +					 struct nd_namespace_label *nd_label,
> +					 const uuid_t *uuid)
> +{
> +	uuid_copy(&nd_label->uuid, uuid);
> +	return &nd_label->uuid;
> +}
> +
> +static inline bool nsl_validate_uuid(struct nvdimm_drvdata *ndd,
> +				     struct nd_namespace_label *nd_label,
> +				     const uuid_t *uuid)
> +{
> +	return uuid_equal(&nd_label->uuid, uuid);
> +}
> +
> +static inline const uuid_t *nsl_ref_uuid(struct nvdimm_drvdata *ndd,
> +					 struct nd_namespace_label *nd_label)
> +{
> +	return &nd_label->uuid;
> +}
> +
>  bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
>  				 struct nd_namespace_label *nd_label,
>  				 u64 isetcookie);
> @@ -334,7 +363,7 @@ struct nd_btt {
>  	struct btt *btt;
>  	unsigned long lbasize;
>  	u64 size;
> -	u8 *uuid;
> +	uuid_t *uuid;
>  	int id;
>  	int initial_offset;
>  	u16 version_major;
> @@ -349,7 +378,7 @@ enum nd_pfn_mode {
>  
>  struct nd_pfn {
>  	int id;
> -	u8 *uuid;
> +	uuid_t *uuid;
>  	struct device dev;
>  	unsigned long align;
>  	unsigned long npfns;
> @@ -377,7 +406,7 @@ void wait_nvdimm_bus_probe_idle(struct device *dev);
>  void nd_device_register(struct device *dev);
>  void nd_device_unregister(struct device *dev, enum nd_async_mode mode);
>  void nd_device_notify(struct device *dev, enum nvdimm_event event);
> -int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
> +int nd_uuid_store(struct device *dev, uuid_t **uuid_out, const char *buf,
>  		size_t len);
>  ssize_t nd_size_select_show(unsigned long current_size,
>  		const unsigned long *supported, char *buf);
> @@ -560,6 +589,6 @@ static inline bool is_bad_pmem(struct badblocks *bb, sector_t sector,
>  	return false;
>  }
>  resource_size_t nd_namespace_blk_validate(struct nd_namespace_blk *nsblk);
> -const u8 *nd_dev_to_uuid(struct device *dev);
> +const uuid_t *nd_dev_to_uuid(struct device *dev);
>  bool pmem_should_map_pages(struct device *dev);
>  #endif /* __ND_H__ */
> diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
> index b499df630d4d..58eda16f5c53 100644
> --- a/drivers/nvdimm/pfn_devs.c
> +++ b/drivers/nvdimm/pfn_devs.c
> @@ -452,7 +452,7 @@ int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig)
>  	unsigned long align, start_pad;
>  	struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb;
>  	struct nd_namespace_common *ndns = nd_pfn->ndns;
> -	const u8 *parent_uuid = nd_dev_to_uuid(&ndns->dev);
> +	const uuid_t *parent_uuid = nd_dev_to_uuid(&ndns->dev);
>  
>  	if (!pfn_sb || !ndns)
>  		return -ENODEV;
> diff --git a/include/linux/nd.h b/include/linux/nd.h
> index ee9ad76afbba..8a8c63edb1b2 100644
> --- a/include/linux/nd.h
> +++ b/include/linux/nd.h
> @@ -88,7 +88,7 @@ struct nd_namespace_pmem {
>  	struct nd_namespace_io nsio;
>  	unsigned long lbasize;
>  	char *alt_name;
> -	u8 *uuid;
> +	uuid_t *uuid;
>  	int id;
>  };
>  
> @@ -105,7 +105,7 @@ struct nd_namespace_pmem {
>  struct nd_namespace_blk {
>  	struct nd_namespace_common common;
>  	char *alt_name;
> -	u8 *uuid;
> +	uuid_t *uuid;
>  	int id;
>  	unsigned long lbasize;
>  	resource_size_t size;
> 

-- 
With Best Regards,
Andy Shevchenko
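The quoted patch above funnels every uuid access through nsl_*() helpers so that callers no longer depend on where the field sits in a given label format. A compressed userspace sketch of that accessor pattern (the two layouts and all names here are hypothetical, not the kernel's structures):

```c
#include <assert.h>
#include <string.h>

typedef struct { unsigned char b[16]; } uuid;

/* Two hypothetical label layouts that keep the uuid at different offsets */
struct efi_label { uuid uuid; char name[64]; };
struct cxl_label { char name[64]; uuid uuid; };

struct label {
	int is_cxl;
	union {
		struct efi_label efi;
		struct cxl_label cxl;
	} v;
};

/* One accessor hides the per-format offset from every caller */
static uuid *label_ref_uuid(struct label *l)
{
	return l->is_cxl ? &l->v.cxl.uuid : &l->v.efi.uuid;
}

static int label_validate_uuid(struct label *l, const uuid *expect)
{
	return memcmp(label_ref_uuid(l)->b, expect->b, sizeof(expect->b)) == 0;
}
```

Callers copy and compare through label_ref_uuid()/label_validate_uuid() and never mention a field offset, which is what lets a second label format slot in later without touching call sites.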



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy
  2021-08-10 22:40     ` Dan Williams
@ 2021-08-11 15:18       ` Ben Widawsky
       [not found]       ` <xp0k4.l2r85dw1p7do@intel.com>
  1 sibling, 0 replies; 61+ messages in thread
From: Ben Widawsky @ 2021-08-11 15:18 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma,
	Schofield, Alison, Weiny, Ira

On 21-08-10 15:40:58, Dan Williams wrote:
> On Tue, Aug 10, 2021 at 2:57 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
> >
> > On 21-08-09 15:29:33, Dan Williams wrote:
> > > Create an environment for CXL plumbing unit tests. Especially when it
> > > comes to an algorithm for HDM Decoder (Host-managed Device Memory
> > > Decoder) programming, the availability of an in-kernel-tree emulation
> > > environment for CXL configuration complexity and corner cases speeds
> > > development and deters regressions.
> > >
> > > The approach taken mirrors what was done for tools/testing/nvdimm/. I.e.
> > > an external module, cxl_test.ko built out of the tools/testing/cxl/
> > > directory, provides mock implementations of kernel APIs and kernel
> > > objects to simulate a real world device hierarchy.
> > >
> > > One feedback for the tools/testing/nvdimm/ proposal was "why not do this
> > > in QEMU?". In fact, the CXL development community has developed a QEMU
> > > model for CXL [1]. However, there are a few blocking issues that keep
> > > QEMU from being a tight fit for topology + provisioning unit tests:
> > >
> > > 1/ The QEMU community has yet to show interest in merging any of this
> > >    support that has had patches on the list since November 2020. So,
> > >    testing CXL to date involves building custom QEMU with out-of-tree
> > >    patches.
> > >
> > > 2/ CXL mechanisms like cross-host-bridge interleave do not have a clear
> > >    path to be emulated by QEMU without major infrastructure work. This
> > >    is easier to achieve with the alloc_mock_res() approach taken in this
> > >    patch to shortcut-define emulated system physical address ranges with
> > >    interleave behavior.
> >
> > I just want to say that this was discussed on the mailing list, and I think
> > there is a reasonable plan (albeit a lot of work). However, #1 is the true
> > blocker IMHO.
> >
> > >
> > > The QEMU enabling has been critical to get the driver off the ground,
> > > and may still move forward, but it does not address the ongoing needs of
> > > a regression testing environment and test driven development.
> > >
> >
> > The really nice thing QEMU provides over this (assuming one implemented
> > interleaving properly) is that it allows a programmatic (via command line) way
> > to test an infinite set of topologies, configurations, and hotplug scenarios. I
> > therefore disagree here in that I think QEMU is a better theoretical vehicle for
> > regression testing and test-driven development; however, my unfinished branch
> > with no upstream interest in sight is problematic at best for the longer term.
> 
> The "infinite" is what I don't think QEMU will sign up to support.
> There are going to be degenerate error handling scenarios that we want
> to test that QEMU will have no interest in supporting because QEMU is
> primarily targeted at faithfully emulating well behaved hardware. At
> the same time cxl_test does not preclude QEMU support which will
> remain super useful. You will notice that the ndctl unit tests have
> some tests that run against nfit_test and some that run against "real"
> topologies where the "real" stuff is usually the QEMU NVDIMM model. So
> it's not "either, or" it's "QEMU and cxl_test".

I don't mean infinite in the sense of adding code into QEMU to handle weird
things. All sorts of host bridge configs, device configs, etc., are simply done
by instantiating them on the command line with the appropriate properties.
Perhaps with this module, you can do the same via modparams.

I did have the intention of creating a vendor specific command for QEMU that
would allow all sorts of testable events, but that could have lived out of tree
fairly easily.

> 
> >
> > I didn't look super closely, but I have one comment/question below. Otherwise,
> > LGTM.
> >
> > > This patch adds an ACPI CXL Platform definition with emulated CXL
> > > multi-ported host-bridges. A follow on patch adds emulated memory
> > > expander devices.
> > >
> > > Link: https://lore.kernel.org/r/20210202005948.241655-1-ben.widawsky@intel.com [1]
> > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > > ---
> > >  drivers/cxl/acpi.c            |   52 +++-
> > >  drivers/cxl/cxl.h             |    8 +
> > >  tools/testing/cxl/Kbuild      |   27 ++
> > >  tools/testing/cxl/mock_acpi.c |  105 ++++++++
> > >  tools/testing/cxl/test/Kbuild |    6
> > >  tools/testing/cxl/test/cxl.c  |  508 +++++++++++++++++++++++++++++++++++++++++
> > >  tools/testing/cxl/test/mock.c |  155 +++++++++++++
> > >  tools/testing/cxl/test/mock.h |   26 ++
> > >  8 files changed, 866 insertions(+), 21 deletions(-)
> > >  create mode 100644 tools/testing/cxl/Kbuild
> > >  create mode 100644 tools/testing/cxl/mock_acpi.c
> > >  create mode 100644 tools/testing/cxl/test/Kbuild
> > >  create mode 100644 tools/testing/cxl/test/cxl.c
> > >  create mode 100644 tools/testing/cxl/test/mock.c
> > >  create mode 100644 tools/testing/cxl/test/mock.h
> > >
> > > diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> > > index 8ae89273f58e..e0cd9df85ca5 100644
> > > --- a/drivers/cxl/acpi.c
> > > +++ b/drivers/cxl/acpi.c
> > > @@ -182,15 +182,7 @@ static resource_size_t get_chbcr(struct acpi_cedt_chbs *chbs)
> > >       return IS_ERR(chbs) ? CXL_RESOURCE_NONE : chbs->base;
> > >  }
> > >
> > > -struct cxl_walk_context {
> > > -     struct device *dev;
> > > -     struct pci_bus *root;
> > > -     struct cxl_port *port;
> > > -     int error;
> > > -     int count;
> > > -};
> > > -
> > > -static int match_add_root_ports(struct pci_dev *pdev, void *data)
> > > +__weak int match_add_root_ports(struct pci_dev *pdev, void *data)
> > >  {
> > >       struct cxl_walk_context *ctx = data;
> > >       struct pci_bus *root_bus = ctx->root;
> > > @@ -214,6 +206,8 @@ static int match_add_root_ports(struct pci_dev *pdev, void *data)
> > >       port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
> > >       rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
> > >       if (rc) {
> > > +             dev_err(dev, "failed to add dport: %s (%d)\n",
> > > +                     dev_name(&pdev->dev), rc);
> > >               ctx->error = rc;
> > >               return rc;
> > >       }
> > > @@ -239,12 +233,15 @@ static struct cxl_dport *find_dport_by_dev(struct cxl_port *port, struct device
> > >       return NULL;
> > >  }
> > >
> > > -static struct acpi_device *to_cxl_host_bridge(struct device *dev)
> > > +__weak struct acpi_device *to_cxl_host_bridge(struct device *host,
> > > +                                           struct device *dev)
> > >  {
> > >       struct acpi_device *adev = to_acpi_device(dev);
> > >
> > > -     if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0)
> > > +     if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0) {
> > > +             dev_dbg(host, "found host bridge %s\n", dev_name(&adev->dev));
> > >               return adev;
> > > +     }
> > >       return NULL;
> > >  }
> > >
> > > @@ -254,14 +251,14 @@ static struct acpi_device *to_cxl_host_bridge(struct device *dev)
> > >   */
> > >  static int add_host_bridge_uport(struct device *match, void *arg)
> > >  {
> > > -     struct acpi_device *bridge = to_cxl_host_bridge(match);
> > > +     struct cxl_port *port;
> > > +     struct cxl_dport *dport;
> > > +     struct cxl_decoder *cxld;
> > > +     struct cxl_walk_context ctx;
> > > +     struct acpi_pci_root *pci_root;
> > >       struct cxl_port *root_port = arg;
> > >       struct device *host = root_port->dev.parent;
> > > -     struct acpi_pci_root *pci_root;
> > > -     struct cxl_walk_context ctx;
> > > -     struct cxl_decoder *cxld;
> > > -     struct cxl_dport *dport;
> > > -     struct cxl_port *port;
> > > +     struct acpi_device *bridge = to_cxl_host_bridge(host, match);
> > >
> > >       if (!bridge)
> > >               return 0;
> > > @@ -319,7 +316,7 @@ static int add_host_bridge_dport(struct device *match, void *arg)
> > >       struct acpi_cedt_chbs *chbs;
> > >       struct cxl_port *root_port = arg;
> > >       struct device *host = root_port->dev.parent;
> > > -     struct acpi_device *bridge = to_cxl_host_bridge(match);
> > > +     struct acpi_device *bridge = to_cxl_host_bridge(host, match);
> > >
> > >       if (!bridge)
> > >               return 0;
> > > @@ -371,6 +368,17 @@ static int add_root_nvdimm_bridge(struct device *match, void *data)
> > >       return 1;
> > >  }
> > >
> > > +static u32 cedt_instance(struct platform_device *pdev)
> > > +{
> > > +     const bool *native_acpi0017 = acpi_device_get_match_data(&pdev->dev);
> > > +
> > > +     if (native_acpi0017 && *native_acpi0017)
> > > +             return 0;
> > > +
> > > +     /* for cxl_test request a non-canonical instance */
> > > +     return U32_MAX;
> > > +}
> > > +
> > >  static int cxl_acpi_probe(struct platform_device *pdev)
> > >  {
> > >       int rc;
> > > @@ -384,7 +392,7 @@ static int cxl_acpi_probe(struct platform_device *pdev)
> > >               return PTR_ERR(root_port);
> > >       dev_dbg(host, "add: %s\n", dev_name(&root_port->dev));
> > >
> > > -     status = acpi_get_table(ACPI_SIG_CEDT, 0, &acpi_cedt);
> > > +     status = acpi_get_table(ACPI_SIG_CEDT, cedt_instance(pdev), &acpi_cedt);
> > >       if (ACPI_FAILURE(status))
> > >               return -ENXIO;
> > >
> > > @@ -415,9 +423,11 @@ static int cxl_acpi_probe(struct platform_device *pdev)
> > >       return 0;
> > >  }
> > >
> > > +static bool native_acpi0017 = true;
> > > +
> > >  static const struct acpi_device_id cxl_acpi_ids[] = {
> > > -     { "ACPI0017", 0 },
> > > -     { "", 0 },
> > > +     { "ACPI0017", (unsigned long) &native_acpi0017 },
> > > +     { },
> > >  };
> > >  MODULE_DEVICE_TABLE(acpi, cxl_acpi_ids);
> > >
> > > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> > > index 1b2e816e061e..09c81cf8b800 100644
> > > --- a/drivers/cxl/cxl.h
> > > +++ b/drivers/cxl/cxl.h
> > > @@ -226,6 +226,14 @@ struct cxl_nvdimm {
> > >       struct nvdimm *nvdimm;
> > >  };
> > >
> > > +struct cxl_walk_context {
> > > +     struct device *dev;
> > > +     struct pci_bus *root;
> > > +     struct cxl_port *port;
> > > +     int error;
> > > +     int count;
> > > +};
> > > +
> > >  /**
> > >   * struct cxl_port - logical collection of upstream port devices and
> > >   *                downstream port devices to construct a CXL memory
> > > diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
> > > new file mode 100644
> > > index 000000000000..6ea0c7df36f0
> > > --- /dev/null
> > > +++ b/tools/testing/cxl/Kbuild
> > > @@ -0,0 +1,27 @@
> > > +# SPDX-License-Identifier: GPL-2.0
> > > +ldflags-y += --wrap=is_acpi_device_node
> > > +ldflags-y += --wrap=acpi_get_table
> > > +ldflags-y += --wrap=acpi_put_table
> > > +ldflags-y += --wrap=acpi_evaluate_integer
> > > +ldflags-y += --wrap=acpi_pci_find_root
> > > +ldflags-y += --wrap=pci_walk_bus
> > > +
> > > +DRIVERS := ../../../drivers
> > > +CXL_SRC := $(DRIVERS)/cxl
> > > +CXL_CORE_SRC := $(DRIVERS)/cxl/core
> > > +ccflags-y := -I$(srctree)/drivers/cxl/
> > > +
> > > +obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
> > > +
> > > +cxl_acpi-y := $(CXL_SRC)/acpi.o
> > > +cxl_acpi-y += mock_acpi.o
> > > +
> > > +obj-$(CONFIG_CXL_BUS) += cxl_core.o
> > > +
> > > +cxl_core-y := $(CXL_CORE_SRC)/bus.o
> > > +cxl_core-y += $(CXL_CORE_SRC)/pmem.o
> > > +cxl_core-y += $(CXL_CORE_SRC)/regs.o
> > > +cxl_core-y += $(CXL_CORE_SRC)/memdev.o
> > > +cxl_core-y += $(CXL_CORE_SRC)/mbox.o
> > > +
> > > +obj-m += test/
> > > diff --git a/tools/testing/cxl/mock_acpi.c b/tools/testing/cxl/mock_acpi.c
> > > new file mode 100644
> > > index 000000000000..256bdf9e1ce8
> > > --- /dev/null
> > > +++ b/tools/testing/cxl/mock_acpi.c
> > > @@ -0,0 +1,105 @@
> > > +// SPDX-License-Identifier: GPL-2.0-only
> > > +/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
> > > +
> > > +#include <linux/platform_device.h>
> > > +#include <linux/device.h>
> > > +#include <linux/acpi.h>
> > > +#include <linux/pci.h>
> > > +#include <cxl.h>
> > > +#include "test/mock.h"
> > > +
> > > +struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev)
> > > +{
> > > +     int index;
> > > +     struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> > > +     struct acpi_device *adev = NULL;
> > > +
> > > +     if (ops && ops->is_mock_bridge(dev)) {
> > > +             adev = ACPI_COMPANION(dev);
> > > +             goto out;
> > > +     }
> >
> > Here, and below ops->is_mock_port()... I'm a bit confused why a mock driver
> > would ever attempt to do anything with real hardware. ie, why not
> 
> The rationale is to be able to run cxl_test on a system that might
> also have real CXL. For example I run this alongside the current QEMU
> CXL model, and that results in the cxl_acpi driver attaching to 2
> devices:
> 
> # tree /sys/bus/platform/drivers/cxl_acpi
> /sys/bus/platform/drivers/cxl_acpi
> ├── ACPI0017:00 -> ../../../../devices/platform/ACPI0017:00
> ├── bind
> ├── cxl_acpi.0 -> ../../../../devices/platform/cxl_acpi.0
> ├── module -> ../../../../module/cxl_acpi
> ├── uevent
> └── unbind
> 
> When the device is ACPI0017 this code is walking the ACPI bus looking
> for ACPI0016 devices. A real ACPI0016 will fall through
> is_mock_port() to the original to_cxl_host_bridge() logic that just
> reads the ACPI device HID. In the mock case the cxl_acpi driver has
> instead been tricked into walking the platform bus, which has real
> platform devices and the fake cxl_test ones:
> 
> /sys/bus/platform/devices/
> ├── ACPI0012:00 -> ../../../devices/platform/ACPI0012:00
> ├── ACPI0017:00 -> ../../../devices/platform/ACPI0017:00
> ├── alarmtimer.0.auto -> ../../../devices/pnp0/00:04/rtc/rtc0/alarmtimer.0.auto
> ├── cxl_acpi.0 -> ../../../devices/platform/cxl_acpi.0
> ├── cxl_host_bridge.0 -> ../../../devices/platform/cxl_host_bridge.0
> ├── cxl_host_bridge.1 -> ../../../devices/platform/cxl_host_bridge.1
> ├── cxl_host_bridge.2 -> ../../../devices/platform/cxl_host_bridge.2
> ├── cxl_host_bridge.3 -> ../../../devices/platform/cxl_host_bridge.3
> ├── e820_pmem -> ../../../devices/platform/e820_pmem
> ├── efi-framebuffer.0 -> ../../../devices/platform/efi-framebuffer.0
> ├── efivars.0 -> ../../../devices/platform/efivars.0
> ├── Fixed MDIO bus.0 -> ../../../devices/platform/Fixed MDIO bus.0
> ├── i8042 -> ../../../devices/platform/i8042
> ├── iTCO_wdt.1.auto -> ../../../devices/pci0000:00/0000:00:1f.0/iTCO_wdt.1.auto
> ├── kgdboc -> ../../../devices/platform/kgdboc
> ├── pcspkr -> ../../../devices/platform/pcspkr
> ├── PNP0103:00 -> ../../../devices/platform/PNP0103:00
> ├── QEMU0002:00 -> ../../../devices/pci0000:00/QEMU0002:00
> ├── rtc-efi.0 -> ../../../devices/platform/rtc-efi.0
> └── serial8250 -> ../../../devices/platform/serial8250
> 
> ...where is_mock_port() filters out those real platform devices. Note
> that ACPI devices are atypical in that they get registered on the ACPI
> bus and some get a companion device with the same name registered on
> the platform bus.
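The filtering described above reduces to a dispatch pattern: a registered mock-ops table gets first claim on a device, and anything it does not recognize falls through to the production check. A rough userspace analogue (all structures and names hypothetical):

```c
#include <assert.h>
#include <stddef.h>

struct device { int is_mock; };

struct mock_ops {
	int (*is_mock_bridge)(struct device *dev);
};

/* NULL unless a test module has registered itself */
static struct mock_ops *registered_ops;

/* Stand-in for the real ACPI0016 HID check; rejects fake devices */
static int real_is_host_bridge(struct device *dev)
{
	return !dev->is_mock;
}

/* Mock ops claim their fake devices; everything else falls through */
static int is_host_bridge(struct device *dev)
{
	if (registered_ops && registered_ops->is_mock_bridge(dev))
		return 1;
	return real_is_host_bridge(dev);
}

static int claim_mock(struct device *dev) { return dev->is_mock; }
static struct mock_ops test_ops = { .is_mock_bridge = claim_mock };
```

With no ops registered, mocked devices are invisible; once test_ops is installed, both real and mocked devices pass, mirroring a system running cxl_test alongside real (or QEMU-emulated) CXL.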


* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-11  8:05   ` Andy Shevchenko
@ 2021-08-11 16:59     ` Andy Shevchenko
  2021-08-11 17:11       ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Andy Shevchenko @ 2021-08-11 16:59 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, Jonathan.Cameron, ben.widawsky,
	vishal.l.verma, alison.schofield, ira.weiny

On Wed, Aug 11, 2021 at 11:05:55AM +0300, Andy Shevchenko wrote:
> On Mon, Aug 09, 2021 at 03:28:40PM -0700, Dan Williams wrote:
> > In preparation for CXL labels that move the uuid to a different offset
> > in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> > proper uuid_t type. That type definition predated the libnvdimm
> > subsystem, so now is as good a time as any to convert all the uuid
> > handling in the subsystem to uuid_t to match the helpers.
> > 
> > As for the whitespace changes, all new code is clang-format compliant.
> 
> Thanks, looks good to me!
> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

Sorry, I'm in doubt whether this Rb stays. See below.

...

> >  struct btt_sb {
> >  	u8 signature[BTT_SIG_LEN];
> > -	u8 uuid[16];
> > -	u8 parent_uuid[16];
> > +	uuid_t uuid;
> > +	uuid_t parent_uuid;

uuid_t type is internal to the kernel. This seems to be an ABI?

> >  	__le32 flags;
> >  	__le16 version_major;
> >  	__le16 version_minor;

...

> >  struct nd_namespace_label {
> > -	u8 uuid[NSLABEL_UUID_LEN];
> > +	uuid_t uuid;

So seems this.

> >  	u8 name[NSLABEL_NAME_LEN];
> >  	__le32 flags;
> >  	__le16 nlabel;

...

I'm not familiar with FS stuff, but this looks to me like unwanted changes.
In such cases you have to use export/import APIs; otherwise you carve the type
in stone without even knowing that it's part of an ABI or some hardware /
firmware interface.

-- 
With Best Regards,
Andy Shevchenko
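The export/import APIs Andy refers to convert between the in-memory type and a raw byte buffer at the point where data crosses an ABI or on-media boundary, rather than overlaying the type onto the buffer. A userspace sketch of the pattern (uuid_t_sketch is a stand-in type; the kernel's own helpers for this are export_uuid()/import_uuid()):

```c
#include <assert.h>
#include <string.h>

typedef struct { unsigned char b[16]; } uuid_t_sketch;

/* Serialize the in-memory type out to a raw on-media/ABI byte buffer */
static void export_uuid_sketch(unsigned char dst[16], const uuid_t_sketch *src)
{
	memcpy(dst, src->b, sizeof(src->b));
}

/* Rebuild the in-memory type from raw bytes read off the medium */
static void import_uuid_sketch(uuid_t_sketch *dst, const unsigned char src[16])
{
	memcpy(dst->b, src, sizeof(dst->b));
}
```

With this split, the on-media format stays a plain byte array and the in-memory representation is free to change.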




* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-11 16:59     ` Andy Shevchenko
@ 2021-08-11 17:11       ` Dan Williams
  2021-08-11 19:18         ` Andy Shevchenko
  0 siblings, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-11 17:11 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Ben Widawsky,
	Vishal L Verma, Schofield, Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 9:59 AM Andy Shevchenko
<andriy.shevchenko@linux.intel.com> wrote:
>
> On Wed, Aug 11, 2021 at 11:05:55AM +0300, Andy Shevchenko wrote:
> > On Mon, Aug 09, 2021 at 03:28:40PM -0700, Dan Williams wrote:
> > > In preparation for CXL labels that move the uuid to a different offset
> > > in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> > > proper uuid_t type. That type definition predated the libnvdimm
> > > subsystem, so now is as good a time as any to convert all the uuid
> > > handling in the subsystem to uuid_t to match the helpers.
> > >
> > > As for the whitespace changes, all new code is clang-format compliant.
> >
> > Thanks, looks good to me!
> > Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
>
> Sorry, I'm in doubt this Rb stays. See below.
>
> ...
>
> > >  struct btt_sb {
> > >     u8 signature[BTT_SIG_LEN];
> > > -   u8 uuid[16];
> > > -   u8 parent_uuid[16];
> > > +   uuid_t uuid;
> > > +   uuid_t parent_uuid;
>
> uuid_t type is internal to the kernel. This seems to be an ABI?

No, it's not a user ABI, this is an on-disk metadata structure. uuid_t
is appropriate.

>
> > >     __le32 flags;
> > >     __le16 version_major;
> > >     __le16 version_minor;
>
> ...
>
> > >  struct nd_namespace_label {
> > > -   u8 uuid[NSLABEL_UUID_LEN];
> > > +   uuid_t uuid;
>
> So seems this.
>
> > >     u8 name[NSLABEL_NAME_LEN];
> > >     __le32 flags;
> > >     __le16 nlabel;
>
> ...
>
> I'm not familiar with FS stuff, but this looks to me like unwanted changes.
> In such cases you have to use export/import APIs; otherwise you carve the type
> in stone without even knowing that it's part of an ABI or some hardware /
> firmware interface.

Can you clarify the concern? Carving in stone the intent that these 16 bytes
are meant to be treated as a UUID is deliberate.
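Dan's point holds because uuid_t is a bare 16-byte container, so substituting it for u8[16] changes the declared intent without changing the stored bytes. That invariant can be pinned down at compile time; a sketch with stand-in definitions (the label struct here is illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in with the same shape as the kernel's uuid_t */
typedef struct { uint8_t b[16]; } uuid_t_sketch;

struct label_old { uint8_t uuid[16]; char name[64]; };
struct label_new { uuid_t_sketch uuid; char name[64]; };

/* Same size, same field offsets: the on-media bytes are untouched */
static_assert(sizeof(uuid_t_sketch) == 16, "uuid stays 16 bytes");
static_assert(sizeof(struct label_old) == sizeof(struct label_new),
	      "overall layout unchanged");
static_assert(offsetof(struct label_old, name) ==
	      offsetof(struct label_new, name),
	      "field offsets unchanged");
```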


* Re: [PATCH 03/23] libnvdimm/labels: Introduce label setter helpers
  2021-08-09 22:28 ` [PATCH 03/23] libnvdimm/labels: Introduce label setter helpers Dan Williams
@ 2021-08-11 17:27   ` Jonathan Cameron
  2021-08-11 17:42     ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 17:27 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:03 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> helpers that abstract the label type from the implementation. The CXL
> label format is mostly similar to the EFI label format with concepts /
> fields added, like dynamic region creation and label type guids, and
> other concepts removed like BLK-mode and interleave-set-cookie ids.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Hi Dan,

The only thing on this patch is whether it might be better to put get/set pairs
together rather than all the get functions, then all the set functions?

For anyone looking at this code in the future, it would make it a little easier
to quickly see that they are matched pairs.

Your code though, so if you prefer it like this, I don't really care!

Fine either way with me.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/label.c          |   61 +++++++++++++++++------------------
>  drivers/nvdimm/namespace_devs.c |    2 +
>  drivers/nvdimm/nd.h             |   68 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 98 insertions(+), 33 deletions(-)
> 

...
  
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index b3feaf3699f7..416846fe7818 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -47,6 +47,14 @@ static inline u8 *nsl_get_name(struct nvdimm_drvdata *ndd,
>  	return memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
>  }
>  
> +static inline u8 *nsl_set_name(struct nvdimm_drvdata *ndd,
> +			       struct nd_namespace_label *nd_label, u8 *name)
> +{
> +	if (!name)
> +		return name;

Nitpick: Obviously same thing, but my eyes parse 
		return NULL;

more easily as a clear "error" return.

> +	return memcpy(nd_label->name, name, NSLABEL_NAME_LEN);
> +}
> +
...


* Re: [PATCH 03/23] libnvdimm/labels: Introduce label setter helpers
  2021-08-11 17:27   ` Jonathan Cameron
@ 2021-08-11 17:42     ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-11 17:42 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, Linux NVDIMM, Ben Widawsky, Vishal L Verma, Schofield,
	Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 10:27 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Mon, 9 Aug 2021 15:28:03 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
> > In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> > helpers that abstract the label type from the implementation. The CXL
> > label format is mostly similar to the EFI label format with concepts /
> > fields added, like dynamic region creation and label type guids, and
> > other concepts removed like BLK-mode and interleave-set-cookie ids.
> >
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>
> Hi Dan,
>
> Only thing on this patch is whether it might be better to put get /set pairs
> together rather than all the get functions, then all the set functions?
>
> If looking at this code in future it would make it a little easier to quickly
> see they are match pairs.

Sure, easy to do.

>
> Your code though, so if you prefer it like this, I don't really care!
>
> Fine either way with me.
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> > ---
> >  drivers/nvdimm/label.c          |   61 +++++++++++++++++------------------
> >  drivers/nvdimm/namespace_devs.c |    2 +
> >  drivers/nvdimm/nd.h             |   68 +++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 98 insertions(+), 33 deletions(-)
> >
>
> ...
>
> > diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> > index b3feaf3699f7..416846fe7818 100644
> > --- a/drivers/nvdimm/nd.h
> > +++ b/drivers/nvdimm/nd.h
> > @@ -47,6 +47,14 @@ static inline u8 *nsl_get_name(struct nvdimm_drvdata *ndd,
> >       return memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
> >  }
> >
> > +static inline u8 *nsl_set_name(struct nvdimm_drvdata *ndd,
> > +                            struct nd_namespace_label *nd_label, u8 *name)
> > +{
> > +     if (!name)
> > +             return name;
>
> Nitpick: Obviously same thing, but my eyes parse
>                 return NULL;

Ok.

>
> more easily as a clear "error" return.
>
> > +     return memcpy(nd_label->name, name, NSLABEL_NAME_LEN);
> > +}
> > +
> ...


* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-09 22:28 ` [PATCH 10/23] libnvdimm/labels: Add uuid helpers Dan Williams
  2021-08-11  8:05   ` Andy Shevchenko
@ 2021-08-11 18:13   ` Jonathan Cameron
  2021-08-12 21:17     ` Dan Williams
  1 sibling, 1 reply; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:13 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, Andy Shevchenko, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:40 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for CXL labels that move the uuid to a different offset
> in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> proper uuid_t type. That type definition predated the libnvdimm
> subsystem, so now is as a good a time as any to convert all the uuid
> handling in the subsystem to uuid_t to match the helpers.
> 
> As for the whitespace changes, all new code is clang-format compliant.
> 
> Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

There are a few interesting corners where you have cleaned out a pointless
copy before validating uuids. Perhaps call that out as a change here, as
it isn't as simple as just replacing like with like?
Perhaps I'm missing some reason it was needed in the code before this
patch.

All LGTM.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/btt.c            |   11 +++--
>  drivers/nvdimm/btt.h            |    4 +-
>  drivers/nvdimm/btt_devs.c       |   12 +++---
>  drivers/nvdimm/core.c           |   40 ++-----------------
>  drivers/nvdimm/label.c          |   34 +++++++---------
>  drivers/nvdimm/label.h          |    3 -
>  drivers/nvdimm/namespace_devs.c |   83 ++++++++++++++++++++-------------------
>  drivers/nvdimm/nd-core.h        |    5 +-
>  drivers/nvdimm/nd.h             |   37 ++++++++++++++++-
>  drivers/nvdimm/pfn_devs.c       |    2 -
>  include/linux/nd.h              |    4 +-
>  11 files changed, 115 insertions(+), 120 deletions(-)
> 
> diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
> index 92dec4952297..1cdfbadb7408 100644

> @@ -1050,7 +1050,6 @@ static int __blk_label_update(struct nd_region *nd_region,
>  	unsigned long *free, *victim_map = NULL;
>  	struct resource *res, **old_res_list;
>  	struct nd_label_id label_id;
> -	u8 uuid[NSLABEL_UUID_LEN];
>  	int min_dpa_idx = 0;
>  	LIST_HEAD(list);
>  	u32 nslot, slot;
> @@ -1088,8 +1087,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  		/* mark unused labels for garbage collection */
>  		for_each_clear_bit_le(slot, free, nslot) {
>  			nd_label = to_label(ndd, slot);
> -			memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -			if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
> +			if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
>  				continue;

The original code here was 'unusual'. I'm not sure why it couldn't always be
validated in place. 

>  			res = to_resource(ndd, nd_label);
>  			if (res && is_old_resource(res, old_res_list,
> @@ -1158,7 +1156,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  
>  		nd_label = to_label(ndd, slot);
>  		memset(nd_label, 0, sizeof_namespace_label(ndd));
> -		memcpy(nd_label->uuid, nsblk->uuid, NSLABEL_UUID_LEN);

> +		nsl_set_uuid(ndd, nd_label, nsblk->uuid);
>  		nsl_set_name(ndd, nd_label, nsblk->alt_name);
>  		nsl_set_flags(ndd, nd_label, NSLABEL_FLAG_LOCAL);
>  
> @@ -1206,8 +1204,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  		if (!nd_label)
>  			continue;
>  		nlabel++;
> -		memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -		if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
> +		if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
>  			continue;
>  		nlabel--;
>  		list_move(&label_ent->list, &list);
> @@ -1237,8 +1234,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  	}
>  	for_each_clear_bit_le(slot, free, nslot) {
>  		nd_label = to_label(ndd, slot);
> -		memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -		if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
> +		if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
>  			continue;
>  		res = to_resource(ndd, nd_label);
>  		res->flags &= ~DPA_RESOURCE_ADJUSTED;
> @@ -1318,12 +1314,11 @@ static int init_labels(struct nd_mapping *nd_mapping, int num_labels)
>  	return max(num_labels, old_num_labels);
>  }


* Re: [PATCH 11/23] libnvdimm/labels: Introduce CXL labels
  2021-08-09 22:28 ` [PATCH 11/23] libnvdimm/labels: Introduce CXL labels Dan Williams
@ 2021-08-11 18:41   ` Jonathan Cameron
  2021-08-11 23:01     ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:41 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:46 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> Now that all of the use sites of label data have been converted to nsl_*
> helpers, introduce the CXL label format. The ->cxl flag in
> nvdimm_drvdata indicates the label format the device expects. A
> follow-on patch allows a bus provider to select the label style.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

A few trivial things inline. Nothing that actually 'needs' changing though.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> index e6e77691dbec..71ffde56fac0 100644
> --- a/drivers/nvdimm/label.h
> +++ b/drivers/nvdimm/label.h
> @@ -64,40 +64,77 @@ struct nd_namespace_index {
>  	u8 free[];
>  };
>  
> -/**
> - * struct nd_namespace_label - namespace superblock
> - * @uuid: UUID per RFC 4122
> - * @name: optional name (NULL-terminated)
> - * @flags: see NSLABEL_FLAG_*
> - * @nlabel: num labels to describe this ns
> - * @position: labels position in set
> - * @isetcookie: interleave set cookie
> - * @lbasize: LBA size in bytes or 0 for pmem
> - * @dpa: DPA of NVM range on this DIMM
> - * @rawsize: size of namespace
> - * @slot: slot of this label in label area
> - * @unused: must be zero
> - */
>  struct nd_namespace_label {
> +	union {
Cross reference might be a nice thing to include?
Table 212 I think...
> +		struct nvdimm_cxl_label {
> +			uuid_t type;
> +			uuid_t uuid;
> +			u8 name[NSLABEL_NAME_LEN];
> +			__le32 flags;
> +			__le16 nlabel;

Perhaps call out that nlabel is nrange in the spec?

> +			__le16 position;
> +			__le64 dpa;
> +			__le64 rawsize;
> +			__le32 slot;
> +			__le32 align;
> +			uuid_t region_uuid;
> +			uuid_t abstraction_uuid;
> +			__le16 lbasize;
> +			u8 reserved[0x56];
> +			__le64 checksum;
> +		} cxl;
> +		/**
> +		 * struct nvdimm_efi_label - namespace superblock
> +		 * @uuid: UUID per RFC 4122
> +		 * @name: optional name (NULL-terminated)
> +		 * @flags: see NSLABEL_FLAG_*
> +		 * @nlabel: num labels to describe this ns
> +		 * @position: labels position in set
> +		 * @isetcookie: interleave set cookie
> +		 * @lbasize: LBA size in bytes or 0 for pmem
> +		 * @dpa: DPA of NVM range on this DIMM
> +		 * @rawsize: size of namespace
> +		 * @slot: slot of this label in label area
> +		 * @unused: must be zero
> +		 */
> +		struct nvdimm_efi_label {
> +			uuid_t uuid;
> +			u8 name[NSLABEL_NAME_LEN];
> +			__le32 flags;
> +			__le16 nlabel;
> +			__le16 position;
> +			__le64 isetcookie;
> +			__le64 lbasize;
> +			__le64 dpa;
> +			__le64 rawsize;
> +			__le32 slot;
> +			/*
> +			 * Accessing fields past this point should be
> +			 * gated by a efi_namespace_label_has() check.
> +			 */
> +			u8 align;
> +			u8 reserved[3];
> +			guid_t type_guid;
> +			guid_t abstraction_guid;
> +			u8 reserved2[88];
> +			__le64 checksum;
> +		} efi;
> +	};
> +};
> +
> +struct cxl_region_label {

Perhaps separate this out to another patch so the diff ends up less confusing?

> +	uuid_t type;
>  	uuid_t uuid;
> -	u8 name[NSLABEL_NAME_LEN];
>  	__le32 flags;
>  	__le16 nlabel;
>  	__le16 position;
> -	__le64 isetcookie;
> -	__le64 lbasize;
>  	__le64 dpa;
>  	__le64 rawsize;
> +	__le64 hpa;
>  	__le32 slot;
> -	/*
> -	 * Accessing fields past this point should be gated by a
> -	 * namespace_label_has() check.
> -	 */
> -	u8 align;
> -	u8 reserved[3];
> -	guid_t type_guid;
> -	guid_t abstraction_guid;
> -	u8 reserved2[88];
> +	__le32 ig;
> +	__le32 align;
> +	u8 reserved[0xac];
>  	__le64 checksum;
>  };


* Re: [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields
  2021-08-09 22:27 ` [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields Dan Williams
  2021-08-10 20:48   ` Ben Widawsky
@ 2021-08-11 18:44   ` Jonathan Cameron
  1 sibling, 0 replies; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:44 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:27:52 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> helpers that abstract the label type from the implementation. The CXL
> label format is mostly similar to the EFI label format with concepts /
> fields added, like dynamic region creation and label type guids, and
> other concepts removed like BLK-mode and interleave-set-cookie ids.
> 
> In addition to nsl_get_* helpers there is the nsl_ref_name() helper that
> returns a pointer to a label field rather than copying the data.
> 
> Where changes touch the old whitespace style, update to clang-format
> expectations.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Checked it's a straightforward refactor and LGTM.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/label.c          |   20 ++++++-----
>  drivers/nvdimm/namespace_devs.c |   70 +++++++++++++++++++--------------------
>  drivers/nvdimm/nd.h             |   66 +++++++++++++++++++++++++++++++++++++
>  3 files changed, 110 insertions(+), 46 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index 9251441fd8a3..b6d845cfb70e 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -350,14 +350,14 @@ static bool slot_valid(struct nvdimm_drvdata *ndd,
>  		struct nd_namespace_label *nd_label, u32 slot)
>  {
>  	/* check that we are written where we expect to be written */
> -	if (slot != __le32_to_cpu(nd_label->slot))
> +	if (slot != nsl_get_slot(ndd, nd_label))
>  		return false;
>  
>  	/* check checksum */
>  	if (namespace_label_has(ndd, checksum)) {
>  		u64 sum, sum_save;
>  
> -		sum_save = __le64_to_cpu(nd_label->checksum);
> +		sum_save = nsl_get_checksum(ndd, nd_label);
>  		nd_label->checksum = __cpu_to_le64(0);
>  		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
>  		nd_label->checksum = __cpu_to_le64(sum_save);
> @@ -395,13 +395,13 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
>  			continue;
>  
>  		memcpy(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> -		flags = __le32_to_cpu(nd_label->flags);
> +		flags = nsl_get_flags(ndd, nd_label);
>  		if (test_bit(NDD_NOBLK, &nvdimm->flags))
>  			flags &= ~NSLABEL_FLAG_LOCAL;
>  		nd_label_gen_id(&label_id, label_uuid, flags);
>  		res = nvdimm_allocate_dpa(ndd, &label_id,
> -				__le64_to_cpu(nd_label->dpa),
> -				__le64_to_cpu(nd_label->rawsize));
> +					  nsl_get_dpa(ndd, nd_label),
> +					  nsl_get_rawsize(ndd, nd_label));
>  		nd_dbg_dpa(nd_region, ndd, res, "reserve\n");
>  		if (!res)
>  			return -EBUSY;
> @@ -548,9 +548,9 @@ int nd_label_active_count(struct nvdimm_drvdata *ndd)
>  		nd_label = to_label(ndd, slot);
>  
>  		if (!slot_valid(ndd, nd_label, slot)) {
> -			u32 label_slot = __le32_to_cpu(nd_label->slot);
> -			u64 size = __le64_to_cpu(nd_label->rawsize);
> -			u64 dpa = __le64_to_cpu(nd_label->dpa);
> +			u32 label_slot = nsl_get_slot(ndd, nd_label);
> +			u64 size = nsl_get_rawsize(ndd, nd_label);
> +			u64 dpa = nsl_get_dpa(ndd, nd_label);
>  
>  			dev_dbg(ndd->dev,
>  				"slot%d invalid slot: %d dpa: %llx size: %llx\n",
> @@ -879,9 +879,9 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
>  	struct resource *res;
>  
>  	for_each_dpa_resource(ndd, res) {
> -		if (res->start != __le64_to_cpu(nd_label->dpa))
> +		if (res->start != nsl_get_dpa(ndd, nd_label))
>  			continue;
> -		if (resource_size(res) != __le64_to_cpu(nd_label->rawsize))
> +		if (resource_size(res) != nsl_get_rawsize(ndd, nd_label))
>  			continue;
>  		return res;
>  	}
> diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> index 2403b71b601e..94da804372bf 100644
> --- a/drivers/nvdimm/namespace_devs.c
> +++ b/drivers/nvdimm/namespace_devs.c
> @@ -1235,7 +1235,7 @@ static int namespace_update_uuid(struct nd_region *nd_region,
>  			if (!nd_label)
>  				continue;
>  			nd_label_gen_id(&label_id, nd_label->uuid,
> -					__le32_to_cpu(nd_label->flags));
> +					nsl_get_flags(ndd, nd_label));
>  			if (strcmp(old_label_id.id, label_id.id) == 0)
>  				set_bit(ND_LABEL_REAP, &label_ent->flags);
>  		}
> @@ -1851,9 +1851,9 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
>  
>  			if (!nd_label)
>  				continue;
> -			isetcookie = __le64_to_cpu(nd_label->isetcookie);
> -			position = __le16_to_cpu(nd_label->position);
> -			nlabel = __le16_to_cpu(nd_label->nlabel);
> +			isetcookie = nsl_get_isetcookie(ndd, nd_label);
> +			position = nsl_get_position(ndd, nd_label);
> +			nlabel = nsl_get_nlabel(ndd, nd_label);
>  
>  			if (isetcookie != cookie)
>  				continue;
> @@ -1923,8 +1923,8 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
>  		 */
>  		hw_start = nd_mapping->start;
>  		hw_end = hw_start + nd_mapping->size;
> -		pmem_start = __le64_to_cpu(nd_label->dpa);
> -		pmem_end = pmem_start + __le64_to_cpu(nd_label->rawsize);
> +		pmem_start = nsl_get_dpa(ndd, nd_label);
> +		pmem_end = pmem_start + nsl_get_rawsize(ndd, nd_label);
>  		if (pmem_start >= hw_start && pmem_start < hw_end
>  				&& pmem_end <= hw_end && pmem_end > hw_start)
>  			/* pass */;
> @@ -1947,14 +1947,16 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
>   * @nd_label: target pmem namespace label to evaluate
>   */
>  static struct device *create_namespace_pmem(struct nd_region *nd_region,
> -		struct nd_namespace_index *nsindex,
> -		struct nd_namespace_label *nd_label)
> +					    struct nd_mapping *nd_mapping,
> +					    struct nd_namespace_label *nd_label)
>  {
> +	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
> +	struct nd_namespace_index *nsindex =
> +		to_namespace_index(ndd, ndd->ns_current);
>  	u64 cookie = nd_region_interleave_set_cookie(nd_region, nsindex);
>  	u64 altcookie = nd_region_interleave_set_altcookie(nd_region);
>  	struct nd_label_ent *label_ent;
>  	struct nd_namespace_pmem *nspm;
> -	struct nd_mapping *nd_mapping;
>  	resource_size_t size = 0;
>  	struct resource *res;
>  	struct device *dev;
> @@ -1966,10 +1968,10 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  		return ERR_PTR(-ENXIO);
>  	}
>  
> -	if (__le64_to_cpu(nd_label->isetcookie) != cookie) {
> +	if (nsl_get_isetcookie(ndd, nd_label) != cookie) {
>  		dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
>  				nd_label->uuid);
> -		if (__le64_to_cpu(nd_label->isetcookie) != altcookie)
> +		if (nsl_get_isetcookie(ndd, nd_label) != altcookie)
>  			return ERR_PTR(-EAGAIN);
>  
>  		dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
> @@ -2037,16 +2039,16 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  			continue;
>  		}
>  
> -		size += __le64_to_cpu(label0->rawsize);
> -		if (__le16_to_cpu(label0->position) != 0)
> +		ndd = to_ndd(nd_mapping);
> +		size += nsl_get_rawsize(ndd, label0);
> +		if (nsl_get_position(ndd, label0) != 0)
>  			continue;
>  		WARN_ON(nspm->alt_name || nspm->uuid);
> -		nspm->alt_name = kmemdup((void __force *) label0->name,
> -				NSLABEL_NAME_LEN, GFP_KERNEL);
> +		nspm->alt_name = kmemdup(nsl_ref_name(ndd, label0),
> +					 NSLABEL_NAME_LEN, GFP_KERNEL);
>  		nspm->uuid = kmemdup((void __force *) label0->uuid,
>  				NSLABEL_UUID_LEN, GFP_KERNEL);
> -		nspm->lbasize = __le64_to_cpu(label0->lbasize);
> -		ndd = to_ndd(nd_mapping);
> +		nspm->lbasize = nsl_get_lbasize(ndd, label0);
>  		if (namespace_label_has(ndd, abstraction_guid))
>  			nspm->nsio.common.claim_class
>  				= to_nvdimm_cclass(&label0->abstraction_guid);
> @@ -2237,7 +2239,7 @@ static int add_namespace_resource(struct nd_region *nd_region,
>  		if (is_namespace_blk(devs[i])) {
>  			res = nsblk_add_resource(nd_region, ndd,
>  					to_nd_namespace_blk(devs[i]),
> -					__le64_to_cpu(nd_label->dpa));
> +					nsl_get_dpa(ndd, nd_label));
>  			if (!res)
>  				return -ENXIO;
>  			nd_dbg_dpa(nd_region, ndd, res, "%d assign\n", count);
> @@ -2276,7 +2278,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  		if (nd_label->isetcookie != __cpu_to_le64(nd_set->cookie2)) {
>  			dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n",
>  					nd_set->cookie2,
> -					__le64_to_cpu(nd_label->isetcookie));
> +					nsl_get_isetcookie(ndd, nd_label));
>  			return ERR_PTR(-EAGAIN);
>  		}
>  	}
> @@ -2288,7 +2290,7 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  	dev->type = &namespace_blk_device_type;
>  	dev->parent = &nd_region->dev;
>  	nsblk->id = -1;
> -	nsblk->lbasize = __le64_to_cpu(nd_label->lbasize);
> +	nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
>  	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN,
>  			GFP_KERNEL);
>  	if (namespace_label_has(ndd, abstraction_guid))
> @@ -2296,15 +2298,14 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  			= to_nvdimm_cclass(&nd_label->abstraction_guid);
>  	if (!nsblk->uuid)
>  		goto blk_err;
> -	memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
> +	nsl_get_name(ndd, nd_label, name);
>  	if (name[0]) {
> -		nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN,
> -				GFP_KERNEL);
> +		nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN, GFP_KERNEL);
>  		if (!nsblk->alt_name)
>  			goto blk_err;
>  	}
>  	res = nsblk_add_resource(nd_region, ndd, nsblk,
> -			__le64_to_cpu(nd_label->dpa));
> +			nsl_get_dpa(ndd, nd_label));
>  	if (!res)
>  		goto blk_err;
>  	nd_dbg_dpa(nd_region, ndd, res, "%d: assign\n", count);
> @@ -2345,6 +2346,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  	struct device *dev, **devs = NULL;
>  	struct nd_label_ent *label_ent, *e;
>  	struct nd_mapping *nd_mapping = &nd_region->mapping[0];
> +	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
>  	resource_size_t map_end = nd_mapping->start + nd_mapping->size - 1;
>  
>  	/* "safe" because create_namespace_pmem() might list_move() label_ent */
> @@ -2355,7 +2357,7 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  
>  		if (!nd_label)
>  			continue;
> -		flags = __le32_to_cpu(nd_label->flags);
> +		flags = nsl_get_flags(ndd, nd_label);
>  		if (is_nd_blk(&nd_region->dev)
>  				== !!(flags & NSLABEL_FLAG_LOCAL))
>  			/* pass, region matches label type */;
> @@ -2363,9 +2365,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  			continue;
>  
>  		/* skip labels that describe extents outside of the region */
> -		if (__le64_to_cpu(nd_label->dpa) < nd_mapping->start ||
> -		    __le64_to_cpu(nd_label->dpa) > map_end)
> -				continue;
> +		if (nsl_get_dpa(ndd, nd_label) < nd_mapping->start ||
> +		    nsl_get_dpa(ndd, nd_label) > map_end)
> +			continue;
>  
>  		i = add_namespace_resource(nd_region, nd_label, devs, count);
>  		if (i < 0)
> @@ -2381,13 +2383,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
>  
>  		if (is_nd_blk(&nd_region->dev))
>  			dev = create_namespace_blk(nd_region, nd_label, count);
> -		else {
> -			struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
> -			struct nd_namespace_index *nsindex;
> -
> -			nsindex = to_namespace_index(ndd, ndd->ns_current);
> -			dev = create_namespace_pmem(nd_region, nsindex, nd_label);
> -		}
> +		else
> +			dev = create_namespace_pmem(nd_region, nd_mapping,
> +						    nd_label);
>  
>  		if (IS_ERR(dev)) {
>  			switch (PTR_ERR(dev)) {
> @@ -2570,7 +2568,7 @@ static int init_active_labels(struct nd_region *nd_region)
>  				break;
>  			label = nd_label_active(ndd, j);
>  			if (test_bit(NDD_NOBLK, &nvdimm->flags)) {
> -				u32 flags = __le32_to_cpu(label->flags);
> +				u32 flags = nsl_get_flags(ndd, label);
>  
>  				flags &= ~NSLABEL_FLAG_LOCAL;
>  				label->flags = __cpu_to_le32(flags);
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index 696b55556d4d..61f43f0edabf 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -35,6 +35,72 @@ struct nvdimm_drvdata {
>  	struct kref kref;
>  };
>  
> +static inline const u8 *nsl_ref_name(struct nvdimm_drvdata *ndd,
> +				     struct nd_namespace_label *nd_label)
> +{
> +	return nd_label->name;
> +}
> +
> +static inline u8 *nsl_get_name(struct nvdimm_drvdata *ndd,
> +			       struct nd_namespace_label *nd_label, u8 *name)
> +{
> +	return memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
> +}
> +
> +static inline u32 nsl_get_slot(struct nvdimm_drvdata *ndd,
> +			       struct nd_namespace_label *nd_label)
> +{
> +	return __le32_to_cpu(nd_label->slot);
> +}
> +
> +static inline u64 nsl_get_checksum(struct nvdimm_drvdata *ndd,
> +				   struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->checksum);
> +}
> +
> +static inline u32 nsl_get_flags(struct nvdimm_drvdata *ndd,
> +				struct nd_namespace_label *nd_label)
> +{
> +	return __le32_to_cpu(nd_label->flags);
> +}
> +
> +static inline u64 nsl_get_dpa(struct nvdimm_drvdata *ndd,
> +			      struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->dpa);
> +}
> +
> +static inline u64 nsl_get_rawsize(struct nvdimm_drvdata *ndd,
> +				  struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->rawsize);
> +}
> +
> +static inline u64 nsl_get_isetcookie(struct nvdimm_drvdata *ndd,
> +				     struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->isetcookie);
> +}
> +
> +static inline u16 nsl_get_position(struct nvdimm_drvdata *ndd,
> +				   struct nd_namespace_label *nd_label)
> +{
> +	return __le16_to_cpu(nd_label->position);
> +}
> +
> +static inline u16 nsl_get_nlabel(struct nvdimm_drvdata *ndd,
> +				 struct nd_namespace_label *nd_label)
> +{
> +	return __le16_to_cpu(nd_label->nlabel);
> +}
> +
> +static inline u64 nsl_get_lbasize(struct nvdimm_drvdata *ndd,
> +				  struct nd_namespace_label *nd_label)
> +{
> +	return __le64_to_cpu(nd_label->lbasize);
> +}
> +
>  struct nd_region_data {
>  	int ns_count;
>  	int ns_active;
> 



* Re: [PATCH 02/23] libnvdimm/labels: Add isetcookie validation helper
  2021-08-09 22:27 ` [PATCH 02/23] libnvdimm/labels: Add isetcookie validation helper Dan Williams
@ 2021-08-11 18:44   ` Jonathan Cameron
  0 siblings, 0 replies; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:44 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:27:57 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation to handle CXL labels with the same code that handles EFI
> labels, add a specific interleave-set-cookie validation helper
> rather than a getter since the CXL label type does not support this
> concept. The answer for CXL labels will always be true.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/namespace_devs.c |    8 +++-----
>  drivers/nvdimm/nd.h             |    7 +++++++
>  2 files changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> index 94da804372bf..f33245c27cc4 100644
> --- a/drivers/nvdimm/namespace_devs.c
> +++ b/drivers/nvdimm/namespace_devs.c
> @@ -1847,15 +1847,13 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
>  		list_for_each_entry(label_ent, &nd_mapping->labels, list) {
>  			struct nd_namespace_label *nd_label = label_ent->label;
>  			u16 position, nlabel;
> -			u64 isetcookie;
>  
>  			if (!nd_label)
>  				continue;
> -			isetcookie = nsl_get_isetcookie(ndd, nd_label);
>  			position = nsl_get_position(ndd, nd_label);
>  			nlabel = nsl_get_nlabel(ndd, nd_label);
>  
> -			if (isetcookie != cookie)
> +			if (!nsl_validate_isetcookie(ndd, nd_label, cookie))
>  				continue;
>  
>  			if (memcmp(nd_label->uuid, uuid, NSLABEL_UUID_LEN) != 0)
> @@ -1968,10 +1966,10 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  		return ERR_PTR(-ENXIO);
>  	}
>  
> -	if (nsl_get_isetcookie(ndd, nd_label) != cookie) {
> +	if (!nsl_validate_isetcookie(ndd, nd_label, cookie)) {
>  		dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
>  				nd_label->uuid);
> -		if (nsl_get_isetcookie(ndd, nd_label) != altcookie)
> +		if (!nsl_validate_isetcookie(ndd, nd_label, altcookie))
>  			return ERR_PTR(-EAGAIN);
>  
>  		dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index 61f43f0edabf..b3feaf3699f7 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -83,6 +83,13 @@ static inline u64 nsl_get_isetcookie(struct nvdimm_drvdata *ndd,
>  	return __le64_to_cpu(nd_label->isetcookie);
>  }
>  
> +static inline bool nsl_validate_isetcookie(struct nvdimm_drvdata *ndd,
> +					   struct nd_namespace_label *nd_label,
> +					   u64 cookie)
> +{
> +	return cookie == __le64_to_cpu(nd_label->isetcookie);
> +}
> +
>  static inline u16 nsl_get_position(struct nvdimm_drvdata *ndd,
>  				   struct nd_namespace_label *nd_label)
>  {
> 



* Re: [PATCH 04/23] libnvdimm/labels: Add a checksum calculation helper
  2021-08-09 22:28 ` [PATCH 04/23] libnvdimm/labels: Add a checksum calculation helper Dan Williams
@ 2021-08-11 18:44   ` Jonathan Cameron
  0 siblings, 0 replies; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:44 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:08 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> helpers that abstract the label type from the implementation. The CXL
> label format is mostly similar to the EFI label format with concepts /
> fields added, like dynamic region creation and label type guids, and
> other concepts removed like BLK-mode and interleave-set-cookie ids.
> 
> CXL labels support checksums by default, but early versions of the EFI
> labels did not. Add a validate function that can return true in the case
> the label format does not implement a checksum.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/label.c |   68 +++++++++++++++++++++++++-----------------------
>  1 file changed, 35 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index b40a4eda1d89..3f73412dd438 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -346,29 +346,45 @@ static bool preamble_next(struct nvdimm_drvdata *ndd,
>  			free, nslot);
>  }
>  
> +static bool nsl_validate_checksum(struct nvdimm_drvdata *ndd,
> +				  struct nd_namespace_label *nd_label)
> +{
> +	u64 sum, sum_save;
> +
> +	if (!namespace_label_has(ndd, checksum))
> +		return true;
> +
> +	sum_save = nsl_get_checksum(ndd, nd_label);
> +	nsl_set_checksum(ndd, nd_label, 0);
> +	sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
> +	nsl_set_checksum(ndd, nd_label, sum_save);
> +	return sum == sum_save;
> +}
> +
> +static void nsl_calculate_checksum(struct nvdimm_drvdata *ndd,
> +				   struct nd_namespace_label *nd_label)
> +{
> +	u64 sum;
> +
> +	if (!namespace_label_has(ndd, checksum))
> +		return;
> +	nsl_set_checksum(ndd, nd_label, 0);
> +	sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
> +	nsl_set_checksum(ndd, nd_label, sum);
> +}
> +
>  static bool slot_valid(struct nvdimm_drvdata *ndd,
>  		struct nd_namespace_label *nd_label, u32 slot)
>  {
> +	bool valid;
> +
>  	/* check that we are written where we expect to be written */
>  	if (slot != nsl_get_slot(ndd, nd_label))
>  		return false;
> -
> -	/* check checksum */
> -	if (namespace_label_has(ndd, checksum)) {
> -		u64 sum, sum_save;
> -
> -		sum_save = nsl_get_checksum(ndd, nd_label);
> -		nsl_set_checksum(ndd, nd_label, 0);
> -		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
> -		nsl_set_checksum(ndd, nd_label, sum_save);
> -		if (sum != sum_save) {
> -			dev_dbg(ndd->dev, "fail checksum. slot: %d expect: %#llx\n",
> -				slot, sum);
> -			return false;
> -		}
> -	}
> -
> -	return true;
> +	valid = nsl_validate_checksum(ndd, nd_label);
> +	if (!valid)
> +		dev_dbg(ndd->dev, "fail checksum. slot: %d\n", slot);
> +	return valid;
>  }
>  
>  int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
> @@ -812,13 +828,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
>  		guid_copy(&nd_label->abstraction_guid,
>  				to_abstraction_guid(ndns->claim_class,
>  					&nd_label->abstraction_guid));
> -	if (namespace_label_has(ndd, checksum)) {
> -		u64 sum;
> -
> -		nsl_set_checksum(ndd, nd_label, 0);
> -		sum = nd_fletcher64(nd_label, sizeof_namespace_label(ndd), 1);
> -		nsl_set_checksum(ndd, nd_label, sum);
> -	}
> +	nsl_calculate_checksum(ndd, nd_label);
>  	nd_dbg_dpa(nd_region, ndd, res, "\n");
>  
>  	/* update label */
> @@ -1049,15 +1059,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  			guid_copy(&nd_label->abstraction_guid,
>  					to_abstraction_guid(ndns->claim_class,
>  						&nd_label->abstraction_guid));
> -
> -		if (namespace_label_has(ndd, checksum)) {
> -			u64 sum;
> -
> -			nsl_set_checksum(ndd, nd_label, 0);
> -			sum = nd_fletcher64(nd_label,
> -					sizeof_namespace_label(ndd), 1);
> -			nsl_set_checksum(ndd, nd_label, sum);
> -		}
> +		nsl_calculate_checksum(ndd, nd_label);
>  
>  		/* update label */
>  		offset = nd_label_offset(ndd, nd_label);
> 



* Re: [PATCH 05/23] libnvdimm/labels: Add blk isetcookie set / validation helpers
  2021-08-09 22:28 ` [PATCH 05/23] libnvdimm/labels: Add blk isetcookie set / validation helpers Dan Williams
@ 2021-08-11 18:45   ` Jonathan Cameron
  0 siblings, 0 replies; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:45 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:14 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> helpers that abstract the label type from the implementation. The CXL
> label format is mostly similar to the EFI label format with concepts /
> fields added, like dynamic region creation and label type guids, and
> other concepts removed like BLK-mode and interleave-set-cookie ids.
> 
> Given that BLK-mode is not even supported on CXL, hide the BLK-mode
> specific details inside the helpers.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/label.c          |   30 ++++++++++++++++++++++++++++--
>  drivers/nvdimm/namespace_devs.c |    9 ++-------
>  drivers/nvdimm/nd.h             |    4 ++++
>  3 files changed, 34 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index 3f73412dd438..d1a7f399cfe4 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -898,6 +898,33 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
>  	return NULL;
>  }
>  
> +static void nsl_set_blk_isetcookie(struct nvdimm_drvdata *ndd,
> +				   struct nd_namespace_label *nd_label,
> +				   u64 isetcookie)
> +{
> +	if (namespace_label_has(ndd, type_guid)) {
> +		nsl_set_isetcookie(ndd, nd_label, isetcookie);
> +		return;
> +	}
> +	nsl_set_isetcookie(ndd, nd_label, 0); /* N/A */
> +}
> +
> +bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
> +				 struct nd_namespace_label *nd_label,
> +				 u64 isetcookie)
> +{
> +	if (!namespace_label_has(ndd, type_guid))
> +		return true;
> +
> +	if (nsl_get_isetcookie(ndd, nd_label) != isetcookie) {
> +		dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n", isetcookie,
> +			nsl_get_isetcookie(ndd, nd_label));
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
>  /*
>   * 1/ Account all the labels that can be freed after this update
>   * 2/ Allocate and write the label to the staging (next) index
> @@ -1042,12 +1069,11 @@ static int __blk_label_update(struct nd_region *nd_region,
>  				nsl_set_nlabel(ndd, nd_label, 0xffff);
>  				nsl_set_position(ndd, nd_label, 0xffff);
>  			}
> -			nsl_set_isetcookie(ndd, nd_label, nd_set->cookie2);
>  		} else {
>  			nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
>  			nsl_set_position(ndd, nd_label, 0); /* N/A */
> -			nsl_set_isetcookie(ndd, nd_label, 0); /* N/A */
>  		}
> +		nsl_set_blk_isetcookie(ndd, nd_label, nd_set->cookie2);
>  
>  		nsl_set_dpa(ndd, nd_label, res->start);
>  		nsl_set_rawsize(ndd, nd_label, resource_size(res));
> diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> index fb9e080ce654..fbd0c2fcea4a 100644
> --- a/drivers/nvdimm/namespace_devs.c
> +++ b/drivers/nvdimm/namespace_devs.c
> @@ -2272,14 +2272,9 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  					&nd_label->type_guid);
>  			return ERR_PTR(-EAGAIN);
>  		}
> -
> -		if (nd_label->isetcookie != __cpu_to_le64(nd_set->cookie2)) {
> -			dev_dbg(ndd->dev, "expect cookie %#llx got %#llx\n",
> -					nd_set->cookie2,
> -					nsl_get_isetcookie(ndd, nd_label));
> -			return ERR_PTR(-EAGAIN);
> -		}
>  	}
> +	if (!nsl_validate_blk_isetcookie(ndd, nd_label, nd_set->cookie2))
> +		return ERR_PTR(-EAGAIN);
>  
>  	nsblk = kzalloc(sizeof(*nsblk), GFP_KERNEL);
>  	if (!nsblk)
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index 416846fe7818..2a9a608b7f17 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -176,6 +176,10 @@ static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
>  	nd_label->lbasize = __cpu_to_le64(lbasize);
>  }
>  
> +bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
> +				 struct nd_namespace_label *nd_label,
> +				 u64 isetcookie);
> +
>  struct nd_region_data {
>  	int ns_count;
>  	int ns_active;
> 



* Re: [PATCH 06/23] libnvdimm/labels: Add blk special cases for nlabel and position helpers
  2021-08-09 22:28 ` [PATCH 06/23] libnvdimm/labels: Add blk special cases for nlabel and position helpers Dan Williams
@ 2021-08-11 18:45   ` Jonathan Cameron
  0 siblings, 0 replies; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:45 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:19 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> helpers that abstract the label type from the implementation. The CXL
> label format is mostly similar to the EFI label format with concepts /
> fields added, like dynamic region creation and label type guids, and
> other concepts removed like BLK-mode and interleave-set-cookie ids.
> 
> Finish off the BLK-mode specific helper conversion with the nlabel and
> position behaviour that is specific to EFI v1.2 labels and not the
> original v1.1 definition.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/label.c |   46 +++++++++++++++++++++++++++++-----------------
>  1 file changed, 29 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index d1a7f399cfe4..7188675c0955 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -898,6 +898,10 @@ static struct resource *to_resource(struct nvdimm_drvdata *ndd,
>  	return NULL;
>  }
>  
> +/*
> + * Use the presence of the type_guid as a flag to determine isetcookie
> + * usage and nlabel + position policy for blk-aperture namespaces.
> + */
>  static void nsl_set_blk_isetcookie(struct nvdimm_drvdata *ndd,
>  				   struct nd_namespace_label *nd_label,
>  				   u64 isetcookie)
> @@ -925,6 +929,28 @@ bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
>  	return true;
>  }
>  
> +static void nsl_set_blk_nlabel(struct nvdimm_drvdata *ndd,
> +			       struct nd_namespace_label *nd_label, int nlabel,
> +			       bool first)
> +{
> +	if (!namespace_label_has(ndd, type_guid)) {
> +		nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
> +		return;
> +	}
> +	nsl_set_nlabel(ndd, nd_label, first ? nlabel : 0xffff);
> +}
> +
> +static void nsl_set_blk_position(struct nvdimm_drvdata *ndd,
> +				 struct nd_namespace_label *nd_label,
> +				 bool first)
> +{
> +	if (!namespace_label_has(ndd, type_guid)) {
> +		nsl_set_position(ndd, nd_label, 0);
> +		return;
> +	}
> +	nsl_set_position(ndd, nd_label, first ? 0 : 0xffff);
> +}
> +
>  /*
>   * 1/ Account all the labels that can be freed after this update
>   * 2/ Allocate and write the label to the staging (next) index
> @@ -1056,23 +1082,9 @@ static int __blk_label_update(struct nd_region *nd_region,
>  		nsl_set_name(ndd, nd_label, nsblk->alt_name);
>  		nsl_set_flags(ndd, nd_label, NSLABEL_FLAG_LOCAL);
>  
> -		/*
> -		 * Use the presence of the type_guid as a flag to
> -		 * determine isetcookie usage and nlabel + position
> -		 * policy for blk-aperture namespaces.
> -		 */
> -		if (namespace_label_has(ndd, type_guid)) {
> -			if (i == min_dpa_idx) {
> -				nsl_set_nlabel(ndd, nd_label, nsblk->num_resources);
> -				nsl_set_position(ndd, nd_label, 0);
> -			} else {
> -				nsl_set_nlabel(ndd, nd_label, 0xffff);
> -				nsl_set_position(ndd, nd_label, 0xffff);
> -			}
> -		} else {
> -			nsl_set_nlabel(ndd, nd_label, 0); /* N/A */
> -			nsl_set_position(ndd, nd_label, 0); /* N/A */
> -		}
> +		nsl_set_blk_nlabel(ndd, nd_label, nsblk->num_resources,
> +				   i == min_dpa_idx);
> +		nsl_set_blk_position(ndd, nd_label, i == min_dpa_idx);
>  		nsl_set_blk_isetcookie(ndd, nd_label, nd_set->cookie2);
>  
>  		nsl_set_dpa(ndd, nd_label, res->start);
> 



* Re: [PATCH 07/23] libnvdimm/labels: Add type-guid helpers
  2021-08-09 22:28 ` [PATCH 07/23] libnvdimm/labels: Add type-guid helpers Dan Williams
@ 2021-08-11 18:46   ` Jonathan Cameron
  0 siblings, 0 replies; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:46 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:24 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for CXL label support, which does not have the type-guid
> concept, wrap the existing users with nsl_set_type_guid, and
> nsl_validate_type_guid. Recall that the type-guid is a value in the ACPI
> NFIT table to indicate how the memory range is used / should be
> presented to upper layers.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/label.c          |   26 ++++++++++++++++++++++----
>  drivers/nvdimm/namespace_devs.c |   19 ++++---------------
>  drivers/nvdimm/nd.h             |    2 ++
>  3 files changed, 28 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index 7188675c0955..294ffc3cb582 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -772,6 +772,26 @@ static void reap_victim(struct nd_mapping *nd_mapping,
>  	victim->label = NULL;
>  }
>  
> +static void nsl_set_type_guid(struct nvdimm_drvdata *ndd,
> +			      struct nd_namespace_label *nd_label, guid_t *guid)
> +{
> +	if (namespace_label_has(ndd, type_guid))
> +		guid_copy(&nd_label->type_guid, guid);
> +}
> +
> +bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
> +			    struct nd_namespace_label *nd_label, guid_t *guid)
> +{
> +	if (!namespace_label_has(ndd, type_guid))
> +		return true;
> +	if (!guid_equal(&nd_label->type_guid, guid)) {
> +		dev_dbg(ndd->dev, "expect type_guid %pUb got %pUb\n", guid,
> +			&nd_label->type_guid);
> +		return false;
> +	}
> +	return true;
> +}
> +
>  static int __pmem_label_update(struct nd_region *nd_region,
>  		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
>  		int pos, unsigned long flags)
> @@ -822,8 +842,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
>  	nsl_set_lbasize(ndd, nd_label, nspm->lbasize);
>  	nsl_set_dpa(ndd, nd_label, res->start);
>  	nsl_set_slot(ndd, nd_label, slot);
> -	if (namespace_label_has(ndd, type_guid))
> -		guid_copy(&nd_label->type_guid, &nd_set->type_guid);
> +	nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
>  	if (namespace_label_has(ndd, abstraction_guid))
>  		guid_copy(&nd_label->abstraction_guid,
>  				to_abstraction_guid(ndns->claim_class,
> @@ -1091,8 +1110,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  		nsl_set_rawsize(ndd, nd_label, resource_size(res));
>  		nsl_set_lbasize(ndd, nd_label, nsblk->lbasize);
>  		nsl_set_slot(ndd, nd_label, slot);
> -		if (namespace_label_has(ndd, type_guid))
> -			guid_copy(&nd_label->type_guid, &nd_set->type_guid);
> +		nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
>  		if (namespace_label_has(ndd, abstraction_guid))
>  			guid_copy(&nd_label->abstraction_guid,
>  					to_abstraction_guid(ndns->claim_class,
> diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> index fbd0c2fcea4a..af5a31dd3147 100644
> --- a/drivers/nvdimm/namespace_devs.c
> +++ b/drivers/nvdimm/namespace_devs.c
> @@ -1859,14 +1859,9 @@ static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid,
>  			if (memcmp(nd_label->uuid, uuid, NSLABEL_UUID_LEN) != 0)
>  				continue;
>  
> -			if (namespace_label_has(ndd, type_guid)
> -					&& !guid_equal(&nd_set->type_guid,
> -						&nd_label->type_guid)) {
> -				dev_dbg(ndd->dev, "expect type_guid %pUb got %pUb\n",
> -						&nd_set->type_guid,
> -						&nd_label->type_guid);
> +			if (!nsl_validate_type_guid(ndd, nd_label,
> +						    &nd_set->type_guid))
>  				continue;
> -			}
>  
>  			if (found_uuid) {
>  				dev_dbg(ndd->dev, "duplicate entry for uuid\n");
> @@ -2265,14 +2260,8 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  	struct device *dev = NULL;
>  	struct resource *res;
>  
> -	if (namespace_label_has(ndd, type_guid)) {
> -		if (!guid_equal(&nd_set->type_guid, &nd_label->type_guid)) {
> -			dev_dbg(ndd->dev, "expect type_guid %pUb got %pUb\n",
> -					&nd_set->type_guid,
> -					&nd_label->type_guid);
> -			return ERR_PTR(-EAGAIN);
> -		}
> -	}
> +	if (!nsl_validate_type_guid(ndd, nd_label, &nd_set->type_guid))
> +		return ERR_PTR(-EAGAIN);
>  	if (!nsl_validate_blk_isetcookie(ndd, nd_label, nd_set->cookie2))
>  		return ERR_PTR(-EAGAIN);
>  
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index 2a9a608b7f17..f3c364df9449 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -179,6 +179,8 @@ static inline void nsl_set_lbasize(struct nvdimm_drvdata *ndd,
>  bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
>  				 struct nd_namespace_label *nd_label,
>  				 u64 isetcookie);
> +bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
> +			    struct nd_namespace_label *nd_label, guid_t *guid);
>  
>  struct nd_region_data {
>  	int ns_count;
> 



* Re: [PATCH 08/23] libnvdimm/labels: Add claim class helpers
  2021-08-09 22:28 ` [PATCH 08/23] libnvdimm/labels: Add claim class helpers Dan Williams
@ 2021-08-11 18:46   ` Jonathan Cameron
  0 siblings, 0 replies; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:46 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:30 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for LIBNVDIMM to manage labels on CXL devices deploy
> helpers that abstract the label type from the implementation. The CXL
> label format is mostly similar to the EFI label format with concepts /
> fields added, like dynamic region creation and label type guids, and
> other concepts removed like BLK-mode and interleave-set-cookie ids.
> 
> CXL labels do have the concept of a claim class represented by an
> "abstraction" identifier. It turns out both label implementations use
> the same ids, but EFI encodes them as GUIDs and CXL labels encode them
> as UUIDs. For now abstract out the claim class such that the UUID vs
> GUID distinction can later be hidden in the helper.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/nvdimm/label.c          |   31 ++++++++++++++++++++++---------
>  drivers/nvdimm/label.h          |    1 -
>  drivers/nvdimm/namespace_devs.c |   13 ++++---------
>  drivers/nvdimm/nd.h             |    2 ++
>  4 files changed, 28 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index 294ffc3cb582..7f473f9db300 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -724,7 +724,7 @@ static unsigned long nd_label_offset(struct nvdimm_drvdata *ndd,
>  		- (unsigned long) to_namespace_index(ndd, 0);
>  }
>  
> -enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
> +static enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
>  {
>  	if (guid_equal(guid, &nvdimm_btt_guid))
>  		return NVDIMM_CCLASS_BTT;
> @@ -792,6 +792,25 @@ bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
>  	return true;
>  }
>  
> +static void nsl_set_claim_class(struct nvdimm_drvdata *ndd,
> +				struct nd_namespace_label *nd_label,
> +				enum nvdimm_claim_class claim_class)
> +{
> +	if (!namespace_label_has(ndd, abstraction_guid))
> +		return;
> +	guid_copy(&nd_label->abstraction_guid,
> +		  to_abstraction_guid(claim_class,
> +				      &nd_label->abstraction_guid));
> +}
> +
> +enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd,
> +					    struct nd_namespace_label *nd_label)
> +{
> +	if (!namespace_label_has(ndd, abstraction_guid))
> +		return NVDIMM_CCLASS_NONE;
> +	return to_nvdimm_cclass(&nd_label->abstraction_guid);
> +}
> +
>  static int __pmem_label_update(struct nd_region *nd_region,
>  		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
>  		int pos, unsigned long flags)
> @@ -843,10 +862,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
>  	nsl_set_dpa(ndd, nd_label, res->start);
>  	nsl_set_slot(ndd, nd_label, slot);
>  	nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
> -	if (namespace_label_has(ndd, abstraction_guid))
> -		guid_copy(&nd_label->abstraction_guid,
> -				to_abstraction_guid(ndns->claim_class,
> -					&nd_label->abstraction_guid));
> +	nsl_set_claim_class(ndd, nd_label, ndns->claim_class);
>  	nsl_calculate_checksum(ndd, nd_label);
>  	nd_dbg_dpa(nd_region, ndd, res, "\n");
>  
> @@ -1111,10 +1127,7 @@ static int __blk_label_update(struct nd_region *nd_region,
>  		nsl_set_lbasize(ndd, nd_label, nsblk->lbasize);
>  		nsl_set_slot(ndd, nd_label, slot);
>  		nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid);
> -		if (namespace_label_has(ndd, abstraction_guid))
> -			guid_copy(&nd_label->abstraction_guid,
> -					to_abstraction_guid(ndns->claim_class,
> -						&nd_label->abstraction_guid));
> +		nsl_set_claim_class(ndd, nd_label, ndns->claim_class);
>  		nsl_calculate_checksum(ndd, nd_label);
>  
>  		/* update label */
> diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
> index 956b6d1bd8cc..31f94fad7b92 100644
> --- a/drivers/nvdimm/label.h
> +++ b/drivers/nvdimm/label.h
> @@ -135,7 +135,6 @@ struct nd_namespace_label *nd_label_active(struct nvdimm_drvdata *ndd, int n);
>  u32 nd_label_alloc_slot(struct nvdimm_drvdata *ndd);
>  bool nd_label_free_slot(struct nvdimm_drvdata *ndd, u32 slot);
>  u32 nd_label_nfree(struct nvdimm_drvdata *ndd);
> -enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid);
>  struct nd_region;
>  struct nd_namespace_pmem;
>  struct nd_namespace_blk;
> diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
> index af5a31dd3147..58c76d74127a 100644
> --- a/drivers/nvdimm/namespace_devs.c
> +++ b/drivers/nvdimm/namespace_devs.c
> @@ -2042,10 +2042,8 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
>  		nspm->uuid = kmemdup((void __force *) label0->uuid,
>  				NSLABEL_UUID_LEN, GFP_KERNEL);
>  		nspm->lbasize = nsl_get_lbasize(ndd, label0);
> -		if (namespace_label_has(ndd, abstraction_guid))
> -			nspm->nsio.common.claim_class
> -				= to_nvdimm_cclass(&label0->abstraction_guid);
> -
> +		nspm->nsio.common.claim_class =
> +			nsl_get_claim_class(ndd, label0);
>  	}
>  
>  	if (!nspm->alt_name || !nspm->uuid) {
> @@ -2273,11 +2271,8 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
>  	dev->parent = &nd_region->dev;
>  	nsblk->id = -1;
>  	nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
> -	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN,
> -			GFP_KERNEL);
> -	if (namespace_label_has(ndd, abstraction_guid))
> -		nsblk->common.claim_class
> -			= to_nvdimm_cclass(&nd_label->abstraction_guid);
> +	nsblk->uuid = kmemdup(nd_label->uuid, NSLABEL_UUID_LEN, GFP_KERNEL);
> +	nsblk->common.claim_class = nsl_get_claim_class(ndd, nd_label);
>  	if (!nsblk->uuid)
>  		goto blk_err;
>  	nsl_get_name(ndd, nd_label, name);
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index f3c364df9449..ac80d9680367 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -181,6 +181,8 @@ bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,
>  				 u64 isetcookie);
>  bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd,
>  			    struct nd_namespace_label *nd_label, guid_t *guid);
> +enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd,
> +					    struct nd_namespace_label *nd_label);
>  
>  struct nd_region_data {
>  	int ns_count;
> 



* Re: [PATCH 09/23] libnvdimm/labels: Add address-abstraction uuid definitions
  2021-08-09 22:28 ` [PATCH 09/23] libnvdimm/labels: Add address-abstraction uuid definitions Dan Williams
@ 2021-08-11 18:49   ` Jonathan Cameron
  2021-08-11 22:47     ` Dan Williams
  0 siblings, 1 reply; 61+ messages in thread
From: Jonathan Cameron @ 2021-08-11 18:49 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, nvdimm, ben.widawsky, vishal.l.verma,
	alison.schofield, ira.weiny

On Mon, 9 Aug 2021 15:28:35 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> The EFI definition of the labels represents the Linux "claim class" with
> a GUID. The CXL definition of the labels stores the same identifier in
> UUID byte order. In preparation for adding CXL label support, enable the
> claim class to optionally handle uuids.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

I've already commented on 10 and 11 so this was backfilling tags
for the ones I'd looked at earlier but looked good to me.

I'm not all that familiar with this code yet, so all my checking was of the
"does it look locally correct?" variety.

Out of time for today, and not sure when I'll get to looking at the remainder.

Jonathan

> ---
>  drivers/nvdimm/label.c |   54 ++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 52 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
> index 7f473f9db300..2ba31b883b28 100644
> --- a/drivers/nvdimm/label.c
> +++ b/drivers/nvdimm/label.c
> @@ -17,6 +17,11 @@ static guid_t nvdimm_btt2_guid;
>  static guid_t nvdimm_pfn_guid;
>  static guid_t nvdimm_dax_guid;
>  
> +static uuid_t nvdimm_btt_uuid;
> +static uuid_t nvdimm_btt2_uuid;
> +static uuid_t nvdimm_pfn_uuid;
> +static uuid_t nvdimm_dax_uuid;
> +
>  static const char NSINDEX_SIGNATURE[] = "NAMESPACE_INDEX\0";
>  
>  static u32 best_seq(u32 a, u32 b)
> @@ -724,7 +729,7 @@ static unsigned long nd_label_offset(struct nvdimm_drvdata *ndd,
>  		- (unsigned long) to_namespace_index(ndd, 0);
>  }
>  
> -static enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
> +static enum nvdimm_claim_class guid_to_nvdimm_cclass(guid_t *guid)
>  {
>  	if (guid_equal(guid, &nvdimm_btt_guid))
>  		return NVDIMM_CCLASS_BTT;
> @@ -740,6 +745,23 @@ static enum nvdimm_claim_class to_nvdimm_cclass(guid_t *guid)
>  	return NVDIMM_CCLASS_UNKNOWN;
>  }
>  
> +/* CXL labels store UUIDs instead of GUIDs for the same data */
> +enum nvdimm_claim_class uuid_to_nvdimm_cclass(uuid_t *uuid)
> +{
> +	if (uuid_equal(uuid, &nvdimm_btt_uuid))
> +		return NVDIMM_CCLASS_BTT;
> +	else if (uuid_equal(uuid, &nvdimm_btt2_uuid))
> +		return NVDIMM_CCLASS_BTT2;
> +	else if (uuid_equal(uuid, &nvdimm_pfn_uuid))
> +		return NVDIMM_CCLASS_PFN;
> +	else if (uuid_equal(uuid, &nvdimm_dax_uuid))
> +		return NVDIMM_CCLASS_DAX;
> +	else if (uuid_equal(uuid, &uuid_null))
> +		return NVDIMM_CCLASS_NONE;
> +
> +	return NVDIMM_CCLASS_UNKNOWN;
> +}
> +
>  static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
>  	guid_t *target)
>  {
> @@ -761,6 +783,29 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
>  		return &guid_null;
>  }
>  
> +/* CXL labels store UUIDs instead of GUIDs for the same data */
> +__maybe_unused
> +static const uuid_t *to_abstraction_uuid(enum nvdimm_claim_class claim_class,
> +					 uuid_t *target)
> +{
> +	if (claim_class == NVDIMM_CCLASS_BTT)
> +		return &nvdimm_btt_uuid;
> +	else if (claim_class == NVDIMM_CCLASS_BTT2)
> +		return &nvdimm_btt2_uuid;
> +	else if (claim_class == NVDIMM_CCLASS_PFN)
> +		return &nvdimm_pfn_uuid;
> +	else if (claim_class == NVDIMM_CCLASS_DAX)
> +		return &nvdimm_dax_uuid;
> +	else if (claim_class == NVDIMM_CCLASS_UNKNOWN) {
> +		/*
> +		 * If we're modifying a namespace for which we don't
> +		 * know the claim_class, don't touch the existing uuid.
> +		 */
> +		return target;
> +	} else
> +		return &uuid_null;
> +}
> +
>  static void reap_victim(struct nd_mapping *nd_mapping,
>  		struct nd_label_ent *victim)
>  {
> @@ -808,7 +853,7 @@ enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd,
>  {
>  	if (!namespace_label_has(ndd, abstraction_guid))
>  		return NVDIMM_CCLASS_NONE;
> -	return to_nvdimm_cclass(&nd_label->abstraction_guid);
> +	return guid_to_nvdimm_cclass(&nd_label->abstraction_guid);
>  }
>  
>  static int __pmem_label_update(struct nd_region *nd_region,
> @@ -1395,5 +1440,10 @@ int __init nd_label_init(void)
>  	WARN_ON(guid_parse(NVDIMM_PFN_GUID, &nvdimm_pfn_guid));
>  	WARN_ON(guid_parse(NVDIMM_DAX_GUID, &nvdimm_dax_guid));
>  
> +	WARN_ON(uuid_parse(NVDIMM_BTT_GUID, &nvdimm_btt_uuid));
> +	WARN_ON(uuid_parse(NVDIMM_BTT2_GUID, &nvdimm_btt2_uuid));
> +	WARN_ON(uuid_parse(NVDIMM_PFN_GUID, &nvdimm_pfn_uuid));
> +	WARN_ON(uuid_parse(NVDIMM_DAX_GUID, &nvdimm_dax_uuid));
> +
>  	return 0;
>  }
> 



* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-11 17:11       ` Dan Williams
@ 2021-08-11 19:18         ` Andy Shevchenko
  2021-08-11 19:26           ` Dan Williams
  2021-08-12 22:34           ` Dan Williams
  0 siblings, 2 replies; 61+ messages in thread
From: Andy Shevchenko @ 2021-08-11 19:18 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Ben Widawsky,
	Vishal L Verma, Schofield, Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 10:11:56AM -0700, Dan Williams wrote:
> On Wed, Aug 11, 2021 at 9:59 AM Andy Shevchenko
> <andriy.shevchenko@linux.intel.com> wrote:
> > On Wed, Aug 11, 2021 at 11:05:55AM +0300, Andy Shevchenko wrote:
> > > On Mon, Aug 09, 2021 at 03:28:40PM -0700, Dan Williams wrote:
> > > > In preparation for CXL labels that move the uuid to a different offset
> > > > in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> > > > proper uuid_t type. That type definition predated the libnvdimm
> > > > subsystem, so now is as a good a time as any to convert all the uuid
> > > > handling in the subsystem to uuid_t to match the helpers.
> > > >
> > > > As for the whitespace changes, all new code is clang-format compliant.
> > >
> > > Thanks, looks good to me!
> > > Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> >
> > Sorry, I'm in doubt this Rb stays. See below.
> >
> > ...
> >
> > > >  struct btt_sb {
> > > >     u8 signature[BTT_SIG_LEN];
> > > > -   u8 uuid[16];
> > > > -   u8 parent_uuid[16];
> > > > +   uuid_t uuid;
> > > > +   uuid_t parent_uuid;
> >
> > uuid_t type is internal to the kernel. This seems to be an ABI?
> 
> No, it's not a user ABI, this is an on-disk metadata structure. uuid_t
> is appropriate.

So, changing the size of the structure is forbidden after this change, right?
I don't like this. It means we are stuck with this type as it is forever, and
no change will be allowed.

> > > >     __le32 flags;
> > > >     __le16 version_major;
> > > >     __le16 version_minor;
> >
> > ...
> >
> > > >  struct nd_namespace_label {
> > > > -   u8 uuid[NSLABEL_UUID_LEN];
> > > > +   uuid_t uuid;
> >
> > So seems this.
> >
> > > >     u8 name[NSLABEL_NAME_LEN];
> > > >     __le32 flags;
> > > >     __le16 nlabel;
> >
> > ...
> >
> > I'm not familiar with FS stuff, but looks to me like unwanted changes.
> > In such cases you have to use export/import APIs. otherwise you make the type
> > carved in stone without even knowing that it's part of an ABI or some hardware
> > / firmware interfaces.
> 
> Can you clarify the concern? Carving the intent that these 16-bytes
> are meant to be treated as UUID in stone is deliberate.

It's a bit of a surprise to me. Do we have any documentation on that?
How do we handle such types in the kernel when they cover a lot of code?

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-11 19:18         ` Andy Shevchenko
@ 2021-08-11 19:26           ` Dan Williams
  2021-08-12 22:34           ` Dan Williams
  1 sibling, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-11 19:26 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Ben Widawsky,
	Vishal L Verma, Schofield, Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 12:18 PM Andy Shevchenko
<andriy.shevchenko@linux.intel.com> wrote:
>
> On Wed, Aug 11, 2021 at 10:11:56AM -0700, Dan Williams wrote:
> > On Wed, Aug 11, 2021 at 9:59 AM Andy Shevchenko
> > <andriy.shevchenko@linux.intel.com> wrote:
> > > On Wed, Aug 11, 2021 at 11:05:55AM +0300, Andy Shevchenko wrote:
> > > > On Mon, Aug 09, 2021 at 03:28:40PM -0700, Dan Williams wrote:
> > > > > In preparation for CXL labels that move the uuid to a different offset
> > > > > in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> > > > > proper uuid_t type. That type definition predated the libnvdimm
> > > > > subsystem, so now is as a good a time as any to convert all the uuid
> > > > > handling in the subsystem to uuid_t to match the helpers.
> > > > >
> > > > > As for the whitespace changes, all new code is clang-format compliant.
> > > >
> > > > Thanks, looks good to me!
> > > > Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > >
> > > Sorry, I'm in doubt this Rb stays. See below.
> > >
> > > ...
> > >
> > > > >  struct btt_sb {
> > > > >     u8 signature[BTT_SIG_LEN];
> > > > > -   u8 uuid[16];
> > > > > -   u8 parent_uuid[16];
> > > > > +   uuid_t uuid;
> > > > > +   uuid_t parent_uuid;
> > >
> > > uuid_t type is internal to the kernel. This seems to be an ABI?
> >
> > No, it's not a user ABI, this is an on-disk metadata structure. uuid_t
> > is appropriate.
>
> So, changing the size of the structure is forbidden after this change, right?
> I don't like this. It means we are stuck with this type as it is forever, and
> no change will be allowed.

You want the flexibility to make a uuid_t not a 16-byte value? Isn't
that no longer a uuid_t? However, if the answer is yes, then I agree
it can not be used in these "on-disk" structures. I would expect
uuid_t size to be as reliable as any other Linux kernel specific type
that implies a size, and I would nak a patch that tried to change
uuid_t the way you describe.

That is, if I'm understanding your concern correctly...

>
> > > > >     __le32 flags;
> > > > >     __le16 version_major;
> > > > >     __le16 version_minor;
> > >
> > > ...
> > >
> > > > >  struct nd_namespace_label {
> > > > > -   u8 uuid[NSLABEL_UUID_LEN];
> > > > > +   uuid_t uuid;
> > >
> > > So seems this.
> > >
> > > > >     u8 name[NSLABEL_NAME_LEN];
> > > > >     __le32 flags;
> > > > >     __le16 nlabel;
> > >
> > > ...
> > >
> > > I'm not familiar with FS stuff, but looks to me like unwanted changes.
> > > In such cases you have to use export/import APIs. otherwise you make the type
> > > carved in stone without even knowing that it's part of an ABI or some hardware
> > > / firmware interfaces.
> >
> > Can you clarify the concern? Carving the intent that these 16-bytes
> > are meant to be treated as UUID in stone is deliberate.
>
> It's a bit of a surprise to me. Do we have any documentation on that?

Documentation on these superblock formats? Some are in EFI, some are
Linux specific.

> How do we handle such types in the kernel when they cover a lot of code?

I'm not following?


* Re: [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy
       [not found]       ` <xp0k4.l2r85dw1p7do@intel.com>
@ 2021-08-11 21:03         ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-11 21:03 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Vishal L Verma, Alison, Ira

On Wed, Aug 11, 2021 at 1:50 PM Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> On Tue, 10 Aug 2021 15:40, Dan Williams <dan.j.williams@intel.com> wrote:
>
> [snip]
>
> >
> >The rationale is to be able to run cxl_test on a system that might
> >also have real CXL. For example I run this alongside the current QEMU
> >CXL model, and that results in the cxl_acpi driver attaching to 2
> >devices:
> >
> ># tree /sys/bus/platform/drivers/cxl_acpi
> >/sys/bus/platform/drivers/cxl_acpi
> >├── ACPI0017:00 -> ../../../../devices/platform/ACPI0017:00
> >├── bind
> >├── cxl_acpi.0 -> ../../../../devices/platform/cxl_acpi.0
> >├── module -> ../../../../module/cxl_acpi
> >├── uevent
> >└── unbind
> >
> >When the device is ACPI0017 this code is walking the ACPI bus looking
> >for ACPI0016 devices. A real ACPI0016 will fall through
> >is_mock_port() to the original to_cxl_host_bridge() logic that just
> >reads the ACPI device HID. In the mock case the cxl_acpi driver has
> >instead been tricked into walking the platform bus, which has real
> >platform devices and the fake cxl_test ones:
> >
> >/sys/bus/platform/devices/
> >├── ACPI0012:00 -> ../../../devices/platform/ACPI0012:00
> >├── ACPI0017:00 -> ../../../devices/platform/ACPI0017:00
> >├── alarmtimer.0.auto -> ../../../devices/pnp0/00:04/rtc/rtc0/alarmtimer.0.auto
> >├── cxl_acpi.0 -> ../../../devices/platform/cxl_acpi.0
> >├── cxl_host_bridge.0 -> ../../../devices/platform/cxl_host_bridge.0
> >├── cxl_host_bridge.1 -> ../../../devices/platform/cxl_host_bridge.1
> >├── cxl_host_bridge.2 -> ../../../devices/platform/cxl_host_bridge.2
> >├── cxl_host_bridge.3 -> ../../../devices/platform/cxl_host_bridge.3
> >├── e820_pmem -> ../../../devices/platform/e820_pmem
> >├── efi-framebuffer.0 -> ../../../devices/platform/efi-framebuffer.0
> >├── efivars.0 -> ../../../devices/platform/efivars.0
> >├── Fixed MDIO bus.0 -> ../../../devices/platform/Fixed MDIO bus.0
> >├── i8042 -> ../../../devices/platform/i8042
> >├── iTCO_wdt.1.auto -> ../../../devices/pci0000:00/0000:00:1f.0/iTCO_wdt.1.auto
> >├── kgdboc -> ../../../devices/platform/kgdboc
> >├── pcspkr -> ../../../devices/platform/pcspkr
> >├── PNP0103:00 -> ../../../devices/platform/PNP0103:00
> >├── QEMU0002:00 -> ../../../devices/pci0000:00/QEMU0002:00
> >├── rtc-efi.0 -> ../../../devices/platform/rtc-efi.0
> >└── serial8250 -> ../../../devices/platform/serial8250
> >
> >...where is_mock_port() filters out those real platform devices. Note
> >that ACPI devices are atypical in that they get registered on the ACPI
> >bus and some get a companion device with the same name registered on
> >the platform bus.
>
> More relevant to endpoints, but here too... Will we be able to have an
> interleave region comprised of a QEMU emulated device and a mock device? I think
> folks that are using QEMU for hardware development purposes would really
> like that functionality.

I guess never say "never", but my intent was that the 2 bus-types were
distinct and the streams never crossed.


* Re: [PATCH 09/23] libnvdimm/labels: Add address-abstraction uuid definitions
  2021-08-11 18:49   ` Jonathan Cameron
@ 2021-08-11 22:47     ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-11 22:47 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, Linux NVDIMM, Ben Widawsky, Vishal L Verma, Schofield,
	Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 11:49 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Mon, 9 Aug 2021 15:28:35 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
> > The EFI definition of the labels represents the Linux "claim class" with
> > a GUID. The CXL definition of the labels stores the same identifier in
> > UUID byte order. In preparation for adding CXL label support, enable the
> > claim class to optionally handle uuids.
> >
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> I've already commented on 10 and 11, so this was backfilling tags
> for the ones I'd looked at earlier, but it looked good to me.
>
> I'm not all that familiar with this code yet, so all my checking was of the
> "does it look locally correct?" variety.
>
> Out of time for today, and not sure when I'll get to looking at the remainder.

I appreciate the slog through this legacy nvdimm code!


* Re: [PATCH 11/23] libnvdimm/labels: Introduce CXL labels
  2021-08-11 18:41   ` Jonathan Cameron
@ 2021-08-11 23:01     ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-11 23:01 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, Linux NVDIMM, Ben Widawsky, Vishal L Verma, Schofield,
	Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 11:42 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Mon, 9 Aug 2021 15:28:46 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
> > Now that all use sites of label data have been converted to nsl_*
> > helpers, introduce the CXL label format. The ->cxl flag in
> > nvdimm_drvdata indicates the label format the device expects. A
> > follow-on patch allows a bus provider to select the label style.
> >
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>
> A few trivial things inline. Nothing that actually 'needs' changing though.
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> > index e6e77691dbec..71ffde56fac0 100644
> > --- a/drivers/nvdimm/label.h
> > +++ b/drivers/nvdimm/label.h
> > @@ -64,40 +64,77 @@ struct nd_namespace_index {
> >       u8 free[];
> >  };
> >
> > -/**
> > - * struct nd_namespace_label - namespace superblock
> > - * @uuid: UUID per RFC 4122
> > - * @name: optional name (NULL-terminated)
> > - * @flags: see NSLABEL_FLAG_*
> > - * @nlabel: num labels to describe this ns
> > - * @position: labels position in set
> > - * @isetcookie: interleave set cookie
> > - * @lbasize: LBA size in bytes or 0 for pmem
> > - * @dpa: DPA of NVM range on this DIMM
> > - * @rawsize: size of namespace
> > - * @slot: slot of this label in label area
> > - * @unused: must be zero
> > - */
> >  struct nd_namespace_label {
> > +     union {
> Cross reference might be a nice thing to include?
> Table 212 I think...
> > +             struct nvdimm_cxl_label {
> > +                     uuid_t type;
> > +                     uuid_t uuid;
> > +                     u8 name[NSLABEL_NAME_LEN];
> > +                     __le32 flags;
> > +                     __le16 nlabel;
>
> Perhaps call out nlabel is nrange in the spec?

Actually, this is a bug, because nlabel in EFI labels is the width of
the interleave set. In CXL labels that property is moved to the region
and this field is only for discontiguous namespace support. Good
indirect catch!

>
> > +                     __le16 position;
> > +                     __le64 dpa;
> > +                     __le64 rawsize;
> > +                     __le32 slot;
> > +                     __le32 align;
> > +                     uuid_t region_uuid;
> > +                     uuid_t abstraction_uuid;
> > +                     __le16 lbasize;
> > +                     u8 reserved[0x56];
> > +                     __le64 checksum;
> > +             } cxl;
> > +             /**
> > +              * struct nvdimm_efi_label - namespace superblock
> > +              * @uuid: UUID per RFC 4122
> > +              * @name: optional name (NULL-terminated)
> > +              * @flags: see NSLABEL_FLAG_*
> > +              * @nlabel: num labels to describe this ns
> > +              * @position: labels position in set
> > +              * @isetcookie: interleave set cookie
> > +              * @lbasize: LBA size in bytes or 0 for pmem
> > +              * @dpa: DPA of NVM range on this DIMM
> > +              * @rawsize: size of namespace
> > +              * @slot: slot of this label in label area
> > +              * @unused: must be zero
> > +              */
> > +             struct nvdimm_efi_label {
> > +                     uuid_t uuid;
> > +                     u8 name[NSLABEL_NAME_LEN];
> > +                     __le32 flags;
> > +                     __le16 nlabel;
> > +                     __le16 position;
> > +                     __le64 isetcookie;
> > +                     __le64 lbasize;
> > +                     __le64 dpa;
> > +                     __le64 rawsize;
> > +                     __le32 slot;
> > +                     /*
> > +                      * Accessing fields past this point should be
> > +                      * gated by a efi_namespace_label_has() check.
> > +                      */
> > +                     u8 align;
> > +                     u8 reserved[3];
> > +                     guid_t type_guid;
> > +                     guid_t abstraction_guid;
> > +                     u8 reserved2[88];
> > +                     __le64 checksum;
> > +             } efi;
> > +     };
> > +};
> > +
> > +struct cxl_region_label {
>
> Perhaps separate this out to another patch so the diff ends up less confusing?

I'll take a look.


* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-11 18:13   ` Jonathan Cameron
@ 2021-08-12 21:17     ` Dan Williams
  0 siblings, 0 replies; 61+ messages in thread
From: Dan Williams @ 2021-08-12 21:17 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, Andy Shevchenko, Linux NVDIMM, Ben Widawsky,
	Vishal L Verma, Schofield, Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 11:14 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Mon, 9 Aug 2021 15:28:40 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
> > In preparation for CXL labels that move the uuid to a different offset
> > in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> > proper uuid_t type. That type definition predated the libnvdimm
> > subsystem, so now is as good a time as any to convert all the uuid
> > handling in the subsystem to uuid_t to match the helpers.
> >
> > As for the whitespace changes, all new code is clang-format compliant.
> >
> > Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>
> There are a few interesting corners where you have cleaned out a pointless
> copy before validating uuids. Perhaps call that out as a change in here
> as it isn't as simple as just replacing like with like?
> Perhaps I'm missing some reason that was needed in the code before this
> patch.
>
> All LGTM.
>
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> > ---
> >  drivers/nvdimm/btt.c            |   11 +++--
> >  drivers/nvdimm/btt.h            |    4 +-
> >  drivers/nvdimm/btt_devs.c       |   12 +++---
> >  drivers/nvdimm/core.c           |   40 ++-----------------
> >  drivers/nvdimm/label.c          |   34 +++++++---------
> >  drivers/nvdimm/label.h          |    3 -
> >  drivers/nvdimm/namespace_devs.c |   83 ++++++++++++++++++++-------------------
> >  drivers/nvdimm/nd-core.h        |    5 +-
> >  drivers/nvdimm/nd.h             |   37 ++++++++++++++++-
> >  drivers/nvdimm/pfn_devs.c       |    2 -
> >  include/linux/nd.h              |    4 +-
> >  11 files changed, 115 insertions(+), 120 deletions(-)
> >
> > diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
> > index 92dec4952297..1cdfbadb7408 100644
>
> > @@ -1050,7 +1050,6 @@ static int __blk_label_update(struct nd_region *nd_region,
> >       unsigned long *free, *victim_map = NULL;
> >       struct resource *res, **old_res_list;
> >       struct nd_label_id label_id;
> > -     u8 uuid[NSLABEL_UUID_LEN];
> >       int min_dpa_idx = 0;
> >       LIST_HEAD(list);
> >       u32 nslot, slot;
> > @@ -1088,8 +1087,7 @@ static int __blk_label_update(struct nd_region *nd_region,
> >               /* mark unused labels for garbage collection */
> >               for_each_clear_bit_le(slot, free, nslot) {
> >                       nd_label = to_label(ndd, slot);
> > -                     memcpy(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
> > -                     if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
> > +                     if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
> >                               continue;
>
> The original code here was 'unusual'. I'm not sure why it couldn't always be
> validated in place.
>

Correct, I noticed that too and cleaned it up, but you're right I
should have at least noted that in the changelog.


* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-11 19:18         ` Andy Shevchenko
  2021-08-11 19:26           ` Dan Williams
@ 2021-08-12 22:34           ` Dan Williams
  2021-08-13 10:14             ` Andy Shevchenko
  1 sibling, 1 reply; 61+ messages in thread
From: Dan Williams @ 2021-08-12 22:34 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Ben Widawsky,
	Vishal L Verma, Schofield, Alison, Weiny, Ira

On Wed, Aug 11, 2021 at 12:18 PM Andy Shevchenko
<andriy.shevchenko@linux.intel.com> wrote:
>
> On Wed, Aug 11, 2021 at 10:11:56AM -0700, Dan Williams wrote:
> > On Wed, Aug 11, 2021 at 9:59 AM Andy Shevchenko
> > <andriy.shevchenko@linux.intel.com> wrote:
> > > On Wed, Aug 11, 2021 at 11:05:55AM +0300, Andy Shevchenko wrote:
> > > > On Mon, Aug 09, 2021 at 03:28:40PM -0700, Dan Williams wrote:
> > > > > In preparation for CXL labels that move the uuid to a different offset
> > > > > in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> > > > > proper uuid_t type. That type definition predated the libnvdimm
> > > > > subsystem, so now is as good a time as any to convert all the uuid
> > > > > handling in the subsystem to uuid_t to match the helpers.
> > > > >
> > > > > As for the whitespace changes, all new code is clang-format compliant.
> > > >
> > > > Thanks, looks good to me!
> > > > Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > >
> > > Sorry, I doubt this Rb stays. See below.

Andy, does this incremental diff restore your reviewed-by? The awkward
piece of this for me is that it introduces a handful of unnecessary
memory copies. See some of the new nsl_get_uuid() additions and the
extra copy in nsl_uuid_equal()

diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 1cdfbadb7408..52de60b7adee 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -988,8 +988,8 @@ static int btt_arena_write_layout(struct arena_info *arena)
                return -ENOMEM;

        strncpy(super->signature, BTT_SIG, BTT_SIG_LEN);
-       uuid_copy(&super->uuid, nd_btt->uuid);
-       uuid_copy(&super->parent_uuid, parent_uuid);
+       export_uuid(super->uuid, nd_btt->uuid);
+       export_uuid(super->parent_uuid, parent_uuid);
        super->flags = cpu_to_le32(arena->flags);
        super->version_major = cpu_to_le16(arena->version_major);
        super->version_minor = cpu_to_le16(arena->version_minor);
diff --git a/drivers/nvdimm/btt.h b/drivers/nvdimm/btt.h
index fc3512d92ae5..0c76c0333f6e 100644
--- a/drivers/nvdimm/btt.h
+++ b/drivers/nvdimm/btt.h
@@ -94,8 +94,8 @@ struct log_group {

 struct btt_sb {
        u8 signature[BTT_SIG_LEN];
-       uuid_t uuid;
-       uuid_t parent_uuid;
+       u8 uuid[16];
+       u8 parent_uuid[16];
        __le32 flags;
        __le16 version_major;
        __le16 version_minor;
diff --git a/drivers/nvdimm/btt_devs.c b/drivers/nvdimm/btt_devs.c
index 5ad45e9e48c9..8b52e5144f08 100644
--- a/drivers/nvdimm/btt_devs.c
+++ b/drivers/nvdimm/btt_devs.c
@@ -244,14 +244,16 @@ struct device *nd_btt_create(struct nd_region *nd_region)
  */
 bool nd_btt_arena_is_valid(struct nd_btt *nd_btt, struct btt_sb *super)
 {
-       const uuid_t *parent_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
+       const uuid_t *ns_uuid = nd_dev_to_uuid(&nd_btt->ndns->dev);
+       uuid_t parent_uuid;
        u64 checksum;

        if (memcmp(super->signature, BTT_SIG, BTT_SIG_LEN) != 0)
                return false;

-       if (!uuid_is_null(&super->parent_uuid))
-               if (!uuid_equal(&super->parent_uuid, parent_uuid))
+       import_uuid(&parent_uuid, super->parent_uuid);
+       if (!uuid_is_null(&parent_uuid))
+               if (!uuid_equal(&parent_uuid, ns_uuid))
                        return false;

        checksum = le64_to_cpu(super->checksum);
diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 99608e6aeaae..a799ccbc8c05 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -925,7 +925,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
                if (!label_ent->label)
                        continue;
                if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags) ||
-                   uuid_equal(nspm->uuid, nsl_ref_uuid(ndd, label_ent->label)))
+                   nsl_uuid_equal(ndd, label_ent->label, nspm->uuid))
                        reap_victim(nd_mapping, label_ent);
        }

@@ -1087,7 +1087,7 @@ static int __blk_label_update(struct nd_region *nd_region,
                /* mark unused labels for garbage collection */
                for_each_clear_bit_le(slot, free, nslot) {
                        nd_label = to_label(ndd, slot);
-                       if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
+                       if (!nsl_uuid_equal(ndd, nd_label, nsblk->uuid))
                                continue;
                        res = to_resource(ndd, nd_label);
                        if (res && is_old_resource(res, old_res_list,
@@ -1204,7 +1204,7 @@ static int __blk_label_update(struct nd_region *nd_region,
                if (!nd_label)
                        continue;
                nlabel++;
-               if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
+               if (!nsl_uuid_equal(ndd, nd_label, nsblk->uuid))
                        continue;
                nlabel--;
                list_move(&label_ent->list, &list);
@@ -1234,7 +1234,7 @@ static int __blk_label_update(struct nd_region *nd_region,
        }
        for_each_clear_bit_le(slot, free, nslot) {
                nd_label = to_label(ndd, slot);
-               if (!nsl_validate_uuid(ndd, nd_label, nsblk->uuid))
+               if (!nsl_uuid_equal(ndd, nd_label, nsblk->uuid))
                        continue;
                res = to_resource(ndd, nd_label);
                res->flags &= ~DPA_RESOURCE_ADJUSTED;
@@ -1338,7 +1338,7 @@ static int del_labels(struct nd_mapping
*nd_mapping, uuid_t *uuid)
                if (!nd_label)
                        continue;
                active++;
-               if (!nsl_validate_uuid(ndd, nd_label, uuid))
+               if (!nsl_uuid_equal(ndd, nd_label, uuid))
                        continue;
                active--;
                slot = to_slot(ndd, nd_label);
diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
index e6e77691dbec..6e07771aa8f1 100644
--- a/drivers/nvdimm/label.h
+++ b/drivers/nvdimm/label.h
@@ -79,7 +79,7 @@ struct nd_namespace_index {
  * @unused: must be zero
  */
 struct nd_namespace_label {
-       uuid_t uuid;
+       u8 uuid[16];
        u8 name[NSLABEL_NAME_LEN];
        __le32 flags;
        __le16 nlabel;
diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index 20ea3ccd1f29..d4959981c7d4 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -1231,10 +1231,12 @@ static int namespace_update_uuid(struct
nd_region *nd_region,
                list_for_each_entry(label_ent, &nd_mapping->labels, list) {
                        struct nd_namespace_label *nd_label = label_ent->label;
                        struct nd_label_id label_id;
+                       uuid_t uuid;

                        if (!nd_label)
                                continue;
-                       nd_label_gen_id(&label_id, nsl_ref_uuid(ndd, nd_label),
+                       nsl_get_uuid(ndd, nd_label, &uuid);
+                       nd_label_gen_id(&label_id, &uuid,
                                        nsl_get_flags(ndd, nd_label));
                        if (strcmp(old_label_id.id, label_id.id) == 0)
                                set_bit(ND_LABEL_REAP, &label_ent->flags);
@@ -1856,7 +1858,7 @@ static bool has_uuid_at_pos(struct nd_region
*nd_region, const uuid_t *uuid,
                        if (!nsl_validate_isetcookie(ndd, nd_label, cookie))
                                continue;

-                       if (!nsl_validate_uuid(ndd, nd_label, uuid))
+                       if (!nsl_uuid_equal(ndd, nd_label, uuid))
                                continue;

                        if (!nsl_validate_type_guid(ndd, nd_label,
@@ -1900,7 +1902,7 @@ static int select_pmem_id(struct nd_region
*nd_region, const uuid_t *pmem_id)
                        nd_label = label_ent->label;
                        if (!nd_label)
                                continue;
-                       if (nsl_validate_uuid(ndd, nd_label, pmem_id))
+                       if (nsl_uuid_equal(ndd, nd_label, pmem_id))
                                break;
                        nd_label = NULL;
                }
@@ -1924,7 +1926,7 @@ static int select_pmem_id(struct nd_region
*nd_region, const uuid_t *pmem_id)
                else {
                        dev_dbg(&nd_region->dev, "%s invalid label for %pUb\n",
                                dev_name(ndd->dev),
-                               nsl_ref_uuid(ndd, nd_label));
+                               nsl_uuid_raw(ndd, nd_label));
                        return -EINVAL;
                }

@@ -1954,6 +1956,7 @@ static struct device
*create_namespace_pmem(struct nd_region *nd_region,
        resource_size_t size = 0;
        struct resource *res;
        struct device *dev;
+       uuid_t uuid;
        int rc = 0;
        u16 i;

@@ -1964,12 +1967,12 @@ static struct device
*create_namespace_pmem(struct nd_region *nd_region,

        if (!nsl_validate_isetcookie(ndd, nd_label, cookie)) {
                dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
-                       nsl_ref_uuid(ndd, nd_label));
+                       nsl_uuid_raw(ndd, nd_label));
                if (!nsl_validate_isetcookie(ndd, nd_label, altcookie))
                        return ERR_PTR(-EAGAIN);

                dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
-                       nsl_ref_uuid(ndd, nd_label));
+                       nsl_uuid_raw(ndd, nd_label));
        }

        nspm = kzalloc(sizeof(*nspm), GFP_KERNEL);
@@ -1985,11 +1988,12 @@ static struct device
*create_namespace_pmem(struct nd_region *nd_region,
        res->flags = IORESOURCE_MEM;

        for (i = 0; i < nd_region->ndr_mappings; i++) {
-               if (has_uuid_at_pos(nd_region, nsl_ref_uuid(ndd, nd_label),
-                                   cookie, i))
+               uuid_t uuid;
+
+               nsl_get_uuid(ndd, nd_label, &uuid);
+               if (has_uuid_at_pos(nd_region, &uuid, cookie, i))
                        continue;
-               if (has_uuid_at_pos(nd_region, nsl_ref_uuid(ndd, nd_label),
-                                   altcookie, i))
+               if (has_uuid_at_pos(nd_region, &uuid, altcookie, i))
                        continue;
                break;
        }
@@ -2003,7 +2007,7 @@ static struct device
*create_namespace_pmem(struct nd_region *nd_region,
                 * find a dimm with two instances of the same uuid.
                 */
                dev_err(&nd_region->dev, "%s missing label for %pUb\n",
-                       nvdimm_name(nvdimm), nsl_ref_uuid(ndd, nd_label));
+                       nvdimm_name(nvdimm), nsl_uuid_raw(ndd, nd_label));
                rc = -EINVAL;
                goto err;
        }
@@ -2016,7 +2020,8 @@ static struct device
*create_namespace_pmem(struct nd_region *nd_region,
         * the dimm being enabled (i.e. nd_label_reserve_dpa()
         * succeeded).
         */
-       rc = select_pmem_id(nd_region, nsl_ref_uuid(ndd, nd_label));
+       nsl_get_uuid(ndd, nd_label, &uuid);
+       rc = select_pmem_id(nd_region, &uuid);
        if (rc)
                goto err;

@@ -2042,8 +2047,8 @@ static struct device
*create_namespace_pmem(struct nd_region *nd_region,
                WARN_ON(nspm->alt_name || nspm->uuid);
                nspm->alt_name = kmemdup(nsl_ref_name(ndd, label0),
                                         NSLABEL_NAME_LEN, GFP_KERNEL);
-               nspm->uuid = kmemdup(nsl_ref_uuid(ndd, label0), sizeof(uuid_t),
-                                    GFP_KERNEL);
+               nsl_get_uuid(ndd, label0, &uuid);
+               nspm->uuid = kmemdup(&uuid, sizeof(uuid_t), GFP_KERNEL);
                nspm->lbasize = nsl_get_lbasize(ndd, label0);
                nspm->nsio.common.claim_class =
                        nsl_get_claim_class(ndd, label0);
@@ -2228,7 +2233,7 @@ static int add_namespace_resource(struct
nd_region *nd_region,
                        continue;
                }

-               if (!nsl_validate_uuid(ndd, nd_label, uuid))
+               if (!nsl_uuid_equal(ndd, nd_label, uuid))
                        continue;
                if (is_namespace_blk(devs[i])) {
                        res = nsblk_add_resource(nd_region, ndd,
@@ -2260,6 +2265,7 @@ static struct device
*create_namespace_blk(struct nd_region *nd_region,
        char name[NSLABEL_NAME_LEN];
        struct device *dev = NULL;
        struct resource *res;
+       uuid_t uuid;

        if (!nsl_validate_type_guid(ndd, nd_label, &nd_set->type_guid))
                return ERR_PTR(-EAGAIN);
@@ -2274,7 +2280,8 @@ static struct device
*create_namespace_blk(struct nd_region *nd_region,
        dev->parent = &nd_region->dev;
        nsblk->id = -1;
        nsblk->lbasize = nsl_get_lbasize(ndd, nd_label);
-       nsblk->uuid = kmemdup(nsl_ref_uuid(ndd, nd_label),
sizeof(uuid_t), GFP_KERNEL);
+       nsl_get_uuid(ndd, nd_label, &uuid);
+       nsblk->uuid = kmemdup(&uuid, sizeof(uuid_t), GFP_KERNEL);
        nsblk->common.claim_class = nsl_get_claim_class(ndd, nd_label);
        if (!nsblk->uuid)
                goto blk_err;
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 132a8021e3ad..b781bf674f0a 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -180,7 +180,7 @@ static inline const uuid_t *nsl_get_uuid(struct
nvdimm_drvdata *ndd,
                                         struct nd_namespace_label *nd_label,
                                         uuid_t *uuid)
 {
-       uuid_copy(uuid, &nd_label->uuid);
+       import_uuid(uuid, nd_label->uuid);
        return uuid;
 }

@@ -188,21 +188,24 @@ static inline const uuid_t *nsl_set_uuid(struct
nvdimm_drvdata *ndd,
                                         struct nd_namespace_label *nd_label,
                                         const uuid_t *uuid)
 {
-       uuid_copy(&nd_label->uuid, uuid);
-       return &nd_label->uuid;
+       export_uuid(nd_label->uuid, uuid);
+       return uuid;
 }

-static inline bool nsl_validate_uuid(struct nvdimm_drvdata *ndd,
-                                    struct nd_namespace_label *nd_label,
-                                    const uuid_t *uuid)
+static inline bool nsl_uuid_equal(struct nvdimm_drvdata *ndd,
+                                 struct nd_namespace_label *nd_label,
+                                 const uuid_t *uuid)
 {
-       return uuid_equal(&nd_label->uuid, uuid);
+       uuid_t tmp;
+
+       import_uuid(&tmp, nd_label->uuid);
+       return uuid_equal(&tmp, uuid);
 }

-static inline const uuid_t *nsl_ref_uuid(struct nvdimm_drvdata *ndd,
-                                        struct nd_namespace_label *nd_label)
+static inline const u8 *nsl_uuid_raw(struct nvdimm_drvdata *ndd,
+                                    struct nd_namespace_label *nd_label)
 {
-       return &nd_label->uuid;
+       return nd_label->uuid;
 }

 bool nsl_validate_blk_isetcookie(struct nvdimm_drvdata *ndd,


* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-12 22:34           ` Dan Williams
@ 2021-08-13 10:14             ` Andy Shevchenko
  2021-08-14  7:35               ` Christoph Hellwig
  0 siblings, 1 reply; 61+ messages in thread
From: Andy Shevchenko @ 2021-08-13 10:14 UTC (permalink / raw)
  To: Dan Williams, Christoph Hellwig
  Cc: linux-cxl, Linux NVDIMM, Jonathan Cameron, Ben Widawsky,
	Vishal L Verma, Schofield, Alison, Weiny, Ira

On Thu, Aug 12, 2021 at 03:34:59PM -0700, Dan Williams wrote:
> On Wed, Aug 11, 2021 at 12:18 PM Andy Shevchenko
> <andriy.shevchenko@linux.intel.com> wrote:
> >
> > On Wed, Aug 11, 2021 at 10:11:56AM -0700, Dan Williams wrote:
> > > On Wed, Aug 11, 2021 at 9:59 AM Andy Shevchenko
> > > <andriy.shevchenko@linux.intel.com> wrote:
> > > > On Wed, Aug 11, 2021 at 11:05:55AM +0300, Andy Shevchenko wrote:
> > > > > On Mon, Aug 09, 2021 at 03:28:40PM -0700, Dan Williams wrote:
> > > > > > In preparation for CXL labels that move the uuid to a different offset
> > > > > > in the label, add nsl_{ref,get,validate}_uuid(). These helpers use the
> > > > > > proper uuid_t type. That type definition predated the libnvdimm
> > > > > > subsystem, so now is as good a time as any to convert all the uuid
> > > > > > handling in the subsystem to uuid_t to match the helpers.
> > > > > >
> > > > > > As for the whitespace changes, all new code is clang-format compliant.
> > > > >
> > > > > Thanks, looks good to me!
> > > > > Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > > >
> > > > Sorry, I doubt this Rb stays. See below.
> 
> Andy, does this incremental diff restore your reviewed-by? The awkward
> piece of this for me is that it introduces a handful of unnecessary
> memory copies. See some of the new nsl_get_uuid() additions and the
> extra copy in nsl_uuid_equal()

It does, thanks! As for the deeper discussion, I think you need to talk to
Christoph. It was his idea to move uuid_t from the UAPI to an internal kernel
type, and I think it made, and still makes, sense to be that way.

But if we already have users of uuid_t, like you are doing here (without this
patch), then it will be fine I guess. Not my area to advise or decide.

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH 10/23] libnvdimm/labels: Add uuid helpers
  2021-08-13 10:14             ` Andy Shevchenko
@ 2021-08-14  7:35               ` Christoph Hellwig
  0 siblings, 0 replies; 61+ messages in thread
From: Christoph Hellwig @ 2021-08-14  7:35 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Dan Williams, Christoph Hellwig, linux-cxl, Linux NVDIMM,
	Jonathan Cameron, Ben Widawsky, Vishal L Verma, Schofield,
	Alison, Weiny, Ira

On Fri, Aug 13, 2021 at 01:14:58PM +0300, Andy Shevchenko wrote:
> > Andy, does this incremental diff restore your reviewed-by? The awkward
> > piece of this for me is that it introduces a handful of unnecessary
> > memory copies. See some of the new nsl_get_uuid() additions and the
> > extra copy in nsl_uuid_equal()
> 
> It does, thanks! As for the deeper discussion I think you need to talk to
> Christoph. It was his idea to move uuid_t from UAPI to internal kernel type.
> And I think it made and still makes sense to be that way.
> 
> But if we have already users of uuid_t like you are doing here (without this
> patch) then it will be fine I guess. Not my area to advise or decide.

I'm missing a lot of context here.  But that whole uuid/guid thing is
a little complex:

 - for userspace APIs and on-disk formats a uuid is nothing but a blob
 - userspace historically has its own library to deal with this (libuuid),
   which defines a uuid_t itself.

So instead of trying to build abstractions that somehow work in different
software ecosystems I think just treating it as the blob that it is for
exchange makes life easier for everyone.  It also really makes definitions
of on-disk structures more clear when using the raw bytes instead of a
semi-opaque typedef.
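[A minimal userspace sketch of the "raw bytes" approach described above.
The struct name, fields, and helper are hypothetical, not taken from the
actual kernel or libnvdimm code; the point is only that declaring the
UUID as 16 plain bytes keeps the on-disk layout self-describing and works
without either the kernel's uuid_t or libuuid's conflicting uuid_t.]

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical on-disk label: the UUID field is nothing but 16 raw
 * bytes, so the layout is unambiguous and needs no typedef from the
 * kernel (uuid_t) or from userspace libuuid (its own uuid_t). */
struct disk_label {
	uint8_t uuid[16];
	uint64_t dpa;
} __attribute__((packed));

/* Comparing (or copying) the blob needs only memcmp/memcpy; any
 * conversion to a richer in-memory UUID type happens at the boundary,
 * not in the on-disk structure definition itself. */
static int label_uuid_equal(const struct disk_label *l,
			    const uint8_t uuid[16])
{
	return memcmp(l->uuid, uuid, 16) == 0;
}
```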


end of thread, other threads:[~2021-08-14  7:35 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-09 22:27 [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
2021-08-09 22:27 ` [PATCH 01/23] libnvdimm/labels: Introduce getters for namespace label fields Dan Williams
2021-08-10 20:48   ` Ben Widawsky
2021-08-10 21:58     ` Dan Williams
2021-08-11 18:44   ` Jonathan Cameron
2021-08-09 22:27 ` [PATCH 02/23] libnvdimm/labels: Add isetcookie validation helper Dan Williams
2021-08-11 18:44   ` Jonathan Cameron
2021-08-09 22:28 ` [PATCH 03/23] libnvdimm/labels: Introduce label setter helpers Dan Williams
2021-08-11 17:27   ` Jonathan Cameron
2021-08-11 17:42     ` Dan Williams
2021-08-09 22:28 ` [PATCH 04/23] libnvdimm/labels: Add a checksum calculation helper Dan Williams
2021-08-11 18:44   ` Jonathan Cameron
2021-08-09 22:28 ` [PATCH 05/23] libnvdimm/labels: Add blk isetcookie set / validation helpers Dan Williams
2021-08-11 18:45   ` Jonathan Cameron
2021-08-09 22:28 ` [PATCH 06/23] libnvdimm/labels: Add blk special cases for nlabel and position helpers Dan Williams
2021-08-11 18:45   ` Jonathan Cameron
2021-08-09 22:28 ` [PATCH 07/23] libnvdimm/labels: Add type-guid helpers Dan Williams
2021-08-11 18:46   ` Jonathan Cameron
2021-08-09 22:28 ` [PATCH 08/23] libnvdimm/labels: Add claim class helpers Dan Williams
2021-08-11 18:46   ` Jonathan Cameron
2021-08-09 22:28 ` [PATCH 09/23] libnvdimm/labels: Add address-abstraction uuid definitions Dan Williams
2021-08-11 18:49   ` Jonathan Cameron
2021-08-11 22:47     ` Dan Williams
2021-08-09 22:28 ` [PATCH 10/23] libnvdimm/labels: Add uuid helpers Dan Williams
2021-08-11  8:05   ` Andy Shevchenko
2021-08-11 16:59     ` Andy Shevchenko
2021-08-11 17:11       ` Dan Williams
2021-08-11 19:18         ` Andy Shevchenko
2021-08-11 19:26           ` Dan Williams
2021-08-12 22:34           ` Dan Williams
2021-08-13 10:14             ` Andy Shevchenko
2021-08-14  7:35               ` Christoph Hellwig
2021-08-11 18:13   ` Jonathan Cameron
2021-08-12 21:17     ` Dan Williams
2021-08-09 22:28 ` [PATCH 11/23] libnvdimm/labels: Introduce CXL labels Dan Williams
2021-08-11 18:41   ` Jonathan Cameron
2021-08-11 23:01     ` Dan Williams
2021-08-09 22:28 ` [PATCH 12/23] cxl/pci: Make 'struct cxl_mem' device type generic Dan Williams
2021-08-09 22:28 ` [PATCH 13/23] cxl/mbox: Introduce the mbox_send operation Dan Williams
2021-08-09 22:29 ` [PATCH 14/23] cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core Dan Williams
2021-08-11  6:11   ` [PATCH v2 " Dan Williams
2021-08-09 22:29 ` [PATCH 15/23] cxl/pci: Use module_pci_driver Dan Williams
2021-08-09 22:29 ` [PATCH 16/23] cxl/mbox: Convert 'enabled_cmds' to DECLARE_BITMAP Dan Williams
2021-08-09 22:29 ` [PATCH 17/23] cxl/mbox: Add exclusive kernel command support Dan Williams
2021-08-10 21:34   ` Ben Widawsky
2021-08-10 21:52     ` Dan Williams
2021-08-10 22:06       ` Ben Widawsky
2021-08-11  1:22         ` Dan Williams
2021-08-11  2:14           ` Dan Williams
2021-08-09 22:29 ` [PATCH 18/23] cxl/pmem: Translate NVDIMM label commands to CXL label commands Dan Williams
2021-08-09 22:29 ` [PATCH 19/23] cxl/pmem: Add support for multiple nvdimm-bridge objects Dan Williams
2021-08-09 22:29 ` [PATCH 20/23] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy Dan Williams
2021-08-10 21:57   ` Ben Widawsky
2021-08-10 22:40     ` Dan Williams
2021-08-11 15:18       ` Ben Widawsky
     [not found]       ` <xp0k4.l2r85dw1p7do@intel.com>
2021-08-11 21:03         ` Dan Williams
2021-08-09 22:29 ` [PATCH 21/23] cxl/bus: Populate the target list at decoder create Dan Williams
2021-08-09 22:29 ` [PATCH 22/23] cxl/mbox: Move command definitions to common location Dan Williams
2021-08-09 22:29 ` [PATCH 23/23] tools/testing/cxl: Introduce a mock memory device + driver Dan Williams
2021-08-10 22:10 ` [PATCH 00/23] cxl_test: Enable CXL Topology and UAPI regression tests Ben Widawsky
2021-08-10 22:58   ` Dan Williams
