* [PATCH RFC 00/15] Introduce security commands for CXL pmem device
@ 2022-07-15 21:08 Dave Jiang
  2022-07-15 21:08 ` [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation Dave Jiang
                   ` (16 more replies)
  0 siblings, 17 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:08 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

This series is seeking comments on the implementation. It has not been fully
tested yet.

This series adds support for the "Persistent Memory Data-at-rest Security"
command set for CXL memory devices. The enabling is done through
nvdimm_security_ops, since the operations closely mirror those already
supported for persistent memory devices through the NFIT provider. This
enabling does not include the security pass-through commands nor the
Sanitize command.

Under nvdimm_security_ops, this patch series enables get_flags(), freeze(),
change_key(), unlock(), disable(), and erase(). The disable() API does not
support disabling of the master passphrase. To maintain the established user
ABI through the sysfs attribute "security", the "disable" command is left
untouched and a new "disable_master" command is introduced, backed by a new
disable_master() callback in nvdimm_security_ops. A condensed sketch of the
resulting ops table follows.
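
The series ends up registering a security ops table along these lines. The
first six callbacks appear verbatim in the patches below; the disable_master()
hookup lands in patch 15, which is not quoted in full here, so the last entry
and its function name should be read as illustrative:

  static const struct nvdimm_security_ops __cxl_security_ops = {
          .get_flags      = cxl_pmem_get_security_flags,
          .change_key     = cxl_pmem_security_change_key,
          .disable        = cxl_pmem_security_disable,
          .freeze         = cxl_pmem_security_freeze,
          .unlock         = cxl_pmem_security_unlock,
          .erase          = cxl_pmem_security_passphrase_erase,
          /* new callback backing the "disable_master" sysfs command (patch 15) */
          .disable_master = cxl_pmem_security_disable_master,
  };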

This series does not include the plumbing to drive the security commands
directly through the cxl control utility. With this enabling, the security
commands are still exercised through the ndctl tool.

For calls such as unlock() and erase(), the CPU caches must be invalidated
after the operation completes. Currently that implementation resides in
drivers/acpi/nfit/intel.c, with a comment that it should be made cross-arch
once more than just NFIT-based devices need the operation. With the arrival
of CXL persistent memory devices, that is now the case. Introduce
ARCH_HAS_NVDIMM_INVAL_CACHE, modeled after ARCH_HAS_PMEM_API, so that an
architecture can opt in with an implementation. For now only an x86_64
implementation is added, which calls wbinvd_on_all_cpus(). A condensed view
of the opt-in pattern follows.
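
The snippets below mirror what patch 10 adds (the header stub plus the x86
implementation); consumers such as the CXL unlock()/erase() paths then simply
call arch_invalidate_nvdimm_cache() after the device operation succeeds:

  /* include/linux/libnvdimm.h */
  #ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
  void arch_invalidate_nvdimm_cache(void);
  #else
  static inline void arch_invalidate_nvdimm_cache(void)
  {
  }
  #endif

  /* arch/x86/mm/pat/set_memory.c */
  #ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
  void arch_invalidate_nvdimm_cache(void)
  {
          /* flush and invalidate caches on all CPUs before touching the media */
          wbinvd_on_all_cpus();
  }
  EXPORT_SYMBOL_GPL(arch_invalidate_nvdimm_cache);
  #endif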

---

Dave Jiang (15):
      cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation
      tools/testing/cxl: Create context for cxl mock device
      tools/testing/cxl: Add "Get Security State" opcode support
      cxl/pmem: Add "Set Passphrase" security command support
      tools/testing/cxl: Add "Set Passphrase" opcode support
      cxl/pmem: Add Disable Passphrase security command support
      tools/testing/cxl: Add "Disable" security opcode support
      cxl/pmem: Add "Freeze Security State" security command support
      tools/testing/cxl: Add "Freeze Security State" security opcode support
      x86: add an arch helper function to invalidate all cache for nvdimm
      cxl/pmem: Add "Unlock" security command support
      tools/testing/cxl: Add "Unlock" security opcode support
      cxl/pmem: Add "Passphrase Secure Erase" security command support
      tools/testing/cxl: Add "passphrase secure erase" opcode support
      nvdimm/cxl/pmem: Add support for master passphrase disable security command


 arch/x86/Kconfig             |   1 +
 arch/x86/mm/pat/set_memory.c |   8 +
 drivers/acpi/nfit/intel.c    |  28 +--
 drivers/cxl/Kconfig          |  16 ++
 drivers/cxl/Makefile         |   1 +
 drivers/cxl/cxlmem.h         |  41 +++++
 drivers/cxl/pmem.c           |  10 +-
 drivers/cxl/security.c       | 182 ++++++++++++++++++
 drivers/nvdimm/security.c    |  33 +++-
 include/linux/libnvdimm.h    |  10 +
 lib/Kconfig                  |   3 +
 tools/testing/cxl/Kbuild     |   1 +
 tools/testing/cxl/test/mem.c | 348 ++++++++++++++++++++++++++++++++++-
 13 files changed, 644 insertions(+), 38 deletions(-)
 create mode 100644 drivers/cxl/security.c

--



* [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
@ 2022-07-15 21:08 ` Dave Jiang
  2022-07-15 21:09   ` Davidlohr Bueso
  2022-07-18  5:34   ` [PATCH RFC 1/15] " Davidlohr Bueso
  2022-07-15 21:08 ` [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device Dave Jiang
                   ` (15 subsequent siblings)
  16 siblings, 2 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:08 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add nvdimm_security_ops support for the CXL memory device with the
introduction of the ->get_flags() callback. This is part of the "Persistent
Memory Data-at-rest Security" command set for CXL memory devices. The
->get_flags() callback reports the security state of the persistent memory
device as defined by CXL 2.0 spec section 8.2.9.5.6.1.

The nvdimm_security_ops support for CXL is a build option toggled by the
kernel configuration symbol CONFIG_CXL_PMEM_SECURITY.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 drivers/cxl/Kconfig      |   16 +++++++++++++
 drivers/cxl/Makefile     |    1 +
 drivers/cxl/cxlmem.h     |    9 +++++++
 drivers/cxl/pmem.c       |   10 ++++++--
 drivers/cxl/security.c   |   57 ++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/cxl/Kbuild |    1 +
 6 files changed, 92 insertions(+), 2 deletions(-)
 create mode 100644 drivers/cxl/security.c

diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index f64e3984689f..43527a697f60 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -102,4 +102,20 @@ config CXL_SUSPEND
 	def_bool y
 	depends on SUSPEND && CXL_MEM
 
+config CXL_PMEM_SECURITY
+	tristate "CXL PMEM SECURITY: Persistent Memory Security Support"
+	depends on CXL_PMEM
+	default CXL_BUS
+	help
+	  CXL memory device "Persistent Memory Data-at-rest Security" command set
+	  support. Support opcode 0x4500..0x4505. The commands supported are "Get
+	  Security State", "Set Passphrase", "Disable Passphrase", "Unlock",
+	  "Freeze Security State", and "Passphrase Secure Erase". Security operation
+	  is done through nvdimm security_ops.
+
+	  See Chapter 8.2.9.5.6 in the CXL 2.0 specification for a detailed description
+	  of the Persistent Memory Security.
+
+	  If unsure say 'm'.
+
 endif
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
index a78270794150..c19cf28f7512 100644
--- a/drivers/cxl/Makefile
+++ b/drivers/cxl/Makefile
@@ -11,3 +11,4 @@ cxl_pci-y := pci.o
 cxl_acpi-y := acpi.o
 cxl_pmem-y := pmem.o
 cxl_port-y := port.o
+cxl_pmem-$(CONFIG_CXL_PMEM_SECURITY) += security.o
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 7df0b053373a..35de2889aac3 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -250,6 +250,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS	= 0x4303,
 	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
 	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
+	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
@@ -342,6 +343,13 @@ struct cxl_mem_command {
 #define CXL_CMD_FLAG_FORCE_ENABLE BIT(0)
 };
 
+#define CXL_PMEM_SEC_STATE_USER_PASS_SET	0x01
+#define CXL_PMEM_SEC_STATE_MASTER_PASS_SET	0x02
+#define CXL_PMEM_SEC_STATE_LOCKED		0x04
+#define CXL_PMEM_SEC_STATE_FROZEN		0x08
+#define CXL_PMEM_SEC_STATE_USER_PLIMIT		0x10
+#define CXL_PMEM_SEC_STATE_MASTER_PLIMIT	0x20
+
 int cxl_mbox_send_cmd(struct cxl_dev_state *cxlds, u16 opcode, void *in,
 		      size_t in_size, void *out, size_t out_size);
 int cxl_dev_state_identify(struct cxl_dev_state *cxlds);
@@ -370,4 +378,5 @@ struct cxl_hdm {
 	unsigned int interleave_mask;
 	struct cxl_port *port;
 };
+
 #endif /* __CXL_MEM_H__ */
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 0aaa70b4e0f7..6dbf067dcf10 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -10,6 +10,12 @@
 #include "cxlmem.h"
 #include "cxl.h"
 
+#if IS_ENABLED(CONFIG_CXL_PMEM_SECURITY)
+extern const struct nvdimm_security_ops *cxl_security_ops;
+#else
+static const struct nvdimm_security_ops *cxl_security_ops = NULL;
+#endif
+
 /*
  * Ordered workqueue for cxl nvdimm device arrival and departure
  * to coordinate bus rescans when a bridge arrives and trigger remove
@@ -58,8 +64,8 @@ static int cxl_nvdimm_probe(struct device *dev)
 	set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
 	set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
 	set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
-	nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags,
-			       cmd_mask, 0, NULL);
+	nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags,
+				 cmd_mask, 0, NULL, NULL, cxl_security_ops, NULL);
 	if (!nvdimm) {
 		rc = -ENOMEM;
 		goto out;
diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
new file mode 100644
index 000000000000..5b830ae621db
--- /dev/null
+++ b/drivers/cxl/security.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
+#include <linux/libnvdimm.h>
+#include <asm/unaligned.h>
+#include <linux/module.h>
+#include <linux/ndctl.h>
+#include <linux/async.h>
+#include <linux/slab.h>
+#include "cxlmem.h"
+#include "cxl.h"
+
+static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm,
+						 enum nvdimm_passphrase_type ptype)
+{
+	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	unsigned long security_flags = 0;
+	u32 sec_out;
+	int rc;
+
+	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_SECURITY_STATE, NULL, 0,
+			       &sec_out, sizeof(sec_out));
+	if (rc < 0)
+		return 0;
+
+	if (ptype == NVDIMM_MASTER) {
+		if (sec_out & CXL_PMEM_SEC_STATE_MASTER_PASS_SET)
+			set_bit(NVDIMM_SECURITY_UNLOCKED, &security_flags);
+		else
+			set_bit(NVDIMM_SECURITY_DISABLED, &security_flags);
+		if (sec_out & CXL_PMEM_SEC_STATE_MASTER_PLIMIT)
+			set_bit(NVDIMM_SECURITY_FROZEN, &security_flags);
+		return security_flags;
+	}
+
+	if (sec_out & CXL_PMEM_SEC_STATE_USER_PASS_SET) {
+		if (sec_out & CXL_PMEM_SEC_STATE_FROZEN ||
+		    sec_out & CXL_PMEM_SEC_STATE_USER_PLIMIT)
+			set_bit(NVDIMM_SECURITY_FROZEN, &security_flags);
+
+		if (sec_out & CXL_PMEM_SEC_STATE_LOCKED)
+			set_bit(NVDIMM_SECURITY_LOCKED, &security_flags);
+		else
+			set_bit(NVDIMM_SECURITY_UNLOCKED, &security_flags);
+	} else {
+		set_bit(NVDIMM_SECURITY_DISABLED, &security_flags);
+	}
+
+	return security_flags;
+}
+
+static const struct nvdimm_security_ops __cxl_security_ops = {
+	.get_flags = cxl_pmem_get_security_flags,
+};
+
+const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;
diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
index 33543231d453..7db7a35a1c2a 100644
--- a/tools/testing/cxl/Kbuild
+++ b/tools/testing/cxl/Kbuild
@@ -27,6 +27,7 @@ obj-m += cxl_pmem.o
 
 cxl_pmem-y := $(CXL_SRC)/pmem.o
 cxl_pmem-y += config_check.o
+cxl_pmem-$(CONFIG_CXL_PMEM_SECURITY) += $(CXL_SRC)/security.o
 
 obj-m += cxl_port.o
 




* [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
  2022-07-15 21:08 ` [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation Dave Jiang
@ 2022-07-15 21:08 ` Dave Jiang
  2022-07-18  6:29   ` [PATCH RFC 2/15] " Davidlohr Bueso
  2022-08-03 16:36   ` [PATCH RFC 02/15] " Jonathan Cameron
  2022-07-15 21:08 ` [PATCH RFC 03/15] tools/testing/cxl: Add "Get Security State" opcode support Dave Jiang
                   ` (14 subsequent siblings)
  16 siblings, 2 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:08 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add a context struct for the mock device and move the lsa under that context.
This allows additional information, such as the security state and other
persistent security data like the passphrase, to be added to the emulated
test device.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 tools/testing/cxl/test/mem.c |   29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 6b9239b2afd4..723378248321 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -9,6 +9,10 @@
 #include <linux/bits.h>
 #include <cxlmem.h>
 
+struct mock_mdev_data {
+	void *lsa;
+};
+
 #define LSA_SIZE SZ_128K
 #define EFFECT(x) (1U << x)
 
@@ -140,7 +144,8 @@ static int mock_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
-	void *lsa = dev_get_drvdata(cxlds->dev);
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+	void *lsa = mdata->lsa;
 	u32 offset, length;
 
 	if (sizeof(*get_lsa) > cmd->size_in)
@@ -159,7 +164,8 @@ static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 static int mock_set_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_set_lsa *set_lsa = cmd->payload_in;
-	void *lsa = dev_get_drvdata(cxlds->dev);
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+	void *lsa = mdata->lsa;
 	u32 offset, length;
 
 	if (sizeof(*set_lsa) > cmd->size_in)
@@ -237,9 +243,12 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
 	return rc;
 }
 
-static void label_area_release(void *lsa)
+static void cxl_mock_drvdata_release(void *data)
 {
-	vfree(lsa);
+	struct mock_mdev_data *mdata = data;
+
+	vfree(mdata->lsa);
+	vfree(mdata);
 }
 
 static int cxl_mock_mem_probe(struct platform_device *pdev)
@@ -247,13 +256,21 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct cxl_memdev *cxlmd;
 	struct cxl_dev_state *cxlds;
+	struct mock_mdev_data *mdata;
 	void *lsa;
 	int rc;
 
+	mdata = vmalloc(sizeof(*mdata));
+	if (!mdata)
+		return -ENOMEM;
+
 	lsa = vmalloc(LSA_SIZE);
-	if (!lsa)
+	if (!lsa) {
+		vfree(mdata);
 		return -ENOMEM;
-	rc = devm_add_action_or_reset(dev, label_area_release, lsa);
+	}
+
+	rc = devm_add_action_or_reset(dev, cxl_mock_drvdata_release, mdata);
 	if (rc)
 		return rc;
 	dev_set_drvdata(dev, lsa);




* [PATCH RFC 03/15] tools/testing/cxl: Add "Get Security State" opcode support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
  2022-07-15 21:08 ` [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation Dave Jiang
  2022-07-15 21:08 ` [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device Dave Jiang
@ 2022-07-15 21:08 ` Dave Jiang
  2022-08-03 16:51   ` Jonathan Cameron
  2022-07-15 21:08 ` [PATCH RFC 04/15] cxl/pmem: Add "Set Passphrase" security command support Dave Jiang
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:08 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add emulation support for handling the "Get Security State" opcode for a CXL
memory device in cxl_test. The handler copies the device security state
bitmask back to the output payload.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 tools/testing/cxl/test/mem.c |   24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 723378248321..337e5a099d31 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -11,6 +11,7 @@
 
 struct mock_mdev_data {
 	void *lsa;
+	u32 security_state;
 };
 
 #define LSA_SIZE SZ_128K
@@ -141,6 +142,26 @@ static int mock_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 	return 0;
 }
 
+static int mock_get_security_state(struct cxl_dev_state *cxlds,
+				   struct cxl_mbox_cmd *cmd)
+{
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+
+	if (cmd->size_in) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (cmd->size_out != sizeof(u32)) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	memcpy(cmd->payload_out, &mdata->security_state, sizeof(u32));
+
+	return 0;
+}
+
 static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
@@ -233,6 +254,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
 	case CXL_MBOX_OP_GET_HEALTH_INFO:
 		rc = mock_health_info(cxlds, cmd);
 		break;
+	case CXL_MBOX_OP_GET_SECURITY_STATE:
+		rc = mock_get_security_state(cxlds, cmd);
+		break;
 	default:
 		break;
 	}




* [PATCH RFC 04/15] cxl/pmem: Add "Set Passphrase" security command support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (2 preceding siblings ...)
  2022-07-15 21:08 ` [PATCH RFC 03/15] tools/testing/cxl: Add "Get Security State" opcode support Dave Jiang
@ 2022-07-15 21:08 ` Dave Jiang
  2022-07-18  6:36   ` [PATCH RFC 4/15] " Davidlohr Bueso
  2022-08-03 17:01   ` [PATCH RFC 04/15] " Jonathan Cameron
  2022-07-15 21:09 ` [PATCH RFC 05/15] tools/testing/cxl: Add "Set Passphrase" opcode support Dave Jiang
                   ` (12 subsequent siblings)
  16 siblings, 2 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:08 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Create a callback function to support the nvdimm_security_ops ->change_key()
callback. Translate the operation into the "Set Passphrase" security command
for the CXL memory device. The operation supports setting a passphrase for
the CXL persistent memory device as well as changing the currently set
passphrase, for either a user passphrase or a master passphrase.

See CXL 2.0 spec section 8.2.9.5.6.2 for reference.

However, the spec leaves a gap with respect to master passphrase usage. It
does not define a way to discover whether master passphrase support is
available on the device, nor do the commands that take a master passphrase
return a specific error indicating that the master passphrase is not
supported. If a command is issued with a master passphrase against a device
that does not support it, the error returned by the device will be ambiguous.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 drivers/cxl/cxlmem.h   |   14 ++++++++++++++
 drivers/cxl/security.c |   27 +++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 35de2889aac3..1e76d22f4fd2 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -251,6 +251,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
 	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
 	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
+	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
@@ -350,6 +351,19 @@ struct cxl_mem_command {
 #define CXL_PMEM_SEC_STATE_USER_PLIMIT		0x10
 #define CXL_PMEM_SEC_STATE_MASTER_PLIMIT	0x20
 
+/* set passphrase input payload */
+struct cxl_set_pass {
+	u8 type;
+	u8 reserved[31];
+	u8 old_pass[NVDIMM_PASSPHRASE_LEN];
+	u8 new_pass[NVDIMM_PASSPHRASE_LEN];
+} __packed;
+
+enum {
+	CXL_PMEM_SEC_PASS_MASTER = 0,
+	CXL_PMEM_SEC_PASS_USER,
+};
+
 int cxl_mbox_send_cmd(struct cxl_dev_state *cxlds, u16 opcode, void *in,
 		      size_t in_size, void *out, size_t out_size);
 int cxl_dev_state_identify(struct cxl_dev_state *cxlds);
diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
index 5b830ae621db..76ec5087f966 100644
--- a/drivers/cxl/security.c
+++ b/drivers/cxl/security.c
@@ -50,8 +50,35 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm,
 	return security_flags;
 }
 
+static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
+					const struct nvdimm_key_data *old_data,
+					const struct nvdimm_key_data *new_data,
+					enum nvdimm_passphrase_type ptype)
+{
+	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	struct cxl_set_pass *set_pass;
+	int rc;
+
+	set_pass = kzalloc(sizeof(*set_pass), GFP_KERNEL);
+	if (!set_pass)
+		return -ENOMEM;
+
+	set_pass->type = ptype == NVDIMM_MASTER ?
+		CXL_PMEM_SEC_PASS_MASTER : CXL_PMEM_SEC_PASS_USER;
+	memcpy(set_pass->old_pass, old_data->data, NVDIMM_PASSPHRASE_LEN);
+	memcpy(set_pass->new_pass, new_data->data, NVDIMM_PASSPHRASE_LEN);
+
+	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_SET_PASSPHRASE,
+			       set_pass, sizeof(*set_pass), NULL, 0);
+	kfree(set_pass);
+	return rc;
+}
+
 static const struct nvdimm_security_ops __cxl_security_ops = {
 	.get_flags = cxl_pmem_get_security_flags,
+	.change_key = cxl_pmem_security_change_key,
 };
 
 const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;




* [PATCH RFC 05/15] tools/testing/cxl: Add "Set Passphrase" opcode support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (3 preceding siblings ...)
  2022-07-15 21:08 ` [PATCH RFC 04/15] cxl/pmem: Add "Set Passphrase" security command support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-08-03 17:15   ` Jonathan Cameron
  2022-07-15 21:09 ` [PATCH RFC 06/15] cxl/pmem: Add Disable Passphrase security command support Dave Jiang
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add support to emulate a CXL mem device supporting the "Set Passphrase"
operation. The operation supports setting of either a user or a master
passphrase.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 tools/testing/cxl/test/mem.c |   76 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 337e5a099d31..796f4f7b5e3d 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -12,8 +12,14 @@
 struct mock_mdev_data {
 	void *lsa;
 	u32 security_state;
+	u8 user_pass[NVDIMM_PASSPHRASE_LEN];
+	u8 master_pass[NVDIMM_PASSPHRASE_LEN];
+	int user_limit;
+	int master_limit;
 };
 
+#define PASS_TRY_LIMIT 3
+
 #define LSA_SIZE SZ_128K
 #define EFFECT(x) (1U << x)
 
@@ -162,6 +168,73 @@ static int mock_get_security_state(struct cxl_dev_state *cxlds,
 	return 0;
 }
 
+static int mock_set_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
+{
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+	struct cxl_set_pass *set_pass;
+
+	if (cmd->size_in != sizeof(*set_pass)) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (cmd->size_out != 0) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (mdata->security_state & CXL_PMEM_SEC_STATE_FROZEN) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	set_pass = cmd->payload_in;
+	switch (set_pass->type) {
+	case CXL_PMEM_SEC_PASS_MASTER:
+		if (mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PLIMIT) {
+			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+			return -ENXIO;
+		}
+		/*
+		 * CXL spec v2.0 8.2.9.5.6.2, The master pasphrase shall only be set in
+		 * the security disabled state when the user passphrase is not set.
+		 */
+		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET) {
+			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+			return -ENXIO;
+		}
+		if (mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PASS_SET &&
+		    memcmp(mdata->master_pass, set_pass->old_pass, NVDIMM_PASSPHRASE_LEN)) {
+			if (++mdata->master_limit == PASS_TRY_LIMIT)
+				mdata->security_state |= CXL_PMEM_SEC_STATE_MASTER_PLIMIT;
+			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
+			return -ENXIO;
+		}
+		memcpy(mdata->master_pass, set_pass->new_pass, NVDIMM_PASSPHRASE_LEN);
+		break;
+
+	case CXL_PMEM_SEC_PASS_USER:
+		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PLIMIT) {
+			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+			return -ENXIO;
+		}
+		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET &&
+		    memcmp(mdata->user_pass, set_pass->old_pass, NVDIMM_PASSPHRASE_LEN)) {
+			if (++mdata->user_limit == PASS_TRY_LIMIT)
+				mdata->security_state |= CXL_PMEM_SEC_STATE_USER_PLIMIT;
+			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
+			return -ENXIO;
+		}
+		memcpy(mdata->user_pass, set_pass->new_pass, NVDIMM_PASSPHRASE_LEN);
+		break;
+
+	default:
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
@@ -257,6 +330,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
 	case CXL_MBOX_OP_GET_SECURITY_STATE:
 		rc = mock_get_security_state(cxlds, cmd);
 		break;
+	case CXL_MBOX_OP_SET_PASSPHRASE:
+		rc = mock_set_passphrase(cxlds, cmd);
+		break;
 	default:
 		break;
 	}




* Re: [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation
  2022-07-15 21:08 ` [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation Dave Jiang
@ 2022-07-15 21:09   ` Davidlohr Bueso
  2022-08-03 16:29     ` Jonathan Cameron
  2022-07-18  5:34   ` [PATCH RFC 1/15] " Davidlohr Bueso
  1 sibling, 1 reply; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-15 21:09 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield

On Fri, 15 Jul 2022, Dave Jiang wrote:

>+config CXL_PMEM_SECURITY
>+	tristate "CXL PMEM SECURITY: Persistent Memory Security Support"
>+	depends on CXL_PMEM
>+	default CXL_BUS
>+	help
>+	  CXL memory device "Persistent Memory Data-at-rest Security" command set
>+	  support. Support opcode 0x4500..0x4505. The commands supported are "Get
>+	  Security State", "Set Passphrase", "Disable Passphrase", "Unlock",
>+	  "Freeze Security State", and "Passphrase Secure Erase". Security operation
>+	  is done through nvdimm security_ops.
>+
>+	  See Chapter 8.2.9.5.6 in the CXL 2.0 specification for a detailed description
>+	  of the Persistent Memory Security.
>+
>+	  If unsure say 'm'.

Is there any fundamental reason why we need to add a new CXL Kconfig option
instead of just tucking this under CXL_PMEM?


* [PATCH RFC 06/15] cxl/pmem: Add Disable Passphrase security command support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (4 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 05/15] tools/testing/cxl: Add "Set Passphrase" opcode support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-08-03 17:21   ` Jonathan Cameron
  2022-07-15 21:09 ` [PATCH RFC 07/15] tools/testing/cxl: Add "Disable" security opcode support Dave Jiang
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Create a callback function to support the nvdimm_security_ops ->disable()
callback. Translate the operation into the "Disable Passphrase" security
command for the CXL memory device. The operation supports disabling a
passphrase for the CXL persistent memory device. In the original
implementation of nvdimm_security_ops, this operation only supports disabling
the user passphrase, because the NFIT version of disable passphrase only
supported the user passphrase. The CXL spec also allows disabling the master
passphrase, which nvdimm_security_ops does not support yet. In this commit,
the callback only supports the user passphrase.

See CXL 2.0 spec section 8.2.9.5.6.3 for reference.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 drivers/cxl/cxlmem.h   |    8 ++++++++
 drivers/cxl/security.c |   30 ++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 1e76d22f4fd2..70a1eb7720d3 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -252,6 +252,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
 	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
 	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
+	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
@@ -359,6 +360,13 @@ struct cxl_set_pass {
 	u8 new_pass[NVDIMM_PASSPHRASE_LEN];
 } __packed;
 
+/* disable passphrase input payload */
+struct cxl_disable_pass {
+	u8 type;
+	u8 reserved[31];
+	u8 pass[NVDIMM_PASSPHRASE_LEN];
+} __packed;
+
 enum {
 	CXL_PMEM_SEC_PASS_MASTER = 0,
 	CXL_PMEM_SEC_PASS_USER,
diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
index 76ec5087f966..4aec8e41e167 100644
--- a/drivers/cxl/security.c
+++ b/drivers/cxl/security.c
@@ -76,9 +76,39 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
 	return rc;
 }
 
+static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
+				     const struct nvdimm_key_data *key_data)
+{
+	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	struct cxl_disable_pass *dis_pass;
+	int rc;
+
+	dis_pass = kzalloc(sizeof(*dis_pass), GFP_KERNEL);
+	if (!dis_pass)
+		return -ENOMEM;
+
+	/*
+	 * While the CXL spec defines the ability to erase the master passphrase,
+	 * the original nvdimm security ops does not provide that capability.
+	 * In order to preserve backward compatibility, this callback will
+	 * only support disable of user passphrase. The disable master passphrase
+	 * ability will need to be added as a new callback.
+	 */
+	dis_pass->type = CXL_PMEM_SEC_PASS_USER;
+	memcpy(dis_pass->pass, key_data->data, NVDIMM_PASSPHRASE_LEN);
+
+	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_DISABLE_PASSPHRASE,
+			       dis_pass, sizeof(*dis_pass), NULL, 0);
+	kfree(dis_pass);
+	return rc;
+}
+
 static const struct nvdimm_security_ops __cxl_security_ops = {
 	.get_flags = cxl_pmem_get_security_flags,
 	.change_key = cxl_pmem_security_change_key,
+	.disable = cxl_pmem_security_disable,
 };
 
 const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;




* [PATCH RFC 07/15] tools/testing/cxl: Add "Disable" security opcode support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (5 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 06/15] cxl/pmem: Add Disable Passphrase security command support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-08-03 17:23   ` Jonathan Cameron
  2022-07-15 21:09 ` [PATCH RFC 08/15] cxl/pmem: Add "Freeze Security State" security command support Dave Jiang
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add support to emulate a CXL mem device supporting the "Disable Passphrase"
operation. The operation supports disabling either a user or a master
passphrase; the emulation provides support for both.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 tools/testing/cxl/test/mem.c |   80 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 796f4f7b5e3d..5f87a94d92ae 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -235,6 +235,83 @@ static int mock_set_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd
 	return 0;
 }
 
+static int mock_disable_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
+{
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+	struct cxl_disable_pass *dis_pass;
+
+	if (cmd->size_in != sizeof(*dis_pass)) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (cmd->size_out != 0) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (mdata->security_state & CXL_PMEM_SEC_STATE_FROZEN) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	dis_pass = cmd->payload_in;
+	switch (dis_pass->type) {
+	case CXL_PMEM_SEC_PASS_MASTER:
+		if (mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PLIMIT) {
+			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+			return -ENXIO;
+		}
+
+		if (!(mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PASS_SET)) {
+			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+			return -ENXIO;
+		}
+
+		if (memcmp(dis_pass->pass, mdata->master_pass, NVDIMM_PASSPHRASE_LEN)) {
+			if (++mdata->master_limit == PASS_TRY_LIMIT)
+				mdata->security_state |= CXL_PMEM_SEC_STATE_MASTER_PLIMIT;
+			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
+			return -ENXIO;
+		}
+
+		mdata->master_limit = 0;
+		memset(mdata->master_pass, 0, NVDIMM_PASSPHRASE_LEN);
+		mdata->security_state &= ~CXL_PMEM_SEC_STATE_MASTER_PASS_SET;
+		break;
+
+	case CXL_PMEM_SEC_PASS_USER:
+		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PLIMIT) {
+			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+			return -ENXIO;
+		}
+
+		if (!(mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET)) {
+			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+			return -ENXIO;
+		}
+
+		if (memcmp(dis_pass->pass, mdata->user_pass, NVDIMM_PASSPHRASE_LEN)) {
+			if (++mdata->user_limit == PASS_TRY_LIMIT)
+				mdata->security_state |= CXL_PMEM_SEC_STATE_USER_PLIMIT;
+			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
+			return -ENXIO;
+		}
+
+		mdata->user_limit = 0;
+		memset(mdata->user_pass, 0, NVDIMM_PASSPHRASE_LEN);
+		mdata->security_state &= ~(CXL_PMEM_SEC_STATE_USER_PASS_SET |
+					   CXL_PMEM_SEC_STATE_LOCKED);
+		break;
+
+	default:
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
@@ -333,6 +410,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
 	case CXL_MBOX_OP_SET_PASSPHRASE:
 		rc = mock_set_passphrase(cxlds, cmd);
 		break;
+	case CXL_MBOX_OP_DISABLE_PASSPHRASE:
+		rc = mock_disable_passphrase(cxlds, cmd);
+		break;
 	default:
 		break;
 	}




* [PATCH RFC 08/15] cxl/pmem: Add "Freeze Security State" security command support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (6 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 07/15] tools/testing/cxl: Add "Disable" security opcode support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-08-03 17:23   ` Jonathan Cameron
  2022-07-15 21:09 ` [PATCH RFC 09/15] tools/testing/cxl: Add "Freeze Security State" security opcode support Dave Jiang
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Create a callback function to support the nvdimm_security_ops ->freeze()
callback. Translate the operation into the "Freeze Security State" security
command for the CXL memory device.

See CXL 2.0 spec section 8.2.9.5.6.5 for reference.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 drivers/cxl/cxlmem.h   |    1 +
 drivers/cxl/security.c |   10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 70a1eb7720d3..ced85be291f3 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -253,6 +253,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
 	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
 	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
+	CXL_MBOX_OP_FREEZE_SECURITY	= 0x4504,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
index 4aec8e41e167..6399266a5908 100644
--- a/drivers/cxl/security.c
+++ b/drivers/cxl/security.c
@@ -105,10 +105,20 @@ static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
 	return rc;
 }
 
+static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
+{
+	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+
+	return cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_FREEZE_SECURITY, NULL, 0, NULL, 0);
+}
+
 static const struct nvdimm_security_ops __cxl_security_ops = {
 	.get_flags = cxl_pmem_get_security_flags,
 	.change_key = cxl_pmem_security_change_key,
 	.disable = cxl_pmem_security_disable,
+	.freeze = cxl_pmem_security_freeze,
 };
 
 const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;




* [PATCH RFC 09/15] tools/testing/cxl: Add "Freeze Security State" security opcode support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (7 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 08/15] cxl/pmem: Add "Freeze Security State" security command support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-07-15 21:09 ` [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm Dave Jiang
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add support to emulate a CXL mem device supporting the "Freeze Security
State" operation.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 tools/testing/cxl/test/mem.c |   31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 5f87a94d92ae..d8d08a89ec0c 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -312,6 +312,34 @@ static int mock_disable_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_
 	return 0;
 }
 
+static int mock_freeze_security(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
+{
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+
+	if (cmd->size_in != 0) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (cmd->size_out != 0) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (mdata->security_state & CXL_PMEM_SEC_STATE_FROZEN) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	if (!(mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET)) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	mdata->security_state |= CXL_PMEM_SEC_STATE_FROZEN;
+	return 0;
+}
+
 static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
@@ -413,6 +441,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
 	case CXL_MBOX_OP_DISABLE_PASSPHRASE:
 		rc = mock_disable_passphrase(cxlds, cmd);
 		break;
+	case CXL_MBOX_OP_FREEZE_SECURITY:
+		rc = mock_freeze_security(cxlds, cmd);
+		break;
 	default:
 		break;
 	}




* [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (8 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 09/15] tools/testing/cxl: Add "Freeze Security State" security opcode support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-07-18  5:30   ` Davidlohr Bueso
  2022-07-15 21:09 ` [PATCH RFC 11/15] cxl/pmem: Add "Unlock" security command support Dave Jiang
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

The original implementation that flushes all caches after unlocking the
nvdimm resides in drivers/acpi/nfit/intel.c as a temporary stop-gap until
nvdimms with security operations arrive on other archs. With CXL pmem now
supporting security operations, specifically unlocking a DIMM, the need for
an arch-supported helper function to invalidate all CPU caches for nvdimm
has arrived. Remove the original implementation from acpi/nfit and add
cross-arch support for this operation.

Add the CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig symbol and allow x86_64
to opt in and provide the support via a wbinvd_on_all_cpus() call.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 arch/x86/Kconfig             |    1 +
 arch/x86/mm/pat/set_memory.c |    8 ++++++++
 drivers/acpi/nfit/intel.c    |   28 +++++-----------------------
 include/linux/libnvdimm.h    |    8 ++++++++
 lib/Kconfig                  |    3 +++
 5 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index be0b95e51df6..8dbe89eba639 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -83,6 +83,7 @@ config X86
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PMEM_API		if X86_64
+	select ARCH_HAS_NVDIMM_INVAL_CACHE	if X86_64
 	select ARCH_HAS_PTE_DEVMAP		if X86_64
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 1abd5438f126..e4cd1286deef 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -330,6 +330,14 @@ void arch_invalidate_pmem(void *addr, size_t size)
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
 
+#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
+void arch_invalidate_nvdimm_cache(void)
+{
+	wbinvd_on_all_cpus();
+}
+EXPORT_SYMBOL_GPL(arch_invalidate_nvdimm_cache);
+#endif
+
 static void __cpa_flush_all(void *arg)
 {
 	unsigned long cache = (unsigned long)arg;
diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
index 8dd792a55730..242d2e9203e9 100644
--- a/drivers/acpi/nfit/intel.c
+++ b/drivers/acpi/nfit/intel.c
@@ -190,8 +190,6 @@ static int intel_security_change_key(struct nvdimm *nvdimm,
 	}
 }
 
-static void nvdimm_invalidate_cache(void);
-
 static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
 		const struct nvdimm_key_data *key_data)
 {
@@ -228,7 +226,7 @@ static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
 	}
 
 	/* DIMM unlocked, invalidate all CPU caches before we read it */
-	nvdimm_invalidate_cache();
+	arch_invalidate_nvdimm_cache();
 
 	return 0;
 }
@@ -298,7 +296,7 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
 		return -ENOTTY;
 
 	/* flush all cache before we erase DIMM */
-	nvdimm_invalidate_cache();
+	arch_invalidate_nvdimm_cache();
 	memcpy(nd_cmd.cmd.passphrase, key->data,
 			sizeof(nd_cmd.cmd.passphrase));
 	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
@@ -318,7 +316,7 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
 	}
 
 	/* DIMM erased, invalidate all CPU caches before we read it */
-	nvdimm_invalidate_cache();
+	arch_invalidate_nvdimm_cache();
 	return 0;
 }
 
@@ -355,7 +353,7 @@ static int __maybe_unused intel_security_query_overwrite(struct nvdimm *nvdimm)
 	}
 
 	/* flush all cache before we make the nvdimms available */
-	nvdimm_invalidate_cache();
+	arch_invalidate_nvdimm_cache();
 	return 0;
 }
 
@@ -381,7 +379,7 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
 		return -ENOTTY;
 
 	/* flush all cache before we erase DIMM */
-	nvdimm_invalidate_cache();
+	arch_invalidate_nvdimm_cache();
 	memcpy(nd_cmd.cmd.passphrase, nkey->data,
 			sizeof(nd_cmd.cmd.passphrase));
 	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
@@ -401,22 +399,6 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
 	}
 }
 
-/*
- * TODO: define a cross arch wbinvd equivalent when/if
- * NVDIMM_FAMILY_INTEL command support arrives on another arch.
- */
-#ifdef CONFIG_X86
-static void nvdimm_invalidate_cache(void)
-{
-	wbinvd_on_all_cpus();
-}
-#else
-static void nvdimm_invalidate_cache(void)
-{
-	WARN_ON_ONCE("cache invalidation required after unlock\n");
-}
-#endif
-
 static const struct nvdimm_security_ops __intel_security_ops = {
 	.get_flags = intel_security_flags,
 	.freeze = intel_security_freeze,
diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index 0d61e07b6827..455d54ec3c86 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -308,4 +308,12 @@ static inline void arch_invalidate_pmem(void *addr, size_t size)
 }
 #endif
 
+#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
+void arch_invalidate_nvdimm_cache(void);
+#else
+static inline void arch_invalidate_nvdimm_cache(void)
+{
+}
+#endif
+
 #endif /* __LIBNVDIMM_H__ */
diff --git a/lib/Kconfig b/lib/Kconfig
index eaaad4d85bf2..d4bc48eea635 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -652,6 +652,9 @@ config ARCH_NO_SG_CHAIN
 config ARCH_HAS_PMEM_API
 	bool
 
+config ARCH_HAS_NVDIMM_INVAL_CACHE
+	bool
+
 config MEMREGION
 	bool
 




* [PATCH RFC 11/15] cxl/pmem: Add "Unlock" security command support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (9 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-08-04 13:19   ` Jonathan Cameron
  2022-07-15 21:09 ` [PATCH RFC 12/15] tools/testing/cxl: Add "Unlock" security opcode support Dave Jiang
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Create a callback function to support the nvdimm_security_ops ->unlock()
callback. Translate the operation into the "Unlock" security command for the
CXL mem device.

When the mem device is unlocked, arch_invalidate_nvdimm_cache() is called in
order to invalidate all CPU caches before attempting to access the mem
device.

See CXL 2.0 spec section 8.2.9.5.6.4 for reference.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 drivers/cxl/cxlmem.h   |    1 +
 drivers/cxl/security.c |   21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index ced85be291f3..ae8ccd484491 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -253,6 +253,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
 	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
 	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
+	CXL_MBOX_OP_UNLOCK		= 0x4503,
 	CXL_MBOX_OP_FREEZE_SECURITY	= 0x4504,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
index 6399266a5908..d15520f280f0 100644
--- a/drivers/cxl/security.c
+++ b/drivers/cxl/security.c
@@ -114,11 +114,32 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
 	return cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_FREEZE_SECURITY, NULL, 0, NULL, 0);
 }
 
+static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
+				    const struct nvdimm_key_data *key_data)
+{
+	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	u8 pass[NVDIMM_PASSPHRASE_LEN];
+	int rc;
+
+	memcpy(pass, key_data->data, NVDIMM_PASSPHRASE_LEN);
+	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_UNLOCK,
+			       pass, NVDIMM_PASSPHRASE_LEN, NULL, 0);
+	if (rc < 0)
+		return rc;
+
+	/* DIMM unlocked, invalidate all CPU caches before we read it */
+	arch_invalidate_nvdimm_cache();
+	return 0;
+}
+
 static const struct nvdimm_security_ops __cxl_security_ops = {
 	.get_flags = cxl_pmem_get_security_flags,
 	.change_key = cxl_pmem_security_change_key,
 	.disable = cxl_pmem_security_disable,
 	.freeze = cxl_pmem_security_freeze,
+	.unlock = cxl_pmem_security_unlock,
 };
 
 const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;




* [PATCH RFC 12/15] tools/testing/cxl: Add "Unlock" security opcode support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (10 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 11/15] cxl/pmem: Add "Unlock" security command support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-07-15 21:09 ` [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support Dave Jiang
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add support to emulate a CXL mem device supporting the "Unlock" operation.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 tools/testing/cxl/test/mem.c |   49 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index d8d08a89ec0c..55a83896ccb8 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -340,6 +340,52 @@ static int mock_freeze_security(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd
 	return 0;
 }
 
+static int mock_unlock_security(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
+{
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+
+	if (cmd->size_in != NVDIMM_PASSPHRASE_LEN) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (cmd->size_out != 0) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (mdata->security_state & CXL_PMEM_SEC_STATE_FROZEN) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	if (!(mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET)) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PLIMIT) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	if (!(mdata->security_state & CXL_PMEM_SEC_STATE_LOCKED)) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	if (memcmp(cmd->payload_in, mdata->user_pass, NVDIMM_PASSPHRASE_LEN)) {
+		if (++mdata->user_limit == PASS_TRY_LIMIT)
+			mdata->security_state |= CXL_PMEM_SEC_STATE_USER_PLIMIT;
+		cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
+		return -ENXIO;
+	}
+
+	mdata->user_limit = 0;
+	mdata->security_state &= ~CXL_PMEM_SEC_STATE_LOCKED;
+	return 0;
+}
+
 static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
@@ -444,6 +490,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
 	case CXL_MBOX_OP_FREEZE_SECURITY:
 		rc = mock_freeze_security(cxlds, cmd);
 		break;
+	case CXL_MBOX_OP_UNLOCK:
+		rc = mock_unlock_security(cxlds, cmd);
+		break;
 	default:
 		break;
 	}




* [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (11 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 12/15] tools/testing/cxl: Add "Unlock" security opcode support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-07-20  6:17   ` Davidlohr Bueso
  2022-07-15 21:09 ` [PATCH RFC 14/15] tools/testing/cxl: Add "passphrase secure erase" opcode support Dave Jiang
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Create a callback function to support the nvdimm_security_ops ->erase()
callback. Translate the operation into the "Passphrase Secure Erase" security
command for the CXL memory device.

When the mem device is secure erased, arch_invalidate_nvdimm_cache() is
called in order to invalidate all CPU caches before attempting to access the
mem device again.

See CXL 2.0 spec section 8.2.9.5.6.6 for reference.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 drivers/cxl/cxlmem.h   |    8 ++++++++
 drivers/cxl/security.c |   29 +++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index ae8ccd484491..4bcb02f625b4 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -255,6 +255,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
 	CXL_MBOX_OP_UNLOCK		= 0x4503,
 	CXL_MBOX_OP_FREEZE_SECURITY	= 0x4504,
+	CXL_MBOX_OP_PASSPHRASE_ERASE	= 0x4505,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
@@ -369,6 +370,13 @@ struct cxl_disable_pass {
 	u8 pass[NVDIMM_PASSPHRASE_LEN];
 } __packed;
 
+/* passphrase erase payload */
+struct cxl_pass_erase {
+	u8 type;
+	u8 reserved[31];
+	u8 pass[NVDIMM_PASSPHRASE_LEN];
+} __packed;
+
 enum {
 	CXL_PMEM_SEC_PASS_MASTER = 0,
 	CXL_PMEM_SEC_PASS_USER,
diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
index d15520f280f0..4add7f62e758 100644
--- a/drivers/cxl/security.c
+++ b/drivers/cxl/security.c
@@ -134,12 +134,41 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
 	return 0;
 }
 
+static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
+					      const struct nvdimm_key_data *key,
+					      enum nvdimm_passphrase_type ptype)
+{
+	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	struct cxl_pass_erase *erase;
+	int rc;
+
+	erase = kzalloc(sizeof(*erase), GFP_KERNEL);
+	if (!erase)
+		return -ENOMEM;
+
+	erase->type = ptype == NVDIMM_MASTER ?
+		CXL_PMEM_SEC_PASS_MASTER : CXL_PMEM_SEC_PASS_USER;
+	memcpy(erase->pass, key->data, NVDIMM_PASSPHRASE_LEN);
+	rc =  cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_PASSPHRASE_ERASE,
+				erase, sizeof(*erase), NULL, 0);
+	kfree(erase);
+	if (rc < 0)
+		return rc;
+
+	/* DIMM erased, invalidate all CPU caches before we read it */
+	arch_invalidate_nvdimm_cache();
+	return 0;
+}
+
 static const struct nvdimm_security_ops __cxl_security_ops = {
 	.get_flags = cxl_pmem_get_security_flags,
 	.change_key = cxl_pmem_security_change_key,
 	.disable = cxl_pmem_security_disable,
 	.freeze = cxl_pmem_security_freeze,
 	.unlock = cxl_pmem_security_unlock,
+	.erase = cxl_pmem_security_passphrase_erase,
 };
 
 const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;




* [PATCH RFC 14/15] tools/testing/cxl: Add "passphrase secure erase" opcode support
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (12 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support Dave Jiang
@ 2022-07-15 21:09 ` Dave Jiang
  2022-07-15 21:10 ` [PATCH RFC 15/15] nvdimm/cxl/pmem: Add support for master passphrase disable security command Dave Jiang
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:09 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

Add support to emulate a CXL mem device supporting the "passphrase secure
erase" operation.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 tools/testing/cxl/test/mem.c |   59 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 55a83896ccb8..ebc5e8768019 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -386,6 +386,62 @@ static int mock_unlock_security(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd
 	return 0;
 }
 
+static int mock_passphrase_erase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
+{
+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
+	struct cxl_pass_erase *erase = cmd->payload_in;
+
+	if (cmd->size_in != sizeof(*erase)) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (cmd->size_out != 0) {
+		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
+		return -EINVAL;
+	}
+
+	if (mdata->security_state & CXL_PMEM_SEC_STATE_FROZEN) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PLIMIT) {
+		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
+		return -ENXIO;
+	}
+
+	if (erase->type == CXL_PMEM_SEC_PASS_MASTER &&
+	    mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PASS_SET &&
+	    memcmp(mdata->master_pass, erase->pass, NVDIMM_PASSPHRASE_LEN)) {
+		if (++mdata->master_limit == PASS_TRY_LIMIT)
+			mdata->security_state |= CXL_PMEM_SEC_STATE_MASTER_PLIMIT;
+		cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
+		return -ENXIO;
+	}
+
+	if (erase->type == CXL_PMEM_SEC_PASS_USER &&
+	    mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET &&
+	    memcmp(mdata->user_pass, erase->pass, NVDIMM_PASSPHRASE_LEN)) {
+		if (++mdata->user_limit == PASS_TRY_LIMIT)
+			mdata->security_state |= CXL_PMEM_SEC_STATE_USER_PLIMIT;
+		cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
+		return -ENXIO;
+	}
+
+	if (erase->type == CXL_PMEM_SEC_PASS_USER) {
+		mdata->security_state &= ~CXL_PMEM_SEC_STATE_USER_PASS_SET;
+		mdata->user_limit = 0;
+		memset(mdata->user_pass, 0, NVDIMM_PASSPHRASE_LEN);
+	} else if (erase->type == CXL_PMEM_SEC_PASS_MASTER) {
+		mdata->master_limit = 0;
+	}
+
+	mdata->security_state &= ~CXL_PMEM_SEC_STATE_LOCKED;
+
+	return 0;
+}
+
 static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
@@ -493,6 +549,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
 	case CXL_MBOX_OP_UNLOCK:
 		rc = mock_unlock_security(cxlds, cmd);
 		break;
+	case CXL_MBOX_OP_PASSPHRASE_ERASE:
+		rc = mock_passphrase_erase(cxlds, cmd);
+		break;
 	default:
 		break;
 	}



^ permalink raw reply related	[flat|nested] 79+ messages in thread

* [PATCH RFC 15/15] nvdimm/cxl/pmem: Add support for master passphrase disable security command
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (13 preceding siblings ...)
  2022-07-15 21:09 ` [PATCH RFC 14/15] tools/testing/cxl: Add "passphrase secure erase" opcode support Dave Jiang
@ 2022-07-15 21:10 ` Dave Jiang
  2022-07-15 21:29 ` [PATCH RFC 00/15] Introduce security commands for CXL pmem device Davidlohr Bueso
  2022-08-03 17:03 ` Jonathan Cameron
  16 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-15 21:10 UTC (permalink / raw)
  To: linux-cxl, nvdimm
  Cc: dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, dave

The original nvdimm_security_ops ->disable() only supports disabling the
user passphrase. The CXL spec also introduces disabling of the master
passphrase. Add a ->disable_master() callback to support this new operation
while leaving the old ->disable() mechanism alone. A "disable_master"
command is added for the sysfs attribute in order to allow the command to
be issued from userspace. ndctl will need enabling in order to utilize this
new operation.
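
As an illustrative usage sketch (the device name and key id below are
placeholders; the passphrase key must already be loaded into the kernel
user keyring, e.g. via ndctl):

  # echo "disable_master <keyid>" > /sys/bus/nd/devices/nmem0/security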

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 drivers/cxl/security.c    |   28 ++++++++++++++++++----------
 drivers/nvdimm/security.c |   33 ++++++++++++++++++++++++++-------
 include/linux/libnvdimm.h |    2 ++
 3 files changed, 46 insertions(+), 17 deletions(-)

diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
index 4add7f62e758..3dc04b50afaf 100644
--- a/drivers/cxl/security.c
+++ b/drivers/cxl/security.c
@@ -76,8 +76,9 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
 	return rc;
 }
 
-static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
-				     const struct nvdimm_key_data *key_data)
+static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
+				       const struct nvdimm_key_data *key_data,
+				       enum nvdimm_passphrase_type ptype)
 {
 	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
 	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
@@ -89,14 +90,8 @@ static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
 	if (!dis_pass)
 		return -ENOMEM;
 
-	/*
-	 * While the CXL spec defines the ability to erase the master passphrase,
-	 * the original nvdimm security ops does not provide that capability.
-	 * In order to preserve backward compatibility, this callback will
-	 * only support disable of user passphrase. The disable master passphrase
-	 * ability will need to be added as a new callback.
-	 */
-	dis_pass->type = CXL_PMEM_SEC_PASS_USER;
+	dis_pass->type = ptype == NVDIMM_MASTER ?
+		CXL_PMEM_SEC_PASS_MASTER : CXL_PMEM_SEC_PASS_USER;
 	memcpy(dis_pass->pass, key_data->data, NVDIMM_PASSPHRASE_LEN);
 
 	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_DISABLE_PASSPHRASE,
@@ -105,6 +100,18 @@ static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
 	return rc;
 }
 
+static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
+				     const struct nvdimm_key_data *key_data)
+{
+	return __cxl_pmem_security_disable(nvdimm, key_data, NVDIMM_USER);
+}
+
+static int cxl_pmem_security_disable_master(struct nvdimm *nvdimm,
+					    const struct nvdimm_key_data *key_data)
+{
+	return __cxl_pmem_security_disable(nvdimm, key_data, NVDIMM_MASTER);
+}
+
 static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
 {
 	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
@@ -169,6 +176,7 @@ static const struct nvdimm_security_ops __cxl_security_ops = {
 	.freeze = cxl_pmem_security_freeze,
 	.unlock = cxl_pmem_security_unlock,
 	.erase = cxl_pmem_security_passphrase_erase,
+	.disable_master = cxl_pmem_security_disable_master,
 };
 
 const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;
diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
index b5aa55c61461..c1c9d0feae9d 100644
--- a/drivers/nvdimm/security.c
+++ b/drivers/nvdimm/security.c
@@ -239,7 +239,8 @@ static int check_security_state(struct nvdimm *nvdimm)
 	return 0;
 }
 
-static int security_disable(struct nvdimm *nvdimm, unsigned int keyid)
+static int security_disable(struct nvdimm *nvdimm, unsigned int keyid,
+			    enum nvdimm_passphrase_type pass_type)
 {
 	struct device *dev = &nvdimm->dev;
 	struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev);
@@ -250,8 +251,13 @@ static int security_disable(struct nvdimm *nvdimm, unsigned int keyid)
 	/* The bus lock should be held at the top level of the call stack */
 	lockdep_assert_held(&nvdimm_bus->reconfig_mutex);
 
-	if (!nvdimm->sec.ops || !nvdimm->sec.ops->disable
-			|| !nvdimm->sec.flags)
+	if (!nvdimm->sec.ops || !nvdimm->sec.flags)
+		return -EOPNOTSUPP;
+
+	if (pass_type == NVDIMM_USER && !nvdimm->sec.ops->disable)
+		return -EOPNOTSUPP;
+
+	if (pass_type == NVDIMM_MASTER && !nvdimm->sec.ops->disable_master)
 		return -EOPNOTSUPP;
 
 	rc = check_security_state(nvdimm);
@@ -263,12 +269,21 @@ static int security_disable(struct nvdimm *nvdimm, unsigned int keyid)
 	if (!data)
 		return -ENOKEY;
 
-	rc = nvdimm->sec.ops->disable(nvdimm, data);
-	dev_dbg(dev, "key: %d disable: %s\n", key_serial(key),
+	if (pass_type == NVDIMM_MASTER) {
+		rc = nvdimm->sec.ops->disable_master(nvdimm, data);
+		dev_dbg(dev, "key: %d disable_master: %s\n", key_serial(key),
 			rc == 0 ? "success" : "fail");
+	} else {
+		rc = nvdimm->sec.ops->disable(nvdimm, data);
+		dev_dbg(dev, "key: %d disable: %s\n", key_serial(key),
+			rc == 0 ? "success" : "fail");
+	}
 
 	nvdimm_put_key(key);
-	nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER);
+	if (pass_type == NVDIMM_MASTER)
+		nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
+	else
+		nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER);
 	return rc;
 }
 
@@ -473,6 +488,7 @@ void nvdimm_security_overwrite_query(struct work_struct *work)
 #define OPS							\
 	C( OP_FREEZE,		"freeze",		1),	\
 	C( OP_DISABLE,		"disable",		2),	\
+	C( OP_DISABLE_MASTER,	"disable_master",	2),	\
 	C( OP_UPDATE,		"update",		3),	\
 	C( OP_ERASE,		"erase",		2),	\
 	C( OP_OVERWRITE,	"overwrite",		2),	\
@@ -524,7 +540,10 @@ ssize_t nvdimm_security_store(struct device *dev, const char *buf, size_t len)
 		rc = nvdimm_security_freeze(nvdimm);
 	} else if (i == OP_DISABLE) {
 		dev_dbg(dev, "disable %u\n", key);
-		rc = security_disable(nvdimm, key);
+		rc = security_disable(nvdimm, key, NVDIMM_USER);
+	} else if (i == OP_DISABLE_MASTER) {
+		dev_dbg(dev, "disable_master %u\n", key);
+		rc = security_disable(nvdimm, key, NVDIMM_MASTER);
 	} else if (i == OP_UPDATE || i == OP_MASTER_UPDATE) {
 		dev_dbg(dev, "%s %u %u\n", ops[i].name, key, newkey);
 		rc = security_update(nvdimm, key, newkey, i == OP_UPDATE
diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index 455d54ec3c86..07e4e7572089 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -179,6 +179,8 @@ struct nvdimm_security_ops {
 	int (*overwrite)(struct nvdimm *nvdimm,
 			const struct nvdimm_key_data *key_data);
 	int (*query_overwrite)(struct nvdimm *nvdimm);
+	int (*disable_master)(struct nvdimm *nvdimm,
+			      const struct nvdimm_key_data *key_data);
 };
 
 enum nvdimm_fwa_state {



^ permalink raw reply related	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 00/15] Introduce security commands for CXL pmem device
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (14 preceding siblings ...)
  2022-07-15 21:10 ` [PATCH RFC 15/15] nvdimm/cxl/pmem: Add support for master passphrase disable security command Dave Jiang
@ 2022-07-15 21:29 ` Davidlohr Bueso
  2022-07-19 18:53   ` Dave Jiang
  2022-08-03 17:03 ` Jonathan Cameron
  16 siblings, 1 reply; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-15 21:29 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield

On Fri, 15 Jul 2022, Dave Jiang wrote:

>This series is seeking comments on the implementation. It has not been fully
>tested yet.

Sorry if this is already somewhere, but how exactly does one test the mock device?

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-07-15 21:09 ` [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm Dave Jiang
@ 2022-07-18  5:30   ` Davidlohr Bueso
  2022-07-19 19:07     ` Dave Jiang
  0 siblings, 1 reply; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-18  5:30 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, Jonathan.Cameron, a.manzanares

On Fri, 15 Jul 2022, Dave Jiang wrote:

>The original implementation to flush all cache after unlocking the nvdimm
>resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
>nvdimm with security operations arrives on other archs. With support CXL
>pmem supporting security operations, specifically "unlock" dimm, the need
>for an arch supported helper function to invalidate all CPU cache for
>nvdimm has arrived. Remove original implementation from acpi/nfit and add
>cross arch support for this operation.
>
>Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to opt in
>and provide the support via wbinvd_on_all_cpus() call.

So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
its own semantics (iirc there was a flush all call in the past). Cc'ing
Jonathan as well.

Anyway, I think this call should not be defined in any place other than core
kernel headers, and not in pat/nvdimm. I was trying to make it fit in smp.h,
for example, but conveniently we might be able to hijack flush_cache_all()
for our purposes as of course neither x86-64 nor arm64 uses it :)

And I see this as safe (wrt not adding a big hammer on unaware drivers) as
the 32bit archs that define the call are mostly contained within their arch/,
and the few in drivers/ are still specific to those archs.

Maybe something like the below.

Thanks,
Davidlohr

------8<----------------------------------------
Subject: [PATCH] arch/x86: define flush_cache_all as global wbinvd

With CXL security features, global CPU cache flushing nvdimm
requirements are no longer specific to that subsystem, even
beyond the scope of security_ops. CXL will need such semantics
for features not necessarily limited to persistent memory.

So use the flush_cache_all() for the wbinvd across all
CPUs on x86. arm64, which is another platform to have CXL
support can also define its own semantics here.

Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
  arch/x86/Kconfig                  |  1 -
  arch/x86/include/asm/cacheflush.h |  5 +++++
  arch/x86/mm/pat/set_memory.c      |  8 --------
  drivers/acpi/nfit/intel.c         | 11 ++++++-----
  drivers/cxl/security.c            |  5 +++--
  include/linux/libnvdimm.h         |  9 ---------
  6 files changed, 14 insertions(+), 25 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8dbe89eba639..be0b95e51df6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -83,7 +83,6 @@ config X86
	select ARCH_HAS_MEMBARRIER_SYNC_CORE
	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
	select ARCH_HAS_PMEM_API		if X86_64
-	select ARCH_HAS_NVDIMM_INVAL_CACHE	if X86_64
	select ARCH_HAS_PTE_DEVMAP		if X86_64
	select ARCH_HAS_PTE_SPECIAL
	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index b192d917a6d0..05c79021665d 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -10,4 +10,9 @@

  void clflush_cache_range(void *addr, unsigned int size);

+#define flush_cache_all()		\
+do {					\
+	wbinvd_on_all_cpus();		\
+} while (0)
+
  #endif /* _ASM_X86_CACHEFLUSH_H */
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index e4cd1286deef..1abd5438f126 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -330,14 +330,6 @@ void arch_invalidate_pmem(void *addr, size_t size)
  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
  #endif

-#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
-void arch_invalidate_nvdimm_cache(void)
-{
-	wbinvd_on_all_cpus();
-}
-EXPORT_SYMBOL_GPL(arch_invalidate_nvdimm_cache);
-#endif
-
  static void __cpa_flush_all(void *arg)
  {
	unsigned long cache = (unsigned long)arg;
diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
index 242d2e9203e9..1b0ecb4d67e6 100644
--- a/drivers/acpi/nfit/intel.c
+++ b/drivers/acpi/nfit/intel.c
@@ -1,6 +1,7 @@
  // SPDX-License-Identifier: GPL-2.0
  /* Copyright(c) 2018 Intel Corporation. All rights reserved. */
  #include <linux/libnvdimm.h>
+#include <linux/cacheflush.h>
  #include <linux/ndctl.h>
  #include <linux/acpi.h>
  #include <asm/smp.h>
@@ -226,7 +227,7 @@ static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
	}

	/* DIMM unlocked, invalidate all CPU caches before we read it */
-	arch_invalidate_nvdimm_cache();
+	flush_cache_all();

	return 0;
  }
@@ -296,7 +297,7 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
		return -ENOTTY;

	/* flush all cache before we erase DIMM */
-	arch_invalidate_nvdimm_cache();
+	flush_cache_all();
	memcpy(nd_cmd.cmd.passphrase, key->data,
			sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
@@ -316,7 +317,7 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
	}

	/* DIMM erased, invalidate all CPU caches before we read it */
-	arch_invalidate_nvdimm_cache();
+	flush_cache_all();
	return 0;
  }

@@ -353,7 +354,7 @@ static int __maybe_unused intel_security_query_overwrite(struct nvdimm *nvdimm)
	}

	/* flush all cache before we make the nvdimms available */
-	arch_invalidate_nvdimm_cache();
+	flush_cache_all();
	return 0;
  }

@@ -379,7 +380,7 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
		return -ENOTTY;

	/* flush all cache before we erase DIMM */
-	arch_invalidate_nvdimm_cache();
+	flush_cache_all();
	memcpy(nd_cmd.cmd.passphrase, nkey->data,
			sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
index 3dc04b50afaf..e2977872bf2f 100644
--- a/drivers/cxl/security.c
+++ b/drivers/cxl/security.c
@@ -6,6 +6,7 @@
  #include <linux/ndctl.h>
  #include <linux/async.h>
  #include <linux/slab.h>
+#include <linux/cacheflush.h>
  #include "cxlmem.h"
  #include "cxl.h"

@@ -137,7 +138,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
		return rc;

	/* DIMM unlocked, invalidate all CPU caches before we read it */
-	arch_invalidate_nvdimm_cache();
+	flush_cache_all();
	return 0;
  }

@@ -165,7 +166,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
		return rc;

	/* DIMM erased, invalidate all CPU caches before we read it */
-	arch_invalidate_nvdimm_cache();
+	flush_cache_all();
	return 0;
  }

diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index 07e4e7572089..0769afb73380 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -309,13 +309,4 @@ static inline void arch_invalidate_pmem(void *addr, size_t size)
  {
  }
  #endif
-
-#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
-void arch_invalidate_nvdimm_cache(void);
-#else
-static inline void arch_invalidate_nvdimm_cache(void)
-{
-}
-#endif
-
  #endif /* __LIBNVDIMM_H__ */
--
2.36.1

^ permalink raw reply related	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 1/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation
  2022-07-15 21:08 ` [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation Dave Jiang
  2022-07-15 21:09   ` Davidlohr Bueso
@ 2022-07-18  5:34   ` Davidlohr Bueso
  1 sibling, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-18  5:34 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield

On Fri, 15 Jul 2022, Dave Jiang wrote:

>+++ b/drivers/cxl/security.c
>@@ -0,0 +1,57 @@
>+// SPDX-License-Identifier: GPL-2.0-only
>+/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
>+#include <linux/libnvdimm.h>
>+#include <asm/unaligned.h>
>+#include <linux/module.h>
>+#include <linux/ndctl.h>

ndctl.h can be removed.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 2/15] tools/testing/cxl: Create context for cxl mock device
  2022-07-15 21:08 ` [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device Dave Jiang
@ 2022-07-18  6:29   ` Davidlohr Bueso
  2022-08-03 16:36   ` [PATCH RFC 02/15] " Jonathan Cameron
  1 sibling, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-18  6:29 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield

On Fri, 15 Jul 2022, Dave Jiang wrote:

>Add context struct for mock device and move lsa under the context. This
>allows additional information such as security status and other persistent
>security data such as passphrase to be added for the emulated test device.
>

Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>

>Signed-off-by: Dave Jiang <dave.jiang@intel.com>
>---
> tools/testing/cxl/test/mem.c |   29 +++++++++++++++++++++++------
> 1 file changed, 23 insertions(+), 6 deletions(-)
>
>diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
>index 6b9239b2afd4..723378248321 100644
>--- a/tools/testing/cxl/test/mem.c
>+++ b/tools/testing/cxl/test/mem.c
>@@ -9,6 +9,10 @@
> #include <linux/bits.h>
> #include <cxlmem.h>
>
>+struct mock_mdev_data {
>+	void *lsa;
>+};
>+
> #define LSA_SIZE SZ_128K
> #define EFFECT(x) (1U << x)
>
>@@ -140,7 +144,8 @@ static int mock_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
> static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
> {
> 	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
>-	void *lsa = dev_get_drvdata(cxlds->dev);
>+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
>+	void *lsa = mdata->lsa;
> 	u32 offset, length;
>
> 	if (sizeof(*get_lsa) > cmd->size_in)
>@@ -159,7 +164,8 @@ static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
> static int mock_set_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
> {
> 	struct cxl_mbox_set_lsa *set_lsa = cmd->payload_in;
>-	void *lsa = dev_get_drvdata(cxlds->dev);
>+	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
>+	void *lsa = mdata->lsa;
> 	u32 offset, length;
>
> 	if (sizeof(*set_lsa) > cmd->size_in)
>@@ -237,9 +243,12 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
> 	return rc;
> }
>
>-static void label_area_release(void *lsa)
>+static void cxl_mock_drvdata_release(void *data)
> {
>-	vfree(lsa);
>+	struct mock_mdev_data *mdata = data;
>+
>+	vfree(mdata->lsa);
>+	vfree(mdata);
> }
>
> static int cxl_mock_mem_probe(struct platform_device *pdev)
>@@ -247,13 +256,21 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
> 	struct device *dev = &pdev->dev;
> 	struct cxl_memdev *cxlmd;
> 	struct cxl_dev_state *cxlds;
>+	struct mock_mdev_data *mdata;
> 	void *lsa;
> 	int rc;
>
>+	mdata = vmalloc(sizeof(*mdata));
>+	if (!mdata)
>+		return -ENOMEM;
>+
> 	lsa = vmalloc(LSA_SIZE);
>-	if (!lsa)
>+	if (!lsa) {
>+		vfree(mdata);
> 		return -ENOMEM;
>-	rc = devm_add_action_or_reset(dev, label_area_release, lsa);
>+	}
>+
>+	rc = devm_add_action_or_reset(dev, cxl_mock_drvdata_release, mdata);
> 	if (rc)
> 		return rc;
> 	dev_set_drvdata(dev, lsa);
>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 4/15] cxl/pmem: Add "Set Passphrase" security command support
  2022-07-15 21:08 ` [PATCH RFC 04/15] cxl/pmem: Add "Set Passphrase" security command support Dave Jiang
@ 2022-07-18  6:36   ` Davidlohr Bueso
  2022-07-19 18:55     ` Dave Jiang
  2022-08-03 17:01   ` [PATCH RFC 04/15] " Jonathan Cameron
  1 sibling, 1 reply; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-18  6:36 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield

On Fri, 15 Jul 2022, Dave Jiang wrote:

>However, the spec leaves a gap WRT master passphrase usage. The spec does
>not define any way to query whether master passphrase support is available
>for the device, nor do the commands that take a master passphrase return a
>specific error indicating that the master passphrase is not supported. If a
>device that does not support the master passphrase is issued a command with
>one, the error returned by the device will be ambiguous.

In general I think that the 2.0 spec is brief at *best* wrt these topics.
Even if a lot is redundant, there should be an explicit equivalent to the
theory of operation found in https://pmem.io/documents/NVDIMM_DSM_Interface-V1.8.pdf

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 00/15] Introduce security commands for CXL pmem device
  2022-07-15 21:29 ` [PATCH RFC 00/15] Introduce security commands for CXL pmem device Davidlohr Bueso
@ 2022-07-19 18:53   ` Dave Jiang
  0 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-19 18:53 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield


On 7/15/2022 2:29 PM, Davidlohr Bueso wrote:
> On Fri, 15 Jul 2022, Dave Jiang wrote:
>
>> This series is seeking comments on the implementation. It has not 
>> been fully
>> tested yet.
>
> Sorry if this is already somewhere, but how exactly does one test the 
> mock device?
So you can do "make M=tools/testing/cxl" to build the cxl_test drivers. It's
similar to ndctl_test, and the ndctl README has some instructions on how
to build and load. We probably should add some information for cxl_test in
that file. The run_qemu tool from Vishal also provides support for this
if you add the --cxl-test switch.
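
A rough sketch of the flow, assuming the out-of-tree modules build and
install cleanly (the ndctl scripts and run_qemu take care of the details
properly):

  $> make M=tools/testing/cxl
  $> sudo make M=tools/testing/cxl modules_install
  $> sudo modprobe cxl_test    # pulls in the cxl_mock* mock modules
  $> cxl list -M               # the emulated memdevs should show up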

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 4/15] cxl/pmem: Add "Set Passphrase" security command support
  2022-07-18  6:36   ` [PATCH RFC 4/15] " Davidlohr Bueso
@ 2022-07-19 18:55     ` Dave Jiang
  0 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-07-19 18:55 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield


On 7/17/2022 11:36 PM, Davidlohr Bueso wrote:
> On Fri, 15 Jul 2022, Dave Jiang wrote:
>
>> However, the spec leaves a gap WRT master passphrase usages. The spec 
>> does
>> not define any ways to retrieve the status of if the support of master
>> passphrase is available for the device, nor does the commands that 
>> utilize
>> master passphrase will return a specific error that indicates master
>> passphrase is not supported. If using a device does not support master
>> passphrase and a command is issued with a master passphrase, the error
>> message returned by the device will be ambiguos.
>
> In general I think that the 2.0 spec is brief at *best* wrt to these 
> topics.
> Even if a lot is redundant, there should be an explicit equivalent to the
> theory of operation found in 
> https://pmem.io/documents/NVDIMM_DSM_Interface-V1.8.pdf

I totally agree.


>
> Thanks,
> Davidlohr

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-07-18  5:30   ` Davidlohr Bueso
@ 2022-07-19 19:07     ` Dave Jiang
  2022-08-03 17:37         ` Jonathan Cameron
  0 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-19 19:07 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, Jonathan.Cameron, a.manzanares


On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
> On Fri, 15 Jul 2022, Dave Jiang wrote:
>
>> The original implementation to flush all cache after unlocking the 
>> nvdimm
>> resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
>> nvdimm with security operations arrives on other archs. With support CXL
>> pmem supporting security operations, specifically "unlock" dimm, the 
>> need
>> for an arch supported helper function to invalidate all CPU cache for
>> nvdimm has arrived. Remove original implementation from acpi/nfit and 
>> add
>> cross arch support for this operation.
>>
>> Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to 
>> opt in
>> and provide the support via wbinvd_on_all_cpus() call.
>
> So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
> its own semantics (iirc there was a flush all call in the past). Cc'ing
> Jonathan as well.
>
> Anyway, I think this call should not be defined in any place other 
> than core
> kernel headers, and not in pat/nvdimm. I was trying to make it fit in 
> smp.h,
> for example, but conviniently we might be able to hijack 
> flush_cache_all()
> for our purposes as of course neither x86-64 arm64 uses it :)
>
> And I see this as safe (wrt not adding a big hammer on unaware 
> drivers) as
> the 32bit archs that define the call are mostly contained thin their 
> arch/,
> and the few in drivers/ are still specific to those archs.
>
> Maybe something like the below.

Ok. I'll replace my version with yours.


>
> Thanks,
> Davidlohr
>
> ------8<----------------------------------------
> Subject: [PATCH] arch/x86: define flush_cache_all as global wbinvd
>
> With CXL security features, global CPU cache flushing nvdimm
> requirements are no longer specific to that subsystem, even
> beyond the scope of security_ops. CXL will need such semantics
> for features not necessarily limited to persistent memory.
>
> So use the flush_cache_all() for the wbinvd across all
> CPUs on x86. arm64, which is another platform to have CXL
> support can also define its own semantics here.
>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
> ---
>  arch/x86/Kconfig                  |  1 -
>  arch/x86/include/asm/cacheflush.h |  5 +++++
>  arch/x86/mm/pat/set_memory.c      |  8 --------
>  drivers/acpi/nfit/intel.c         | 11 ++++++-----
>  drivers/cxl/security.c            |  5 +++--
>  include/linux/libnvdimm.h         |  9 ---------
>  6 files changed, 14 insertions(+), 25 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 8dbe89eba639..be0b95e51df6 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -83,7 +83,6 @@ config X86
>     select ARCH_HAS_MEMBARRIER_SYNC_CORE
>     select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
>     select ARCH_HAS_PMEM_API        if X86_64
> -    select ARCH_HAS_NVDIMM_INVAL_CACHE    if X86_64
>     select ARCH_HAS_PTE_DEVMAP        if X86_64
>     select ARCH_HAS_PTE_SPECIAL
>     select ARCH_HAS_UACCESS_FLUSHCACHE    if X86_64
> diff --git a/arch/x86/include/asm/cacheflush.h 
> b/arch/x86/include/asm/cacheflush.h
> index b192d917a6d0..05c79021665d 100644
> --- a/arch/x86/include/asm/cacheflush.h
> +++ b/arch/x86/include/asm/cacheflush.h
> @@ -10,4 +10,9 @@
>
>  void clflush_cache_range(void *addr, unsigned int size);
>
> +#define flush_cache_all()        \
> +do {                    \
> +    wbinvd_on_all_cpus();        \
> +} while (0)
> +
>  #endif /* _ASM_X86_CACHEFLUSH_H */
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index e4cd1286deef..1abd5438f126 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -330,14 +330,6 @@ void arch_invalidate_pmem(void *addr, size_t size)
>  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
>  #endif
>
> -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
> -void arch_invalidate_nvdimm_cache(void)
> -{
> -    wbinvd_on_all_cpus();
> -}
> -EXPORT_SYMBOL_GPL(arch_invalidate_nvdimm_cache);
> -#endif
> -
>  static void __cpa_flush_all(void *arg)
>  {
>     unsigned long cache = (unsigned long)arg;
> diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
> index 242d2e9203e9..1b0ecb4d67e6 100644
> --- a/drivers/acpi/nfit/intel.c
> +++ b/drivers/acpi/nfit/intel.c
> @@ -1,6 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /* Copyright(c) 2018 Intel Corporation. All rights reserved. */
>  #include <linux/libnvdimm.h>
> +#include <linux/cacheflush.h>
>  #include <linux/ndctl.h>
>  #include <linux/acpi.h>
>  #include <asm/smp.h>
> @@ -226,7 +227,7 @@ static int __maybe_unused 
> intel_security_unlock(struct nvdimm *nvdimm,
>     }
>
>     /* DIMM unlocked, invalidate all CPU caches before we read it */
> -    arch_invalidate_nvdimm_cache();
> +    flush_cache_all();
>
>     return 0;
>  }
> @@ -296,7 +297,7 @@ static int __maybe_unused 
> intel_security_erase(struct nvdimm *nvdimm,
>         return -ENOTTY;
>
>     /* flush all cache before we erase DIMM */
> -    arch_invalidate_nvdimm_cache();
> +    flush_cache_all();
>     memcpy(nd_cmd.cmd.passphrase, key->data,
>             sizeof(nd_cmd.cmd.passphrase));
>     rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
> @@ -316,7 +317,7 @@ static int __maybe_unused 
> intel_security_erase(struct nvdimm *nvdimm,
>     }
>
>     /* DIMM erased, invalidate all CPU caches before we read it */
> -    arch_invalidate_nvdimm_cache();
> +    flush_cache_all();
>     return 0;
>  }
>
> @@ -353,7 +354,7 @@ static int __maybe_unused 
> intel_security_query_overwrite(struct nvdimm *nvdimm)
>     }
>
>     /* flush all cache before we make the nvdimms available */
> -    arch_invalidate_nvdimm_cache();
> +    flush_cache_all();
>     return 0;
>  }
>
> @@ -379,7 +380,7 @@ static int __maybe_unused 
> intel_security_overwrite(struct nvdimm *nvdimm,
>         return -ENOTTY;
>
>     /* flush all cache before we erase DIMM */
> -    arch_invalidate_nvdimm_cache();
> +    flush_cache_all();
>     memcpy(nd_cmd.cmd.passphrase, nkey->data,
>             sizeof(nd_cmd.cmd.passphrase));
>     rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
> diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
> index 3dc04b50afaf..e2977872bf2f 100644
> --- a/drivers/cxl/security.c
> +++ b/drivers/cxl/security.c
> @@ -6,6 +6,7 @@
>  #include <linux/ndctl.h>
>  #include <linux/async.h>
>  #include <linux/slab.h>
> +#include <linux/cacheflush.h>
>  #include "cxlmem.h"
>  #include "cxl.h"
>
> @@ -137,7 +138,7 @@ static int cxl_pmem_security_unlock(struct nvdimm 
> *nvdimm,
>         return rc;
>
>     /* DIMM unlocked, invalidate all CPU caches before we read it */
> -    arch_invalidate_nvdimm_cache();
> +    flush_cache_all();
>     return 0;
>  }
>
> @@ -165,7 +166,7 @@ static int 
> cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
>         return rc;
>
>     /* DIMM erased, invalidate all CPU caches before we read it */
> -    arch_invalidate_nvdimm_cache();
> +    flush_cache_all();
>     return 0;
>  }
>
> diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
> index 07e4e7572089..0769afb73380 100644
> --- a/include/linux/libnvdimm.h
> +++ b/include/linux/libnvdimm.h
> @@ -309,13 +309,4 @@ static inline void arch_invalidate_pmem(void 
> *addr, size_t size)
>  {
>  }
>  #endif
> -
> -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
> -void arch_invalidate_nvdimm_cache(void);
> -#else
> -static inline void arch_invalidate_nvdimm_cache(void)
> -{
> -}
> -#endif
> -
>  #endif /* __LIBNVDIMM_H__ */
> -- 
> 2.36.1
>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support
  2022-07-15 21:09 ` [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support Dave Jiang
@ 2022-07-20  6:17   ` Davidlohr Bueso
  2022-07-20 17:38     ` Dave Jiang
  0 siblings, 1 reply; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-20  6:17 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, a.manzanares

On Fri, 15 Jul 2022, Dave Jiang wrote:

>Create callback function to support the nvdimm_security_ops() ->erase()
>callback. Translate the operation to send "Passphrase Secure Erase"
>security command for CXL memory device.
>
>When the mem device is secure erased, arch_invalidate_nvdimm_cache() is
>called in order to invalidate all CPU caches before attempting to access
>the mem device again.
>
>See CXL 2.0 spec section 8.2.9.5.6.6 for reference.

So something like the below is what I picture for 8.2.9.5.5.2
(I'm still thinking about the background command polling semantics
and corner cases for the overwrite/sanitize - also needed for
scan media - so I haven't implemented 8.2.9.5.5.1, but should
otherwise be straightforward).

The use cases here would be:

$> cxl sanitize --crypto-erase memN
$> cxl sanitize --overwrite memN
$> cxl sanitize --wait-overwrite memN

While slightly out of the scope of this series, it still might be
worth carrying here as they aren't that unrelated, unless there is
something fundamentally wrong with my approach.

Thanks,
Davidlohr

-----<8----------------------------------------------------
[PATCH 16/15] cxl/mbox: Add "Secure Erase" security command support

To properly support this feature, create a 'security' sysfs
file that when read will list the current pmem security state,
and when written to, perform the requested operation (only
secure erase is currently supported).

Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
  Documentation/ABI/testing/sysfs-bus-cxl | 13 +++++++
  drivers/cxl/core/mbox.c                 | 44 +++++++++++++++++++++
  drivers/cxl/core/memdev.c               | 51 +++++++++++++++++++++++++
  drivers/cxl/cxlmem.h                    |  3 ++
  4 files changed, 111 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index 7c2b846521f3..ca5216b37bcf 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -52,6 +52,19 @@ Description:
  		host PCI device for this memory device, emit the CPU node
  		affinity for this device.
  
+What:		/sys/bus/cxl/devices/memX/security
+Date:		July, 2022
+KernelVersion:	v5.21
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		Reading this file will display the security state for that
+		device. The following states are available: disabled, frozen,
+		locked and unlocked. When writing to the file, the following
+		command(s) are supported:
+		erase - Secure Erase user data by changing the media encryption
+			keys for all user data areas of the device. This causes
+			all CPU caches to be flushed.
+
  What:		/sys/bus/cxl/devices/*/devtype
  Date:		June, 2021
  KernelVersion:	v5.14
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 54f434733b56..54b4aec615ee 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -787,6 +787,50 @@ int cxl_dev_state_identify(struct cxl_dev_state *cxlds)
  }
  EXPORT_SYMBOL_NS_GPL(cxl_dev_state_identify, CXL);
  
+/**
+ * cxl_mem_sanitize() - Send sanitation related commands to the device.
+ * @cxlds: The device data for the operation
+ * @cmd: The command opcode to send
+ *
+ * Return: 0 if the command was executed successfully, regardless of
+ * whether or not the actual security operation is done in the background.
+ * Upon error, return the result of the mailbox command or -EINVAL if
+ * security requirements are not met.
+ *
+ * See CXL 2.0 @8.2.9.5.5 Sanitize.
+ */
+int cxl_mem_sanitize(struct cxl_dev_state *cxlds, enum cxl_opcode cmd)
+{
+	int rc;
+	u32 sec_out;
+
+	/* TODO: CXL_MBOX_OP_SECURE_SANITIZE */
+	if (cmd != CXL_MBOX_OP_SECURE_ERASE)
+		return -EINVAL;
+
+	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_SECURITY_STATE,
+			       NULL, 0, &sec_out, sizeof(sec_out));
+	if (rc)
+		return rc;
+	/*
+	 * Prior to using these commands, any security applied to
+	 * the user data areas of the device shall be DISABLED (or
+	 * UNLOCKED for secure erase case).
+	 */
+	if (sec_out & CXL_PMEM_SEC_STATE_USER_PASS_SET ||
+	    sec_out & CXL_PMEM_SEC_STATE_LOCKED)
+		return -EINVAL;
+
+	rc = cxl_mbox_send_cmd(cxlds, cmd, NULL, 0, NULL, 0);
+	if (rc == 0) {
+		/* flush all CPU caches before we read it */
+		flush_cache_all();
+	}
+
+	return rc;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_mem_sanitize, CXL);
+
  int cxl_mem_create_range_info(struct cxl_dev_state *cxlds)
  {
  	int rc;
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index f7cdcd33504a..13563facfd62 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -106,12 +106,63 @@ static ssize_t numa_node_show(struct device *dev, struct device_attribute *attr,
  }
  static DEVICE_ATTR_RO(numa_node);
  
+#define CXL_SEC_CMD_SIZE 32
+
+static ssize_t security_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	u32 sec_out;
+	int rc;
+
+	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_SECURITY_STATE,
+			       NULL, 0, &sec_out, sizeof(sec_out));
+	if (rc)
+		return rc;
+
+	if (!(sec_out & CXL_PMEM_SEC_STATE_USER_PASS_SET))
+		return sprintf(buf, "disabled\n");
+	if (sec_out & CXL_PMEM_SEC_STATE_FROZEN)
+		return sprintf(buf, "frozen\n");
+	if (sec_out & CXL_PMEM_SEC_STATE_LOCKED)
+		return sprintf(buf, "locked\n");
+	else
+		return sprintf(buf, "unlocked\n");
+}
+
+static ssize_t security_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t len)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	char cmd[CXL_SEC_CMD_SIZE+1];
+	ssize_t rc;
+
+	rc = sscanf(buf, "%"__stringify(CXL_SEC_CMD_SIZE)"s", cmd);
+	if (rc < 1)
+		return -EINVAL;
+
+	if (sysfs_streq(cmd, "erase")) {
+		dev_dbg(dev, "secure-erase\n");
+		rc = cxl_mem_sanitize(cxlds, CXL_MBOX_OP_SECURE_ERASE);
+	} else
+		rc = -EINVAL;
+
+	if (rc == 0)
+		rc = len;
+	return rc;
+}
+static DEVICE_ATTR_RW(security);
+
  static struct attribute *cxl_memdev_attributes[] = {
  	&dev_attr_serial.attr,
  	&dev_attr_firmware_version.attr,
  	&dev_attr_payload_max.attr,
  	&dev_attr_label_storage_size.attr,
  	&dev_attr_numa_node.attr,
+	&dev_attr_security.attr,
  	NULL,
  };
  
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index a375a69040d2..cd6650ff757f 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -250,6 +250,7 @@ enum cxl_opcode {
  	CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS	= 0x4303,
  	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
  	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
+	CXL_MBOX_OP_SECURE_ERASE        = 0x4401,
  	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
  	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
  	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
@@ -348,6 +349,8 @@ struct cxl_mem_command {
  #define CXL_CMD_FLAG_FORCE_ENABLE BIT(0)
  };
  
+int cxl_mem_sanitize(struct cxl_dev_state *cxlds, enum cxl_opcode cmd);
+
  #define CXL_PMEM_SEC_STATE_USER_PASS_SET	0x01
  #define CXL_PMEM_SEC_STATE_MASTER_PASS_SET	0x02
  #define CXL_PMEM_SEC_STATE_LOCKED		0x04
-- 
2.36.1

^ permalink raw reply related	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support
  2022-07-20  6:17   ` Davidlohr Bueso
@ 2022-07-20 17:38     ` Dave Jiang
  2022-07-20 18:02       ` Davidlohr Bueso
  0 siblings, 1 reply; 79+ messages in thread
From: Dave Jiang @ 2022-07-20 17:38 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, a.manzanares


On 7/19/2022 11:17 PM, Davidlohr Bueso wrote:
> On Fri, 15 Jul 2022, Dave Jiang wrote:
>
>> Create callback function to support the nvdimm_security_ops() ->erase()
>> callback. Translate the operation to send "Passphrase Secure Erase"
>> security command for CXL memory device.
>>
>> When the mem device is secure erased, arch_invalidate_nvdimm_cache() is
>> called in order to invalidate all CPU caches before attempting to access
>> the mem device again.
>>
>> See CXL 2.0 spec section 8.2.9.5.6.6 for reference.
>
> So something like the below is what I picture for 8.2.9.5.5.2
> (I'm still thinking about the background command polling semantics
> and corner cases for the overwrite/sanitize - also needed for
> scan media - so I haven't implemented 8.2.9.5.5.1, but should
> otherwise be straightforward).
>
> The use cases here would be:
>
> $> cxl sanitize --crypto-erase memN
> $> cxl sanitize --overwrite memN
> $> cxl sanitize --wait-overwrite memN
>
> While slightly out of the scope of this series, it still might be
> worth carrying as they are that unrelated unless there is something
> fundamentally with my approach.

The patch below is about what I had in mind for the secure erase command.
Looks good to me. The only thing I think it needs is to make sure the
mem devs are not "in use" before secure erase, in addition to the
security check that's already there below. I was planning on working on
this after getting the current security commands series wrapped up. But
if you are already developing this then I'll defer.
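
Roughly what I have in mind for that, inside cxl_mem_sanitize() before the
command is sent (cxl_memdev_busy() is a hypothetical helper here, not an
existing API; the real check would probably key off active regions and
namespaces):

	/* hypothetical: refuse to sanitize while the memdev has active users */
	if (cxl_memdev_busy(cxlds))
		return -EBUSY;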

Also here's the latest code that I'm still going through testing if you 
want to play with it. I still need to replace the x86 patch with your 
version.

https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=cxl-security


>
> Thanks,
> Davidlohr
>
> -----<8----------------------------------------------------
> [PATCH 16/15] cxl/mbox: Add "Secure Erase" security command support
>
> To properly support this feature, create a 'security' sysfs
> file that when read will list the current pmem security state,
> and when written to, perform the requested operation (only
> secure erase is currently supported).
>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
> ---
>  Documentation/ABI/testing/sysfs-bus-cxl | 13 +++++++
>  drivers/cxl/core/mbox.c                 | 44 +++++++++++++++++++++
>  drivers/cxl/core/memdev.c               | 51 +++++++++++++++++++++++++
>  drivers/cxl/cxlmem.h                    |  3 ++
>  4 files changed, 111 insertions(+)
>
> diff --git a/Documentation/ABI/testing/sysfs-bus-cxl 
> b/Documentation/ABI/testing/sysfs-bus-cxl
> index 7c2b846521f3..ca5216b37bcf 100644
> --- a/Documentation/ABI/testing/sysfs-bus-cxl
> +++ b/Documentation/ABI/testing/sysfs-bus-cxl
> @@ -52,6 +52,19 @@ Description:
>          host PCI device for this memory device, emit the CPU node
>          affinity for this device.
>
> +What:        /sys/bus/cxl/devices/memX/security
> +Date:        July, 2022
> +KernelVersion:    v5.21
> +Contact:    linux-cxl@vger.kernel.org
> +Description:
> +        Reading this file will display the security state for that
> +        device. The following states are available: disabled, frozen,
> +        locked and unlocked. When writing to the file, the following
> +        command(s) are supported:
> +        erase - Secure Erase user data by changing the media encryption
> +            keys for all user data areas of the device. This causes
> +            all CPU caches to be flushed.
> +
>  What:        /sys/bus/cxl/devices/*/devtype
>  Date:        June, 2021
>  KernelVersion:    v5.14
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 54f434733b56..54b4aec615ee 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> @@ -787,6 +787,50 @@ int cxl_dev_state_identify(struct cxl_dev_state 
> *cxlds)
>  }
>  EXPORT_SYMBOL_NS_GPL(cxl_dev_state_identify, CXL);
>
> +/**
> + * cxl_mem_sanitize() - Send sanitation related commands to the device.
> + * @cxlds: The device data for the operation
> + * @cmd: The command opcode to send
> + *
> + * Return: 0 if the command was executed successfully, regardless of
> + * whether or not the actual security operation is done in the 
> background.
> + * Upon error, return the result of the mailbox command or -EINVAL if
> + * security requirements are not met.
> + *
> + * See CXL 2.0 @8.2.9.5.5 Sanitize.
> + */
> +int cxl_mem_sanitize(struct cxl_dev_state *cxlds, enum cxl_opcode cmd)
> +{
> +    int rc;
> +    u32 sec_out;
> +
> +    /* TODO: CXL_MBOX_OP_SECURE_SANITIZE */
> +    if (cmd != CXL_MBOX_OP_SECURE_ERASE)
> +        return -EINVAL;
> +
> +    rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_SECURITY_STATE,
> +                   NULL, 0, &sec_out, sizeof(sec_out));
> +    if (rc)
> +        return rc;
> +    /*
> +     * Prior to using these commands, any security applied to
> +     * the user data areas of the device shall be DISABLED (or
> +     * UNLOCKED for secure erase case).
> +     */
> +    if (sec_out & CXL_PMEM_SEC_STATE_USER_PASS_SET ||
> +        sec_out & CXL_PMEM_SEC_STATE_LOCKED)
> +        return -EINVAL;
> +
> +    rc = cxl_mbox_send_cmd(cxlds, cmd, NULL, 0, NULL, 0);
> +    if (rc == 0) {
> +        /* flush all CPU caches before we read it */
> +        flush_cache_all();
> +    }
> +
> +    return rc;
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_mem_sanitize, CXL);
> +
>  int cxl_mem_create_range_info(struct cxl_dev_state *cxlds)
>  {
>      int rc;
> diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
> index f7cdcd33504a..13563facfd62 100644
> --- a/drivers/cxl/core/memdev.c
> +++ b/drivers/cxl/core/memdev.c
> @@ -106,12 +106,63 @@ static ssize_t numa_node_show(struct device 
> *dev, struct device_attribute *attr,
>  }
>  static DEVICE_ATTR_RO(numa_node);
>
> +#define CXL_SEC_CMD_SIZE 32
> +
> +static ssize_t security_show(struct device *dev,
> +                 struct device_attribute *attr, char *buf)
> +{
> +    struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> +    struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +    u32 sec_out;
> +    int rc;
> +
> +    rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_SECURITY_STATE,
> +                   NULL, 0, &sec_out, sizeof(sec_out));
> +    if (rc)
> +        return rc;
> +
> +    if (!(sec_out & CXL_PMEM_SEC_STATE_USER_PASS_SET))
> +        return sprintf(buf, "disabled\n");
> +    if (sec_out & CXL_PMEM_SEC_STATE_FROZEN)
> +        return sprintf(buf, "frozen\n");
> +    if (sec_out & CXL_PMEM_SEC_STATE_LOCKED)
> +        return sprintf(buf, "locked\n");
> +    else
> +        return sprintf(buf, "unlocked\n");
> +}
> +
> +static ssize_t security_store(struct device *dev,
> +                  struct device_attribute *attr,
> +                  const char *buf, size_t len)
> +{
> +    struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> +    struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +    char cmd[CXL_SEC_CMD_SIZE+1];
> +    ssize_t rc;
> +
> +    rc = sscanf(buf, "%"__stringify(CXL_SEC_CMD_SIZE)"s", cmd);
> +    if (rc < 1)
> +        return -EINVAL;
> +
> +    if (sysfs_streq(cmd, "erase")) {
> +        dev_dbg(dev, "secure-erase\n");
> +        rc = cxl_mem_sanitize(cxlds, CXL_MBOX_OP_SECURE_ERASE);
> +    } else
> +        rc = -EINVAL;
> +
> +    if (rc == 0)
> +        rc = len;
> +    return rc;
> +}
> +static DEVICE_ATTR_RW(security);
> +
>  static struct attribute *cxl_memdev_attributes[] = {
>      &dev_attr_serial.attr,
>      &dev_attr_firmware_version.attr,
>      &dev_attr_payload_max.attr,
>      &dev_attr_label_storage_size.attr,
>      &dev_attr_numa_node.attr,
> +    &dev_attr_security.attr,
>      NULL,
>  };
>
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index a375a69040d2..cd6650ff757f 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -250,6 +250,7 @@ enum cxl_opcode {
>      CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS    = 0x4303,
>      CXL_MBOX_OP_SCAN_MEDIA        = 0x4304,
>      CXL_MBOX_OP_GET_SCAN_MEDIA    = 0x4305,
> +    CXL_MBOX_OP_SECURE_ERASE        = 0x4401,
>      CXL_MBOX_OP_GET_SECURITY_STATE    = 0x4500,
>      CXL_MBOX_OP_SET_PASSPHRASE    = 0x4501,
>      CXL_MBOX_OP_DISABLE_PASSPHRASE    = 0x4502,
> @@ -348,6 +349,8 @@ struct cxl_mem_command {
>  #define CXL_CMD_FLAG_FORCE_ENABLE BIT(0)
>  };
>
> +int cxl_mem_sanitize(struct cxl_dev_state *cxlds, enum cxl_opcode cmd);
> +
>  #define CXL_PMEM_SEC_STATE_USER_PASS_SET    0x01
>  #define CXL_PMEM_SEC_STATE_MASTER_PASS_SET    0x02
>  #define CXL_PMEM_SEC_STATE_LOCKED        0x04

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support
  2022-07-20 17:38     ` Dave Jiang
@ 2022-07-20 18:02       ` Davidlohr Bueso
  0 siblings, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-07-20 18:02 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, a.manzanares

On Wed, 20 Jul 2022, Dave Jiang wrote:

>Patch below is about what I had in mind for the secure erase command.
>Looks good to me. The only thing I think it needs is to make sure the
>mem devs are not "in use" before secure erase in addition to the
>security check that's already there below. I was planning on working
>on this after getting the current security commands series wrapped up.
>But if you are already developing this then I'll defer.
>
>Also here's the latest code that I'm still going through testing if
>you want to play with it. I still need to replace the x86 patch with
>your version.

Ok I will play more and test your series during the rest of the week,
as well as my own changes. I'll end up with a formal series on top of this
one (or whatever version you are at by then) with the sanitize
+ secure-erase (and the relevant mock device updates).

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation
  2022-07-15 21:09   ` Davidlohr Bueso
@ 2022-08-03 16:29     ` Jonathan Cameron
  0 siblings, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 16:29 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Dave Jiang, linux-cxl, nvdimm, dan.j.williams, bwidawsk,
	ira.weiny, vishal.l.verma, alison.schofield

On Fri, 15 Jul 2022 14:09:04 -0700
Davidlohr Bueso <dave@stgolabs.net> wrote:

> On Fri, 15 Jul 2022, Dave Jiang wrote:
> 
> >+config CXL_PMEM_SECURITY
> >+	tristate "CXL PMEM SECURITY: Persistent Memory Security Support"
> >+	depends on CXL_PMEM
> >+	default CXL_BUS
> >+	help
> >+	  CXL memory device "Persistent Memory Data-at-rest Security" command set
> >+	  support. Support opcode 0x4500..0x4505. The commands supported are "Get
> >+	  Security State", "Set Passphrase", "Disable Passphrase", "Unlock",
> >+	  "Freeze Security State", and "Passphrase Secure Erase". Security operation
> >+	  is done through nvdimm security_ops.
> >+
> >+	  See Chapter 8.2.9.5.6 in the CXL 2.0 specification for a detailed description
> >+	  of the Persistent Memory Security.
> >+
> >+	  If unsure say 'm'.  
> 
> Is there any fundamental reason why we need to add a new CXL Kconfig option
> instead of just tucking this under CXL_PMEM?

Agreed. I can't immediately see why we'd have this separately configurable.

Other than this looks good to me.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device
  2022-07-15 21:08 ` [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device Dave Jiang
  2022-07-18  6:29   ` [PATCH RFC 2/15] " Davidlohr Bueso
@ 2022-08-03 16:36   ` Jonathan Cameron
  2022-08-09 20:30     ` Dave Jiang
  1 sibling, 1 reply; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 16:36 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:08:44 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Add context struct for mock device and move lsa under the context. This
> allows additional information such as security status and other persistent
> security data such as passphrase to be added for the emulated test device.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> ---
>  tools/testing/cxl/test/mem.c |   29 +++++++++++++++++++++++------
>  1 file changed, 23 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
> index 6b9239b2afd4..723378248321 100644
> --- a/tools/testing/cxl/test/mem.c
> +++ b/tools/testing/cxl/test/mem.c
> @@ -9,6 +9,10 @@
>  #include <linux/bits.h>
>  #include <cxlmem.h>
>  
> +struct mock_mdev_data {
> +	void *lsa;
> +};
> +
>  #define LSA_SIZE SZ_128K
>  #define EFFECT(x) (1U << x)
>  
> @@ -140,7 +144,8 @@ static int mock_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  {
>  	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
> -	void *lsa = dev_get_drvdata(cxlds->dev);
> +	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
> +	void *lsa = mdata->lsa;
>  	u32 offset, length;
>  
>  	if (sizeof(*get_lsa) > cmd->size_in)
> @@ -159,7 +164,8 @@ static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  static int mock_set_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  {
>  	struct cxl_mbox_set_lsa *set_lsa = cmd->payload_in;
> -	void *lsa = dev_get_drvdata(cxlds->dev);
> +	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
> +	void *lsa = mdata->lsa;
>  	u32 offset, length;
>  
>  	if (sizeof(*set_lsa) > cmd->size_in)
> @@ -237,9 +243,12 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
>  	return rc;
>  }
>  
> -static void label_area_release(void *lsa)
> +static void cxl_mock_drvdata_release(void *data)
>  {
> -	vfree(lsa);
> +	struct mock_mdev_data *mdata = data;
> +
> +	vfree(mdata->lsa);
> +	vfree(mdata);
>  }
>  
>  static int cxl_mock_mem_probe(struct platform_device *pdev)
> @@ -247,13 +256,21 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
>  	struct device *dev = &pdev->dev;
>  	struct cxl_memdev *cxlmd;
>  	struct cxl_dev_state *cxlds;
> +	struct mock_mdev_data *mdata;
>  	void *lsa;
>  	int rc;
>  
> +	mdata = vmalloc(sizeof(*mdata));

It's tiny so why vmalloc?  I guess that might become apparent later.
devm_kzalloc() should be fine and lead to simpler error handling.

> +	if (!mdata)
> +		return -ENOMEM;
> +
>  	lsa = vmalloc(LSA_SIZE);
> -	if (!lsa)
> +	if (!lsa) {
> +		vfree(mdata);
In general doing this just makes things fragile in the long term. Better to
register one devm_add_action_or_reset() for each thing set up (or use a
standard device-managed allocation), e.g. something like the sketch below.
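
For the sake of discussion, a rough and untested sketch of what I have in
mind, keeping label_area_release() as it was in the original code:

static int cxl_mock_mem_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct mock_mdev_data *mdata;
	int rc;

	/* tiny context struct: device-managed, nothing to unwind by hand */
	mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL);
	if (!mdata)
		return -ENOMEM;

	mdata->lsa = vmalloc(LSA_SIZE);
	if (!mdata->lsa)
		return -ENOMEM;

	/* one devm action per allocation that needs a custom release */
	rc = devm_add_action_or_reset(dev, label_area_release, mdata->lsa);
	if (rc)
		return rc;

	dev_set_drvdata(dev, mdata);

	/* ... rest of probe continues as in the original patch ... */
	return 0;
}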

>  		return -ENOMEM;
> -	rc = devm_add_action_or_reset(dev, label_area_release, lsa);
> +	}
> +
> +	rc = devm_add_action_or_reset(dev, cxl_mock_drvdata_release, mdata);
>  	if (rc)
>  		return rc;
>  	dev_set_drvdata(dev, lsa);
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 03/15] tools/testing/cxl: Add "Get Security State" opcode support
  2022-07-15 21:08 ` [PATCH RFC 03/15] tools/testing/cxl: Add "Get Security State" opcode support Dave Jiang
@ 2022-08-03 16:51   ` Jonathan Cameron
  0 siblings, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 16:51 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:08:49 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Add the emulation support for handling "Get Security State" opcode for a
> CXL memory device for the cxl_test. The function will copy back device
> security state bitmask to the output payload.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> ---
>  tools/testing/cxl/test/mem.c |   24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
> index 723378248321..337e5a099d31 100644
> --- a/tools/testing/cxl/test/mem.c
> +++ b/tools/testing/cxl/test/mem.c
> @@ -11,6 +11,7 @@
>  
>  struct mock_mdev_data {
>  	void *lsa;
> +	u32 security_state;
>  };
>  
>  #define LSA_SIZE SZ_128K
> @@ -141,6 +142,26 @@ static int mock_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  	return 0;
>  }
>  
> +static int mock_get_security_state(struct cxl_dev_state *cxlds,
> +				   struct cxl_mbox_cmd *cmd)
> +{
> +	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
> +
> +	if (cmd->size_in) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;

Interestingly I don't see invalid input as a possible return code for this command
in the spec.  Would it be an invalid payload length?
Also, is this based on current tree?  For other fail cases we don't set the
return code because the -EINVAL will presumably make the test fail anyway.

> +		return -EINVAL;
> +	}
> +
> +	if (cmd->size_out != sizeof(u32)) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
Interesting corner. If this were a real device, I think this isn't actually
an error (it's just stupid, as you ask a question and don't get an answer).
Returning -EINVAL from the test makes sense, but setting invalid input
in the return code probably doesn't. It's ignored anyway, as we won't carry
on because of the -EINVAL.
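
i.e. if the return code handling is dropped, this could collapse to
something like the following (untested sketch):

static int mock_get_security_state(struct cxl_dev_state *cxlds,
				   struct cxl_mbox_cmd *cmd)
{
	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);

	/* a malformed request from the test harness itself; just fail */
	if (cmd->size_in || cmd->size_out != sizeof(u32))
		return -EINVAL;

	memcpy(cmd->payload_out, &mdata->security_state, sizeof(u32));

	return 0;
}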

> +		return -EINVAL;
> +	}
> +
> +	memcpy(cmd->payload_out, &mdata->security_state, sizeof(u32));
> +
> +	return 0;
> +}
> +
>  static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  {
>  	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
> @@ -233,6 +254,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
>  	case CXL_MBOX_OP_GET_HEALTH_INFO:
>  		rc = mock_health_info(cxlds, cmd);
>  		break;
> +	case CXL_MBOX_OP_GET_SECURITY_STATE:
> +		rc = mock_get_security_state(cxlds, cmd);
> +		break;
>  	default:
>  		break;
>  	}
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 04/15] cxl/pmem: Add "Set Passphrase" security command support
  2022-07-15 21:08 ` [PATCH RFC 04/15] cxl/pmem: Add "Set Passphrase" security command support Dave Jiang
  2022-07-18  6:36   ` [PATCH RFC 4/15] " Davidlohr Bueso
@ 2022-08-03 17:01   ` Jonathan Cameron
  1 sibling, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 17:01 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:08:55 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Create callback function to support the nvdimm_security_ops ->change_key()
> callback. Translate the operation to send "Set Passphrase" security command
> for CXL memory device. The operation supports setting a passphrase for the
> CXL persistent memory device. It also supports the changing of the
> currently set passphrase. The operation allows manipulation of a user
> passphrase or a master passphrase.
> 
> See CXL 2.0 spec section 8.2.9.5.6.2 for reference.
> 
> However, the spec leaves a gap WRT master passphrase usage. The spec does
> not define any way to discover whether the device supports the master
> passphrase, nor do the commands that take a master passphrase return a
> specific error indicating that the master passphrase is not supported. If
> a device does not support the master passphrase and a command is issued
> with one, the error returned by the device will be ambiguous.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
A couple of trivial comments all of which I'm fine with you ignoring if you like

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/cxl/cxlmem.h   |   14 ++++++++++++++
>  drivers/cxl/security.c |   27 +++++++++++++++++++++++++++
>  2 files changed, 41 insertions(+)
> 
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 35de2889aac3..1e76d22f4fd2 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -251,6 +251,7 @@ enum cxl_opcode {
>  	CXL_MBOX_OP_SCAN_MEDIA		= 0x4304,
>  	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
>  	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
> +	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
>  	CXL_MBOX_OP_MAX			= 0x10000
>  };
>  
> @@ -350,6 +351,19 @@ struct cxl_mem_command {
>  #define CXL_PMEM_SEC_STATE_USER_PLIMIT		0x10
>  #define CXL_PMEM_SEC_STATE_MASTER_PLIMIT	0x20
>  
> +/* set passphrase input payload */
> +struct cxl_set_pass {
> +	u8 type;
> +	u8 reserved[31];
> +	u8 old_pass[NVDIMM_PASSPHRASE_LEN];

Obviously the same length, but maybe add a comment to that effect, as
this is a CXL structure using an NVDIMM define.

> +	u8 new_pass[NVDIMM_PASSPHRASE_LEN];
> +} __packed;
> +
> +enum {
> +	CXL_PMEM_SEC_PASS_MASTER = 0,
> +	CXL_PMEM_SEC_PASS_USER,
> +};
> +
>  int cxl_mbox_send_cmd(struct cxl_dev_state *cxlds, u16 opcode, void *in,
>  		      size_t in_size, void *out, size_t out_size);
>  int cxl_dev_state_identify(struct cxl_dev_state *cxlds);
> diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
> index 5b830ae621db..76ec5087f966 100644
> --- a/drivers/cxl/security.c
> +++ b/drivers/cxl/security.c
> @@ -50,8 +50,35 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm,
>  	return security_flags;
>  }
>  
> +static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
> +					const struct nvdimm_key_data *old_data,
> +					const struct nvdimm_key_data *new_data,
> +					enum nvdimm_passphrase_type ptype)
> +{
> +	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> +	struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +	struct cxl_set_pass *set_pass;
> +	int rc;
> +
> +	set_pass = kzalloc(sizeof(*set_pass), GFP_KERNEL);

It's not huge.  Maybe just have it on the stack? I'm fine either way.
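Something like this, perhaps (untested sketch; the memzero_explicit() of the
key material at the end is an extra thought of mine, not something the
original patch does):

static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
					const struct nvdimm_key_data *old_data,
					const struct nvdimm_key_data *new_data,
					enum nvdimm_passphrase_type ptype)
{
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_dev_state *cxlds = cxl_nvd->cxlmd->cxlds;
	struct cxl_set_pass set_pass = {
		.type = ptype == NVDIMM_MASTER ?
			CXL_PMEM_SEC_PASS_MASTER : CXL_PMEM_SEC_PASS_USER,
	};
	int rc;

	memcpy(set_pass.old_pass, old_data->data, NVDIMM_PASSPHRASE_LEN);
	memcpy(set_pass.new_pass, new_data->data, NVDIMM_PASSPHRASE_LEN);

	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_SET_PASSPHRASE,
			       &set_pass, sizeof(set_pass), NULL, 0);

	/* don't leave passphrases sitting around in stack memory */
	memzero_explicit(&set_pass, sizeof(set_pass));

	return rc;
}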

> +	if (!set_pass)
> +		return -ENOMEM;
> +
> +	set_pass->type = ptype == NVDIMM_MASTER ?
> +		CXL_PMEM_SEC_PASS_MASTER : CXL_PMEM_SEC_PASS_USER;
> +	memcpy(set_pass->old_pass, old_data->data, NVDIMM_PASSPHRASE_LEN);
> +	memcpy(set_pass->new_pass, new_data->data, NVDIMM_PASSPHRASE_LEN);
> +
> +	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_SET_PASSPHRASE,
> +			       set_pass, sizeof(*set_pass), NULL, 0);
> +	kfree(set_pass);
> +	return rc;
> +}
> +
>  static const struct nvdimm_security_ops __cxl_security_ops = {
>  	.get_flags = cxl_pmem_get_security_flags,
> +	.change_key = cxl_pmem_security_change_key,
>  };
>  
>  const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 00/15] Introduce security commands for CXL pmem device
  2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
                   ` (15 preceding siblings ...)
  2022-07-15 21:29 ` [PATCH RFC 00/15] Introduce security commands for CXL pmem device Davidlohr Bueso
@ 2022-08-03 17:03 ` Jonathan Cameron
  2022-08-08 22:18   ` Dave Jiang
  16 siblings, 1 reply; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 17:03 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:08:32 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> This series is seeking comments on the implementation. It has not been fully
> tested yet.
> 
> This series adds the support for "Persistent Memory Data-at-rest Security"
> block of command set for the CXL Memory Devices. The enabling is done through
> the nvdimm_security_ops as the operations are very similar to the same
> operations that the persistent memory devices through NFIT provider support.
> This enabling does not include the security pass-through commands nor the
> Santize commands.
> 
> Under the nvdimm_security_ops, this patch series will enable get_flags(),
> freeze(), change_key(), unlock(), disable(), and erase(). The disable() API
> does not support disabling of the master passphrase. To maintain established
> user ABI through the sysfs attribute "security", the "disable" command is
> left untouched and a new "disable_master" command is introduced with a new
> disable_master() API call for the nvdimm_security_ops().
> 
> This series does not include plumbing to directly handle the security commands
> through cxl control util. The enabled security commands will still go through
> ndctl tool with this enabling.
> 
> For calls such as unlock() and erase(), the CPU caches must be invalidated
> post operation. Currently, the implementation resides in
> drivers/acpi/nfit/intel.c with a comment that it should be implemented
> cross arch when more than just NFIT based device needs this operation.
> With the coming of CXL persistent memory devices this is now needed.
> Introduce ARCH_HAS_NVDIMM_INVAL_CACHE and implement similar to
> ARCH_HAS_PMEM_API where the arch can opt in with implementation.
> Currently only add x86_64 implementation where wbinvd_on_all_cpus()
> is called.
> 
Hi Dave,

Just curious.  What was the reasoning behind this being an RFC?
What in particular do you want comments on?

Thanks,

Jonathan

> ---
> 
> Dave Jiang (15):
>       cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation
>       tools/testing/cxl: Create context for cxl mock device
>       tools/testing/cxl: Add "Get Security State" opcode support
>       cxl/pmem: Add "Set Passphrase" security command support
>       tools/testing/cxl: Add "Set Passphrase" opcode support
>       cxl/pmem: Add Disable Passphrase security command support
>       tools/testing/cxl: Add "Disable" security opcode support
>       cxl/pmem: Add "Freeze Security State" security command support
>       tools/testing/cxl: Add "Freeze Security State" security opcode support
>       x86: add an arch helper function to invalidate all cache for nvdimm
>       cxl/pmem: Add "Unlock" security command support
>       tools/testing/cxl: Add "Unlock" security opcode support
>       cxl/pmem: Add "Passphrase Secure Erase" security command support
>       tools/testing/cxl: Add "passphrase secure erase" opcode support
>       nvdimm/cxl/pmem: Add support for master passphrase disable security command
> 
> 
>  arch/x86/Kconfig             |   1 +
>  arch/x86/mm/pat/set_memory.c |   8 +
>  drivers/acpi/nfit/intel.c    |  28 +--
>  drivers/cxl/Kconfig          |  16 ++
>  drivers/cxl/Makefile         |   1 +
>  drivers/cxl/cxlmem.h         |  41 +++++
>  drivers/cxl/pmem.c           |  10 +-
>  drivers/cxl/security.c       | 182 ++++++++++++++++++
>  drivers/nvdimm/security.c    |  33 +++-
>  include/linux/libnvdimm.h    |  10 +
>  lib/Kconfig                  |   3 +
>  tools/testing/cxl/Kbuild     |   1 +
>  tools/testing/cxl/test/mem.c | 348 ++++++++++++++++++++++++++++++++++-
>  13 files changed, 644 insertions(+), 38 deletions(-)
>  create mode 100644 drivers/cxl/security.c
> 
> --
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 05/15] tools/testing/cxl: Add "Set Passphrase" opcode support
  2022-07-15 21:09 ` [PATCH RFC 05/15] tools/testing/cxl: Add "Set Passphrase" opcode support Dave Jiang
@ 2022-08-03 17:15   ` Jonathan Cameron
  0 siblings, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 17:15 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:09:01 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Add support to emulate a CXL mem device supporting the "Set Passphrase"
> operation. The operation supports setting of either a user or a master
> passphrase.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Hi Dave,

A few comments inline.

Thanks,

Jonathan

> ---
>  tools/testing/cxl/test/mem.c |   76 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 76 insertions(+)
> 
> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
> index 337e5a099d31..796f4f7b5e3d 100644
> --- a/tools/testing/cxl/test/mem.c
> +++ b/tools/testing/cxl/test/mem.c
> @@ -12,8 +12,14 @@
>  struct mock_mdev_data {
>  	void *lsa;
>  	u32 security_state;
> +	u8 user_pass[NVDIMM_PASSPHRASE_LEN];
> +	u8 master_pass[NVDIMM_PASSPHRASE_LEN];
> +	int user_limit;
> +	int master_limit;
>  };
>  
> +#define PASS_TRY_LIMIT 3
> +
>  #define LSA_SIZE SZ_128K
>  #define EFFECT(x) (1U << x)
>  
> @@ -162,6 +168,73 @@ static int mock_get_security_state(struct cxl_dev_state *cxlds,
>  	return 0;
>  }
>  
> +static int mock_set_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
> +{
> +	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
> +	struct cxl_set_pass *set_pass;
> +
> +	if (cmd->size_in != sizeof(*set_pass)) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;

If it makes sense to set a return code at all, I think this should be invalid payload length.

> +		return -EINVAL;
> +	}
> +
> +	if (cmd->size_out != 0) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;

As before, I'm not 100% sure this is actually an error from the device's
point of view (it fills the buffer regardless). Obviously it's an error in
the software, so returning -EINVAL makes sense.


> +		return -EINVAL;
> +	}
> +
> +	if (mdata->security_state & CXL_PMEM_SEC_STATE_FROZEN) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +		return -ENXIO;
> +	}
> +
> +	set_pass = cmd->payload_in;
> +	switch (set_pass->type) {
> +	case CXL_PMEM_SEC_PASS_MASTER:
> +		if (mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PLIMIT) {
> +			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +			return -ENXIO;
> +		}
> +		/*
> +		 * CXL spec v2.0 8.2.9.5.6.2, The master passphrase shall only be set in
> +		 * the security disabled state when the user passphrase is not set.
> +		 */
> +		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET) {
> +			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +			return -ENXIO;
> +		}
> +		if (mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PASS_SET &&
> +		    memcmp(mdata->master_pass, set_pass->old_pass, NVDIMM_PASSPHRASE_LEN)) {
> +			if (++mdata->master_limit == PASS_TRY_LIMIT)
> +				mdata->security_state |= CXL_PMEM_SEC_STATE_MASTER_PLIMIT;
> +			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
> +			return -ENXIO;
> +		}
> +		memcpy(mdata->master_pass, set_pass->new_pass, NVDIMM_PASSPHRASE_LEN);
> +		break;
> +
> +	case CXL_PMEM_SEC_PASS_USER:
> +		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PLIMIT) {
> +			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +			return -ENXIO;
> +		}
> +		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET &&
> +		    memcmp(mdata->user_pass, set_pass->old_pass, NVDIMM_PASSPHRASE_LEN)) {
> +			if (++mdata->user_limit == PASS_TRY_LIMIT)
> +				mdata->security_state |= CXL_PMEM_SEC_STATE_USER_PLIMIT;
> +			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
> +			return -ENXIO;
> +		}
> +		memcpy(mdata->user_pass, set_pass->new_pass, NVDIMM_PASSPHRASE_LEN);
> +		break;
> +
> +	default:
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
> +		return -EINVAL;
> +	}

I would return directly rather than break; above, as it reduces the code
someone following either case needs to look at. Plus it saves a whole 1
line of code ;)
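
i.e. roughly the following, with the existing checks in each case left as
they are (only the tail of each case shown):

	case CXL_PMEM_SEC_PASS_MASTER:
		/* ... existing master passphrase checks ... */
		memcpy(mdata->master_pass, set_pass->new_pass, NVDIMM_PASSPHRASE_LEN);
		return 0;

	case CXL_PMEM_SEC_PASS_USER:
		/* ... existing user passphrase checks ... */
		memcpy(mdata->user_pass, set_pass->new_pass, NVDIMM_PASSPHRASE_LEN);
		return 0;

	default:
		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
		return -EINVAL;
	}
}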

> +	return 0;
> +}
> +
>  static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  {
>  	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
> @@ -257,6 +330,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
>  	case CXL_MBOX_OP_GET_SECURITY_STATE:
>  		rc = mock_get_security_state(cxlds, cmd);
>  		break;
> +	case CXL_MBOX_OP_SET_PASSPHRASE:
> +		rc = mock_set_passphrase(cxlds, cmd);
> +		break;
>  	default:
>  		break;
>  	}
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 06/15] cxl/pmem: Add Disable Passphrase security command support
  2022-07-15 21:09 ` [PATCH RFC 06/15] cxl/pmem: Add Disable Passphrase security command support Dave Jiang
@ 2022-08-03 17:21   ` Jonathan Cameron
  0 siblings, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 17:21 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:09:07 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Create callback function to support the nvdimm_security_ops ->disable()
> callback. Translate the operation to send "Disable Passphrase" security
> command for CXL memory device. The operation supports disabling a
> passphrase for the CXL persistent memory device. In the original
> implementation of nvdimm_security_ops, this operation only supports
> disabling of the user passphrase. This is because the NFIT version of
> disable passphrase only supported disabling the user passphrase. The CXL
> spec allows disabling of the master passphrase as well which
> nvidmm_security_ops does not support yet. In this commit, the callback
nvdimm...
> function will only support user passphrase.
> 
> See CXL 2.0 spec section 8.2.9.5.6.3 for reference.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Trivial comment inline otherwise lgtm

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/cxl/cxlmem.h   |    8 ++++++++
>  drivers/cxl/security.c |   30 ++++++++++++++++++++++++++++++
>  2 files changed, 38 insertions(+)
> 
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 1e76d22f4fd2..70a1eb7720d3 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -252,6 +252,7 @@ enum cxl_opcode {
>  	CXL_MBOX_OP_GET_SCAN_MEDIA	= 0x4305,
>  	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
>  	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
> +	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
>  	CXL_MBOX_OP_MAX			= 0x10000
>  };
>  
> @@ -359,6 +360,13 @@ struct cxl_set_pass {
>  	u8 new_pass[NVDIMM_PASSPHRASE_LEN];
>  } __packed;
>  
> +/* disable passphrase input payload */
> +struct cxl_disable_pass {
> +	u8 type;
> +	u8 reserved[31];
> +	u8 pass[NVDIMM_PASSPHRASE_LEN];
> +} __packed;
> +
>  enum {
>  	CXL_PMEM_SEC_PASS_MASTER = 0,
>  	CXL_PMEM_SEC_PASS_USER,
> diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
> index 76ec5087f966..4aec8e41e167 100644
> --- a/drivers/cxl/security.c
> +++ b/drivers/cxl/security.c
> @@ -76,9 +76,39 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
>  	return rc;
>  }
>  
> +static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
> +				     const struct nvdimm_key_data *key_data)
> +{
> +	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> +	struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +	struct cxl_disable_pass *dis_pass;
> +	int rc;
> +
> +	dis_pass = kzalloc(sizeof(*dis_pass), GFP_KERNEL);

Another fairly small structure. Maybe just put it on the stack...

> +	if (!dis_pass)
> +		return -ENOMEM;
> +
> +	/*
> +	 * While the CXL spec defines the ability to erase the master passphrase,
> +	 * the original nvdimm security ops does not provide that capability.
> +	 * In order to preserve backward compatibility, this callback will
> +	 * only support disable of user passphrase. The disable master passphrase
> +	 * ability will need to be added as a new callback.

Curious. Why is that callback set in stone? If this is exposed directly to userspace
perhaps call that out here.


> +	 */
> +	dis_pass->type = CXL_PMEM_SEC_PASS_USER;
> +	memcpy(dis_pass->pass, key_data->data, NVDIMM_PASSPHRASE_LEN);
> +
> +	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_DISABLE_PASSPHRASE,
> +			       dis_pass, sizeof(*dis_pass), NULL, 0);
> +	kfree(dis_pass);
> +	return rc;
> +}
> +
>  static const struct nvdimm_security_ops __cxl_security_ops = {
>  	.get_flags = cxl_pmem_get_security_flags,
>  	.change_key = cxl_pmem_security_change_key,
> +	.disable = cxl_pmem_security_disable,
>  };
>  
>  const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 07/15] tools/testing/cxl: Add "Disable" security opcode support
  2022-07-15 21:09 ` [PATCH RFC 07/15] tools/testing/cxl: Add "Disable" security opcode support Dave Jiang
@ 2022-08-03 17:23   ` Jonathan Cameron
  0 siblings, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 17:23 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:09:12 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Add support to emulate a CXL mem device supporting the "Disable Passphrase"
> operation. The operation supports disabling of either a user or a master
> passphrase. The emulation will provide support for both user and master
> passphrase.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Similar comments as for earlier test patches.

Thanks,

Jonathan

> ---
>  tools/testing/cxl/test/mem.c |   80 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 80 insertions(+)
> 
> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
> index 796f4f7b5e3d..5f87a94d92ae 100644
> --- a/tools/testing/cxl/test/mem.c
> +++ b/tools/testing/cxl/test/mem.c
> @@ -235,6 +235,83 @@ static int mock_set_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd
>  	return 0;
>  }
>  
> +static int mock_disable_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
> +{
> +	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
> +	struct cxl_disable_pass *dis_pass;
> +
> +	if (cmd->size_in != sizeof(*dis_pass)) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;

Same as in earlier patches. I think the return code is wrong, and I'm not
seeing why it's useful to set it.

> +		return -EINVAL;
> +	}
> +
> +	if (cmd->size_out != 0) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
> +		return -EINVAL;
> +	}
> +
> +	if (mdata->security_state & CXL_PMEM_SEC_STATE_FROZEN) {
> +		cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +		return -ENXIO;
> +	}
> +
> +	dis_pass = cmd->payload_in;
> +	switch (dis_pass->type) {
> +	case CXL_PMEM_SEC_PASS_MASTER:
> +		if (mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PLIMIT) {
> +			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +			return -ENXIO;
> +		}
> +
> +		if (!(mdata->security_state & CXL_PMEM_SEC_STATE_MASTER_PASS_SET)) {
> +			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +			return -ENXIO;
> +		}
> +
> +		if (memcmp(dis_pass->pass, mdata->master_pass, NVDIMM_PASSPHRASE_LEN)) {
> +			if (++mdata->master_limit == PASS_TRY_LIMIT)
> +				mdata->security_state |= CXL_PMEM_SEC_STATE_MASTER_PLIMIT;
> +			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
> +			return -ENXIO;
> +		}
> +
> +		mdata->master_limit = 0;
> +		memset(mdata->master_pass, 0, NVDIMM_PASSPHRASE_LEN);
> +		mdata->security_state &= ~CXL_PMEM_SEC_STATE_MASTER_PASS_SET;
> +		break;
> +
> +	case CXL_PMEM_SEC_PASS_USER:
> +		if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PLIMIT) {
> +			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +			return -ENXIO;
> +		}
> +
> +		if (!(mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET)) {
> +			cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
> +			return -ENXIO;
> +		}
> +
> +		if (memcmp(dis_pass->pass, mdata->user_pass, NVDIMM_PASSPHRASE_LEN)) {
> +			if (++mdata->user_limit == PASS_TRY_LIMIT)
> +				mdata->security_state |= CXL_PMEM_SEC_STATE_USER_PLIMIT;
> +			cmd->return_code = CXL_MBOX_CMD_RC_PASSPHRASE;
> +			return -ENXIO;
> +		}
> +
> +		mdata->user_limit = 0;
> +		memset(mdata->user_pass, 0, NVDIMM_PASSPHRASE_LEN);
> +		mdata->security_state &= ~(CXL_PMEM_SEC_STATE_USER_PASS_SET |
> +					   CXL_PMEM_SEC_STATE_LOCKED);
> +		break;
Similar comment to before. I'd return 0 here and in the case above to
slightly improve readability.

> +
> +	default:
> +		cmd->return_code = CXL_MBOX_CMD_RC_INPUT;
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
>  static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>  {
>  	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
> @@ -333,6 +410,9 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
>  	case CXL_MBOX_OP_SET_PASSPHRASE:
>  		rc = mock_set_passphrase(cxlds, cmd);
>  		break;
> +	case CXL_MBOX_OP_DISABLE_PASSPHRASE:
> +		rc = mock_disable_passphrase(cxlds, cmd);
> +		break;
>  	default:
>  		break;
>  	}
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 08/15] cxl/pmem: Add "Freeze Security State" security command support
  2022-07-15 21:09 ` [PATCH RFC 08/15] cxl/pmem: Add "Freeze Security State" security command support Dave Jiang
@ 2022-08-03 17:23   ` Jonathan Cameron
  0 siblings, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 17:23 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:09:18 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Create callback function to support the nvdimm_security_ops() ->freeze()
> callback. Translate the operation to send "Freeze Security State" security
> command for CXL memory device.
> 
> See CXL 2.0 spec section 8.2.9.5.6.5 for reference.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
>  drivers/cxl/cxlmem.h   |    1 +
>  drivers/cxl/security.c |   10 ++++++++++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 70a1eb7720d3..ced85be291f3 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -253,6 +253,7 @@ enum cxl_opcode {
>  	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
>  	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
>  	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
> +	CXL_MBOX_OP_FREEZE_SECURITY	= 0x4504,
>  	CXL_MBOX_OP_MAX			= 0x10000
>  };
>  
> diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
> index 4aec8e41e167..6399266a5908 100644
> --- a/drivers/cxl/security.c
> +++ b/drivers/cxl/security.c
> @@ -105,10 +105,20 @@ static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
>  	return rc;
>  }
>  
> +static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
> +{
> +	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> +	struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +
> +	return cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_FREEZE_SECURITY, NULL, 0, NULL, 0);
> +}
> +
>  static const struct nvdimm_security_ops __cxl_security_ops = {
>  	.get_flags = cxl_pmem_get_security_flags,
>  	.change_key = cxl_pmem_security_change_key,
>  	.disable = cxl_pmem_security_disable,
> +	.freeze = cxl_pmem_security_freeze,
>  };
>  
>  const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-07-19 19:07     ` Dave Jiang
@ 2022-08-03 17:37         ` Jonathan Cameron
  0 siblings, 0 replies; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-03 17:37 UTC (permalink / raw)
  To: Dave Jiang
  Cc: Davidlohr Bueso, linux-cxl, nvdimm, dan.j.williams, bwidawsk,
	ira.weiny, vishal.l.verma, alison.schofield, a.manzanares,
	linux-arch, Arnd Bergmann, linux-arm-kernel

On Tue, 19 Jul 2022 12:07:03 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
> > On Fri, 15 Jul 2022, Dave Jiang wrote:
> >  
> >> The original implementation to flush all cache after unlocking the nvdimm
> >> resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
> >> nvdimm with security operations arrives on other archs. With CXL pmem
> >> supporting security operations, specifically "unlock" of the dimm, the
> >> need for an arch supported helper function to invalidate all CPU caches
> >> for nvdimm has arrived. Remove the original implementation from acpi/nfit
> >> and add cross arch support for this operation.
> >>
> >> Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to opt in
> >> and provide the support via a wbinvd_on_all_cpus() call.
> >
> > So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
> > its own semantics (iirc there was a flush all call in the past). Cc'ing
> > Jonathan as well.
> >
> > Anyway, I think this call should not be defined in any place other than
> > core kernel headers, and not in pat/nvdimm. I was trying to make it fit
> > in smp.h, for example, but conveniently we might be able to hijack
> > flush_cache_all() for our purposes as of course neither x86-64 nor arm64
> > uses it :)
> >
> > And I see this as safe (wrt not adding a big hammer on unaware drivers)
> > as the 32bit archs that define the call are mostly contained within
> > their arch/, and the few in drivers/ are still specific to those archs.
> >
> > Maybe something like the below.
> 
> Ok. I'll replace my version with yours.

Careful with flush_cache_all(). The stub version in 
include/asm-generic/cacheflush.h has a comment above it that would
need updating at the very least (I think).
Note there 'was' a flush_cache_all() for ARM64, but:
https://patchwork.kernel.org/project/linux-arm-kernel/patch/1429521875-16893-1-git-send-email-mark.rutland@arm.com/

Also, I'm far from sure it will be the right choice on all CXL supporting
architectures.
+CC linux-arch, linux-arm and Arnd.
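
For reference, the generic fallback is roughly the no-op below (paraphrased
from memory, so check the actual header); redefining it on x86 as a global
wbinvd quietly changes what existing callers of flush_cache_all() may
assume:

/* include/asm-generic/cacheflush.h, approximately */
#ifndef flush_cache_all
static inline void flush_cache_all(void)
{
	/*
	 * No-op on architectures where the caches don't need flushing
	 * here for correctness; an arch can override via a #define.
	 */
}
#endif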

> 
> 
> >
> > Thanks,
> > Davidlohr
> >
> > ------8<----------------------------------------
> > Subject: [PATCH] arch/x86: define flush_cache_all as global wbinvd
> >
> > With CXL security features, global CPU cache flushing nvdimm
> > requirements are no longer specific to that subsystem, even
> > beyond the scope of security_ops. CXL will need such semantics
> > for features not necessarily limited to persistent memory.
> >
> > So use the flush_cache_all() for the wbinvd across all
> > CPUs on x86. arm64, which is another platform to have CXL
> > support can also define its own semantics here.
> >
> > Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
> > ---
> >  arch/x86/Kconfig                  |  1 -
> >  arch/x86/include/asm/cacheflush.h |  5 +++++
> >  arch/x86/mm/pat/set_memory.c      |  8 --------
> >  drivers/acpi/nfit/intel.c         | 11 ++++++-----
> >  drivers/cxl/security.c            |  5 +++--
> >  include/linux/libnvdimm.h         |  9 ---------
> >  6 files changed, 14 insertions(+), 25 deletions(-)
> >
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index 8dbe89eba639..be0b95e51df6 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -83,7 +83,6 @@ config X86
> >     select ARCH_HAS_MEMBARRIER_SYNC_CORE
> >     select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
> >     select ARCH_HAS_PMEM_API        if X86_64
> > -    select ARCH_HAS_NVDIMM_INVAL_CACHE    if X86_64
> >     select ARCH_HAS_PTE_DEVMAP        if X86_64
> >     select ARCH_HAS_PTE_SPECIAL
> >     select ARCH_HAS_UACCESS_FLUSHCACHE    if X86_64
> > diff --git a/arch/x86/include/asm/cacheflush.h 
> > b/arch/x86/include/asm/cacheflush.h
> > index b192d917a6d0..05c79021665d 100644
> > --- a/arch/x86/include/asm/cacheflush.h
> > +++ b/arch/x86/include/asm/cacheflush.h
> > @@ -10,4 +10,9 @@
> >
> >  void clflush_cache_range(void *addr, unsigned int size);
> >
> > +#define flush_cache_all()        \
> > +do {                    \
> > +    wbinvd_on_all_cpus();        \
> > +} while (0)
> > +
> >  #endif /* _ASM_X86_CACHEFLUSH_H */
> > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > index e4cd1286deef..1abd5438f126 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -330,14 +330,6 @@ void arch_invalidate_pmem(void *addr, size_t size)
> >  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
> >  #endif
> >
> > -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
> > -void arch_invalidate_nvdimm_cache(void)
> > -{
> > -    wbinvd_on_all_cpus();
> > -}
> > -EXPORT_SYMBOL_GPL(arch_invalidate_nvdimm_cache);
> > -#endif
> > -
> >  static void __cpa_flush_all(void *arg)
> >  {
> >     unsigned long cache = (unsigned long)arg;
> > diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
> > index 242d2e9203e9..1b0ecb4d67e6 100644
> > --- a/drivers/acpi/nfit/intel.c
> > +++ b/drivers/acpi/nfit/intel.c
> > @@ -1,6 +1,7 @@
> >  // SPDX-License-Identifier: GPL-2.0
> >  /* Copyright(c) 2018 Intel Corporation. All rights reserved. */
> >  #include <linux/libnvdimm.h>
> > +#include <linux/cacheflush.h>
> >  #include <linux/ndctl.h>
> >  #include <linux/acpi.h>
> >  #include <asm/smp.h>
> > @@ -226,7 +227,7 @@ static int __maybe_unused 
> > intel_security_unlock(struct nvdimm *nvdimm,
> >     }
> >
> >     /* DIMM unlocked, invalidate all CPU caches before we read it */
> > -    arch_invalidate_nvdimm_cache();
> > +    flush_cache_all();
> >
> >     return 0;
> >  }
> > @@ -296,7 +297,7 @@ static int __maybe_unused 
> > intel_security_erase(struct nvdimm *nvdimm,
> >         return -ENOTTY;
> >
> >     /* flush all cache before we erase DIMM */
> > -    arch_invalidate_nvdimm_cache();
> > +    flush_cache_all();
> >     memcpy(nd_cmd.cmd.passphrase, key->data,
> >             sizeof(nd_cmd.cmd.passphrase));
> >     rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
> > @@ -316,7 +317,7 @@ static int __maybe_unused 
> > intel_security_erase(struct nvdimm *nvdimm,
> >     }
> >
> >     /* DIMM erased, invalidate all CPU caches before we read it */
> > -    arch_invalidate_nvdimm_cache();
> > +    flush_cache_all();
> >     return 0;
> >  }
> >
> > @@ -353,7 +354,7 @@ static int __maybe_unused 
> > intel_security_query_overwrite(struct nvdimm *nvdimm)
> >     }
> >
> >     /* flush all cache before we make the nvdimms available */
> > -    arch_invalidate_nvdimm_cache();
> > +    flush_cache_all();
> >     return 0;
> >  }
> >
> > @@ -379,7 +380,7 @@ static int __maybe_unused 
> > intel_security_overwrite(struct nvdimm *nvdimm,
> >         return -ENOTTY;
> >
> >     /* flush all cache before we erase DIMM */
> > -    arch_invalidate_nvdimm_cache();
> > +    flush_cache_all();
> >     memcpy(nd_cmd.cmd.passphrase, nkey->data,
> >             sizeof(nd_cmd.cmd.passphrase));
> >     rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
> > diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
> > index 3dc04b50afaf..e2977872bf2f 100644
> > --- a/drivers/cxl/security.c
> > +++ b/drivers/cxl/security.c
> > @@ -6,6 +6,7 @@
> >  #include <linux/ndctl.h>
> >  #include <linux/async.h>
> >  #include <linux/slab.h>
> > +#include <linux/cacheflush.h>
> >  #include "cxlmem.h"
> >  #include "cxl.h"
> >
> > @@ -137,7 +138,7 @@ static int cxl_pmem_security_unlock(struct nvdimm 
> > *nvdimm,
> >         return rc;
> >
> >     /* DIMM unlocked, invalidate all CPU caches before we read it */
> > -    arch_invalidate_nvdimm_cache();
> > +    flush_cache_all();
> >     return 0;
> >  }
> >
> > @@ -165,7 +166,7 @@ static int 
> > cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
> >         return rc;
> >
> >     /* DIMM erased, invalidate all CPU caches before we read it */
> > -    arch_invalidate_nvdimm_cache();
> > +    flush_cache_all();
> >     return 0;
> >  }
> >
> > diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
> > index 07e4e7572089..0769afb73380 100644
> > --- a/include/linux/libnvdimm.h
> > +++ b/include/linux/libnvdimm.h
> > @@ -309,13 +309,4 @@ static inline void arch_invalidate_pmem(void 
> > *addr, size_t size)
> >  {
> >  }
> >  #endif
> > -
> > -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
> > -void arch_invalidate_nvdimm_cache(void);
> > -#else
> > -static inline void arch_invalidate_nvdimm_cache(void)
> > -{
> > -}
> > -#endif
> > -
> >  #endif /* __LIBNVDIMM_H__ */
> > -- 
> > 2.36.1
> >  
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 11/15] cxl/pmem: Add "Unlock" security command support
  2022-07-15 21:09 ` [PATCH RFC 11/15] cxl/pmem: Add "Unlock" security command support Dave Jiang
@ 2022-08-04 13:19   ` Jonathan Cameron
  2022-08-09 22:31     ` Dave Jiang
  0 siblings, 1 reply; 79+ messages in thread
From: Jonathan Cameron @ 2022-08-04 13:19 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Fri, 15 Jul 2022 14:09:36 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Create callback function to support the nvdimm_security_ops() ->unlock()
> callback. Translate the operation to send "Unlock" security command for CXL
> mem device.
> 
> When the mem device is unlocked, arch_invalidate_nvdimm_cache() is called
> in order to invalidate all CPU caches before attempting to access the mem
> device.
> 
> See CXL 2.0 spec section 8.2.9.5.6.4 for reference.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>

Hi Dave,

One trivial thing inline.

Thanks,

Jonathan

> ---
>  drivers/cxl/cxlmem.h   |    1 +
>  drivers/cxl/security.c |   21 +++++++++++++++++++++
>  2 files changed, 22 insertions(+)
> 
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index ced85be291f3..ae8ccd484491 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -253,6 +253,7 @@ enum cxl_opcode {
>  	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
>  	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
>  	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
> +	CXL_MBOX_OP_UNLOCK		= 0x4503,
>  	CXL_MBOX_OP_FREEZE_SECURITY	= 0x4504,
>  	CXL_MBOX_OP_MAX			= 0x10000
>  };
> diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
> index 6399266a5908..d15520f280f0 100644
> --- a/drivers/cxl/security.c
> +++ b/drivers/cxl/security.c
> @@ -114,11 +114,32 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
>  	return cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_FREEZE_SECURITY, NULL, 0, NULL, 0);
>  }
>  
> +static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
> +				    const struct nvdimm_key_data *key_data)
> +{
> +	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> +	struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +	u8 pass[NVDIMM_PASSPHRASE_LEN];
> +	int rc;
> +
> +	memcpy(pass, key_data->data, NVDIMM_PASSPHRASE_LEN);

Why do we need a local copy?  I'd have thought we could just pass
key_data->data in as the payload for cxl_mbox_send_cmd().
There might be some value in making it easier to check by
having a structure defined for this payload (obviously trivial),
but given we are using an array of length defined by a non-CXL
define, I'm not sure there is any point in the copy.
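
For illustration, without the copy it would look something like this
(untested; the cast is needed because key_data is const, which may well be
the reason for the local copy in the first place):

static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
				    const struct nvdimm_key_data *key_data)
{
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_dev_state *cxlds = cxl_nvd->cxlmd->cxlds;
	int rc;

	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_UNLOCK,
			       (void *)key_data->data, NVDIMM_PASSPHRASE_LEN,
			       NULL, 0);
	if (rc < 0)
		return rc;

	/* DIMM unlocked, invalidate all CPU caches before we read it */
	arch_invalidate_nvdimm_cache();

	return 0;
}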

> +	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_UNLOCK,
> +			       pass, NVDIMM_PASSPHRASE_LEN, NULL, 0);
> +	if (rc < 0)
> +		return rc;
> +
> +	/* DIMM unlocked, invalidate all CPU caches before we read it */
> +	arch_invalidate_nvdimm_cache();
> +	return 0;
> +}
> +
>  static const struct nvdimm_security_ops __cxl_security_ops = {
>  	.get_flags = cxl_pmem_get_security_flags,
>  	.change_key = cxl_pmem_security_change_key,
>  	.disable = cxl_pmem_security_disable,
>  	.freeze = cxl_pmem_security_freeze,
> +	.unlock = cxl_pmem_security_unlock,
>  };
>  
>  const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;
> 
> 


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 00/15] Introduce security commands for CXL pmem device
  2022-08-03 17:03 ` Jonathan Cameron
@ 2022-08-08 22:18   ` Dave Jiang
  0 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-08-08 22:18 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave


On 8/3/2022 10:03 AM, Jonathan Cameron wrote:
> On Fri, 15 Jul 2022 14:08:32 -0700
> Dave Jiang <dave.jiang@intel.com> wrote:
>
>> This series is seeking comments on the implementation. It has not been fully
>> tested yet.
>>
>> This series adds the support for "Persistent Memory Data-at-rest Security"
>> block of command set for the CXL Memory Devices. The enabling is done through
>> the nvdimm_security_ops as the operations are very similar to the same
>> operations that the persistent memory devices through NFIT provider support.
>> This enabling does not include the security pass-through commands nor the
>> Santize commands.
>>
>> Under the nvdimm_security_ops, this patch series will enable get_flags(),
>> freeze(), change_key(), unlock(), disable(), and erase(). The disable() API
>> does not support disabling of the master passphrase. To maintain established
>> user ABI through the sysfs attribute "security", the "disable" command is
>> left untouched and a new "disable_master" command is introduced with a new
>> disable_master() API call for the nvdimm_security_ops().
>>
>> This series does not include plumbing to directly handle the security commands
>> through cxl control util. The enabled security commands will still go through
>> ndctl tool with this enabling.
>>
>> For calls such as unlock() and erase(), the CPU caches must be invalidated
>> post operation. Currently, the implementation resides in
>> drivers/acpi/nfit/intel.c with a comment that it should be implemented
>> cross arch when more than just NFIT based device needs this operation.
>> With the coming of CXL persistent memory devices this is now needed.
>> Introduce ARCH_HAS_NVDIMM_INVAL_CACHE and implement similar to
>> ARCH_HAS_PMEM_API where the arch can opt in with implementation.
>> Currently only add x86_64 implementation where wbinvd_on_all_cpus()
>> is called.
>>
> Hi Dave,
>
> Just curious.  What was reasoning behind this being a RFC?
> What do you particular want comments on?

Hi Jonathan. Thanks for reviewing the patches. When I posted the series,
I hadn't tested the code yet. I just wanted to make sure there are no
objections to the direction of this enabling, i.e. reusing the nvdimm
security ops. Once I address Davidlohr's and your comments and get it
fully tested, I'll release v2 w/o RFC.


>
> Thanks,
>
> Jonathan
>
>> ---
>>
>> Dave Jiang (15):
>>        cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation
>>        tools/testing/cxl: Create context for cxl mock device
>>        tools/testing/cxl: Add "Get Security State" opcode support
>>        cxl/pmem: Add "Set Passphrase" security command support
>>        tools/testing/cxl: Add "Set Passphrase" opcode support
>>        cxl/pmem: Add Disable Passphrase security command support
>>        tools/testing/cxl: Add "Disable" security opcode support
>>        cxl/pmem: Add "Freeze Security State" security command support
>>        tools/testing/cxl: Add "Freeze Security State" security opcode support
>>        x86: add an arch helper function to invalidate all cache for nvdimm
>>        cxl/pmem: Add "Unlock" security command support
>>        tools/testing/cxl: Add "Unlock" security opcode support
>>        cxl/pmem: Add "Passphrase Secure Erase" security command support
>>        tools/testing/cxl: Add "passphrase secure erase" opcode support
>>        nvdimm/cxl/pmem: Add support for master passphrase disable security command
>>
>>
>>   arch/x86/Kconfig             |   1 +
>>   arch/x86/mm/pat/set_memory.c |   8 +
>>   drivers/acpi/nfit/intel.c    |  28 +--
>>   drivers/cxl/Kconfig          |  16 ++
>>   drivers/cxl/Makefile         |   1 +
>>   drivers/cxl/cxlmem.h         |  41 +++++
>>   drivers/cxl/pmem.c           |  10 +-
>>   drivers/cxl/security.c       | 182 ++++++++++++++++++
>>   drivers/nvdimm/security.c    |  33 +++-
>>   include/linux/libnvdimm.h    |  10 +
>>   lib/Kconfig                  |   3 +
>>   tools/testing/cxl/Kbuild     |   1 +
>>   tools/testing/cxl/test/mem.c | 348 ++++++++++++++++++++++++++++++++++-
>>   13 files changed, 644 insertions(+), 38 deletions(-)
>>   create mode 100644 drivers/cxl/security.c
>>
>> --
>>
>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device
  2022-08-03 16:36   ` [PATCH RFC 02/15] " Jonathan Cameron
@ 2022-08-09 20:30     ` Dave Jiang
  0 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-08-09 20:30 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave


On 8/3/2022 9:36 AM, Jonathan Cameron wrote:
> On Fri, 15 Jul 2022 14:08:44 -0700
> Dave Jiang <dave.jiang@intel.com> wrote:
>
>> Add context struct for mock device and move lsa under the context. This
>> allows additional information such as security status and other persistent
>> security data such as passphrase to be added for the emulated test device.
>>
>> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
>> ---
>>   tools/testing/cxl/test/mem.c |   29 +++++++++++++++++++++++------
>>   1 file changed, 23 insertions(+), 6 deletions(-)
>>
>> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
>> index 6b9239b2afd4..723378248321 100644
>> --- a/tools/testing/cxl/test/mem.c
>> +++ b/tools/testing/cxl/test/mem.c
>> @@ -9,6 +9,10 @@
>>   #include <linux/bits.h>
>>   #include <cxlmem.h>
>>   
>> +struct mock_mdev_data {
>> +	void *lsa;
>> +};
>> +
>>   #define LSA_SIZE SZ_128K
>>   #define EFFECT(x) (1U << x)
>>   
>> @@ -140,7 +144,8 @@ static int mock_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>>   static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>>   {
>>   	struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
>> -	void *lsa = dev_get_drvdata(cxlds->dev);
>> +	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
>> +	void *lsa = mdata->lsa;
>>   	u32 offset, length;
>>   
>>   	if (sizeof(*get_lsa) > cmd->size_in)
>> @@ -159,7 +164,8 @@ static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>>   static int mock_set_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
>>   {
>>   	struct cxl_mbox_set_lsa *set_lsa = cmd->payload_in;
>> -	void *lsa = dev_get_drvdata(cxlds->dev);
>> +	struct mock_mdev_data *mdata = dev_get_drvdata(cxlds->dev);
>> +	void *lsa = mdata->lsa;
>>   	u32 offset, length;
>>   
>>   	if (sizeof(*set_lsa) > cmd->size_in)
>> @@ -237,9 +243,12 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
>>   	return rc;
>>   }
>>   
>> -static void label_area_release(void *lsa)
>> +static void cxl_mock_drvdata_release(void *data)
>>   {
>> -	vfree(lsa);
>> +	struct mock_mdev_data *mdata = data;
>> +
>> +	vfree(mdata->lsa);
>> +	vfree(mdata);
>>   }
>>   
>>   static int cxl_mock_mem_probe(struct platform_device *pdev)
>> @@ -247,13 +256,21 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
>>   	struct device *dev = &pdev->dev;
>>   	struct cxl_memdev *cxlmd;
>>   	struct cxl_dev_state *cxlds;
>> +	struct mock_mdev_data *mdata;
>>   	void *lsa;
>>   	int rc;
>>   
>> +	mdata = vmalloc(sizeof(*mdata));
> It's tiny so why vmalloc?  I guess that might become apparent later.
> devm_kzalloc() should be fine and lead to simpler error handling.
In my testing I realized that this needs to be part of the platform 
data so that the contents remain "persistent" even when the driver is 
unloaded. So this allocation has moved to cxl_test_init() and is managed 
via platform_device_add_data(). That function makes a copy of the 
passed-in data rather than taking ownership of it, so the copy is tied 
to the platform device lifetime.
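
Something along these lines, purely as a sketch (the struct layout, field 
names, and helper name below are invented for illustration and need 
<linux/platform_device.h>; they are not what v2 will actually look like):

	struct cxl_mock_mem_pdata {
		u32 security_state;
		u8 user_pass[32];
		u8 master_pass[32];
	};

	/* in cxl_test_init(): create the device and hand it the data */
	static int cxl_mock_mem_add(void)
	{
		struct cxl_mock_mem_pdata pdata = { 0 };
		struct platform_device *pdev;
		int rc;

		pdev = platform_device_alloc("cxl_mem", 0);
		if (!pdev)
			return -ENOMEM;

		/* the platform core keeps its own copy of pdata for the
		 * lifetime of pdev, so it survives driver unbind/rebind */
		rc = platform_device_add_data(pdev, &pdata, sizeof(pdata));
		if (!rc)
			rc = platform_device_add(pdev);
		if (rc)
			platform_device_put(pdev);
		return rc;
	}

	/* in cxl_mock_mem_probe(), the copy is then reachable via
	 * dev_get_platdata(&pdev->dev) */
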
>
>> +	if (!mdata)
>> +		return -ENOMEM;
>> +
>>   	lsa = vmalloc(LSA_SIZE);
>> -	if (!lsa)
>> +	if (!lsa) {
>> +		vfree(mdata);
> In general doing this just makes things fragile in the long term. Better to
> register one devm_add_action_or_reset() for each thing set up (or standard
> allcoation).
>
>>   		return -ENOMEM;
>> -	rc = devm_add_action_or_reset(dev, label_area_release, lsa);
>> +	}
>> +
>> +	rc = devm_add_action_or_reset(dev, cxl_mock_drvdata_release, mdata);
>>   	if (rc)
>>   		return rc;
>>   	dev_set_drvdata(dev, lsa);
>>
>>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-03 17:37         ` Jonathan Cameron
@ 2022-08-09 21:47           ` Dave Jiang
  -1 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-08-09 21:47 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Davidlohr Bueso, linux-cxl, nvdimm, dan.j.williams, bwidawsk,
	ira.weiny, vishal.l.verma, alison.schofield, a.manzanares,
	linux-arch, Arnd Bergmann, linux-arm-kernel


On 8/3/2022 10:37 AM, Jonathan Cameron wrote:
> On Tue, 19 Jul 2022 12:07:03 -0700
> Dave Jiang <dave.jiang@intel.com> wrote:
>
>> On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
>>> On Fri, 15 Jul 2022, Dave Jiang wrote:
>>>   
>>>> The original implementation to flush all cache after unlocking the
>>>> nvdimm
>>>> resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
>>>> nvdimm with security operations arrives on other archs. With support CXL
>>>> pmem supporting security operations, specifically "unlock" dimm, the
>>>> need
>>>> for an arch supported helper function to invalidate all CPU cache for
>>>> nvdimm has arrived. Remove original implementation from acpi/nfit and
>>>> add
>>>> cross arch support for this operation.
>>>>
>>>> Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to
>>>> opt in
>>>> and provide the support via wbinvd_on_all_cpus() call.
>>> So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
>>> its own semantics (iirc there was a flush all call in the past). Cc'ing
>>> Jonathan as well.
>>>
>>> Anyway, I think this call should not be defined in any place other
>>> than core
>>> kernel headers, and not in pat/nvdimm. I was trying to make it fit in
>>> smp.h,
>>> for example, but conviniently we might be able to hijack
>>> flush_cache_all()
>>> for our purposes as of course neither x86-64 arm64 uses it :)
>>>
>>> And I see this as safe (wrt not adding a big hammer on unaware
>>> drivers) as
>>> the 32bit archs that define the call are mostly contained thin their
>>> arch/,
>>> and the few in drivers/ are still specific to those archs.
>>>
>>> Maybe something like the below.
>> Ok. I'll replace my version with yours.
> Careful with flush_cache_all(). The stub version in
> include/asm-generic/cacheflush.h has a comment above it that would
> need updating at very least (I think).
> Note there 'was' a flush_cache_all() for ARM64, but:
> https://patchwork.kernel.org/project/linux-arm-kernel/patch/1429521875-16893-1-git-send-email-mark.rutland@arm.com/


Should it be flush_and_invalidate_cache_all() instead, given that it calls 
wbinvd on x86? I think on other archs, at least ARM, flush and invalidate 
are separate instructions, aren't they?

>
> Also, I'm far from sure it will be the right choice on all CXL supporting
> architectures.
> +CC linux-arch, linux-arm and Arnd.
>
>>
>>> Thanks,
>>> Davidlohr
>>>
>>> ------8<----------------------------------------
>>> Subject: [PATCH] arch/x86: define flush_cache_all as global wbinvd
>>>
>>> With CXL security features, global CPU cache flushing nvdimm
>>> requirements are no longer specific to that subsystem, even
>>> beyond the scope of security_ops. CXL will need such semantics
>>> for features not necessarily limited to persistent memory.
>>>
>>> So use the flush_cache_all() for the wbinvd across all
>>> CPUs on x86. arm64, which is another platform to have CXL
>>> support can also define its own semantics here.
>>>
>>> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
>>> ---
>>>   arch/x86/Kconfig                  |  1 -
>>>   arch/x86/include/asm/cacheflush.h |  5 +++++
>>>   arch/x86/mm/pat/set_memory.c      |  8 --------
>>>   drivers/acpi/nfit/intel.c         | 11 ++++++-----
>>>   drivers/cxl/security.c            |  5 +++--
>>>   include/linux/libnvdimm.h         |  9 ---------
>>>   6 files changed, 14 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>>> index 8dbe89eba639..be0b95e51df6 100644
>>> --- a/arch/x86/Kconfig
>>> +++ b/arch/x86/Kconfig
>>> @@ -83,7 +83,6 @@ config X86
>>>      select ARCH_HAS_MEMBARRIER_SYNC_CORE
>>>      select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
>>>      select ARCH_HAS_PMEM_API        if X86_64
>>> -    select ARCH_HAS_NVDIMM_INVAL_CACHE    if X86_64
>>>      select ARCH_HAS_PTE_DEVMAP        if X86_64
>>>      select ARCH_HAS_PTE_SPECIAL
>>>      select ARCH_HAS_UACCESS_FLUSHCACHE    if X86_64
>>> diff --git a/arch/x86/include/asm/cacheflush.h
>>> b/arch/x86/include/asm/cacheflush.h
>>> index b192d917a6d0..05c79021665d 100644
>>> --- a/arch/x86/include/asm/cacheflush.h
>>> +++ b/arch/x86/include/asm/cacheflush.h
>>> @@ -10,4 +10,9 @@
>>>
>>>   void clflush_cache_range(void *addr, unsigned int size);
>>>
>>> +#define flush_cache_all()        \
>>> +do {                    \
>>> +    wbinvd_on_all_cpus();        \
>>> +} while (0)
>>> +
>>>   #endif /* _ASM_X86_CACHEFLUSH_H */
>>> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
>>> index e4cd1286deef..1abd5438f126 100644
>>> --- a/arch/x86/mm/pat/set_memory.c
>>> +++ b/arch/x86/mm/pat/set_memory.c
>>> @@ -330,14 +330,6 @@ void arch_invalidate_pmem(void *addr, size_t size)
>>>   EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
>>>   #endif
>>>
>>> -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
>>> -void arch_invalidate_nvdimm_cache(void)
>>> -{
>>> -    wbinvd_on_all_cpus();
>>> -}
>>> -EXPORT_SYMBOL_GPL(arch_invalidate_nvdimm_cache);
>>> -#endif
>>> -
>>>   static void __cpa_flush_all(void *arg)
>>>   {
>>>      unsigned long cache = (unsigned long)arg;
>>> diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
>>> index 242d2e9203e9..1b0ecb4d67e6 100644
>>> --- a/drivers/acpi/nfit/intel.c
>>> +++ b/drivers/acpi/nfit/intel.c
>>> @@ -1,6 +1,7 @@
>>>   // SPDX-License-Identifier: GPL-2.0
>>>   /* Copyright(c) 2018 Intel Corporation. All rights reserved. */
>>>   #include <linux/libnvdimm.h>
>>> +#include <linux/cacheflush.h>
>>>   #include <linux/ndctl.h>
>>>   #include <linux/acpi.h>
>>>   #include <asm/smp.h>
>>> @@ -226,7 +227,7 @@ static int __maybe_unused
>>> intel_security_unlock(struct nvdimm *nvdimm,
>>>      }
>>>
>>>      /* DIMM unlocked, invalidate all CPU caches before we read it */
>>> -    arch_invalidate_nvdimm_cache();
>>> +    flush_cache_all();
>>>
>>>      return 0;
>>>   }
>>> @@ -296,7 +297,7 @@ static int __maybe_unused
>>> intel_security_erase(struct nvdimm *nvdimm,
>>>          return -ENOTTY;
>>>
>>>      /* flush all cache before we erase DIMM */
>>> -    arch_invalidate_nvdimm_cache();
>>> +    flush_cache_all();
>>>      memcpy(nd_cmd.cmd.passphrase, key->data,
>>>              sizeof(nd_cmd.cmd.passphrase));
>>>      rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
>>> @@ -316,7 +317,7 @@ static int __maybe_unused
>>> intel_security_erase(struct nvdimm *nvdimm,
>>>      }
>>>
>>>      /* DIMM erased, invalidate all CPU caches before we read it */
>>> -    arch_invalidate_nvdimm_cache();
>>> +    flush_cache_all();
>>>      return 0;
>>>   }
>>>
>>> @@ -353,7 +354,7 @@ static int __maybe_unused
>>> intel_security_query_overwrite(struct nvdimm *nvdimm)
>>>      }
>>>
>>>      /* flush all cache before we make the nvdimms available */
>>> -    arch_invalidate_nvdimm_cache();
>>> +    flush_cache_all();
>>>      return 0;
>>>   }
>>>
>>> @@ -379,7 +380,7 @@ static int __maybe_unused
>>> intel_security_overwrite(struct nvdimm *nvdimm,
>>>          return -ENOTTY;
>>>
>>>      /* flush all cache before we erase DIMM */
>>> -    arch_invalidate_nvdimm_cache();
>>> +    flush_cache_all();
>>>      memcpy(nd_cmd.cmd.passphrase, nkey->data,
>>>              sizeof(nd_cmd.cmd.passphrase));
>>>      rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
>>> diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
>>> index 3dc04b50afaf..e2977872bf2f 100644
>>> --- a/drivers/cxl/security.c
>>> +++ b/drivers/cxl/security.c
>>> @@ -6,6 +6,7 @@
>>>   #include <linux/ndctl.h>
>>>   #include <linux/async.h>
>>>   #include <linux/slab.h>
>>> +#include <linux/cacheflush.h>
>>>   #include "cxlmem.h"
>>>   #include "cxl.h"
>>>
>>> @@ -137,7 +138,7 @@ static int cxl_pmem_security_unlock(struct nvdimm
>>> *nvdimm,
>>>          return rc;
>>>
>>>      /* DIMM unlocked, invalidate all CPU caches before we read it */
>>> -    arch_invalidate_nvdimm_cache();
>>> +    flush_cache_all();
>>>      return 0;
>>>   }
>>>
>>> @@ -165,7 +166,7 @@ static int
>>> cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
>>>          return rc;
>>>
>>>      /* DIMM erased, invalidate all CPU caches before we read it */
>>> -    arch_invalidate_nvdimm_cache();
>>> +    flush_cache_all();
>>>      return 0;
>>>   }
>>>
>>> diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
>>> index 07e4e7572089..0769afb73380 100644
>>> --- a/include/linux/libnvdimm.h
>>> +++ b/include/linux/libnvdimm.h
>>> @@ -309,13 +309,4 @@ static inline void arch_invalidate_pmem(void
>>> *addr, size_t size)
>>>   {
>>>   }
>>>   #endif
>>> -
>>> -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
>>> -void arch_invalidate_nvdimm_cache(void);
>>> -#else
>>> -static inline void arch_invalidate_nvdimm_cache(void)
>>> -{
>>> -}
>>> -#endif
>>> -
>>>   #endif /* __LIBNVDIMM_H__ */
>>> -- 
>>> 2.36.1
>>>   

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 11/15] cxl/pmem: Add "Unlock" security command support
  2022-08-04 13:19   ` Jonathan Cameron
@ 2022-08-09 22:31     ` Dave Jiang
  0 siblings, 0 replies; 79+ messages in thread
From: Dave Jiang @ 2022-08-09 22:31 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, nvdimm, dan.j.williams, bwidawsk, ira.weiny,
	vishal.l.verma, alison.schofield, dave


On 8/4/2022 6:19 AM, Jonathan Cameron wrote:
> On Fri, 15 Jul 2022 14:09:36 -0700
> Dave Jiang <dave.jiang@intel.com> wrote:
>
>> Create callback function to support the nvdimm_security_ops() ->unlock()
>> callback. Translate the operation to send "Unlock" security command for CXL
>> mem device.
>>
>> When the mem device is unlocked, arch_invalidate_nvdimm_cache() is called
>> in order to invalidate all CPU caches before attempting to access the mem
>> device.
>>
>> See CXL 2.0 spec section 8.2.9.5.6.4 for reference.
>>
>> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> Hi Dave,
>
> One trivial thing inline.
>
> Thanks,
>
> Jonathan
>
>> ---
>>   drivers/cxl/cxlmem.h   |    1 +
>>   drivers/cxl/security.c |   21 +++++++++++++++++++++
>>   2 files changed, 22 insertions(+)
>>
>> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
>> index ced85be291f3..ae8ccd484491 100644
>> --- a/drivers/cxl/cxlmem.h
>> +++ b/drivers/cxl/cxlmem.h
>> @@ -253,6 +253,7 @@ enum cxl_opcode {
>>   	CXL_MBOX_OP_GET_SECURITY_STATE	= 0x4500,
>>   	CXL_MBOX_OP_SET_PASSPHRASE	= 0x4501,
>>   	CXL_MBOX_OP_DISABLE_PASSPHRASE	= 0x4502,
>> +	CXL_MBOX_OP_UNLOCK		= 0x4503,
>>   	CXL_MBOX_OP_FREEZE_SECURITY	= 0x4504,
>>   	CXL_MBOX_OP_MAX			= 0x10000
>>   };
>> diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
>> index 6399266a5908..d15520f280f0 100644
>> --- a/drivers/cxl/security.c
>> +++ b/drivers/cxl/security.c
>> @@ -114,11 +114,32 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
>>   	return cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_FREEZE_SECURITY, NULL, 0, NULL, 0);
>>   }
>>   
>> +static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
>> +				    const struct nvdimm_key_data *key_data)
>> +{
>> +	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
>> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
>> +	struct cxl_dev_state *cxlds = cxlmd->cxlds;
>> +	u8 pass[NVDIMM_PASSPHRASE_LEN];
>> +	int rc;
>> +
>> +	memcpy(pass, key_data->data, NVDIMM_PASSPHRASE_LEN);
> Why do we need a local copy?  I'd have thought we could just
> pass keydata->data in as the payload for cxl_mbox_send_cmd()
> There might be some value in making it easier to check by
> having a structure defined for this payload (obviously trivial)
> but given we are using an array of length defined by a non CXL
> define, I'm not sure there is any point in the copy.

We end up hitting a compile warning if we pass it in directly, because key_data->data has a const qualifier.

tools/testing/cxl/../../../drivers/cxl/security.c: In function ‘cxl_pmem_security_unlock’:
tools/testing/cxl/../../../drivers/cxl/security.c:116:40: warning: passing argument 3 of ‘cxl_mbox_send_cmd’ discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
   116 |                                key_data->data, NVDIMM_PASSPHRASE_LEN, NULL, 0);
       |                                ~~~~~~~~^~~~~~
In file included from tools/testing/cxl/../../../drivers/cxl/security.c:8:
tools/testing/cxl/../../../drivers/cxl/cxlmem.h:408:70: note: expected ‘void *’ but argument is of type ‘const u8 *’ {aka ‘const unsigned char *’}
   408 | int cxl_mbox_send_cmd(struct cxl_dev_state *cxlds, u16 opcode, void *in,
       |                                                                ~~~~~~^~
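
One alternative, purely as a sketch of Jonathan's suggestion (the parameter 
names below are guessed, and this is not something the current series does), 
would be to constify the input path so the local copy can go away:

	/* hypothetical signature change in cxlmem.h */
	int cxl_mbox_send_cmd(struct cxl_dev_state *cxlds, u16 opcode,
			      const void *in, size_t in_size,
			      void *out, size_t out_size);

	/* cxl_pmem_security_unlock() could then pass the key material directly */
	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_UNLOCK,
			       key_data->data, NVDIMM_PASSPHRASE_LEN, NULL, 0);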


>
>> +	rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_UNLOCK,
>> +			       pass, NVDIMM_PASSPHRASE_LEN, NULL, 0);
>> +	if (rc < 0)
>> +		return rc;
>> +
>> +	/* DIMM unlocked, invalidate all CPU caches before we read it */
>> +	arch_invalidate_nvdimm_cache();
>> +	return 0;
>> +}
>> +
>>   static const struct nvdimm_security_ops __cxl_security_ops = {
>>   	.get_flags = cxl_pmem_get_security_flags,
>>   	.change_key = cxl_pmem_security_change_key,
>>   	.disable = cxl_pmem_security_disable,
>>   	.freeze = cxl_pmem_security_freeze,
>> +	.unlock = cxl_pmem_security_unlock,
>>   };
>>   
>>   const struct nvdimm_security_ops *cxl_security_ops = &__cxl_security_ops;
>>
>>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-09 21:47           ` Dave Jiang
@ 2022-08-10 14:15             ` Mark Rutland
  -1 siblings, 0 replies; 79+ messages in thread
From: Mark Rutland @ 2022-08-10 14:15 UTC (permalink / raw)
  To: Dave Jiang
  Cc: Jonathan Cameron, Davidlohr Bueso, linux-cxl, nvdimm,
	dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel

On Tue, Aug 09, 2022 at 02:47:06PM -0700, Dave Jiang wrote:
> 
> On 8/3/2022 10:37 AM, Jonathan Cameron wrote:
> > On Tue, 19 Jul 2022 12:07:03 -0700
> > Dave Jiang <dave.jiang@intel.com> wrote:
> > 
> > > On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
> > > > On Fri, 15 Jul 2022, Dave Jiang wrote:
> > > > > The original implementation to flush all cache after unlocking the
> > > > > nvdimm
> > > > > resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
> > > > > nvdimm with security operations arrives on other archs. With support CXL
> > > > > pmem supporting security operations, specifically "unlock" dimm, the
> > > > > need
> > > > > for an arch supported helper function to invalidate all CPU cache for
> > > > > nvdimm has arrived. Remove original implementation from acpi/nfit and
> > > > > add
> > > > > cross arch support for this operation.
> > > > > 
> > > > > Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to
> > > > > opt in
> > > > > and provide the support via wbinvd_on_all_cpus() call.
> > > > So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
> > > > its own semantics (iirc there was a flush all call in the past). Cc'ing
> > > > Jonathan as well.
> > > > 
> > > > Anyway, I think this call should not be defined in any place other
> > > > than core
> > > > kernel headers, and not in pat/nvdimm. I was trying to make it fit in
> > > > smp.h,
> > > > for example, but conviniently we might be able to hijack
> > > > flush_cache_all()
> > > > for our purposes as of course neither x86-64 arm64 uses it :)
> > > > 
> > > > And I see this as safe (wrt not adding a big hammer on unaware
> > > > drivers) as
> > > > the 32bit archs that define the call are mostly contained thin their
> > > > arch/,
> > > > and the few in drivers/ are still specific to those archs.
> > > > 
> > > > Maybe something like the below.
> > > Ok. I'll replace my version with yours.
> > Careful with flush_cache_all(). The stub version in
> > include/asm-generic/cacheflush.h has a comment above it that would
> > need updating at very least (I think).
> > Note there 'was' a flush_cache_all() for ARM64, but:
> > https://patchwork.kernel.org/project/linux-arm-kernel/patch/1429521875-16893-1-git-send-email-mark.rutland@arm.com/
> 
> 
> flush_and_invalidate_cache_all() instead given it calls wbinvd on x86? I
> think other archs, at least ARM, those are separate instructions aren't
> they?

On arm and arm64 there is no way to perform maintenance on *all* caches; it has
to be done in cacheline increments by address. It's not realistic to do that
for the entire address space, so we need to know the relevant address ranges
(as per the commit referenced above).
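
For illustration, by-address maintenance boils down to a loop like the 
sketch below (assuming the range and the minimum D-cache line size are 
already known, which is exactly the information a blanket "flush 
everything" interface doesn't have):

	unsigned long addr;

	/* clean+invalidate [start, end) to the Point of Coherency, line by line */
	for (addr = start & ~(line - 1); addr < end; addr += line)
		asm volatile("dc civac, %0" : : "r" (addr) : "memory");
	asm volatile("dsb sy" : : : "memory");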

So we probably need to think a bit harder about the generic interface, since
"all" isn't possible to implement. :/

Thanks,
Mark.

> 
> > 
> > Also, I'm far from sure it will be the right choice on all CXL supporting
> > architectures.
> > +CC linux-arch, linux-arm and Arnd.
> > 
> > > 
> > > > Thanks,
> > > > Davidlohr
> > > > 
> > > > ------8<----------------------------------------
> > > > Subject: [PATCH] arch/x86: define flush_cache_all as global wbinvd
> > > > 
> > > > With CXL security features, global CPU cache flushing nvdimm
> > > > requirements are no longer specific to that subsystem, even
> > > > beyond the scope of security_ops. CXL will need such semantics
> > > > for features not necessarily limited to persistent memory.
> > > > 
> > > > So use the flush_cache_all() for the wbinvd across all
> > > > CPUs on x86. arm64, which is another platform to have CXL
> > > > support can also define its own semantics here.
> > > > 
> > > > Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
> > > > ---
> > > >   arch/x86/Kconfig                  |  1 -
> > > >   arch/x86/include/asm/cacheflush.h |  5 +++++
> > > >   arch/x86/mm/pat/set_memory.c      |  8 --------
> > > >   drivers/acpi/nfit/intel.c         | 11 ++++++-----
> > > >   drivers/cxl/security.c            |  5 +++--
> > > >   include/linux/libnvdimm.h         |  9 ---------
> > > >   6 files changed, 14 insertions(+), 25 deletions(-)
> > > > 
> > > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > > > index 8dbe89eba639..be0b95e51df6 100644
> > > > --- a/arch/x86/Kconfig
> > > > +++ b/arch/x86/Kconfig
> > > > @@ -83,7 +83,6 @@ config X86
> > > >      select ARCH_HAS_MEMBARRIER_SYNC_CORE
> > > >      select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
> > > >      select ARCH_HAS_PMEM_API        if X86_64
> > > > -    select ARCH_HAS_NVDIMM_INVAL_CACHE    if X86_64
> > > >      select ARCH_HAS_PTE_DEVMAP        if X86_64
> > > >      select ARCH_HAS_PTE_SPECIAL
> > > >      select ARCH_HAS_UACCESS_FLUSHCACHE    if X86_64
> > > > diff --git a/arch/x86/include/asm/cacheflush.h
> > > > b/arch/x86/include/asm/cacheflush.h
> > > > index b192d917a6d0..05c79021665d 100644
> > > > --- a/arch/x86/include/asm/cacheflush.h
> > > > +++ b/arch/x86/include/asm/cacheflush.h
> > > > @@ -10,4 +10,9 @@
> > > > 
> > > >   void clflush_cache_range(void *addr, unsigned int size);
> > > > 
> > > > +#define flush_cache_all()        \
> > > > +do {                    \
> > > > +    wbinvd_on_all_cpus();        \
> > > > +} while (0)
> > > > +
> > > >   #endif /* _ASM_X86_CACHEFLUSH_H */
> > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > > > index e4cd1286deef..1abd5438f126 100644
> > > > --- a/arch/x86/mm/pat/set_memory.c
> > > > +++ b/arch/x86/mm/pat/set_memory.c
> > > > @@ -330,14 +330,6 @@ void arch_invalidate_pmem(void *addr, size_t size)
> > > >   EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
> > > >   #endif
> > > > 
> > > > -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
> > > > -void arch_invalidate_nvdimm_cache(void)
> > > > -{
> > > > -    wbinvd_on_all_cpus();
> > > > -}
> > > > -EXPORT_SYMBOL_GPL(arch_invalidate_nvdimm_cache);
> > > > -#endif
> > > > -
> > > >   static void __cpa_flush_all(void *arg)
> > > >   {
> > > >      unsigned long cache = (unsigned long)arg;
> > > > diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
> > > > index 242d2e9203e9..1b0ecb4d67e6 100644
> > > > --- a/drivers/acpi/nfit/intel.c
> > > > +++ b/drivers/acpi/nfit/intel.c
> > > > @@ -1,6 +1,7 @@
> > > >   // SPDX-License-Identifier: GPL-2.0
> > > >   /* Copyright(c) 2018 Intel Corporation. All rights reserved. */
> > > >   #include <linux/libnvdimm.h>
> > > > +#include <linux/cacheflush.h>
> > > >   #include <linux/ndctl.h>
> > > >   #include <linux/acpi.h>
> > > >   #include <asm/smp.h>
> > > > @@ -226,7 +227,7 @@ static int __maybe_unused
> > > > intel_security_unlock(struct nvdimm *nvdimm,
> > > >      }
> > > > 
> > > >      /* DIMM unlocked, invalidate all CPU caches before we read it */
> > > > -    arch_invalidate_nvdimm_cache();
> > > > +    flush_cache_all();
> > > > 
> > > >      return 0;
> > > >   }
> > > > @@ -296,7 +297,7 @@ static int __maybe_unused
> > > > intel_security_erase(struct nvdimm *nvdimm,
> > > >          return -ENOTTY;
> > > > 
> > > >      /* flush all cache before we erase DIMM */
> > > > -    arch_invalidate_nvdimm_cache();
> > > > +    flush_cache_all();
> > > >      memcpy(nd_cmd.cmd.passphrase, key->data,
> > > >              sizeof(nd_cmd.cmd.passphrase));
> > > >      rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
> > > > @@ -316,7 +317,7 @@ static int __maybe_unused
> > > > intel_security_erase(struct nvdimm *nvdimm,
> > > >      }
> > > > 
> > > >      /* DIMM erased, invalidate all CPU caches before we read it */
> > > > -    arch_invalidate_nvdimm_cache();
> > > > +    flush_cache_all();
> > > >      return 0;
> > > >   }
> > > > 
> > > > @@ -353,7 +354,7 @@ static int __maybe_unused
> > > > intel_security_query_overwrite(struct nvdimm *nvdimm)
> > > >      }
> > > > 
> > > >      /* flush all cache before we make the nvdimms available */
> > > > -    arch_invalidate_nvdimm_cache();
> > > > +    flush_cache_all();
> > > >      return 0;
> > > >   }
> > > > 
> > > > @@ -379,7 +380,7 @@ static int __maybe_unused
> > > > intel_security_overwrite(struct nvdimm *nvdimm,
> > > >          return -ENOTTY;
> > > > 
> > > >      /* flush all cache before we erase DIMM */
> > > > -    arch_invalidate_nvdimm_cache();
> > > > +    flush_cache_all();
> > > >      memcpy(nd_cmd.cmd.passphrase, nkey->data,
> > > >              sizeof(nd_cmd.cmd.passphrase));
> > > >      rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
> > > > diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c
> > > > index 3dc04b50afaf..e2977872bf2f 100644
> > > > --- a/drivers/cxl/security.c
> > > > +++ b/drivers/cxl/security.c
> > > > @@ -6,6 +6,7 @@
> > > >   #include <linux/ndctl.h>
> > > >   #include <linux/async.h>
> > > >   #include <linux/slab.h>
> > > > +#include <linux/cacheflush.h>
> > > >   #include "cxlmem.h"
> > > >   #include "cxl.h"
> > > > 
> > > > @@ -137,7 +138,7 @@ static int cxl_pmem_security_unlock(struct nvdimm
> > > > *nvdimm,
> > > >          return rc;
> > > > 
> > > >      /* DIMM unlocked, invalidate all CPU caches before we read it */
> > > > -    arch_invalidate_nvdimm_cache();
> > > > +    flush_cache_all();
> > > >      return 0;
> > > >   }
> > > > 
> > > > @@ -165,7 +166,7 @@ static int
> > > > cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
> > > >          return rc;
> > > > 
> > > >      /* DIMM erased, invalidate all CPU caches before we read it */
> > > > -    arch_invalidate_nvdimm_cache();
> > > > +    flush_cache_all();
> > > >      return 0;
> > > >   }
> > > > 
> > > > diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
> > > > index 07e4e7572089..0769afb73380 100644
> > > > --- a/include/linux/libnvdimm.h
> > > > +++ b/include/linux/libnvdimm.h
> > > > @@ -309,13 +309,4 @@ static inline void arch_invalidate_pmem(void
> > > > *addr, size_t size)
> > > >   {
> > > >   }
> > > >   #endif
> > > > -
> > > > -#ifdef CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE
> > > > -void arch_invalidate_nvdimm_cache(void);
> > > > -#else
> > > > -static inline void arch_invalidate_nvdimm_cache(void)
> > > > -{
> > > > -}
> > > > -#endif
> > > > -
> > > >   #endif /* __LIBNVDIMM_H__ */
> > > > -- 
> > > > 2.36.1

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-10 14:15             ` Mark Rutland
@ 2022-08-10 14:31               ` Eliot Moss
  -1 siblings, 0 replies; 79+ messages in thread
From: Eliot Moss @ 2022-08-10 14:31 UTC (permalink / raw)
  To: Mark Rutland, Dave Jiang
  Cc: Jonathan Cameron, Davidlohr Bueso, linux-cxl, nvdimm,
	dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel

On 8/10/2022 10:15 AM, Mark Rutland wrote:
> On Tue, Aug 09, 2022 at 02:47:06PM -0700, Dave Jiang wrote:
>>
>> On 8/3/2022 10:37 AM, Jonathan Cameron wrote:
>>> On Tue, 19 Jul 2022 12:07:03 -0700
>>> Dave Jiang <dave.jiang@intel.com> wrote:
>>>
>>>> On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
>>>>> On Fri, 15 Jul 2022, Dave Jiang wrote:
>>>>>> The original implementation to flush all cache after unlocking the
>>>>>> nvdimm
>>>>>> resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
>>>>>> nvdimm with security operations arrives on other archs. With support CXL
>>>>>> pmem supporting security operations, specifically "unlock" dimm, the
>>>>>> need
>>>>>> for an arch supported helper function to invalidate all CPU cache for
>>>>>> nvdimm has arrived. Remove original implementation from acpi/nfit and
>>>>>> add
>>>>>> cross arch support for this operation.
>>>>>>
>>>>>> Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to
>>>>>> opt in
>>>>>> and provide the support via wbinvd_on_all_cpus() call.
>>>>> So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
>>>>> its own semantics (iirc there was a flush all call in the past). Cc'ing
>>>>> Jonathan as well.
>>>>>
>>>>> Anyway, I think this call should not be defined in any place other
>>>>> than core
>>>>> kernel headers, and not in pat/nvdimm. I was trying to make it fit in
>>>>> smp.h,
>>>>> for example, but conviniently we might be able to hijack
>>>>> flush_cache_all()
>>>>> for our purposes as of course neither x86-64 arm64 uses it :)
>>>>>
>>>>> And I see this as safe (wrt not adding a big hammer on unaware
>>>>> drivers) as
>>>>> the 32bit archs that define the call are mostly contained thin their
>>>>> arch/,
>>>>> and the few in drivers/ are still specific to those archs.
>>>>>
>>>>> Maybe something like the below.
>>>> Ok. I'll replace my version with yours.
>>> Careful with flush_cache_all(). The stub version in
>>> include/asm-generic/cacheflush.h has a comment above it that would
>>> need updating at very least (I think).
>>> Note there 'was' a flush_cache_all() for ARM64, but:
>>> https://patchwork.kernel.org/project/linux-arm-kernel/patch/1429521875-16893-1-git-send-email-mark.rutland@arm.com/
>>
>>
>> flush_and_invalidate_cache_all() instead given it calls wbinvd on x86? I
>> think other archs, at least ARM, those are separate instructions aren't
>> they?
> 
> On arm and arm64 there is no way to perform maintenance on *all* caches; it has
> to be done in cacheline increments by address. It's not realistic to do that
> for the entire address space, so we need to know the relevant address ranges
> (as per the commit referenced above).
> 
> So we probably need to think a bit harder about the geenric interface, since
> "all" isn't possible to implement. :/

Can you not do flushing by set and way on each cache,
probably working outwards from L1?

Eliot Moss

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-10 14:31               ` Eliot Moss
@ 2022-08-10 18:09                 ` Mark Rutland
  -1 siblings, 0 replies; 79+ messages in thread
From: Mark Rutland @ 2022-08-10 18:09 UTC (permalink / raw)
  To: Eliot Moss
  Cc: Dave Jiang, Jonathan Cameron, Davidlohr Bueso, linux-cxl, nvdimm,
	dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel

On Wed, Aug 10, 2022 at 10:31:12AM -0400, Eliot Moss wrote:
> On 8/10/2022 10:15 AM, Mark Rutland wrote:
> > On Tue, Aug 09, 2022 at 02:47:06PM -0700, Dave Jiang wrote:
> > > 
> > > On 8/3/2022 10:37 AM, Jonathan Cameron wrote:
> > > > On Tue, 19 Jul 2022 12:07:03 -0700
> > > > Dave Jiang <dave.jiang@intel.com> wrote:
> > > > 
> > > > > On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
> > > > > > On Fri, 15 Jul 2022, Dave Jiang wrote:
> > > > > > > The original implementation to flush all cache after unlocking the
> > > > > > > nvdimm
> > > > > > > resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
> > > > > > > nvdimm with security operations arrives on other archs. With support CXL
> > > > > > > pmem supporting security operations, specifically "unlock" dimm, the
> > > > > > > need
> > > > > > > for an arch supported helper function to invalidate all CPU cache for
> > > > > > > nvdimm has arrived. Remove original implementation from acpi/nfit and
> > > > > > > add
> > > > > > > cross arch support for this operation.
> > > > > > > 
> > > > > > > Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to
> > > > > > > opt in
> > > > > > > and provide the support via wbinvd_on_all_cpus() call.
> > > > > > So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
> > > > > > its own semantics (iirc there was a flush all call in the past). Cc'ing
> > > > > > Jonathan as well.
> > > > > > 
> > > > > > Anyway, I think this call should not be defined in any place other
> > > > > > than core
> > > > > > kernel headers, and not in pat/nvdimm. I was trying to make it fit in
> > > > > > smp.h,
> > > > > > for example, but conviniently we might be able to hijack
> > > > > > flush_cache_all()
> > > > > > for our purposes as of course neither x86-64 arm64 uses it :)
> > > > > > 
> > > > > > And I see this as safe (wrt not adding a big hammer on unaware
> > > > > > drivers) as
> > > > > > the 32bit archs that define the call are mostly contained thin their
> > > > > > arch/,
> > > > > > and the few in drivers/ are still specific to those archs.
> > > > > > 
> > > > > > Maybe something like the below.
> > > > > Ok. I'll replace my version with yours.
> > > > Careful with flush_cache_all(). The stub version in
> > > > include/asm-generic/cacheflush.h has a comment above it that would
> > > > need updating at very least (I think).
> > > > Note there 'was' a flush_cache_all() for ARM64, but:
> > > > https://patchwork.kernel.org/project/linux-arm-kernel/patch/1429521875-16893-1-git-send-email-mark.rutland@arm.com/
> > > 
> > > 
> > > flush_and_invalidate_cache_all() instead given it calls wbinvd on x86? I
> > > think other archs, at least ARM, those are separate instructions aren't
> > > they?
> > 
> > On arm and arm64 there is no way to perform maintenance on *all* caches; it has
> > to be done in cacheline increments by address. It's not realistic to do that
> > for the entire address space, so we need to know the relevant address ranges
> > (as per the commit referenced above).
> > 
> > So we probably need to think a bit harder about the geenric interface, since
> > "all" isn't possible to implement. :/
> 
> Can you not do flushing by set and way on each cache,
> probably working outwards from L1?

Unfortunately, for a number of reasons, that doesn't work. For better or
worse, the *only* way that is guaranteed to work is to do this by address.

If you look at the latest ARM ARM (ARM DDI 0487H.a):

  https://developer.arm.com/documentation/ddi0487/ha/

... on page D4-4754, in the block "Example code for cache maintenance
instructions", there's note with a treatise on this.

The gist is that:

* Set/Way ops are only guaranteed to affect the caches local to the CPU
  issuing them, and are not guaranteed to affect caches owned by other CPUs.

* Set/Way ops are not guaranteed to affect system-level caches, which are
  fairly popular these days (whereas VA ops are required to affect those).

* Set/Way ops race with the natural behaviour of caches (so e.g. a line could
  bounce between layers of cache, or between caches in the system, and avoid
  being operated upon).

So unless you're on a single CPU system, with translation disabled, and you
*know* that there are no system-level caches, you can't rely upon Set/Way ops
to do anything useful.

They're really there for firmware to use for IMPLEMENTATION DEFINED power-up
and power-down sequences, and aren't useful to portable code.
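
To make "by address" concrete, a minimal sketch of what a driver would have
to do for a physical range: map it and walk it by VA. This is illustrative
only, assuming memremap() and the existing arch_invalidate_pmem() by-VA
helper (available where ARCH_HAS_PMEM_API is selected):

#include <linux/io.h>
#include <linux/libnvdimm.h>

static int flush_pmem_range_by_va(resource_size_t start, size_t len)
{
        void *addr = memremap(start, len, MEMREMAP_WB);

        if (!addr)
                return -ENOMEM;

        /* invalidate every cache line covering the mapped range */
        arch_invalidate_pmem(addr, len);
        memunmap(addr);
        return 0;
}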

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-10 18:09                 ` Mark Rutland
@ 2022-08-10 18:11                   ` Eliot Moss
  -1 siblings, 0 replies; 79+ messages in thread
From: Eliot Moss @ 2022-08-10 18:11 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Dave Jiang, Jonathan Cameron, Davidlohr Bueso, linux-cxl, nvdimm,
	dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel

On 8/10/2022 2:09 PM, Mark Rutland wrote:
> On Wed, Aug 10, 2022 at 10:31:12AM -0400, Eliot Moss wrote:
>> On 8/10/2022 10:15 AM, Mark Rutland wrote:
>>> On Tue, Aug 09, 2022 at 02:47:06PM -0700, Dave Jiang wrote:
>>>>
>>>> On 8/3/2022 10:37 AM, Jonathan Cameron wrote:
>>>>> On Tue, 19 Jul 2022 12:07:03 -0700
>>>>> Dave Jiang <dave.jiang@intel.com> wrote:
>>>>>
>>>>>> On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
>>>>>>> On Fri, 15 Jul 2022, Dave Jiang wrote:
>>>>>>>> The original implementation to flush all cache after unlocking the
>>>>>>>> nvdimm
>>>>>>>> resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
>>>>>>>> nvdimm with security operations arrives on other archs. With support CXL
>>>>>>>> pmem supporting security operations, specifically "unlock" dimm, the
>>>>>>>> need
>>>>>>>> for an arch supported helper function to invalidate all CPU cache for
>>>>>>>> nvdimm has arrived. Remove original implementation from acpi/nfit and
>>>>>>>> add
>>>>>>>> cross arch support for this operation.
>>>>>>>>
>>>>>>>> Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to
>>>>>>>> opt in
>>>>>>>> and provide the support via wbinvd_on_all_cpus() call.
>>>>>>> So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
>>>>>>> its own semantics (iirc there was a flush all call in the past). Cc'ing
>>>>>>> Jonathan as well.
>>>>>>>
>>>>>>> Anyway, I think this call should not be defined in any place other
>>>>>>> than core
>>>>>>> kernel headers, and not in pat/nvdimm. I was trying to make it fit in
>>>>>>> smp.h,
>>>>>>> for example, but conviniently we might be able to hijack
>>>>>>> flush_cache_all()
>>>>>>> for our purposes as of course neither x86-64 arm64 uses it :)
>>>>>>>
>>>>>>> And I see this as safe (wrt not adding a big hammer on unaware
>>>>>>> drivers) as
>>>>>>> the 32bit archs that define the call are mostly contained thin their
>>>>>>> arch/,
>>>>>>> and the few in drivers/ are still specific to those archs.
>>>>>>>
>>>>>>> Maybe something like the below.
>>>>>> Ok. I'll replace my version with yours.
>>>>> Careful with flush_cache_all(). The stub version in
>>>>> include/asm-generic/cacheflush.h has a comment above it that would
>>>>> need updating at very least (I think).
>>>>> Note there 'was' a flush_cache_all() for ARM64, but:
>>>>> https://patchwork.kernel.org/project/linux-arm-kernel/patch/1429521875-16893-1-git-send-email-mark.rutland@arm.com/
>>>>
>>>>
>>>> flush_and_invalidate_cache_all() instead given it calls wbinvd on x86? I
>>>> think other archs, at least ARM, those are separate instructions aren't
>>>> they?
>>>
>>> On arm and arm64 there is no way to perform maintenance on *all* caches; it has
>>> to be done in cacheline increments by address. It's not realistic to do that
>>> for the entire address space, so we need to know the relevant address ranges
>>> (as per the commit referenced above).
>>>
>>> So we probably need to think a bit harder about the geenric interface, since
>>> "all" isn't possible to implement. :/
>>
>> Can you not do flushing by set and way on each cache,
>> probably working outwards from L1?
> 
> Unfortunately, for a number of reasons, that doeesn't work. For better or
> worse, the *only* way which is guaranteed to work is to do this by address.
> 
> If you look at the latest ARM ARM (ARM DDI 0487H.a):
> 
>    https://developer.arm.com/documentation/ddi0487/ha/
> 
> ... on page D4-4754, in the block "Example code for cache maintenance
> instructions", there's note with a treatise on this.
> 
> The gist is that:
> 
> * Set/Way ops are only guaranteed to affect the caches local to the CPU
>    issuing them, and are not guaranteed to affect caches owned by other CPUs.
> 
> * Set/Way ops are not guaranteed to affect system-level caches, which are
>    fairly popular these days (whereas VA ops are required to affect those).
> 
> * Set/Way ops race with the natural behaviour of caches (so e.g. a line could
>    bounce between layers of cache, or between caches in the system, and avoid
>    being operated upon).
> 
> So unless you're on a single CPU system, with translation disabled, and you
> *know* that there are no system-level caches, you can't rely upon Set/Way ops
> to do anything useful.
> 
> They're really there for firmware to use for IMPLEMENTATION DEFINED power-up
> and power-down sequences, and aren'y useful to portable code.

Thanks for the explanation.  Really does seem that
ARM could use the equivalent of wbnoinvd/wbinvd/invd.

Regards - Eliot

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-10 14:15             ` Mark Rutland
@ 2022-08-10 20:06               ` Dan Williams
  -1 siblings, 0 replies; 79+ messages in thread
From: Dan Williams @ 2022-08-10 20:06 UTC (permalink / raw)
  To: Mark Rutland, Dave Jiang
  Cc: Jonathan Cameron, Davidlohr Bueso, linux-cxl, nvdimm,
	dan.j.williams, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel

Mark Rutland wrote:
> On Tue, Aug 09, 2022 at 02:47:06PM -0700, Dave Jiang wrote:
> > 
> > On 8/3/2022 10:37 AM, Jonathan Cameron wrote:
> > > On Tue, 19 Jul 2022 12:07:03 -0700
> > > Dave Jiang <dave.jiang@intel.com> wrote:
> > > 
> > > > On 7/17/2022 10:30 PM, Davidlohr Bueso wrote:
> > > > > On Fri, 15 Jul 2022, Dave Jiang wrote:
> > > > > > The original implementation to flush all cache after unlocking the
> > > > > > nvdimm
> > > > > > resides in drivers/acpi/nfit/intel.c. This is a temporary stop gap until
> > > > > > nvdimm with security operations arrives on other archs. With support CXL
> > > > > > pmem supporting security operations, specifically "unlock" dimm, the
> > > > > > need
> > > > > > for an arch supported helper function to invalidate all CPU cache for
> > > > > > nvdimm has arrived. Remove original implementation from acpi/nfit and
> > > > > > add
> > > > > > cross arch support for this operation.
> > > > > > 
> > > > > > Add CONFIG_ARCH_HAS_NVDIMM_INVAL_CACHE Kconfig and allow x86_64 to
> > > > > > opt in
> > > > > > and provide the support via wbinvd_on_all_cpus() call.
> > > > > So the 8.2.9.5.5 bits will also need wbinvd - and I guess arm64 will need
> > > > > its own semantics (iirc there was a flush all call in the past). Cc'ing
> > > > > Jonathan as well.
> > > > > 
> > > > > Anyway, I think this call should not be defined in any place other
> > > > > than core
> > > > > kernel headers, and not in pat/nvdimm. I was trying to make it fit in
> > > > > smp.h,
> > > > > for example, but conviniently we might be able to hijack
> > > > > flush_cache_all()
> > > > > for our purposes as of course neither x86-64 arm64 uses it :)
> > > > > 
> > > > > And I see this as safe (wrt not adding a big hammer on unaware
> > > > > drivers) as
> > > > > the 32bit archs that define the call are mostly contained thin their
> > > > > arch/,
> > > > > and the few in drivers/ are still specific to those archs.
> > > > > 
> > > > > Maybe something like the below.
> > > > Ok. I'll replace my version with yours.
> > > Careful with flush_cache_all(). The stub version in
> > > include/asm-generic/cacheflush.h has a comment above it that would
> > > need updating at very least (I think).
> > > Note there 'was' a flush_cache_all() for ARM64, but:
> > > https://patchwork.kernel.org/project/linux-arm-kernel/patch/1429521875-16893-1-git-send-email-mark.rutland@arm.com/
> > 
> > 
> > flush_and_invalidate_cache_all() instead given it calls wbinvd on x86? I
> > think other archs, at least ARM, those are separate instructions aren't
> > they?
> 
> On arm and arm64 there is no way to perform maintenance on *all* caches; it has
> to be done in cacheline increments by address. It's not realistic to do that
> for the entire address space, so we need to know the relevant address ranges
> (as per the commit referenced above).
> 
> So we probably need to think a bit harder about the geenric interface, since
> "all" isn't possible to implement. :/
> 

I expect the interface would not be in the "flush_cache_" namespace
since those functions are explicitly for virtually tagged caches that
need maintenance on TLB operations that change the VA to PA association.
In this case the cache needs maintenance because the data at the PA
changes. That also means that putting it in the "nvdimm_" namespace is
also wrong because there are provisions in the CXL spec where volatile
memory ranges can also change contents at a given PA, for example caches
might need to be invalidated if software resets the device, but not the
platform.

Something like:

    region_cache_flush(resource_size_t base, resource_size_t n, bool nowait)

...where internally that function can decide if it can rely on an
instruction like wbinvd, use set / way based flushing (if set / way
maintenance can be made to work which sounds like no for arm64), or map
into VA space and loop. If it needs to fall back to that VA-based loop
it might be the case that the caller would want to just fail the
security op rather than suffer the loop latency.
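
A rough sketch of that shape, using the name and the nowait semantics
proposed above (illustrative only, not an existing API); the non-x86 branch
shows the by-VA fallback discussed earlier, reusing memremap() and
arch_invalidate_pmem():

#include <linux/io.h>
#include <linux/libnvdimm.h>
#include <asm/smp.h>

int region_cache_flush(resource_size_t base, resource_size_t n, bool nowait)
{
#ifdef CONFIG_X86
        /* wbinvd is global anyway, so the range is irrelevant here */
        wbinvd_on_all_cpus();
        return 0;
#else
        void *addr;

        /* no full-cache or set/way option: fall back to a by-VA loop */
        if (nowait)
                return -EAGAIN; /* caller would rather fail the security op */

        addr = memremap(base, n, MEMREMAP_WB);
        if (!addr)
                return -ENOMEM;
        arch_invalidate_pmem(addr, n);
        memunmap(addr);
        return 0;
#endif
}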

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-10 20:06               ` Dan Williams
@ 2022-08-10 21:13                 ` Davidlohr Bueso
  -1 siblings, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-08-10 21:13 UTC (permalink / raw)
  To: Dan Williams
  Cc: Mark Rutland, Dave Jiang, Jonathan Cameron, linux-cxl, nvdimm,
	bwidawsk, ira.weiny, vishal.l.verma, alison.schofield,
	a.manzanares, linux-arch, Arnd Bergmann, linux-arm-kernel

On Wed, 10 Aug 2022, Dan Williams wrote:

>I expect the interface would not be in the "flush_cache_" namespace
>since those functions are explicitly for virtually tagged caches that
>need maintenance on TLB operations that change the VA to PA association.
>In this case the cache needs maintenance because the data at the PA
>changes. That also means that putting it in the "nvdimm_" namespace is
>also wrong because there are provisions in the CXL spec where volatile
>memory ranges can also change contents at a given PA, for example caches
>might need to be invalidated if software resets the device, but not the
>platform.
>
>Something like:
>
>    region_cache_flush(resource_size_t base, resource_size_t n, bool nowait)
>
>...where internally that function can decide if it can rely on an
>instruction like wbinvd, use set / way based flushing (if set / way
>maintenance can be made to work which sounds like no for arm64), or map
>into VA space and loop. If it needs to fall back to that VA-based loop
>it might be the case that the caller would want to just fail the
>security op rather than suffer the loop latency.

Yep, I was actually prototyping something similar, but wanted to still
reuse the cacheflush.h machinery and just introduce cache_flush_region()
or whatever name, which can return an error. So all the logic would
just be per-arch, where x86 will do the wbinvd and return 0, and arm64
can just do -EINVAL until VA-based flushing is no longer the only way.
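
Something along these lines, presumably (sketch only; the name, and whether
the call takes a range at all, are still up in the air):

#include <asm/smp.h>

static inline int cache_flush_region(phys_addr_t base, size_t n)
{
#ifdef CONFIG_X86
        wbinvd_on_all_cpus();   /* range ignored, wbinvd is global */
        return 0;
#else
        return -EINVAL;         /* e.g. arm64: no safe way to do this today */
#endif
}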

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-10 21:13                 ` Davidlohr Bueso
@ 2022-08-10 21:30                   ` Dan Williams
  -1 siblings, 0 replies; 79+ messages in thread
From: Dan Williams @ 2022-08-10 21:30 UTC (permalink / raw)
  To: Davidlohr Bueso, Dan Williams
  Cc: Mark Rutland, Dave Jiang, Jonathan Cameron, linux-cxl, nvdimm,
	bwidawsk, ira.weiny, vishal.l.verma, alison.schofield,
	a.manzanares, linux-arch, Arnd Bergmann, linux-arm-kernel

Davidlohr Bueso wrote:
> On Wed, 10 Aug 2022, Dan Williams wrote:
> 
> >I expect the interface would not be in the "flush_cache_" namespace
> >since those functions are explicitly for virtually tagged caches that
> >need maintenance on TLB operations that change the VA to PA association.
> >In this case the cache needs maintenance because the data at the PA
> >changes. That also means that putting it in the "nvdimm_" namespace is
> >also wrong because there are provisions in the CXL spec where volatile
> >memory ranges can also change contents at a given PA, for example caches
> >might need to be invalidated if software resets the device, but not the
> >platform.
> >
> >Something like:
> >
> >    region_cache_flush(resource_size_t base, resource_size_t n, bool nowait)
> >
> >...where internally that function can decide if it can rely on an
> >instruction like wbinvd, use set / way based flushing (if set / way
> >maintenance can be made to work which sounds like no for arm64), or map
> >into VA space and loop. If it needs to fall back to that VA-based loop
> >it might be the case that the caller would want to just fail the
> >security op rather than suffer the loop latency.
> 
> Yep, I was actually prototyping something similar, but want to still
> reuse cacheflush.h machinery and just introduce cache_flush_region()
> or whatever name, which returns any error. So all the logic would
> just be per-arch, where x86 will do the wbinv and return 0, and arm64
> can just do -EINVAL until VA-based is no longer the only way.

cache_flush_region() works for me, but I wonder if there should be a
cache_flush_region_capable() call to shut off dependent code early
rather than discovering it at runtime? For example, even archs like x86
that have wbinvd have scenarios where wbinvd is prohibited or painful.
TDX, and virtualization in general, come to mind.
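
As a sketch of what "shut off dependent code early" could look like at
security-ops setup time (the capability helper and ops-table names here are
hypothetical):

#include <linux/libnvdimm.h>

static const struct nvdimm_security_ops *cxl_security_ops_get(void)
{
        /* don't wire up unlock/erase at all if flushing can never work */
        if (!cache_flush_region_capable())
                return NULL;

        return &__cxl_security_ops;
}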

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm
  2022-08-10 21:30                   ` Dan Williams
@ 2022-08-10 21:31                     ` Davidlohr Bueso
  -1 siblings, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-08-10 21:31 UTC (permalink / raw)
  To: Dan Williams
  Cc: Mark Rutland, Dave Jiang, Jonathan Cameron, linux-cxl, nvdimm,
	bwidawsk, ira.weiny, vishal.l.verma, alison.schofield,
	a.manzanares, linux-arch, Arnd Bergmann, linux-arm-kernel

On Wed, 10 Aug 2022, Dan Williams wrote:

>Davidlohr Bueso wrote:
>> On Wed, 10 Aug 2022, Dan Williams wrote:
>>
>> >I expect the interface would not be in the "flush_cache_" namespace
>> >since those functions are explicitly for virtually tagged caches that
>> >need maintenance on TLB operations that change the VA to PA association.
>> >In this case the cache needs maintenance because the data at the PA
>> >changes. That also means that putting it in the "nvdimm_" namespace is
>> >also wrong because there are provisions in the CXL spec where volatile
>> >memory ranges can also change contents at a given PA, for example caches
>> >might need to be invalidated if software resets the device, but not the
>> >platform.
>> >
>> >Something like:
>> >
>> >    region_cache_flush(resource_size_t base, resource_size_t n, bool nowait)
>> >
>> >...where internally that function can decide if it can rely on an
>> >instruction like wbinvd, use set / way based flushing (if set / way
>> >maintenance can be made to work which sounds like no for arm64), or map
>> >into VA space and loop. If it needs to fall back to that VA-based loop
>> >it might be the case that the caller would want to just fail the
>> >security op rather than suffer the loop latency.
>>
>> Yep, I was actually prototyping something similar, but want to still
>> reuse cacheflush.h machinery and just introduce cache_flush_region()
>> or whatever name, which returns any error. So all the logic would
>> just be per-arch, where x86 will do the wbinv and return 0, and arm64
>> can just do -EINVAL until VA-based is no longer the only way.
>
>cache_flush_region() works for me, but I wonder if there should be a
>cache_flush_region_capable() call to shut off dependent code early
>rather than discovering it at runtime? For example, even archs like x86,
>that have wbinvd, have scenarios where wbinvd is prohibited, or painful.
>TDX, and virtualization in general, comes to mind.

Yeah, I'm no fan of wbinvd, but in these cases (cxl/nvdimm), at least from
the performance angle, I am not worried: the user is explicitly doing a
security/cleanup-specific op, probably decommissioning, so it's rare and
should not be expected to perform any better.

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-10 20:06               ` Dan Williams
@ 2022-08-15 16:07                 ` Davidlohr Bueso
  -1 siblings, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-08-15 16:07 UTC (permalink / raw)
  To: Dan Williams
  Cc: Mark Rutland, Dave Jiang, Jonathan Cameron, linux-cxl, nvdimm,
	bwidawsk, ira.weiny, vishal.l.verma, alison.schofield,
	a.manzanares, linux-arch, Arnd Bergmann, linux-arm-kernel, bp,
	x86, linux-kernel, dave

With CXL security features, the nvdimm requirement to flush all CPU
caches is no longer specific to that subsystem, and even goes beyond
the scope of security_ops. CXL will need such semantics for features
not necessarily limited to persistent memory.

While the scope of this is the physical address space, add a new
flush_all_caches() to the cacheflush headers such that each
architecture can define it, when capable. For x86, just use the
wbinvd hammer and leave every other arch incapable. While there can
be performance penalties or delayed response times, these calls are
both rare and explicitly security related, so that matters less.

Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---

After a few iterations I circled back to an interface without granularity.
It just doesn't make sense right now to define a range if arm64 will not
support this (won't do VA-based physical address space flushes) and, until
it comes up with consistent caches, security operations will simply be
unsupported.

  arch/x86/include/asm/cacheflush.h |  3 +++
  drivers/acpi/nfit/intel.c         | 41 ++++++++++++++-----------------
  include/asm-generic/cacheflush.h  | 22 +++++++++++++++++
  3 files changed, 43 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index b192d917a6d0..ce2ec9556093 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -10,4 +10,7 @@

  void clflush_cache_range(void *addr, unsigned int size);

+#define flush_all_caches() \
+	do { wbinvd_on_all_cpus(); } while(0)
+
  #endif /* _ASM_X86_CACHEFLUSH_H */
diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
index 8dd792a55730..f2f6c31e6ab7 100644
--- a/drivers/acpi/nfit/intel.c
+++ b/drivers/acpi/nfit/intel.c
@@ -4,6 +4,7 @@
  #include <linux/ndctl.h>
  #include <linux/acpi.h>
  #include <asm/smp.h>
+#include <linux/cacheflush.h>
  #include "intel.h"
  #include "nfit.h"

@@ -190,8 +191,6 @@ static int intel_security_change_key(struct nvdimm *nvdimm,
	}
  }

-static void nvdimm_invalidate_cache(void);
-
  static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
		const struct nvdimm_key_data *key_data)
  {
@@ -210,6 +209,9 @@ static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
	};
	int rc;

+	if (!flush_all_caches_capable())
+		return -EINVAL;
+
	if (!test_bit(NVDIMM_INTEL_UNLOCK_UNIT, &nfit_mem->dsm_mask))
		return -ENOTTY;

@@ -228,7 +230,7 @@ static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
	}

	/* DIMM unlocked, invalidate all CPU caches before we read it */
-	nvdimm_invalidate_cache();
+	flush_all_caches();

	return 0;
  }
@@ -294,11 +296,14 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
		},
	};

+	if (!flush_all_caches_capable())
+		return -EINVAL;
+
	if (!test_bit(cmd, &nfit_mem->dsm_mask))
		return -ENOTTY;

	/* flush all cache before we erase DIMM */
-	nvdimm_invalidate_cache();
+	flush_all_caches();
	memcpy(nd_cmd.cmd.passphrase, key->data,
			sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
@@ -318,7 +323,7 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
	}

	/* DIMM erased, invalidate all CPU caches before we read it */
-	nvdimm_invalidate_cache();
+	flush_all_caches();
	return 0;
  }

@@ -338,6 +343,9 @@ static int __maybe_unused intel_security_query_overwrite(struct nvdimm *nvdimm)
		},
	};

+	if (!flush_all_caches_capable())
+		return -EINVAL;
+
	if (!test_bit(NVDIMM_INTEL_QUERY_OVERWRITE, &nfit_mem->dsm_mask))
		return -ENOTTY;

@@ -355,7 +363,7 @@ static int __maybe_unused intel_security_query_overwrite(struct nvdimm *nvdimm)
	}

	/* flush all cache before we make the nvdimms available */
-	nvdimm_invalidate_cache();
+	flush_all_caches();
	return 0;
  }

@@ -377,11 +385,14 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
		},
	};

+	if (!flush_all_caches_capable())
+		return -EINVAL;
+
	if (!test_bit(NVDIMM_INTEL_OVERWRITE, &nfit_mem->dsm_mask))
		return -ENOTTY;

	/* flush all cache before we erase DIMM */
-	nvdimm_invalidate_cache();
+	flush_all_caches();
	memcpy(nd_cmd.cmd.passphrase, nkey->data,
			sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
@@ -401,22 +412,6 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
	}
  }

-/*
- * TODO: define a cross arch wbinvd equivalent when/if
- * NVDIMM_FAMILY_INTEL command support arrives on another arch.
- */
-#ifdef CONFIG_X86
-static void nvdimm_invalidate_cache(void)
-{
-	wbinvd_on_all_cpus();
-}
-#else
-static void nvdimm_invalidate_cache(void)
-{
-	WARN_ON_ONCE("cache invalidation required after unlock\n");
-}
-#endif
-
  static const struct nvdimm_security_ops __intel_security_ops = {
	.get_flags = intel_security_flags,
	.freeze = intel_security_freeze,
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 4f07afacbc23..f249142b4908 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -115,4 +115,26 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
	memcpy(dst, src, len)
  #endif

+/*
+ * Flush the entire caches across all CPUs. It is considered
+ * a big hammer (latency and performance). Unlike the APIs
+ * above, this function can be defined on architectures which
+ * have VIPT or PIPT caches, and thus is beyond the scope of
+ * virtual to physical mappings/page tables changing.
+ *
+ * The limitation here is that the architectures that make
+ * use of it must actually be able to comply with the semantics,
+ * such as those whose caches are in a consistent state. The
+ * caller can verify the situation early on.
+ */
+#ifndef flush_all_caches
+# define flush_all_caches_capable() false
+static inline void flush_all_caches(void)
+{
+	WARN_ON_ONCE("cache invalidation required\n");
+}
+#else
+# define flush_all_caches_capable() true
+#endif
+
  #endif /* _ASM_GENERIC_CACHEFLUSH_H */
--
2.37.2

^ permalink raw reply related	[flat|nested] 79+ messages in thread

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-15 16:07                 ` Davidlohr Bueso
@ 2022-08-16  9:01                   ` Peter Zijlstra
  -1 siblings, 0 replies; 79+ messages in thread
From: Peter Zijlstra @ 2022-08-16  9:01 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Dan Williams, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

On Mon, Aug 15, 2022 at 09:07:06AM -0700, Davidlohr Bueso wrote:
> diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
> index b192d917a6d0..ce2ec9556093 100644
> --- a/arch/x86/include/asm/cacheflush.h
> +++ b/arch/x86/include/asm/cacheflush.h
> @@ -10,4 +10,7 @@
> 
>  void clflush_cache_range(void *addr, unsigned int size);
> 
> +#define flush_all_caches() \
> +	do { wbinvd_on_all_cpus(); } while(0)
> +

This is horrific... we've done our utmost best to remove all WBINVD
usage and here you're adding it back in the most horrible form possible
?!?

Please don't do this, do *NOT* use WBINVD.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-16  9:01                   ` Peter Zijlstra
@ 2022-08-16 16:50                     ` Dan Williams
  -1 siblings, 0 replies; 79+ messages in thread
From: Dan Williams @ 2022-08-16 16:50 UTC (permalink / raw)
  To: Peter Zijlstra, Davidlohr Bueso
  Cc: Dan Williams, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

Peter Zijlstra wrote:
> On Mon, Aug 15, 2022 at 09:07:06AM -0700, Davidlohr Bueso wrote:
> > diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
> > index b192d917a6d0..ce2ec9556093 100644
> > --- a/arch/x86/include/asm/cacheflush.h
> > +++ b/arch/x86/include/asm/cacheflush.h
> > @@ -10,4 +10,7 @@
> > 
> >  void clflush_cache_range(void *addr, unsigned int size);
> > 
> > +#define flush_all_caches() \
> > +	do { wbinvd_on_all_cpus(); } while(0)
> > +
> 
> This is horrific... we've done our utmost best to remove all WBINVD
> usage and here you're adding it back in the most horrible form possible
> ?!?
> 
> Please don't do this, do *NOT* use WBINVD.

Unfortunately there are few good options here, and the changelog did
not make clear that this is continuing legacy [1], not adding new wbinvd
usage.

The functionality this enables is the ability to instantaneously
secure erase potentially terabytes of memory at once, and the kernel
needs to be sure that none of the data from before the secure erase is
still present in the cache. It is also used when unlocking a memory device
where speculative reads and firmware accesses could have cached poison
from before the device was unlocked.

This capability is typically only used once per-boot (for unlock), or
once per bare metal provisioning event (secure erase), like when handing
off the system to another tenant. That small scope plus the fact that
none of this is available to a VM limits the potential damage. So,
similar to the mitigation we did in [2] that did not kill off wbinvd
completely, this is limited to specific scenarios and should be disabled
in any scenario where wbinvd is painful / forbidden.

[1]: 4c6926a23b76 ("acpi/nfit, libnvdimm: Add unlock of nvdimm support for Intel DIMMs")
[2]: e2efb6359e62 ("ACPICA: Avoid cache flush inside virtual machines")

^ permalink raw reply	[flat|nested] 79+ messages in thread
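
The VM restriction mentioned above is what the flush_all_caches_capable()
check is meant to capture. A sketch of how an x86 definition could refuse the
big hammer when running as a guest, in the spirit of commit e2efb6359e62 cited
above; the hypervisor gate is an illustration, not something the posted patch
does:

#include <asm/cpufeature.h>	/* boot_cpu_has(), X86_FEATURE_HYPERVISOR */

/* Only bare metal can meaningfully execute WBINVD on all CPUs. */
static inline bool flush_all_caches_capable(void)
{
	return !boot_cpu_has(X86_FEATURE_HYPERVISOR);
}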

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-16 16:50                     ` Dan Williams
@ 2022-08-16 16:53                       ` Davidlohr Bueso
  -1 siblings, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-08-16 16:53 UTC (permalink / raw)
  To: Dan Williams
  Cc: Peter Zijlstra, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

On Tue, 16 Aug 2022, Dan Williams wrote:

>Peter Zijlstra wrote:
>> On Mon, Aug 15, 2022 at 09:07:06AM -0700, Davidlohr Bueso wrote:
>> > diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
>> > index b192d917a6d0..ce2ec9556093 100644
>> > --- a/arch/x86/include/asm/cacheflush.h
>> > +++ b/arch/x86/include/asm/cacheflush.h
>> > @@ -10,4 +10,7 @@
>> >
>> >  void clflush_cache_range(void *addr, unsigned int size);
>> >
>> > +#define flush_all_caches() \
>> > +	do { wbinvd_on_all_cpus(); } while(0)
>> > +
>>
>> This is horrific... we've done our utmost best to remove all WBINVD
>> usage and here you're adding it back in the most horrible form possible
>> ?!?
>>
>> Please don't do this, do *NOT* use WBINVD.
>
>Unfortunately there are few good options here, and the changelog did
>not make clear that this is continuing legacy [1], not adding new wbinvd
>usage.

While I was hoping that it was obvious from the intel.c changes that this
was not a new wbinvd, I can certainly improve the changelog with the below.

Thanks,
Davidlohr

>
>The functionality this enables is the ability to instantaneously
>secure erase potentially terabytes of memory at once, and the kernel
>needs to be sure that none of the data from before the secure erase is
>still present in the cache. It is also used when unlocking a memory device
>where speculative reads and firmware accesses could have cached poison
>from before the device was unlocked.
>
>This capability is typically only used once per-boot (for unlock), or
>once per bare metal provisioning event (secure erase), like when handing
>off the system to another tenant. That small scope plus the fact that
>none of this is available to a VM limits the potential damage. So,
>similar to the mitigation we did in [2] that did not kill off wbinvd
>completely, this is limited to specific scenarios and should be disabled
>in any scenario where wbinvd is painful / forbidden.
>
>[1]: 4c6926a23b76 ("acpi/nfit, libnvdimm: Add unlock of nvdimm support for Intel DIMMs")
>[2]: e2efb6359e62 ("ACPICA: Avoid cache flush inside virtual machines")

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-16 16:53                       ` Davidlohr Bueso
@ 2022-08-16 17:42                         ` Dan Williams
  -1 siblings, 0 replies; 79+ messages in thread
From: Dan Williams @ 2022-08-16 17:42 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Peter Zijlstra, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

On Tue, Aug 16, 2022 at 10:30 AM Davidlohr Bueso <dave@stgolabs.net> wrote:
>
> On Tue, 16 Aug 2022, Dan Williams wrote:
>
> >Peter Zijlstra wrote:
> >> On Mon, Aug 15, 2022 at 09:07:06AM -0700, Davidlohr Bueso wrote:
> >> > diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
> >> > index b192d917a6d0..ce2ec9556093 100644
> >> > --- a/arch/x86/include/asm/cacheflush.h
> >> > +++ b/arch/x86/include/asm/cacheflush.h
> >> > @@ -10,4 +10,7 @@
> >> >
> >> >  void clflush_cache_range(void *addr, unsigned int size);
> >> >
> >> > +#define flush_all_caches() \
> >> > +  do { wbinvd_on_all_cpus(); } while(0)
> >> > +
> >>
> >> This is horrific... we've done our utmost best to remove all WBINVD
> >> usage and here you're adding it back in the most horrible form possible
> >> ?!?
> >>
> >> Please don't do this, do *NOT* use WBINVD.
> >
> >Unfortunately there are few good options here, and the changelog did
> >not make clear that this is continuing legacy [1], not adding new wbinvd
> >usage.
>
> While I was hoping that it was obvious from the intel.c changes that this
> was not a new wbinvd, I can certainly improve the changelog with the below.

I also think this cache_flush_region() API wants a prominent comment
clarifying the limited applicability of this API. I.e. that it is not
for general purpose usage, not for VMs, and only for select bare metal
scenarios that instantaneously invalidate wide swaths of memory.
Otherwise, I can now see how this looks like a potentially scary
expansion of the usage of wbinvd.

^ permalink raw reply	[flat|nested] 79+ messages in thread
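
One possible shape for that comment, paraphrasing the constraints listed in
the message above; the wording is illustrative and not taken from any posted
patch:

/*
 * flush_all_caches() - write back and invalidate every cacheline on
 * every CPU.
 *
 * Not a general purpose API. It must not be relied on inside a VM and
 * is only appropriate for rare bare metal provisioning events (nvdimm
 * or CXL unlock, secure erase, overwrite) that instantaneously
 * invalidate wide swaths of memory behind the CPUs' backs.
 */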

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-16 17:42                         ` Dan Williams
@ 2022-08-16 17:52                           ` Davidlohr Bueso
  -1 siblings, 0 replies; 79+ messages in thread
From: Davidlohr Bueso @ 2022-08-16 17:52 UTC (permalink / raw)
  To: Dan Williams
  Cc: Peter Zijlstra, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

On Tue, 16 Aug 2022, Dan Williams wrote:

>On Tue, Aug 16, 2022 at 10:30 AM Davidlohr Bueso <dave@stgolabs.net> wrote:
>>
>> On Tue, 16 Aug 2022, Dan Williams wrote:
>>
>> >Peter Zijlstra wrote:
>> >> On Mon, Aug 15, 2022 at 09:07:06AM -0700, Davidlohr Bueso wrote:
>> >> > diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
>> >> > index b192d917a6d0..ce2ec9556093 100644
>> >> > --- a/arch/x86/include/asm/cacheflush.h
>> >> > +++ b/arch/x86/include/asm/cacheflush.h
>> >> > @@ -10,4 +10,7 @@
>> >> >
>> >> >  void clflush_cache_range(void *addr, unsigned int size);
>> >> >
>> >> > +#define flush_all_caches() \
>> >> > +  do { wbinvd_on_all_cpus(); } while(0)
>> >> > +
>> >>
>> >> This is horrific... we've done our utmost best to remove all WBINVD
>> >> usage and here you're adding it back in the most horrible form possible
>> >> ?!?
>> >>
>> >> Please don't do this, do *NOT* use WBINVD.
>> >
>> >Unfortunately there are few good options here, and the changelog did
>> >not make clear that this is continuing legacy [1], not adding new wbinvd
>> >usage.
>>
>> While I was hoping that it was obvious from the intel.c changes that this
>> was not a new wbinvd, I can certainly improve the changelog with the below.
>
>I also think this cache_flush_region() API wants a prominent comment
>clarifying the limited applicability of this API. I.e. that it is not
>for general purpose usage, not for VMs, and only for select bare metal
>scenarios that instantaneously invalidate wide swaths of memory.
>Otherwise, I can now see how this looks like a potentially scary
>expansion of the usage of wbinvd.

Sure.

Also, in the future we might be able to bypass this hammer in the presence
of persistent cpu caches.

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-16 17:52                           ` Davidlohr Bueso
@ 2022-08-16 18:49                             ` Dan Williams
  -1 siblings, 0 replies; 79+ messages in thread
From: Dan Williams @ 2022-08-16 18:49 UTC (permalink / raw)
  To: Davidlohr Bueso, Dan Williams
  Cc: Peter Zijlstra, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

Davidlohr Bueso wrote:
> On Tue, 16 Aug 2022, Dan Williams wrote:
> 
> >On Tue, Aug 16, 2022 at 10:30 AM Davidlohr Bueso <dave@stgolabs.net> wrote:
> >>
> >> On Tue, 16 Aug 2022, Dan Williams wrote:
> >>
> >> >Peter Zijlstra wrote:
> >> >> On Mon, Aug 15, 2022 at 09:07:06AM -0700, Davidlohr Bueso wrote:
> >> >> > diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
> >> >> > index b192d917a6d0..ce2ec9556093 100644
> >> >> > --- a/arch/x86/include/asm/cacheflush.h
> >> >> > +++ b/arch/x86/include/asm/cacheflush.h
> >> >> > @@ -10,4 +10,7 @@
> >> >> >
> >> >> >  void clflush_cache_range(void *addr, unsigned int size);
> >> >> >
> >> >> > +#define flush_all_caches() \
> >> >> > +  do { wbinvd_on_all_cpus(); } while(0)
> >> >> > +
> >> >>
> >> >> This is horrific... we've done our utmost best to remove all WBINVD
> >> >> usage and here you're adding it back in the most horrible form possible
> >> >> ?!?
> >> >>
> >> >> Please don't do this, do *NOT* use WBINVD.
> >> >
> >> >Unfortunately there are few good options here, and the changelog did
> >> >not make clear that this is continuing legacy [1], not adding new wbinvd
> >> >usage.
> >>
> >> While I was hoping that it was obvious from the intel.c changes that this
> >> was not a new wbinvd, I can certainly improve the changelog with the below.
> >
> >I also think this cache_flush_region() API wants a prominent comment
> >clarifying the limited applicability of this API. I.e. that it is not
> >for general purpose usage, not for VMs, and only for select bare metal
> >scenarios that instantaneously invalidate wide swaths of memory.
> >Otherwise, I can now see how this looks like a potentially scary
> >expansion of the usage of wbinvd.
> 
> Sure.
> 
> Also, in the future we might be able to bypass this hammer in the presence
> of persistent cpu caches.

What would have helped is if the secure-erase and unlock definition in
the specification mandated that the device emit cache invalidations for
everything it has mapped when it is erased. However, that has some
holes, and it also makes me think there is a gap in the current region
provisioning code. If I have device-A mapped at physical-address-X and then
tear that down and instantiate device-B at that same physical address
there needs to be CPU cache invalidation between those 2 events.

^ permalink raw reply	[flat|nested] 79+ messages in thread
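
A sketch of where the missing invalidation would sit in that device-A /
device-B scenario; the helper below is hypothetical and only illustrates the
ordering, it is not the actual region provisioning code:

/* Hypothetical helper, for illustration only. */
static int reuse_hpa_range(void)
{
	/* ... tear down device-A's decode of the physical range ... */

	/*
	 * Invalidate any cachelines still holding device-A's data for
	 * this range before device-B is mapped at the same address,
	 * otherwise stale data may be returned from the CPU caches.
	 */
	if (!flush_all_caches_capable())
		return -EINVAL;
	flush_all_caches();

	/* ... program device-B's decode of the same physical range ... */
	return 0;
}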

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-16 17:42                         ` Dan Williams
@ 2022-08-17  7:49                           ` Peter Zijlstra
  -1 siblings, 0 replies; 79+ messages in thread
From: Peter Zijlstra @ 2022-08-17  7:49 UTC (permalink / raw)
  To: Dan Williams
  Cc: Davidlohr Bueso, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

On Tue, Aug 16, 2022 at 10:42:03AM -0700, Dan Williams wrote:

> I also think this cache_flush_region() API wants a prominent comment
> clarifying the limited applicability of this API. I.e. that it is not
> for general purpose usage, not for VMs, and only for select bare metal
> scenarios that instantaneously invalidate wide swaths of memory.
> Otherwise, I can now see how this looks like a potentially scary
> expansion of the usage of wbinvd.

This; because adding a generic API like this makes it ripe for usage.
And this is absolutely the very last thing we want used.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: [PATCH] arch/cacheflush: Introduce flush_all_caches()
  2022-08-16 18:49                             ` Dan Williams
@ 2022-08-17  7:53                               ` Peter Zijlstra
  -1 siblings, 0 replies; 79+ messages in thread
From: Peter Zijlstra @ 2022-08-17  7:53 UTC (permalink / raw)
  To: Dan Williams
  Cc: Davidlohr Bueso, Mark Rutland, Dave Jiang, Jonathan Cameron,
	linux-cxl, nvdimm, bwidawsk, ira.weiny, vishal.l.verma,
	alison.schofield, a.manzanares, linux-arch, Arnd Bergmann,
	linux-arm-kernel, bp, x86, linux-kernel

On Tue, Aug 16, 2022 at 11:49:59AM -0700, Dan Williams wrote:

> What would have helped is if the secure-erase and unlock definition in
> the specification mandated that the device emit cache invalidations for
> everything it has mapped when it is erased. However, that has some
> holes, and it also makes me think there is a gap in the current region
> provisioning code. If I have device-A mapped at physical-address-X and then
> tear that down and instantiate device-B at that same physical address
> there needs to be CPU cache invalidation between those 2 events.

Can we pretty please get those holes fixed ASAP such that future
generations can avoid the WBINVD nonsense?

^ permalink raw reply	[flat|nested] 79+ messages in thread

end of thread, other threads:[~2022-08-17  7:55 UTC | newest]

Thread overview: 79+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-15 21:08 [PATCH RFC 00/15] Introduce security commands for CXL pmem device Dave Jiang
2022-07-15 21:08 ` [PATCH RFC 01/15] cxl/pmem: Introduce nvdimm_security_ops with ->get_flags() operation Dave Jiang
2022-07-15 21:09   ` Davidlohr Bueso
2022-08-03 16:29     ` Jonathan Cameron
2022-07-18  5:34   ` [PATCH RFC 1/15] " Davidlohr Bueso
2022-07-15 21:08 ` [PATCH RFC 02/15] tools/testing/cxl: Create context for cxl mock device Dave Jiang
2022-07-18  6:29   ` [PATCH RFC 2/15] " Davidlohr Bueso
2022-08-03 16:36   ` [PATCH RFC 02/15] " Jonathan Cameron
2022-08-09 20:30     ` Dave Jiang
2022-07-15 21:08 ` [PATCH RFC 03/15] tools/testing/cxl: Add "Get Security State" opcode support Dave Jiang
2022-08-03 16:51   ` Jonathan Cameron
2022-07-15 21:08 ` [PATCH RFC 04/15] cxl/pmem: Add "Set Passphrase" security command support Dave Jiang
2022-07-18  6:36   ` [PATCH RFC 4/15] " Davidlohr Bueso
2022-07-19 18:55     ` Dave Jiang
2022-08-03 17:01   ` [PATCH RFC 04/15] " Jonathan Cameron
2022-07-15 21:09 ` [PATCH RFC 05/15] tools/testing/cxl: Add "Set Passphrase" opcode support Dave Jiang
2022-08-03 17:15   ` Jonathan Cameron
2022-07-15 21:09 ` [PATCH RFC 06/15] cxl/pmem: Add Disable Passphrase security command support Dave Jiang
2022-08-03 17:21   ` Jonathan Cameron
2022-07-15 21:09 ` [PATCH RFC 07/15] tools/testing/cxl: Add "Disable" security opcode support Dave Jiang
2022-08-03 17:23   ` Jonathan Cameron
2022-07-15 21:09 ` [PATCH RFC 08/15] cxl/pmem: Add "Freeze Security State" security command support Dave Jiang
2022-08-03 17:23   ` Jonathan Cameron
2022-07-15 21:09 ` [PATCH RFC 09/15] tools/testing/cxl: Add "Freeze Security State" security opcode support Dave Jiang
2022-07-15 21:09 ` [PATCH RFC 10/15] x86: add an arch helper function to invalidate all cache for nvdimm Dave Jiang
2022-07-18  5:30   ` Davidlohr Bueso
2022-07-19 19:07     ` Dave Jiang
2022-08-03 17:37       ` Jonathan Cameron
2022-08-03 17:37         ` Jonathan Cameron
2022-08-09 21:47         ` Dave Jiang
2022-08-09 21:47           ` Dave Jiang
2022-08-10 14:15           ` Mark Rutland
2022-08-10 14:15             ` Mark Rutland
2022-08-10 14:31             ` Eliot Moss
2022-08-10 14:31               ` Eliot Moss
2022-08-10 18:09               ` Mark Rutland
2022-08-10 18:09                 ` Mark Rutland
2022-08-10 18:11                 ` Eliot Moss
2022-08-10 18:11                   ` Eliot Moss
2022-08-10 20:06             ` Dan Williams
2022-08-10 20:06               ` Dan Williams
2022-08-10 21:13               ` Davidlohr Bueso
2022-08-10 21:13                 ` Davidlohr Bueso
2022-08-10 21:30                 ` Dan Williams
2022-08-10 21:30                   ` Dan Williams
2022-08-10 21:31                   ` Davidlohr Bueso
2022-08-10 21:31                     ` Davidlohr Bueso
2022-08-15 16:07               ` [PATCH] arch/cacheflush: Introduce flush_all_caches() Davidlohr Bueso
2022-08-15 16:07                 ` Davidlohr Bueso
2022-08-16  9:01                 ` Peter Zijlstra
2022-08-16  9:01                   ` Peter Zijlstra
2022-08-16 16:50                   ` Dan Williams
2022-08-16 16:50                     ` Dan Williams
2022-08-16 16:53                     ` Davidlohr Bueso
2022-08-16 16:53                       ` Davidlohr Bueso
2022-08-16 17:42                       ` Dan Williams
2022-08-16 17:42                         ` Dan Williams
2022-08-16 17:52                         ` Davidlohr Bueso
2022-08-16 17:52                           ` Davidlohr Bueso
2022-08-16 18:49                           ` Dan Williams
2022-08-16 18:49                             ` Dan Williams
2022-08-17  7:53                             ` Peter Zijlstra
2022-08-17  7:53                               ` Peter Zijlstra
2022-08-17  7:49                         ` Peter Zijlstra
2022-08-17  7:49                           ` Peter Zijlstra
2022-07-15 21:09 ` [PATCH RFC 11/15] cxl/pmem: Add "Unlock" security command support Dave Jiang
2022-08-04 13:19   ` Jonathan Cameron
2022-08-09 22:31     ` Dave Jiang
2022-07-15 21:09 ` [PATCH RFC 12/15] tools/testing/cxl: Add "Unlock" security opcode support Dave Jiang
2022-07-15 21:09 ` [PATCH RFC 13/15] cxl/pmem: Add "Passphrase Secure Erase" security command support Dave Jiang
2022-07-20  6:17   ` Davidlohr Bueso
2022-07-20 17:38     ` Dave Jiang
2022-07-20 18:02       ` Davidlohr Bueso
2022-07-15 21:09 ` [PATCH RFC 14/15] tools/testing/cxl: Add "passphrase secure erase" opcode support Dave Jiang
2022-07-15 21:10 ` [PATCH RFC 15/15] nvdimm/cxl/pmem: Add support for master passphrase disable security command Dave Jiang
2022-07-15 21:29 ` [PATCH RFC 00/15] Introduce security commands for CXL pmem device Davidlohr Bueso
2022-07-19 18:53   ` Dave Jiang
2022-08-03 17:03 ` Jonathan Cameron
2022-08-08 22:18   ` Dave Jiang
