* [PATCH V4 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0
@ 2017-04-07 14:11 Christophe Lombard
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

This series adds support for a cxl card which supports the Coherent
Accelerator Interface Architecture 2.0.

It requires an IBM POWER9 system and the Power Service Layer, version 9.
The PSL provides the address translation and system memory cache for
CAIA-compliant accelerators.
The PSL attaches to the IBM processor chip through the PCIe link, using
PSL-specific "CAPI Protocol" Transaction Layer Packets.
The PSL and CAPP communicate using PowerBus packets.
When using a PCIe link, the PCIe Host Bridge (PHB) decodes the CAPI
Protocol packets from the PSL and forwards them as PowerBus data
packets. The PSL also has an optional DMA feature which allows the AFU
to send native PCIe reads and writes to the processor.

CAIA 2 introduces new features:
* Two programming models, Dedicated-Process and Shared, with many
similarities between them.
* DMA support.
* A nest MMU to handle address translation.
* ...

It builds on top of the existing cxl driver for the first version of
CAIA. Today only the bare-metal environment supports these new features.

Compatibility with CAIA version 1 allows applications and system
software to migrate from one implementation to another with minor
changes.
The main differences are:
* Power Service Layer registers: p1 and p2 registers. These new
registers require reworking the service layer API (in cxl.h).
* Support for Radix mode. POWER9 supports multiple memory management
models, so the right translation mechanism must be selected.
* Dedicated-Shared Process Programming Model.
* Process element entry. The structure cxl_process_element_common is
redefined.
* Translation fault handling. The cxl driver now only handles page
faults when a translation fault occurs.

Roughly 3/4 of the code is common between the two CAIA versions. When
the code needs to call a specific implementation, it does so
through an API. The PSL8 and PSL9 implementations each provide their
own definition; see struct cxl_service_layer_ops.
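
As an illustration, the common code never calls an implementation-specific
function directly; it goes through the ops table attached to the adapter,
as in this hunk from patch 4:

	if ((ctx->afu->current_mode == CXL_MODE_DIRECTED) &&
	    (ctx->afu->adapter->native->sl_ops->attach_afu_directed))
		return ctx->afu->adapter->native->sl_ops->attach_afu_directed(ctx, wed, amr);

Each service layer then fills in its own table (psl_ops, renamed psl8_ops
in patch 5, and xsl_ops in pci.c); patch 7 adds the psl9 equivalent.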

The first 3 patches are mostly cleanups and fixes, separating the
psl8-specific code from the code which will also be used for psl9.
Patch 4 restructures existing code to make it easy to add the new psl
implementation.
Patches 5 and 6 rename and isolate implementation-specific code.
Patch 7 introduces the core of the PSL9-specific code.

Tested in a simulation environment.

Changelog[v4]
 - Rebase to latest upstream.
 - Integrate comments from Andrew Donnellan and Frederic Barrat.
 - patch2: - Update the structure cxl_irq_info.
	     Update the commit message.
 - patch3: - Update the commit message.
	     Remove the prototype cxl_context_mm_users_get() in cxl.h
	     The function no longer exists.
 - patch4: - Some callbacks were missing from the xsl_ops structure
	   - Rework the function native_irq_multiplexed()
 - patch6: - Remove code lines that will be going away in the next
	     patch.
 - patch7: - Rename the function process_element_entry() to
	     process_element_entry_psl9()
	   - Change the setting of the PSL_SERR_An register.
	   - Update cxl documentation.

Changelog[v3]
 - Rebase to latest upstream.
 - Integrate comments from Andrew Donnellan and Frederic Barrat.
 - patch2: - Rename pid and tid to "reserved" in the struct cxl_irq_info.
 - patch3: - Update commit message.
	   - Reset ctx->mm to NULL.
	   - Simplify slightly the function _cxl_slbia() using the mm
	     associated to a context.
	   - Remove cxl_context_mm_users_get().
 - patch4: - Some prototypes are not supposed to depend on CONFIG_DEBUG_FS.
 - patch6: - Regroup the sste_lock and sst alloc under the same "if"
	     statement. 
 - patch7: - New functions to cover page fault and segment miss.
	   - Rework the code to avoid duplication.
	   - Add a new parameter for the function cxl_alloc_spa().
	   - Invalidation of all ERAT entries is no longer required by
	     CAIA2.
	   - Keep original version of cxl_native_register_serr_irq().
	   - ASB_Notify messages and Non-Blocking queues not supported
	     on DD1.
	   - Change the allocation of the apc machines.

Changelog[v2]
 - Rebase to latest upstream.
 - Integrate comments from Andrew Donnellan and Frederic Barrat.

Christophe Lombard (7):
  cxl: Read vsec perst load image
  cxl: Remove unused values in bare-metal environment.
  cxl: Keep track of mm struct associated with a context
  cxl: Update implementation service layer
  cxl: Rename some psl8 specific functions
  cxl: Isolate few psl8 specific calls
  cxl: Add psl9 specific code

 Documentation/powerpc/cxl.txt |  11 +-
 drivers/misc/cxl/api.c        |  17 +-
 drivers/misc/cxl/context.c    |  65 ++++++--
 drivers/misc/cxl/cxl.h        | 244 +++++++++++++++++++++-------
 drivers/misc/cxl/debugfs.c    |  41 +++--
 drivers/misc/cxl/fault.c      | 136 ++++++----------
 drivers/misc/cxl/file.c       |  15 +-
 drivers/misc/cxl/guest.c      |  10 +-
 drivers/misc/cxl/hcalls.c     |   6 +-
 drivers/misc/cxl/irq.c        |  55 ++++++-
 drivers/misc/cxl/main.c       |  12 +-
 drivers/misc/cxl/native.c     | 323 +++++++++++++++++++++++++++++++------
 drivers/misc/cxl/pci.c        | 364 ++++++++++++++++++++++++++++++++++++------
 drivers/misc/cxl/trace.h      |  43 +++++
 14 files changed, 1043 insertions(+), 299 deletions(-)

-- 
2.7.4


* [PATCH V4 1/7] cxl: Read vsec perst load image
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

This bit is used to cause a flash image load for a programmable
CAIA-compliant implementation. If this bit is set to '0', a power
cycle of the adapter is required to load a programmable CAIA-compliant
implementation from flash.
This field will be used by the following patches.

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
 drivers/misc/cxl/pci.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index b27ea98..1f4c351 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -1332,6 +1332,7 @@ static int cxl_read_vsec(struct cxl *adapter, struct pci_dev *dev)
 	CXL_READ_VSEC_IMAGE_STATE(dev, vsec, &image_state);
 	adapter->user_image_loaded = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
 	adapter->perst_select_user = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
+	adapter->perst_loads_image = !!(image_state & CXL_VSEC_PERST_LOADS_IMAGE);
 
 	CXL_READ_VSEC_NAFUS(dev, vsec, &adapter->slices);
 	CXL_READ_VSEC_AFU_DESC_OFF(dev, vsec, &afu_desc_off);
-- 
2.7.4


* [PATCH V4 2/7] cxl: Remove unused values in bare-metal environment.
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

The fields pid and tid of the structure cxl_irq_info are only used in
the guest environment, so there is no need to fill them in the
bare-metal environment. To avoid confusion and undefined behavior on
bare-metal, they are replaced by a single field named 'reserved'.
The PSL Process and Thread Identification Register (CXL_PSL_PID_TID_An)
is only used when attaching a dedicated process on PSL8. This register
goes away in CAIA2.
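
For reference, a guest that needs the two values can still recover them
from the combined register image along these lines (illustrative only,
not part of this patch; info is a struct cxl_irq_info pointer):

	u32 pid = info->reserved >> 32;         /* PSL_PID_An: upper 32 bits */
	u32 tid = info->reserved & 0xffffffff;  /* PSL_TID_An: lower 32 bits */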

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
 drivers/misc/cxl/cxl.h    | 20 ++++----------------
 drivers/misc/cxl/hcalls.c |  6 +++---
 drivers/misc/cxl/native.c |  5 -----
 3 files changed, 7 insertions(+), 24 deletions(-)

diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 79e60ec..36bc213 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -888,27 +888,15 @@ int __detach_context(struct cxl_context *ctx);
 /*
  * This must match the layout of the H_COLLECT_CA_INT_INFO retbuf defined
  * in PAPR.
- * A word about endianness: a pointer to this structure is passed when
- * calling the hcall. However, it is not a block of memory filled up by
- * the hypervisor. The return values are found in registers, and copied
- * one by one when returning from the hcall. See the end of the call to
- * plpar_hcall9() in hvCall.S
- * As a consequence:
- * - we don't need to do any endianness conversion
- * - the pid and tid are an exception. They are 32-bit values returned in
- *   the same 64-bit register. So we do need to worry about byte ordering.
+ * Field pid_tid is now 'reserved' because it's no more used on bare-metal.
+ * On a guest environment, PSL_PID_An is located on the upper 32 bits and
+ * PSL_TID_An register in the lower 32 bits.
  */
 struct cxl_irq_info {
 	u64 dsisr;
 	u64 dar;
 	u64 dsr;
-#ifndef CONFIG_CPU_LITTLE_ENDIAN
-	u32 pid;
-	u32 tid;
-#else
-	u32 tid;
-	u32 pid;
-#endif
+	u64 reserved;
 	u64 afu_err;
 	u64 errstat;
 	u64 proc_handle;
diff --git a/drivers/misc/cxl/hcalls.c b/drivers/misc/cxl/hcalls.c
index d6d11f4..9b8bb0f 100644
--- a/drivers/misc/cxl/hcalls.c
+++ b/drivers/misc/cxl/hcalls.c
@@ -413,9 +413,9 @@ long cxl_h_collect_int_info(u64 unit_address, u64 process_token,
 
 	switch (rc) {
 	case H_SUCCESS:     /* The interrupt info is returned in return registers. */
-		pr_devel("dsisr:%#llx, dar:%#llx, dsr:%#llx, pid:%u, tid:%u, afu_err:%#llx, errstat:%#llx\n",
-			info->dsisr, info->dar, info->dsr, info->pid,
-			info->tid, info->afu_err, info->errstat);
+		pr_devel("dsisr:%#llx, dar:%#llx, dsr:%#llx, pid_tid:%#llx, afu_err:%#llx, errstat:%#llx\n",
+			info->dsisr, info->dar, info->dsr, info->reserved,
+			info->afu_err, info->errstat);
 		return 0;
 	case H_PARAMETER:   /* An incorrect parameter was supplied. */
 		return -EINVAL;
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 7ae7105..7257e8b 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -859,8 +859,6 @@ static int native_detach_process(struct cxl_context *ctx)
 
 static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
 {
-	u64 pidtid;
-
 	/* If the adapter has gone away, we can't get any meaningful
 	 * information.
 	 */
@@ -870,9 +868,6 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
 	info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
 	info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
 	info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
-	pidtid = cxl_p2n_read(afu, CXL_PSL_PID_TID_An);
-	info->pid = pidtid >> 32;
-	info->tid = pidtid & 0xffffffff;
 	info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
 	info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
 	info->proc_handle = 0;
-- 
2.7.4


* [PATCH V4 3/7] cxl: Keep track of mm struct associated with a context
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

Currently the mm_struct corresponding to the current task is acquired
each time an interrupt is raised. To simplify the code, we now only get
the mm_struct when attaching an AFU context to the process.
The mm_count reference is increased to ensure that the mm_struct can't
be freed. The mm_struct will be released when the context is detached.
A reference on mm_users is not kept, to avoid a circular dependency if
the process mmaps its cxl mmio and forgets to unmap it before exiting.
The field glpid (pid of the group leader associated with the pid) of
the structure cxl_context is removed because it's no longer useful.
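
The resulting pattern, condensed from the hunks below:

	/* attach: pin the mm_struct itself, but not its address space */
	ctx->mm = get_task_mm(current);    /* takes an mm_users reference */
	cxl_context_mm_count_get(ctx);     /* atomic_inc(&ctx->mm->mm_count) */
	if (ctx->mm)
		mmput(ctx->mm);            /* drop mm_users again */

	/* detach: release the pin taken at attach time */
	cxl_context_mm_count_put(ctx);     /* mmdrop(ctx->mm) */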

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
 drivers/misc/cxl/api.c     | 17 +++++++++--
 drivers/misc/cxl/context.c | 21 +++++++++++--
 drivers/misc/cxl/cxl.h     | 10 ++++--
 drivers/misc/cxl/fault.c   | 76 ++++------------------------------------------
 drivers/misc/cxl/file.c    | 15 +++++++--
 drivers/misc/cxl/main.c    | 12 ++------
 6 files changed, 61 insertions(+), 90 deletions(-)

diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
index bcc030e..1a138c8 100644
--- a/drivers/misc/cxl/api.c
+++ b/drivers/misc/cxl/api.c
@@ -14,6 +14,7 @@
 #include <linux/msi.h>
 #include <linux/module.h>
 #include <linux/mount.h>
+#include <linux/sched/mm.h>
 
 #include "cxl.h"
 
@@ -321,19 +322,29 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
 
 	if (task) {
 		ctx->pid = get_task_pid(task, PIDTYPE_PID);
-		ctx->glpid = get_task_pid(task->group_leader, PIDTYPE_PID);
 		kernel = false;
 		ctx->real_mode = false;
+
+		/* acquire a reference to the task's mm */
+		ctx->mm = get_task_mm(current);
+
+		/* ensure this mm_struct can't be freed */
+		cxl_context_mm_count_get(ctx);
+
+		/* decrement the use count */
+		if (ctx->mm)
+			mmput(ctx->mm);
 	}
 
 	cxl_ctx_get();
 
 	if ((rc = cxl_ops->attach_process(ctx, kernel, wed, 0))) {
-		put_pid(ctx->glpid);
 		put_pid(ctx->pid);
-		ctx->glpid = ctx->pid = NULL;
+		ctx->pid = NULL;
 		cxl_adapter_context_put(ctx->afu->adapter);
 		cxl_ctx_put();
+		if (task)
+			cxl_context_mm_count_put(ctx);
 		goto out;
 	}
 
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index 062bf6c..2e935ea 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -17,6 +17,7 @@
 #include <linux/debugfs.h>
 #include <linux/slab.h>
 #include <linux/idr.h>
+#include <linux/sched/mm.h>
 #include <asm/cputable.h>
 #include <asm/current.h>
 #include <asm/copro.h>
@@ -41,7 +42,7 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
 	spin_lock_init(&ctx->sste_lock);
 	ctx->afu = afu;
 	ctx->master = master;
-	ctx->pid = ctx->glpid = NULL; /* Set in start work ioctl */
+	ctx->pid = NULL; /* Set in start work ioctl */
 	mutex_init(&ctx->mapping_lock);
 	ctx->mapping = NULL;
 
@@ -242,12 +243,16 @@ int __detach_context(struct cxl_context *ctx)
 
 	/* release the reference to the group leader and mm handling pid */
 	put_pid(ctx->pid);
-	put_pid(ctx->glpid);
 
 	cxl_ctx_put();
 
 	/* Decrease the attached context count on the adapter */
 	cxl_adapter_context_put(ctx->afu->adapter);
+
+	/* Decrease the mm count on the context */
+	cxl_context_mm_count_put(ctx);
+	ctx->mm = NULL;
+
 	return 0;
 }
 
@@ -325,3 +330,15 @@ void cxl_context_free(struct cxl_context *ctx)
 	mutex_unlock(&ctx->afu->contexts_lock);
 	call_rcu(&ctx->rcu, reclaim_ctx);
 }
+
+void cxl_context_mm_count_get(struct cxl_context *ctx)
+{
+	if (ctx->mm)
+		atomic_inc(&ctx->mm->mm_count);
+}
+
+void cxl_context_mm_count_put(struct cxl_context *ctx)
+{
+	if (ctx->mm)
+		mmdrop(ctx->mm);
+}
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 36bc213..4bcbf7a 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -482,8 +482,6 @@ struct cxl_context {
 	unsigned int sst_size, sst_lru;
 
 	wait_queue_head_t wq;
-	/* pid of the group leader associated with the pid */
-	struct pid *glpid;
 	/* use mm context associated with this pid for ds faults */
 	struct pid *pid;
 	spinlock_t lock; /* Protects pending_irq_mask, pending_fault and fault_addr */
@@ -551,6 +549,8 @@ struct cxl_context {
 	 * CX4 only:
 	 */
 	struct list_head extra_irq_contexts;
+
+	struct mm_struct *mm;
 };
 
 struct cxl_service_layer_ops {
@@ -1012,4 +1012,10 @@ int cxl_adapter_context_lock(struct cxl *adapter);
 /* Unlock the contexts-lock if taken. Warn and force unlock otherwise */
 void cxl_adapter_context_unlock(struct cxl *adapter);
 
+/* Increases the reference count to "struct mm_struct" */
+void cxl_context_mm_count_get(struct cxl_context *ctx);
+
+/* Decrements the reference count to "struct mm_struct" */
+void cxl_context_mm_count_put(struct cxl_context *ctx);
+
 #endif
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index 2fa015c..e6f8f05 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -170,81 +170,18 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
 }
 
 /*
- * Returns the mm_struct corresponding to the context ctx via ctx->pid
- * In case the task has exited we use the task group leader accessible
- * via ctx->glpid to find the next task in the thread group that has a
- * valid  mm_struct associated with it. If a task with valid mm_struct
- * is found the ctx->pid is updated to use the task struct for subsequent
- * translations. In case no valid mm_struct is found in the task group to
- * service the fault a NULL is returned.
+ * Returns the mm_struct corresponding to the context ctx.
+ * mm_users == 0, the context may be in the process of being closed.
  */
 static struct mm_struct *get_mem_context(struct cxl_context *ctx)
 {
-	struct task_struct *task = NULL;
-	struct mm_struct *mm = NULL;
-	struct pid *old_pid = ctx->pid;
-
-	if (old_pid == NULL) {
-		pr_warn("%s: Invalid context for pe=%d\n",
-			 __func__, ctx->pe);
+	if (ctx->mm == NULL)
 		return NULL;
-	}
-
-	task = get_pid_task(old_pid, PIDTYPE_PID);
-
-	/*
-	 * pid_alive may look racy but this saves us from costly
-	 * get_task_mm when the task is a zombie. In worst case
-	 * we may think a task is alive, which is about to die
-	 * but get_task_mm will return NULL.
-	 */
-	if (task != NULL && pid_alive(task))
-		mm = get_task_mm(task);
 
-	/* release the task struct that was taken earlier */
-	if (task)
-		put_task_struct(task);
-	else
-		pr_devel("%s: Context owning pid=%i for pe=%i dead\n",
-			__func__, pid_nr(old_pid), ctx->pe);
-
-	/*
-	 * If we couldn't find the mm context then use the group
-	 * leader to iterate over the task group and find a task
-	 * that gives us mm_struct.
-	 */
-	if (unlikely(mm == NULL && ctx->glpid != NULL)) {
-
-		rcu_read_lock();
-		task = pid_task(ctx->glpid, PIDTYPE_PID);
-		if (task)
-			do {
-				mm = get_task_mm(task);
-				if (mm) {
-					ctx->pid = get_task_pid(task,
-								PIDTYPE_PID);
-					break;
-				}
-				task = next_thread(task);
-			} while (task && !thread_group_leader(task));
-		rcu_read_unlock();
-
-		/* check if we switched pid */
-		if (ctx->pid != old_pid) {
-			if (mm)
-				pr_devel("%s:pe=%i switch pid %i->%i\n",
-					 __func__, ctx->pe, pid_nr(old_pid),
-					 pid_nr(ctx->pid));
-			else
-				pr_devel("%s:Cannot find mm for pid=%i\n",
-					 __func__, pid_nr(old_pid));
-
-			/* drop the reference to older pid */
-			put_pid(old_pid);
-		}
-	}
+	if (!atomic_inc_not_zero(&ctx->mm->mm_users))
+		return NULL;
 
-	return mm;
+	return ctx->mm;
 }
 
 
@@ -282,7 +219,6 @@ void cxl_handle_fault(struct work_struct *fault_work)
 	if (!ctx->kernel) {
 
 		mm = get_mem_context(ctx);
-		/* indicates all the thread in task group have exited */
 		if (mm == NULL) {
 			pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
 				 __func__, ctx->pe, pid_nr(ctx->pid));
diff --git a/drivers/misc/cxl/file.c b/drivers/misc/cxl/file.c
index e7139c7..17b433f 100644
--- a/drivers/misc/cxl/file.c
+++ b/drivers/misc/cxl/file.c
@@ -18,6 +18,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
+#include <linux/sched/mm.h>
 #include <asm/cputable.h>
 #include <asm/current.h>
 #include <asm/copro.h>
@@ -216,8 +217,16 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
 	 * process is still accessible.
 	 */
 	ctx->pid = get_task_pid(current, PIDTYPE_PID);
-	ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);
 
+	/* acquire a reference to the task's mm */
+	ctx->mm = get_task_mm(current);
+
+	/* ensure this mm_struct can't be freed */
+	cxl_context_mm_count_get(ctx);
+
+	/* decrement the use count */
+	if (ctx->mm)
+		mmput(ctx->mm);
 
 	trace_cxl_attach(ctx, work.work_element_descriptor, work.num_interrupts, amr);
 
@@ -225,9 +234,9 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
 							amr))) {
 		afu_release_irqs(ctx, ctx);
 		cxl_adapter_context_put(ctx->afu->adapter);
-		put_pid(ctx->glpid);
 		put_pid(ctx->pid);
-		ctx->glpid = ctx->pid = NULL;
+		ctx->pid = NULL;
+		cxl_context_mm_count_put(ctx);
 		goto out;
 	}
 
diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
index b0b6ed3..1703655 100644
--- a/drivers/misc/cxl/main.c
+++ b/drivers/misc/cxl/main.c
@@ -59,16 +59,10 @@ int cxl_afu_slbia(struct cxl_afu *afu)
 
 static inline void _cxl_slbia(struct cxl_context *ctx, struct mm_struct *mm)
 {
-	struct task_struct *task;
 	unsigned long flags;
-	if (!(task = get_pid_task(ctx->pid, PIDTYPE_PID))) {
-		pr_devel("%s unable to get task %i\n",
-			 __func__, pid_nr(ctx->pid));
-		return;
-	}
 
-	if (task->mm != mm)
-		goto out_put;
+	if (ctx->mm != mm)
+		return;
 
 	pr_devel("%s matched mm - card: %i afu: %i pe: %i\n", __func__,
 		 ctx->afu->adapter->adapter_num, ctx->afu->slice, ctx->pe);
@@ -79,8 +73,6 @@ static inline void _cxl_slbia(struct cxl_context *ctx, struct mm_struct *mm)
 	spin_unlock_irqrestore(&ctx->sste_lock, flags);
 	mb();
 	cxl_afu_slbia(ctx->afu);
-out_put:
-	put_task_struct(task);
 }
 
 static inline void cxl_slbia_core(struct mm_struct *mm)
-- 
2.7.4


* [PATCH V4 4/7] cxl: Update implementation service layer
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

The service layer API (in cxl.h) lists some low-level functions whose
implementation is different on PSL8, PSL9 and XSL:
- Init implementation for the adapter and the AFU.
- Invalidate TLB/SLB.
- Attach process for dedicated/directed models.
- Handle PSL interrupts.
- Debug registers for the adapter and the AFU.
- Traces.
Each environment implements its own functions, and the common code uses
them through function pointers, defined in cxl_service_layer_ops.
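
A hook left NULL by a given service layer is simply skipped: callers
check the pointer before calling, as in this hunk from pci.c:

	if (adapter->native->sl_ops->sanitise_afu_regs) {
		rc = adapter->native->sl_ops->sanitise_afu_regs(afu);
		if (rc)
			goto err1;
	}

This is what allows the xsl_ops table to leave out the hooks it does not
need (for example afu_regs_init or register_serr_irq).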

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
 drivers/misc/cxl/cxl.h     | 40 +++++++++++++++++++++++----------
 drivers/misc/cxl/debugfs.c | 16 +++++++-------
 drivers/misc/cxl/guest.c   |  2 +-
 drivers/misc/cxl/irq.c     |  2 +-
 drivers/misc/cxl/native.c  | 54 ++++++++++++++++++++++++++-------------------
 drivers/misc/cxl/pci.c     | 55 +++++++++++++++++++++++++++++++++-------------
 6 files changed, 110 insertions(+), 59 deletions(-)

diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 4bcbf7a..626073d 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -553,13 +553,23 @@ struct cxl_context {
 	struct mm_struct *mm;
 };
 
+struct cxl_irq_info;
+
 struct cxl_service_layer_ops {
 	int (*adapter_regs_init)(struct cxl *adapter, struct pci_dev *dev);
+	int (*invalidate_all)(struct cxl *adapter);
 	int (*afu_regs_init)(struct cxl_afu *afu);
+	int (*sanitise_afu_regs)(struct cxl_afu *afu);
 	int (*register_serr_irq)(struct cxl_afu *afu);
 	void (*release_serr_irq)(struct cxl_afu *afu);
-	void (*debugfs_add_adapter_sl_regs)(struct cxl *adapter, struct dentry *dir);
-	void (*debugfs_add_afu_sl_regs)(struct cxl_afu *afu, struct dentry *dir);
+	irqreturn_t (*handle_interrupt)(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+	irqreturn_t (*fail_irq)(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
+	int (*activate_dedicated_process)(struct cxl_afu *afu);
+	int (*attach_afu_directed)(struct cxl_context *ctx, u64 wed, u64 amr);
+	int (*attach_dedicated_process)(struct cxl_context *ctx, u64 wed, u64 amr);
+	void (*update_dedicated_ivtes)(struct cxl_context *ctx);
+	void (*debugfs_add_adapter_regs)(struct cxl *adapter, struct dentry *dir);
+	void (*debugfs_add_afu_regs)(struct cxl_afu *afu, struct dentry *dir);
 	void (*psl_irq_dump_registers)(struct cxl_context *ctx);
 	void (*err_irq_dump_registers)(struct cxl *adapter);
 	void (*debugfs_stop_trace)(struct cxl *adapter);
@@ -803,6 +813,11 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
 void afu_release_irqs(struct cxl_context *ctx, void *cookie);
 void afu_irq_name_free(struct cxl_context *ctx);
 
+int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr);
+int cxl_activate_dedicated_process_psl(struct cxl_afu *afu);
+int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr);
+void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx);
+
 #ifdef CONFIG_DEBUG_FS
 
 int cxl_debugfs_init(void);
@@ -811,10 +826,10 @@ int cxl_debugfs_adapter_add(struct cxl *adapter);
 void cxl_debugfs_adapter_remove(struct cxl *adapter);
 int cxl_debugfs_afu_add(struct cxl_afu *afu);
 void cxl_debugfs_afu_remove(struct cxl_afu *afu);
-void cxl_stop_trace(struct cxl *cxl);
-void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir);
-void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir);
-void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir);
+void cxl_stop_trace_psl(struct cxl *cxl);
+void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir);
 
 #else /* CONFIG_DEBUG_FS */
 
@@ -849,17 +864,17 @@ static inline void cxl_stop_trace(struct cxl *cxl)
 {
 }
 
-static inline void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter,
+static inline void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter,
 						    struct dentry *dir)
 {
 }
 
-static inline void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter,
+static inline void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter,
 						    struct dentry *dir)
 {
 }
 
-static inline void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir)
+static inline void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
 {
 }
 
@@ -904,19 +919,20 @@ struct cxl_irq_info {
 };
 
 void cxl_assign_psn_space(struct cxl_context *ctx);
-irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+int cxl_invalidate_all_psl(struct cxl *adapter);
+irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
 int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
 			void *cookie, irq_hw_number_t *dest_hwirq,
 			unsigned int *dest_virq, const char *name);
 
 int cxl_check_error(struct cxl_afu *afu);
 int cxl_afu_slbia(struct cxl_afu *afu);
-int cxl_tlb_slb_invalidate(struct cxl *adapter);
 int cxl_data_cache_flush(struct cxl *adapter);
 int cxl_afu_disable(struct cxl_afu *afu);
 int cxl_psl_purge(struct cxl_afu *afu);
 
-void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx);
+void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx);
 void cxl_native_err_irq_dump_regs(struct cxl *adapter);
 int cxl_pci_vphb_add(struct cxl_afu *afu);
 void cxl_pci_vphb_remove(struct cxl_afu *afu);
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 9c06ac8..4848ebf 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -15,7 +15,7 @@
 
 static struct dentry *cxl_debugfs;
 
-void cxl_stop_trace(struct cxl *adapter)
+void cxl_stop_trace_psl(struct cxl *adapter)
 {
 	int slice;
 
@@ -53,7 +53,7 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
 					  (void __force *)value, &fops_io_x64);
 }
 
-void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
+void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir)
 {
 	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
 	debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
@@ -61,7 +61,7 @@ void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
 	debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_TRACE));
 }
 
-void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir)
+void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir)
 {
 	debugfs_create_io_x64("fec", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_XSL_FEC));
 }
@@ -82,8 +82,8 @@ int cxl_debugfs_adapter_add(struct cxl *adapter)
 
 	debugfs_create_io_x64("err_ivte", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_ErrIVTE));
 
-	if (adapter->native->sl_ops->debugfs_add_adapter_sl_regs)
-		adapter->native->sl_ops->debugfs_add_adapter_sl_regs(adapter, dir);
+	if (adapter->native->sl_ops->debugfs_add_adapter_regs)
+		adapter->native->sl_ops->debugfs_add_adapter_regs(adapter, dir);
 	return 0;
 }
 
@@ -92,7 +92,7 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
 	debugfs_remove_recursive(adapter->debugfs);
 }
 
-void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir)
+void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
 {
 	debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
 	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
@@ -121,8 +121,8 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
 	debugfs_create_io_x64("sstp1",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
 	debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
 
-	if (afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs)
-		afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs(afu, dir);
+	if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
+		afu->adapter->native->sl_ops->debugfs_add_afu_regs(afu, dir);
 
 	return 0;
 }
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index e04bc4d..f6ba698 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -169,7 +169,7 @@ static irqreturn_t guest_psl_irq(int irq, void *data)
 		return IRQ_HANDLED;
 	}
 
-	rc = cxl_irq(irq, ctx, &irq_info);
+	rc = cxl_irq_psl(irq, ctx, &irq_info);
 	return rc;
 }
 
diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
index 1a402bb..2fa119e 100644
--- a/drivers/misc/cxl/irq.c
+++ b/drivers/misc/cxl/irq.c
@@ -34,7 +34,7 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
 	return IRQ_HANDLED;
 }
 
-irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
+irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
 {
 	u64 dsisr, dar;
 
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 7257e8b..c147863e 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -258,7 +258,7 @@ void cxl_release_spa(struct cxl_afu *afu)
 	}
 }
 
-int cxl_tlb_slb_invalidate(struct cxl *adapter)
+int cxl_invalidate_all_psl(struct cxl *adapter)
 {
 	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
 
@@ -578,7 +578,7 @@ static void update_ivtes_directed(struct cxl_context *ctx)
 		WARN_ON(add_process_element(ctx));
 }
 
-static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	u32 pid;
 	int result;
@@ -671,7 +671,7 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
 	return 0;
 }
 
-static int activate_dedicated_process(struct cxl_afu *afu)
+int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
 {
 	dev_info(&afu->dev, "Activating dedicated process mode\n");
 
@@ -694,7 +694,7 @@ static int activate_dedicated_process(struct cxl_afu *afu)
 	return cxl_chardev_d_afu_add(afu);
 }
 
-static void update_ivtes_dedicated(struct cxl_context *ctx)
+void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
 {
 	struct cxl_afu *afu = ctx->afu;
 
@@ -710,7 +710,7 @@ static void update_ivtes_dedicated(struct cxl_context *ctx)
 			((u64)ctx->irqs.range[3] & 0xffff));
 }
 
-static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	struct cxl_afu *afu = ctx->afu;
 	u64 pid;
@@ -728,7 +728,8 @@ static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
 
 	cxl_prefault(ctx, wed);
 
-	update_ivtes_dedicated(ctx);
+	if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
+		afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
 
 	cxl_p2n_write(afu, CXL_PSL_AMR_An, amr);
 
@@ -778,8 +779,9 @@ static int native_afu_activate_mode(struct cxl_afu *afu, int mode)
 
 	if (mode == CXL_MODE_DIRECTED)
 		return activate_afu_directed(afu);
-	if (mode == CXL_MODE_DEDICATED)
-		return activate_dedicated_process(afu);
+	if ((mode == CXL_MODE_DEDICATED) &&
+	    (afu->adapter->native->sl_ops->activate_dedicated_process))
+		return afu->adapter->native->sl_ops->activate_dedicated_process(afu);
 
 	return -EINVAL;
 }
@@ -793,11 +795,13 @@ static int native_attach_process(struct cxl_context *ctx, bool kernel,
 	}
 
 	ctx->kernel = kernel;
-	if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
-		return attach_afu_directed(ctx, wed, amr);
+	if ((ctx->afu->current_mode == CXL_MODE_DIRECTED) &&
+	    (ctx->afu->adapter->native->sl_ops->attach_afu_directed))
+		return ctx->afu->adapter->native->sl_ops->attach_afu_directed(ctx, wed, amr);
 
-	if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
-		return attach_dedicated(ctx, wed, amr);
+	if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
+	    (ctx->afu->adapter->native->sl_ops->attach_dedicated_process))
+		return ctx->afu->adapter->native->sl_ops->attach_dedicated_process(ctx, wed, amr);
 
 	return -EINVAL;
 }
@@ -830,8 +834,9 @@ static void native_update_ivtes(struct cxl_context *ctx)
 {
 	if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
 		return update_ivtes_directed(ctx);
-	if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
-		return update_ivtes_dedicated(ctx);
+	if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
+	    (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes))
+		return ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
 	WARN(1, "native_update_ivtes: Bad mode\n");
 }
 
@@ -875,7 +880,7 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
 	return 0;
 }
 
-void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx)
+void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx)
 {
 	u64 fir1, fir2, fir_slice, serr, afu_debug;
 
@@ -911,7 +916,7 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
 	return cxl_ops->ack_irq(ctx, 0, errstat);
 }
 
-static irqreturn_t fail_psl_irq(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
+irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
 {
 	if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
 		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
@@ -927,7 +932,7 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
 	struct cxl_context *ctx;
 	struct cxl_irq_info irq_info;
 	u64 phreg = cxl_p2n_read(afu, CXL_PSL_PEHandle_An);
-	int ph, ret;
+	int ph, ret = IRQ_HANDLED, res;
 
 	/* check if eeh kicked in while the interrupt was in flight */
 	if (unlikely(phreg == ~0ULL)) {
@@ -938,15 +943,18 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
 	}
 	/* Mask the pe-handle from register value */
 	ph = phreg & 0xffff;
-	if ((ret = native_get_irq_info(afu, &irq_info))) {
-		WARN(1, "Unable to get CXL IRQ Info: %i\n", ret);
-		return fail_psl_irq(afu, &irq_info);
+	if ((res = native_get_irq_info(afu, &irq_info))) {
+		WARN(1, "Unable to get CXL IRQ Info: %i\n", res);
+		if (afu->adapter->native->sl_ops->fail_irq)
+			return afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
+		return ret;
 	}
 
 	rcu_read_lock();
 	ctx = idr_find(&afu->contexts_idr, ph);
 	if (ctx) {
-		ret = cxl_irq(irq, ctx, &irq_info);
+		if (afu->adapter->native->sl_ops->handle_interrupt)
+			ret = afu->adapter->native->sl_ops->handle_interrupt(irq, ctx, &irq_info);
 		rcu_read_unlock();
 		return ret;
 	}
@@ -956,7 +964,9 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
 		" %016llx\n(Possible AFU HW issue - was a term/remove acked"
 		" with outstanding transactions?)\n", ph, irq_info.dsisr,
 		irq_info.dar);
-	return fail_psl_irq(afu, &irq_info);
+	if (afu->adapter->native->sl_ops->fail_irq)
+		ret = afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
+	return ret;
 }
 
 static void native_irq_wait(struct cxl_context *ctx)
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 1f4c351..e9c679e 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -377,7 +377,7 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
 	return 0;
 }
 
-static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_dev *dev)
+static int init_implementation_adapter_regs_psl(struct cxl *adapter, struct pci_dev *dev)
 {
 	u64 psl_dsnctl, psl_fircntl;
 	u64 chipid;
@@ -409,7 +409,7 @@ static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_
 	return 0;
 }
 
-static int init_implementation_adapter_xsl_regs(struct cxl *adapter, struct pci_dev *dev)
+static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_dev *dev)
 {
 	u64 xsl_dsnctl;
 	u64 chipid;
@@ -513,7 +513,7 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
 	return;
 }
 
-static int init_implementation_afu_psl_regs(struct cxl_afu *afu)
+static int init_implementation_afu_regs_psl(struct cxl_afu *afu)
 {
 	/* read/write masks for this slice */
 	cxl_p1n_write(afu, CXL_PSL_APCALLOC_A, 0xFFFFFFFEFEFEFEFEULL);
@@ -996,7 +996,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
 	return 0;
 }
 
-static int sanitise_afu_regs(struct cxl_afu *afu)
+static int sanitise_afu_regs_psl(struct cxl_afu *afu)
 {
 	u64 reg;
 
@@ -1102,8 +1102,11 @@ static int pci_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pc
 	if ((rc = pci_map_slice_regs(afu, adapter, dev)))
 		return rc;
 
-	if ((rc = sanitise_afu_regs(afu)))
-		goto err1;
+	if (adapter->native->sl_ops->sanitise_afu_regs) {
+		rc = adapter->native->sl_ops->sanitise_afu_regs(afu);
+		if (rc)
+			goto err1;
+	}
 
 	/* We need to reset the AFU before we can read the AFU descriptor */
 	if ((rc = cxl_ops->afu_reset(afu)))
@@ -1432,9 +1435,15 @@ static void cxl_release_adapter(struct device *dev)
 
 static int sanitise_adapter_regs(struct cxl *adapter)
 {
+	int rc = 0;
+
 	/* Clear PSL tberror bit by writing 1 to it */
 	cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror);
-	return cxl_tlb_slb_invalidate(adapter);
+
+	if (adapter->native->sl_ops->invalidate_all)
+		rc = adapter->native->sl_ops->invalidate_all(adapter);
+
+	return rc;
 }
 
 /* This should contain *only* operations that can safely be done in
@@ -1518,15 +1527,23 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
 }
 
 static const struct cxl_service_layer_ops psl_ops = {
-	.adapter_regs_init = init_implementation_adapter_psl_regs,
-	.afu_regs_init = init_implementation_afu_psl_regs,
+	.adapter_regs_init = init_implementation_adapter_regs_psl,
+	.invalidate_all = cxl_invalidate_all_psl,
+	.afu_regs_init = init_implementation_afu_regs_psl,
+	.sanitise_afu_regs = sanitise_afu_regs_psl,
 	.register_serr_irq = cxl_native_register_serr_irq,
 	.release_serr_irq = cxl_native_release_serr_irq,
-	.debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_psl_regs,
-	.debugfs_add_afu_sl_regs = cxl_debugfs_add_afu_psl_regs,
-	.psl_irq_dump_registers = cxl_native_psl_irq_dump_regs,
+	.handle_interrupt = cxl_irq_psl,
+	.fail_irq = cxl_fail_irq_psl,
+	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
+	.attach_afu_directed = cxl_attach_afu_directed_psl,
+	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
+	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
+	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl,
+	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl,
+	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl,
 	.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
-	.debugfs_stop_trace = cxl_stop_trace,
+	.debugfs_stop_trace = cxl_stop_trace_psl,
 	.write_timebase_ctrl = write_timebase_ctrl_psl,
 	.timebase_read = timebase_read_psl,
 	.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
@@ -1534,8 +1551,16 @@ static const struct cxl_service_layer_ops psl_ops = {
 };
 
 static const struct cxl_service_layer_ops xsl_ops = {
-	.adapter_regs_init = init_implementation_adapter_xsl_regs,
-	.debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_xsl_regs,
+	.adapter_regs_init = init_implementation_adapter_regs_xsl,
+	.invalidate_all = cxl_invalidate_all_psl,
+	.sanitise_afu_regs = sanitise_afu_regs_psl,
+	.handle_interrupt = cxl_irq_psl,
+	.fail_irq = cxl_fail_irq_psl,
+	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
+	.attach_afu_directed = cxl_attach_afu_directed_psl,
+	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
+	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
+	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_xsl,
 	.write_timebase_ctrl = write_timebase_ctrl_xsl,
 	.timebase_read = timebase_read_xsl,
 	.capi_mode = OPAL_PHB_CAPI_MODE_DMA,
-- 
2.7.4


* [PATCH V4 5/7] cxl: Rename some psl8 specific functions
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

Rename a few functions, changing the '_psl' suffix to '_psl8', to make
it clear that the implementation is psl8 specific.
These functions will have an equivalent psl9 implementation in a later
patch.

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
 drivers/misc/cxl/cxl.h     | 26 ++++++++++----------
 drivers/misc/cxl/debugfs.c |  6 ++---
 drivers/misc/cxl/guest.c   |  2 +-
 drivers/misc/cxl/irq.c     |  2 +-
 drivers/misc/cxl/native.c  | 12 +++++-----
 drivers/misc/cxl/pci.c     | 60 +++++++++++++++++++++++-----------------------
 6 files changed, 54 insertions(+), 54 deletions(-)

diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 626073d..a54c003 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -813,10 +813,10 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
 void afu_release_irqs(struct cxl_context *ctx, void *cookie);
 void afu_irq_name_free(struct cxl_context *ctx);
 
-int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr);
-int cxl_activate_dedicated_process_psl(struct cxl_afu *afu);
-int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr);
-void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx);
+int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
+int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
 
 #ifdef CONFIG_DEBUG_FS
 
@@ -826,10 +826,10 @@ int cxl_debugfs_adapter_add(struct cxl *adapter);
 void cxl_debugfs_adapter_remove(struct cxl *adapter);
 int cxl_debugfs_afu_add(struct cxl_afu *afu);
 void cxl_debugfs_afu_remove(struct cxl_afu *afu);
-void cxl_stop_trace_psl(struct cxl *cxl);
-void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir);
+void cxl_stop_trace_psl8(struct cxl *cxl);
+void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
 void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
-void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir);
+void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
 
 #else /* CONFIG_DEBUG_FS */
 
@@ -860,11 +860,11 @@ static inline void cxl_debugfs_afu_remove(struct cxl_afu *afu)
 {
 }
 
-static inline void cxl_stop_trace(struct cxl *cxl)
+static inline void cxl_stop_trace_psl8(struct cxl *cxl)
 {
 }
 
-static inline void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter,
+static inline void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter,
 						    struct dentry *dir)
 {
 }
@@ -874,7 +874,7 @@ static inline void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter,
 {
 }
 
-static inline void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
+static inline void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
 {
 }
 
@@ -919,8 +919,8 @@ struct cxl_irq_info {
 };
 
 void cxl_assign_psn_space(struct cxl_context *ctx);
-int cxl_invalidate_all_psl(struct cxl *adapter);
-irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+int cxl_invalidate_all_psl8(struct cxl *adapter);
+irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
 irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
 int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
 			void *cookie, irq_hw_number_t *dest_hwirq,
@@ -932,7 +932,7 @@ int cxl_data_cache_flush(struct cxl *adapter);
 int cxl_afu_disable(struct cxl_afu *afu);
 int cxl_psl_purge(struct cxl_afu *afu);
 
-void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx);
+void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
 void cxl_native_err_irq_dump_regs(struct cxl *adapter);
 int cxl_pci_vphb_add(struct cxl_afu *afu);
 void cxl_pci_vphb_remove(struct cxl_afu *afu);
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 4848ebf..2ff10a9 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -15,7 +15,7 @@
 
 static struct dentry *cxl_debugfs;
 
-void cxl_stop_trace_psl(struct cxl *adapter)
+void cxl_stop_trace_psl8(struct cxl *adapter)
 {
 	int slice;
 
@@ -53,7 +53,7 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
 					  (void __force *)value, &fops_io_x64);
 }
 
-void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir)
+void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
 {
 	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
 	debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
@@ -92,7 +92,7 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
 	debugfs_remove_recursive(adapter->debugfs);
 }
 
-void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
+void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
 {
 	debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
 	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index f6ba698..3ad7381 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -169,7 +169,7 @@ static irqreturn_t guest_psl_irq(int irq, void *data)
 		return IRQ_HANDLED;
 	}
 
-	rc = cxl_irq_psl(irq, ctx, &irq_info);
+	rc = cxl_irq_psl8(irq, ctx, &irq_info);
 	return rc;
 }
 
diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
index 2fa119e..fa9f8a2 100644
--- a/drivers/misc/cxl/irq.c
+++ b/drivers/misc/cxl/irq.c
@@ -34,7 +34,7 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
 	return IRQ_HANDLED;
 }
 
-irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
+irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
 {
 	u64 dsisr, dar;
 
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index c147863e..ee3164e 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -258,7 +258,7 @@ void cxl_release_spa(struct cxl_afu *afu)
 	}
 }
 
-int cxl_invalidate_all_psl(struct cxl *adapter)
+int cxl_invalidate_all_psl8(struct cxl *adapter)
 {
 	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
 
@@ -578,7 +578,7 @@ static void update_ivtes_directed(struct cxl_context *ctx)
 		WARN_ON(add_process_element(ctx));
 }
 
-int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	u32 pid;
 	int result;
@@ -671,7 +671,7 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
 	return 0;
 }
 
-int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
+int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
 {
 	dev_info(&afu->dev, "Activating dedicated process mode\n");
 
@@ -694,7 +694,7 @@ int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
 	return cxl_chardev_d_afu_add(afu);
 }
 
-void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
+void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
 {
 	struct cxl_afu *afu = ctx->afu;
 
@@ -710,7 +710,7 @@ void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
 			((u64)ctx->irqs.range[3] & 0xffff));
 }
 
-int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	struct cxl_afu *afu = ctx->afu;
 	u64 pid;
@@ -880,7 +880,7 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
 	return 0;
 }
 
-void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx)
+void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
 {
 	u64 fir1, fir2, fir_slice, serr, afu_debug;
 
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index e9c679e..69008a4 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -377,7 +377,7 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
 	return 0;
 }
 
-static int init_implementation_adapter_regs_psl(struct cxl *adapter, struct pci_dev *dev)
+static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
 {
 	u64 psl_dsnctl, psl_fircntl;
 	u64 chipid;
@@ -434,7 +434,7 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
 /* For the PSL this is a multiple for 0 < n <= 7: */
 #define PSL_2048_250MHZ_CYCLES 1
 
-static void write_timebase_ctrl_psl(struct cxl *adapter)
+static void write_timebase_ctrl_psl8(struct cxl *adapter)
 {
 	cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
 		     TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
@@ -455,7 +455,7 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
 		     TBSYNC_CNT(XSL_4000_CLOCKS));
 }
 
-static u64 timebase_read_psl(struct cxl *adapter)
+static u64 timebase_read_psl8(struct cxl *adapter)
 {
 	return cxl_p1_read(adapter, CXL_PSL_Timebase);
 }
@@ -513,7 +513,7 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
 	return;
 }
 
-static int init_implementation_afu_regs_psl(struct cxl_afu *afu)
+static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
 {
 	/* read/write masks for this slice */
 	cxl_p1n_write(afu, CXL_PSL_APCALLOC_A, 0xFFFFFFFEFEFEFEFEULL);
@@ -996,7 +996,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
 	return 0;
 }
 
-static int sanitise_afu_regs_psl(struct cxl_afu *afu)
+static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
 {
 	u64 reg;
 
@@ -1526,40 +1526,40 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
 	pci_disable_device(pdev);
 }
 
-static const struct cxl_service_layer_ops psl_ops = {
-	.adapter_regs_init = init_implementation_adapter_regs_psl,
-	.invalidate_all = cxl_invalidate_all_psl,
-	.afu_regs_init = init_implementation_afu_regs_psl,
-	.sanitise_afu_regs = sanitise_afu_regs_psl,
+static const struct cxl_service_layer_ops psl8_ops = {
+	.adapter_regs_init = init_implementation_adapter_regs_psl8,
+	.invalidate_all = cxl_invalidate_all_psl8,
+	.afu_regs_init = init_implementation_afu_regs_psl8,
+	.sanitise_afu_regs = sanitise_afu_regs_psl8,
 	.register_serr_irq = cxl_native_register_serr_irq,
 	.release_serr_irq = cxl_native_release_serr_irq,
-	.handle_interrupt = cxl_irq_psl,
+	.handle_interrupt = cxl_irq_psl8,
 	.fail_irq = cxl_fail_irq_psl,
-	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
-	.attach_afu_directed = cxl_attach_afu_directed_psl,
-	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
-	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
-	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl,
-	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl,
-	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl,
+	.activate_dedicated_process = cxl_activate_dedicated_process_psl8,
+	.attach_afu_directed = cxl_attach_afu_directed_psl8,
+	.attach_dedicated_process = cxl_attach_dedicated_process_psl8,
+	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl8,
+	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl8,
+	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl8,
+	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl8,
 	.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
-	.debugfs_stop_trace = cxl_stop_trace_psl,
-	.write_timebase_ctrl = write_timebase_ctrl_psl,
-	.timebase_read = timebase_read_psl,
+	.debugfs_stop_trace = cxl_stop_trace_psl8,
+	.write_timebase_ctrl = write_timebase_ctrl_psl8,
+	.timebase_read = timebase_read_psl8,
 	.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
 	.needs_reset_before_disable = true,
 };
 
 static const struct cxl_service_layer_ops xsl_ops = {
 	.adapter_regs_init = init_implementation_adapter_regs_xsl,
-	.invalidate_all = cxl_invalidate_all_psl,
-	.sanitise_afu_regs = sanitise_afu_regs_psl,
-	.handle_interrupt = cxl_irq_psl,
+	.invalidate_all = cxl_invalidate_all_psl8,
+	.sanitise_afu_regs = sanitise_afu_regs_psl8,
+	.handle_interrupt = cxl_irq_psl8,
 	.fail_irq = cxl_fail_irq_psl,
-	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
-	.attach_afu_directed = cxl_attach_afu_directed_psl,
-	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
-	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
+	.activate_dedicated_process = cxl_activate_dedicated_process_psl8,
+	.attach_afu_directed = cxl_attach_afu_directed_psl8,
+	.attach_dedicated_process = cxl_attach_dedicated_process_psl8,
+	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl8,
 	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_xsl,
 	.write_timebase_ctrl = write_timebase_ctrl_xsl,
 	.timebase_read = timebase_read_xsl,
@@ -1574,8 +1574,8 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
 		adapter->native->sl_ops = &xsl_ops;
 		adapter->min_pe = 1; /* Workaround for CX-4 hardware bug */
 	} else {
-		dev_info(&dev->dev, "Device uses a PSL\n");
-		adapter->native->sl_ops = &psl_ops;
+		dev_info(&dev->dev, "Device uses a PSL8\n");
+		adapter->native->sl_ops = &psl8_ops;
 	}
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH V4 6/7] cxl: Isolate few psl8 specific calls
  2017-04-07 14:11 [PATCH V4 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
                   ` (4 preceding siblings ...)
  2017-04-07 14:11 ` [PATCH V4 5/7] cxl: Rename some psl8 specific functions Christophe Lombard
@ 2017-04-07 14:11 ` Christophe Lombard
  2017-04-10 17:13   ` Frederic Barrat
  2017-04-07 14:11 ` [PATCH V4 7/7] cxl: Add psl9 specific code Christophe Lombard
  6 siblings, 1 reply; 30+ messages in thread
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

Point out the registers that are specific to the Coherent Accelerator
Interface Architecture, level 1.
Code and functions specific to the PSL8 (CAIA1) are now framed by the
new cxl_is_psl8()/cxl_is_power8() helpers so that they only run on that
hardware.
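
A condensed sketch of the pattern, lifted from the context.c and
native.c hunks below:

	/* reclaim_ctx(): the segment table page only exists on CAIA1 */
	if (cxl_is_psl8(ctx->afu))
		free_page((u64)ctx->sstp);

	/* remove_process_element(): no SLB to invalidate past POWER8 */
	if (cxl_is_power8())
		slb_invalid(ctx);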

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
 drivers/misc/cxl/context.c | 28 +++++++++++---------
 drivers/misc/cxl/cxl.h     | 35 +++++++++++++++++++------
 drivers/misc/cxl/debugfs.c |  6 +++--
 drivers/misc/cxl/native.c  | 43 +++++++++++++++++++++----------
 drivers/misc/cxl/pci.c     | 64 +++++++++++++++++++++++++++++++---------------
 5 files changed, 120 insertions(+), 56 deletions(-)

diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index 2e935ea..ac2531e 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -39,23 +39,26 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
 {
 	int i;
 
-	spin_lock_init(&ctx->sste_lock);
 	ctx->afu = afu;
 	ctx->master = master;
 	ctx->pid = NULL; /* Set in start work ioctl */
 	mutex_init(&ctx->mapping_lock);
 	ctx->mapping = NULL;
 
-	/*
-	 * Allocate the segment table before we put it in the IDR so that we
-	 * can always access it when dereferenced from IDR. For the same
-	 * reason, the segment table is only destroyed after the context is
-	 * removed from the IDR.  Access to this in the IOCTL is protected by
-	 * Linux filesytem symantics (can't IOCTL until open is complete).
-	 */
-	i = cxl_alloc_sst(ctx);
-	if (i)
-		return i;
+	if (cxl_is_psl8(afu)) {
+		spin_lock_init(&ctx->sste_lock);
+
+		/*
+		 * Allocate the segment table before we put it in the IDR so that we
+		 * can always access it when dereferenced from IDR. For the same
+		 * reason, the segment table is only destroyed after the context is
+		 * removed from the IDR.  Access to this in the IOCTL is protected by
+		 * Linux filesystem semantics (can't IOCTL until open is complete).
+		 */
+		i = cxl_alloc_sst(ctx);
+		if (i)
+			return i;
+	}
 
 	INIT_WORK(&ctx->fault_work, cxl_handle_fault);
 
@@ -308,7 +311,8 @@ static void reclaim_ctx(struct rcu_head *rcu)
 {
 	struct cxl_context *ctx = container_of(rcu, struct cxl_context, rcu);
 
-	free_page((u64)ctx->sstp);
+	if (cxl_is_psl8(ctx->afu))
+		free_page((u64)ctx->sstp);
 	if (ctx->ff_page)
 		__free_page(ctx->ff_page);
 	ctx->sstp = NULL;
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index a54c003..82335c0 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -73,7 +73,7 @@ static const cxl_p1_reg_t CXL_PSL_Control = {0x0020};
 static const cxl_p1_reg_t CXL_PSL_DLCNTL  = {0x0060};
 static const cxl_p1_reg_t CXL_PSL_DLADDR  = {0x0068};
 
-/* PSL Lookaside Buffer Management Area */
+/* PSL Lookaside Buffer Management Area - CAIA 1 */
 static const cxl_p1_reg_t CXL_PSL_LBISEL  = {0x0080};
 static const cxl_p1_reg_t CXL_PSL_SLBIE   = {0x0088};
 static const cxl_p1_reg_t CXL_PSL_SLBIA   = {0x0090};
@@ -82,7 +82,7 @@ static const cxl_p1_reg_t CXL_PSL_TLBIA   = {0x00A8};
 static const cxl_p1_reg_t CXL_PSL_AFUSEL  = {0x00B0};
 
 /* 0x00C0:7EFF Implementation dependent area */
-/* PSL registers */
+/* PSL registers - CAIA 1 */
 static const cxl_p1_reg_t CXL_PSL_FIR1      = {0x0100};
 static const cxl_p1_reg_t CXL_PSL_FIR2      = {0x0108};
 static const cxl_p1_reg_t CXL_PSL_Timebase  = {0x0110};
@@ -109,7 +109,7 @@ static const cxl_p1n_reg_t CXL_PSL_AMBAR_An       = {0x10};
 static const cxl_p1n_reg_t CXL_PSL_SPOffset_An    = {0x18};
 static const cxl_p1n_reg_t CXL_PSL_ID_An          = {0x20};
 static const cxl_p1n_reg_t CXL_PSL_SERR_An        = {0x28};
-/* Memory Management and Lookaside Buffer Management */
+/* Memory Management and Lookaside Buffer Management - CAIA 1*/
 static const cxl_p1n_reg_t CXL_PSL_SDR_An         = {0x30};
 static const cxl_p1n_reg_t CXL_PSL_AMOR_An        = {0x38};
 /* Pointer Area */
@@ -124,6 +124,7 @@ static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An  = {0xB8};
 /* 0xC0:FF Implementation Dependent Area */
 static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An   = {0xC0};
 static const cxl_p1n_reg_t CXL_AFU_DEBUG_An       = {0xC8};
+/* 0xC0:FF Implementation Dependent Area - CAIA 1 */
 static const cxl_p1n_reg_t CXL_PSL_APCALLOC_A     = {0xD0};
 static const cxl_p1n_reg_t CXL_PSL_COALLOC_A      = {0xD8};
 static const cxl_p1n_reg_t CXL_PSL_RXCTL_A        = {0xE0};
@@ -133,12 +134,14 @@ static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE    = {0xE8};
 /* Configuration and Control Area */
 static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
 static const cxl_p2n_reg_t CXL_CSRP_An        = {0x008};
+/* Configuration and Control Area - CAIA 1 */
 static const cxl_p2n_reg_t CXL_AURP0_An       = {0x010};
 static const cxl_p2n_reg_t CXL_AURP1_An       = {0x018};
 static const cxl_p2n_reg_t CXL_SSTP0_An       = {0x020};
 static const cxl_p2n_reg_t CXL_SSTP1_An       = {0x028};
+/* Configuration and Control Area - CAIA 1 */
 static const cxl_p2n_reg_t CXL_PSL_AMR_An     = {0x030};
-/* Segment Lookaside Buffer Management */
+/* Segment Lookaside Buffer Management - CAIA 1 */
 static const cxl_p2n_reg_t CXL_SLBIE_An       = {0x040};
 static const cxl_p2n_reg_t CXL_SLBIA_An       = {0x048};
 static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
@@ -257,7 +260,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 #define CXL_SSTP1_An_STVA_L_MASK (~((1ull << (63-55))-1))
 #define CXL_SSTP1_An_V              (1ull << (63-63))
 
-/****** CXL_PSL_SLBIE_[An] **************************************************/
+/****** CXL_PSL_SLBIE_[An] - CAIA 1 **************************************************/
 /* write: */
 #define CXL_SLBIE_C        PPC_BIT(36)         /* Class */
 #define CXL_SLBIE_SS       PPC_BITMASK(37, 38) /* Segment Size */
@@ -267,10 +270,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 #define CXL_SLBIE_MAX      PPC_BITMASK(24, 31)
 #define CXL_SLBIE_PENDING  PPC_BITMASK(56, 63)
 
-/****** Common to all CXL_TLBIA/SLBIA_[An] **********************************/
+/****** Common to all CXL_TLBIA/SLBIA_[An] - CAIA 1 **********************************/
 #define CXL_TLB_SLB_P          (1ull) /* Pending (read) */
 
-/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers **********************/
+/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers - CAIA 1 **********************/
 #define CXL_TLB_SLB_IQ_ALL     (0ull) /* Inv qualifier */
 #define CXL_TLB_SLB_IQ_LPID    (1ull) /* Inv qualifier */
 #define CXL_TLB_SLB_IQ_LPIDPID (3ull) /* Inv qualifier */
@@ -278,7 +281,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 /****** CXL_PSL_AFUSEL ******************************************************/
 #define CXL_PSL_AFUSEL_A (1ull << (63-55)) /* Adapter wide invalidates affect all AFUs */
 
-/****** CXL_PSL_DSISR_An ****************************************************/
+/****** CXL_PSL_DSISR_An - CAIA 1 ****************************************************/
 #define CXL_PSL_DSISR_An_DS (1ull << (63-0))  /* Segment not found */
 #define CXL_PSL_DSISR_An_DM (1ull << (63-1))  /* PTE not found (See also: M) or protection fault */
 #define CXL_PSL_DSISR_An_ST (1ull << (63-2))  /* Segment Table PTE not found */
@@ -749,6 +752,22 @@ static inline u64 cxl_p2n_read(struct cxl_afu *afu, cxl_p2n_reg_t reg)
 		return ~0ULL;
 }
 
+static inline bool cxl_is_power8(void)
+{
+	if ((pvr_version_is(PVR_POWER8E)) ||
+	    (pvr_version_is(PVR_POWER8NVL)) ||
+	    (pvr_version_is(PVR_POWER8)))
+		return true;
+	return false;
+}
+
+static inline bool cxl_is_psl8(struct cxl_afu *afu)
+{
+	if (afu->adapter->caia_major == 1)
+		return true;
+	return false;
+}
+
 ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
 				loff_t off, size_t count);
 
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 2ff10a9..43a1a27 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -94,6 +94,9 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
 
 void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
 {
+	debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
+	debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
+
 	debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
 	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
 	debugfs_create_io_x64("afu_debug", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_AFU_DEBUG_An));
@@ -117,8 +120,7 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
 	debugfs_create_io_x64("sr",         S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SR_An));
 	debugfs_create_io_x64("dsisr",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DSISR_An));
 	debugfs_create_io_x64("dar",        S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DAR_An));
-	debugfs_create_io_x64("sstp0",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
-	debugfs_create_io_x64("sstp1",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
+
 	debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
 
 	if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index ee3164e..0401e4dc 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -155,13 +155,21 @@ int cxl_psl_purge(struct cxl_afu *afu)
 		}
 
 		dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
-		pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%016llx  PSL_DSISR: 0x%016llx\n", PSL_CNTL, dsisr);
+		pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%016llx"
+				     "  PSL_DSISR: 0x%016llx\n",
+				     PSL_CNTL, dsisr);
+
 		if (dsisr & CXL_PSL_DSISR_TRANS) {
 			dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
-			dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
+			dev_notice(&afu->dev, "PSL purge terminating "
+					      "pending translation, "
+					      "DSISR: 0x%016llx, DAR: 0x%016llx\n",
+					       dsisr, dar);
 			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
 		} else if (dsisr) {
-			dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
+			dev_notice(&afu->dev, "PSL purge acknowledging "
+					      "pending non-translation fault, "
+					      "DSISR: 0x%016llx\n", dsisr);
 			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
 		} else {
 			cpu_relax();
@@ -466,7 +474,8 @@ static int remove_process_element(struct cxl_context *ctx)
 
 	if (!rc)
 		ctx->pe_inserted = false;
-	slb_invalid(ctx);
+	if (cxl_is_power8())
+		slb_invalid(ctx);
 	pr_devel("%s Remove pe: %i finished\n", __func__, ctx->pe);
 	mutex_unlock(&ctx->afu->native->spa_mutex);
 
@@ -499,7 +508,8 @@ static int activate_afu_directed(struct cxl_afu *afu)
 	attach_spa(afu);
 
 	cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_AFU);
-	cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
+	if (cxl_is_power8())
+		cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
 	cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
 
 	afu->current_mode = CXL_MODE_DIRECTED;
@@ -872,7 +882,8 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
 
 	info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
 	info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
-	info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
+	if (cxl_is_power8())
+		info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
 	info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
 	info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
 	info->proc_handle = 0;
@@ -984,7 +995,8 @@ static void native_irq_wait(struct cxl_context *ctx)
 		if (ph != ctx->pe)
 			return;
 		dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
-		if ((dsisr & CXL_PSL_DSISR_PENDING) == 0)
+		if (cxl_is_psl8(ctx->afu) &&
+		   ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
 			return;
 		/*
 		 * We are waiting for the workqueue to process our
@@ -1001,21 +1013,25 @@ static void native_irq_wait(struct cxl_context *ctx)
 static irqreturn_t native_slice_irq_err(int irq, void *data)
 {
 	struct cxl_afu *afu = data;
-	u64 fir_slice, errstat, serr, afu_debug, afu_error, dsisr;
+	u64 errstat, serr, afu_error, dsisr;
+	u64 fir_slice, afu_debug;
 
 	/*
 	 * slice err interrupt is only used with full PSL (no XSL)
 	 */
 	serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
-	fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
 	errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
-	afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
 	afu_error = cxl_p2n_read(afu, CXL_AFU_ERR_An);
 	dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
 	cxl_afu_decode_psl_serr(afu, serr);
-	dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
+
+	if (cxl_is_power8()) {
+		fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
+		afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
+		dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
+		dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
+	}
 	dev_crit(&afu->dev, "CXL_PSL_ErrStat_An: 0x%016llx\n", errstat);
-	dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
 	dev_crit(&afu->dev, "AFU_ERR_An: 0x%.16llx\n", afu_error);
 	dev_crit(&afu->dev, "PSL_DSISR_An: 0x%.16llx\n", dsisr);
 
@@ -1108,7 +1124,8 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
 	}
 
 	serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
-	serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
+	if (cxl_is_power8())
+		serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
 	cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
 
 	return 0;
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 69008a4..a910115 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -324,32 +324,33 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
 #undef show_reg
 }
 
-#define CAPP_UNIT0_ID 0xBA
-#define CAPP_UNIT1_ID 0XBE
+#define P8_CAPP_UNIT0_ID 0xBA
+#define P8_CAPP_UNIT1_ID 0XBE
 
 static u64 get_capp_unit_id(struct device_node *np)
 {
 	u32 phb_index;
 
-	/*
-	 * For chips other than POWER8NVL, we only have CAPP 0,
-	 * irrespective of which PHB is used.
-	 */
-	if (!pvr_version_is(PVR_POWER8NVL))
-		return CAPP_UNIT0_ID;
+	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
+		return 0;
 
 	/*
-	 * For POWER8NVL, assume CAPP 0 is attached to PHB0 and
-	 * CAPP 1 is attached to PHB1.
+	 * POWER 8:
+	 *  - For chips other than POWER8NVL, we only have CAPP 0,
+	 *    irrespective of which PHB is used.
+	 *  - For POWER8NVL, assume CAPP 0 is attached to PHB0 and
+	 *    CAPP 1 is attached to PHB1.
 	 */
-	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
-		return 0;
+	if (cxl_is_power8()) {
+		if (!pvr_version_is(PVR_POWER8NVL))
+			return P8_CAPP_UNIT0_ID;
 
-	if (phb_index == 0)
-		return CAPP_UNIT0_ID;
+		if (phb_index == 0)
+			return P8_CAPP_UNIT0_ID;
 
-	if (phb_index == 1)
-		return CAPP_UNIT1_ID;
+		if (phb_index == 1)
+			return P8_CAPP_UNIT1_ID;
+	}
 
 	return 0;
 }
@@ -968,7 +969,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
 	}
 
 	if (afu->pp_psa && (afu->pp_size < PAGE_SIZE))
-		dev_warn(&afu->dev, "AFU uses < PAGE_SIZE per-process PSA!");
+		dev_warn(&afu->dev, "AFU uses pp_size(%#016llx) < PAGE_SIZE per-process PSA!\n", afu->pp_size);
 
 	for (i = 0; i < afu->crs_num; i++) {
 		rc = cxl_ops->afu_cr_read32(afu, i, 0, &val);
@@ -1251,8 +1252,13 @@ int cxl_pci_reset(struct cxl *adapter)
 
 	dev_info(&dev->dev, "CXL reset\n");
 
-	/* the adapter is about to be reset, so ignore errors */
-	cxl_data_cache_flush(adapter);
+	/*
+	 * The adapter is about to be reset, so ignore errors.
+	 * Not supported on P9 DD1 but don't forget to enable it
+	 * on P9 DD2
+	 */
+	if (cxl_is_power8())
+		cxl_data_cache_flush(adapter);
 
 	/* pcie_warm_reset requests a fundamental pci reset which includes a
 	 * PERST assert/deassert.  PERST triggers a loading of the image
@@ -1382,6 +1388,14 @@ static void cxl_fixup_malformed_tlp(struct cxl *adapter, struct pci_dev *dev)
 	pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, data);
 }
 
+static bool cxl_compatible_caia_version(struct cxl *adapter)
+{
+	if (cxl_is_power8() && (adapter->caia_major == 1))
+		return true;
+
+	return false;
+}
+
 static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
 {
 	if (adapter->vsec_status & CXL_STATUS_SECOND_PORT)
@@ -1392,6 +1406,12 @@ static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
 		return -EINVAL;
 	}
 
+	if (!cxl_compatible_caia_version(adapter)) {
+		dev_info(&dev->dev, "Ignoring card. PSL type is not supported "
+				    "(caia version: %d)\n", adapter->caia_major);
+		return -ENODEV;
+	}
+
 	if (!adapter->slices) {
 		/* Once we support dynamic reprogramming we can use the card if
 		 * it supports loadable AFUs */
@@ -1574,8 +1594,10 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
 		adapter->native->sl_ops = &xsl_ops;
 		adapter->min_pe = 1; /* Workaround for CX-4 hardware bug */
 	} else {
-		dev_info(&dev->dev, "Device uses a PSL8\n");
-		adapter->native->sl_ops = &psl8_ops;
+		if (cxl_is_power8()) {
+			dev_info(&dev->dev, "Device uses a PSL8\n");
+			adapter->native->sl_ops = &psl8_ops;
+		}
 	}
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-07 14:11 [PATCH V4 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
                   ` (5 preceding siblings ...)
  2017-04-07 14:11 ` [PATCH V4 6/7] cxl: Isolate few psl8 specific calls Christophe Lombard
@ 2017-04-07 14:11 ` Christophe Lombard
  2017-04-11 14:41   ` Frederic Barrat
                     ` (2 more replies)
  6 siblings, 3 replies; 30+ messages in thread
From: Christophe Lombard @ 2017-04-07 14:11 UTC (permalink / raw)
  To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan

The new Coherent Accelerator Interface Architecture, level 2, for the
IBM POWER9 introduces new content and features:
- POWER9 Service Layer
- Registers
- Radix mode
- Process element entry
- Dedicated-Shared Process Programming Model
- Translation Fault Handling
- CAPP
- Memory Context ID
    If a valid mm_struct is found, the memory context id is used for each
    transaction associated with the process handle. The PSL uses the
    context ID to find the corresponding process element (see the sketch
    after this list).
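
Condensed sketch, lifted from process_element_entry_psl9() in the
native.c changes below; the process element now carries the mm context
id rather than the Linux pid:

	if (ctx->kernel) {
		pid = 0;
	} else {
		if (ctx->mm == NULL)
			return -EINVAL;	/* no valid mm_struct */
		pid = ctx->mm->context.id;
	}
	ctx->elem->common.pid = cpu_to_be32(pid);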

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
 Documentation/powerpc/cxl.txt |  11 +-
 drivers/misc/cxl/context.c    |  16 ++-
 drivers/misc/cxl/cxl.h        | 137 +++++++++++++++++++----
 drivers/misc/cxl/debugfs.c    |  19 ++++
 drivers/misc/cxl/fault.c      |  64 +++++++----
 drivers/misc/cxl/guest.c      |   8 +-
 drivers/misc/cxl/irq.c        |  53 +++++++++
 drivers/misc/cxl/native.c     | 225 +++++++++++++++++++++++++++++++++++---
 drivers/misc/cxl/pci.c        | 246 +++++++++++++++++++++++++++++++++++++++---
 drivers/misc/cxl/trace.h      |  43 ++++++++
 10 files changed, 748 insertions(+), 74 deletions(-)

diff --git a/Documentation/powerpc/cxl.txt b/Documentation/powerpc/cxl.txt
index d5506ba0..4a77462 100644
--- a/Documentation/powerpc/cxl.txt
+++ b/Documentation/powerpc/cxl.txt
@@ -21,7 +21,7 @@ Introduction
 Hardware overview
 =================
 
-          POWER8               FPGA
+         POWER8/9             FPGA
        +----------+        +---------+
        |          |        |         |
        |   CPU    |        |   AFU   |
@@ -34,7 +34,7 @@ Hardware overview
        |   | CAPP |<------>|         |
        +---+------+  PCIE  +---------+
 
-    The POWER8 chip has a Coherently Attached Processor Proxy (CAPP)
+    The POWER8/9 chip has a Coherently Attached Processor Proxy (CAPP)
     unit which is part of the PCIe Host Bridge (PHB). This is managed
     by Linux by calls into OPAL. Linux doesn't directly program the
     CAPP.
@@ -59,6 +59,13 @@ Hardware overview
     the fault. The context to which this fault is serviced is based on
     who owns that acceleration function.
 
+    POWER8 <-----> PSL Version 8 is compliant with CAIA Version 1.0.
+    POWER9 <-----> PSL Version 9 is compliant with CAIA Version 2.0.
+    PSL Version 9 provides new features such as:
+    * Native DMA support.
+    * Supports sending ASB_Notify messages for host thread wakeup.
+    * Supports Atomic operations.
+    * ....
 
 AFU Modes
 =========
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index ac2531e..45363be 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -188,12 +188,24 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
 	if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
 		if (start + len > ctx->afu->adapter->ps_size)
 			return -EINVAL;
+
+		if (cxl_is_psl9(ctx->afu)) {
+			/* make sure there is a valid problem state
+			 * area space for this AFU
+			 */
+			if (ctx->master && !ctx->afu->psa) {
+				pr_devel("AFU doesn't support mmio space\n");
+				return -EINVAL;
+			}
+
+			/* Can't mmap until the AFU is enabled */
+			if (!ctx->afu->enabled)
+				return -EBUSY;
+		}
 	} else {
 		if (start + len > ctx->psn_size)
 			return -EINVAL;
-	}
 
-	if (ctx->afu->current_mode != CXL_MODE_DEDICATED) {
 		/* make sure there is a valid per process space for this AFU */
 		if ((ctx->master && !ctx->afu->psa) || (!ctx->afu->pp_psa)) {
 			pr_devel("AFU doesn't support mmio space\n");
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 82335c0..df40e6e 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -63,7 +63,7 @@ typedef struct {
 /* Memory maps. Ref CXL Appendix A */
 
 /* PSL Privilege 1 Memory Map */
-/* Configuration and Control area */
+/* Configuration and Control area - CAIA 1&2 */
 static const cxl_p1_reg_t CXL_PSL_CtxTime = {0x0000};
 static const cxl_p1_reg_t CXL_PSL_ErrIVTE = {0x0008};
 static const cxl_p1_reg_t CXL_PSL_KEY1    = {0x0010};
@@ -98,11 +98,29 @@ static const cxl_p1_reg_t CXL_XSL_Timebase  = {0x0100};
 static const cxl_p1_reg_t CXL_XSL_TB_CTLSTAT = {0x0108};
 static const cxl_p1_reg_t CXL_XSL_FEC       = {0x0158};
 static const cxl_p1_reg_t CXL_XSL_DSNCTL    = {0x0168};
+/* PSL registers - CAIA 2 */
+static const cxl_p1_reg_t CXL_PSL9_CONTROL  = {0x0020};
+static const cxl_p1_reg_t CXL_XSL9_DSNCTL   = {0x0168};
+static const cxl_p1_reg_t CXL_PSL9_FIR1     = {0x0300};
+static const cxl_p1_reg_t CXL_PSL9_FIR2     = {0x0308};
+static const cxl_p1_reg_t CXL_PSL9_Timebase = {0x0310};
+static const cxl_p1_reg_t CXL_PSL9_DEBUG    = {0x0320};
+static const cxl_p1_reg_t CXL_PSL9_FIR_CNTL = {0x0348};
+static const cxl_p1_reg_t CXL_PSL9_DSNDCTL  = {0x0350};
+static const cxl_p1_reg_t CXL_PSL9_TB_CTLSTAT = {0x0340};
+static const cxl_p1_reg_t CXL_PSL9_TRACECFG = {0x0368};
+static const cxl_p1_reg_t CXL_PSL9_APCDEDALLOC = {0x0378};
+static const cxl_p1_reg_t CXL_PSL9_APCDEDTYPE = {0x0380};
+static const cxl_p1_reg_t CXL_PSL9_TNR_ADDR = {0x0388};
+static const cxl_p1_reg_t CXL_PSL9_GP_CT = {0x0398};
+static const cxl_p1_reg_t CXL_XSL9_IERAT = {0x0588};
+static const cxl_p1_reg_t CXL_XSL9_ILPP  = {0x0590};
+
 /* 0x7F00:7FFF Reserved PCIe MSI-X Pending Bit Array area */
 /* 0x8000:FFFF Reserved PCIe MSI-X Table Area */
 
 /* PSL Slice Privilege 1 Memory Map */
-/* Configuration Area */
+/* Configuration Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_SR_An          = {0x00};
 static const cxl_p1n_reg_t CXL_PSL_LPID_An        = {0x08};
 static const cxl_p1n_reg_t CXL_PSL_AMBAR_An       = {0x10};
@@ -111,17 +129,18 @@ static const cxl_p1n_reg_t CXL_PSL_ID_An          = {0x20};
 static const cxl_p1n_reg_t CXL_PSL_SERR_An        = {0x28};
 /* Memory Management and Lookaside Buffer Management - CAIA 1*/
 static const cxl_p1n_reg_t CXL_PSL_SDR_An         = {0x30};
+/* Memory Management and Lookaside Buffer Management - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_AMOR_An        = {0x38};
-/* Pointer Area */
+/* Pointer Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_HAURP_An           = {0x80};
 static const cxl_p1n_reg_t CXL_PSL_SPAP_An        = {0x88};
 static const cxl_p1n_reg_t CXL_PSL_LLCMD_An       = {0x90};
-/* Control Area */
+/* Control Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_SCNTL_An       = {0xA0};
 static const cxl_p1n_reg_t CXL_PSL_CtxTime_An     = {0xA8};
 static const cxl_p1n_reg_t CXL_PSL_IVTE_Offset_An = {0xB0};
 static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An  = {0xB8};
-/* 0xC0:FF Implementation Dependent Area */
+/* 0xC0:FF Implementation Dependent Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An   = {0xC0};
 static const cxl_p1n_reg_t CXL_AFU_DEBUG_An       = {0xC8};
 /* 0xC0:FF Implementation Dependent Area - CAIA 1 */
@@ -131,7 +150,7 @@ static const cxl_p1n_reg_t CXL_PSL_RXCTL_A        = {0xE0};
 static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE    = {0xE8};
 
 /* PSL Slice Privilege 2 Memory Map */
-/* Configuration and Control Area */
+/* Configuration and Control Area - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
 static const cxl_p2n_reg_t CXL_CSRP_An        = {0x008};
 /* Configuration and Control Area - CAIA 1 */
@@ -145,17 +164,17 @@ static const cxl_p2n_reg_t CXL_PSL_AMR_An     = {0x030};
 static const cxl_p2n_reg_t CXL_SLBIE_An       = {0x040};
 static const cxl_p2n_reg_t CXL_SLBIA_An       = {0x048};
 static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
-/* Interrupt Registers */
+/* Interrupt Registers - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_PSL_DSISR_An   = {0x060};
 static const cxl_p2n_reg_t CXL_PSL_DAR_An     = {0x068};
 static const cxl_p2n_reg_t CXL_PSL_DSR_An     = {0x070};
 static const cxl_p2n_reg_t CXL_PSL_TFC_An     = {0x078};
 static const cxl_p2n_reg_t CXL_PSL_PEHandle_An = {0x080};
 static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088};
-/* AFU Registers */
+/* AFU Registers - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_AFU_Cntl_An    = {0x090};
 static const cxl_p2n_reg_t CXL_AFU_ERR_An     = {0x098};
-/* Work Element Descriptor */
+/* Work Element Descriptor - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 /* 0x0C0:FFF Implementation Dependent Area */
 
@@ -182,6 +201,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 #define CXL_PSL_SR_An_SF  MSR_SF            /* 64bit */
 #define CXL_PSL_SR_An_TA  (1ull << (63-1))  /* Tags active,   GA1: 0 */
 #define CXL_PSL_SR_An_HV  MSR_HV            /* Hypervisor,    GA1: 0 */
+#define CXL_PSL_SR_An_XLAT_hpt (0ull << (63-6))/* Hashed page table (HPT) mode */
+#define CXL_PSL_SR_An_XLAT_roh (2ull << (63-6))/* Radix on HPT mode */
+#define CXL_PSL_SR_An_XLAT_ror (3ull << (63-6))/* Radix on Radix mode */
+#define CXL_PSL_SR_An_BOT (1ull << (63-10)) /* Use the in-memory segment table */
 #define CXL_PSL_SR_An_PR  MSR_PR            /* Problem state, GA1: 1 */
 #define CXL_PSL_SR_An_ISL (1ull << (63-53)) /* Ignore Segment Large Page */
 #define CXL_PSL_SR_An_TC  (1ull << (63-54)) /* Page Table secondary hash */
@@ -298,12 +321,38 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 #define CXL_PSL_DSISR_An_S  DSISR_ISSTORE     /* Access was afu_wr or afu_zero */
 #define CXL_PSL_DSISR_An_K  DSISR_KEYFAULT    /* Access not permitted by virtual page class key protection */
 
+/****** CXL_PSL_DSISR_An - CAIA 2 ****************************************************/
+#define CXL_PSL9_DSISR_An_TF (1ull << (63-3))  /* Translation fault */
+#define CXL_PSL9_DSISR_An_PE (1ull << (63-4))  /* PSL Error (implementation specific) */
+#define CXL_PSL9_DSISR_An_AE (1ull << (63-5))  /* AFU Error */
+#define CXL_PSL9_DSISR_An_OC (1ull << (63-6))  /* OS Context Warning */
+#define CXL_PSL9_DSISR_An_S (1ull << (63-38))  /* TF for a write operation */
+#define CXL_PSL9_DSISR_PENDING (CXL_PSL9_DSISR_An_TF | CXL_PSL9_DSISR_An_PE | CXL_PSL9_DSISR_An_AE | CXL_PSL9_DSISR_An_OC)
+/* NOTE: Bits 56:63 (Checkout Response Status) are valid when DSISR_An[TF] = 1
+ * Status (0:7) Encoding
+ */
+#define CXL_PSL9_DSISR_An_CO_MASK 0x00000000000000ffULL
+#define CXL_PSL9_DSISR_An_SF      0x0000000000000080ULL  /* Segment Fault                        0b10000000 */
+#define CXL_PSL9_DSISR_An_PF_SLR  0x0000000000000088ULL  /* PTE not found (Single Level Radix)   0b10001000 */
+#define CXL_PSL9_DSISR_An_PF_RGC  0x000000000000008CULL  /* PTE not found (Radix Guest (child))  0b10001100 */
+#define CXL_PSL9_DSISR_An_PF_RGP  0x0000000000000090ULL  /* PTE not found (Radix Guest (parent)) 0b10010000 */
+#define CXL_PSL9_DSISR_An_PF_HRH  0x0000000000000094ULL  /* PTE not found (HPT/Radix Host)       0b10010100 */
+#define CXL_PSL9_DSISR_An_PF_STEG 0x000000000000009CULL  /* PTE not found (STEG VA)              0b10011100 */
+
 /****** CXL_PSL_TFC_An ******************************************************/
 #define CXL_PSL_TFC_An_A  (1ull << (63-28)) /* Acknowledge non-translation fault */
 #define CXL_PSL_TFC_An_C  (1ull << (63-29)) /* Continue (abort transaction) */
 #define CXL_PSL_TFC_An_AE (1ull << (63-30)) /* Restart PSL with address error */
 #define CXL_PSL_TFC_An_R  (1ull << (63-31)) /* Restart PSL transaction */
 
+/****** CXL_XSL9_IERAT_ERAT - CAIA 2 **********************************/
+#define CXL_XSL9_IERAT_MLPID    (1ull << (63-0))  /* Match LPID */
+#define CXL_XSL9_IERAT_MPID     (1ull << (63-1))  /* Match PID */
+#define CXL_XSL9_IERAT_PRS      (1ull << (63-4))  /* PRS bit for Radix invalidations */
+#define CXL_XSL9_IERAT_INVR     (1ull << (63-3))  /* Invalidate Radix */
+#define CXL_XSL9_IERAT_IALL     (1ull << (63-8))  /* Invalidate All */
+#define CXL_XSL9_IERAT_IINPROG  (1ull << (63-63)) /* Invalidate in progress */
+
 /* cxl_process_element->software_status */
 #define CXL_PE_SOFTWARE_STATE_V (1ul << (31 -  0)) /* Valid */
 #define CXL_PE_SOFTWARE_STATE_C (1ul << (31 - 29)) /* Complete */
@@ -654,25 +703,38 @@ int cxl_pci_reset(struct cxl *adapter);
 void cxl_pci_release_afu(struct device *dev);
 ssize_t cxl_pci_read_adapter_vpd(struct cxl *adapter, void *buf, size_t len);
 
-/* common == phyp + powernv */
+/* common == phyp + powernv - CAIA 1&2 */
 struct cxl_process_element_common {
 	__be32 tid;
 	__be32 pid;
 	__be64 csrp;
-	__be64 aurp0;
-	__be64 aurp1;
-	__be64 sstp0;
-	__be64 sstp1;
+	union {
+		struct {
+			__be64 aurp0;
+			__be64 aurp1;
+			__be64 sstp0;
+			__be64 sstp1;
+		} psl8;  /* CAIA 1 */
+		struct {
+			u8     reserved2[8];
+			u8     reserved3[8];
+			u8     reserved4[8];
+			u8     reserved5[8];
+		} psl9;  /* CAIA 2 */
+	} u;
 	__be64 amr;
-	u8     reserved3[4];
+	u8     reserved6[4];
 	__be64 wed;
 } __packed;
 
-/* just powernv */
+/* just powernv - CAIA 1&2 */
 struct cxl_process_element {
 	__be64 sr;
 	__be64 SPOffset;
-	__be64 sdr;
+	union {
+		__be64 sdr;          /* CAIA 1 */
+		u8     reserved1[8]; /* CAIA 2 */
+	} u;
 	__be64 haurp;
 	__be32 ctxtime;
 	__be16 ivte_offsets[4];
@@ -761,6 +823,16 @@ static inline bool cxl_is_power8(void)
 	return false;
 }
 
+static inline bool cxl_is_power9(void)
+{
+	/* intermediate solution */
+	if (!cxl_is_power8() &&
+	   (cpu_has_feature(CPU_FTRS_POWER9) ||
+	    cpu_has_feature(CPU_FTR_POWER9_DD1)))
+		return true;
+	return false;
+}
+
 static inline bool cxl_is_psl8(struct cxl_afu *afu)
 {
 	if (afu->adapter->caia_major == 1)
@@ -768,6 +840,13 @@ static inline bool cxl_is_psl8(struct cxl_afu *afu)
 	return false;
 }
 
+static inline bool cxl_is_psl9(struct cxl_afu *afu)
+{
+	if (afu->adapter->caia_major == 2)
+		return true;
+	return false;
+}
+
 ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
 				loff_t off, size_t count);
 
@@ -794,7 +873,6 @@ int cxl_update_properties(struct device_node *dn, struct property *new_prop);
 
 void cxl_remove_adapter_nr(struct cxl *adapter);
 
-int cxl_alloc_spa(struct cxl_afu *afu);
 void cxl_release_spa(struct cxl_afu *afu);
 
 dev_t cxl_get_dev(void);
@@ -832,9 +910,13 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
 void afu_release_irqs(struct cxl_context *ctx, void *cookie);
 void afu_irq_name_free(struct cxl_context *ctx);
 
+int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
 int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu);
 int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
+int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
 int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx);
 void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
 
 #ifdef CONFIG_DEBUG_FS
@@ -845,9 +927,12 @@ int cxl_debugfs_adapter_add(struct cxl *adapter);
 void cxl_debugfs_adapter_remove(struct cxl *adapter);
 int cxl_debugfs_afu_add(struct cxl_afu *afu);
 void cxl_debugfs_afu_remove(struct cxl_afu *afu);
+void cxl_stop_trace_psl9(struct cxl *cxl);
 void cxl_stop_trace_psl8(struct cxl *cxl);
+void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir);
 void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
 void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir);
 void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
 
 #else /* CONFIG_DEBUG_FS */
@@ -879,10 +964,19 @@ static inline void cxl_debugfs_afu_remove(struct cxl_afu *afu)
 {
 }
 
+static inline void cxl_stop_trace_psl9(struct cxl *cxl)
+{
+}
+
 static inline void cxl_stop_trace_psl8(struct cxl *cxl)
 {
 }
 
+static inline void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter,
+						    struct dentry *dir)
+{
+}
+
 static inline void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter,
 						    struct dentry *dir)
 {
@@ -893,6 +987,10 @@ static inline void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter,
 {
 }
 
+static inline void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
+{
+}
+
 static inline void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
 {
 }
@@ -938,7 +1036,9 @@ struct cxl_irq_info {
 };
 
 void cxl_assign_psn_space(struct cxl_context *ctx);
+int cxl_invalidate_all_psl9(struct cxl *adapter);
 int cxl_invalidate_all_psl8(struct cxl *adapter);
+irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
 irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
 irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
 int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
@@ -951,6 +1051,7 @@ int cxl_data_cache_flush(struct cxl *adapter);
 int cxl_afu_disable(struct cxl_afu *afu);
 int cxl_psl_purge(struct cxl_afu *afu);
 
+void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx);
 void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
 void cxl_native_err_irq_dump_regs(struct cxl *adapter);
 int cxl_pci_vphb_add(struct cxl_afu *afu);
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 43a1a27..eae9d74 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -15,6 +15,12 @@
 
 static struct dentry *cxl_debugfs;
 
+void cxl_stop_trace_psl9(struct cxl *adapter)
+{
+	/* Stop the trace */
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x4480000000000000ULL);
+}
+
 void cxl_stop_trace_psl8(struct cxl *adapter)
 {
 	int slice;
@@ -53,6 +59,14 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
 					  (void __force *)value, &fops_io_x64);
 }
 
+void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir)
+{
+	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR1));
+	debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR2));
+	debugfs_create_io_x64("fir_cntl", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR_CNTL));
+	debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_TRACECFG));
+}
+
 void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
 {
 	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
@@ -92,6 +106,11 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
 	debugfs_remove_recursive(adapter->debugfs);
 }
 
+void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
+{
+	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
+}
+
 void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
 {
 	debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index e6f8f05..5344448 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -146,25 +146,26 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
 		return cxl_ack_ae(ctx);
 	}
 
-	/*
-	 * update_mmu_cache() will not have loaded the hash since current->trap
-	 * is not a 0x400 or 0x300, so just call hash_page_mm() here.
-	 */
-	access = _PAGE_PRESENT | _PAGE_READ;
-	if (dsisr & CXL_PSL_DSISR_An_S)
-		access |= _PAGE_WRITE;
-
-	access |= _PAGE_PRIVILEGED;
-	if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
-		access &= ~_PAGE_PRIVILEGED;
-
-	if (dsisr & DSISR_NOHPTE)
-		inv_flags |= HPTE_NOHPTE_UPDATE;
-
-	local_irq_save(flags);
-	hash_page_mm(mm, dar, access, 0x300, inv_flags);
-	local_irq_restore(flags);
-
+	if (!radix_enabled()) {
+		/*
+		 * update_mmu_cache() will not have loaded the hash since current->trap
+		 * is not a 0x400 or 0x300, so just call hash_page_mm() here.
+		 */
+		access = _PAGE_PRESENT | _PAGE_READ;
+		if (dsisr & CXL_PSL_DSISR_An_S)
+			access |= _PAGE_WRITE;
+
+		access |= _PAGE_PRIVILEGED;
+		if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
+			access &= ~_PAGE_PRIVILEGED;
+
+		if (dsisr & DSISR_NOHPTE)
+			inv_flags |= HPTE_NOHPTE_UPDATE;
+
+		local_irq_save(flags);
+		hash_page_mm(mm, dar, access, 0x300, inv_flags);
+		local_irq_restore(flags);
+	}
 	pr_devel("Page fault successfully handled for pe: %i!\n", ctx->pe);
 	cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_R, 0);
 }
@@ -184,7 +185,28 @@ static struct mm_struct *get_mem_context(struct cxl_context *ctx)
 	return ctx->mm;
 }
 
+static bool cxl_is_segment_miss(struct cxl_context *ctx, u64 dsisr)
+{
+	if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DS))
+		return true;
+
+	return false;
+}
+
+static bool cxl_is_page_fault(struct cxl_context *ctx, u64 dsisr)
+{
+	if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DM))
+		return true;
+
+	if ((cxl_is_psl9(ctx->afu)) &&
+	   ((dsisr & CXL_PSL9_DSISR_An_CO_MASK) &
+		(CXL_PSL9_DSISR_An_PF_SLR | CXL_PSL9_DSISR_An_PF_RGC |
+		 CXL_PSL9_DSISR_An_PF_RGP | CXL_PSL9_DSISR_An_PF_HRH |
+		 CXL_PSL9_DSISR_An_PF_STEG)))
+		return true;
 
+	return false;
+}
 
 void cxl_handle_fault(struct work_struct *fault_work)
 {
@@ -230,9 +252,9 @@ void cxl_handle_fault(struct work_struct *fault_work)
 		}
 	}
 
-	if (dsisr & CXL_PSL_DSISR_An_DS)
+	if (cxl_is_segment_miss(ctx, dsisr))
 		cxl_handle_segment_miss(ctx, mm, dar);
-	else if (dsisr & CXL_PSL_DSISR_An_DM)
+	else if (cxl_is_page_fault(ctx, dsisr))
 		cxl_handle_page_fault(ctx, mm, dsisr, dar);
 	else
 		WARN(1, "cxl_handle_fault has nothing to handle\n");
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index 3ad7381..f58b4b6c 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -551,13 +551,13 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
 	elem->common.tid    = cpu_to_be32(0); /* Unused */
 	elem->common.pid    = cpu_to_be32(pid);
 	elem->common.csrp   = cpu_to_be64(0); /* disable */
-	elem->common.aurp0  = cpu_to_be64(0); /* disable */
-	elem->common.aurp1  = cpu_to_be64(0); /* disable */
+	elem->common.u.psl8.aurp0  = cpu_to_be64(0); /* disable */
+	elem->common.u.psl8.aurp1  = cpu_to_be64(0); /* disable */
 
 	cxl_prefault(ctx, wed);
 
-	elem->common.sstp0  = cpu_to_be64(ctx->sstp0);
-	elem->common.sstp1  = cpu_to_be64(ctx->sstp1);
+	elem->common.u.psl8.sstp0  = cpu_to_be64(ctx->sstp0);
+	elem->common.u.psl8.sstp1  = cpu_to_be64(ctx->sstp1);
 
 	/*
 	 * Ensure we have at least one interrupt allocated to take faults for
diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
index fa9f8a2..1eb5168 100644
--- a/drivers/misc/cxl/irq.c
+++ b/drivers/misc/cxl/irq.c
@@ -34,6 +34,59 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
 	return IRQ_HANDLED;
 }
 
+irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
+{
+	u64 dsisr, dar;
+
+	dsisr = irq_info->dsisr;
+	dar = irq_info->dar;
+
+	trace_cxl_psl9_irq(ctx, irq, dsisr, dar);
+
+	pr_devel("CXL interrupt %i for afu pe: %i DSISR: %#llx DAR: %#llx\n", irq, ctx->pe, dsisr, dar);
+
+	if (dsisr & CXL_PSL9_DSISR_An_TF) {
+		pr_devel("CXL interrupt: Scheduling translation fault"
+			 " handling for later (pe: %i)\n", ctx->pe);
+		return schedule_cxl_fault(ctx, dsisr, dar);
+	}
+
+	if (dsisr & CXL_PSL9_DSISR_An_PE)
+		return cxl_ops->handle_psl_slice_error(ctx, dsisr,
+						irq_info->errstat);
+	if (dsisr & CXL_PSL9_DSISR_An_AE) {
+		pr_devel("CXL interrupt: AFU Error 0x%016llx\n", irq_info->afu_err);
+
+		if (ctx->pending_afu_err) {
+			/*
+			 * This shouldn't happen - the PSL treats these errors
+			 * as fatal and will have reset the AFU, so there's not
+			 * much point buffering multiple AFU errors.
+			 * OTOH if we DO ever see a storm of these come in it's
+			 * probably best that we log them somewhere:
+			 */
+			dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error "
+					    "undelivered to pe %i: 0x%016llx\n",
+					    ctx->pe, irq_info->afu_err);
+		} else {
+			spin_lock(&ctx->lock);
+			ctx->afu_err = irq_info->afu_err;
+			ctx->pending_afu_err = 1;
+			spin_unlock(&ctx->lock);
+
+			wake_up_all(&ctx->wq);
+		}
+
+		cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_A, 0);
+		return IRQ_HANDLED;
+	}
+	if (dsisr & CXL_PSL9_DSISR_An_OC)
+		pr_devel("CXL interrupt: OS Context Warning\n");
+
+	WARN(1, "Unhandled CXL PSL IRQ\n");
+	return IRQ_HANDLED;
+}
+
 irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
 {
 	u64 dsisr, dar;
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 0401e4dc..1e3c5c2 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -120,6 +120,7 @@ int cxl_psl_purge(struct cxl_afu *afu)
 	u64 AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
 	u64 dsisr, dar;
 	u64 start, end;
+	u64 trans_fault = 0x0ULL;
 	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
 	int rc = 0;
 
@@ -127,6 +128,11 @@ int cxl_psl_purge(struct cxl_afu *afu)
 
 	pr_devel("PSL purge request\n");
 
+	if (cxl_is_psl8(afu))
+		trans_fault = CXL_PSL_DSISR_TRANS;
+	if (cxl_is_psl9(afu))
+		trans_fault = CXL_PSL9_DSISR_An_TF;
+
 	if (!cxl_ops->link_ok(afu->adapter, afu)) {
 		dev_warn(&afu->dev, "PSL Purge called with link down, ignoring\n");
 		rc = -EIO;
@@ -159,12 +165,12 @@ int cxl_psl_purge(struct cxl_afu *afu)
 				     "  PSL_DSISR: 0x%016llx\n",
 				     PSL_CNTL, dsisr);
 
-		if (dsisr & CXL_PSL_DSISR_TRANS) {
+		if (dsisr & trans_fault) {
 			dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
 			dev_notice(&afu->dev, "PSL purge terminating "
 					      "pending translation, "
 					      "DSISR: 0x%016llx, DAR: 0x%016llx\n",
-					       dsisr, dar);
+					      dsisr, dar);
 			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
 		} else if (dsisr) {
 			dev_notice(&afu->dev, "PSL purge acknowledging "
@@ -204,7 +210,7 @@ static int spa_max_procs(int spa_size)
 	return ((spa_size / 8) - 96) / 17;
 }
 
-int cxl_alloc_spa(struct cxl_afu *afu)
+static int cxl_alloc_spa(struct cxl_afu *afu, int mode)
 {
 	unsigned spa_size;
 
@@ -217,7 +223,8 @@ int cxl_alloc_spa(struct cxl_afu *afu)
 		if (spa_size > 0x100000) {
 			dev_warn(&afu->dev, "num_of_processes too large for the SPA, limiting to %i (0x%x)\n",
 					afu->native->spa_max_procs, afu->native->spa_size);
-			afu->num_procs = afu->native->spa_max_procs;
+			if (mode != CXL_MODE_DEDICATED)
+				afu->num_procs = afu->native->spa_max_procs;
 			break;
 		}
 
@@ -266,6 +273,35 @@ void cxl_release_spa(struct cxl_afu *afu)
 	}
 }
 
+/* Invalidation of all ERAT entries is no longer required by CAIA2. Use
+ * only for debug
+ */
+int cxl_invalidate_all_psl9(struct cxl *adapter)
+{
+	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
+	u64 ierat;
+
+	pr_devel("CXL adapter - invalidation of all ERAT entries\n");
+
+	/* Invalidates all ERAT entries for Radix or HPT */
+	ierat = CXL_XSL9_IERAT_IALL;
+	if (radix_enabled())
+		ierat |= CXL_XSL9_IERAT_INVR;
+	cxl_p1_write(adapter, CXL_XSL9_IERAT, ierat);
+
+	while (cxl_p1_read(adapter, CXL_XSL9_IERAT) & CXL_XSL9_IERAT_IINPROG) {
+		if (time_after_eq(jiffies, timeout)) {
+			dev_warn(&adapter->dev,
+			"WARNING: CXL adapter invalidation of all ERAT entries timed out!\n");
+			return -EBUSY;
+		}
+		if (!cxl_ops->link_ok(adapter, NULL))
+			return -EIO;
+		cpu_relax();
+	}
+	return 0;
+}
+
 int cxl_invalidate_all_psl8(struct cxl *adapter)
 {
 	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
@@ -502,7 +538,7 @@ static int activate_afu_directed(struct cxl_afu *afu)
 
 	afu->num_procs = afu->max_procs_virtualised;
 	if (afu->native->spa == NULL) {
-		if (cxl_alloc_spa(afu))
+		if (cxl_alloc_spa(afu, CXL_MODE_DIRECTED))
 			return -ENOMEM;
 	}
 	attach_spa(afu);
@@ -552,10 +588,19 @@ static u64 calculate_sr(struct cxl_context *ctx)
 		sr |= (mfmsr() & MSR_SF) | CXL_PSL_SR_An_HV;
 	} else {
 		sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
-		sr &= ~(CXL_PSL_SR_An_HV);
+		if (radix_enabled())
+			sr |= CXL_PSL_SR_An_HV;
+		else
+			sr &= ~(CXL_PSL_SR_An_HV);
 		if (!test_tsk_thread_flag(current, TIF_32BIT))
 			sr |= CXL_PSL_SR_An_SF;
 	}
+	if (cxl_is_psl9(ctx->afu)) {
+		if (radix_enabled())
+			sr |= CXL_PSL_SR_An_XLAT_ror;
+		else
+			sr |= CXL_PSL_SR_An_XLAT_hpt;
+	}
 	return sr;
 }
 
@@ -588,6 +633,70 @@ static void update_ivtes_directed(struct cxl_context *ctx)
 		WARN_ON(add_process_element(ctx));
 }
 
+static int process_element_entry_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+	u32 pid;
+
+	cxl_assign_psn_space(ctx);
+
+	ctx->elem->ctxtime = 0; /* disable */
+	ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
+	ctx->elem->haurp = 0; /* disable */
+
+	if (ctx->kernel)
+		pid = 0;
+	else {
+		if (ctx->mm == NULL) {
+			pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
+				__func__, ctx->pe, pid_nr(ctx->pid));
+			return -EINVAL;
+		}
+		pid = ctx->mm->context.id;
+	}
+
+	ctx->elem->common.tid = 0;
+	ctx->elem->common.pid = cpu_to_be32(pid);
+
+	ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
+
+	ctx->elem->common.csrp = 0; /* disable */
+
+	cxl_prefault(ctx, wed);
+
+	/*
+	 * Ensure we have the multiplexed PSL interrupt set up to take faults
+	 * for kernel contexts that may not have allocated any AFU IRQs at all:
+	 */
+	if (ctx->irqs.range[0] == 0) {
+		ctx->irqs.offset[0] = ctx->afu->native->psl_hwirq;
+		ctx->irqs.range[0] = 1;
+	}
+
+	ctx->elem->common.amr = cpu_to_be64(amr);
+	ctx->elem->common.wed = cpu_to_be64(wed);
+
+	return 0;
+}
+
+int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+	int result;
+
+	/* fill the process element entry */
+	result = process_element_entry_psl9(ctx, wed, amr);
+	if (result)
+		return result;
+
+	update_ivtes_directed(ctx);
+
+	/* first guy needs to enable */
+	result = cxl_ops->afu_check_and_enable(ctx->afu);
+	if (result)
+		return result;
+
+	return add_process_element(ctx);
+}
+
 int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	u32 pid;
@@ -598,7 +707,7 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 	ctx->elem->ctxtime = 0; /* disable */
 	ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
 	ctx->elem->haurp = 0; /* disable */
-	ctx->elem->sdr = cpu_to_be64(mfspr(SPRN_SDR1));
+	ctx->elem->u.sdr = cpu_to_be64(mfspr(SPRN_SDR1));
 
 	pid = current->pid;
 	if (ctx->kernel)
@@ -609,13 +718,13 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 	ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
 
 	ctx->elem->common.csrp = 0; /* disable */
-	ctx->elem->common.aurp0 = 0; /* disable */
-	ctx->elem->common.aurp1 = 0; /* disable */
+	ctx->elem->common.u.psl8.aurp0 = 0; /* disable */
+	ctx->elem->common.u.psl8.aurp1 = 0; /* disable */
 
 	cxl_prefault(ctx, wed);
 
-	ctx->elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
-	ctx->elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
+	ctx->elem->common.u.psl8.sstp0 = cpu_to_be64(ctx->sstp0);
+	ctx->elem->common.u.psl8.sstp1 = cpu_to_be64(ctx->sstp1);
 
 	/*
 	 * Ensure we have the multiplexed PSL interrupt set up to take faults
@@ -681,6 +790,31 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
 	return 0;
 }
 
+int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu)
+{
+	dev_info(&afu->dev, "Activating dedicated process mode\n");
+
+	/* If XSL is set to dedicated mode (Set in PSL_SCNTL reg), the
+	 * XSL and AFU are programmed to work with a single context.
+	 * The context information should be configured in the SPA area
+	 * index 0 (so PSL_SPAP must be configured before enabling the
+	 * AFU).
+	 */
+	afu->num_procs = 1;
+	if (afu->native->spa == NULL) {
+		if (cxl_alloc_spa(afu, CXL_MODE_DEDICATED))
+			return -ENOMEM;
+	}
+	attach_spa(afu);
+
+	cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_Process);
+	cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
+
+	afu->current_mode = CXL_MODE_DEDICATED;
+
+	return cxl_chardev_d_afu_add(afu);
+}
+
 int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
 {
 	dev_info(&afu->dev, "Activating dedicated process mode\n");
@@ -704,6 +838,16 @@ int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
 	return cxl_chardev_d_afu_add(afu);
 }
 
+void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx)
+{
+	int r;
+
+	for (r = 0; r < CXL_IRQ_RANGES; r++) {
+		ctx->elem->ivte_offsets[r] = cpu_to_be16(ctx->irqs.offset[r]);
+		ctx->elem->ivte_ranges[r] = cpu_to_be16(ctx->irqs.range[r]);
+	}
+}
+
 void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
 {
 	struct cxl_afu *afu = ctx->afu;
@@ -720,6 +864,26 @@ void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
 			((u64)ctx->irqs.range[3] & 0xffff));
 }
 
+int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+	struct cxl_afu *afu = ctx->afu;
+	int result;
+
+	/* fill the process element entry */
+	result = process_element_entry_psl9(ctx, wed, amr);
+	if (result)
+		return result;
+
+	if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
+		afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
+
+	result = cxl_ops->afu_reset(afu);
+	if (result)
+		return result;
+
+	return afu_enable(afu);
+}
+
 int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	struct cxl_afu *afu = ctx->afu;
@@ -891,6 +1055,21 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
 	return 0;
 }
 
+void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx)
+{
+	u64 fir1, fir2, serr;
+
+	fir1 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR1);
+	fir2 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR2);
+
+	dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%016llx\n", fir1);
+	dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%016llx\n", fir2);
+	if (ctx->afu->adapter->native->sl_ops->register_serr_irq) {
+		serr = cxl_p1n_read(ctx->afu, CXL_PSL_SERR_An);
+		cxl_afu_decode_psl_serr(ctx->afu, serr);
+	}
+}
+
 void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
 {
 	u64 fir1, fir2, fir_slice, serr, afu_debug;
@@ -927,9 +1106,20 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
 	return cxl_ops->ack_irq(ctx, 0, errstat);
 }
 
+static bool cxl_is_translation_fault(struct cxl_afu *afu, u64 dsisr)
+{
+	if ((cxl_is_psl8(afu)) && (dsisr & CXL_PSL_DSISR_TRANS))
+		return true;
+
+	if ((cxl_is_psl9(afu)) && (dsisr & CXL_PSL9_DSISR_An_TF))
+		return true;
+
+	return false;
+}
+
 irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
 {
-	if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
+	if (cxl_is_translation_fault(afu, irq_info->dsisr))
 		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
 	else
 		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
@@ -998,6 +1188,9 @@ static void native_irq_wait(struct cxl_context *ctx)
 		if (cxl_is_psl8(ctx->afu) &&
 		   ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
 			return;
+		if (cxl_is_psl9(ctx->afu) &&
+		   ((dsisr & CXL_PSL9_DSISR_PENDING) == 0))
+			return;
 		/*
 		 * We are waiting for the workqueue to process our
 		 * irq, so need to let that run here.
@@ -1125,7 +1318,13 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
 
 	serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
 	if (cxl_is_power8())
-		serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
+		serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
+	if (cxl_is_power9()) {
+		/* By default, all errors are masked. So don't set all masks.
+		 * Slice errors will be transferred.
+		 */
+		serr = (serr & ~0xff0000007fffffffULL) | (afu->serr_hwirq & 0xffff);
+	}
 	cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
 
 	return 0;
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index a910115..1789ad8 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -60,7 +60,7 @@
 #define CXL_VSEC_PROTOCOL_MASK   0xe0
 #define CXL_VSEC_PROTOCOL_1024TB 0x80
 #define CXL_VSEC_PROTOCOL_512TB  0x40
-#define CXL_VSEC_PROTOCOL_256TB  0x20 /* Power 8 uses this */
+#define CXL_VSEC_PROTOCOL_256TB  0x20 /* Power 8/9 uses this */
 #define CXL_VSEC_PROTOCOL_ENABLE 0x01
 
 #define CXL_READ_VSEC_PSL_REVISION(dev, vsec, dest) \
@@ -326,14 +326,20 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
 
 #define P8_CAPP_UNIT0_ID 0xBA
 #define P8_CAPP_UNIT1_ID 0XBE
+#define P9_CAPP_UNIT0_ID 0xC0
+#define P9_CAPP_UNIT1_ID 0xE0
 
-static u64 get_capp_unit_id(struct device_node *np)
+static u32 get_phb_index(struct device_node *np)
 {
 	u32 phb_index;
 
 	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
-		return 0;
+		return -ENODEV;
+	return phb_index;
+}
 
+static u64 get_capp_unit_id(struct device_node *np, u32 phb_index)
+{
 	/*
 	 * POWER 8:
 	 *  - For chips other than POWER8NVL, we only have CAPP 0,
@@ -352,10 +358,25 @@ static u64 get_capp_unit_id(struct device_node *np)
 			return P8_CAPP_UNIT1_ID;
 	}
 
+	/*
+	 * POWER 9:
+	 *   PEC0 (PHB0). Capp ID = CAPP0 (0b1100_0000)
+	 *   PEC1 (PHB1 - PHB2). No capi mode
+	 *   PEC2 (PHB3 - PHB4 - PHB5): Capi mode on PHB3 only. Capp ID = CAPP1 (0b1110_0000)
+	 */
+	if (cxl_is_power9()) {
+		if (phb_index == 0)
+			return P9_CAPP_UNIT0_ID;
+
+		if (phb_index == 3)
+			return P9_CAPP_UNIT1_ID;
+	}
+
 	return 0;
 }
 
-static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id)
+static int calc_capp_routing(struct pci_dev *dev, u64 *chipid,
+			     u32 *phb_index, u64 *capp_unit_id)
 {
 	struct device_node *np;
 	const __be32 *prop;
@@ -367,8 +388,16 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
 		np = of_get_next_parent(np);
 	if (!np)
 		return -ENODEV;
+
 	*chipid = be32_to_cpup(prop);
-	*capp_unit_id = get_capp_unit_id(np);
+
+	*phb_index = get_phb_index(np);
+	if (*phb_index == -ENODEV) {
+		pr_err("cxl: invalid phb index\n");
+		return -ENODEV;
+	}
+
+	*capp_unit_id = get_capp_unit_id(np, *phb_index);
 	of_node_put(np);
 	if (!*capp_unit_id) {
 		pr_err("cxl: invalid capp unit id\n");
@@ -378,14 +407,97 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
 	return 0;
 }
 
+static int init_implementation_adapter_regs_psl9(struct cxl *adapter, struct pci_dev *dev)
+{
+	u64 xsl_dsnctl, psl_fircntl;
+	u64 chipid;
+	u32 phb_index;
+	u64 capp_unit_id;
+	int rc;
+
+	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
+	if (rc)
+		return rc;
+
+	/* CAPI Identifier bits [0:7]
+	 * bit 61:60 MSI bits --> 0
+	 * bit 59 TVT selector --> 0
+	 */
+	/* Tell XSL where to route data to.
+	 * The field chipid should match the PHB CAPI_CMPM register
+	 */
+	xsl_dsnctl = ((u64)0x2 << (63-7)); /* Bit 57 */
+	xsl_dsnctl |= (capp_unit_id << (63-15));
+
+	/* nMMU_ID Defaults to: b’000001001’*/
+	xsl_dsnctl |= ((u64)0x09 << (63-28));
+
+	if (cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1)) {
+		/* Used to identify CAPI packets which should be sorted into
+		 * the Non-Blocking queues by the PHB. This field should match
+		 * the PHB PBL_NBW_CMPM register
+		 * nbwind=0x03, bits [57:58], must include capi indicator.
+		 * Not supported on P9 DD1.
+		 */
+		xsl_dsnctl |= ((u64)0x03 << (63-47));
+
+		/* Upper 16b address bits of ASB_Notify messages sent to the
+		 * system. Need to match the PHB’s ASN Compare/Mask Register.
+		 * Not supported on P9 DD1.
+		 */
+		xsl_dsnctl |= ((u64)0x04 << (63-55));
+	}
+
+	cxl_p1_write(adapter, CXL_XSL9_DSNCTL, xsl_dsnctl);
+
+	/* Set fir_cntl to recommended value for production env */
+	psl_fircntl = (0x2ULL << (63-3)); /* ce_report */
+	psl_fircntl |= (0x1ULL << (63-6)); /* FIR_report */
+	psl_fircntl |= 0x1ULL; /* ce_thresh */
+	cxl_p1_write(adapter, CXL_PSL9_FIR_CNTL, psl_fircntl);
+
+	/* vccredits=0x1  pcklat=0x4 */
+	cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0000000000001810ULL);
+
+	/* For debugging with trace arrays.
+	 * Configure RX trace 0 segmented mode.
+	 * Configure CT trace 0 segmented mode.
+	 * Configure LA0 trace 0 segmented mode.
+	 * Configure LA1 trace 0 segmented mode.
+	 */
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000000ULL);
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000003ULL);
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000005ULL);
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000006ULL);
+
+	/* A response to an ASB_Notify request is returned by the
+	 * system as an MMIO write to the address defined in
+	 * the PSL_TNR_ADDR register
+	 */
+	/* PSL_TNR_ADDR */
+
+	/* NORST */
+	cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x8000000000000000ULL);
+
+	/* allocate the apc machines */
+	cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x40000003FFFF0000ULL);
+
+	/* Disable vc dd1 fix */
+	if ((cxl_is_power9() && cpu_has_feature(CPU_FTR_POWER9_DD1)))
+		cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0400000000000001ULL);
+
+	return 0;
+}
+
 static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
 {
 	u64 psl_dsnctl, psl_fircntl;
 	u64 chipid;
+	u32 phb_index;
 	u64 capp_unit_id;
 	int rc;
 
-	rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
+	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
 	if (rc)
 		return rc;
 
@@ -414,10 +526,11 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
 {
 	u64 xsl_dsnctl;
 	u64 chipid;
+	u32 phb_index;
 	u64 capp_unit_id;
 	int rc;
 
-	rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
+	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
 	if (rc)
 		return rc;
 
@@ -435,6 +548,12 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
 /* For the PSL this is a multiple for 0 < n <= 7: */
 #define PSL_2048_250MHZ_CYCLES 1
 
+static void write_timebase_ctrl_psl9(struct cxl *adapter)
+{
+	cxl_p1_write(adapter, CXL_PSL9_TB_CTLSTAT,
+		     TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
+}
+
 static void write_timebase_ctrl_psl8(struct cxl *adapter)
 {
 	cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
@@ -456,6 +575,11 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
 		     TBSYNC_CNT(XSL_4000_CLOCKS));
 }
 
+static u64 timebase_read_psl9(struct cxl *adapter)
+{
+	return cxl_p1_read(adapter, CXL_PSL9_Timebase);
+}
+
 static u64 timebase_read_psl8(struct cxl *adapter)
 {
 	return cxl_p1_read(adapter, CXL_PSL_Timebase);
@@ -514,6 +638,11 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
 	return;
 }
 
+static int init_implementation_afu_regs_psl9(struct cxl_afu *afu)
+{
+	return 0;
+}
+
 static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
 {
 	/* read/write masks for this slice */
@@ -612,7 +741,7 @@ static int setup_cxl_bars(struct pci_dev *dev)
 	/*
 	 * BAR 4/5 has a special meaning for CXL and must be programmed with a
 	 * special value corresponding to the CXL protocol address range.
-	 * For POWER 8 that means bits 48:49 must be set to 10
+	 * For POWER 8/9 that means bits 48:49 must be set to 10
 	 */
 	pci_write_config_dword(dev, PCI_BASE_ADDRESS_4, 0x00000000);
 	pci_write_config_dword(dev, PCI_BASE_ADDRESS_5, 0x00020000);
@@ -997,6 +1126,52 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
 	return 0;
 }
 
+static int sanitise_afu_regs_psl9(struct cxl_afu *afu)
+{
+	u64 reg;
+
+	/*
+	 * Clear out any regs that contain either an IVTE or address or may be
+	 * waiting on an acknowledgment to try to be a bit safer as we bring
+	 * it online
+	 */
+	reg = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
+	if ((reg & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) {
+		dev_warn(&afu->dev, "WARNING: AFU was not disabled: %#016llx\n", reg);
+		if (cxl_ops->afu_reset(afu))
+			return -EIO;
+		if (cxl_afu_disable(afu))
+			return -EIO;
+		if (cxl_psl_purge(afu))
+			return -EIO;
+	}
+	cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0x0000000000000000);
+	cxl_p1n_write(afu, CXL_PSL_AMBAR_An, 0x0000000000000000);
+	reg = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
+	if (reg) {
+		dev_warn(&afu->dev, "AFU had pending DSISR: %#016llx\n", reg);
+		if (reg & CXL_PSL9_DSISR_An_TF)
+			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
+		else
+			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
+	}
+	if (afu->adapter->native->sl_ops->register_serr_irq) {
+		reg = cxl_p1n_read(afu, CXL_PSL_SERR_An);
+		if (reg) {
+			if (reg & ~0x000000007fffffff)
+				dev_warn(&afu->dev, "AFU had pending SERR: %#016llx\n", reg);
+			cxl_p1n_write(afu, CXL_PSL_SERR_An, reg & ~0xffff);
+		}
+	}
+	reg = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
+	if (reg) {
+		dev_warn(&afu->dev, "AFU had pending error status: %#016llx\n", reg);
+		cxl_p2n_write(afu, CXL_PSL_ErrStat_An, reg);
+	}
+
+	return 0;
+}
+
 static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
 {
 	u64 reg;
@@ -1254,10 +1429,10 @@ int cxl_pci_reset(struct cxl *adapter)
 
 	/*
 	 * The adapter is about to be reset, so ignore errors.
-	 * Not supported on P9 DD1 but don't forget to enable it
-	 * on P9 DD2
+	 * Not supported on P9 DD1
 	 */
-	if (cxl_is_power8())
+	if ((cxl_is_power8()) ||
+	    ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
 		cxl_data_cache_flush(adapter);
 
 	/* pcie_warm_reset requests a fundamental pci reset which includes a
@@ -1393,6 +1568,9 @@ static bool cxl_compatible_caia_version(struct cxl *adapter)
 	if (cxl_is_power8() && (adapter->caia_major == 1))
 		return true;
 
+	if (cxl_is_power9() && (adapter->caia_major == 2))
+		return true;
+
 	return false;
 }
 
@@ -1460,8 +1638,12 @@ static int sanitise_adapter_regs(struct cxl *adapter)
 	/* Clear PSL tberror bit by writing 1 to it */
 	cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror);
 
-	if (adapter->native->sl_ops->invalidate_all)
+	if (adapter->native->sl_ops->invalidate_all) {
+		/* do not invalidate ERAT entries when not reloading on PERST */
+		if (cxl_is_power9() && (adapter->perst_loads_image))
+			return 0;
 		rc = adapter->native->sl_ops->invalidate_all(adapter);
+	}
 
 	return rc;
 }
@@ -1546,6 +1728,30 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
 	pci_disable_device(pdev);
 }
 
+static const struct cxl_service_layer_ops psl9_ops = {
+	.adapter_regs_init = init_implementation_adapter_regs_psl9,
+	.invalidate_all = cxl_invalidate_all_psl9,
+	.afu_regs_init = init_implementation_afu_regs_psl9,
+	.sanitise_afu_regs = sanitise_afu_regs_psl9,
+	.register_serr_irq = cxl_native_register_serr_irq,
+	.release_serr_irq = cxl_native_release_serr_irq,
+	.handle_interrupt = cxl_irq_psl9,
+	.fail_irq = cxl_fail_irq_psl,
+	.activate_dedicated_process = cxl_activate_dedicated_process_psl9,
+	.attach_afu_directed = cxl_attach_afu_directed_psl9,
+	.attach_dedicated_process = cxl_attach_dedicated_process_psl9,
+	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl9,
+	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl9,
+	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl9,
+	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
+	.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
+	.debugfs_stop_trace = cxl_stop_trace_psl9,
+	.write_timebase_ctrl = write_timebase_ctrl_psl9,
+	.timebase_read = timebase_read_psl9,
+	.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
+	.needs_reset_before_disable = true,
+};
+
 static const struct cxl_service_layer_ops psl8_ops = {
 	.adapter_regs_init = init_implementation_adapter_regs_psl8,
 	.invalidate_all = cxl_invalidate_all_psl8,
@@ -1597,6 +1803,9 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
 		if (cxl_is_power8()) {
 			dev_info(&dev->dev, "Device uses a PSL8\n");
 			adapter->native->sl_ops = &psl8_ops;
+		} else {
+			dev_info(&dev->dev, "Device uses a PSL9\n");
+			adapter->native->sl_ops = &psl9_ops;
 		}
 	}
 }
@@ -1667,8 +1876,12 @@ static void cxl_pci_remove_adapter(struct cxl *adapter)
 	cxl_sysfs_adapter_remove(adapter);
 	cxl_debugfs_adapter_remove(adapter);
 
-	/* Flush adapter datacache as its about to be removed */
-	cxl_data_cache_flush(adapter);
+	/* Flush adapter datacache as it's about to be removed.
+	 * Not supported on P9 DD1
+	 */
+	if ((cxl_is_power8()) ||
+	    ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
+		cxl_data_cache_flush(adapter);
 
 	cxl_deconfigure_adapter(adapter);
 
@@ -1752,6 +1965,11 @@ static int cxl_probe(struct pci_dev *dev, const struct pci_device_id *id)
 		return -ENODEV;
 	}
 
+	if (cxl_is_power9() && !radix_enabled()) {
+		dev_info(&dev->dev, "Only Radix mode supported\n");
+		return -ENODEV;
+	}
+
 	if (cxl_verbose)
 		dump_cxl_config_space(dev);
 
diff --git a/drivers/misc/cxl/trace.h b/drivers/misc/cxl/trace.h
index 751d611..b8e300a 100644
--- a/drivers/misc/cxl/trace.h
+++ b/drivers/misc/cxl/trace.h
@@ -17,6 +17,15 @@
 
 #include "cxl.h"
 
+#define dsisr_psl9_flags(flags) \
+	__print_flags(flags, "|", \
+		{ CXL_PSL9_DSISR_An_CO_MASK,	"FR" }, \
+		{ CXL_PSL9_DSISR_An_TF,		"TF" }, \
+		{ CXL_PSL9_DSISR_An_PE,		"PE" }, \
+		{ CXL_PSL9_DSISR_An_AE,		"AE" }, \
+		{ CXL_PSL9_DSISR_An_OC,		"OC" }, \
+		{ CXL_PSL9_DSISR_An_S,		"S" })
+
 #define DSISR_FLAGS \
 	{ CXL_PSL_DSISR_An_DS,	"DS" }, \
 	{ CXL_PSL_DSISR_An_DM,	"DM" }, \
@@ -154,6 +163,40 @@ TRACE_EVENT(cxl_afu_irq,
 	)
 );
 
+TRACE_EVENT(cxl_psl9_irq,
+	TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
+
+	TP_ARGS(ctx, irq, dsisr, dar),
+
+	TP_STRUCT__entry(
+		__field(u8, card)
+		__field(u8, afu)
+		__field(u16, pe)
+		__field(int, irq)
+		__field(u64, dsisr)
+		__field(u64, dar)
+	),
+
+	TP_fast_assign(
+		__entry->card = ctx->afu->adapter->adapter_num;
+		__entry->afu = ctx->afu->slice;
+		__entry->pe = ctx->pe;
+		__entry->irq = irq;
+		__entry->dsisr = dsisr;
+		__entry->dar = dar;
+	),
+
+	TP_printk("afu%i.%i pe=%i irq=%i dsisr=0x%016llx dsisr=%s dar=0x%016llx",
+		__entry->card,
+		__entry->afu,
+		__entry->pe,
+		__entry->irq,
+		__entry->dsisr,
+		dsisr_psl9_flags(__entry->dsisr),
+		__entry->dar
+	)
+);
+
 TRACE_EVENT(cxl_psl_irq,
 	TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 30+ messages in thread
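A note on the register arithmetic in init_implementation_adapter_regs_psl9() above: the shifts of the form (63 - n) follow the MSB-0 bit numbering used by the CAIA/PSL documentation, where bit 0 is the most significant bit of a 64-bit register. The sketch below only restates that convention; the IBM_FIELD() helper is illustrative and not part of the patch (it is similar in spirit to the PPC_BIT()/PPC_BITMASK() helpers in the powerpc headers).

#include <linux/types.h>

/* Illustrative only: MSB-0 helpers mirroring the (63 - n) shifts above. */
#define IBM_BIT(n)		(1ULL << (63 - (n)))
#define IBM_FIELD(val, n)	((u64)(val) << (63 - (n)))	/* LSB of 'val' lands on MSB-0 bit n */

static u64 example_xsl_dsnctl(u64 capp_unit_id)
{
	u64 xsl_dsnctl = 0;

	xsl_dsnctl |= IBM_FIELD(0x2, 7);		/* same as ((u64)0x2 << (63-7)) */
	xsl_dsnctl |= IBM_FIELD(capp_unit_id, 15);	/* same as (capp_unit_id << (63-15)) */
	xsl_dsnctl |= IBM_FIELD(0x09, 28);		/* nMMU_ID default, a 9-bit field ending at bit 28 */
	return xsl_dsnctl;
}
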

* Re: [PATCH V4 1/7] cxl: Read vsec perst load image
  2017-04-07 14:11 ` [PATCH V4 1/7] cxl: Read vsec perst load image Christophe Lombard
@ 2017-04-10  4:00   ` Andrew Donnellan
  2017-04-10 16:40   ` Frederic Barrat
  2017-04-19  3:47   ` [V4,1/7] " Michael Ellerman
  2 siblings, 0 replies; 30+ messages in thread
From: Andrew Donnellan @ 2017-04-10  4:00 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie

Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>

On 08/04/17 00:11, Christophe Lombard wrote:
> This bit is used to cause a flash image load for a programmable
> CAIA-compliant implementation. If this bit is set to ‘0’, a power
> cycle of the adapter is required to load a programmable CAIA-compliant
> implementation from flash.
> This field will be used by the following patches.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---
>  drivers/misc/cxl/pci.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index b27ea98..1f4c351 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -1332,6 +1332,7 @@ static int cxl_read_vsec(struct cxl *adapter, struct pci_dev *dev)
>  	CXL_READ_VSEC_IMAGE_STATE(dev, vsec, &image_state);
>  	adapter->user_image_loaded = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
>  	adapter->perst_select_user = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
> +	adapter->perst_loads_image = !!(image_state & CXL_VSEC_PERST_LOADS_IMAGE);
>
>  	CXL_READ_VSEC_NAFUS(dev, vsec, &adapter->slices);
>  	CXL_READ_VSEC_AFU_DESC_OFF(dev, vsec, &afu_desc_off);
>

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited

^ permalink raw reply	[flat|nested] 30+ messages in thread
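For readers following the series: the perst_loads_image flag read in the hunk quoted above is consumed later in the series, when deciding whether ERAT entries need to be invalidated after a PERST on PSL9. Condensed from the sanitise_adapter_regs() change posted elsewhere in this thread (shown here only for cross-reference, not new code):

	if (adapter->native->sl_ops->invalidate_all) {
		/* do not invalidate ERAT entries when not reloading on PERST */
		if (cxl_is_power9() && (adapter->perst_loads_image))
			return 0;
		rc = adapter->native->sl_ops->invalidate_all(adapter);
	}
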

* Re: [PATCH V4 2/7] cxl: Remove unused values in bare-metal environment.
  2017-04-07 14:11 ` [PATCH V4 2/7] cxl: Remove unused values in bare-metal environment Christophe Lombard
@ 2017-04-10  5:25   ` Andrew Donnellan
  2017-04-10 16:41   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Donnellan @ 2017-04-10  5:25 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie

On 08/04/17 00:11, Christophe Lombard wrote:
> The two fields pid and tid, located in the structure cxl_irq_info,
> are only used in the guest environment. To avoid confusion, they are
> no longer filled in the bare-metal environment.
> Pid_tid is now renamed to 'reserved' to avoid undefined behavior on
> bare-metal. The PSL Process and Thread Identification Register
> (CXL_PSL_PID_TID_An) is only used when attaching a dedicated process,
> and only for PSL8. This register goes away in CAIA2.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>

Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>

> ---
>  drivers/misc/cxl/cxl.h    | 20 ++++----------------
>  drivers/misc/cxl/hcalls.c |  6 +++---
>  drivers/misc/cxl/native.c |  5 -----
>  3 files changed, 7 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 79e60ec..36bc213 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -888,27 +888,15 @@ int __detach_context(struct cxl_context *ctx);
>  /*
>   * This must match the layout of the H_COLLECT_CA_INT_INFO retbuf defined
>   * in PAPR.
> - * A word about endianness: a pointer to this structure is passed when
> - * calling the hcall. However, it is not a block of memory filled up by
> - * the hypervisor. The return values are found in registers, and copied
> - * one by one when returning from the hcall. See the end of the call to
> - * plpar_hcall9() in hvCall.S
> - * As a consequence:
> - * - we don't need to do any endianness conversion
> - * - the pid and tid are an exception. They are 32-bit values returned in
> - *   the same 64-bit register. So we do need to worry about byte ordering.
> + * Field pid_tid is now 'reserved' because it's no more used on bare-metal.
> + * On a guest environment, PSL_PID_An is located on the upper 32 bits and
> + * PSL_TID_An register in the lower 32 bits.
>   */
>  struct cxl_irq_info {
>  	u64 dsisr;
>  	u64 dar;
>  	u64 dsr;
> -#ifndef CONFIG_CPU_LITTLE_ENDIAN
> -	u32 pid;
> -	u32 tid;
> -#else
> -	u32 tid;
> -	u32 pid;
> -#endif
> +	u64 reserved;
>  	u64 afu_err;
>  	u64 errstat;
>  	u64 proc_handle;
> diff --git a/drivers/misc/cxl/hcalls.c b/drivers/misc/cxl/hcalls.c
> index d6d11f4..9b8bb0f 100644
> --- a/drivers/misc/cxl/hcalls.c
> +++ b/drivers/misc/cxl/hcalls.c
> @@ -413,9 +413,9 @@ long cxl_h_collect_int_info(u64 unit_address, u64 process_token,
>
>  	switch (rc) {
>  	case H_SUCCESS:     /* The interrupt info is returned in return registers. */
> -		pr_devel("dsisr:%#llx, dar:%#llx, dsr:%#llx, pid:%u, tid:%u, afu_err:%#llx, errstat:%#llx\n",
> -			info->dsisr, info->dar, info->dsr, info->pid,
> -			info->tid, info->afu_err, info->errstat);
> +		pr_devel("dsisr:%#llx, dar:%#llx, dsr:%#llx, pid_tid:%#llx, afu_err:%#llx, errstat:%#llx\n",
> +			info->dsisr, info->dar, info->dsr, info->reserved,
> +			info->afu_err, info->errstat);
>  		return 0;
>  	case H_PARAMETER:   /* An incorrect parameter was supplied. */
>  		return -EINVAL;
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index 7ae7105..7257e8b 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -859,8 +859,6 @@ static int native_detach_process(struct cxl_context *ctx)
>
>  static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>  {
> -	u64 pidtid;
> -
>  	/* If the adapter has gone away, we can't get any meaningful
>  	 * information.
>  	 */
> @@ -870,9 +868,6 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>  	info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
>  	info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
>  	info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
> -	pidtid = cxl_p2n_read(afu, CXL_PSL_PID_TID_An);
> -	info->pid = pidtid >> 32;
> -	info->tid = pidtid & 0xffffffff;
>  	info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
>  	info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
>  	info->proc_handle = 0;
>

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited

^ permalink raw reply	[flat|nested] 30+ messages in thread
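If the combined value ever needs to be split on the guest side, the layout described in the updated comment (PSL_PID_An in the upper 32 bits, PSL_TID_An in the lower 32 bits) maps back to the old fields as sketched below. The helper is hypothetical and only mirrors the code removed from native_get_irq_info():

static void split_pid_tid(const struct cxl_irq_info *info, u32 *pid, u32 *tid)
{
	*pid = info->reserved >> 32;		/* PSL_PID_An: upper 32 bits */
	*tid = info->reserved & 0xffffffff;	/* PSL_TID_An: lower 32 bits */
}
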

* Re: [PATCH V4 3/7] cxl: Keep track of mm struct associated with a context
  2017-04-07 14:11 ` [PATCH V4 3/7] cxl: Keep track of mm struct associated with a context Christophe Lombard
@ 2017-04-10  5:38   ` Andrew Donnellan
  2017-04-10 16:49   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Donnellan @ 2017-04-10  5:38 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie

On 08/04/17 00:11, Christophe Lombard wrote:
> The mm_struct corresponding to the current task is acquired each time
> an interrupt is raised. So to simplify the code, we only get the
> mm_struct when attaching an AFU context to the process.
> The mm_count reference is increased to ensure that the mm_struct can't
> be freed. The mm_struct will be released when the context is detached.
> A reference on mm_users is not kept to avoid a circular dependency if
> the process mmaps its cxl mmio and forgets to unmap before exiting.
> The field glpid (pid of the group leader associated with the pid) of
> the structure cxl_context is removed because it's no longer useful.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>

Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>

> ---
>  drivers/misc/cxl/api.c     | 17 +++++++++--
>  drivers/misc/cxl/context.c | 21 +++++++++++--
>  drivers/misc/cxl/cxl.h     | 10 ++++--
>  drivers/misc/cxl/fault.c   | 76 ++++------------------------------------------
>  drivers/misc/cxl/file.c    | 15 +++++++--
>  drivers/misc/cxl/main.c    | 12 ++------
>  6 files changed, 61 insertions(+), 90 deletions(-)
>
> diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
> index bcc030e..1a138c8 100644
> --- a/drivers/misc/cxl/api.c
> +++ b/drivers/misc/cxl/api.c
> @@ -14,6 +14,7 @@
>  #include <linux/msi.h>
>  #include <linux/module.h>
>  #include <linux/mount.h>
> +#include <linux/sched/mm.h>
>
>  #include "cxl.h"
>
> @@ -321,19 +322,29 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
>
>  	if (task) {
>  		ctx->pid = get_task_pid(task, PIDTYPE_PID);
> -		ctx->glpid = get_task_pid(task->group_leader, PIDTYPE_PID);
>  		kernel = false;
>  		ctx->real_mode = false;
> +
> +		/* acquire a reference to the task's mm */
> +		ctx->mm = get_task_mm(current);
> +
> +		/* ensure this mm_struct can't be freed */
> +		cxl_context_mm_count_get(ctx);
> +
> +		/* decrement the use count */
> +		if (ctx->mm)
> +			mmput(ctx->mm);
>  	}
>
>  	cxl_ctx_get();
>
>  	if ((rc = cxl_ops->attach_process(ctx, kernel, wed, 0))) {
> -		put_pid(ctx->glpid);
>  		put_pid(ctx->pid);
> -		ctx->glpid = ctx->pid = NULL;
> +		ctx->pid = NULL;
>  		cxl_adapter_context_put(ctx->afu->adapter);
>  		cxl_ctx_put();
> +		if (task)
> +			cxl_context_mm_count_put(ctx);
>  		goto out;
>  	}
>
> diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
> index 062bf6c..2e935ea 100644
> --- a/drivers/misc/cxl/context.c
> +++ b/drivers/misc/cxl/context.c
> @@ -17,6 +17,7 @@
>  #include <linux/debugfs.h>
>  #include <linux/slab.h>
>  #include <linux/idr.h>
> +#include <linux/sched/mm.h>
>  #include <asm/cputable.h>
>  #include <asm/current.h>
>  #include <asm/copro.h>
> @@ -41,7 +42,7 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
>  	spin_lock_init(&ctx->sste_lock);
>  	ctx->afu = afu;
>  	ctx->master = master;
> -	ctx->pid = ctx->glpid = NULL; /* Set in start work ioctl */
> +	ctx->pid = NULL; /* Set in start work ioctl */
>  	mutex_init(&ctx->mapping_lock);
>  	ctx->mapping = NULL;
>
> @@ -242,12 +243,16 @@ int __detach_context(struct cxl_context *ctx)
>
>  	/* release the reference to the group leader and mm handling pid */
>  	put_pid(ctx->pid);
> -	put_pid(ctx->glpid);
>
>  	cxl_ctx_put();
>
>  	/* Decrease the attached context count on the adapter */
>  	cxl_adapter_context_put(ctx->afu->adapter);
> +
> +	/* Decrease the mm count on the context */
> +	cxl_context_mm_count_put(ctx);
> +	ctx->mm = NULL;
> +
>  	return 0;
>  }
>
> @@ -325,3 +330,15 @@ void cxl_context_free(struct cxl_context *ctx)
>  	mutex_unlock(&ctx->afu->contexts_lock);
>  	call_rcu(&ctx->rcu, reclaim_ctx);
>  }
> +
> +void cxl_context_mm_count_get(struct cxl_context *ctx)
> +{
> +	if (ctx->mm)
> +		atomic_inc(&ctx->mm->mm_count);
> +}
> +
> +void cxl_context_mm_count_put(struct cxl_context *ctx)
> +{
> +	if (ctx->mm)
> +		mmdrop(ctx->mm);
> +}
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 36bc213..4bcbf7a 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -482,8 +482,6 @@ struct cxl_context {
>  	unsigned int sst_size, sst_lru;
>
>  	wait_queue_head_t wq;
> -	/* pid of the group leader associated with the pid */
> -	struct pid *glpid;
>  	/* use mm context associated with this pid for ds faults */
>  	struct pid *pid;
>  	spinlock_t lock; /* Protects pending_irq_mask, pending_fault and fault_addr */
> @@ -551,6 +549,8 @@ struct cxl_context {
>  	 * CX4 only:
>  	 */
>  	struct list_head extra_irq_contexts;
> +
> +	struct mm_struct *mm;
>  };
>
>  struct cxl_service_layer_ops {
> @@ -1012,4 +1012,10 @@ int cxl_adapter_context_lock(struct cxl *adapter);
>  /* Unlock the contexts-lock if taken. Warn and force unlock otherwise */
>  void cxl_adapter_context_unlock(struct cxl *adapter);
>
> +/* Increases the reference count to "struct mm_struct" */
> +void cxl_context_mm_count_get(struct cxl_context *ctx);
> +
> +/* Decrements the reference count to "struct mm_struct" */
> +void cxl_context_mm_count_put(struct cxl_context *ctx);
> +
>  #endif
> diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
> index 2fa015c..e6f8f05 100644
> --- a/drivers/misc/cxl/fault.c
> +++ b/drivers/misc/cxl/fault.c
> @@ -170,81 +170,18 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
>  }
>
>  /*
> - * Returns the mm_struct corresponding to the context ctx via ctx->pid
> - * In case the task has exited we use the task group leader accessible
> - * via ctx->glpid to find the next task in the thread group that has a
> - * valid  mm_struct associated with it. If a task with valid mm_struct
> - * is found the ctx->pid is updated to use the task struct for subsequent
> - * translations. In case no valid mm_struct is found in the task group to
> - * service the fault a NULL is returned.
> + * Returns the mm_struct corresponding to the context ctx.
> + * mm_users == 0, the context may be in the process of being closed.
>   */
>  static struct mm_struct *get_mem_context(struct cxl_context *ctx)
>  {
> -	struct task_struct *task = NULL;
> -	struct mm_struct *mm = NULL;
> -	struct pid *old_pid = ctx->pid;
> -
> -	if (old_pid == NULL) {
> -		pr_warn("%s: Invalid context for pe=%d\n",
> -			 __func__, ctx->pe);
> +	if (ctx->mm == NULL)
>  		return NULL;
> -	}
> -
> -	task = get_pid_task(old_pid, PIDTYPE_PID);
> -
> -	/*
> -	 * pid_alive may look racy but this saves us from costly
> -	 * get_task_mm when the task is a zombie. In worst case
> -	 * we may think a task is alive, which is about to die
> -	 * but get_task_mm will return NULL.
> -	 */
> -	if (task != NULL && pid_alive(task))
> -		mm = get_task_mm(task);
>
> -	/* release the task struct that was taken earlier */
> -	if (task)
> -		put_task_struct(task);
> -	else
> -		pr_devel("%s: Context owning pid=%i for pe=%i dead\n",
> -			__func__, pid_nr(old_pid), ctx->pe);
> -
> -	/*
> -	 * If we couldn't find the mm context then use the group
> -	 * leader to iterate over the task group and find a task
> -	 * that gives us mm_struct.
> -	 */
> -	if (unlikely(mm == NULL && ctx->glpid != NULL)) {
> -
> -		rcu_read_lock();
> -		task = pid_task(ctx->glpid, PIDTYPE_PID);
> -		if (task)
> -			do {
> -				mm = get_task_mm(task);
> -				if (mm) {
> -					ctx->pid = get_task_pid(task,
> -								PIDTYPE_PID);
> -					break;
> -				}
> -				task = next_thread(task);
> -			} while (task && !thread_group_leader(task));
> -		rcu_read_unlock();
> -
> -		/* check if we switched pid */
> -		if (ctx->pid != old_pid) {
> -			if (mm)
> -				pr_devel("%s:pe=%i switch pid %i->%i\n",
> -					 __func__, ctx->pe, pid_nr(old_pid),
> -					 pid_nr(ctx->pid));
> -			else
> -				pr_devel("%s:Cannot find mm for pid=%i\n",
> -					 __func__, pid_nr(old_pid));
> -
> -			/* drop the reference to older pid */
> -			put_pid(old_pid);
> -		}
> -	}
> +	if (!atomic_inc_not_zero(&ctx->mm->mm_users))
> +		return NULL;
>
> -	return mm;
> +	return ctx->mm;
>  }
>
>
> @@ -282,7 +219,6 @@ void cxl_handle_fault(struct work_struct *fault_work)
>  	if (!ctx->kernel) {
>
>  		mm = get_mem_context(ctx);
> -		/* indicates all the thread in task group have exited */
>  		if (mm == NULL) {
>  			pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
>  				 __func__, ctx->pe, pid_nr(ctx->pid));
> diff --git a/drivers/misc/cxl/file.c b/drivers/misc/cxl/file.c
> index e7139c7..17b433f 100644
> --- a/drivers/misc/cxl/file.c
> +++ b/drivers/misc/cxl/file.c
> @@ -18,6 +18,7 @@
>  #include <linux/fs.h>
>  #include <linux/mm.h>
>  #include <linux/slab.h>
> +#include <linux/sched/mm.h>
>  #include <asm/cputable.h>
>  #include <asm/current.h>
>  #include <asm/copro.h>
> @@ -216,8 +217,16 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
>  	 * process is still accessible.
>  	 */
>  	ctx->pid = get_task_pid(current, PIDTYPE_PID);
> -	ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);
>
> +	/* acquire a reference to the task's mm */
> +	ctx->mm = get_task_mm(current);
> +
> +	/* ensure this mm_struct can't be freed */
> +	cxl_context_mm_count_get(ctx);
> +
> +	/* decrement the use count */
> +	if (ctx->mm)
> +		mmput(ctx->mm);
>
>  	trace_cxl_attach(ctx, work.work_element_descriptor, work.num_interrupts, amr);
>
> @@ -225,9 +234,9 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
>  							amr))) {
>  		afu_release_irqs(ctx, ctx);
>  		cxl_adapter_context_put(ctx->afu->adapter);
> -		put_pid(ctx->glpid);
>  		put_pid(ctx->pid);
> -		ctx->glpid = ctx->pid = NULL;
> +		ctx->pid = NULL;
> +		cxl_context_mm_count_put(ctx);
>  		goto out;
>  	}
>
> diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
> index b0b6ed3..1703655 100644
> --- a/drivers/misc/cxl/main.c
> +++ b/drivers/misc/cxl/main.c
> @@ -59,16 +59,10 @@ int cxl_afu_slbia(struct cxl_afu *afu)
>
>  static inline void _cxl_slbia(struct cxl_context *ctx, struct mm_struct *mm)
>  {
> -	struct task_struct *task;
>  	unsigned long flags;
> -	if (!(task = get_pid_task(ctx->pid, PIDTYPE_PID))) {
> -		pr_devel("%s unable to get task %i\n",
> -			 __func__, pid_nr(ctx->pid));
> -		return;
> -	}
>
> -	if (task->mm != mm)
> -		goto out_put;
> +	if (ctx->mm != mm)
> +		return;
>
>  	pr_devel("%s matched mm - card: %i afu: %i pe: %i\n", __func__,
>  		 ctx->afu->adapter->adapter_num, ctx->afu->slice, ctx->pe);
> @@ -79,8 +73,6 @@ static inline void _cxl_slbia(struct cxl_context *ctx, struct mm_struct *mm)
>  	spin_unlock_irqrestore(&ctx->sste_lock, flags);
>  	mb();
>  	cxl_afu_slbia(ctx->afu);
> -out_put:
> -	put_task_struct(task);
>  }
>
>  static inline void cxl_slbia_core(struct mm_struct *mm)
>

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited

^ permalink raw reply	[flat|nested] 30+ messages in thread
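The commit message above relies on the distinction between the two mm reference counts: mm_users pins the address space (page tables and mappings), while mm_count only pins the struct mm_struct allocation, which is the lighter reference the context keeps for its lifetime. A minimal sketch of that pattern, using the generic mm helpers rather than the driver's cxl_context_mm_count_get/put() wrappers (mmgrab() is the generic equivalent of the atomic_inc(&mm->mm_count) used in the patch):

#include <linux/sched/mm.h>

/* Sketch only: take a long-lived mm_count reference without holding mm_users. */
static struct mm_struct *pin_mm_struct(void)
{
	struct mm_struct *mm = get_task_mm(current);	/* takes an mm_users reference */

	if (mm) {
		mmgrab(mm);	/* mm_count: keeps the struct mm_struct allocated */
		mmput(mm);	/* drop mm_users so address-space teardown is not held up */
	}
	return mm;		/* released later with mmdrop(mm) */
}
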

* Re: [PATCH V4 5/7] cxl: Rename some psl8 specific functions
  2017-04-07 14:11 ` [PATCH V4 5/7] cxl: Rename some psl8 specific functions Christophe Lombard
@ 2017-04-10  6:14   ` Andrew Donnellan
  2017-04-10 17:06   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Donnellan @ 2017-04-10  6:14 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie

On 08/04/17 00:11, Christophe Lombard wrote:
> Rename a few functions, changing the '_psl' suffix to '_psl8', to make
> clear that the implementation is psl8 specific.
> Those functions will have an equivalent implementation for the psl9 in
> a later patch.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>

Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH V4 4/7] cxl: Update implementation service layer
  2017-04-07 14:11 ` [PATCH V4 4/7] cxl: Update implementation service layer Christophe Lombard
@ 2017-04-10  7:08   ` Andrew Donnellan
  2017-04-10 17:01   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Donnellan @ 2017-04-10  7:08 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie



On 08/04/17 00:11, Christophe Lombard wrote:
> The service layer API (in cxl.h) lists some low-level functions whose
> implementation is different on PSL8, PSL9 and XSL:
> - Init implementation for the adapter and the afu.
> - Invalidate TLB/SLB.
> - Attach process for dedicated/directed models.
> - Handle psl interrupts.
> - Debug registers for the adapter and the afu.
> - Traces.
> Each environment implements its own functions, and the common code uses
> them through function pointers, defined in cxl_service_layer_ops.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>

Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited

^ permalink raw reply	[flat|nested] 30+ messages in thread
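As a compact restatement of the dispatch pattern the commit message describes, the common code selects an ops table once per adapter and then only calls through the function pointers. Both fragments below are condensed from hunks posted elsewhere in this series, not new API:

	/* selection, per adapter (from set_sl_ops() in pci.c) */
	if (cxl_is_power8()) {
		adapter->native->sl_ops = &psl8_ops;
	} else {
		adapter->native->sl_ops = &psl9_ops;
	}

	/* dispatch, from common code such as the debugfs setup */
	if (adapter->native->sl_ops->debugfs_add_adapter_regs)
		adapter->native->sl_ops->debugfs_add_adapter_regs(adapter, dir);
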

* Re: [PATCH V4 1/7] cxl: Read vsec perst load image
  2017-04-07 14:11 ` [PATCH V4 1/7] cxl: Read vsec perst load image Christophe Lombard
  2017-04-10  4:00   ` Andrew Donnellan
@ 2017-04-10 16:40   ` Frederic Barrat
  2017-04-19  3:47   ` [V4,1/7] " Michael Ellerman
  2 siblings, 0 replies; 30+ messages in thread
From: Frederic Barrat @ 2017-04-10 16:40 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, imunsie, andrew.donnellan



On 07/04/2017 at 16:11, Christophe Lombard wrote:
> This bit is used to cause a flash image load for a programmable
> CAIA-compliant implementation. If this bit is set to ‘0’, a power
> cycle of the adapter is required to load a programmable CAIA-compliant
> implementation from flash.
> This field will be used by the following patches.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---

Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>


>  drivers/misc/cxl/pci.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index b27ea98..1f4c351 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -1332,6 +1332,7 @@ static int cxl_read_vsec(struct cxl *adapter, struct pci_dev *dev)
>  	CXL_READ_VSEC_IMAGE_STATE(dev, vsec, &image_state);
>  	adapter->user_image_loaded = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
>  	adapter->perst_select_user = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
> +	adapter->perst_loads_image = !!(image_state & CXL_VSEC_PERST_LOADS_IMAGE);
>
>  	CXL_READ_VSEC_NAFUS(dev, vsec, &adapter->slices);
>  	CXL_READ_VSEC_AFU_DESC_OFF(dev, vsec, &afu_desc_off);
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH V4 2/7] cxl: Remove unused values in bare-metal environment.
  2017-04-07 14:11 ` [PATCH V4 2/7] cxl: Remove unused values in bare-metal environment Christophe Lombard
  2017-04-10  5:25   ` Andrew Donnellan
@ 2017-04-10 16:41   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Frederic Barrat @ 2017-04-10 16:41 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, imunsie, andrew.donnellan



On 07/04/2017 at 16:11, Christophe Lombard wrote:
> The two fields pid and tid, located in the structure cxl_irq_info,
> are only used in the guest environment. To avoid confusion, they are
> no longer filled in the bare-metal environment.
> Pid_tid is now renamed to 'reserved' to avoid undefined behavior on
> bare-metal. The PSL Process and Thread Identification Register
> (CXL_PSL_PID_TID_An) is only used when attaching a dedicated process,
> and only for PSL8. This register goes away in CAIA2.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---

Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>



>  drivers/misc/cxl/cxl.h    | 20 ++++----------------
>  drivers/misc/cxl/hcalls.c |  6 +++---
>  drivers/misc/cxl/native.c |  5 -----
>  3 files changed, 7 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 79e60ec..36bc213 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -888,27 +888,15 @@ int __detach_context(struct cxl_context *ctx);
>  /*
>   * This must match the layout of the H_COLLECT_CA_INT_INFO retbuf defined
>   * in PAPR.
> - * A word about endianness: a pointer to this structure is passed when
> - * calling the hcall. However, it is not a block of memory filled up by
> - * the hypervisor. The return values are found in registers, and copied
> - * one by one when returning from the hcall. See the end of the call to
> - * plpar_hcall9() in hvCall.S
> - * As a consequence:
> - * - we don't need to do any endianness conversion
> - * - the pid and tid are an exception. They are 32-bit values returned in
> - *   the same 64-bit register. So we do need to worry about byte ordering.
> + * Field pid_tid is now 'reserved' because it's no more used on bare-metal.
> + * On a guest environment, PSL_PID_An is located on the upper 32 bits and
> + * PSL_TID_An register in the lower 32 bits.
>   */
>  struct cxl_irq_info {
>  	u64 dsisr;
>  	u64 dar;
>  	u64 dsr;
> -#ifndef CONFIG_CPU_LITTLE_ENDIAN
> -	u32 pid;
> -	u32 tid;
> -#else
> -	u32 tid;
> -	u32 pid;
> -#endif
> +	u64 reserved;
>  	u64 afu_err;
>  	u64 errstat;
>  	u64 proc_handle;
> diff --git a/drivers/misc/cxl/hcalls.c b/drivers/misc/cxl/hcalls.c
> index d6d11f4..9b8bb0f 100644
> --- a/drivers/misc/cxl/hcalls.c
> +++ b/drivers/misc/cxl/hcalls.c
> @@ -413,9 +413,9 @@ long cxl_h_collect_int_info(u64 unit_address, u64 process_token,
>
>  	switch (rc) {
>  	case H_SUCCESS:     /* The interrupt info is returned in return registers. */
> -		pr_devel("dsisr:%#llx, dar:%#llx, dsr:%#llx, pid:%u, tid:%u, afu_err:%#llx, errstat:%#llx\n",
> -			info->dsisr, info->dar, info->dsr, info->pid,
> -			info->tid, info->afu_err, info->errstat);
> +		pr_devel("dsisr:%#llx, dar:%#llx, dsr:%#llx, pid_tid:%#llx, afu_err:%#llx, errstat:%#llx\n",
> +			info->dsisr, info->dar, info->dsr, info->reserved,
> +			info->afu_err, info->errstat);
>  		return 0;
>  	case H_PARAMETER:   /* An incorrect parameter was supplied. */
>  		return -EINVAL;
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index 7ae7105..7257e8b 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -859,8 +859,6 @@ static int native_detach_process(struct cxl_context *ctx)
>
>  static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>  {
> -	u64 pidtid;
> -
>  	/* If the adapter has gone away, we can't get any meaningful
>  	 * information.
>  	 */
> @@ -870,9 +868,6 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>  	info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
>  	info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
>  	info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
> -	pidtid = cxl_p2n_read(afu, CXL_PSL_PID_TID_An);
> -	info->pid = pidtid >> 32;
> -	info->tid = pidtid & 0xffffffff;
>  	info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
>  	info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
>  	info->proc_handle = 0;
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH V4 3/7] cxl: Keep track of mm struct associated with a context
  2017-04-07 14:11 ` [PATCH V4 3/7] cxl: Keep track of mm struct associated with a context Christophe Lombard
  2017-04-10  5:38   ` Andrew Donnellan
@ 2017-04-10 16:49   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Frederic Barrat @ 2017-04-10 16:49 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, imunsie, andrew.donnellan



On 07/04/2017 at 16:11, Christophe Lombard wrote:
> The mm_struct corresponding to the current task is acquired each time
> an interrupt is raised. So to simplify the code, we only get the
> mm_struct when attaching an AFU context to the process.
> The mm_count reference is increased to ensure that the mm_struct can't
> be freed. The mm_struct will be released when the context is detached.
> A reference on mm_users is not kept to avoid a circular dependency if
> the process mmaps its cxl mmio and forgets to unmap before exiting.
> The field glpid (pid of the group leader associated with the pid) of
> the structure cxl_context is removed because it's no longer useful.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---

Thanks for the update, I think it looks good now.

Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>


>  drivers/misc/cxl/api.c     | 17 +++++++++--
>  drivers/misc/cxl/context.c | 21 +++++++++++--
>  drivers/misc/cxl/cxl.h     | 10 ++++--
>  drivers/misc/cxl/fault.c   | 76 ++++------------------------------------------
>  drivers/misc/cxl/file.c    | 15 +++++++--
>  drivers/misc/cxl/main.c    | 12 ++------
>  6 files changed, 61 insertions(+), 90 deletions(-)
>
> diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
> index bcc030e..1a138c8 100644
> --- a/drivers/misc/cxl/api.c
> +++ b/drivers/misc/cxl/api.c
> @@ -14,6 +14,7 @@
>  #include <linux/msi.h>
>  #include <linux/module.h>
>  #include <linux/mount.h>
> +#include <linux/sched/mm.h>
>
>  #include "cxl.h"
>
> @@ -321,19 +322,29 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
>
>  	if (task) {
>  		ctx->pid = get_task_pid(task, PIDTYPE_PID);
> -		ctx->glpid = get_task_pid(task->group_leader, PIDTYPE_PID);
>  		kernel = false;
>  		ctx->real_mode = false;
> +
> +		/* acquire a reference to the task's mm */
> +		ctx->mm = get_task_mm(current);
> +
> +		/* ensure this mm_struct can't be freed */
> +		cxl_context_mm_count_get(ctx);
> +
> +		/* decrement the use count */
> +		if (ctx->mm)
> +			mmput(ctx->mm);
>  	}
>
>  	cxl_ctx_get();
>
>  	if ((rc = cxl_ops->attach_process(ctx, kernel, wed, 0))) {
> -		put_pid(ctx->glpid);
>  		put_pid(ctx->pid);
> -		ctx->glpid = ctx->pid = NULL;
> +		ctx->pid = NULL;
>  		cxl_adapter_context_put(ctx->afu->adapter);
>  		cxl_ctx_put();
> +		if (task)
> +			cxl_context_mm_count_put(ctx);
>  		goto out;
>  	}
>
> diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
> index 062bf6c..2e935ea 100644
> --- a/drivers/misc/cxl/context.c
> +++ b/drivers/misc/cxl/context.c
> @@ -17,6 +17,7 @@
>  #include <linux/debugfs.h>
>  #include <linux/slab.h>
>  #include <linux/idr.h>
> +#include <linux/sched/mm.h>
>  #include <asm/cputable.h>
>  #include <asm/current.h>
>  #include <asm/copro.h>
> @@ -41,7 +42,7 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
>  	spin_lock_init(&ctx->sste_lock);
>  	ctx->afu = afu;
>  	ctx->master = master;
> -	ctx->pid = ctx->glpid = NULL; /* Set in start work ioctl */
> +	ctx->pid = NULL; /* Set in start work ioctl */
>  	mutex_init(&ctx->mapping_lock);
>  	ctx->mapping = NULL;
>
> @@ -242,12 +243,16 @@ int __detach_context(struct cxl_context *ctx)
>
>  	/* release the reference to the group leader and mm handling pid */
>  	put_pid(ctx->pid);
> -	put_pid(ctx->glpid);
>
>  	cxl_ctx_put();
>
>  	/* Decrease the attached context count on the adapter */
>  	cxl_adapter_context_put(ctx->afu->adapter);
> +
> +	/* Decrease the mm count on the context */
> +	cxl_context_mm_count_put(ctx);
> +	ctx->mm = NULL;
> +
>  	return 0;
>  }
>
> @@ -325,3 +330,15 @@ void cxl_context_free(struct cxl_context *ctx)
>  	mutex_unlock(&ctx->afu->contexts_lock);
>  	call_rcu(&ctx->rcu, reclaim_ctx);
>  }
> +
> +void cxl_context_mm_count_get(struct cxl_context *ctx)
> +{
> +	if (ctx->mm)
> +		atomic_inc(&ctx->mm->mm_count);
> +}
> +
> +void cxl_context_mm_count_put(struct cxl_context *ctx)
> +{
> +	if (ctx->mm)
> +		mmdrop(ctx->mm);
> +}
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 36bc213..4bcbf7a 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -482,8 +482,6 @@ struct cxl_context {
>  	unsigned int sst_size, sst_lru;
>
>  	wait_queue_head_t wq;
> -	/* pid of the group leader associated with the pid */
> -	struct pid *glpid;
>  	/* use mm context associated with this pid for ds faults */
>  	struct pid *pid;
>  	spinlock_t lock; /* Protects pending_irq_mask, pending_fault and fault_addr */
> @@ -551,6 +549,8 @@ struct cxl_context {
>  	 * CX4 only:
>  	 */
>  	struct list_head extra_irq_contexts;
> +
> +	struct mm_struct *mm;
>  };
>
>  struct cxl_service_layer_ops {
> @@ -1012,4 +1012,10 @@ int cxl_adapter_context_lock(struct cxl *adapter);
>  /* Unlock the contexts-lock if taken. Warn and force unlock otherwise */
>  void cxl_adapter_context_unlock(struct cxl *adapter);
>
> +/* Increases the reference count to "struct mm_struct" */
> +void cxl_context_mm_count_get(struct cxl_context *ctx);
> +
> +/* Decrements the reference count to "struct mm_struct" */
> +void cxl_context_mm_count_put(struct cxl_context *ctx);
> +
>  #endif
> diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
> index 2fa015c..e6f8f05 100644
> --- a/drivers/misc/cxl/fault.c
> +++ b/drivers/misc/cxl/fault.c
> @@ -170,81 +170,18 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
>  }
>
>  /*
> - * Returns the mm_struct corresponding to the context ctx via ctx->pid
> - * In case the task has exited we use the task group leader accessible
> - * via ctx->glpid to find the next task in the thread group that has a
> - * valid  mm_struct associated with it. If a task with valid mm_struct
> - * is found the ctx->pid is updated to use the task struct for subsequent
> - * translations. In case no valid mm_struct is found in the task group to
> - * service the fault a NULL is returned.
> + * Returns the mm_struct corresponding to the context ctx.
> + * mm_users == 0, the context may be in the process of being closed.
>   */
>  static struct mm_struct *get_mem_context(struct cxl_context *ctx)
>  {
> -	struct task_struct *task = NULL;
> -	struct mm_struct *mm = NULL;
> -	struct pid *old_pid = ctx->pid;
> -
> -	if (old_pid == NULL) {
> -		pr_warn("%s: Invalid context for pe=%d\n",
> -			 __func__, ctx->pe);
> +	if (ctx->mm == NULL)
>  		return NULL;
> -	}
> -
> -	task = get_pid_task(old_pid, PIDTYPE_PID);
> -
> -	/*
> -	 * pid_alive may look racy but this saves us from costly
> -	 * get_task_mm when the task is a zombie. In worst case
> -	 * we may think a task is alive, which is about to die
> -	 * but get_task_mm will return NULL.
> -	 */
> -	if (task != NULL && pid_alive(task))
> -		mm = get_task_mm(task);
>
> -	/* release the task struct that was taken earlier */
> -	if (task)
> -		put_task_struct(task);
> -	else
> -		pr_devel("%s: Context owning pid=%i for pe=%i dead\n",
> -			__func__, pid_nr(old_pid), ctx->pe);
> -
> -	/*
> -	 * If we couldn't find the mm context then use the group
> -	 * leader to iterate over the task group and find a task
> -	 * that gives us mm_struct.
> -	 */
> -	if (unlikely(mm == NULL && ctx->glpid != NULL)) {
> -
> -		rcu_read_lock();
> -		task = pid_task(ctx->glpid, PIDTYPE_PID);
> -		if (task)
> -			do {
> -				mm = get_task_mm(task);
> -				if (mm) {
> -					ctx->pid = get_task_pid(task,
> -								PIDTYPE_PID);
> -					break;
> -				}
> -				task = next_thread(task);
> -			} while (task && !thread_group_leader(task));
> -		rcu_read_unlock();
> -
> -		/* check if we switched pid */
> -		if (ctx->pid != old_pid) {
> -			if (mm)
> -				pr_devel("%s:pe=%i switch pid %i->%i\n",
> -					 __func__, ctx->pe, pid_nr(old_pid),
> -					 pid_nr(ctx->pid));
> -			else
> -				pr_devel("%s:Cannot find mm for pid=%i\n",
> -					 __func__, pid_nr(old_pid));
> -
> -			/* drop the reference to older pid */
> -			put_pid(old_pid);
> -		}
> -	}
> +	if (!atomic_inc_not_zero(&ctx->mm->mm_users))
> +		return NULL;
>
> -	return mm;
> +	return ctx->mm;
>  }
>
>
> @@ -282,7 +219,6 @@ void cxl_handle_fault(struct work_struct *fault_work)
>  	if (!ctx->kernel) {
>
>  		mm = get_mem_context(ctx);
> -		/* indicates all the thread in task group have exited */
>  		if (mm == NULL) {
>  			pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
>  				 __func__, ctx->pe, pid_nr(ctx->pid));
> diff --git a/drivers/misc/cxl/file.c b/drivers/misc/cxl/file.c
> index e7139c7..17b433f 100644
> --- a/drivers/misc/cxl/file.c
> +++ b/drivers/misc/cxl/file.c
> @@ -18,6 +18,7 @@
>  #include <linux/fs.h>
>  #include <linux/mm.h>
>  #include <linux/slab.h>
> +#include <linux/sched/mm.h>
>  #include <asm/cputable.h>
>  #include <asm/current.h>
>  #include <asm/copro.h>
> @@ -216,8 +217,16 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
>  	 * process is still accessible.
>  	 */
>  	ctx->pid = get_task_pid(current, PIDTYPE_PID);
> -	ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);
>
> +	/* acquire a reference to the task's mm */
> +	ctx->mm = get_task_mm(current);
> +
> +	/* ensure this mm_struct can't be freed */
> +	cxl_context_mm_count_get(ctx);
> +
> +	/* decrement the use count */
> +	if (ctx->mm)
> +		mmput(ctx->mm);
>
>  	trace_cxl_attach(ctx, work.work_element_descriptor, work.num_interrupts, amr);
>
> @@ -225,9 +234,9 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
>  							amr))) {
>  		afu_release_irqs(ctx, ctx);
>  		cxl_adapter_context_put(ctx->afu->adapter);
> -		put_pid(ctx->glpid);
>  		put_pid(ctx->pid);
> -		ctx->glpid = ctx->pid = NULL;
> +		ctx->pid = NULL;
> +		cxl_context_mm_count_put(ctx);
>  		goto out;
>  	}
>
> diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
> index b0b6ed3..1703655 100644
> --- a/drivers/misc/cxl/main.c
> +++ b/drivers/misc/cxl/main.c
> @@ -59,16 +59,10 @@ int cxl_afu_slbia(struct cxl_afu *afu)
>
>  static inline void _cxl_slbia(struct cxl_context *ctx, struct mm_struct *mm)
>  {
> -	struct task_struct *task;
>  	unsigned long flags;
> -	if (!(task = get_pid_task(ctx->pid, PIDTYPE_PID))) {
> -		pr_devel("%s unable to get task %i\n",
> -			 __func__, pid_nr(ctx->pid));
> -		return;
> -	}
>
> -	if (task->mm != mm)
> -		goto out_put;
> +	if (ctx->mm != mm)
> +		return;
>
>  	pr_devel("%s matched mm - card: %i afu: %i pe: %i\n", __func__,
>  		 ctx->afu->adapter->adapter_num, ctx->afu->slice, ctx->pe);
> @@ -79,8 +73,6 @@ static inline void _cxl_slbia(struct cxl_context *ctx, struct mm_struct *mm)
>  	spin_unlock_irqrestore(&ctx->sste_lock, flags);
>  	mb();
>  	cxl_afu_slbia(ctx->afu);
> -out_put:
> -	put_task_struct(task);
>  }
>
>  static inline void cxl_slbia_core(struct mm_struct *mm)
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH V4 4/7] cxl: Update implementation service layer
  2017-04-07 14:11 ` [PATCH V4 4/7] cxl: Update implementation service layer Christophe Lombard
  2017-04-10  7:08   ` Andrew Donnellan
@ 2017-04-10 17:01   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Frederic Barrat @ 2017-04-10 17:01 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, imunsie, andrew.donnellan


On 07/04/2017 at 16:11, Christophe Lombard wrote:
> The service layer API (in cxl.h) lists some low-level functions whose
> implementation is different on PSL8, PSL9 and XSL:
> - Init implementation for the adapter and the afu.
> - Invalidate TLB/SLB.
> - Attach process for dedicated/directed models.
> - Handle psl interrupts.
> - Debug registers for the adapter and the afu.
> - Traces.
> Each environment implements its own functions, and the common code uses
> them through function pointers, defined in cxl_service_layer_ops.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---


Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>


>  drivers/misc/cxl/cxl.h     | 40 +++++++++++++++++++++++----------
>  drivers/misc/cxl/debugfs.c | 16 +++++++-------
>  drivers/misc/cxl/guest.c   |  2 +-
>  drivers/misc/cxl/irq.c     |  2 +-
>  drivers/misc/cxl/native.c  | 54 ++++++++++++++++++++++++++-------------------
>  drivers/misc/cxl/pci.c     | 55 +++++++++++++++++++++++++++++++++-------------
>  6 files changed, 110 insertions(+), 59 deletions(-)
>
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 4bcbf7a..626073d 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -553,13 +553,23 @@ struct cxl_context {
>  	struct mm_struct *mm;
>  };
>
> +struct cxl_irq_info;
> +
>  struct cxl_service_layer_ops {
>  	int (*adapter_regs_init)(struct cxl *adapter, struct pci_dev *dev);
> +	int (*invalidate_all)(struct cxl *adapter);
>  	int (*afu_regs_init)(struct cxl_afu *afu);
> +	int (*sanitise_afu_regs)(struct cxl_afu *afu);
>  	int (*register_serr_irq)(struct cxl_afu *afu);
>  	void (*release_serr_irq)(struct cxl_afu *afu);
> -	void (*debugfs_add_adapter_sl_regs)(struct cxl *adapter, struct dentry *dir);
> -	void (*debugfs_add_afu_sl_regs)(struct cxl_afu *afu, struct dentry *dir);
> +	irqreturn_t (*handle_interrupt)(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +	irqreturn_t (*fail_irq)(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
> +	int (*activate_dedicated_process)(struct cxl_afu *afu);
> +	int (*attach_afu_directed)(struct cxl_context *ctx, u64 wed, u64 amr);
> +	int (*attach_dedicated_process)(struct cxl_context *ctx, u64 wed, u64 amr);
> +	void (*update_dedicated_ivtes)(struct cxl_context *ctx);
> +	void (*debugfs_add_adapter_regs)(struct cxl *adapter, struct dentry *dir);
> +	void (*debugfs_add_afu_regs)(struct cxl_afu *afu, struct dentry *dir);
>  	void (*psl_irq_dump_registers)(struct cxl_context *ctx);
>  	void (*err_irq_dump_registers)(struct cxl *adapter);
>  	void (*debugfs_stop_trace)(struct cxl *adapter);
> @@ -803,6 +813,11 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
>  void afu_release_irqs(struct cxl_context *ctx, void *cookie);
>  void afu_irq_name_free(struct cxl_context *ctx);
>
> +int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr);
> +int cxl_activate_dedicated_process_psl(struct cxl_afu *afu);
> +int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr);
> +void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx);
> +
>  #ifdef CONFIG_DEBUG_FS
>
>  int cxl_debugfs_init(void);
> @@ -811,10 +826,10 @@ int cxl_debugfs_adapter_add(struct cxl *adapter);
>  void cxl_debugfs_adapter_remove(struct cxl *adapter);
>  int cxl_debugfs_afu_add(struct cxl_afu *afu);
>  void cxl_debugfs_afu_remove(struct cxl_afu *afu);
> -void cxl_stop_trace(struct cxl *cxl);
> -void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir);
> -void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir);
> -void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir);
> +void cxl_stop_trace_psl(struct cxl *cxl);
> +void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir);
> +void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
> +void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir);
>
>  #else /* CONFIG_DEBUG_FS */
>
> @@ -849,17 +864,17 @@ static inline void cxl_stop_trace(struct cxl *cxl)
>  {
>  }
>
> -static inline void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter,
> +static inline void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter,
>  						    struct dentry *dir)
>  {
>  }
>
> -static inline void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter,
> +static inline void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter,
>  						    struct dentry *dir)
>  {
>  }
>
> -static inline void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir)
> +static inline void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
>  {
>  }
>
> @@ -904,19 +919,20 @@ struct cxl_irq_info {
>  };
>
>  void cxl_assign_psn_space(struct cxl_context *ctx);
> -irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +int cxl_invalidate_all_psl(struct cxl *adapter);
> +irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
>  int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
>  			void *cookie, irq_hw_number_t *dest_hwirq,
>  			unsigned int *dest_virq, const char *name);
>
>  int cxl_check_error(struct cxl_afu *afu);
>  int cxl_afu_slbia(struct cxl_afu *afu);
> -int cxl_tlb_slb_invalidate(struct cxl *adapter);
>  int cxl_data_cache_flush(struct cxl *adapter);
>  int cxl_afu_disable(struct cxl_afu *afu);
>  int cxl_psl_purge(struct cxl_afu *afu);
>
> -void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx);
> +void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx);
>  void cxl_native_err_irq_dump_regs(struct cxl *adapter);
>  int cxl_pci_vphb_add(struct cxl_afu *afu);
>  void cxl_pci_vphb_remove(struct cxl_afu *afu);
> diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
> index 9c06ac8..4848ebf 100644
> --- a/drivers/misc/cxl/debugfs.c
> +++ b/drivers/misc/cxl/debugfs.c
> @@ -15,7 +15,7 @@
>
>  static struct dentry *cxl_debugfs;
>
> -void cxl_stop_trace(struct cxl *adapter)
> +void cxl_stop_trace_psl(struct cxl *adapter)
>  {
>  	int slice;
>
> @@ -53,7 +53,7 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
>  					  (void __force *)value, &fops_io_x64);
>  }
>
> -void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
> +void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir)
>  {
>  	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
>  	debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
> @@ -61,7 +61,7 @@ void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
>  	debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_TRACE));
>  }
>
> -void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir)
> +void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir)
>  {
>  	debugfs_create_io_x64("fec", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_XSL_FEC));
>  }
> @@ -82,8 +82,8 @@ int cxl_debugfs_adapter_add(struct cxl *adapter)
>
>  	debugfs_create_io_x64("err_ivte", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_ErrIVTE));
>
> -	if (adapter->native->sl_ops->debugfs_add_adapter_sl_regs)
> -		adapter->native->sl_ops->debugfs_add_adapter_sl_regs(adapter, dir);
> +	if (adapter->native->sl_ops->debugfs_add_adapter_regs)
> +		adapter->native->sl_ops->debugfs_add_adapter_regs(adapter, dir);
>  	return 0;
>  }
>
> @@ -92,7 +92,7 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
>  	debugfs_remove_recursive(adapter->debugfs);
>  }
>
> -void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir)
> +void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
>  {
>  	debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
>  	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
> @@ -121,8 +121,8 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
>  	debugfs_create_io_x64("sstp1",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
>  	debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
>
> -	if (afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs)
> -		afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs(afu, dir);
> +	if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
> +		afu->adapter->native->sl_ops->debugfs_add_afu_regs(afu, dir);
>
>  	return 0;
>  }
> diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
> index e04bc4d..f6ba698 100644
> --- a/drivers/misc/cxl/guest.c
> +++ b/drivers/misc/cxl/guest.c
> @@ -169,7 +169,7 @@ static irqreturn_t guest_psl_irq(int irq, void *data)
>  		return IRQ_HANDLED;
>  	}
>
> -	rc = cxl_irq(irq, ctx, &irq_info);
> +	rc = cxl_irq_psl(irq, ctx, &irq_info);
>  	return rc;
>  }
>
> diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
> index 1a402bb..2fa119e 100644
> --- a/drivers/misc/cxl/irq.c
> +++ b/drivers/misc/cxl/irq.c
> @@ -34,7 +34,7 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
>  	return IRQ_HANDLED;
>  }
>
> -irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
> +irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
>  {
>  	u64 dsisr, dar;
>
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index 7257e8b..c147863e 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -258,7 +258,7 @@ void cxl_release_spa(struct cxl_afu *afu)
>  	}
>  }
>
> -int cxl_tlb_slb_invalidate(struct cxl *adapter)
> +int cxl_invalidate_all_psl(struct cxl *adapter)
>  {
>  	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
>
> @@ -578,7 +578,7 @@ static void update_ivtes_directed(struct cxl_context *ctx)
>  		WARN_ON(add_process_element(ctx));
>  }
>
> -static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
> +int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr)
>  {
>  	u32 pid;
>  	int result;
> @@ -671,7 +671,7 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
>  	return 0;
>  }
>
> -static int activate_dedicated_process(struct cxl_afu *afu)
> +int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
>  {
>  	dev_info(&afu->dev, "Activating dedicated process mode\n");
>
> @@ -694,7 +694,7 @@ static int activate_dedicated_process(struct cxl_afu *afu)
>  	return cxl_chardev_d_afu_add(afu);
>  }
>
> -static void update_ivtes_dedicated(struct cxl_context *ctx)
> +void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
>  {
>  	struct cxl_afu *afu = ctx->afu;
>
> @@ -710,7 +710,7 @@ static void update_ivtes_dedicated(struct cxl_context *ctx)
>  			((u64)ctx->irqs.range[3] & 0xffff));
>  }
>
> -static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
> +int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr)
>  {
>  	struct cxl_afu *afu = ctx->afu;
>  	u64 pid;
> @@ -728,7 +728,8 @@ static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
>
>  	cxl_prefault(ctx, wed);
>
> -	update_ivtes_dedicated(ctx);
> +	if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
> +		afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
>
>  	cxl_p2n_write(afu, CXL_PSL_AMR_An, amr);
>
> @@ -778,8 +779,9 @@ static int native_afu_activate_mode(struct cxl_afu *afu, int mode)
>
>  	if (mode == CXL_MODE_DIRECTED)
>  		return activate_afu_directed(afu);
> -	if (mode == CXL_MODE_DEDICATED)
> -		return activate_dedicated_process(afu);
> +	if ((mode == CXL_MODE_DEDICATED) &&
> +	    (afu->adapter->native->sl_ops->activate_dedicated_process))
> +		return afu->adapter->native->sl_ops->activate_dedicated_process(afu);
>
>  	return -EINVAL;
>  }
> @@ -793,11 +795,13 @@ static int native_attach_process(struct cxl_context *ctx, bool kernel,
>  	}
>
>  	ctx->kernel = kernel;
> -	if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
> -		return attach_afu_directed(ctx, wed, amr);
> +	if ((ctx->afu->current_mode == CXL_MODE_DIRECTED) &&
> +	    (ctx->afu->adapter->native->sl_ops->attach_afu_directed))
> +		return ctx->afu->adapter->native->sl_ops->attach_afu_directed(ctx, wed, amr);
>
> -	if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
> -		return attach_dedicated(ctx, wed, amr);
> +	if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
> +	    (ctx->afu->adapter->native->sl_ops->attach_dedicated_process))
> +		return ctx->afu->adapter->native->sl_ops->attach_dedicated_process(ctx, wed, amr);
>
>  	return -EINVAL;
>  }
> @@ -830,8 +834,9 @@ static void native_update_ivtes(struct cxl_context *ctx)
>  {
>  	if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
>  		return update_ivtes_directed(ctx);
> -	if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
> -		return update_ivtes_dedicated(ctx);
> +	if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
> +	    (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes))
> +		return ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
>  	WARN(1, "native_update_ivtes: Bad mode\n");
>  }
>
> @@ -875,7 +880,7 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>  	return 0;
>  }
>
> -void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx)
> +void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx)
>  {
>  	u64 fir1, fir2, fir_slice, serr, afu_debug;
>
> @@ -911,7 +916,7 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
>  	return cxl_ops->ack_irq(ctx, 0, errstat);
>  }
>
> -static irqreturn_t fail_psl_irq(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
> +irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
>  {
>  	if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
>  		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> @@ -927,7 +932,7 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
>  	struct cxl_context *ctx;
>  	struct cxl_irq_info irq_info;
>  	u64 phreg = cxl_p2n_read(afu, CXL_PSL_PEHandle_An);
> -	int ph, ret;
> +	int ph, ret = IRQ_HANDLED, res;
>
>  	/* check if eeh kicked in while the interrupt was in flight */
>  	if (unlikely(phreg == ~0ULL)) {
> @@ -938,15 +943,18 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
>  	}
>  	/* Mask the pe-handle from register value */
>  	ph = phreg & 0xffff;
> -	if ((ret = native_get_irq_info(afu, &irq_info))) {
> -		WARN(1, "Unable to get CXL IRQ Info: %i\n", ret);
> -		return fail_psl_irq(afu, &irq_info);
> +	if ((res = native_get_irq_info(afu, &irq_info))) {
> +		WARN(1, "Unable to get CXL IRQ Info: %i\n", res);
> +		if (afu->adapter->native->sl_ops->fail_irq)
> +			return afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
> +		return ret;
>  	}
>
>  	rcu_read_lock();
>  	ctx = idr_find(&afu->contexts_idr, ph);
>  	if (ctx) {
> -		ret = cxl_irq(irq, ctx, &irq_info);
> +		if (afu->adapter->native->sl_ops->handle_interrupt)
> +			ret = afu->adapter->native->sl_ops->handle_interrupt(irq, ctx, &irq_info);
>  		rcu_read_unlock();
>  		return ret;
>  	}
> @@ -956,7 +964,9 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
>  		" %016llx\n(Possible AFU HW issue - was a term/remove acked"
>  		" with outstanding transactions?)\n", ph, irq_info.dsisr,
>  		irq_info.dar);
> -	return fail_psl_irq(afu, &irq_info);
> +	if (afu->adapter->native->sl_ops->fail_irq)
> +		ret = afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
> +	return ret;
>  }
>
>  static void native_irq_wait(struct cxl_context *ctx)
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index 1f4c351..e9c679e 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -377,7 +377,7 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
>  	return 0;
>  }
>
> -static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_dev *dev)
> +static int init_implementation_adapter_regs_psl(struct cxl *adapter, struct pci_dev *dev)
>  {
>  	u64 psl_dsnctl, psl_fircntl;
>  	u64 chipid;
> @@ -409,7 +409,7 @@ static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_
>  	return 0;
>  }
>
> -static int init_implementation_adapter_xsl_regs(struct cxl *adapter, struct pci_dev *dev)
> +static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_dev *dev)
>  {
>  	u64 xsl_dsnctl;
>  	u64 chipid;
> @@ -513,7 +513,7 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
>  	return;
>  }
>
> -static int init_implementation_afu_psl_regs(struct cxl_afu *afu)
> +static int init_implementation_afu_regs_psl(struct cxl_afu *afu)
>  {
>  	/* read/write masks for this slice */
>  	cxl_p1n_write(afu, CXL_PSL_APCALLOC_A, 0xFFFFFFFEFEFEFEFEULL);
> @@ -996,7 +996,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
>  	return 0;
>  }
>
> -static int sanitise_afu_regs(struct cxl_afu *afu)
> +static int sanitise_afu_regs_psl(struct cxl_afu *afu)
>  {
>  	u64 reg;
>
> @@ -1102,8 +1102,11 @@ static int pci_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pc
>  	if ((rc = pci_map_slice_regs(afu, adapter, dev)))
>  		return rc;
>
> -	if ((rc = sanitise_afu_regs(afu)))
> -		goto err1;
> +	if (adapter->native->sl_ops->sanitise_afu_regs) {
> +		rc = adapter->native->sl_ops->sanitise_afu_regs(afu);
> +		if (rc)
> +			goto err1;
> +	}
>
>  	/* We need to reset the AFU before we can read the AFU descriptor */
>  	if ((rc = cxl_ops->afu_reset(afu)))
> @@ -1432,9 +1435,15 @@ static void cxl_release_adapter(struct device *dev)
>
>  static int sanitise_adapter_regs(struct cxl *adapter)
>  {
> +	int rc = 0;
> +
>  	/* Clear PSL tberror bit by writing 1 to it */
>  	cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror);
> -	return cxl_tlb_slb_invalidate(adapter);
> +
> +	if (adapter->native->sl_ops->invalidate_all)
> +		rc = adapter->native->sl_ops->invalidate_all(adapter);
> +
> +	return rc;
>  }
>
>  /* This should contain *only* operations that can safely be done in
> @@ -1518,15 +1527,23 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
>  }
>
>  static const struct cxl_service_layer_ops psl_ops = {
> -	.adapter_regs_init = init_implementation_adapter_psl_regs,
> -	.afu_regs_init = init_implementation_afu_psl_regs,
> +	.adapter_regs_init = init_implementation_adapter_regs_psl,
> +	.invalidate_all = cxl_invalidate_all_psl,
> +	.afu_regs_init = init_implementation_afu_regs_psl,
> +	.sanitise_afu_regs = sanitise_afu_regs_psl,
>  	.register_serr_irq = cxl_native_register_serr_irq,
>  	.release_serr_irq = cxl_native_release_serr_irq,
> -	.debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_psl_regs,
> -	.debugfs_add_afu_sl_regs = cxl_debugfs_add_afu_psl_regs,
> -	.psl_irq_dump_registers = cxl_native_psl_irq_dump_regs,
> +	.handle_interrupt = cxl_irq_psl,
> +	.fail_irq = cxl_fail_irq_psl,
> +	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
> +	.attach_afu_directed = cxl_attach_afu_directed_psl,
> +	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
> +	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
> +	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl,
> +	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl,
> +	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl,
>  	.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
> -	.debugfs_stop_trace = cxl_stop_trace,
> +	.debugfs_stop_trace = cxl_stop_trace_psl,
>  	.write_timebase_ctrl = write_timebase_ctrl_psl,
>  	.timebase_read = timebase_read_psl,
>  	.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
> @@ -1534,8 +1551,16 @@ static const struct cxl_service_layer_ops psl_ops = {
>  };
>
>  static const struct cxl_service_layer_ops xsl_ops = {
> -	.adapter_regs_init = init_implementation_adapter_xsl_regs,
> -	.debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_xsl_regs,
> +	.adapter_regs_init = init_implementation_adapter_regs_xsl,
> +	.invalidate_all = cxl_invalidate_all_psl,
> +	.sanitise_afu_regs = sanitise_afu_regs_psl,
> +	.handle_interrupt = cxl_irq_psl,
> +	.fail_irq = cxl_fail_irq_psl,
> +	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
> +	.attach_afu_directed = cxl_attach_afu_directed_psl,
> +	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
> +	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
> +	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_xsl,
>  	.write_timebase_ctrl = write_timebase_ctrl_xsl,
>  	.timebase_read = timebase_read_xsl,
>  	.capi_mode = OPAL_PHB_CAPI_MODE_DMA,
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH V4 5/7] cxl: Rename some psl8 specific functions
  2017-04-07 14:11 ` [PATCH V4 5/7] cxl: Rename some psl8 specific functions Christophe Lombard
  2017-04-10  6:14   ` Andrew Donnellan
@ 2017-04-10 17:06   ` Frederic Barrat
  1 sibling, 0 replies; 30+ messages in thread
From: Frederic Barrat @ 2017-04-10 17:06 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, imunsie, andrew.donnellan



On 07/04/2017 at 16:11, Christophe Lombard wrote:
> Rename a few functions, changing the '_psl' suffix to '_psl8', to make
> clear that the implementation is psl8-specific.
> Those functions will get an equivalent psl9 implementation in a later
> patch.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---

Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
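
A minimal, standalone sketch of the pattern behind the rename: common code
never calls a *_psl8 function directly, it dispatches through the
service-layer ops table and tolerates a missing callback, as
native_irq_multiplexed() does in the quoted native.c. The types below are
simplified stand-ins rather than the driver's cxl_context and
cxl_service_layer_ops, so the snippet builds and runs on its own.

#include <stdio.h>

struct ctx { int pe; };				/* stand-in for struct cxl_context */

struct service_layer_ops {			/* stand-in for cxl_service_layer_ops */
	int (*handle_interrupt)(int irq, struct ctx *ctx);
};

/* CAIA1 handler keeps the _psl8 suffix after the rename */
static int irq_psl8(int irq, struct ctx *ctx)
{
	printf("psl8 handler: irq %d, pe %d\n", irq, ctx->pe);
	return 1;				/* IRQ_HANDLED */
}

static const struct service_layer_ops psl8_ops = {
	.handle_interrupt = irq_psl8,
};
/* a psl9 table pointing at psl9 handlers is expected in a later patch */

/* common code: NULL-checked dispatch through the ops table */
static int multiplexed_irq(const struct service_layer_ops *ops, int irq,
			   struct ctx *ctx)
{
	if (ops->handle_interrupt)
		return ops->handle_interrupt(irq, ctx);
	return 1;
}

int main(void)
{
	struct ctx c = { .pe = 3 };

	return multiplexed_irq(&psl8_ops, 7, &c) == 1 ? 0 : 1;
}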


>  drivers/misc/cxl/cxl.h     | 26 ++++++++++----------
>  drivers/misc/cxl/debugfs.c |  6 ++---
>  drivers/misc/cxl/guest.c   |  2 +-
>  drivers/misc/cxl/irq.c     |  2 +-
>  drivers/misc/cxl/native.c  | 12 +++++-----
>  drivers/misc/cxl/pci.c     | 60 +++++++++++++++++++++++-----------------------
>  6 files changed, 54 insertions(+), 54 deletions(-)
>
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 626073d..a54c003 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -813,10 +813,10 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
>  void afu_release_irqs(struct cxl_context *ctx, void *cookie);
>  void afu_irq_name_free(struct cxl_context *ctx);
>
> -int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr);
> -int cxl_activate_dedicated_process_psl(struct cxl_afu *afu);
> -int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr);
> -void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx);
> +int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
> +int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
> +int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
> +void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
>
>  #ifdef CONFIG_DEBUG_FS
>
> @@ -826,10 +826,10 @@ int cxl_debugfs_adapter_add(struct cxl *adapter);
>  void cxl_debugfs_adapter_remove(struct cxl *adapter);
>  int cxl_debugfs_afu_add(struct cxl_afu *afu);
>  void cxl_debugfs_afu_remove(struct cxl_afu *afu);
> -void cxl_stop_trace_psl(struct cxl *cxl);
> -void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir);
> +void cxl_stop_trace_psl8(struct cxl *cxl);
> +void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
>  void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
> -void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir);
> +void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
>
>  #else /* CONFIG_DEBUG_FS */
>
> @@ -860,11 +860,11 @@ static inline void cxl_debugfs_afu_remove(struct cxl_afu *afu)
>  {
>  }
>
> -static inline void cxl_stop_trace(struct cxl *cxl)
> +static inline void cxl_stop_trace_psl8(struct cxl *cxl)
>  {
>  }
>
> -static inline void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter,
> +static inline void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter,
>  						    struct dentry *dir)
>  {
>  }
> @@ -874,7 +874,7 @@ static inline void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter,
>  {
>  }
>
> -static inline void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
> +static inline void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
>  {
>  }
>
> @@ -919,8 +919,8 @@ struct cxl_irq_info {
>  };
>
>  void cxl_assign_psn_space(struct cxl_context *ctx);
> -int cxl_invalidate_all_psl(struct cxl *adapter);
> -irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +int cxl_invalidate_all_psl8(struct cxl *adapter);
> +irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
>  irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
>  int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
>  			void *cookie, irq_hw_number_t *dest_hwirq,
> @@ -932,7 +932,7 @@ int cxl_data_cache_flush(struct cxl *adapter);
>  int cxl_afu_disable(struct cxl_afu *afu);
>  int cxl_psl_purge(struct cxl_afu *afu);
>
> -void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx);
> +void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
>  void cxl_native_err_irq_dump_regs(struct cxl *adapter);
>  int cxl_pci_vphb_add(struct cxl_afu *afu);
>  void cxl_pci_vphb_remove(struct cxl_afu *afu);
> diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
> index 4848ebf..2ff10a9 100644
> --- a/drivers/misc/cxl/debugfs.c
> +++ b/drivers/misc/cxl/debugfs.c
> @@ -15,7 +15,7 @@
>
>  static struct dentry *cxl_debugfs;
>
> -void cxl_stop_trace_psl(struct cxl *adapter)
> +void cxl_stop_trace_psl8(struct cxl *adapter)
>  {
>  	int slice;
>
> @@ -53,7 +53,7 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
>  					  (void __force *)value, &fops_io_x64);
>  }
>
> -void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir)
> +void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
>  {
>  	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
>  	debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
> @@ -92,7 +92,7 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
>  	debugfs_remove_recursive(adapter->debugfs);
>  }
>
> -void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
> +void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
>  {
>  	debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
>  	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
> diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
> index f6ba698..3ad7381 100644
> --- a/drivers/misc/cxl/guest.c
> +++ b/drivers/misc/cxl/guest.c
> @@ -169,7 +169,7 @@ static irqreturn_t guest_psl_irq(int irq, void *data)
>  		return IRQ_HANDLED;
>  	}
>
> -	rc = cxl_irq_psl(irq, ctx, &irq_info);
> +	rc = cxl_irq_psl8(irq, ctx, &irq_info);
>  	return rc;
>  }
>
> diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
> index 2fa119e..fa9f8a2 100644
> --- a/drivers/misc/cxl/irq.c
> +++ b/drivers/misc/cxl/irq.c
> @@ -34,7 +34,7 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
>  	return IRQ_HANDLED;
>  }
>
> -irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
> +irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
>  {
>  	u64 dsisr, dar;
>
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index c147863e..ee3164e 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -258,7 +258,7 @@ void cxl_release_spa(struct cxl_afu *afu)
>  	}
>  }
>
> -int cxl_invalidate_all_psl(struct cxl *adapter)
> +int cxl_invalidate_all_psl8(struct cxl *adapter)
>  {
>  	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
>
> @@ -578,7 +578,7 @@ static void update_ivtes_directed(struct cxl_context *ctx)
>  		WARN_ON(add_process_element(ctx));
>  }
>
> -int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr)
> +int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
>  {
>  	u32 pid;
>  	int result;
> @@ -671,7 +671,7 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
>  	return 0;
>  }
>
> -int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
> +int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
>  {
>  	dev_info(&afu->dev, "Activating dedicated process mode\n");
>
> @@ -694,7 +694,7 @@ int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
>  	return cxl_chardev_d_afu_add(afu);
>  }
>
> -void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
> +void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
>  {
>  	struct cxl_afu *afu = ctx->afu;
>
> @@ -710,7 +710,7 @@ void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
>  			((u64)ctx->irqs.range[3] & 0xffff));
>  }
>
> -int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr)
> +int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
>  {
>  	struct cxl_afu *afu = ctx->afu;
>  	u64 pid;
> @@ -880,7 +880,7 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>  	return 0;
>  }
>
> -void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx)
> +void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
>  {
>  	u64 fir1, fir2, fir_slice, serr, afu_debug;
>
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index e9c679e..69008a4 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -377,7 +377,7 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
>  	return 0;
>  }
>
> -static int init_implementation_adapter_regs_psl(struct cxl *adapter, struct pci_dev *dev)
> +static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
>  {
>  	u64 psl_dsnctl, psl_fircntl;
>  	u64 chipid;
> @@ -434,7 +434,7 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
>  /* For the PSL this is a multiple for 0 < n <= 7: */
>  #define PSL_2048_250MHZ_CYCLES 1
>
> -static void write_timebase_ctrl_psl(struct cxl *adapter)
> +static void write_timebase_ctrl_psl8(struct cxl *adapter)
>  {
>  	cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
>  		     TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
> @@ -455,7 +455,7 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
>  		     TBSYNC_CNT(XSL_4000_CLOCKS));
>  }
>
> -static u64 timebase_read_psl(struct cxl *adapter)
> +static u64 timebase_read_psl8(struct cxl *adapter)
>  {
>  	return cxl_p1_read(adapter, CXL_PSL_Timebase);
>  }
> @@ -513,7 +513,7 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
>  	return;
>  }
>
> -static int init_implementation_afu_regs_psl(struct cxl_afu *afu)
> +static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
>  {
>  	/* read/write masks for this slice */
>  	cxl_p1n_write(afu, CXL_PSL_APCALLOC_A, 0xFFFFFFFEFEFEFEFEULL);
> @@ -996,7 +996,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
>  	return 0;
>  }
>
> -static int sanitise_afu_regs_psl(struct cxl_afu *afu)
> +static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
>  {
>  	u64 reg;
>
> @@ -1526,40 +1526,40 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
>  	pci_disable_device(pdev);
>  }
>
> -static const struct cxl_service_layer_ops psl_ops = {
> -	.adapter_regs_init = init_implementation_adapter_regs_psl,
> -	.invalidate_all = cxl_invalidate_all_psl,
> -	.afu_regs_init = init_implementation_afu_regs_psl,
> -	.sanitise_afu_regs = sanitise_afu_regs_psl,
> +static const struct cxl_service_layer_ops psl8_ops = {
> +	.adapter_regs_init = init_implementation_adapter_regs_psl8,
> +	.invalidate_all = cxl_invalidate_all_psl8,
> +	.afu_regs_init = init_implementation_afu_regs_psl8,
> +	.sanitise_afu_regs = sanitise_afu_regs_psl8,
>  	.register_serr_irq = cxl_native_register_serr_irq,
>  	.release_serr_irq = cxl_native_release_serr_irq,
> -	.handle_interrupt = cxl_irq_psl,
> +	.handle_interrupt = cxl_irq_psl8,
>  	.fail_irq = cxl_fail_irq_psl,
> -	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
> -	.attach_afu_directed = cxl_attach_afu_directed_psl,
> -	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
> -	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
> -	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl,
> -	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl,
> -	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl,
> +	.activate_dedicated_process = cxl_activate_dedicated_process_psl8,
> +	.attach_afu_directed = cxl_attach_afu_directed_psl8,
> +	.attach_dedicated_process = cxl_attach_dedicated_process_psl8,
> +	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl8,
> +	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl8,
> +	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl8,
> +	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl8,
>  	.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
> -	.debugfs_stop_trace = cxl_stop_trace_psl,
> -	.write_timebase_ctrl = write_timebase_ctrl_psl,
> -	.timebase_read = timebase_read_psl,
> +	.debugfs_stop_trace = cxl_stop_trace_psl8,
> +	.write_timebase_ctrl = write_timebase_ctrl_psl8,
> +	.timebase_read = timebase_read_psl8,
>  	.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
>  	.needs_reset_before_disable = true,
>  };
>
>  static const struct cxl_service_layer_ops xsl_ops = {
>  	.adapter_regs_init = init_implementation_adapter_regs_xsl,
> -	.invalidate_all = cxl_invalidate_all_psl,
> -	.sanitise_afu_regs = sanitise_afu_regs_psl,
> -	.handle_interrupt = cxl_irq_psl,
> +	.invalidate_all = cxl_invalidate_all_psl8,
> +	.sanitise_afu_regs = sanitise_afu_regs_psl8,
> +	.handle_interrupt = cxl_irq_psl8,
>  	.fail_irq = cxl_fail_irq_psl,
> -	.activate_dedicated_process = cxl_activate_dedicated_process_psl,
> -	.attach_afu_directed = cxl_attach_afu_directed_psl,
> -	.attach_dedicated_process = cxl_attach_dedicated_process_psl,
> -	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
> +	.activate_dedicated_process = cxl_activate_dedicated_process_psl8,
> +	.attach_afu_directed = cxl_attach_afu_directed_psl8,
> +	.attach_dedicated_process = cxl_attach_dedicated_process_psl8,
> +	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl8,
>  	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_xsl,
>  	.write_timebase_ctrl = write_timebase_ctrl_xsl,
>  	.timebase_read = timebase_read_xsl,
> @@ -1574,8 +1574,8 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
>  		adapter->native->sl_ops = &xsl_ops;
>  		adapter->min_pe = 1; /* Workaround for CX-4 hardware bug */
>  	} else {
> -		dev_info(&dev->dev, "Device uses a PSL\n");
> -		adapter->native->sl_ops = &psl_ops;
> +		dev_info(&dev->dev, "Device uses a PSL8\n");
> +		adapter->native->sl_ops = &psl8_ops;
>  	}
>  }
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH V4 6/7] cxl: Isolate few psl8 specific calls
  2017-04-07 14:11 ` [PATCH V4 6/7] cxl: Isolate few psl8 specific calls Christophe Lombard
@ 2017-04-10 17:13   ` Frederic Barrat
  2017-04-12  2:13     ` Michael Ellerman
  0 siblings, 1 reply; 30+ messages in thread
From: Frederic Barrat @ 2017-04-10 17:13 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, imunsie, andrew.donnellan



On 07/04/2017 at 16:11, Christophe Lombard wrote:
> Point out the registers that are specific to the Coherent Accelerator
> Interface Architecture, level 1, and guard the code and functions that
> are specific to PSL8 (CAIA1) so they only run on that hardware.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---

There are a few changes in native.c which are about splitting long 
strings, but that's minor. And the rest looks ok.

I'll do the last patch tomorrow.

Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
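
A minimal, standalone sketch of the guard pattern this patch applies: state
that only exists on CAIA1 hardware (the segment table handled in context.c,
for instance) is set up and torn down only when a predicate such as
cxl_is_psl8() says so, while the surrounding code stays common. The types
and the allocation below are simplified stand-ins, not the driver's code.

#include <stdbool.h>
#include <stdlib.h>

struct adapter { int caia_major; };
struct afu     { struct adapter *adapter; };
struct context { struct afu *afu; void *sstp; };	/* sstp: segment table, CAIA1 only */

static bool is_psl8(const struct afu *afu)
{
	return afu->adapter->caia_major == 1;		/* CAIA1 <=> PSL8 */
}

static int context_init(struct context *ctx, struct afu *afu)
{
	ctx->afu = afu;
	ctx->sstp = NULL;

	if (is_psl8(afu)) {				/* segment table only on PSL8 */
		ctx->sstp = calloc(1, 4096);
		if (!ctx->sstp)
			return -1;
	}
	return 0;
}

static void context_release(struct context *ctx)
{
	if (is_psl8(ctx->afu))				/* mirrors the guarded free_page() */
		free(ctx->sstp);
}

int main(void)
{
	struct adapter a = { .caia_major = 1 };
	struct afu f = { .adapter = &a };
	struct context c;

	if (context_init(&c, &f))
		return 1;
	context_release(&c);
	return 0;
}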


>  drivers/misc/cxl/context.c | 28 +++++++++++---------
>  drivers/misc/cxl/cxl.h     | 35 +++++++++++++++++++------
>  drivers/misc/cxl/debugfs.c |  6 +++--
>  drivers/misc/cxl/native.c  | 43 +++++++++++++++++++++----------
>  drivers/misc/cxl/pci.c     | 64 +++++++++++++++++++++++++++++++---------------
>  5 files changed, 120 insertions(+), 56 deletions(-)
>
> diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
> index 2e935ea..ac2531e 100644
> --- a/drivers/misc/cxl/context.c
> +++ b/drivers/misc/cxl/context.c
> @@ -39,23 +39,26 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
>  {
>  	int i;
>
> -	spin_lock_init(&ctx->sste_lock);
>  	ctx->afu = afu;
>  	ctx->master = master;
>  	ctx->pid = NULL; /* Set in start work ioctl */
>  	mutex_init(&ctx->mapping_lock);
>  	ctx->mapping = NULL;
>
> -	/*
> -	 * Allocate the segment table before we put it in the IDR so that we
> -	 * can always access it when dereferenced from IDR. For the same
> -	 * reason, the segment table is only destroyed after the context is
> -	 * removed from the IDR.  Access to this in the IOCTL is protected by
> -	 * Linux filesytem symantics (can't IOCTL until open is complete).
> -	 */
> -	i = cxl_alloc_sst(ctx);
> -	if (i)
> -		return i;
> +	if (cxl_is_psl8(afu)) {
> +		spin_lock_init(&ctx->sste_lock);
> +
> +		/*
> +		 * Allocate the segment table before we put it in the IDR so that we
> +		 * can always access it when dereferenced from IDR. For the same
> +		 * reason, the segment table is only destroyed after the context is
> +		 * removed from the IDR.  Access to this in the IOCTL is protected by
> +		 * Linux filesytem symantics (can't IOCTL until open is complete).
> +		 */
> +		i = cxl_alloc_sst(ctx);
> +		if (i)
> +			return i;
> +	}
>
>  	INIT_WORK(&ctx->fault_work, cxl_handle_fault);
>
> @@ -308,7 +311,8 @@ static void reclaim_ctx(struct rcu_head *rcu)
>  {
>  	struct cxl_context *ctx = container_of(rcu, struct cxl_context, rcu);
>
> -	free_page((u64)ctx->sstp);
> +	if (cxl_is_psl8(ctx->afu))
> +		free_page((u64)ctx->sstp);
>  	if (ctx->ff_page)
>  		__free_page(ctx->ff_page);
>  	ctx->sstp = NULL;
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index a54c003..82335c0 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -73,7 +73,7 @@ static const cxl_p1_reg_t CXL_PSL_Control = {0x0020};
>  static const cxl_p1_reg_t CXL_PSL_DLCNTL  = {0x0060};
>  static const cxl_p1_reg_t CXL_PSL_DLADDR  = {0x0068};
>
> -/* PSL Lookaside Buffer Management Area */
> +/* PSL Lookaside Buffer Management Area - CAIA 1 */
>  static const cxl_p1_reg_t CXL_PSL_LBISEL  = {0x0080};
>  static const cxl_p1_reg_t CXL_PSL_SLBIE   = {0x0088};
>  static const cxl_p1_reg_t CXL_PSL_SLBIA   = {0x0090};
> @@ -82,7 +82,7 @@ static const cxl_p1_reg_t CXL_PSL_TLBIA   = {0x00A8};
>  static const cxl_p1_reg_t CXL_PSL_AFUSEL  = {0x00B0};
>
>  /* 0x00C0:7EFF Implementation dependent area */
> -/* PSL registers */
> +/* PSL registers - CAIA 1 */
>  static const cxl_p1_reg_t CXL_PSL_FIR1      = {0x0100};
>  static const cxl_p1_reg_t CXL_PSL_FIR2      = {0x0108};
>  static const cxl_p1_reg_t CXL_PSL_Timebase  = {0x0110};
> @@ -109,7 +109,7 @@ static const cxl_p1n_reg_t CXL_PSL_AMBAR_An       = {0x10};
>  static const cxl_p1n_reg_t CXL_PSL_SPOffset_An    = {0x18};
>  static const cxl_p1n_reg_t CXL_PSL_ID_An          = {0x20};
>  static const cxl_p1n_reg_t CXL_PSL_SERR_An        = {0x28};
> -/* Memory Management and Lookaside Buffer Management */
> +/* Memory Management and Lookaside Buffer Management - CAIA 1*/
>  static const cxl_p1n_reg_t CXL_PSL_SDR_An         = {0x30};
>  static const cxl_p1n_reg_t CXL_PSL_AMOR_An        = {0x38};
>  /* Pointer Area */
> @@ -124,6 +124,7 @@ static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An  = {0xB8};
>  /* 0xC0:FF Implementation Dependent Area */
>  static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An   = {0xC0};
>  static const cxl_p1n_reg_t CXL_AFU_DEBUG_An       = {0xC8};
> +/* 0xC0:FF Implementation Dependent Area - CAIA 1 */
>  static const cxl_p1n_reg_t CXL_PSL_APCALLOC_A     = {0xD0};
>  static const cxl_p1n_reg_t CXL_PSL_COALLOC_A      = {0xD8};
>  static const cxl_p1n_reg_t CXL_PSL_RXCTL_A        = {0xE0};
> @@ -133,12 +134,14 @@ static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE    = {0xE8};
>  /* Configuration and Control Area */
>  static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
>  static const cxl_p2n_reg_t CXL_CSRP_An        = {0x008};
> +/* Configuration and Control Area - CAIA 1 */
>  static const cxl_p2n_reg_t CXL_AURP0_An       = {0x010};
>  static const cxl_p2n_reg_t CXL_AURP1_An       = {0x018};
>  static const cxl_p2n_reg_t CXL_SSTP0_An       = {0x020};
>  static const cxl_p2n_reg_t CXL_SSTP1_An       = {0x028};
> +/* Configuration and Control Area - CAIA 1 */
>  static const cxl_p2n_reg_t CXL_PSL_AMR_An     = {0x030};
> -/* Segment Lookaside Buffer Management */
> +/* Segment Lookaside Buffer Management - CAIA 1 */
>  static const cxl_p2n_reg_t CXL_SLBIE_An       = {0x040};
>  static const cxl_p2n_reg_t CXL_SLBIA_An       = {0x048};
>  static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
> @@ -257,7 +260,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
>  #define CXL_SSTP1_An_STVA_L_MASK (~((1ull << (63-55))-1))
>  #define CXL_SSTP1_An_V              (1ull << (63-63))
>
> -/****** CXL_PSL_SLBIE_[An] **************************************************/
> +/****** CXL_PSL_SLBIE_[An] - CAIA 1 **************************************************/
>  /* write: */
>  #define CXL_SLBIE_C        PPC_BIT(36)         /* Class */
>  #define CXL_SLBIE_SS       PPC_BITMASK(37, 38) /* Segment Size */
> @@ -267,10 +270,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
>  #define CXL_SLBIE_MAX      PPC_BITMASK(24, 31)
>  #define CXL_SLBIE_PENDING  PPC_BITMASK(56, 63)
>
> -/****** Common to all CXL_TLBIA/SLBIA_[An] **********************************/
> +/****** Common to all CXL_TLBIA/SLBIA_[An] - CAIA 1 **********************************/
>  #define CXL_TLB_SLB_P          (1ull) /* Pending (read) */
>
> -/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers **********************/
> +/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers - CAIA 1 **********************/
>  #define CXL_TLB_SLB_IQ_ALL     (0ull) /* Inv qualifier */
>  #define CXL_TLB_SLB_IQ_LPID    (1ull) /* Inv qualifier */
>  #define CXL_TLB_SLB_IQ_LPIDPID (3ull) /* Inv qualifier */
> @@ -278,7 +281,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
>  /****** CXL_PSL_AFUSEL ******************************************************/
>  #define CXL_PSL_AFUSEL_A (1ull << (63-55)) /* Adapter wide invalidates affect all AFUs */
>
> -/****** CXL_PSL_DSISR_An ****************************************************/
> +/****** CXL_PSL_DSISR_An - CAIA 1 ****************************************************/
>  #define CXL_PSL_DSISR_An_DS (1ull << (63-0))  /* Segment not found */
>  #define CXL_PSL_DSISR_An_DM (1ull << (63-1))  /* PTE not found (See also: M) or protection fault */
>  #define CXL_PSL_DSISR_An_ST (1ull << (63-2))  /* Segment Table PTE not found */
> @@ -749,6 +752,22 @@ static inline u64 cxl_p2n_read(struct cxl_afu *afu, cxl_p2n_reg_t reg)
>  		return ~0ULL;
>  }
>
> +static inline bool cxl_is_power8(void)
> +{
> +	if ((pvr_version_is(PVR_POWER8E)) ||
> +	    (pvr_version_is(PVR_POWER8NVL)) ||
> +	    (pvr_version_is(PVR_POWER8)))
> +		return true;
> +	return false;
> +}
> +
> +static inline bool cxl_is_psl8(struct cxl_afu *afu)
> +{
> +	if (afu->adapter->caia_major == 1)
> +		return true;
> +	return false;
> +}
> +
>  ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
>  				loff_t off, size_t count);
>
> diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
> index 2ff10a9..43a1a27 100644
> --- a/drivers/misc/cxl/debugfs.c
> +++ b/drivers/misc/cxl/debugfs.c
> @@ -94,6 +94,9 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
>
>  void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
>  {
> +	debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
> +	debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
> +
>  	debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
>  	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
>  	debugfs_create_io_x64("afu_debug", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_AFU_DEBUG_An));
> @@ -117,8 +120,7 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
>  	debugfs_create_io_x64("sr",         S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SR_An));
>  	debugfs_create_io_x64("dsisr",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DSISR_An));
>  	debugfs_create_io_x64("dar",        S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DAR_An));
> -	debugfs_create_io_x64("sstp0",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
> -	debugfs_create_io_x64("sstp1",      S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
> +
>  	debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
>
>  	if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index ee3164e..0401e4dc 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -155,13 +155,21 @@ int cxl_psl_purge(struct cxl_afu *afu)
>  		}
>
>  		dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
> -		pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%016llx  PSL_DSISR: 0x%016llx\n", PSL_CNTL, dsisr);
> +		pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%016llx"
> +				     "  PSL_DSISR: 0x%016llx\n",
> +				     PSL_CNTL, dsisr);
> +
>  		if (dsisr & CXL_PSL_DSISR_TRANS) {
>  			dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
> -			dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
> +			dev_notice(&afu->dev, "PSL purge terminating "
> +					      "pending translation, "
> +					      "DSISR: 0x%016llx, DAR: 0x%016llx\n",
> +					       dsisr, dar);
>  			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
>  		} else if (dsisr) {
> -			dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
> +			dev_notice(&afu->dev, "PSL purge acknowledging "
> +					      "pending non-translation fault, "
> +					      "DSISR: 0x%016llx\n", dsisr);
>  			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
>  		} else {
>  			cpu_relax();
> @@ -466,7 +474,8 @@ static int remove_process_element(struct cxl_context *ctx)
>
>  	if (!rc)
>  		ctx->pe_inserted = false;
> -	slb_invalid(ctx);
> +	if (cxl_is_power8())
> +		slb_invalid(ctx);
>  	pr_devel("%s Remove pe: %i finished\n", __func__, ctx->pe);
>  	mutex_unlock(&ctx->afu->native->spa_mutex);
>
> @@ -499,7 +508,8 @@ static int activate_afu_directed(struct cxl_afu *afu)
>  	attach_spa(afu);
>
>  	cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_AFU);
> -	cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
> +	if (cxl_is_power8())
> +		cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
>  	cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
>
>  	afu->current_mode = CXL_MODE_DIRECTED;
> @@ -872,7 +882,8 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>
>  	info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
>  	info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
> -	info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
> +	if (cxl_is_power8())
> +		info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
>  	info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
>  	info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
>  	info->proc_handle = 0;
> @@ -984,7 +995,8 @@ static void native_irq_wait(struct cxl_context *ctx)
>  		if (ph != ctx->pe)
>  			return;
>  		dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
> -		if ((dsisr & CXL_PSL_DSISR_PENDING) == 0)
> +		if (cxl_is_psl8(ctx->afu) &&
> +		   ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
>  			return;
>  		/*
>  		 * We are waiting for the workqueue to process our
> @@ -1001,21 +1013,25 @@ static void native_irq_wait(struct cxl_context *ctx)
>  static irqreturn_t native_slice_irq_err(int irq, void *data)
>  {
>  	struct cxl_afu *afu = data;
> -	u64 fir_slice, errstat, serr, afu_debug, afu_error, dsisr;
> +	u64 errstat, serr, afu_error, dsisr;
> +	u64 fir_slice, afu_debug;
>
>  	/*
>  	 * slice err interrupt is only used with full PSL (no XSL)
>  	 */
>  	serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
> -	fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
>  	errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
> -	afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
>  	afu_error = cxl_p2n_read(afu, CXL_AFU_ERR_An);
>  	dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
>  	cxl_afu_decode_psl_serr(afu, serr);
> -	dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
> +
> +	if (cxl_is_power8()) {
> +		fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
> +		afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
> +		dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
> +		dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
> +	}
>  	dev_crit(&afu->dev, "CXL_PSL_ErrStat_An: 0x%016llx\n", errstat);
> -	dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
>  	dev_crit(&afu->dev, "AFU_ERR_An: 0x%.16llx\n", afu_error);
>  	dev_crit(&afu->dev, "PSL_DSISR_An: 0x%.16llx\n", dsisr);
>
> @@ -1108,7 +1124,8 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
>  	}
>
>  	serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
> -	serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
> +	if (cxl_is_power8())
> +		serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
>  	cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
>
>  	return 0;
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index 69008a4..a910115 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -324,32 +324,33 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
>  #undef show_reg
>  }
>
> -#define CAPP_UNIT0_ID 0xBA
> -#define CAPP_UNIT1_ID 0XBE
> +#define P8_CAPP_UNIT0_ID 0xBA
> +#define P8_CAPP_UNIT1_ID 0XBE
>
>  static u64 get_capp_unit_id(struct device_node *np)
>  {
>  	u32 phb_index;
>
> -	/*
> -	 * For chips other than POWER8NVL, we only have CAPP 0,
> -	 * irrespective of which PHB is used.
> -	 */
> -	if (!pvr_version_is(PVR_POWER8NVL))
> -		return CAPP_UNIT0_ID;
> +	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
> +		return 0;
>
>  	/*
> -	 * For POWER8NVL, assume CAPP 0 is attached to PHB0 and
> -	 * CAPP 1 is attached to PHB1.
> +	 * POWER 8:
> +	 *  - For chips other than POWER8NVL, we only have CAPP 0,
> +	 *    irrespective of which PHB is used.
> +	 *  - For POWER8NVL, assume CAPP 0 is attached to PHB0 and
> +	 *    CAPP 1 is attached to PHB1.
>  	 */
> -	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
> -		return 0;
> +	if (cxl_is_power8()) {
> +		if (!pvr_version_is(PVR_POWER8NVL))
> +			return P8_CAPP_UNIT0_ID;
>
> -	if (phb_index == 0)
> -		return CAPP_UNIT0_ID;
> +		if (phb_index == 0)
> +			return P8_CAPP_UNIT0_ID;
>
> -	if (phb_index == 1)
> -		return CAPP_UNIT1_ID;
> +		if (phb_index == 1)
> +			return P8_CAPP_UNIT1_ID;
> +	}
>
>  	return 0;
>  }
> @@ -968,7 +969,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
>  	}
>
>  	if (afu->pp_psa && (afu->pp_size < PAGE_SIZE))
> -		dev_warn(&afu->dev, "AFU uses < PAGE_SIZE per-process PSA!");
> +		dev_warn(&afu->dev, "AFU uses pp_size(%#016llx) < PAGE_SIZE per-process PSA!\n", afu->pp_size);
>
>  	for (i = 0; i < afu->crs_num; i++) {
>  		rc = cxl_ops->afu_cr_read32(afu, i, 0, &val);
> @@ -1251,8 +1252,13 @@ int cxl_pci_reset(struct cxl *adapter)
>
>  	dev_info(&dev->dev, "CXL reset\n");
>
> -	/* the adapter is about to be reset, so ignore errors */
> -	cxl_data_cache_flush(adapter);
> +	/*
> +	 * The adapter is about to be reset, so ignore errors.
> +	 * Not supported on P9 DD1 but don't forget to enable it
> +	 * on P9 DD2
> +	 */
> +	if (cxl_is_power8())
> +		cxl_data_cache_flush(adapter);
>
>  	/* pcie_warm_reset requests a fundamental pci reset which includes a
>  	 * PERST assert/deassert.  PERST triggers a loading of the image
> @@ -1382,6 +1388,14 @@ static void cxl_fixup_malformed_tlp(struct cxl *adapter, struct pci_dev *dev)
>  	pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, data);
>  }
>
> +static bool cxl_compatible_caia_version(struct cxl *adapter)
> +{
> +	if (cxl_is_power8() && (adapter->caia_major == 1))
> +		return true;
> +
> +	return false;
> +}
> +
>  static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
>  {
>  	if (adapter->vsec_status & CXL_STATUS_SECOND_PORT)
> @@ -1392,6 +1406,12 @@ static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
>  		return -EINVAL;
>  	}
>
> +	if (!cxl_compatible_caia_version(adapter)) {
> +		dev_info(&dev->dev, "Ignoring card. PSL type is not supported "
> +				    "(caia version: %d)\n", adapter->caia_major);
> +		return -ENODEV;
> +	}
> +
>  	if (!adapter->slices) {
>  		/* Once we support dynamic reprogramming we can use the card if
>  		 * it supports loadable AFUs */
> @@ -1574,8 +1594,10 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
>  		adapter->native->sl_ops = &xsl_ops;
>  		adapter->min_pe = 1; /* Workaround for CX-4 hardware bug */
>  	} else {
> -		dev_info(&dev->dev, "Device uses a PSL8\n");
> -		adapter->native->sl_ops = &psl8_ops;
> +		if (cxl_is_power8()) {
> +			dev_info(&dev->dev, "Device uses a PSL8\n");
> +			adapter->native->sl_ops = &psl8_ops;
> +		}
>  	}
>  }
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-07 14:11 ` [PATCH V4 7/7] cxl: Add psl9 specific code Christophe Lombard
@ 2017-04-11 14:41   ` Frederic Barrat
  2017-04-12  2:11     ` Michael Ellerman
  2017-04-12  7:52   ` Andrew Donnellan
  2017-04-12 14:34   ` [PATCH V4 7/7 remix] " Frederic Barrat
  2 siblings, 1 reply; 30+ messages in thread
From: Frederic Barrat @ 2017-04-11 14:41 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, imunsie, andrew.donnellan



On 07/04/2017 at 16:11, Christophe Lombard wrote:
> The new Coherent Accelerator Interface Architecture, level 2, for the
> IBM POWER9 introduces new features and changes:
> - POWER9 Service Layer
> - Registers
> - Radix mode
> - Process element entry
> - Dedicated-Shared Process Programming Model
> - Translation Fault Handling
> - CAPP
> - Memory Context ID
>     If a valid mm_struct is found, its memory context ID is used for
>     each transaction associated with the process handle. The PSL uses
>     the context ID to find the corresponding process element.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> ---


I'm ok with the code. However, checkpatch is complaining about a 
tab/space error in native.c.

If you have a quick respin, I also have a comment below about the 
documentation.
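
On the memory-context-id point in the commit message above, here is a
standalone sketch of the idea as described there: when the context has a
valid mm_struct, its context ID is written into the process element so the
PSL can match transactions to the right process; a context without an mm
(a kernel context) falls back to a default. Every type and field name
below is an illustrative assumption, not the driver's actual structures.

#include <stdint.h>
#include <stdio.h>

struct mm      { uint32_t ctx_id; };		/* stand-in for mm_struct */
struct pe      { uint32_t pid; uint32_t tid; };	/* stand-in for a PSL9 process element */
struct context { struct mm *mm; };

static void fill_process_element(struct pe *elem, const struct context *ctx)
{
	if (ctx->mm)				/* valid mm_struct found */
		elem->pid = ctx->mm->ctx_id;	/* PSL looks the PE up by this id */
	else					/* kernel context: no address space */
		elem->pid = 0;
	elem->tid = 0;
}

int main(void)
{
	struct mm m = { .ctx_id = 42 };
	struct context user_ctx = { .mm = &m };
	struct context kern_ctx = { .mm = NULL };
	struct pe a, b;

	fill_process_element(&a, &user_ctx);
	fill_process_element(&b, &kern_ctx);
	printf("user pe id=%u, kernel pe id=%u\n", a.pid, b.pid);
	return 0;
}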


>  Documentation/powerpc/cxl.txt |  11 +-
>  drivers/misc/cxl/context.c    |  16 ++-
>  drivers/misc/cxl/cxl.h        | 137 +++++++++++++++++++----
>  drivers/misc/cxl/debugfs.c    |  19 ++++
>  drivers/misc/cxl/fault.c      |  64 +++++++----
>  drivers/misc/cxl/guest.c      |   8 +-
>  drivers/misc/cxl/irq.c        |  53 +++++++++
>  drivers/misc/cxl/native.c     | 225 +++++++++++++++++++++++++++++++++++---
>  drivers/misc/cxl/pci.c        | 246 +++++++++++++++++++++++++++++++++++++++---
>  drivers/misc/cxl/trace.h      |  43 ++++++++
>  10 files changed, 748 insertions(+), 74 deletions(-)
>
> diff --git a/Documentation/powerpc/cxl.txt b/Documentation/powerpc/cxl.txt
> index d5506ba0..4a77462 100644
> --- a/Documentation/powerpc/cxl.txt
> +++ b/Documentation/powerpc/cxl.txt
> @@ -21,7 +21,7 @@ Introduction
>  Hardware overview
>  =================
>
> -          POWER8               FPGA
> +         POWER8/9             FPGA
>         +----------+        +---------+
>         |          |        |         |
>         |   CPU    |        |   AFU   |
> @@ -34,7 +34,7 @@ Hardware overview
>         |   | CAPP |<------>|         |
>         +---+------+  PCIE  +---------+
>
> -    The POWER8 chip has a Coherently Attached Processor Proxy (CAPP)
> +    The POWER8/9 chip has a Coherently Attached Processor Proxy (CAPP)
>      unit which is part of the PCIe Host Bridge (PHB). This is managed
>      by Linux by calls into OPAL. Linux doesn't directly program the
>      CAPP.
> @@ -59,6 +59,13 @@ Hardware overview
>      the fault. The context to which this fault is serviced is based on
>      who owns that acceleration function.


> +    POWER8 <-----> PSL Version 8 is compliant with CAIA Version 1.0.
> +    POWER9 <-----> PSL Version 9 is compliant with CAIA Version 2.0.
> +    PSL Version 9 provides new features such as:
> +    * Native DMA support.
> +    * Sending ASB_Notify messages for host thread wakeup.
> +    * Atomic operations.
> +    * ....


I think one of the most important differences is missing: the PSL on 
POWER9 uses the new nest MMU on the POWER9 chip and no longer has its 
own MMU.

   Fred
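
A small, standalone sketch of how the POWER8 <-> PSL8 / POWER9 <-> PSL9
mapping above can be checked before an ops table is selected. The quoted
patches only wire up the POWER8/CAIA1 leg (cxl_compatible_caia_version()
and set_sl_ops()); the POWER9/CAIA2 leg here is an assumption about what
the psl9 patch adds, and the types are simplified stand-ins.

#include <stdbool.h>
#include <stdio.h>

enum cpu { POWER8, POWER9 };
struct adapter { int caia_major; };	/* CAIA major version reported by the adapter */

static bool compatible_caia_version(enum cpu cpu, const struct adapter *a)
{
	if (cpu == POWER8 && a->caia_major == 1)	/* POWER8 <-> PSL8 / CAIA 1 */
		return true;
	if (cpu == POWER9 && a->caia_major == 2)	/* POWER9 <-> PSL9 / CAIA 2 (assumed) */
		return true;
	return false;					/* otherwise the card is ignored */
}

int main(void)
{
	struct adapter a = { .caia_major = 2 };

	printf("POWER9 + CAIA2: %s\n",
	       compatible_caia_version(POWER9, &a) ? "supported" : "ignored");
	printf("POWER8 + CAIA2: %s\n",
	       compatible_caia_version(POWER8, &a) ? "supported" : "ignored");
	return 0;
}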

>
>  AFU Modes
>  =========
> diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
> index ac2531e..45363be 100644
> --- a/drivers/misc/cxl/context.c
> +++ b/drivers/misc/cxl/context.c
> @@ -188,12 +188,24 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
>  	if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
>  		if (start + len > ctx->afu->adapter->ps_size)
>  			return -EINVAL;
> +
> +		if (cxl_is_psl9(ctx->afu)) {
> +			/* make sure there is a valid problem state
> +			 * area space for this AFU
> +			 */
> +			if (ctx->master && !ctx->afu->psa) {
> +				pr_devel("AFU doesn't support mmio space\n");
> +				return -EINVAL;
> +			}
> +
> +			/* Can't mmap until the AFU is enabled */
> +			if (!ctx->afu->enabled)
> +				return -EBUSY;
> +		}
>  	} else {
>  		if (start + len > ctx->psn_size)
>  			return -EINVAL;
> -	}
>
> -	if (ctx->afu->current_mode != CXL_MODE_DEDICATED) {
>  		/* make sure there is a valid per process space for this AFU */
>  		if ((ctx->master && !ctx->afu->psa) || (!ctx->afu->pp_psa)) {
>  			pr_devel("AFU doesn't support mmio space\n");
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 82335c0..df40e6e 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -63,7 +63,7 @@ typedef struct {
>  /* Memory maps. Ref CXL Appendix A */
>
>  /* PSL Privilege 1 Memory Map */
> -/* Configuration and Control area */
> +/* Configuration and Control area - CAIA 1&2 */
>  static const cxl_p1_reg_t CXL_PSL_CtxTime = {0x0000};
>  static const cxl_p1_reg_t CXL_PSL_ErrIVTE = {0x0008};
>  static const cxl_p1_reg_t CXL_PSL_KEY1    = {0x0010};
> @@ -98,11 +98,29 @@ static const cxl_p1_reg_t CXL_XSL_Timebase  = {0x0100};
>  static const cxl_p1_reg_t CXL_XSL_TB_CTLSTAT = {0x0108};
>  static const cxl_p1_reg_t CXL_XSL_FEC       = {0x0158};
>  static const cxl_p1_reg_t CXL_XSL_DSNCTL    = {0x0168};
> +/* PSL registers - CAIA 2 */
> +static const cxl_p1_reg_t CXL_PSL9_CONTROL  = {0x0020};
> +static const cxl_p1_reg_t CXL_XSL9_DSNCTL   = {0x0168};
> +static const cxl_p1_reg_t CXL_PSL9_FIR1     = {0x0300};
> +static const cxl_p1_reg_t CXL_PSL9_FIR2     = {0x0308};
> +static const cxl_p1_reg_t CXL_PSL9_Timebase = {0x0310};
> +static const cxl_p1_reg_t CXL_PSL9_DEBUG    = {0x0320};
> +static const cxl_p1_reg_t CXL_PSL9_FIR_CNTL = {0x0348};
> +static const cxl_p1_reg_t CXL_PSL9_DSNDCTL  = {0x0350};
> +static const cxl_p1_reg_t CXL_PSL9_TB_CTLSTAT = {0x0340};
> +static const cxl_p1_reg_t CXL_PSL9_TRACECFG = {0x0368};
> +static const cxl_p1_reg_t CXL_PSL9_APCDEDALLOC = {0x0378};
> +static const cxl_p1_reg_t CXL_PSL9_APCDEDTYPE = {0x0380};
> +static const cxl_p1_reg_t CXL_PSL9_TNR_ADDR = {0x0388};
> +static const cxl_p1_reg_t CXL_PSL9_GP_CT = {0x0398};
> +static const cxl_p1_reg_t CXL_XSL9_IERAT = {0x0588};
> +static const cxl_p1_reg_t CXL_XSL9_ILPP  = {0x0590};
> +
>  /* 0x7F00:7FFF Reserved PCIe MSI-X Pending Bit Array area */
>  /* 0x8000:FFFF Reserved PCIe MSI-X Table Area */
>
>  /* PSL Slice Privilege 1 Memory Map */
> -/* Configuration Area */
> +/* Configuration Area - CAIA 1&2 */
>  static const cxl_p1n_reg_t CXL_PSL_SR_An          = {0x00};
>  static const cxl_p1n_reg_t CXL_PSL_LPID_An        = {0x08};
>  static const cxl_p1n_reg_t CXL_PSL_AMBAR_An       = {0x10};
> @@ -111,17 +129,18 @@ static const cxl_p1n_reg_t CXL_PSL_ID_An          = {0x20};
>  static const cxl_p1n_reg_t CXL_PSL_SERR_An        = {0x28};
>  /* Memory Management and Lookaside Buffer Management - CAIA 1*/
>  static const cxl_p1n_reg_t CXL_PSL_SDR_An         = {0x30};
> +/* Memory Management and Lookaside Buffer Management - CAIA 1&2 */
>  static const cxl_p1n_reg_t CXL_PSL_AMOR_An        = {0x38};
> -/* Pointer Area */
> +/* Pointer Area - CAIA 1&2 */
>  static const cxl_p1n_reg_t CXL_HAURP_An           = {0x80};
>  static const cxl_p1n_reg_t CXL_PSL_SPAP_An        = {0x88};
>  static const cxl_p1n_reg_t CXL_PSL_LLCMD_An       = {0x90};
> -/* Control Area */
> +/* Control Area - CAIA 1&2 */
>  static const cxl_p1n_reg_t CXL_PSL_SCNTL_An       = {0xA0};
>  static const cxl_p1n_reg_t CXL_PSL_CtxTime_An     = {0xA8};
>  static const cxl_p1n_reg_t CXL_PSL_IVTE_Offset_An = {0xB0};
>  static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An  = {0xB8};
> -/* 0xC0:FF Implementation Dependent Area */
> +/* 0xC0:FF Implementation Dependent Area - CAIA 1&2 */
>  static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An   = {0xC0};
>  static const cxl_p1n_reg_t CXL_AFU_DEBUG_An       = {0xC8};
>  /* 0xC0:FF Implementation Dependent Area - CAIA 1 */
> @@ -131,7 +150,7 @@ static const cxl_p1n_reg_t CXL_PSL_RXCTL_A        = {0xE0};
>  static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE    = {0xE8};
>
>  /* PSL Slice Privilege 2 Memory Map */
> -/* Configuration and Control Area */
> +/* Configuration and Control Area - CAIA 1&2 */
>  static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
>  static const cxl_p2n_reg_t CXL_CSRP_An        = {0x008};
>  /* Configuration and Control Area - CAIA 1 */
> @@ -145,17 +164,17 @@ static const cxl_p2n_reg_t CXL_PSL_AMR_An     = {0x030};
>  static const cxl_p2n_reg_t CXL_SLBIE_An       = {0x040};
>  static const cxl_p2n_reg_t CXL_SLBIA_An       = {0x048};
>  static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
> -/* Interrupt Registers */
> +/* Interrupt Registers - CAIA 1&2 */
>  static const cxl_p2n_reg_t CXL_PSL_DSISR_An   = {0x060};
>  static const cxl_p2n_reg_t CXL_PSL_DAR_An     = {0x068};
>  static const cxl_p2n_reg_t CXL_PSL_DSR_An     = {0x070};
>  static const cxl_p2n_reg_t CXL_PSL_TFC_An     = {0x078};
>  static const cxl_p2n_reg_t CXL_PSL_PEHandle_An = {0x080};
>  static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088};
> -/* AFU Registers */
> +/* AFU Registers - CAIA 1&2 */
>  static const cxl_p2n_reg_t CXL_AFU_Cntl_An    = {0x090};
>  static const cxl_p2n_reg_t CXL_AFU_ERR_An     = {0x098};
> -/* Work Element Descriptor */
> +/* Work Element Descriptor - CAIA 1&2 */
>  static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
>  /* 0x0C0:FFF Implementation Dependent Area */
>
> @@ -182,6 +201,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
>  #define CXL_PSL_SR_An_SF  MSR_SF            /* 64bit */
>  #define CXL_PSL_SR_An_TA  (1ull << (63-1))  /* Tags active,   GA1: 0 */
>  #define CXL_PSL_SR_An_HV  MSR_HV            /* Hypervisor,    GA1: 0 */
> +#define CXL_PSL_SR_An_XLAT_hpt (0ull << (63-6))/* Hashed page table (HPT) mode */
> +#define CXL_PSL_SR_An_XLAT_roh (2ull << (63-6))/* Radix on HPT mode */
> +#define CXL_PSL_SR_An_XLAT_ror (3ull << (63-6))/* Radix on Radix mode */
> +#define CXL_PSL_SR_An_BOT (1ull << (63-10)) /* Use the in-memory segment table */
>  #define CXL_PSL_SR_An_PR  MSR_PR            /* Problem state, GA1: 1 */
>  #define CXL_PSL_SR_An_ISL (1ull << (63-53)) /* Ignore Segment Large Page */
>  #define CXL_PSL_SR_An_TC  (1ull << (63-54)) /* Page Table secondary hash */
> @@ -298,12 +321,38 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
>  #define CXL_PSL_DSISR_An_S  DSISR_ISSTORE     /* Access was afu_wr or afu_zero */
>  #define CXL_PSL_DSISR_An_K  DSISR_KEYFAULT    /* Access not permitted by virtual page class key protection */
>
> +/****** CXL_PSL_DSISR_An - CAIA 2 ****************************************************/
> +#define CXL_PSL9_DSISR_An_TF (1ull << (63-3))  /* Translation fault */
> +#define CXL_PSL9_DSISR_An_PE (1ull << (63-4))  /* PSL Error (implementation specific) */
> +#define CXL_PSL9_DSISR_An_AE (1ull << (63-5))  /* AFU Error */
> +#define CXL_PSL9_DSISR_An_OC (1ull << (63-6))  /* OS Context Warning */
> +#define CXL_PSL9_DSISR_An_S (1ull << (63-38))  /* TF for a write operation */
> +#define CXL_PSL9_DSISR_PENDING (CXL_PSL9_DSISR_An_TF | CXL_PSL9_DSISR_An_PE | CXL_PSL9_DSISR_An_AE | CXL_PSL9_DSISR_An_OC)
> +/* NOTE: Bits 56:63 (Checkout Response Status) are valid when DSISR_An[TF] = 1
> + * Status (0:7) Encoding
> + */
> +#define CXL_PSL9_DSISR_An_CO_MASK 0x00000000000000ffULL
> +#define CXL_PSL9_DSISR_An_SF      0x0000000000000080ULL  /* Segment Fault                        0b10000000 */
> +#define CXL_PSL9_DSISR_An_PF_SLR  0x0000000000000088ULL  /* PTE not found (Single Level Radix)   0b10001000 */
> +#define CXL_PSL9_DSISR_An_PF_RGC  0x000000000000008CULL  /* PTE not found (Radix Guest (child))  0b10001100 */
> +#define CXL_PSL9_DSISR_An_PF_RGP  0x0000000000000090ULL  /* PTE not found (Radix Guest (parent)) 0b10010000 */
> +#define CXL_PSL9_DSISR_An_PF_HRH  0x0000000000000094ULL  /* PTE not found (HPT/Radix Host)       0b10010100 */
> +#define CXL_PSL9_DSISR_An_PF_STEG 0x000000000000009CULL  /* PTE not found (STEG VA)              0b10011100 */
> +
>  /****** CXL_PSL_TFC_An ******************************************************/
>  #define CXL_PSL_TFC_An_A  (1ull << (63-28)) /* Acknowledge non-translation fault */
>  #define CXL_PSL_TFC_An_C  (1ull << (63-29)) /* Continue (abort transaction) */
>  #define CXL_PSL_TFC_An_AE (1ull << (63-30)) /* Restart PSL with address error */
>  #define CXL_PSL_TFC_An_R  (1ull << (63-31)) /* Restart PSL transaction */
>
> +/****** CXL_XSL9_IERAT_ERAT - CAIA 2 **********************************/
> +#define CXL_XSL9_IERAT_MLPID    (1ull << (63-0))  /* Match LPID */
> +#define CXL_XSL9_IERAT_MPID     (1ull << (63-1))  /* Match PID */
> +#define CXL_XSL9_IERAT_PRS      (1ull << (63-4))  /* PRS bit for Radix invalidations */
> +#define CXL_XSL9_IERAT_INVR     (1ull << (63-3))  /* Invalidate Radix */
> +#define CXL_XSL9_IERAT_IALL     (1ull << (63-8))  /* Invalidate All */
> +#define CXL_XSL9_IERAT_IINPROG  (1ull << (63-63)) /* Invalidate in progress */
> +
>  /* cxl_process_element->software_status */
>  #define CXL_PE_SOFTWARE_STATE_V (1ul << (31 -  0)) /* Valid */
>  #define CXL_PE_SOFTWARE_STATE_C (1ul << (31 - 29)) /* Complete */
> @@ -654,25 +703,38 @@ int cxl_pci_reset(struct cxl *adapter);
>  void cxl_pci_release_afu(struct device *dev);
>  ssize_t cxl_pci_read_adapter_vpd(struct cxl *adapter, void *buf, size_t len);
>
> -/* common == phyp + powernv */
> +/* common == phyp + powernv - CAIA 1&2 */
>  struct cxl_process_element_common {
>  	__be32 tid;
>  	__be32 pid;
>  	__be64 csrp;
> -	__be64 aurp0;
> -	__be64 aurp1;
> -	__be64 sstp0;
> -	__be64 sstp1;
> +	union {
> +		struct {
> +			__be64 aurp0;
> +			__be64 aurp1;
> +			__be64 sstp0;
> +			__be64 sstp1;
> +		} psl8;  /* CAIA 1 */
> +		struct {
> +			u8     reserved2[8];
> +			u8     reserved3[8];
> +			u8     reserved4[8];
> +			u8     reserved5[8];
> +		} psl9;  /* CAIA 2 */
> +	} u;
>  	__be64 amr;
> -	u8     reserved3[4];
> +	u8     reserved6[4];
>  	__be64 wed;
>  } __packed;
>
> -/* just powernv */
> +/* just powernv - CAIA 1&2 */
>  struct cxl_process_element {
>  	__be64 sr;
>  	__be64 SPOffset;
> -	__be64 sdr;
> +	union {
> +		__be64 sdr;          /* CAIA 1 */
> +		u8     reserved1[8]; /* CAIA 2 */
> +	} u;
>  	__be64 haurp;
>  	__be32 ctxtime;
>  	__be16 ivte_offsets[4];
> @@ -761,6 +823,16 @@ static inline bool cxl_is_power8(void)
>  	return false;
>  }
>
> +static inline bool cxl_is_power9(void)
> +{
> +	/* intermediate solution */
> +	if (!cxl_is_power8() &&
> +	   (cpu_has_feature(CPU_FTRS_POWER9) ||
> +	    cpu_has_feature(CPU_FTR_POWER9_DD1)))
> +		return true;
> +	return false;
> +}
> +
>  static inline bool cxl_is_psl8(struct cxl_afu *afu)
>  {
>  	if (afu->adapter->caia_major == 1)
> @@ -768,6 +840,13 @@ static inline bool cxl_is_psl8(struct cxl_afu *afu)
>  	return false;
>  }
>
> +static inline bool cxl_is_psl9(struct cxl_afu *afu)
> +{
> +	if (afu->adapter->caia_major == 2)
> +		return true;
> +	return false;
> +}
> +
>  ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
>  				loff_t off, size_t count);
>
> @@ -794,7 +873,6 @@ int cxl_update_properties(struct device_node *dn, struct property *new_prop);
>
>  void cxl_remove_adapter_nr(struct cxl *adapter);
>
> -int cxl_alloc_spa(struct cxl_afu *afu);
>  void cxl_release_spa(struct cxl_afu *afu);
>
>  dev_t cxl_get_dev(void);
> @@ -832,9 +910,13 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
>  void afu_release_irqs(struct cxl_context *ctx, void *cookie);
>  void afu_irq_name_free(struct cxl_context *ctx);
>
> +int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
>  int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
> +int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu);
>  int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
> +int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
>  int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
> +void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx);
>  void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
>
>  #ifdef CONFIG_DEBUG_FS
> @@ -845,9 +927,12 @@ int cxl_debugfs_adapter_add(struct cxl *adapter);
>  void cxl_debugfs_adapter_remove(struct cxl *adapter);
>  int cxl_debugfs_afu_add(struct cxl_afu *afu);
>  void cxl_debugfs_afu_remove(struct cxl_afu *afu);
> +void cxl_stop_trace_psl9(struct cxl *cxl);
>  void cxl_stop_trace_psl8(struct cxl *cxl);
> +void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir);
>  void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
>  void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
> +void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir);
>  void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
>
>  #else /* CONFIG_DEBUG_FS */
> @@ -879,10 +964,19 @@ static inline void cxl_debugfs_afu_remove(struct cxl_afu *afu)
>  {
>  }
>
> +static inline void cxl_stop_trace_psl9(struct cxl *cxl)
> +{
> +}
> +
>  static inline void cxl_stop_trace_psl8(struct cxl *cxl)
>  {
>  }
>
> +static inline void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter,
> +						    struct dentry *dir)
> +{
> +}
> +
>  static inline void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter,
>  						    struct dentry *dir)
>  {
> @@ -893,6 +987,10 @@ static inline void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter,
>  {
>  }
>
> +static inline void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
> +{
> +}
> +
>  static inline void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
>  {
>  }
> @@ -938,7 +1036,9 @@ struct cxl_irq_info {
>  };
>
>  void cxl_assign_psn_space(struct cxl_context *ctx);
> +int cxl_invalidate_all_psl9(struct cxl *adapter);
>  int cxl_invalidate_all_psl8(struct cxl *adapter);
> +irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
>  irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
>  irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
>  int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
> @@ -951,6 +1051,7 @@ int cxl_data_cache_flush(struct cxl *adapter);
>  int cxl_afu_disable(struct cxl_afu *afu);
>  int cxl_psl_purge(struct cxl_afu *afu);
>
> +void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx);
>  void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
>  void cxl_native_err_irq_dump_regs(struct cxl *adapter);
>  int cxl_pci_vphb_add(struct cxl_afu *afu);
> diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
> index 43a1a27..eae9d74 100644
> --- a/drivers/misc/cxl/debugfs.c
> +++ b/drivers/misc/cxl/debugfs.c
> @@ -15,6 +15,12 @@
>
>  static struct dentry *cxl_debugfs;
>
> +void cxl_stop_trace_psl9(struct cxl *adapter)
> +{
> +	/* Stop the trace */
> +	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x4480000000000000ULL);
> +}
> +
>  void cxl_stop_trace_psl8(struct cxl *adapter)
>  {
>  	int slice;
> @@ -53,6 +59,14 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
>  					  (void __force *)value, &fops_io_x64);
>  }
>
> +void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir)
> +{
> +	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR1));
> +	debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR2));
> +	debugfs_create_io_x64("fir_cntl", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR_CNTL));
> +	debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_TRACECFG));
> +}
> +
>  void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
>  {
>  	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
> @@ -92,6 +106,11 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
>  	debugfs_remove_recursive(adapter->debugfs);
>  }
>
> +void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
> +{
> +	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
> +}
> +
>  void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
>  {
>  	debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
> diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
> index e6f8f05..5344448 100644
> --- a/drivers/misc/cxl/fault.c
> +++ b/drivers/misc/cxl/fault.c
> @@ -146,25 +146,26 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
>  		return cxl_ack_ae(ctx);
>  	}
>
> -	/*
> -	 * update_mmu_cache() will not have loaded the hash since current->trap
> -	 * is not a 0x400 or 0x300, so just call hash_page_mm() here.
> -	 */
> -	access = _PAGE_PRESENT | _PAGE_READ;
> -	if (dsisr & CXL_PSL_DSISR_An_S)
> -		access |= _PAGE_WRITE;
> -
> -	access |= _PAGE_PRIVILEGED;
> -	if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
> -		access &= ~_PAGE_PRIVILEGED;
> -
> -	if (dsisr & DSISR_NOHPTE)
> -		inv_flags |= HPTE_NOHPTE_UPDATE;
> -
> -	local_irq_save(flags);
> -	hash_page_mm(mm, dar, access, 0x300, inv_flags);
> -	local_irq_restore(flags);
> -
> +	if (!radix_enabled()) {
> +		/*
> +		 * update_mmu_cache() will not have loaded the hash since current->trap
> +		 * is not a 0x400 or 0x300, so just call hash_page_mm() here.
> +		 */
> +		access = _PAGE_PRESENT | _PAGE_READ;
> +		if (dsisr & CXL_PSL_DSISR_An_S)
> +			access |= _PAGE_WRITE;
> +
> +		access |= _PAGE_PRIVILEGED;
> +		if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
> +			access &= ~_PAGE_PRIVILEGED;
> +
> +		if (dsisr & DSISR_NOHPTE)
> +			inv_flags |= HPTE_NOHPTE_UPDATE;
> +
> +		local_irq_save(flags);
> +		hash_page_mm(mm, dar, access, 0x300, inv_flags);
> +		local_irq_restore(flags);
> +	}
>  	pr_devel("Page fault successfully handled for pe: %i!\n", ctx->pe);
>  	cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_R, 0);
>  }
> @@ -184,7 +185,28 @@ static struct mm_struct *get_mem_context(struct cxl_context *ctx)
>  	return ctx->mm;
>  }
>
> +static bool cxl_is_segment_miss(struct cxl_context *ctx, u64 dsisr)
> +{
> +	if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DS))
> +		return true;
> +
> +	return false;
> +}
> +
> +static bool cxl_is_page_fault(struct cxl_context *ctx, u64 dsisr)
> +{
> +	if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DM))
> +		return true;
> +
> +	if ((cxl_is_psl9(ctx->afu)) &&
> +	   ((dsisr & CXL_PSL9_DSISR_An_CO_MASK) &
> +		(CXL_PSL9_DSISR_An_PF_SLR | CXL_PSL9_DSISR_An_PF_RGC |
> +		 CXL_PSL9_DSISR_An_PF_RGP | CXL_PSL9_DSISR_An_PF_HRH |
> +		 CXL_PSL9_DSISR_An_PF_STEG)))
> +		return true;
>
> +	return false;
> +}
>
>  void cxl_handle_fault(struct work_struct *fault_work)
>  {
> @@ -230,9 +252,9 @@ void cxl_handle_fault(struct work_struct *fault_work)
>  		}
>  	}
>
> -	if (dsisr & CXL_PSL_DSISR_An_DS)
> +	if (cxl_is_segment_miss(ctx, dsisr))
>  		cxl_handle_segment_miss(ctx, mm, dar);
> -	else if (dsisr & CXL_PSL_DSISR_An_DM)
> +	else if (cxl_is_page_fault(ctx, dsisr))
>  		cxl_handle_page_fault(ctx, mm, dsisr, dar);
>  	else
>  		WARN(1, "cxl_handle_fault has nothing to handle\n");
> diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
> index 3ad7381..f58b4b6c 100644
> --- a/drivers/misc/cxl/guest.c
> +++ b/drivers/misc/cxl/guest.c
> @@ -551,13 +551,13 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
>  	elem->common.tid    = cpu_to_be32(0); /* Unused */
>  	elem->common.pid    = cpu_to_be32(pid);
>  	elem->common.csrp   = cpu_to_be64(0); /* disable */
> -	elem->common.aurp0  = cpu_to_be64(0); /* disable */
> -	elem->common.aurp1  = cpu_to_be64(0); /* disable */
> +	elem->common.u.psl8.aurp0  = cpu_to_be64(0); /* disable */
> +	elem->common.u.psl8.aurp1  = cpu_to_be64(0); /* disable */
>
>  	cxl_prefault(ctx, wed);
>
> -	elem->common.sstp0  = cpu_to_be64(ctx->sstp0);
> -	elem->common.sstp1  = cpu_to_be64(ctx->sstp1);
> +	elem->common.u.psl8.sstp0  = cpu_to_be64(ctx->sstp0);
> +	elem->common.u.psl8.sstp1  = cpu_to_be64(ctx->sstp1);
>
>  	/*
>  	 * Ensure we have at least one interrupt allocated to take faults for
> diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
> index fa9f8a2..1eb5168 100644
> --- a/drivers/misc/cxl/irq.c
> +++ b/drivers/misc/cxl/irq.c
> @@ -34,6 +34,59 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
>  	return IRQ_HANDLED;
>  }
>
> +irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
> +{
> +	u64 dsisr, dar;
> +
> +	dsisr = irq_info->dsisr;
> +	dar = irq_info->dar;
> +
> +	trace_cxl_psl9_irq(ctx, irq, dsisr, dar);
> +
> +	pr_devel("CXL interrupt %i for afu pe: %i DSISR: %#llx DAR: %#llx\n", irq, ctx->pe, dsisr, dar);
> +
> +	if (dsisr & CXL_PSL9_DSISR_An_TF) {
> +		pr_devel("CXL interrupt: Scheduling translation fault"
> +			 " handling for later (pe: %i)\n", ctx->pe);
> +		return schedule_cxl_fault(ctx, dsisr, dar);
> +	}
> +
> +	if (dsisr & CXL_PSL9_DSISR_An_PE)
> +		return cxl_ops->handle_psl_slice_error(ctx, dsisr,
> +						irq_info->errstat);
> +	if (dsisr & CXL_PSL9_DSISR_An_AE) {
> +		pr_devel("CXL interrupt: AFU Error 0x%016llx\n", irq_info->afu_err);
> +
> +		if (ctx->pending_afu_err) {
> +			/*
> +			 * This shouldn't happen - the PSL treats these errors
> +			 * as fatal and will have reset the AFU, so there's not
> +			 * much point buffering multiple AFU errors.
> +			 * OTOH if we DO ever see a storm of these come in it's
> +			 * probably best that we log them somewhere:
> +			 */
> +			dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error "
> +					    "undelivered to pe %i: 0x%016llx\n",
> +					    ctx->pe, irq_info->afu_err);
> +		} else {
> +			spin_lock(&ctx->lock);
> +			ctx->afu_err = irq_info->afu_err;
> +			ctx->pending_afu_err = 1;
> +			spin_unlock(&ctx->lock);
> +
> +			wake_up_all(&ctx->wq);
> +		}
> +
> +		cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_A, 0);
> +		return IRQ_HANDLED;
> +	}
> +	if (dsisr & CXL_PSL9_DSISR_An_OC)
> +		pr_devel("CXL interrupt: OS Context Warning\n");
> +
> +	WARN(1, "Unhandled CXL PSL IRQ\n");
> +	return IRQ_HANDLED;
> +}
> +
>  irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
>  {
>  	u64 dsisr, dar;
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index 0401e4dc..1e3c5c2 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -120,6 +120,7 @@ int cxl_psl_purge(struct cxl_afu *afu)
>  	u64 AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
>  	u64 dsisr, dar;
>  	u64 start, end;
> +	u64 trans_fault = 0x0ULL;
>  	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
>  	int rc = 0;
>
> @@ -127,6 +128,11 @@ int cxl_psl_purge(struct cxl_afu *afu)
>
>  	pr_devel("PSL purge request\n");
>
> +	if (cxl_is_psl8(afu))
> +		trans_fault = CXL_PSL_DSISR_TRANS;
> +	if (cxl_is_psl9(afu))
> +		trans_fault = CXL_PSL9_DSISR_An_TF;
> +
>  	if (!cxl_ops->link_ok(afu->adapter, afu)) {
>  		dev_warn(&afu->dev, "PSL Purge called with link down, ignoring\n");
>  		rc = -EIO;
> @@ -159,12 +165,12 @@ int cxl_psl_purge(struct cxl_afu *afu)
>  				     "  PSL_DSISR: 0x%016llx\n",
>  				     PSL_CNTL, dsisr);
>
> -		if (dsisr & CXL_PSL_DSISR_TRANS) {
> +		if (dsisr & trans_fault) {
>  			dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
>  			dev_notice(&afu->dev, "PSL purge terminating "
>  					      "pending translation, "
>  					      "DSISR: 0x%016llx, DAR: 0x%016llx\n",
> -					       dsisr, dar);
> +					      dsisr, dar);
>  			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
>  		} else if (dsisr) {
>  			dev_notice(&afu->dev, "PSL purge acknowledging "
> @@ -204,7 +210,7 @@ static int spa_max_procs(int spa_size)
>  	return ((spa_size / 8) - 96) / 17;
>  }
>
> -int cxl_alloc_spa(struct cxl_afu *afu)
> +static int cxl_alloc_spa(struct cxl_afu *afu, int mode)
>  {
>  	unsigned spa_size;
>
> @@ -217,7 +223,8 @@ int cxl_alloc_spa(struct cxl_afu *afu)
>  		if (spa_size > 0x100000) {
>  			dev_warn(&afu->dev, "num_of_processes too large for the SPA, limiting to %i (0x%x)\n",
>  					afu->native->spa_max_procs, afu->native->spa_size);
> -			afu->num_procs = afu->native->spa_max_procs;
> +			if (mode != CXL_MODE_DEDICATED)
> +				afu->num_procs = afu->native->spa_max_procs;
>  			break;
>  		}
>
> @@ -266,6 +273,35 @@ void cxl_release_spa(struct cxl_afu *afu)
>  	}
>  }
>
> +/* Invalidation of all ERAT entries is no longer required by CAIA2. Use
> + * only for debug
> + */
> +int cxl_invalidate_all_psl9(struct cxl *adapter)
> +{
> +	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
> +	u64 ierat;
> +
> +	pr_devel("CXL adapter - invalidation of all ERAT entries\n");
> +
> +	/* Invalidates all ERAT entries for Radix or HPT */
> +	ierat = CXL_XSL9_IERAT_IALL;
> +	if (radix_enabled())
> +		ierat |= CXL_XSL9_IERAT_INVR;
> +	cxl_p1_write(adapter, CXL_XSL9_IERAT, ierat);
> +
> +	while (cxl_p1_read(adapter, CXL_XSL9_IERAT) & CXL_XSL9_IERAT_IINPROG) {
> +		if (time_after_eq(jiffies, timeout)) {
> +			dev_warn(&adapter->dev,
> +			"WARNING: CXL adapter invalidation of all ERAT entries timed out!\n");
> +			return -EBUSY;
> +		}
> +		if (!cxl_ops->link_ok(adapter, NULL))
> +			return -EIO;
> +		cpu_relax();
> +	}
> +	return 0;
> +}
> +
>  int cxl_invalidate_all_psl8(struct cxl *adapter)
>  {
>  	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
> @@ -502,7 +538,7 @@ static int activate_afu_directed(struct cxl_afu *afu)
>
>  	afu->num_procs = afu->max_procs_virtualised;
>  	if (afu->native->spa == NULL) {
> -		if (cxl_alloc_spa(afu))
> +		if (cxl_alloc_spa(afu, CXL_MODE_DIRECTED))
>  			return -ENOMEM;
>  	}
>  	attach_spa(afu);
> @@ -552,10 +588,19 @@ static u64 calculate_sr(struct cxl_context *ctx)
>  		sr |= (mfmsr() & MSR_SF) | CXL_PSL_SR_An_HV;
>  	} else {
>  		sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
> -		sr &= ~(CXL_PSL_SR_An_HV);
> +		if (radix_enabled())
> +			sr |= CXL_PSL_SR_An_HV;
> +		else
> +			sr &= ~(CXL_PSL_SR_An_HV);
>  		if (!test_tsk_thread_flag(current, TIF_32BIT))
>  			sr |= CXL_PSL_SR_An_SF;
>  	}
> +	if (cxl_is_psl9(ctx->afu)) {
> +		if (radix_enabled())
> +			sr |= CXL_PSL_SR_An_XLAT_ror;
> +		else
> +			sr |= CXL_PSL_SR_An_XLAT_hpt;
> +	}
>  	return sr;
>  }
>
> @@ -588,6 +633,70 @@ static void update_ivtes_directed(struct cxl_context *ctx)
>  		WARN_ON(add_process_element(ctx));
>  }
>
> +static int process_element_entry_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
> +{
> +	u32 pid;
> +
> +	cxl_assign_psn_space(ctx);
> +
> +	ctx->elem->ctxtime = 0; /* disable */
> +	ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
> +	ctx->elem->haurp = 0; /* disable */
> +
> +	if (ctx->kernel)
> +		pid = 0;
> +	else {
> +		if (ctx->mm == NULL) {
> +			pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
> +				__func__, ctx->pe, pid_nr(ctx->pid));
> +			return -EINVAL;
> +		}
> +		pid = ctx->mm->context.id;
> +	}
> +
> +	ctx->elem->common.tid = 0;
> +	ctx->elem->common.pid = cpu_to_be32(pid);
> +
> +	ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
> +
> +	ctx->elem->common.csrp = 0; /* disable */
> +
> +	cxl_prefault(ctx, wed);
> +
> +	/*
> +	 * Ensure we have the multiplexed PSL interrupt set up to take faults
> +	 * for kernel contexts that may not have allocated any AFU IRQs at all:
> +	 */
> +	if (ctx->irqs.range[0] == 0) {
> +		ctx->irqs.offset[0] = ctx->afu->native->psl_hwirq;
> +		ctx->irqs.range[0] = 1;
> +	}
> +
> +	ctx->elem->common.amr = cpu_to_be64(amr);
> +	ctx->elem->common.wed = cpu_to_be64(wed);
> +
> +	return 0;
> +}
> +
> +int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
> +{
> +	int result;
> +
> +	/* fill the process element entry */
> +	result = process_element_entry_psl9(ctx, wed, amr);
> +	if (result)
> +		return result;
> +
> +	update_ivtes_directed(ctx);
> +
> +	/* first guy needs to enable */
> +	result = cxl_ops->afu_check_and_enable(ctx->afu);
> +	if (result)
> +		return result;
> +
> +	return add_process_element(ctx);
> +}
> +
>  int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
>  {
>  	u32 pid;
> @@ -598,7 +707,7 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
>  	ctx->elem->ctxtime = 0; /* disable */
>  	ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
>  	ctx->elem->haurp = 0; /* disable */
> -	ctx->elem->sdr = cpu_to_be64(mfspr(SPRN_SDR1));
> +	ctx->elem->u.sdr = cpu_to_be64(mfspr(SPRN_SDR1));
>
>  	pid = current->pid;
>  	if (ctx->kernel)
> @@ -609,13 +718,13 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
>  	ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
>
>  	ctx->elem->common.csrp = 0; /* disable */
> -	ctx->elem->common.aurp0 = 0; /* disable */
> -	ctx->elem->common.aurp1 = 0; /* disable */
> +	ctx->elem->common.u.psl8.aurp0 = 0; /* disable */
> +	ctx->elem->common.u.psl8.aurp1 = 0; /* disable */
>
>  	cxl_prefault(ctx, wed);
>
> -	ctx->elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
> -	ctx->elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
> +	ctx->elem->common.u.psl8.sstp0 = cpu_to_be64(ctx->sstp0);
> +	ctx->elem->common.u.psl8.sstp1 = cpu_to_be64(ctx->sstp1);
>
>  	/*
>  	 * Ensure we have the multiplexed PSL interrupt set up to take faults
> @@ -681,6 +790,31 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
>  	return 0;
>  }
>
> +int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu)
> +{
> +	dev_info(&afu->dev, "Activating dedicated process mode\n");
> +
> +	/* If XSL is set to dedicated mode (Set in PSL_SCNTL reg), the
> +	 * XSL and AFU are programmed to work with a single context.
> +	 * The context information should be configured in the SPA area
> +	 * index 0 (so PSL_SPAP must be configured before enabling the
> +	 * AFU).
> +	 */
> +	afu->num_procs = 1;
> +	if (afu->native->spa == NULL) {
> +		if (cxl_alloc_spa(afu, CXL_MODE_DEDICATED))
> +			return -ENOMEM;
> +	}
> +	attach_spa(afu);
> +
> +	cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_Process);
> +	cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
> +
> +	afu->current_mode = CXL_MODE_DEDICATED;
> +
> +	return cxl_chardev_d_afu_add(afu);
> +}
> +
>  int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
>  {
>  	dev_info(&afu->dev, "Activating dedicated process mode\n");
> @@ -704,6 +838,16 @@ int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
>  	return cxl_chardev_d_afu_add(afu);
>  }
>
> +void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx)
> +{
> +	int r;
> +
> +	for (r = 0; r < CXL_IRQ_RANGES; r++) {
> +		ctx->elem->ivte_offsets[r] = cpu_to_be16(ctx->irqs.offset[r]);
> +		ctx->elem->ivte_ranges[r] = cpu_to_be16(ctx->irqs.range[r]);
> +	}
> +}
> +
>  void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
>  {
>  	struct cxl_afu *afu = ctx->afu;
> @@ -720,6 +864,26 @@ void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
>  			((u64)ctx->irqs.range[3] & 0xffff));
>  }
>
> +int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
> +{
> +	struct cxl_afu *afu = ctx->afu;
> +	int result;
> +
> +	/* fill the process element entry */
> +	result = process_element_entry_psl9(ctx, wed, amr);
> +	if (result)
> +		return result;
> +
> +	if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
> +		afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
> +
> +	result = cxl_ops->afu_reset(afu);
> +	if (result)
> +		return result;
> +
> +	return afu_enable(afu);
> +}
> +
>  int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
>  {
>  	struct cxl_afu *afu = ctx->afu;
> @@ -891,6 +1055,21 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>  	return 0;
>  }
>
> +void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx)
> +{
> +	u64 fir1, fir2, serr;
> +
> +	fir1 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR1);
> +	fir2 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR2);
> +
> +	dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%016llx\n", fir1);
> +	dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%016llx\n", fir2);
> +	if (ctx->afu->adapter->native->sl_ops->register_serr_irq) {
> +		serr = cxl_p1n_read(ctx->afu, CXL_PSL_SERR_An);
> +		cxl_afu_decode_psl_serr(ctx->afu, serr);
> +	}
> +}
> +
>  void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
>  {
>  	u64 fir1, fir2, fir_slice, serr, afu_debug;
> @@ -927,9 +1106,20 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
>  	return cxl_ops->ack_irq(ctx, 0, errstat);
>  }
>
> +static bool cxl_is_translation_fault(struct cxl_afu *afu, u64 dsisr)
> +{
> +	if ((cxl_is_psl8(afu)) && (dsisr & CXL_PSL_DSISR_TRANS))
> +		return true;
> +
> +	if ((cxl_is_psl9(afu)) && (dsisr & CXL_PSL9_DSISR_An_TF))
> +		return true;
> +
> +	return false;
> +}
> +
>  irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
>  {
> -	if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
> +	if (cxl_is_translation_fault(afu, irq_info->dsisr))
>  		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
>  	else
>  		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
> @@ -998,6 +1188,9 @@ static void native_irq_wait(struct cxl_context *ctx)
>  		if (cxl_is_psl8(ctx->afu) &&
>  		   ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
>  			return;
> +		if (cxl_is_psl9(ctx->afu) &&
> +		   ((dsisr & CXL_PSL9_DSISR_PENDING) == 0))
> +			return;
>  		/*
>  		 * We are waiting for the workqueue to process our
>  		 * irq, so need to let that run here.
> @@ -1125,7 +1318,13 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
>
>  	serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
>  	if (cxl_is_power8())
> -		serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
> + 		serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
> +	if (cxl_is_power9()) {
> +		/* By default, all errors are masked. So don't set all masks.
> +		 * Slice errors will be transfered.
> +		 */
> +		serr = (serr & ~0xff0000007fffffffULL) | (afu->serr_hwirq & 0xffff);
> +	}
>  	cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
>
>  	return 0;
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index a910115..1789ad8 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -60,7 +60,7 @@
>  #define CXL_VSEC_PROTOCOL_MASK   0xe0
>  #define CXL_VSEC_PROTOCOL_1024TB 0x80
>  #define CXL_VSEC_PROTOCOL_512TB  0x40
> -#define CXL_VSEC_PROTOCOL_256TB  0x20 /* Power 8 uses this */
> +#define CXL_VSEC_PROTOCOL_256TB  0x20 /* Power 8/9 uses this */
>  #define CXL_VSEC_PROTOCOL_ENABLE 0x01
>
>  #define CXL_READ_VSEC_PSL_REVISION(dev, vsec, dest) \
> @@ -326,14 +326,20 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
>
>  #define P8_CAPP_UNIT0_ID 0xBA
>  #define P8_CAPP_UNIT1_ID 0XBE
> +#define P9_CAPP_UNIT0_ID 0xC0
> +#define P9_CAPP_UNIT1_ID 0xE0
>
> -static u64 get_capp_unit_id(struct device_node *np)
> +static u32 get_phb_index(struct device_node *np)
>  {
>  	u32 phb_index;
>
>  	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
> -		return 0;
> +		return -ENODEV;
> +	return phb_index;
> +}
>
> +static u64 get_capp_unit_id(struct device_node *np, u32 phb_index)
> +{
>  	/*
>  	 * POWER 8:
>  	 *  - For chips other than POWER8NVL, we only have CAPP 0,
> @@ -352,10 +358,25 @@ static u64 get_capp_unit_id(struct device_node *np)
>  			return P8_CAPP_UNIT1_ID;
>  	}
>
> +	/*
> +	 * POWER 9:
> +	 *   PEC0 (PHB0). Capp ID = CAPP0 (0b1100_0000)
> +	 *   PEC1 (PHB1 - PHB2). No capi mode
> +	 *   PEC2 (PHB3 - PHB4 - PHB5): Capi mode on PHB3 only. Capp ID = CAPP1 (0b1110_0000)
> +	 */
> +	if (cxl_is_power9()) {
> +		if (phb_index == 0)
> +			return P9_CAPP_UNIT0_ID;
> +
> +		if (phb_index == 3)
> +			return P9_CAPP_UNIT1_ID;
> +	}
> +
>  	return 0;
>  }
>
> -static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id)
> +static int calc_capp_routing(struct pci_dev *dev, u64 *chipid,
> +			     u32 *phb_index, u64 *capp_unit_id)
>  {
>  	struct device_node *np;
>  	const __be32 *prop;
> @@ -367,8 +388,16 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
>  		np = of_get_next_parent(np);
>  	if (!np)
>  		return -ENODEV;
> +
>  	*chipid = be32_to_cpup(prop);
> -	*capp_unit_id = get_capp_unit_id(np);
> +
> +	*phb_index = get_phb_index(np);
> +	if (*phb_index == -ENODEV) {
> +		pr_err("cxl: invalid phb index\n");
> +		return -ENODEV;
> +	}
> +
> +	*capp_unit_id = get_capp_unit_id(np, *phb_index);
>  	of_node_put(np);
>  	if (!*capp_unit_id) {
>  		pr_err("cxl: invalid capp unit id\n");
> @@ -378,14 +407,97 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
>  	return 0;
>  }
>
> +static int init_implementation_adapter_regs_psl9(struct cxl *adapter, struct pci_dev *dev)
> +{
> +	u64 xsl_dsnctl, psl_fircntl;
> +	u64 chipid;
> +	u32 phb_index;
> +	u64 capp_unit_id;
> +	int rc;
> +
> +	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
> +	if (rc)
> +		return rc;
> +
> +	/* CAPI Identifier bits [0:7]
> +	 * bit 61:60 MSI bits --> 0
> +	 * bit 59 TVT selector --> 0
> +	 */
> +	/* Tell XSL where to route data to.
> +	 * The field chipid should match the PHB CAPI_CMPM register
> +	 */
> +	xsl_dsnctl = ((u64)0x2 << (63-7)); /* Bit 57 */
> +	xsl_dsnctl |= (capp_unit_id << (63-15));
> +
> +	/* nMMU_ID Defaults to: b’000001001’*/
> +	xsl_dsnctl |= ((u64)0x09 << (63-28));
> +
> +	if (cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> +		/* Used to identify CAPI packets which should be sorted into
> +		 * the Non-Blocking queues by the PHB. This field should match
> +		 * the PHB PBL_NBW_CMPM register
> +		 * nbwind=0x03, bits [57:58], must include capi indicator.
> +		 * Not supported on P9 DD1.
> +		 */
> +		xsl_dsnctl |= ((u64)0x03 << (63-47));
> +
> +		/* Upper 16b address bits of ASB_Notify messages sent to the
> +		 * system. Need to match the PHB’s ASN Compare/Mask Register.
> +		 * Not supported on P9 DD1.
> +		 */
> +		xsl_dsnctl |= ((u64)0x04 << (63-55));
> +	}
> +
> +	cxl_p1_write(adapter, CXL_XSL9_DSNCTL, xsl_dsnctl);
> +
> +	/* Set fir_cntl to recommended value for production env */
> +	psl_fircntl = (0x2ULL << (63-3)); /* ce_report */
> +	psl_fircntl |= (0x1ULL << (63-6)); /* FIR_report */
> +	psl_fircntl |= 0x1ULL; /* ce_thresh */
> +	cxl_p1_write(adapter, CXL_PSL9_FIR_CNTL, psl_fircntl);
> +
> +	/* vccredits=0x1  pcklat=0x4 */
> +	cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0000000000001810ULL);
> +
> +	/* For debugging with trace arrays.
> +	 * Configure RX trace 0 segmented mode.
> +	 * Configure CT trace 0 segmented mode.
> +	 * Configure LA0 trace 0 segmented mode.
> +	 * Configure LA1 trace 0 segmented mode.
> +	 */
> +	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000000ULL);
> +	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000003ULL);
> +	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000005ULL);
> +	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000006ULL);
> +
> +	/* A response to an ASB_Notify request is returned by the
> +	 * system as an MMIO write to the address defined in
> +	 * the PSL_TNR_ADDR register
> +	 */
> +	/* PSL_TNR_ADDR */
> +
> +	/* NORST */
> +	cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x8000000000000000ULL);
> +
> +	/* allocate the apc machines */
> +	cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x40000003FFFF0000ULL);
> +
> +	/* Disable vc dd1 fix */
> +	if ((cxl_is_power9() && cpu_has_feature(CPU_FTR_POWER9_DD1)))
> +		cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0400000000000001ULL);
> +
> +	return 0;
> +}
> +
>  static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
>  {
>  	u64 psl_dsnctl, psl_fircntl;
>  	u64 chipid;
> +	u32 phb_index;
>  	u64 capp_unit_id;
>  	int rc;
>
> -	rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
> +	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
>  	if (rc)
>  		return rc;
>
> @@ -414,10 +526,11 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
>  {
>  	u64 xsl_dsnctl;
>  	u64 chipid;
> +	u32 phb_index;
>  	u64 capp_unit_id;
>  	int rc;
>
> -	rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
> +	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
>  	if (rc)
>  		return rc;
>
> @@ -435,6 +548,12 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
>  /* For the PSL this is a multiple for 0 < n <= 7: */
>  #define PSL_2048_250MHZ_CYCLES 1
>
> +static void write_timebase_ctrl_psl9(struct cxl *adapter)
> +{
> +	cxl_p1_write(adapter, CXL_PSL9_TB_CTLSTAT,
> +		     TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
> +}
> +
>  static void write_timebase_ctrl_psl8(struct cxl *adapter)
>  {
>  	cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
> @@ -456,6 +575,11 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
>  		     TBSYNC_CNT(XSL_4000_CLOCKS));
>  }
>
> +static u64 timebase_read_psl9(struct cxl *adapter)
> +{
> +	return cxl_p1_read(adapter, CXL_PSL9_Timebase);
> +}
> +
>  static u64 timebase_read_psl8(struct cxl *adapter)
>  {
>  	return cxl_p1_read(adapter, CXL_PSL_Timebase);
> @@ -514,6 +638,11 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
>  	return;
>  }
>
> +static int init_implementation_afu_regs_psl9(struct cxl_afu *afu)
> +{
> +	return 0;
> +}
> +
>  static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
>  {
>  	/* read/write masks for this slice */
> @@ -612,7 +741,7 @@ static int setup_cxl_bars(struct pci_dev *dev)
>  	/*
>  	 * BAR 4/5 has a special meaning for CXL and must be programmed with a
>  	 * special value corresponding to the CXL protocol address range.
> -	 * For POWER 8 that means bits 48:49 must be set to 10
> +	 * For POWER 8/9 that means bits 48:49 must be set to 10
>  	 */
>  	pci_write_config_dword(dev, PCI_BASE_ADDRESS_4, 0x00000000);
>  	pci_write_config_dword(dev, PCI_BASE_ADDRESS_5, 0x00020000);
> @@ -997,6 +1126,52 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
>  	return 0;
>  }
>
> +static int sanitise_afu_regs_psl9(struct cxl_afu *afu)
> +{
> +	u64 reg;
> +
> +	/*
> +	 * Clear out any regs that contain either an IVTE or address or may be
> +	 * waiting on an acknowledgment to try to be a bit safer as we bring
> +	 * it online
> +	 */
> +	reg = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
> +	if ((reg & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) {
> +		dev_warn(&afu->dev, "WARNING: AFU was not disabled: %#016llx\n", reg);
> +		if (cxl_ops->afu_reset(afu))
> +			return -EIO;
> +		if (cxl_afu_disable(afu))
> +			return -EIO;
> +		if (cxl_psl_purge(afu))
> +			return -EIO;
> +	}
> +	cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0x0000000000000000);
> +	cxl_p1n_write(afu, CXL_PSL_AMBAR_An, 0x0000000000000000);
> +	reg = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
> +	if (reg) {
> +		dev_warn(&afu->dev, "AFU had pending DSISR: %#016llx\n", reg);
> +		if (reg & CXL_PSL9_DSISR_An_TF)
> +			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> +		else
> +			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
> +	}
> +	if (afu->adapter->native->sl_ops->register_serr_irq) {
> +		reg = cxl_p1n_read(afu, CXL_PSL_SERR_An);
> +		if (reg) {
> +			if (reg & ~0x000000007fffffff)
> +				dev_warn(&afu->dev, "AFU had pending SERR: %#016llx\n", reg);
> +			cxl_p1n_write(afu, CXL_PSL_SERR_An, reg & ~0xffff);
> +		}
> +	}
> +	reg = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
> +	if (reg) {
> +		dev_warn(&afu->dev, "AFU had pending error status: %#016llx\n", reg);
> +		cxl_p2n_write(afu, CXL_PSL_ErrStat_An, reg);
> +	}
> +
> +	return 0;
> +}
> +
>  static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
>  {
>  	u64 reg;
> @@ -1254,10 +1429,10 @@ int cxl_pci_reset(struct cxl *adapter)
>
>  	/*
>  	 * The adapter is about to be reset, so ignore errors.
> -	 * Not supported on P9 DD1 but don't forget to enable it
> -	 * on P9 DD2
> +	 * Not supported on P9 DD1
>  	 */
> -	if (cxl_is_power8())
> +	if ((cxl_is_power8()) ||
> +	    ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
>  		cxl_data_cache_flush(adapter);
>
>  	/* pcie_warm_reset requests a fundamental pci reset which includes a
> @@ -1393,6 +1568,9 @@ static bool cxl_compatible_caia_version(struct cxl *adapter)
>  	if (cxl_is_power8() && (adapter->caia_major == 1))
>  		return true;
>
> +	if (cxl_is_power9() && (adapter->caia_major == 2))
> +		return true;
> +
>  	return false;
>  }
>
> @@ -1460,8 +1638,12 @@ static int sanitise_adapter_regs(struct cxl *adapter)
>  	/* Clear PSL tberror bit by writing 1 to it */
>  	cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror);
>
> -	if (adapter->native->sl_ops->invalidate_all)
> +	if (adapter->native->sl_ops->invalidate_all) {
> +		/* do not invalidate ERAT entries when not reloading on PERST */
> +		if (cxl_is_power9() && (adapter->perst_loads_image))
> +			return 0;
>  		rc = adapter->native->sl_ops->invalidate_all(adapter);
> +	}
>
>  	return rc;
>  }
> @@ -1546,6 +1728,30 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
>  	pci_disable_device(pdev);
>  }
>
> +static const struct cxl_service_layer_ops psl9_ops = {
> +	.adapter_regs_init = init_implementation_adapter_regs_psl9,
> +	.invalidate_all = cxl_invalidate_all_psl9,
> +	.afu_regs_init = init_implementation_afu_regs_psl9,
> +	.sanitise_afu_regs = sanitise_afu_regs_psl9,
> +	.register_serr_irq = cxl_native_register_serr_irq,
> +	.release_serr_irq = cxl_native_release_serr_irq,
> +	.handle_interrupt = cxl_irq_psl9,
> +	.fail_irq = cxl_fail_irq_psl,
> +	.activate_dedicated_process = cxl_activate_dedicated_process_psl9,
> +	.attach_afu_directed = cxl_attach_afu_directed_psl9,
> +	.attach_dedicated_process = cxl_attach_dedicated_process_psl9,
> +	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl9,
> +	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl9,
> +	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl9,
> +	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
> +	.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
> +	.debugfs_stop_trace = cxl_stop_trace_psl9,
> +	.write_timebase_ctrl = write_timebase_ctrl_psl9,
> +	.timebase_read = timebase_read_psl9,
> +	.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
> +	.needs_reset_before_disable = true,
> +};
> +
>  static const struct cxl_service_layer_ops psl8_ops = {
>  	.adapter_regs_init = init_implementation_adapter_regs_psl8,
>  	.invalidate_all = cxl_invalidate_all_psl8,
> @@ -1597,6 +1803,9 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
>  		if (cxl_is_power8()) {
>  			dev_info(&dev->dev, "Device uses a PSL8\n");
>  			adapter->native->sl_ops = &psl8_ops;
> +		} else {
> +			dev_info(&dev->dev, "Device uses a PSL9\n");
> +			adapter->native->sl_ops = &psl9_ops;
>  		}
>  	}
>  }
> @@ -1667,8 +1876,12 @@ static void cxl_pci_remove_adapter(struct cxl *adapter)
>  	cxl_sysfs_adapter_remove(adapter);
>  	cxl_debugfs_adapter_remove(adapter);
>
> -	/* Flush adapter datacache as its about to be removed */
> -	cxl_data_cache_flush(adapter);
> +	/* Flush adapter datacache as its about to be removed.
> +	 * Not supported on P9 DD1
> +	 */
> +	if ((cxl_is_power8()) ||
> +	    ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
> +		cxl_data_cache_flush(adapter);
>
>  	cxl_deconfigure_adapter(adapter);
>
> @@ -1752,6 +1965,11 @@ static int cxl_probe(struct pci_dev *dev, const struct pci_device_id *id)
>  		return -ENODEV;
>  	}
>
> +	if (cxl_is_power9() && !radix_enabled()) {
> +		dev_info(&dev->dev, "Only Radix mode supported\n");
> +		return -ENODEV;
> +	}
> +
>  	if (cxl_verbose)
>  		dump_cxl_config_space(dev);
>
> diff --git a/drivers/misc/cxl/trace.h b/drivers/misc/cxl/trace.h
> index 751d611..b8e300a 100644
> --- a/drivers/misc/cxl/trace.h
> +++ b/drivers/misc/cxl/trace.h
> @@ -17,6 +17,15 @@
>
>  #include "cxl.h"
>
> +#define dsisr_psl9_flags(flags) \
> +	__print_flags(flags, "|", \
> +		{ CXL_PSL9_DSISR_An_CO_MASK,	"FR" }, \
> +		{ CXL_PSL9_DSISR_An_TF,		"TF" }, \
> +		{ CXL_PSL9_DSISR_An_PE,		"PE" }, \
> +		{ CXL_PSL9_DSISR_An_AE,		"AE" }, \
> +		{ CXL_PSL9_DSISR_An_OC,		"OC" }, \
> +		{ CXL_PSL9_DSISR_An_S,		"S" })
> +
>  #define DSISR_FLAGS \
>  	{ CXL_PSL_DSISR_An_DS,	"DS" }, \
>  	{ CXL_PSL_DSISR_An_DM,	"DM" }, \
> @@ -154,6 +163,40 @@ TRACE_EVENT(cxl_afu_irq,
>  	)
>  );
>
> +TRACE_EVENT(cxl_psl9_irq,
> +	TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
> +
> +	TP_ARGS(ctx, irq, dsisr, dar),
> +
> +	TP_STRUCT__entry(
> +		__field(u8, card)
> +		__field(u8, afu)
> +		__field(u16, pe)
> +		__field(int, irq)
> +		__field(u64, dsisr)
> +		__field(u64, dar)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->card = ctx->afu->adapter->adapter_num;
> +		__entry->afu = ctx->afu->slice;
> +		__entry->pe = ctx->pe;
> +		__entry->irq = irq;
> +		__entry->dsisr = dsisr;
> +		__entry->dar = dar;
> +	),
> +
> +	TP_printk("afu%i.%i pe=%i irq=%i dsisr=0x%016llx dsisr=%s dar=0x%016llx",
> +		__entry->card,
> +		__entry->afu,
> +		__entry->pe,
> +		__entry->irq,
> +		__entry->dsisr,
> +		dsisr_psl9_flags(__entry->dsisr),
> +		__entry->dar
> +	)
> +);
> +
>  TRACE_EVENT(cxl_psl_irq,
>  	TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
>


* Re: [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-11 14:41   ` Frederic Barrat
@ 2017-04-12  2:11     ` Michael Ellerman
  2017-04-12  8:30       ` christophe lombard
  0 siblings, 1 reply; 30+ messages in thread
From: Michael Ellerman @ 2017-04-12  2:11 UTC (permalink / raw)
  To: Frederic Barrat, Christophe Lombard, linuxppc-dev, imunsie,
	andrew.donnellan

Frederic Barrat <fbarrat@linux.vnet.ibm.com> writes:

> On 07/04/2017 at 16:11, Christophe Lombard wrote:
>> The new Coherent Accelerator Interface Architecture, level 2, for the
>> IBM POWER9 brings new content and features:
>> - POWER9 Service Layer
>> - Registers
>> - Radix mode
>> - Process element entry
>> - Dedicated-Shared Process Programming Model
>> - Translation Fault Handling
>> - CAPP
>> - Memory Context ID
>>     If a valid mm_struct is found the memory context id is used for each
>>     transaction associated with the process handle. The PSL uses the
>>     context ID to find the corresponding process element.
>>
>> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
>> ---
>
>
> I'm ok with the code. However checkpatch is complaining about a
> tab/space error in native.c

I already fixed it up when I applied them (and a bunch of other things).

> If you have a quick respin, I also have a comment below about the
> documentation.

So please send me an incremental patch to update the doco and I'll
squash it before merging the series.

cheers


* Re: [PATCH V4 6/7] cxl: Isolate few psl8 specific calls
  2017-04-10 17:13   ` Frederic Barrat
@ 2017-04-12  2:13     ` Michael Ellerman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Ellerman @ 2017-04-12  2:13 UTC (permalink / raw)
  To: Frederic Barrat, Christophe Lombard, linuxppc-dev, imunsie,
	andrew.donnellan

Frederic Barrat <fbarrat@linux.vnet.ibm.com> writes:

> On 07/04/2017 at 16:11, Christophe Lombard wrote:
>> Point out the specific Coherent Accelerator Interface Architecture,
>> level 1, registers.
>> Code and functions specific to PSL8 (CAIA1) must be framed.
>>
>> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
>> ---
>
> There are a few changes in native.c which are about splitting long
> strings, but that's minor. And the rest looks ok.

It is minor, so I fixed it up when applying. But in future please don't
split long strings, it makes them harder to grep for.
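
A small illustration of the point, using the message that already appears in
native.c in this series (snippet only, not new driver code): keeping the
user-visible text in a single string literal lets a grep for the message find
it, while splitting it into fragments defeats that.

/* greppable: the whole message is one string literal */
dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);

/* harder to grep: the message is split across several string fragments */
dev_notice(&afu->dev, "PSL purge terminating "
		      "pending translation, "
		      "DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);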

cheers


* Re: [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-07 14:11 ` [PATCH V4 7/7] cxl: Add psl9 specific code Christophe Lombard
  2017-04-11 14:41   ` Frederic Barrat
@ 2017-04-12  7:52   ` Andrew Donnellan
  2017-04-12 11:57     ` Frederic Barrat
  2017-04-12 14:34   ` [PATCH V4 7/7 remix] " Frederic Barrat
  2 siblings, 1 reply; 30+ messages in thread
From: Andrew Donnellan @ 2017-04-12  7:52 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie

On 08/04/17 00:11, Christophe Lombard wrote:
> +static u32 get_phb_index(struct device_node *np)
>  {
>  	u32 phb_index;
>
>  	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
> -		return 0;
> +		return -ENODEV;

Function is unsigned.
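
For illustration only, a minimal sketch of one way to avoid storing -ENODEV in
an unsigned value (hypothetical, not necessarily what gets merged): report
failure through a signed return code and pass the index back by pointer.

/* Sketch only: failure is reported through the signed return value,
 * so the caller checks the return code instead of comparing a u32
 * against -ENODEV.
 */
static int get_phb_index(struct device_node *np, u32 *phb_index)
{
	if (of_property_read_u32(np, "ibm,phb-index", phb_index))
		return -ENODEV;
	return 0;
}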

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited


* Re: [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-12  2:11     ` Michael Ellerman
@ 2017-04-12  8:30       ` christophe lombard
  2017-04-12 11:47         ` Michael Ellerman
  0 siblings, 1 reply; 30+ messages in thread
From: christophe lombard @ 2017-04-12  8:30 UTC (permalink / raw)
  To: Michael Ellerman, Frederic Barrat, linuxppc-dev, imunsie,
	andrew.donnellan

On 12/04/2017 at 04:11, Michael Ellerman wrote:
> Frederic Barrat <fbarrat@linux.vnet.ibm.com> writes:
>
>> On 07/04/2017 at 16:11, Christophe Lombard wrote:
>>> The new Coherent Accelerator Interface Architecture, level 2, for the
>>> IBM POWER9 brings new content and features:
>>> - POWER9 Service Layer
>>> - Registers
>>> - Radix mode
>>> - Process element entry
>>> - Dedicated-Shared Process Programming Model
>>> - Translation Fault Handling
>>> - CAPP
>>> - Memory Context ID
>>>      If a valid mm_struct is found the memory context id is used for each
>>>      transaction associated with the process handle. The PSL uses the
>>>      context ID to find the corresponding process element.
>>>
>>> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
>>> ---
>>
>> I'm ok with the code. However checkpatch is complaining about a
>> tab/space error in native.c
> I already fixed it up when I applied them (and a bunch of other things).
>
>> If you have a quick respin, I also have a comment below about the
>> documentation.
> So please send me an incremental patch to update the doco and I'll
> squash it before merging the series.
>
> cheers
>
Hi,

Here is a new patch which updates the documentation based
on the complete PATCH V4 7/7.
Let me know if it suits you.
Thanks


Index: capi2_linux_prepare_patch_V4/Documentation/powerpc/cxl.txt
===================================================================
--- capi2_linux_prepare_patch_V4.orig/Documentation/powerpc/cxl.txt
+++ capi2_linux_prepare_patch_V4/Documentation/powerpc/cxl.txt
@@ -62,6 +62,7 @@ Hardware overview
      POWER8 <-----> PSL Version 8 is compliant to the CAIA Version 1.0.
      POWER9 <-----> PSL Version 9 is compliant to the CAIA Version 2.0.
      This PSL Version 9 provides new features as:
+    * Interaction with the nest MMU which resides within each P9 chip.
      * Native DMA support.
      * Supports sending ASB_Notify messages for host thread wakeup.
      * Supports Atomic operations.


* Re: [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-12  8:30       ` christophe lombard
@ 2017-04-12 11:47         ` Michael Ellerman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Ellerman @ 2017-04-12 11:47 UTC (permalink / raw)
  To: christophe lombard, Frederic Barrat, linuxppc-dev, imunsie,
	andrew.donnellan

christophe lombard <clombard@linux.vnet.ibm.com> writes:
> On 12/04/2017 at 04:11, Michael Ellerman wrote:
> Hi,
>
> Here is a new patch which updates the documentation based
> on the complete PATCH V4 7/7.
> Let me know if it suits you.

Fine by me, I'll wait for Fred's ack before I merge it all.

> Index: capi2_linux_prepare_patch_V4/Documentation/powerpc/cxl.txt
> ===================================================================
> --- capi2_linux_prepare_patch_V4.orig/Documentation/powerpc/cxl.txt
> +++ capi2_linux_prepare_patch_V4/Documentation/powerpc/cxl.txt
> @@ -62,6 +62,7 @@ Hardware overview
>       POWER8 <-----> PSL Version 8 is compliant to the CAIA Version 1.0.
>       POWER9 <-----> PSL Version 9 is compliant to the CAIA Version 2.0.
>       This PSL Version 9 provides new features as:
> +    * Interaction with the nest MMU which resides within each P9 chip.
>       * Native DMA support.
>       * Supports sending ASB_Notify messages for host thread wakeup.
>       * Supports Atomic operations.

The patch didn't actually apply, the whitespace is messed up, but I fixed
it up.

cheers


* Re: [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-12  7:52   ` Andrew Donnellan
@ 2017-04-12 11:57     ` Frederic Barrat
  2017-04-13 11:05       ` Michael Ellerman
  0 siblings, 1 reply; 30+ messages in thread
From: Frederic Barrat @ 2017-04-12 11:57 UTC (permalink / raw)
  To: Andrew Donnellan, Christophe Lombard, linuxppc-dev, imunsie



On 12/04/2017 at 09:52, Andrew Donnellan wrote:
> On 08/04/17 00:11, Christophe Lombard wrote:
>> +static u32 get_phb_index(struct device_node *np)
>>  {
>>      u32 phb_index;
>>
>>      if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
>> -        return 0;
>> +        return -ENODEV;
>
> Function is unsigned.
>

[Christophe is off till the end of the week, so I'm following up]

Michael: what's the easiest for you at this point? Shall I send a new 
version of the 7th patch with all changes consolidated (tab error + doc 
+ Andrew's remark above)?

   Fred


* [PATCH V4 7/7 remix] cxl: Add psl9 specific code
  2017-04-07 14:11 ` [PATCH V4 7/7] cxl: Add psl9 specific code Christophe Lombard
  2017-04-11 14:41   ` Frederic Barrat
  2017-04-12  7:52   ` Andrew Donnellan
@ 2017-04-12 14:34   ` Frederic Barrat
  2017-04-19  3:47     ` [V4,7/7,remix] " Michael Ellerman
  2 siblings, 1 reply; 30+ messages in thread
From: Frederic Barrat @ 2017-04-12 14:34 UTC (permalink / raw)
  To: andrew.donnellan, imunsie, linuxppc-dev

From: Christophe Lombard <clombard@linux.vnet.ibm.com>

The new Coherent Accelerator Interface Architecture, level 2, for the
IBM POWER9 brings new content and features:
- POWER9 Service Layer
- Registers
- Radix mode
- Process element entry
- Dedicated-Shared Process Programming Model
- Translation Fault Handling
- CAPP
- Memory Context ID
    If a valid mm_struct is found the memory context id is used for each
    transaction associated with the process handle. The PSL uses the
    context ID to find the corresponding process element.

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
---
 Documentation/powerpc/cxl.txt |  15 ++-
 drivers/misc/cxl/context.c    |  16 ++-
 drivers/misc/cxl/cxl.h        | 137 ++++++++++++++++++++---
 drivers/misc/cxl/debugfs.c    |  19 ++++
 drivers/misc/cxl/fault.c      |  64 +++++++----
 drivers/misc/cxl/guest.c      |   8 +-
 drivers/misc/cxl/irq.c        |  53 +++++++++
 drivers/misc/cxl/native.c     | 223 +++++++++++++++++++++++++++++++++++--
 drivers/misc/cxl/pci.c        | 251 +++++++++++++++++++++++++++++++++++++++---
 drivers/misc/cxl/trace.h      |  43 ++++++++
 10 files changed, 753 insertions(+), 76 deletions(-)

diff --git a/Documentation/powerpc/cxl.txt b/Documentation/powerpc/cxl.txt
index d5506ba0..c5e8d50 100644
--- a/Documentation/powerpc/cxl.txt
+++ b/Documentation/powerpc/cxl.txt
@@ -21,7 +21,7 @@ Introduction
 Hardware overview
 =================
 
-          POWER8               FPGA
+         POWER8/9             FPGA
        +----------+        +---------+
        |          |        |         |
        |   CPU    |        |   AFU   |
@@ -34,7 +34,7 @@ Hardware overview
        |   | CAPP |<------>|         |
        +---+------+  PCIE  +---------+
 
-    The POWER8 chip has a Coherently Attached Processor Proxy (CAPP)
+    The POWER8/9 chip has a Coherently Attached Processor Proxy (CAPP)
     unit which is part of the PCIe Host Bridge (PHB). This is managed
     by Linux by calls into OPAL. Linux doesn't directly program the
     CAPP.
@@ -59,6 +59,17 @@ Hardware overview
     the fault. The context to which this fault is serviced is based on
     who owns that acceleration function.
 
+    POWER8 <-----> PSL Version 8 is compliant to the CAIA Version 1.0.
+    POWER9 <-----> PSL Version 9 is compliant to the CAIA Version 2.0.
+    This PSL Version 9 provides new features such as:
+    * Interaction with the nest MMU on the P9 chip.
+    * Native DMA support.
+    * Supports sending ASB_Notify messages for host thread wakeup.
+    * Supports Atomic operations.
+    * ....
+
+    Cards with a PSL9 won't work on a POWER8 system and cards with a
+    PSL8 won't work on a POWER9 system.
 
 AFU Modes
 =========
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index ac2531e..45363be 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -188,12 +188,24 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
 	if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
 		if (start + len > ctx->afu->adapter->ps_size)
 			return -EINVAL;
+
+		if (cxl_is_psl9(ctx->afu)) {
+			/* make sure there is a valid problem state
+			 * area space for this AFU
+			 */
+			if (ctx->master && !ctx->afu->psa) {
+				pr_devel("AFU doesn't support mmio space\n");
+				return -EINVAL;
+			}
+
+			/* Can't mmap until the AFU is enabled */
+			if (!ctx->afu->enabled)
+				return -EBUSY;
+		}
 	} else {
 		if (start + len > ctx->psn_size)
 			return -EINVAL;
-	}
 
-	if (ctx->afu->current_mode != CXL_MODE_DEDICATED) {
 		/* make sure there is a valid per process space for this AFU */
 		if ((ctx->master && !ctx->afu->psa) || (!ctx->afu->pp_psa)) {
 			pr_devel("AFU doesn't support mmio space\n");
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 82335c0..df40e6e 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -63,7 +63,7 @@ typedef struct {
 /* Memory maps. Ref CXL Appendix A */
 
 /* PSL Privilege 1 Memory Map */
-/* Configuration and Control area */
+/* Configuration and Control area - CAIA 1&2 */
 static const cxl_p1_reg_t CXL_PSL_CtxTime = {0x0000};
 static const cxl_p1_reg_t CXL_PSL_ErrIVTE = {0x0008};
 static const cxl_p1_reg_t CXL_PSL_KEY1    = {0x0010};
@@ -98,11 +98,29 @@ static const cxl_p1_reg_t CXL_XSL_Timebase  = {0x0100};
 static const cxl_p1_reg_t CXL_XSL_TB_CTLSTAT = {0x0108};
 static const cxl_p1_reg_t CXL_XSL_FEC       = {0x0158};
 static const cxl_p1_reg_t CXL_XSL_DSNCTL    = {0x0168};
+/* PSL registers - CAIA 2 */
+static const cxl_p1_reg_t CXL_PSL9_CONTROL  = {0x0020};
+static const cxl_p1_reg_t CXL_XSL9_DSNCTL   = {0x0168};
+static const cxl_p1_reg_t CXL_PSL9_FIR1     = {0x0300};
+static const cxl_p1_reg_t CXL_PSL9_FIR2     = {0x0308};
+static const cxl_p1_reg_t CXL_PSL9_Timebase = {0x0310};
+static const cxl_p1_reg_t CXL_PSL9_DEBUG    = {0x0320};
+static const cxl_p1_reg_t CXL_PSL9_FIR_CNTL = {0x0348};
+static const cxl_p1_reg_t CXL_PSL9_DSNDCTL  = {0x0350};
+static const cxl_p1_reg_t CXL_PSL9_TB_CTLSTAT = {0x0340};
+static const cxl_p1_reg_t CXL_PSL9_TRACECFG = {0x0368};
+static const cxl_p1_reg_t CXL_PSL9_APCDEDALLOC = {0x0378};
+static const cxl_p1_reg_t CXL_PSL9_APCDEDTYPE = {0x0380};
+static const cxl_p1_reg_t CXL_PSL9_TNR_ADDR = {0x0388};
+static const cxl_p1_reg_t CXL_PSL9_GP_CT = {0x0398};
+static const cxl_p1_reg_t CXL_XSL9_IERAT = {0x0588};
+static const cxl_p1_reg_t CXL_XSL9_ILPP  = {0x0590};
+
 /* 0x7F00:7FFF Reserved PCIe MSI-X Pending Bit Array area */
 /* 0x8000:FFFF Reserved PCIe MSI-X Table Area */
 
 /* PSL Slice Privilege 1 Memory Map */
-/* Configuration Area */
+/* Configuration Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_SR_An          = {0x00};
 static const cxl_p1n_reg_t CXL_PSL_LPID_An        = {0x08};
 static const cxl_p1n_reg_t CXL_PSL_AMBAR_An       = {0x10};
@@ -111,17 +129,18 @@ static const cxl_p1n_reg_t CXL_PSL_ID_An          = {0x20};
 static const cxl_p1n_reg_t CXL_PSL_SERR_An        = {0x28};
 /* Memory Management and Lookaside Buffer Management - CAIA 1*/
 static const cxl_p1n_reg_t CXL_PSL_SDR_An         = {0x30};
+/* Memory Management and Lookaside Buffer Management - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_AMOR_An        = {0x38};
-/* Pointer Area */
+/* Pointer Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_HAURP_An           = {0x80};
 static const cxl_p1n_reg_t CXL_PSL_SPAP_An        = {0x88};
 static const cxl_p1n_reg_t CXL_PSL_LLCMD_An       = {0x90};
-/* Control Area */
+/* Control Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_SCNTL_An       = {0xA0};
 static const cxl_p1n_reg_t CXL_PSL_CtxTime_An     = {0xA8};
 static const cxl_p1n_reg_t CXL_PSL_IVTE_Offset_An = {0xB0};
 static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An  = {0xB8};
-/* 0xC0:FF Implementation Dependent Area */
+/* 0xC0:FF Implementation Dependent Area - CAIA 1&2 */
 static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An   = {0xC0};
 static const cxl_p1n_reg_t CXL_AFU_DEBUG_An       = {0xC8};
 /* 0xC0:FF Implementation Dependent Area - CAIA 1 */
@@ -131,7 +150,7 @@ static const cxl_p1n_reg_t CXL_PSL_RXCTL_A        = {0xE0};
 static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE    = {0xE8};
 
 /* PSL Slice Privilege 2 Memory Map */
-/* Configuration and Control Area */
+/* Configuration and Control Area - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
 static const cxl_p2n_reg_t CXL_CSRP_An        = {0x008};
 /* Configuration and Control Area - CAIA 1 */
@@ -145,17 +164,17 @@ static const cxl_p2n_reg_t CXL_PSL_AMR_An     = {0x030};
 static const cxl_p2n_reg_t CXL_SLBIE_An       = {0x040};
 static const cxl_p2n_reg_t CXL_SLBIA_An       = {0x048};
 static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
-/* Interrupt Registers */
+/* Interrupt Registers - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_PSL_DSISR_An   = {0x060};
 static const cxl_p2n_reg_t CXL_PSL_DAR_An     = {0x068};
 static const cxl_p2n_reg_t CXL_PSL_DSR_An     = {0x070};
 static const cxl_p2n_reg_t CXL_PSL_TFC_An     = {0x078};
 static const cxl_p2n_reg_t CXL_PSL_PEHandle_An = {0x080};
 static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088};
-/* AFU Registers */
+/* AFU Registers - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_AFU_Cntl_An    = {0x090};
 static const cxl_p2n_reg_t CXL_AFU_ERR_An     = {0x098};
-/* Work Element Descriptor */
+/* Work Element Descriptor - CAIA 1&2 */
 static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 /* 0x0C0:FFF Implementation Dependent Area */
 
@@ -182,6 +201,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 #define CXL_PSL_SR_An_SF  MSR_SF            /* 64bit */
 #define CXL_PSL_SR_An_TA  (1ull << (63-1))  /* Tags active,   GA1: 0 */
 #define CXL_PSL_SR_An_HV  MSR_HV            /* Hypervisor,    GA1: 0 */
+#define CXL_PSL_SR_An_XLAT_hpt (0ull << (63-6))/* Hashed page table (HPT) mode */
+#define CXL_PSL_SR_An_XLAT_roh (2ull << (63-6))/* Radix on HPT mode */
+#define CXL_PSL_SR_An_XLAT_ror (3ull << (63-6))/* Radix on Radix mode */
+#define CXL_PSL_SR_An_BOT (1ull << (63-10)) /* Use the in-memory segment table */
 #define CXL_PSL_SR_An_PR  MSR_PR            /* Problem state, GA1: 1 */
 #define CXL_PSL_SR_An_ISL (1ull << (63-53)) /* Ignore Segment Large Page */
 #define CXL_PSL_SR_An_TC  (1ull << (63-54)) /* Page Table secondary hash */
@@ -298,12 +321,38 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 #define CXL_PSL_DSISR_An_S  DSISR_ISSTORE     /* Access was afu_wr or afu_zero */
 #define CXL_PSL_DSISR_An_K  DSISR_KEYFAULT    /* Access not permitted by virtual page class key protection */
 
+/****** CXL_PSL_DSISR_An - CAIA 2 ****************************************************/
+#define CXL_PSL9_DSISR_An_TF (1ull << (63-3))  /* Translation fault */
+#define CXL_PSL9_DSISR_An_PE (1ull << (63-4))  /* PSL Error (implementation specific) */
+#define CXL_PSL9_DSISR_An_AE (1ull << (63-5))  /* AFU Error */
+#define CXL_PSL9_DSISR_An_OC (1ull << (63-6))  /* OS Context Warning */
+#define CXL_PSL9_DSISR_An_S (1ull << (63-38))  /* TF for a write operation */
+#define CXL_PSL9_DSISR_PENDING (CXL_PSL9_DSISR_An_TF | CXL_PSL9_DSISR_An_PE | CXL_PSL9_DSISR_An_AE | CXL_PSL9_DSISR_An_OC)
+/* NOTE: Bits 56:63 (Checkout Response Status) are valid when DSISR_An[TF] = 1
+ * Status (0:7) Encoding
+ */
+#define CXL_PSL9_DSISR_An_CO_MASK 0x00000000000000ffULL
+#define CXL_PSL9_DSISR_An_SF      0x0000000000000080ULL  /* Segment Fault                        0b10000000 */
+#define CXL_PSL9_DSISR_An_PF_SLR  0x0000000000000088ULL  /* PTE not found (Single Level Radix)   0b10001000 */
+#define CXL_PSL9_DSISR_An_PF_RGC  0x000000000000008CULL  /* PTE not found (Radix Guest (child))  0b10001100 */
+#define CXL_PSL9_DSISR_An_PF_RGP  0x0000000000000090ULL  /* PTE not found (Radix Guest (parent)) 0b10010000 */
+#define CXL_PSL9_DSISR_An_PF_HRH  0x0000000000000094ULL  /* PTE not found (HPT/Radix Host)       0b10010100 */
+#define CXL_PSL9_DSISR_An_PF_STEG 0x000000000000009CULL  /* PTE not found (STEG VA)              0b10011100 */
+
 /****** CXL_PSL_TFC_An ******************************************************/
 #define CXL_PSL_TFC_An_A  (1ull << (63-28)) /* Acknowledge non-translation fault */
 #define CXL_PSL_TFC_An_C  (1ull << (63-29)) /* Continue (abort transaction) */
 #define CXL_PSL_TFC_An_AE (1ull << (63-30)) /* Restart PSL with address error */
 #define CXL_PSL_TFC_An_R  (1ull << (63-31)) /* Restart PSL transaction */
 
+/****** CXL_XSL9_IERAT_ERAT - CAIA 2 **********************************/
+#define CXL_XSL9_IERAT_MLPID    (1ull << (63-0))  /* Match LPID */
+#define CXL_XSL9_IERAT_MPID     (1ull << (63-1))  /* Match PID */
+#define CXL_XSL9_IERAT_PRS      (1ull << (63-4))  /* PRS bit for Radix invalidations */
+#define CXL_XSL9_IERAT_INVR     (1ull << (63-3))  /* Invalidate Radix */
+#define CXL_XSL9_IERAT_IALL     (1ull << (63-8))  /* Invalidate All */
+#define CXL_XSL9_IERAT_IINPROG  (1ull << (63-63)) /* Invalidate in progress */
+
 /* cxl_process_element->software_status */
 #define CXL_PE_SOFTWARE_STATE_V (1ul << (31 -  0)) /* Valid */
 #define CXL_PE_SOFTWARE_STATE_C (1ul << (31 - 29)) /* Complete */
@@ -654,25 +703,38 @@ int cxl_pci_reset(struct cxl *adapter);
 void cxl_pci_release_afu(struct device *dev);
 ssize_t cxl_pci_read_adapter_vpd(struct cxl *adapter, void *buf, size_t len);
 
-/* common == phyp + powernv */
+/* common == phyp + powernv - CAIA 1&2 */
 struct cxl_process_element_common {
 	__be32 tid;
 	__be32 pid;
 	__be64 csrp;
-	__be64 aurp0;
-	__be64 aurp1;
-	__be64 sstp0;
-	__be64 sstp1;
+	union {
+		struct {
+			__be64 aurp0;
+			__be64 aurp1;
+			__be64 sstp0;
+			__be64 sstp1;
+		} psl8;  /* CAIA 1 */
+		struct {
+			u8     reserved2[8];
+			u8     reserved3[8];
+			u8     reserved4[8];
+			u8     reserved5[8];
+		} psl9;  /* CAIA 2 */
+	} u;
 	__be64 amr;
-	u8     reserved3[4];
+	u8     reserved6[4];
 	__be64 wed;
 } __packed;
 
-/* just powernv */
+/* just powernv - CAIA 1&2 */
 struct cxl_process_element {
 	__be64 sr;
 	__be64 SPOffset;
-	__be64 sdr;
+	union {
+		__be64 sdr;          /* CAIA 1 */
+		u8     reserved1[8]; /* CAIA 2 */
+	} u;
 	__be64 haurp;
 	__be32 ctxtime;
 	__be16 ivte_offsets[4];
@@ -761,6 +823,16 @@ static inline bool cxl_is_power8(void)
 	return false;
 }
 
+static inline bool cxl_is_power9(void)
+{
+	/* intermediate solution */
+	if (!cxl_is_power8() &&
+	   (cpu_has_feature(CPU_FTRS_POWER9) ||
+	    cpu_has_feature(CPU_FTR_POWER9_DD1)))
+		return true;
+	return false;
+}
+
 static inline bool cxl_is_psl8(struct cxl_afu *afu)
 {
 	if (afu->adapter->caia_major == 1)
@@ -768,6 +840,13 @@ static inline bool cxl_is_psl8(struct cxl_afu *afu)
 	return false;
 }
 
+static inline bool cxl_is_psl9(struct cxl_afu *afu)
+{
+	if (afu->adapter->caia_major == 2)
+		return true;
+	return false;
+}
+
 ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
 				loff_t off, size_t count);
 
@@ -794,7 +873,6 @@ int cxl_update_properties(struct device_node *dn, struct property *new_prop);
 
 void cxl_remove_adapter_nr(struct cxl *adapter);
 
-int cxl_alloc_spa(struct cxl_afu *afu);
 void cxl_release_spa(struct cxl_afu *afu);
 
 dev_t cxl_get_dev(void);
@@ -832,9 +910,13 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
 void afu_release_irqs(struct cxl_context *ctx, void *cookie);
 void afu_irq_name_free(struct cxl_context *ctx);
 
+int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
 int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu);
 int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
+int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
 int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx);
 void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
 
 #ifdef CONFIG_DEBUG_FS
@@ -845,9 +927,12 @@ int cxl_debugfs_adapter_add(struct cxl *adapter);
 void cxl_debugfs_adapter_remove(struct cxl *adapter);
 int cxl_debugfs_afu_add(struct cxl_afu *afu);
 void cxl_debugfs_afu_remove(struct cxl_afu *afu);
+void cxl_stop_trace_psl9(struct cxl *cxl);
 void cxl_stop_trace_psl8(struct cxl *cxl);
+void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir);
 void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
 void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir);
 void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
 
 #else /* CONFIG_DEBUG_FS */
@@ -879,10 +964,19 @@ static inline void cxl_debugfs_afu_remove(struct cxl_afu *afu)
 {
 }
 
+static inline void cxl_stop_trace_psl9(struct cxl *cxl)
+{
+}
+
 static inline void cxl_stop_trace_psl8(struct cxl *cxl)
 {
 }
 
+static inline void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter,
+						    struct dentry *dir)
+{
+}
+
 static inline void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter,
 						    struct dentry *dir)
 {
@@ -893,6 +987,10 @@ static inline void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter,
 {
 }
 
+static inline void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
+{
+}
+
 static inline void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
 {
 }
@@ -938,7 +1036,9 @@ struct cxl_irq_info {
 };
 
 void cxl_assign_psn_space(struct cxl_context *ctx);
+int cxl_invalidate_all_psl9(struct cxl *adapter);
 int cxl_invalidate_all_psl8(struct cxl *adapter);
+irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
 irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
 irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
 int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
@@ -951,6 +1051,7 @@ int cxl_data_cache_flush(struct cxl *adapter);
 int cxl_afu_disable(struct cxl_afu *afu);
 int cxl_psl_purge(struct cxl_afu *afu);
 
+void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx);
 void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
 void cxl_native_err_irq_dump_regs(struct cxl *adapter);
 int cxl_pci_vphb_add(struct cxl_afu *afu);
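
The union added to cxl_process_element_common above is what keeps the
process element layout common to both CAIA versions: the CAIA 2 arm is
just reserved bytes of the same size as the CAIA 1 fields it replaces.
A minimal, self-contained sketch (simplified stand-in types, not the
driver's definitions) that checks this size invariant:

/* Build-time check that the psl8 and psl9 arms of the union occupy the
 * same 32 bytes, so the fields following the union (amr, wed, ...)
 * keep their offsets for both CAIA versions. Types are stand-ins.
 */
#include <stdint.h>

struct pe_common_psl8 {            /* CAIA 1 arm */
	uint64_t aurp0;
	uint64_t aurp1;
	uint64_t sstp0;
	uint64_t sstp1;
} __attribute__((packed));

struct pe_common_psl9 {            /* CAIA 2 arm */
	uint8_t reserved2[8];
	uint8_t reserved3[8];
	uint8_t reserved4[8];
	uint8_t reserved5[8];
} __attribute__((packed));

_Static_assert(sizeof(struct pe_common_psl8) == sizeof(struct pe_common_psl9),
	       "CAIA1 and CAIA2 union arms must be the same size");

int main(void) { return 0; }

Because both arms are 32 bytes, amr and wed stay at the same offsets
whether the entry is filled in by the PSL8 or the PSL9 attach path.
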
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 43a1a27..eae9d74 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -15,6 +15,12 @@
 
 static struct dentry *cxl_debugfs;
 
+void cxl_stop_trace_psl9(struct cxl *adapter)
+{
+	/* Stop the trace */
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x4480000000000000ULL);
+}
+
 void cxl_stop_trace_psl8(struct cxl *adapter)
 {
 	int slice;
@@ -53,6 +59,14 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
 					  (void __force *)value, &fops_io_x64);
 }
 
+void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir)
+{
+	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR1));
+	debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR2));
+	debugfs_create_io_x64("fir_cntl", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR_CNTL));
+	debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_TRACECFG));
+}
+
 void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
 {
 	debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
@@ -92,6 +106,11 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
 	debugfs_remove_recursive(adapter->debugfs);
 }
 
+void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
+{
+	debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
+}
+
 void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
 {
 	debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index e6f8f05..5344448 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -146,25 +146,26 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
 		return cxl_ack_ae(ctx);
 	}
 
-	/*
-	 * update_mmu_cache() will not have loaded the hash since current->trap
-	 * is not a 0x400 or 0x300, so just call hash_page_mm() here.
-	 */
-	access = _PAGE_PRESENT | _PAGE_READ;
-	if (dsisr & CXL_PSL_DSISR_An_S)
-		access |= _PAGE_WRITE;
-
-	access |= _PAGE_PRIVILEGED;
-	if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
-		access &= ~_PAGE_PRIVILEGED;
-
-	if (dsisr & DSISR_NOHPTE)
-		inv_flags |= HPTE_NOHPTE_UPDATE;
-
-	local_irq_save(flags);
-	hash_page_mm(mm, dar, access, 0x300, inv_flags);
-	local_irq_restore(flags);
-
+	if (!radix_enabled()) {
+		/*
+		 * update_mmu_cache() will not have loaded the hash since current->trap
+		 * is not a 0x400 or 0x300, so just call hash_page_mm() here.
+		 */
+		access = _PAGE_PRESENT | _PAGE_READ;
+		if (dsisr & CXL_PSL_DSISR_An_S)
+			access |= _PAGE_WRITE;
+
+		access |= _PAGE_PRIVILEGED;
+		if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
+			access &= ~_PAGE_PRIVILEGED;
+
+		if (dsisr & DSISR_NOHPTE)
+			inv_flags |= HPTE_NOHPTE_UPDATE;
+
+		local_irq_save(flags);
+		hash_page_mm(mm, dar, access, 0x300, inv_flags);
+		local_irq_restore(flags);
+	}
 	pr_devel("Page fault successfully handled for pe: %i!\n", ctx->pe);
 	cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_R, 0);
 }
@@ -184,7 +185,28 @@ static struct mm_struct *get_mem_context(struct cxl_context *ctx)
 	return ctx->mm;
 }
 
+static bool cxl_is_segment_miss(struct cxl_context *ctx, u64 dsisr)
+{
+	if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DS))
+		return true;
+
+	return false;
+}
+
+static bool cxl_is_page_fault(struct cxl_context *ctx, u64 dsisr)
+{
+	if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DM))
+		return true;
+
+	if ((cxl_is_psl9(ctx->afu)) &&
+	   ((dsisr & CXL_PSL9_DSISR_An_CO_MASK) &
+		(CXL_PSL9_DSISR_An_PF_SLR | CXL_PSL9_DSISR_An_PF_RGC |
+		 CXL_PSL9_DSISR_An_PF_RGP | CXL_PSL9_DSISR_An_PF_HRH |
+		 CXL_PSL9_DSISR_An_PF_STEG)))
+		return true;
 
+	return false;
+}
 
 void cxl_handle_fault(struct work_struct *fault_work)
 {
@@ -230,9 +252,9 @@ void cxl_handle_fault(struct work_struct *fault_work)
 		}
 	}
 
-	if (dsisr & CXL_PSL_DSISR_An_DS)
+	if (cxl_is_segment_miss(ctx, dsisr))
 		cxl_handle_segment_miss(ctx, mm, dar);
-	else if (dsisr & CXL_PSL_DSISR_An_DM)
+	else if (cxl_is_page_fault(ctx, dsisr))
 		cxl_handle_page_fault(ctx, mm, dsisr, dar);
 	else
 		WARN(1, "cxl_handle_fault has nothing to handle\n");
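
For PSL9 the page-fault decision above is taken from the checkout
response status in DSISR bits 56:63 rather than from the PSL8 DM/DS
bits. A small standalone sketch of that decode, with the constants
copied from the cxl.h hunk earlier in this patch and a sample DSISR
value chosen purely for illustration:

#include <stdio.h>
#include <stdint.h>

#define PSL9_DSISR_TF      (1ull << (63 - 3))    /* Translation fault */
#define PSL9_DSISR_CO_MASK 0x00000000000000ffull /* Checkout response status */
#define PSL9_DSISR_PF_SLR  0x88ull /* PTE not found (Single Level Radix)   */
#define PSL9_DSISR_PF_RGC  0x8Cull /* PTE not found (Radix Guest (child))  */
#define PSL9_DSISR_PF_RGP  0x90ull /* PTE not found (Radix Guest (parent)) */
#define PSL9_DSISR_PF_HRH  0x94ull /* PTE not found (HPT/Radix Host)       */
#define PSL9_DSISR_PF_STEG 0x9Cull /* PTE not found (STEG VA)              */

/* Mirrors the PSL9 branch of cxl_is_page_fault() above */
static int psl9_is_page_fault(uint64_t dsisr)
{
	return (dsisr & PSL9_DSISR_CO_MASK) &
		(PSL9_DSISR_PF_SLR | PSL9_DSISR_PF_RGC | PSL9_DSISR_PF_RGP |
		 PSL9_DSISR_PF_HRH | PSL9_DSISR_PF_STEG);
}

int main(void)
{
	/* Hypothetical DSISR: TF set, status = PTE not found (HPT/Radix Host) */
	uint64_t dsisr = PSL9_DSISR_TF | PSL9_DSISR_PF_HRH;

	if ((dsisr & PSL9_DSISR_TF) && psl9_is_page_fault(dsisr))
		printf("translation fault -> handled as a page fault\n");
	else
		printf("not a page fault for PSL9\n");
	return 0;
}
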
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index 3ad7381..f58b4b6c 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -551,13 +551,13 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
 	elem->common.tid    = cpu_to_be32(0); /* Unused */
 	elem->common.pid    = cpu_to_be32(pid);
 	elem->common.csrp   = cpu_to_be64(0); /* disable */
-	elem->common.aurp0  = cpu_to_be64(0); /* disable */
-	elem->common.aurp1  = cpu_to_be64(0); /* disable */
+	elem->common.u.psl8.aurp0  = cpu_to_be64(0); /* disable */
+	elem->common.u.psl8.aurp1  = cpu_to_be64(0); /* disable */
 
 	cxl_prefault(ctx, wed);
 
-	elem->common.sstp0  = cpu_to_be64(ctx->sstp0);
-	elem->common.sstp1  = cpu_to_be64(ctx->sstp1);
+	elem->common.u.psl8.sstp0  = cpu_to_be64(ctx->sstp0);
+	elem->common.u.psl8.sstp1  = cpu_to_be64(ctx->sstp1);
 
 	/*
 	 * Ensure we have at least one interrupt allocated to take faults for
diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
index fa9f8a2..1eb5168 100644
--- a/drivers/misc/cxl/irq.c
+++ b/drivers/misc/cxl/irq.c
@@ -34,6 +34,59 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
 	return IRQ_HANDLED;
 }
 
+irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
+{
+	u64 dsisr, dar;
+
+	dsisr = irq_info->dsisr;
+	dar = irq_info->dar;
+
+	trace_cxl_psl9_irq(ctx, irq, dsisr, dar);
+
+	pr_devel("CXL interrupt %i for afu pe: %i DSISR: %#llx DAR: %#llx\n", irq, ctx->pe, dsisr, dar);
+
+	if (dsisr & CXL_PSL9_DSISR_An_TF) {
+		pr_devel("CXL interrupt: Scheduling translation fault"
+			 " handling for later (pe: %i)\n", ctx->pe);
+		return schedule_cxl_fault(ctx, dsisr, dar);
+	}
+
+	if (dsisr & CXL_PSL9_DSISR_An_PE)
+		return cxl_ops->handle_psl_slice_error(ctx, dsisr,
+						irq_info->errstat);
+	if (dsisr & CXL_PSL9_DSISR_An_AE) {
+		pr_devel("CXL interrupt: AFU Error 0x%016llx\n", irq_info->afu_err);
+
+		if (ctx->pending_afu_err) {
+			/*
+			 * This shouldn't happen - the PSL treats these errors
+			 * as fatal and will have reset the AFU, so there's not
+			 * much point buffering multiple AFU errors.
+			 * OTOH if we DO ever see a storm of these come in it's
+			 * probably best that we log them somewhere:
+			 */
+			dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error "
+					    "undelivered to pe %i: 0x%016llx\n",
+					    ctx->pe, irq_info->afu_err);
+		} else {
+			spin_lock(&ctx->lock);
+			ctx->afu_err = irq_info->afu_err;
+			ctx->pending_afu_err = 1;
+			spin_unlock(&ctx->lock);
+
+			wake_up_all(&ctx->wq);
+		}
+
+		cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_A, 0);
+		return IRQ_HANDLED;
+	}
+	if (dsisr & CXL_PSL9_DSISR_An_OC)
+		pr_devel("CXL interrupt: OS Context Warning\n");
+
+	WARN(1, "Unhandled CXL PSL IRQ\n");
+	return IRQ_HANDLED;
+}
+
 irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
 {
 	u64 dsisr, dar;
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 0401e4dc..16666e3 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -120,6 +120,7 @@ int cxl_psl_purge(struct cxl_afu *afu)
 	u64 AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
 	u64 dsisr, dar;
 	u64 start, end;
+	u64 trans_fault = 0x0ULL;
 	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
 	int rc = 0;
 
@@ -127,6 +128,11 @@ int cxl_psl_purge(struct cxl_afu *afu)
 
 	pr_devel("PSL purge request\n");
 
+	if (cxl_is_psl8(afu))
+		trans_fault = CXL_PSL_DSISR_TRANS;
+	if (cxl_is_psl9(afu))
+		trans_fault = CXL_PSL9_DSISR_An_TF;
+
 	if (!cxl_ops->link_ok(afu->adapter, afu)) {
 		dev_warn(&afu->dev, "PSL Purge called with link down, ignoring\n");
 		rc = -EIO;
@@ -159,12 +165,12 @@ int cxl_psl_purge(struct cxl_afu *afu)
 				     "  PSL_DSISR: 0x%016llx\n",
 				     PSL_CNTL, dsisr);
 
-		if (dsisr & CXL_PSL_DSISR_TRANS) {
+		if (dsisr & trans_fault) {
 			dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
 			dev_notice(&afu->dev, "PSL purge terminating "
 					      "pending translation, "
 					      "DSISR: 0x%016llx, DAR: 0x%016llx\n",
-					       dsisr, dar);
+					      dsisr, dar);
 			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
 		} else if (dsisr) {
 			dev_notice(&afu->dev, "PSL purge acknowledging "
@@ -204,7 +210,7 @@ static int spa_max_procs(int spa_size)
 	return ((spa_size / 8) - 96) / 17;
 }
 
-int cxl_alloc_spa(struct cxl_afu *afu)
+static int cxl_alloc_spa(struct cxl_afu *afu, int mode)
 {
 	unsigned spa_size;
 
@@ -217,7 +223,8 @@ int cxl_alloc_spa(struct cxl_afu *afu)
 		if (spa_size > 0x100000) {
 			dev_warn(&afu->dev, "num_of_processes too large for the SPA, limiting to %i (0x%x)\n",
 					afu->native->spa_max_procs, afu->native->spa_size);
-			afu->num_procs = afu->native->spa_max_procs;
+			if (mode != CXL_MODE_DEDICATED)
+				afu->num_procs = afu->native->spa_max_procs;
 			break;
 		}
 
@@ -266,6 +273,35 @@ void cxl_release_spa(struct cxl_afu *afu)
 	}
 }
 
+/* Invalidation of all ERAT entries is no longer required by CAIA2. Use
+ * only for debug
+ */
+int cxl_invalidate_all_psl9(struct cxl *adapter)
+{
+	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
+	u64 ierat;
+
+	pr_devel("CXL adapter - invalidation of all ERAT entries\n");
+
+	/* Invalidates all ERAT entries for Radix or HPT */
+	ierat = CXL_XSL9_IERAT_IALL;
+	if (radix_enabled())
+		ierat |= CXL_XSL9_IERAT_INVR;
+	cxl_p1_write(adapter, CXL_XSL9_IERAT, ierat);
+
+	while (cxl_p1_read(adapter, CXL_XSL9_IERAT) & CXL_XSL9_IERAT_IINPROG) {
+		if (time_after_eq(jiffies, timeout)) {
+			dev_warn(&adapter->dev,
+			"WARNING: CXL adapter invalidation of all ERAT entries timed out!\n");
+			return -EBUSY;
+		}
+		if (!cxl_ops->link_ok(adapter, NULL))
+			return -EIO;
+		cpu_relax();
+	}
+	return 0;
+}
+
 int cxl_invalidate_all_psl8(struct cxl *adapter)
 {
 	unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
@@ -502,7 +538,7 @@ static int activate_afu_directed(struct cxl_afu *afu)
 
 	afu->num_procs = afu->max_procs_virtualised;
 	if (afu->native->spa == NULL) {
-		if (cxl_alloc_spa(afu))
+		if (cxl_alloc_spa(afu, CXL_MODE_DIRECTED))
 			return -ENOMEM;
 	}
 	attach_spa(afu);
@@ -552,10 +588,19 @@ static u64 calculate_sr(struct cxl_context *ctx)
 		sr |= (mfmsr() & MSR_SF) | CXL_PSL_SR_An_HV;
 	} else {
 		sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
-		sr &= ~(CXL_PSL_SR_An_HV);
+		if (radix_enabled())
+			sr |= CXL_PSL_SR_An_HV;
+		else
+			sr &= ~(CXL_PSL_SR_An_HV);
 		if (!test_tsk_thread_flag(current, TIF_32BIT))
 			sr |= CXL_PSL_SR_An_SF;
 	}
+	if (cxl_is_psl9(ctx->afu)) {
+		if (radix_enabled())
+			sr |= CXL_PSL_SR_An_XLAT_ror;
+		else
+			sr |= CXL_PSL_SR_An_XLAT_hpt;
+	}
 	return sr;
 }
 
@@ -588,6 +633,70 @@ static void update_ivtes_directed(struct cxl_context *ctx)
 		WARN_ON(add_process_element(ctx));
 }
 
+static int process_element_entry_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+	u32 pid;
+
+	cxl_assign_psn_space(ctx);
+
+	ctx->elem->ctxtime = 0; /* disable */
+	ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
+	ctx->elem->haurp = 0; /* disable */
+
+	if (ctx->kernel)
+		pid = 0;
+	else {
+		if (ctx->mm == NULL) {
+			pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
+				__func__, ctx->pe, pid_nr(ctx->pid));
+			return -EINVAL;
+		}
+		pid = ctx->mm->context.id;
+	}
+
+	ctx->elem->common.tid = 0;
+	ctx->elem->common.pid = cpu_to_be32(pid);
+
+	ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
+
+	ctx->elem->common.csrp = 0; /* disable */
+
+	cxl_prefault(ctx, wed);
+
+	/*
+	 * Ensure we have the multiplexed PSL interrupt set up to take faults
+	 * for kernel contexts that may not have allocated any AFU IRQs at all:
+	 */
+	if (ctx->irqs.range[0] == 0) {
+		ctx->irqs.offset[0] = ctx->afu->native->psl_hwirq;
+		ctx->irqs.range[0] = 1;
+	}
+
+	ctx->elem->common.amr = cpu_to_be64(amr);
+	ctx->elem->common.wed = cpu_to_be64(wed);
+
+	return 0;
+}
+
+int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+	int result;
+
+	/* fill the process element entry */
+	result = process_element_entry_psl9(ctx, wed, amr);
+	if (result)
+		return result;
+
+	update_ivtes_directed(ctx);
+
+	/* first guy needs to enable */
+	result = cxl_ops->afu_check_and_enable(ctx->afu);
+	if (result)
+		return result;
+
+	return add_process_element(ctx);
+}
+
 int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	u32 pid;
@@ -598,7 +707,7 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 	ctx->elem->ctxtime = 0; /* disable */
 	ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
 	ctx->elem->haurp = 0; /* disable */
-	ctx->elem->sdr = cpu_to_be64(mfspr(SPRN_SDR1));
+	ctx->elem->u.sdr = cpu_to_be64(mfspr(SPRN_SDR1));
 
 	pid = current->pid;
 	if (ctx->kernel)
@@ -609,13 +718,13 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 	ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
 
 	ctx->elem->common.csrp = 0; /* disable */
-	ctx->elem->common.aurp0 = 0; /* disable */
-	ctx->elem->common.aurp1 = 0; /* disable */
+	ctx->elem->common.u.psl8.aurp0 = 0; /* disable */
+	ctx->elem->common.u.psl8.aurp1 = 0; /* disable */
 
 	cxl_prefault(ctx, wed);
 
-	ctx->elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
-	ctx->elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
+	ctx->elem->common.u.psl8.sstp0 = cpu_to_be64(ctx->sstp0);
+	ctx->elem->common.u.psl8.sstp1 = cpu_to_be64(ctx->sstp1);
 
 	/*
 	 * Ensure we have the multiplexed PSL interrupt set up to take faults
@@ -681,6 +790,31 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
 	return 0;
 }
 
+int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu)
+{
+	dev_info(&afu->dev, "Activating dedicated process mode\n");
+
+	/* If XSL is set to dedicated mode (Set in PSL_SCNTL reg), the
+	 * XSL and AFU are programmed to work with a single context.
+	 * The context information should be configured in the SPA area
+	 * index 0 (so PSL_SPAP must be configured before enabling the
+	 * AFU).
+	 */
+	afu->num_procs = 1;
+	if (afu->native->spa == NULL) {
+		if (cxl_alloc_spa(afu, CXL_MODE_DEDICATED))
+			return -ENOMEM;
+	}
+	attach_spa(afu);
+
+	cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_Process);
+	cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
+
+	afu->current_mode = CXL_MODE_DEDICATED;
+
+	return cxl_chardev_d_afu_add(afu);
+}
+
 int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
 {
 	dev_info(&afu->dev, "Activating dedicated process mode\n");
@@ -704,6 +838,16 @@ int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
 	return cxl_chardev_d_afu_add(afu);
 }
 
+void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx)
+{
+	int r;
+
+	for (r = 0; r < CXL_IRQ_RANGES; r++) {
+		ctx->elem->ivte_offsets[r] = cpu_to_be16(ctx->irqs.offset[r]);
+		ctx->elem->ivte_ranges[r] = cpu_to_be16(ctx->irqs.range[r]);
+	}
+}
+
 void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
 {
 	struct cxl_afu *afu = ctx->afu;
@@ -720,6 +864,26 @@ void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
 			((u64)ctx->irqs.range[3] & 0xffff));
 }
 
+int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+	struct cxl_afu *afu = ctx->afu;
+	int result;
+
+	/* fill the process element entry */
+	result = process_element_entry_psl9(ctx, wed, amr);
+	if (result)
+		return result;
+
+	if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
+		afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
+
+	result = cxl_ops->afu_reset(afu);
+	if (result)
+		return result;
+
+	return afu_enable(afu);
+}
+
 int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
 {
 	struct cxl_afu *afu = ctx->afu;
@@ -891,6 +1055,21 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
 	return 0;
 }
 
+void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx)
+{
+	u64 fir1, fir2, serr;
+
+	fir1 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR1);
+	fir2 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR2);
+
+	dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%016llx\n", fir1);
+	dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%016llx\n", fir2);
+	if (ctx->afu->adapter->native->sl_ops->register_serr_irq) {
+		serr = cxl_p1n_read(ctx->afu, CXL_PSL_SERR_An);
+		cxl_afu_decode_psl_serr(ctx->afu, serr);
+	}
+}
+
 void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
 {
 	u64 fir1, fir2, fir_slice, serr, afu_debug;
@@ -927,9 +1106,20 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
 	return cxl_ops->ack_irq(ctx, 0, errstat);
 }
 
+static bool cxl_is_translation_fault(struct cxl_afu *afu, u64 dsisr)
+{
+	if ((cxl_is_psl8(afu)) && (dsisr & CXL_PSL_DSISR_TRANS))
+		return true;
+
+	if ((cxl_is_psl9(afu)) && (dsisr & CXL_PSL9_DSISR_An_TF))
+		return true;
+
+	return false;
+}
+
 irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
 {
-	if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
+	if (cxl_is_translation_fault(afu, irq_info->dsisr))
 		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
 	else
 		cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
@@ -998,6 +1188,9 @@ static void native_irq_wait(struct cxl_context *ctx)
 		if (cxl_is_psl8(ctx->afu) &&
 		   ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
 			return;
+		if (cxl_is_psl9(ctx->afu) &&
+		   ((dsisr & CXL_PSL9_DSISR_PENDING) == 0))
+			return;
 		/*
 		 * We are waiting for the workqueue to process our
 		 * irq, so need to let that run here.
@@ -1126,6 +1319,12 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
 	serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
 	if (cxl_is_power8())
 		serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
+	if (cxl_is_power9()) {
+		/* By default, all errors are masked, so don't set all the
+		 * mask bits. Slice errors will be transferred.
+		 */
+		serr = (serr & ~0xff0000007fffffffULL) | (afu->serr_hwirq & 0xffff);
+	}
 	cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
 
 	return 0;
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index a910115..f4ed1d3 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -60,7 +60,7 @@
 #define CXL_VSEC_PROTOCOL_MASK   0xe0
 #define CXL_VSEC_PROTOCOL_1024TB 0x80
 #define CXL_VSEC_PROTOCOL_512TB  0x40
-#define CXL_VSEC_PROTOCOL_256TB  0x20 /* Power 8 uses this */
+#define CXL_VSEC_PROTOCOL_256TB  0x20 /* Power 8/9 uses this */
 #define CXL_VSEC_PROTOCOL_ENABLE 0x01
 
 #define CXL_READ_VSEC_PSL_REVISION(dev, vsec, dest) \
@@ -326,14 +326,18 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
 
 #define P8_CAPP_UNIT0_ID 0xBA
 #define P8_CAPP_UNIT1_ID 0XBE
+#define P9_CAPP_UNIT0_ID 0xC0
+#define P9_CAPP_UNIT1_ID 0xE0
 
-static u64 get_capp_unit_id(struct device_node *np)
+static int get_phb_index(struct device_node *np, u32 *phb_index)
 {
-	u32 phb_index;
-
-	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
-		return 0;
+	if (of_property_read_u32(np, "ibm,phb-index", phb_index))
+		return -ENODEV;
+	return 0;
+}
 
+static u64 get_capp_unit_id(struct device_node *np, u32 phb_index)
+{
 	/*
 	 * POWER 8:
 	 *  - For chips other than POWER8NVL, we only have CAPP 0,
@@ -352,11 +356,27 @@ static u64 get_capp_unit_id(struct device_node *np)
 			return P8_CAPP_UNIT1_ID;
 	}
 
+	/*
+	 * POWER 9:
+	 *   PEC0 (PHB0). Capp ID = CAPP0 (0b1100_0000)
+	 *   PEC1 (PHB1 - PHB2). No capi mode
+	 *   PEC2 (PHB3 - PHB4 - PHB5): Capi mode on PHB3 only. Capp ID = CAPP1 (0b1110_0000)
+	 */
+	if (cxl_is_power9()) {
+		if (phb_index == 0)
+			return P9_CAPP_UNIT0_ID;
+
+		if (phb_index == 3)
+			return P9_CAPP_UNIT1_ID;
+	}
+
 	return 0;
 }
 
-static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id)
+static int calc_capp_routing(struct pci_dev *dev, u64 *chipid,
+			     u32 *phb_index, u64 *capp_unit_id)
 {
+	int rc;
 	struct device_node *np;
 	const __be32 *prop;
 
@@ -367,8 +387,16 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
 		np = of_get_next_parent(np);
 	if (!np)
 		return -ENODEV;
+
 	*chipid = be32_to_cpup(prop);
-	*capp_unit_id = get_capp_unit_id(np);
+
+	rc = get_phb_index(np, phb_index);
+	if (rc) {
+		pr_err("cxl: invalid phb index\n");
+		return rc;
+	}
+
+	*capp_unit_id = get_capp_unit_id(np, *phb_index);
 	of_node_put(np);
 	if (!*capp_unit_id) {
 		pr_err("cxl: invalid capp unit id\n");
@@ -378,14 +406,97 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
 	return 0;
 }
 
+static int init_implementation_adapter_regs_psl9(struct cxl *adapter, struct pci_dev *dev)
+{
+	u64 xsl_dsnctl, psl_fircntl;
+	u64 chipid;
+	u32 phb_index;
+	u64 capp_unit_id;
+	int rc;
+
+	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
+	if (rc)
+		return rc;
+
+	/* CAPI Identifier bits [0:7]
+	 * bit 61:60 MSI bits --> 0
+	 * bit 59 TVT selector --> 0
+	 */
+	/* Tell XSL where to route data to.
+	 * The field chipid should match the PHB CAPI_CMPM register
+	 */
+	xsl_dsnctl = ((u64)0x2 << (63-7)); /* Bit 57 */
+	xsl_dsnctl |= (capp_unit_id << (63-15));
+
+	/* nMMU_ID Defaults to: b'000001001' */
+	xsl_dsnctl |= ((u64)0x09 << (63-28));
+
+	if (cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1)) {
+		/* Used to identify CAPI packets which should be sorted into
+		 * the Non-Blocking queues by the PHB. This field should match
+		 * the PHB PBL_NBW_CMPM register
+		 * nbwind=0x03, bits [57:58], must include capi indicator.
+		 * Not supported on P9 DD1.
+		 */
+		xsl_dsnctl |= ((u64)0x03 << (63-47));
+
+		/* Upper 16b address bits of ASB_Notify messages sent to the
+		 * system. Need to match the PHB's ASN Compare/Mask Register.
+		 * Not supported on P9 DD1.
+		 */
+		xsl_dsnctl |= ((u64)0x04 << (63-55));
+	}
+
+	cxl_p1_write(adapter, CXL_XSL9_DSNCTL, xsl_dsnctl);
+
+	/* Set fir_cntl to recommended value for production env */
+	psl_fircntl = (0x2ULL << (63-3)); /* ce_report */
+	psl_fircntl |= (0x1ULL << (63-6)); /* FIR_report */
+	psl_fircntl |= 0x1ULL; /* ce_thresh */
+	cxl_p1_write(adapter, CXL_PSL9_FIR_CNTL, psl_fircntl);
+
+	/* vccredits=0x1  pcklat=0x4 */
+	cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0000000000001810ULL);
+
+	/* For debugging with trace arrays.
+	 * Configure RX trace 0 segmented mode.
+	 * Configure CT trace 0 segmented mode.
+	 * Configure LA0 trace 0 segmented mode.
+	 * Configure LA1 trace 0 segmented mode.
+	 */
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000000ULL);
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000003ULL);
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000005ULL);
+	cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x8040800080000006ULL);
+
+	/* A response to an ASB_Notify request is returned by the
+	 * system as an MMIO write to the address defined in
+	 * the PSL_TNR_ADDR register
+	 */
+	/* PSL_TNR_ADDR */
+
+	/* NORST */
+	cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x8000000000000000ULL);
+
+	/* allocate the apc machines */
+	cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x40000003FFFF0000ULL);
+
+	/* Disable vc dd1 fix */
+	if ((cxl_is_power9() && cpu_has_feature(CPU_FTR_POWER9_DD1)))
+		cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0400000000000001ULL);
+
+	return 0;
+}
+
 static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
 {
 	u64 psl_dsnctl, psl_fircntl;
 	u64 chipid;
+	u32 phb_index;
 	u64 capp_unit_id;
 	int rc;
 
-	rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
+	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
 	if (rc)
 		return rc;
 
@@ -414,10 +525,11 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
 {
 	u64 xsl_dsnctl;
 	u64 chipid;
+	u32 phb_index;
 	u64 capp_unit_id;
 	int rc;
 
-	rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
+	rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
 	if (rc)
 		return rc;
 
@@ -435,6 +547,12 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
 /* For the PSL this is a multiple for 0 < n <= 7: */
 #define PSL_2048_250MHZ_CYCLES 1
 
+static void write_timebase_ctrl_psl9(struct cxl *adapter)
+{
+	cxl_p1_write(adapter, CXL_PSL9_TB_CTLSTAT,
+		     TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
+}
+
 static void write_timebase_ctrl_psl8(struct cxl *adapter)
 {
 	cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
@@ -456,6 +574,11 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
 		     TBSYNC_CNT(XSL_4000_CLOCKS));
 }
 
+static u64 timebase_read_psl9(struct cxl *adapter)
+{
+	return cxl_p1_read(adapter, CXL_PSL9_Timebase);
+}
+
 static u64 timebase_read_psl8(struct cxl *adapter)
 {
 	return cxl_p1_read(adapter, CXL_PSL_Timebase);
@@ -514,6 +637,11 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
 	return;
 }
 
+static int init_implementation_afu_regs_psl9(struct cxl_afu *afu)
+{
+	return 0;
+}
+
 static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
 {
 	/* read/write masks for this slice */
@@ -612,7 +740,7 @@ static int setup_cxl_bars(struct pci_dev *dev)
 	/*
 	 * BAR 4/5 has a special meaning for CXL and must be programmed with a
 	 * special value corresponding to the CXL protocol address range.
-	 * For POWER 8 that means bits 48:49 must be set to 10
+	 * For POWER 8/9 that means bits 48:49 must be set to 10
 	 */
 	pci_write_config_dword(dev, PCI_BASE_ADDRESS_4, 0x00000000);
 	pci_write_config_dword(dev, PCI_BASE_ADDRESS_5, 0x00020000);
@@ -997,6 +1125,52 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
 	return 0;
 }
 
+static int sanitise_afu_regs_psl9(struct cxl_afu *afu)
+{
+	u64 reg;
+
+	/*
+	 * Clear out any regs that contain either an IVTE or address or may be
+	 * waiting on an acknowledgment to try to be a bit safer as we bring
+	 * it online
+	 */
+	reg = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
+	if ((reg & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) {
+		dev_warn(&afu->dev, "WARNING: AFU was not disabled: %#016llx\n", reg);
+		if (cxl_ops->afu_reset(afu))
+			return -EIO;
+		if (cxl_afu_disable(afu))
+			return -EIO;
+		if (cxl_psl_purge(afu))
+			return -EIO;
+	}
+	cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0x0000000000000000);
+	cxl_p1n_write(afu, CXL_PSL_AMBAR_An, 0x0000000000000000);
+	reg = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
+	if (reg) {
+		dev_warn(&afu->dev, "AFU had pending DSISR: %#016llx\n", reg);
+		if (reg & CXL_PSL9_DSISR_An_TF)
+			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
+		else
+			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
+	}
+	if (afu->adapter->native->sl_ops->register_serr_irq) {
+		reg = cxl_p1n_read(afu, CXL_PSL_SERR_An);
+		if (reg) {
+			if (reg & ~0x000000007fffffff)
+				dev_warn(&afu->dev, "AFU had pending SERR: %#016llx\n", reg);
+			cxl_p1n_write(afu, CXL_PSL_SERR_An, reg & ~0xffff);
+		}
+	}
+	reg = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
+	if (reg) {
+		dev_warn(&afu->dev, "AFU had pending error status: %#016llx\n", reg);
+		cxl_p2n_write(afu, CXL_PSL_ErrStat_An, reg);
+	}
+
+	return 0;
+}
+
 static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
 {
 	u64 reg;
@@ -1254,10 +1428,10 @@ int cxl_pci_reset(struct cxl *adapter)
 
 	/*
 	 * The adapter is about to be reset, so ignore errors.
-	 * Not supported on P9 DD1 but don't forget to enable it
-	 * on P9 DD2
+	 * Not supported on P9 DD1
 	 */
-	if (cxl_is_power8())
+	if ((cxl_is_power8()) ||
+	    ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
 		cxl_data_cache_flush(adapter);
 
 	/* pcie_warm_reset requests a fundamental pci reset which includes a
@@ -1393,6 +1567,9 @@ static bool cxl_compatible_caia_version(struct cxl *adapter)
 	if (cxl_is_power8() && (adapter->caia_major == 1))
 		return true;
 
+	if (cxl_is_power9() && (adapter->caia_major == 2))
+		return true;
+
 	return false;
 }
 
@@ -1460,8 +1637,12 @@ static int sanitise_adapter_regs(struct cxl *adapter)
 	/* Clear PSL tberror bit by writing 1 to it */
 	cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror);
 
-	if (adapter->native->sl_ops->invalidate_all)
+	if (adapter->native->sl_ops->invalidate_all) {
+		/* do not invalidate ERAT entries when not reloading on PERST */
+		if (cxl_is_power9() && (adapter->perst_loads_image))
+			return 0;
 		rc = adapter->native->sl_ops->invalidate_all(adapter);
+	}
 
 	return rc;
 }
@@ -1546,6 +1727,30 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
 	pci_disable_device(pdev);
 }
 
+static const struct cxl_service_layer_ops psl9_ops = {
+	.adapter_regs_init = init_implementation_adapter_regs_psl9,
+	.invalidate_all = cxl_invalidate_all_psl9,
+	.afu_regs_init = init_implementation_afu_regs_psl9,
+	.sanitise_afu_regs = sanitise_afu_regs_psl9,
+	.register_serr_irq = cxl_native_register_serr_irq,
+	.release_serr_irq = cxl_native_release_serr_irq,
+	.handle_interrupt = cxl_irq_psl9,
+	.fail_irq = cxl_fail_irq_psl,
+	.activate_dedicated_process = cxl_activate_dedicated_process_psl9,
+	.attach_afu_directed = cxl_attach_afu_directed_psl9,
+	.attach_dedicated_process = cxl_attach_dedicated_process_psl9,
+	.update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl9,
+	.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl9,
+	.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl9,
+	.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
+	.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
+	.debugfs_stop_trace = cxl_stop_trace_psl9,
+	.write_timebase_ctrl = write_timebase_ctrl_psl9,
+	.timebase_read = timebase_read_psl9,
+	.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
+	.needs_reset_before_disable = true,
+};
+
 static const struct cxl_service_layer_ops psl8_ops = {
 	.adapter_regs_init = init_implementation_adapter_regs_psl8,
 	.invalidate_all = cxl_invalidate_all_psl8,
@@ -1597,6 +1802,9 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
 		if (cxl_is_power8()) {
 			dev_info(&dev->dev, "Device uses a PSL8\n");
 			adapter->native->sl_ops = &psl8_ops;
+		} else {
+			dev_info(&dev->dev, "Device uses a PSL9\n");
+			adapter->native->sl_ops = &psl9_ops;
 		}
 	}
 }
@@ -1667,8 +1875,12 @@ static void cxl_pci_remove_adapter(struct cxl *adapter)
 	cxl_sysfs_adapter_remove(adapter);
 	cxl_debugfs_adapter_remove(adapter);
 
-	/* Flush adapter datacache as its about to be removed */
-	cxl_data_cache_flush(adapter);
+	/* Flush adapter datacache as it's about to be removed.
+	 * Not supported on P9 DD1
+	 */
+	if ((cxl_is_power8()) ||
+	    ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
+		cxl_data_cache_flush(adapter);
 
 	cxl_deconfigure_adapter(adapter);
 
@@ -1752,6 +1964,11 @@ static int cxl_probe(struct pci_dev *dev, const struct pci_device_id *id)
 		return -ENODEV;
 	}
 
+	if (cxl_is_power9() && !radix_enabled()) {
+		dev_info(&dev->dev, "Only Radix mode supported\n");
+		return -ENODEV;
+	}
+
 	if (cxl_verbose)
 		dump_cxl_config_space(dev);
 
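The CXL_XSL9_DSNCTL value programmed by
init_implementation_adapter_regs_psl9() above is assembled from several
fields using IBM bit numbering (bit 0 is the most significant bit). A
standalone sketch that reproduces the composition, assuming
capp_unit_id 0xC0 (P9 CAPP0) on a non-DD1 part, just to make the shifts
concrete:

#include <stdio.h>
#include <stdint.h>

/* IBM bit numbering: bit 0 is the MSB of the 64-bit register */
#define PPC_BIT_SHIFT(bit) (63 - (bit))

int main(void)
{
	uint64_t capp_unit_id = 0xC0;   /* assumed: P9 CAPP0 (PHB0) */
	uint64_t xsl_dsnctl;

	xsl_dsnctl  = (uint64_t)0x2 << PPC_BIT_SHIFT(7);   /* CAPI identifier, bits 0:7 */
	xsl_dsnctl |= capp_unit_id << PPC_BIT_SHIFT(15);   /* route data to this CAPP unit */
	xsl_dsnctl |= (uint64_t)0x09 << PPC_BIT_SHIFT(28); /* nMMU_ID default */
	xsl_dsnctl |= (uint64_t)0x03 << PPC_BIT_SHIFT(47); /* nbwind, non-blocking queues (non-DD1) */
	xsl_dsnctl |= (uint64_t)0x04 << PPC_BIT_SHIFT(55); /* ASB_Notify upper address bits (non-DD1) */

	printf("XSL9_DSNCTL = 0x%016llx\n", (unsigned long long)xsl_dsnctl);
	return 0;
}
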
diff --git a/drivers/misc/cxl/trace.h b/drivers/misc/cxl/trace.h
index 751d611..b8e300a 100644
--- a/drivers/misc/cxl/trace.h
+++ b/drivers/misc/cxl/trace.h
@@ -17,6 +17,15 @@
 
 #include "cxl.h"
 
+#define dsisr_psl9_flags(flags) \
+	__print_flags(flags, "|", \
+		{ CXL_PSL9_DSISR_An_CO_MASK,	"FR" }, \
+		{ CXL_PSL9_DSISR_An_TF,		"TF" }, \
+		{ CXL_PSL9_DSISR_An_PE,		"PE" }, \
+		{ CXL_PSL9_DSISR_An_AE,		"AE" }, \
+		{ CXL_PSL9_DSISR_An_OC,		"OC" }, \
+		{ CXL_PSL9_DSISR_An_S,		"S" })
+
 #define DSISR_FLAGS \
 	{ CXL_PSL_DSISR_An_DS,	"DS" }, \
 	{ CXL_PSL_DSISR_An_DM,	"DM" }, \
@@ -154,6 +163,40 @@ TRACE_EVENT(cxl_afu_irq,
 	)
 );
 
+TRACE_EVENT(cxl_psl9_irq,
+	TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
+
+	TP_ARGS(ctx, irq, dsisr, dar),
+
+	TP_STRUCT__entry(
+		__field(u8, card)
+		__field(u8, afu)
+		__field(u16, pe)
+		__field(int, irq)
+		__field(u64, dsisr)
+		__field(u64, dar)
+	),
+
+	TP_fast_assign(
+		__entry->card = ctx->afu->adapter->adapter_num;
+		__entry->afu = ctx->afu->slice;
+		__entry->pe = ctx->pe;
+		__entry->irq = irq;
+		__entry->dsisr = dsisr;
+		__entry->dar = dar;
+	),
+
+	TP_printk("afu%i.%i pe=%i irq=%i dsisr=0x%016llx dsisr=%s dar=0x%016llx",
+		__entry->card,
+		__entry->afu,
+		__entry->pe,
+		__entry->irq,
+		__entry->dsisr,
+		dsisr_psl9_flags(__entry->dsisr),
+		__entry->dar
+	)
+);
+
 TRACE_EVENT(cxl_psl_irq,
 	TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
 
-- 
2.9.3

* Re: [PATCH V4 7/7] cxl: Add psl9 specific code
  2017-04-12 11:57     ` Frederic Barrat
@ 2017-04-13 11:05       ` Michael Ellerman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Ellerman @ 2017-04-13 11:05 UTC (permalink / raw)
  To: Frederic Barrat, Andrew Donnellan, Christophe Lombard,
	linuxppc-dev, imunsie

Frederic Barrat <fbarrat@linux.vnet.ibm.com> writes:

> On 12/04/2017 at 09:52, Andrew Donnellan wrote:
>> On 08/04/17 00:11, Christophe Lombard wrote:
>>> +static u32 get_phb_index(struct device_node *np)
>>>  {
>>>      u32 phb_index;
>>>
>>>      if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
>>> -        return 0;
>>> +        return -ENODEV;
>>
>> Function is unsigned.
>
> [Christophe is off till the end of the week, so I'm following up]
>
> Michael: what's the easiest for you at this point? Shall I send a new
> version of the 7th patch with all changes consolidated (tab error + doc
> + Andrew's remark above)?

An incremental patch would have been easiest, but that's OK, I've taken
your remix.

cheers

* Re: [V4,1/7] cxl: Read vsec perst load image
  2017-04-07 14:11 ` [PATCH V4 1/7] cxl: Read vsec perst load image Christophe Lombard
  2017-04-10  4:00   ` Andrew Donnellan
  2017-04-10 16:40   ` Frederic Barrat
@ 2017-04-19  3:47   ` Michael Ellerman
  2 siblings, 0 replies; 30+ messages in thread
From: Michael Ellerman @ 2017-04-19  3:47 UTC (permalink / raw)
  To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie, andrew.donnellan

On Fri, 2017-04-07 at 14:11:53 UTC, Christophe Lombard wrote:
> This bit is used to cause a flash image load for programmable
> CAIA-compliant implementation. If this bit is set to ‘0’, a power
> cycle of the adapter is required to load a programmable
> CAIA-compliant implementation from flash.
> This field will be used by the following patches.
> 
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/aba81433b50350fde68bf80fe9f75d

cheers

* Re: [V4,7/7,remix] cxl: Add psl9 specific code
  2017-04-12 14:34   ` [PATCH V4 7/7 remix] " Frederic Barrat
@ 2017-04-19  3:47     ` Michael Ellerman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Ellerman @ 2017-04-19  3:47 UTC (permalink / raw)
  To: Frederic Barrat, andrew.donnellan, imunsie, linuxppc-dev

On Wed, 2017-04-12 at 14:34:07 UTC, Frederic Barrat wrote:
> From: Christophe Lombard <clombard@linux.vnet.ibm.com>
> 
> The new Coherent Accelerator Interface Architecture, level 2, for the
> IBM POWER9 brings new content and features:
> - POWER9 Service Layer
> - Registers
> - Radix mode
> - Process element entry
> - Dedicated-Shared Process Programming Model
> - Translation Fault Handling
> - CAPP
> - Memory Context ID
>     If a valid mm_struct is found the memory context id is used for each
>     transaction associated with the process handle. The PSL uses the
>     context ID to find the corresponding process element.
> 
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/f24be42aab37c6d07c05126673138e

cheers

end of thread, other threads:[~2017-04-19  3:47 UTC | newest]

Thread overview: 30+ messages
2017-04-07 14:11 [PATCH V4 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
2017-04-07 14:11 ` [PATCH V4 1/7] cxl: Read vsec perst load image Christophe Lombard
2017-04-10  4:00   ` Andrew Donnellan
2017-04-10 16:40   ` Frederic Barrat
2017-04-19  3:47   ` [V4,1/7] " Michael Ellerman
2017-04-07 14:11 ` [PATCH V4 2/7] cxl: Remove unused values in bare-metal environment Christophe Lombard
2017-04-10  5:25   ` Andrew Donnellan
2017-04-10 16:41   ` Frederic Barrat
2017-04-07 14:11 ` [PATCH V4 3/7] cxl: Keep track of mm struct associated with a context Christophe Lombard
2017-04-10  5:38   ` Andrew Donnellan
2017-04-10 16:49   ` Frederic Barrat
2017-04-07 14:11 ` [PATCH V4 4/7] cxl: Update implementation service layer Christophe Lombard
2017-04-10  7:08   ` Andrew Donnellan
2017-04-10 17:01   ` Frederic Barrat
2017-04-07 14:11 ` [PATCH V4 5/7] cxl: Rename some psl8 specific functions Christophe Lombard
2017-04-10  6:14   ` Andrew Donnellan
2017-04-10 17:06   ` Frederic Barrat
2017-04-07 14:11 ` [PATCH V4 6/7] cxl: Isolate few psl8 specific calls Christophe Lombard
2017-04-10 17:13   ` Frederic Barrat
2017-04-12  2:13     ` Michael Ellerman
2017-04-07 14:11 ` [PATCH V4 7/7] cxl: Add psl9 specific code Christophe Lombard
2017-04-11 14:41   ` Frederic Barrat
2017-04-12  2:11     ` Michael Ellerman
2017-04-12  8:30       ` christophe lombard
2017-04-12 11:47         ` Michael Ellerman
2017-04-12  7:52   ` Andrew Donnellan
2017-04-12 11:57     ` Frederic Barrat
2017-04-13 11:05       ` Michael Ellerman
2017-04-12 14:34   ` [PATCH V4 7/7 remix] " Frederic Barrat
2017-04-19  3:47     ` [V4,7/7,remix] " Michael Ellerman
