* [PATCH 0/6] arm64 pmem support
@ 2017-07-25 10:55 ` Robin Murphy
  0 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-07-25 10:55 UTC (permalink / raw)
  To: will.deacon, catalin.marinas; +Cc: mark.rutland, linux-arm-kernel, linux-nvdimm

Hi all,

With the latest updates to the pmem API, the arch code contribution
becomes very straightforward to wire up - I think there's about as
much code here to just cope with the existence of our new instruction
as there is to actually make use of it. I don't have access to any
NVDIMMs nor suitable hardware to put them in, so this is written purely
to spec - testing has been limited to exercising the feature detection on
a v8.2 Fast Model versus v8.0 systems.

Patch #1 could go in as a fix ahead of the rest; it just needs to land
before patch #5 to prevent the latter from blowing up the build.

Robin.

Robin Murphy (6):
  arm64: mm: Fix set_memory_valid() declaration
  arm64: Convert __inval_cache_range() to area-based
  arm64: Expose DC CVAP to userspace
  arm64: Handle trapped DC CVAP
  arm64: Implement pmem API support
  arm64: uaccess: Implement *_flushcache variants

 Documentation/arm64/cpu-feature-registers.txt |  2 ++
 arch/arm64/Kconfig                            | 12 +++++++
 arch/arm64/include/asm/assembler.h            |  6 ++++
 arch/arm64/include/asm/cacheflush.h           |  4 ++-
 arch/arm64/include/asm/cpucaps.h              |  3 +-
 arch/arm64/include/asm/esr.h                  |  3 +-
 arch/arm64/include/asm/string.h               |  4 +++
 arch/arm64/include/asm/sysreg.h               |  1 +
 arch/arm64/include/asm/uaccess.h              | 12 +++++++
 arch/arm64/include/uapi/asm/hwcap.h           |  1 +
 arch/arm64/kernel/cpufeature.c                | 13 ++++++++
 arch/arm64/kernel/cpuinfo.c                   |  1 +
 arch/arm64/kernel/head.S                      | 18 +++++-----
 arch/arm64/kernel/traps.c                     |  3 ++
 arch/arm64/lib/Makefile                       |  2 ++
 arch/arm64/lib/uaccess_flushcache.c           | 47 +++++++++++++++++++++++++++
 arch/arm64/mm/cache.S                         | 37 ++++++++++++++++-----
 arch/arm64/mm/pageattr.c                      | 18 ++++++++++
 18 files changed, 166 insertions(+), 21 deletions(-)
 create mode 100644 arch/arm64/lib/uaccess_flushcache.c

-- 
2.12.2.dirty

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm

* [PATCH 1/6] arm64: mm: Fix set_memory_valid() declaration
  2017-07-25 10:55 ` Robin Murphy
@ 2017-07-25 10:55   ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-07-25 10:55 UTC (permalink / raw)
  To: will.deacon, catalin.marinas; +Cc: mark.rutland, linux-arm-kernel, linux-nvdimm

Clearly, set_memory_valid() has never been seen in the same room as its
declaration... Whilst the type mismatch is such that kexec probably
wasn't broken in practice, fix it to match the definition as it should.

Fixes: 9b0aa14e3155 ("arm64: mm: add set_memory_valid()")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d74a284abdc2..4d4f650c290e 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -150,6 +150,6 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 {
 }
 
-int set_memory_valid(unsigned long addr, unsigned long size, int enable);
+int set_memory_valid(unsigned long addr, int numpages, int enable);
 
 #endif
-- 
2.12.2.dirty

* [PATCH 2/6] arm64: Convert __inval_cache_range() to area-based
  2017-07-25 10:55 ` Robin Murphy
@ 2017-07-25 10:55   ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-07-25 10:55 UTC (permalink / raw)
  To: will.deacon, catalin.marinas; +Cc: mark.rutland, linux-arm-kernel, linux-nvdimm

__inval_cache_range() is already the odd one out among our data cache
maintenance routines as the only remaining range-based one; as we're
going to want an invalidation routine to call from C code for the pmem
API, let's tweak the prototype and name to bring it in line with the
clean operations, and to make its relationship with __dma_inv_area()
neatly mirror that of __clean_dcache_area_poc() and __dma_clean_area().
The loop clearing the early page tables gets mildly massaged in the
process for the sake of consistency.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/kernel/head.S            | 18 +++++++++---------
 arch/arm64/mm/cache.S               | 23 ++++++++++++++---------
 3 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 4d4f650c290e..b4b43a94dffd 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -67,6 +67,7 @@
  */
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
+extern void __inval_dcache_area(void *addr, size_t len);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 973df7de7bf8..73a0531e0187 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -143,8 +143,8 @@ preserve_boot_args:
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 
-	add	x1, x0, #0x20			// 4 x 8 bytes
-	b	__inval_cache_range		// tail call
+	mov	x1, #0x20			// 4 x 8 bytes
+	b	__inval_dcache_area		// tail call
 ENDPROC(preserve_boot_args)
 
 /*
@@ -221,20 +221,20 @@ __create_page_tables:
 	 * dirty cache lines being evicted.
 	 */
 	adrp	x0, idmap_pg_dir
-	adrp	x1, swapper_pg_dir + SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE
-	bl	__inval_cache_range
+	ldr	x1, =(IDMAP_DIR_SIZE + SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
+	bl	__inval_dcache_area
 
 	/*
 	 * Clear the idmap and swapper page tables.
 	 */
 	adrp	x0, idmap_pg_dir
-	adrp	x6, swapper_pg_dir + SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE
+	ldr	x1, =(IDMAP_DIR_SIZE + SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
 1:	stp	xzr, xzr, [x0], #16
 	stp	xzr, xzr, [x0], #16
 	stp	xzr, xzr, [x0], #16
 	stp	xzr, xzr, [x0], #16
-	cmp	x0, x6
-	b.lo	1b
+	subs	x1, x1, #64
+	b.ne	1b
 
 	mov	x7, SWAPPER_MM_MMUFLAGS
 
@@ -307,9 +307,9 @@ __create_page_tables:
 	 * tables again to remove any speculatively loaded cache lines.
 	 */
 	adrp	x0, idmap_pg_dir
-	adrp	x1, swapper_pg_dir + SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE
+	ldr	x1, =(IDMAP_DIR_SIZE + SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
 	dmb	sy
-	bl	__inval_cache_range
+	bl	__inval_dcache_area
 
 	ret	x28
 ENDPROC(__create_page_tables)
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 83c27b6e6dca..ed47fbbb4b05 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -109,20 +109,25 @@ ENTRY(__clean_dcache_area_pou)
 ENDPROC(__clean_dcache_area_pou)
 
 /*
+ *	__inval_dcache_area(kaddr, size)
+ *
+ * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	are invalidated. Any partial lines at the ends of the interval are
+ *	also cleaned to PoC to prevent data loss.
+ *
+ *	- kaddr   - kernel address
+ *	- size    - size in question
+ */
+ENTRY(__inval_dcache_area)
+	/* FALLTHROUGH */
+
+/*
  *	__dma_inv_area(start, size)
  *	- start   - virtual start address of region
  *	- size    - size in question
  */
 __dma_inv_area:
 	add	x1, x1, x0
-	/* FALLTHROUGH */
-
-/*
- *	__inval_cache_range(start, end)
- *	- start   - start address of region
- *	- end     - end address of region
- */
-ENTRY(__inval_cache_range)
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	tst	x1, x3				// end cache line aligned?
@@ -140,7 +145,7 @@ ENTRY(__inval_cache_range)
 	b.lo	2b
 	dsb	sy
 	ret
-ENDPIPROC(__inval_cache_range)
+ENDPIPROC(__inval_dcache_area)
 ENDPROC(__dma_inv_area)
 
 /*
-- 
2.12.2.dirty

* [PATCH 3/6] arm64: Expose DC CVAP to userspace
  2017-07-25 10:55 ` Robin Murphy
@ 2017-07-25 10:55   ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-07-25 10:55 UTC (permalink / raw)
  To: will.deacon, catalin.marinas; +Cc: mark.rutland, linux-arm-kernel, linux-nvdimm

The ARMv8.2-DCPoP feature introduces persistent memory support to the
architecture, by defining a point of persistence in the memory
hierarchy, and a corresponding cache maintenance operation, DC CVAP.
Expose the support via HWCAP and MRS emulation.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 Documentation/arm64/cpu-feature-registers.txt | 2 ++
 arch/arm64/include/asm/sysreg.h               | 1 +
 arch/arm64/include/uapi/asm/hwcap.h           | 1 +
 arch/arm64/kernel/cpufeature.c                | 2 ++
 arch/arm64/kernel/cpuinfo.c                   | 1 +
 5 files changed, 7 insertions(+)

diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
index d1c97f9f51cc..dad411d635d8 100644
--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -179,6 +179,8 @@ infrastructure:
      | FCMA                         | [19-16] |    y    |
      |--------------------------------------------------|
      | JSCVT                        | [15-12] |    y    |
+     |--------------------------------------------------|
+     | DPB                          | [3-0]   |    y    |
      x--------------------------------------------------x
 
 Appendix I: Example
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 16e44fa9b3b6..1974731baa91 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -329,6 +329,7 @@
 #define ID_AA64ISAR1_LRCPC_SHIFT	20
 #define ID_AA64ISAR1_FCMA_SHIFT		16
 #define ID_AA64ISAR1_JSCVT_SHIFT	12
+#define ID_AA64ISAR1_DPB_SHIFT		0
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_GIC_SHIFT		24
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 4e187ce2a811..4b9344cba83a 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -35,5 +35,6 @@
 #define HWCAP_JSCVT		(1 << 13)
 #define HWCAP_FCMA		(1 << 14)
 #define HWCAP_LRCPC		(1 << 15)
+#define HWCAP_DCPOP		(1 << 16)
 
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9f9e0064c8c1..a2542ef3ff25 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -120,6 +120,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
@@ -916,6 +917,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index f495ee5049fd..311885962830 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -68,6 +68,7 @@ static const char *const hwcap_str[] = {
 	"jscvt",
 	"fcma",
 	"lrcpc",
+	"dcpop",
 	NULL
 };
 
-- 
2.12.2.dirty

* [PATCH 4/6] arm64: Handle trapped DC CVAP
  2017-07-25 10:55 ` Robin Murphy
@ 2017-07-25 10:55   ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-07-25 10:55 UTC (permalink / raw)
  To: will.deacon, catalin.marinas; +Cc: mark.rutland, linux-arm-kernel, linux-nvdimm

Cache clean to PoP is subject to the same access controls as to PoC, so
if we are trapping userspace cache maintenance with SCTLR_EL1.UCI, we
need to be prepared to handle it. To avoid getting into complicated
fights with binutils about ARMv8.2 options, we'll just cheat and use the
raw SYS instruction rather than the 'proper' DC alias.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/include/asm/esr.h | 3 ++-
 arch/arm64/kernel/traps.c    | 3 +++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 8cabd57b6348..2636df9a4d6b 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -157,9 +157,10 @@
 /*
  * User space cache operations have the following sysreg encoding
  * in System instructions.
- * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 14 }, WRITE (L=0)
+ * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 14 }, WRITE (L=0)
  */
 #define ESR_ELx_SYS64_ISS_CRM_DC_CIVAC	14
+#define ESR_ELx_SYS64_ISS_CRM_DC_CVAP	12
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAU	11
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAC	10
 #define ESR_ELx_SYS64_ISS_CRM_IC_IVAU	5
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index c7c7088097be..dc53a8ba1882 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -480,6 +480,9 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
 	case ESR_ELx_SYS64_ISS_CRM_DC_CVAC:	/* DC CVAC, gets promoted */
 		__user_cache_maint("dc civac", address, ret);
 		break;
+	case ESR_ELx_SYS64_ISS_CRM_DC_CVAP:	/* DC CVAP */
+		__user_cache_maint("sys 3, c7, c12, 1", address, ret);
+		break;
 	case ESR_ELx_SYS64_ISS_CRM_DC_CIVAC:	/* DC CIVAC */
 		__user_cache_maint("dc civac", address, ret);
 		break;
-- 
2.12.2.dirty

* [PATCH 5/6] arm64: Implement pmem API support
  2017-07-25 10:55 ` Robin Murphy
@ 2017-07-25 10:55   ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-07-25 10:55 UTC (permalink / raw)
  To: will.deacon, catalin.marinas; +Cc: mark.rutland, linux-arm-kernel, linux-nvdimm

Add a clean-to-point-of-persistence cache maintenance helper, and wire
up the basic architectural support for the pmem driver based on it.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/Kconfig                  | 11 +++++++++++
 arch/arm64/include/asm/assembler.h  |  6 ++++++
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/include/asm/cpucaps.h    |  3 ++-
 arch/arm64/kernel/cpufeature.c      | 11 +++++++++++
 arch/arm64/mm/cache.S               | 14 ++++++++++++++
 arch/arm64/mm/pageattr.c            | 18 ++++++++++++++++++
 7 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index dfd908630631..0b0576a54724 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -960,6 +960,17 @@ config ARM64_UAO
 	  regular load/store instructions if the cpu does not implement the
 	  feature.
 
+config ARM64_PMEM
+	bool "Enable support for persistent memory"
+	select ARCH_HAS_PMEM_API
+	help
+	  Say Y to enable support for the persistent memory API based on the
+	  ARMv8.2 DCPoP feature.
+
+	  The feature is detected at runtime, and the kernel will use DC CVAC
+	  operations if DC CVAP is not supported (following the behaviour of
+	  DC CVAP itself if the system does not define a point of persistence).
+
 endmenu
 
 config ARM64_MODULE_CMODEL_LARGE
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 1b67c3782d00..5d8903c45031 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -353,6 +353,12 @@ alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
 alternative_else
 	dc	civac, \kaddr
 alternative_endif
+	.elseif	(\op == cvap)
+alternative_if ARM64_HAS_DCPOP
+	sys 3, c7, c12, 1, \kaddr	// dc cvap
+alternative_else
+	dc	cvac, \kaddr
+alternative_endif
 	.else
 	dc	\op, \kaddr
 	.endif
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index b4b43a94dffd..76d1cc85d5b1 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -69,6 +69,7 @@ extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
 extern void __inval_dcache_area(void *addr, size_t len);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
+extern void __clean_dcache_area_pop(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(void *kaddr, unsigned long len);
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 8d2272c6822c..8da621627d7c 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -39,7 +39,8 @@
 #define ARM64_WORKAROUND_QCOM_FALKOR_E1003	18
 #define ARM64_WORKAROUND_858921			19
 #define ARM64_WORKAROUND_CAVIUM_30115		20
+#define ARM64_HAS_DCPOP				21
 
-#define ARM64_NCAPS				21
+#define ARM64_NCAPS				22
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a2542ef3ff25..cd52d365d1f0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -889,6 +889,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = 0,
 		.matches = has_no_fpsimd,
 	},
+#ifdef CONFIG_ARM64_PMEM
+	{
+		.desc = "Data cache clean to Point of Persistence",
+		.capability = ARM64_HAS_DCPOP,
+		.def_scope = SCOPE_SYSTEM,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.field_pos = ID_AA64ISAR1_DPB_SHIFT,
+		.min_field_value = 1,
+	},
+#endif
 	{},
 };
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index ed47fbbb4b05..7f1dbe962cf5 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -172,6 +172,20 @@ ENDPIPROC(__clean_dcache_area_poc)
 ENDPROC(__dma_clean_area)
 
 /*
+ *	__clean_dcache_area_pop(kaddr, size)
+ *
+ * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	are cleaned to the PoP.
+ *
+ *	- kaddr   - kernel address
+ *	- size    - size in question
+ */
+ENTRY(__clean_dcache_area_pop)
+	dcache_by_line_op cvap, sy, x0, x1, x2, x3
+	ret
+ENDPIPROC(__clean_dcache_area_pop)
+
+/*
  *	__dma_flush_area(start, size)
  *
  *	clean & invalidate D / U line
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index a682a0a2a0fa..a461a00ceb3e 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -183,3 +183,21 @@ bool kernel_page_present(struct page *page)
 }
 #endif /* CONFIG_HIBERNATION */
 #endif /* CONFIG_DEBUG_PAGEALLOC */
+
+#ifdef CONFIG_ARCH_HAS_PMEM_API
+#include <asm/cacheflush.h>
+
+static inline void arch_wb_cache_pmem(void *addr, size_t size)
+{
+	/* Ensure order against any prior non-cacheable writes */
+	dmb(sy);
+	__clean_dcache_area_pop(addr, size);
+}
+EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
+
+static inline void arch_invalidate_pmem(void *addr, size_t size)
+{
+	__inval_dcache_area(addr, size);
+}
+EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
+#endif
-- 
2.12.2.dirty

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm

^ permalink raw reply related	[flat|nested] 34+ messages in thread
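As a rough illustration of what the patch above wires up, the per-line walk performed by dcache_by_line_op (and the CVAP-to-CVAC fallback selected by the new alternative) can be modelled in plain userspace C. Everything here is a sketch: dc_clean_line(), the fixed 64-byte line size and the counters are illustrative stand-ins, not kernel code (the real line size comes from CTR_EL0.DminLine):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE 64           /* illustrative; real code reads CTR_EL0.DminLine */

static int have_dcpop;          /* would mirror ID_AA64ISAR1_EL1.DPB >= 1 */
static unsigned long lines_cleaned;

/* Stand-in for one data-cache clean instruction on the line holding addr. */
static void dc_clean_line(uintptr_t addr)
{
	(void)addr;
	if (have_dcpop) {
		/* FEAT_DCPOP present: "dc cvap" (encoded as sys #3, c7, c12, #1) */
	} else {
		/* fallback: "dc cvac", clean to the Point of Coherency */
	}
	lines_cleaned++;
}

/* Model of __clean_dcache_area_pop(): clean [kaddr, kaddr + size) by line. */
static void clean_dcache_area_pop(uintptr_t kaddr, size_t size)
{
	uintptr_t end = kaddr + size;
	uintptr_t cur = kaddr & ~(uintptr_t)(CACHE_LINE - 1); /* round down */

	for (; cur < end; cur += CACHE_LINE)
		dc_clean_line(cur);
	/* the real routine then completes the maintenance with "dsb sy" */
}
```

For example, cleaning 0x100 bytes starting at the unaligned address 0x1008 touches five lines (0x1000 through 0x1100), since both endpoints are rounded out to line boundaries.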

* [PATCH 6/6] arm64: uaccess: Implement *_flushcache variants
  2017-07-25 10:55 ` Robin Murphy
@ 2017-07-25 10:55   ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-07-25 10:55 UTC (permalink / raw)
  To: will.deacon, catalin.marinas; +Cc: mark.rutland, linux-arm-kernel, linux-nvdimm

Implement the set of copy functions that guarantee a clean cache upon
completion, as needed to support the pmem driver.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/Kconfig                  |  1 +
 arch/arm64/include/asm/string.h     |  4 ++++
 arch/arm64/include/asm/uaccess.h    | 12 ++++++++++
 arch/arm64/lib/Makefile             |  2 ++
 arch/arm64/lib/uaccess_flushcache.c | 47 +++++++++++++++++++++++++++++++++++++
 5 files changed, 66 insertions(+)
 create mode 100644 arch/arm64/lib/uaccess_flushcache.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0b0576a54724..e43a63b3d14b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -963,6 +963,7 @@ config ARM64_UAO
 config ARM64_PMEM
 	bool "Enable support for persistent memory"
 	select ARCH_HAS_PMEM_API
+	select ARCH_HAS_UACCESS_FLUSHCACHE
 	help
 	  Say Y to enable support for the persistent memory API based on the
 	  ARMv8.2 DCPoP feature.
diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index d0aa42907569..dd95d33a5bd5 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -52,6 +52,10 @@ extern void *__memset(void *, int, __kernel_size_t);
 #define __HAVE_ARCH_MEMCMP
 extern int memcmp(const void *, const void *, size_t);
 
+#ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
+#define __HAVE_ARCH_MEMCPY_FLUSHCACHE
+void memcpy_flushcache(void *dst, const void *src, size_t cnt);
+#endif
 
 #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
 
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 8f0a1de11e4a..bb056fee297c 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -347,4 +347,16 @@ extern long strncpy_from_user(char *dest, const char __user *src, long count);
 
 extern __must_check long strnlen_user(const char __user *str, long n);
 
+#ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
+struct page;
+void memcpy_page_flushcache(char *to, struct page *page, size_t offset, size_t len);
+extern unsigned long __must_check __copy_user_flushcache(void *to, const void __user *from, unsigned long n);
+
+static inline int __copy_from_user_flushcache(void *dst, const void __user *src, unsigned size)
+{
+	kasan_check_write(dst, size);
+	return __copy_user_flushcache(dst, src, size);
+}
+#endif
+
 #endif /* __ASM_UACCESS_H */
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index c86b7909ef31..a0abc142c92b 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -17,3 +17,5 @@ CFLAGS_atomic_ll_sc.o	:= -fcall-used-x0 -ffixed-x1 -ffixed-x2		\
 		   -fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12	\
 		   -fcall-saved-x13 -fcall-saved-x14 -fcall-saved-x15	\
 		   -fcall-saved-x18
+
+lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
new file mode 100644
index 000000000000..b6ceafdb8b72
--- /dev/null
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -0,0 +1,47 @@
+/*
+ * Copyright (C) 2017 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/uaccess.h>
+#include <asm/barrier.h>
+#include <asm/cacheflush.h>
+
+void memcpy_flushcache(void *dst, const void *src, size_t cnt)
+{
+	/*
+	 * We assume this should not be called with @dst pointing to
+	 * non-cacheable memory, such that we don't need an explicit
+	 * barrier to order the cache maintenance against the memcpy.
+	 */
+	memcpy(dst, src, cnt);
+	__clean_dcache_area_pop(dst, cnt);
+}
+EXPORT_SYMBOL_GPL(memcpy_flushcache);
+
+void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
+			    size_t len)
+{
+	memcpy_flushcache(to, page_address(page) + offset, len);
+}
+
+unsigned long __copy_user_flushcache(void *to, const void __user *from,
+				     unsigned long n)
+{
+	unsigned long rc = __arch_copy_from_user(to, from, n);
+
+	/* See above */
+	__clean_dcache_area_pop(to, n - rc);
+	return rc;
+}
-- 
2.12.2.dirty

^ permalink raw reply related	[flat|nested] 34+ messages in thread
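The contract of the two copy helpers above can be sketched as a userspace model; clean_to_pop() and the rc_after_fault parameter are illustrative stand-ins (in the kernel the residue comes from __arch_copy_from_user() faulting), not real API:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

static size_t cleaned_bytes;

/* Stand-in for __clean_dcache_area_pop(); just records how much was cleaned. */
static void clean_to_pop(void *addr, size_t size)
{
	(void)addr;
	cleaned_bytes = size;
}

/* Model of memcpy_flushcache(): an ordinary copy followed by a clean to the
 * Point of Persistence, so the destination data survives a power failure. */
static void model_memcpy_flushcache(void *dst, const void *src, size_t cnt)
{
	memcpy(dst, src, cnt);
	clean_to_pop(dst, cnt);
}

/* Model of __copy_user_flushcache(): if the user copy faults with rc bytes
 * left uncopied, only the (n - rc) bytes that actually arrived are cleaned. */
static unsigned long model_copy_user_flushcache(void *to, const void *from,
						unsigned long n,
						unsigned long rc_after_fault)
{
	unsigned long rc = rc_after_fault; /* stand-in for __arch_copy_from_user() */

	memcpy(to, from, n - rc);
	clean_to_pop(to, n - rc);
	return rc;
}
```

The second model captures the key detail of the patch: the clean covers exactly the bytes the copy delivered, not the full requested length.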

* Re: [PATCH 5/6] arm64: Implement pmem API support
  2017-07-25 10:55   ` Robin Murphy
@ 2017-08-04 15:25     ` Catalin Marinas
  -1 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2017-08-04 15:25 UTC (permalink / raw)
  To: Robin Murphy; +Cc: mark.rutland, linux-nvdimm, will.deacon, linux-arm-kernel

Two minor comments below.

On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -960,6 +960,17 @@ config ARM64_UAO
>  	  regular load/store instructions if the cpu does not implement the
>  	  feature.
>  
> +config ARM64_PMEM
> +	bool "Enable support for persistent memory"
> +	select ARCH_HAS_PMEM_API
> +	help
> +	  Say Y to enable support for the persistent memory API based on the
> +	  ARMv8.2 DCPoP feature.
> +
> +	  The feature is detected at runtime, and the kernel will use DC CVAC
> +	  operations if DC CVAP is not supported (following the behaviour of
> +	  DC CVAP itself if the system does not define a point of persistence).

Any reason not to have this default y?

> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -172,6 +172,20 @@ ENDPIPROC(__clean_dcache_area_poc)
>  ENDPROC(__dma_clean_area)
>  
>  /*
> + *	__clean_dcache_area_pop(kaddr, size)
> + *
> + * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> + * 	are cleaned to the PoP.
> + *
> + *	- kaddr   - kernel address
> + *	- size    - size in question
> + */
> +ENTRY(__clean_dcache_area_pop)
> +	dcache_by_line_op cvap, sy, x0, x1, x2, x3
> +	ret
> +ENDPIPROC(__clean_dcache_area_pop)
> +
> +/*
>   *	__dma_flush_area(start, size)
>   *
>   *	clean & invalidate D / U line
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index a682a0a2a0fa..a461a00ceb3e 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -183,3 +183,21 @@ bool kernel_page_present(struct page *page)
>  }
>  #endif /* CONFIG_HIBERNATION */
>  #endif /* CONFIG_DEBUG_PAGEALLOC */
> +
> +#ifdef CONFIG_ARCH_HAS_PMEM_API
> +#include <asm/cacheflush.h>
> +
> +static inline void arch_wb_cache_pmem(void *addr, size_t size)
> +{
> +	/* Ensure order against any prior non-cacheable writes */
> +	dmb(sy);
> +	__clean_dcache_area_pop(addr, size);
> +}

Could we keep the dmb() in the actual __clean_dcache_area_pop()
implementation?

I can do the changes myself if you don't have any objections.

-- 
Catalin

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/6] arm64: Implement pmem API support
  2017-08-04 15:25     ` Catalin Marinas
@ 2017-08-04 17:43       ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-08-04 17:43 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: mark.rutland, linux-nvdimm, will.deacon, linux-arm-kernel

On 04/08/17 16:25, Catalin Marinas wrote:
> Two minor comments below.
> 
> On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -960,6 +960,17 @@ config ARM64_UAO
>>  	  regular load/store instructions if the cpu does not implement the
>>  	  feature.
>>  
>> +config ARM64_PMEM
>> +	bool "Enable support for persistent memory"
>> +	select ARCH_HAS_PMEM_API
>> +	help
>> +	  Say Y to enable support for the persistent memory API based on the
>> +	  ARMv8.2 DCPoP feature.
>> +
>> +	  The feature is detected at runtime, and the kernel will use DC CVAC
>> +	  operations if DC CVAP is not supported (following the behaviour of
>> +	  DC CVAP itself if the system does not define a point of persistence).
> 
> Any reason not to have this default y?

Mostly because it's untested, and not actually useful without some way
of describing persistent memory regions to the kernel (I'm currently
trying to make sense of what exactly ARCH_HAS_MMIO_FLUSH is supposed to
mean in order to enable ACPI NFIT support).

There's also the potential issue that we can't disable ARCH_HAS_PMEM_API
at runtime on pre-v8.2 systems, where DC CVAC may not strictly provide the
guarantee of persistence that the API is supposed to imply. However, I
guess that's more of an open problem, since even on a v8.2 CPU reporting
(mandatory) DC CVAP support we've still no way to actually know whether
the interconnect/memory controller/etc. of any old system is up to the
job. At this point I'm mostly hoping that people will only be sticking
NVDIMMs into systems that *are* properly designed to support them, v8.2
CPUs or not.

>> --- a/arch/arm64/mm/cache.S
>> +++ b/arch/arm64/mm/cache.S
>> @@ -172,6 +172,20 @@ ENDPIPROC(__clean_dcache_area_poc)
>>  ENDPROC(__dma_clean_area)
>>  
>>  /*
>> + *	__clean_dcache_area_pop(kaddr, size)
>> + *
>> + * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
>> + * 	are cleaned to the PoP.
>> + *
>> + *	- kaddr   - kernel address
>> + *	- size    - size in question
>> + */
>> +ENTRY(__clean_dcache_area_pop)
>> +	dcache_by_line_op cvap, sy, x0, x1, x2, x3
>> +	ret
>> +ENDPIPROC(__clean_dcache_area_pop)
>> +
>> +/*
>>   *	__dma_flush_area(start, size)
>>   *
>>   *	clean & invalidate D / U line
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index a682a0a2a0fa..a461a00ceb3e 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -183,3 +183,21 @@ bool kernel_page_present(struct page *page)
>>  }
>>  #endif /* CONFIG_HIBERNATION */
>>  #endif /* CONFIG_DEBUG_PAGEALLOC */
>> +
>> +#ifdef CONFIG_ARCH_HAS_PMEM_API
>> +#include <asm/cacheflush.h>
>> +
>> +static inline void arch_wb_cache_pmem(void *addr, size_t size)
>> +{
>> +	/* Ensure order against any prior non-cacheable writes */
>> +	dmb(sy);
>> +	__clean_dcache_area_pop(addr, size);
>> +}
> 
> Could we keep the dmb() in the actual __clean_dcache_area_pop()
> implementation?

Mark held the opinion that it should follow the same pattern as the
other cache maintenance primitives - e.g. we don't have such a dmb in
__inval_cache_range(), but do place them at callsites where we know it
may be necessary (head.S) - and I found it hard to disagree. The callers
in patch #6 should never need a barrier, and arguably we may not even
need this one, since it looks like pmem should currently always be
mapped as MEMREMAP_WB if ARCH_HAS_PMEM_API.

> I can do the changes myself if you don't have any objections.

If you would prefer to play safe and move it back into the assembly
that's fine by me, but note that the associated comments in patch #6
should also be removed if so.

Robin.

^ permalink raw reply	[flat|nested] 34+ messages in thread
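The ordering question being discussed can be made concrete with a small userspace model; the names below are illustrative stand-ins, and the point is only the sequence: the barrier first, the clean second:

```c
#include <assert.h>
#include <string.h>

static const char *log_ops[4];
static int log_n;

static void trace(const char *op)
{
	log_ops[log_n++] = op;
}

/* Stand-ins for the barrier and the cache-maintenance routine. */
static void dmb_sy(void)         { trace("dmb"); }
static void clean_area_pop(void) { trace("clean"); }

/* Model of arch_wb_cache_pmem(): the barrier is issued first so that any
 * prior non-cacheable writes are ordered before the clean; a caller that
 * knows the region is mapped MEMREMAP_WB could call the clean directly,
 * which is the rationale for keeping the dmb at the call site. */
static void model_arch_wb_cache_pmem(void)
{
	dmb_sy();
	clean_area_pop();
}
```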

* Re: [PATCH 5/6] arm64: Implement pmem API support
  2017-08-04 17:43       ` Robin Murphy
@ 2017-08-04 18:09         ` Dan Williams
  -1 siblings, 0 replies; 34+ messages in thread
From: Dan Williams @ 2017-08-04 18:09 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Mark Rutland, linux-nvdimm, Catalin Marinas, Will Deacon,
	linux-arm-kernel

On Fri, Aug 4, 2017 at 10:43 AM, Robin Murphy <robin.murphy@arm.com> wrote:
> On 04/08/17 16:25, Catalin Marinas wrote:
>> Two minor comments below.
>>
>> On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -960,6 +960,17 @@ config ARM64_UAO
>>>        regular load/store instructions if the cpu does not implement the
>>>        feature.
>>>
>>> +config ARM64_PMEM
>>> +    bool "Enable support for persistent memory"
>>> +    select ARCH_HAS_PMEM_API
>>> +    help
>>> +      Say Y to enable support for the persistent memory API based on the
>>> +      ARMv8.2 DCPoP feature.
>>> +
>>> +      The feature is detected at runtime, and the kernel will use DC CVAC
>>> +      operations if DC CVAP is not supported (following the behaviour of
>>> +      DC CVAP itself if the system does not define a point of persistence).
>>
>> Any reason not to have this default y?
>
> Mostly because it's untested, and not actually useful without some way
> of describing persistent memory regions to the kernel (I'm currently
> trying to make sense of what exactly ARCH_HAS_MMIO_FLUSH is supposed to
> mean in order to enable ACPI NFIT support).

This is related to block-aperture support described by the NFIT where
a sliding-memory-mapped window can be programmed to access different
ranges of the NVDIMM. Before the window is programmed to a new
DIMM-address we need to flush any dirty data through the current
window setting to media. See the call to mmio_flush_range() in
acpi_nfit_blk_single_io(). I think it's ok to omit ARCH_HAS_MMIO_FLUSH
support, and add a configuration option to compile out the
block-aperture support.
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm

^ permalink raw reply	[flat|nested] 34+ messages in thread


* Re: [PATCH 5/6] arm64: Implement pmem API support
  2017-08-04 18:09         ` Dan Williams
@ 2017-08-04 18:35           ` Robin Murphy
  -1 siblings, 0 replies; 34+ messages in thread
From: Robin Murphy @ 2017-08-04 18:35 UTC (permalink / raw)
  To: Dan Williams
  Cc: Mark Rutland, linux-nvdimm, Catalin Marinas, Will Deacon,
	linux-arm-kernel

On 04/08/17 19:09, Dan Williams wrote:
> On Fri, Aug 4, 2017 at 10:43 AM, Robin Murphy <robin.murphy@arm.com> wrote:
>> On 04/08/17 16:25, Catalin Marinas wrote:
>>> Two minor comments below.
>>>
>>> On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
>>>> --- a/arch/arm64/Kconfig
>>>> +++ b/arch/arm64/Kconfig
>>>> @@ -960,6 +960,17 @@ config ARM64_UAO
>>>>        regular load/store instructions if the cpu does not implement the
>>>>        feature.
>>>>
>>>> +config ARM64_PMEM
>>>> +    bool "Enable support for persistent memory"
>>>> +    select ARCH_HAS_PMEM_API
>>>> +    help
>>>> +      Say Y to enable support for the persistent memory API based on the
>>>> +      ARMv8.2 DCPoP feature.
>>>> +
>>>> +      The feature is detected at runtime, and the kernel will use DC CVAC
>>>> +      operations if DC CVAP is not supported (following the behaviour of
>>>> +      DC CVAP itself if the system does not define a point of persistence).
>>>
>>> Any reason not to have this default y?
>>
>> Mostly because it's untested, and not actually useful without some way
>> of describing persistent memory regions to the kernel (I'm currently
>> trying to make sense of what exactly ARCH_HAS_MMIO_FLUSH is supposed to
>> mean in order to enable ACPI NFIT support).
> 
> This is related to block-aperture support described by the NFIT where
> a sliding-memory-mapped window can be programmed to access different
> ranges of the NVDIMM. Before the window is programmed to a new
> DIMM-address we need to flush any dirty data through the current
> window setting to media. See the call to mmio_flush_range() in
> acpi_nfit_blk_single_io(). I think it's ok to omit ARCH_HAS_MMIO_FLUSH
> support, and add a configuration option to compile out the
> block-aperture support.

Oh, I have every intention of implementing it one way or another if
necessary - it's not difficult, it's just been a question of working
through the NFIT code to figure out the subtleties of translation to
arm64 ;)

If mmio_flush_range() is for true MMIO (i.e. __iomem) mappings, then
arm64 should only need a barrier, rather than actual cache operations.
If on the other hand it's misleadingly named and only actually used on
MEMREMAP_WB mappings (as I'm starting to think it might be), then I can't
help thinking it could simply go away in favour of arch_wb_pmem(), since
that now seems to have those same semantics and intent, plus a much more
appropriate name.

Robin.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/6] arm64: Implement pmem API support
  2017-08-04 18:35           ` Robin Murphy
@ 2017-08-04 19:36             ` Dan Williams
  -1 siblings, 0 replies; 34+ messages in thread
From: Dan Williams @ 2017-08-04 19:36 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Mark Rutland, linux-nvdimm, Catalin Marinas, Will Deacon,
	linux-arm-kernel

On Fri, Aug 4, 2017 at 11:35 AM, Robin Murphy <robin.murphy@arm.com> wrote:
> On 04/08/17 19:09, Dan Williams wrote:
>> On Fri, Aug 4, 2017 at 10:43 AM, Robin Murphy <robin.murphy@arm.com> wrote:
>>> On 04/08/17 16:25, Catalin Marinas wrote:
>>>> Two minor comments below.
>>>>
>>>> On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
>>>>> --- a/arch/arm64/Kconfig
>>>>> +++ b/arch/arm64/Kconfig
>>>>> @@ -960,6 +960,17 @@ config ARM64_UAO
>>>>>        regular load/store instructions if the cpu does not implement the
>>>>>        feature.
>>>>>
>>>>> +config ARM64_PMEM
>>>>> +    bool "Enable support for persistent memory"
>>>>> +    select ARCH_HAS_PMEM_API
>>>>> +    help
>>>>> +      Say Y to enable support for the persistent memory API based on the
>>>>> +      ARMv8.2 DCPoP feature.
>>>>> +
>>>>> +      The feature is detected at runtime, and the kernel will use DC CVAC
>>>>> +      operations if DC CVAP is not supported (following the behaviour of
>>>>> +      DC CVAP itself if the system does not define a point of persistence).
>>>>
>>>> Any reason not to have this default y?
>>>
>>> Mostly because it's untested, and not actually useful without some way
>>> of describing persistent memory regions to the kernel (I'm currently
>>> trying to make sense of what exactly ARCH_HAS_MMIO_FLUSH is supposed to
>>> mean in order to enable ACPI NFIT support).
>>
>> This is related to block-aperture support described by the NFIT where
>> a sliding-memory-mapped window can be programmed to access different
>> ranges of the NVDIMM. Before the window is programmed to a new
>> DIMM-address we need to flush any dirty data through the current
>> window setting to media. See the call to mmio_flush_range() in
>> acpi_nfit_blk_single_io(). I think it's ok to omit ARCH_HAS_MMIO_FLUSH
>> support, and add a configuration option to compile out the
>> block-aperture support.
>
> Oh, I have every intention of implementing it one way or another if
> necessary - it's not difficult, it's just been a question of working
> through the NFIT code to figure out the subtleties of translation to
> arm64 ;)
>
> If mmio_flush_range() is for true MMIO (i.e. __iomem) mappings, then
> arm64 should only need a barrier, rather than actual cache operations.
> If on the other hand it's misleadingly named and only actually used on
> MEMREMAP_WB mappings (as I'm staring to think it might be), then I can't
> help thinking it could simply go away in favour of arch_wb_pmem(), since
> that now seems to have those same semantics and intent, plus a much more
> appropriate name.
>

The mapping type of block-apertures is up to the architecture, so you
could mark them uncacheable and not worry about mmio_flush_range().
Also, arch_wb_pmem() is not a replacement for mmio_flush_range() since
we also need the cache to be invalidated. arch_wb_pmem() is allowed to
leave clean cache lines present.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 6/6] arm64: uaccess: Implement *_flushcache variants
  2017-07-25 10:55   ` Robin Murphy
@ 2017-08-07 18:32     ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2017-08-07 18:32 UTC (permalink / raw)
  To: Robin Murphy
  Cc: mark.rutland, linux-nvdimm, catalin.marinas, linux-arm-kernel

On Tue, Jul 25, 2017 at 11:55:43AM +0100, Robin Murphy wrote:
> Implement the set of copy functions with guarantees of a clean cache
> upon completion necessary to support the pmem driver.
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>  arch/arm64/Kconfig                  |  1 +
>  arch/arm64/include/asm/string.h     |  4 ++++
>  arch/arm64/include/asm/uaccess.h    | 12 ++++++++++
>  arch/arm64/lib/Makefile             |  2 ++
>  arch/arm64/lib/uaccess_flushcache.c | 47 +++++++++++++++++++++++++++++++++++++
>  5 files changed, 66 insertions(+)
>  create mode 100644 arch/arm64/lib/uaccess_flushcache.c

[...]

> diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> new file mode 100644
> index 000000000000..b6ceafdb8b72
> --- /dev/null
> +++ b/arch/arm64/lib/uaccess_flushcache.c
> @@ -0,0 +1,47 @@
> +/*
> + * Copyright (C) 2017 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/uaccess.h>
> +#include <asm/barrier.h>
> +#include <asm/cacheflush.h>
> +
> +void memcpy_flushcache(void *dst, const void *src, size_t cnt)
> +{
> +	/*
> +	 * We assume this should not be called with @dst pointing to
> +	 * non-cacheable memory, such that we don't need an explicit
> +	 * barrier to order the cache maintenance against the memcpy.
> +	 */
> +	memcpy(dst, src, cnt);
> +	__clean_dcache_area_pop(dst, cnt);
> +}
> +EXPORT_SYMBOL_GPL(memcpy_flushcache);
> +
> +void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
> +			    size_t len)
> +{
> +	memcpy_flushcache(to, page_address(page) + offset, len);
> +}
> +
> +unsigned long __copy_user_flushcache(void *to, const void __user *from,
> +				     unsigned long n)
> +{
> +	unsigned long rc = __arch_copy_from_user(to, from, n);

I'm a bit nervous calling the bare user accessor here without an access_ok
check beforehand. Can we rely on the caller having done the check for us? I
tried to follow the breadcrumbs back out, but I noticed that other iov
iterators (such as copy_from_iter) *do* do the bounds check, whereas the
pmem version (copy_from_iter_nocache) doesn't appear to check the address.

Is that right?

Will

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/6] arm64: Implement pmem API support
  2017-08-04 15:25     ` Catalin Marinas
@ 2017-08-07 18:33       ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2017-08-07 18:33 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: mark.rutland, linux-nvdimm, Robin Murphy, linux-arm-kernel

On Fri, Aug 04, 2017 at 04:25:42PM +0100, Catalin Marinas wrote:
> Two minor comments below.
> 
> On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -960,6 +960,17 @@ config ARM64_UAO
> >  	  regular load/store instructions if the cpu does not implement the
> >  	  feature.
> >  
> > +config ARM64_PMEM
> > +	bool "Enable support for persistent memory"
> > +	select ARCH_HAS_PMEM_API
> > +	help
> > +	  Say Y to enable support for the persistent memory API based on the
> > +	  ARMv8.2 DCPoP feature.
> > +
> > +	  The feature is detected at runtime, and the kernel will use DC CVAC
> > +	  operations if DC CVAP is not supported (following the behaviour of
> > +	  DC CVAP itself if the system does not define a point of persistence).
> 
> Any reason not to have this default y?
> 
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -172,6 +172,20 @@ ENDPIPROC(__clean_dcache_area_poc)
> >  ENDPROC(__dma_clean_area)
> >  
> >  /*
> > + *	__clean_dcache_area_pop(kaddr, size)
> > + *
> > + * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> > + * 	are cleaned to the PoP.
> > + *
> > + *	- kaddr   - kernel address
> > + *	- size    - size in question
> > + */
> > +ENTRY(__clean_dcache_area_pop)
> > +	dcache_by_line_op cvap, sy, x0, x1, x2, x3
> > +	ret
> > +ENDPIPROC(__clean_dcache_area_pop)
> > +
> > +/*
> >   *	__dma_flush_area(start, size)
> >   *
> >   *	clean & invalidate D / U line
> > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > index a682a0a2a0fa..a461a00ceb3e 100644
> > --- a/arch/arm64/mm/pageattr.c
> > +++ b/arch/arm64/mm/pageattr.c
> > @@ -183,3 +183,21 @@ bool kernel_page_present(struct page *page)
> >  }
> >  #endif /* CONFIG_HIBERNATION */
> >  #endif /* CONFIG_DEBUG_PAGEALLOC */
> > +
> > +#ifdef CONFIG_ARCH_HAS_PMEM_API
> > +#include <asm/cacheflush.h>
> > +
> > +static inline void arch_wb_cache_pmem(void *addr, size_t size)
> > +{
> > +	/* Ensure order against any prior non-cacheable writes */
> > +	dmb(sy);
> > +	__clean_dcache_area_pop(addr, size);
> > +}
> 
> Could we keep the dmb() in the actual __clean_dcache_area_pop()
> implementation?
> 
> I can do the changes myself if you don't have any objections.

I *think* the DMB can also be reworked to use the outer-shareable domain,
much as we do for the dma_* barriers.

Will

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 0/6] arm64 pmem support
  2017-07-25 10:55 ` Robin Murphy
@ 2017-08-07 18:34   ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2017-08-07 18:34 UTC (permalink / raw)
  To: Robin Murphy
  Cc: mark.rutland, linux-nvdimm, catalin.marinas, linux-arm-kernel

On Tue, Jul 25, 2017 at 11:55:37AM +0100, Robin Murphy wrote:
> With the latest updates to the pmem API, the arch code contribution
> becomes very straightforward to wire up - I think there's about as
> much code here to just cope with the existence of our new instruction
> as there is to actually make use of it. I don't have access to any
> NVDIMMs nor suitable hardware to put them in, so this is written purely
> to spec - the extent of testing has been the feature detection on a
> v8.2 Fast Model vs. v8.0 systems.
> 
> Patch #1 could go in as a fix ahead of the rest; it just needs to come
> before patch #5 to prevent that blowing up the build.

Modulo my two comments:

Reviewed-by: Will Deacon <will.deacon@arm.com>

Will

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 6/6] arm64: uaccess: Implement *_flushcache variants
  2017-07-25 10:55   ` Robin Murphy
@ 2017-08-10 10:58     ` Arnd Bergmann
  -1 siblings, 0 replies; 34+ messages in thread
From: Arnd Bergmann @ 2017-08-10 10:58 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Mark Rutland, linux-nvdimm, Catalin Marinas, Will Deacon, Linux ARM

On Tue, Jul 25, 2017 at 12:55 PM, Robin Murphy <robin.murphy@arm.com> wrote:
> Implement the set of copy functions with guarantees of a clean cache
> upon completion necessary to support the pmem driver.
>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>  arch/arm64/Kconfig                  |  1 +
>  arch/arm64/include/asm/string.h     |  4 ++++
>  arch/arm64/include/asm/uaccess.h    | 12 ++++++++++
>  arch/arm64/lib/Makefile             |  2 ++
>  arch/arm64/lib/uaccess_flushcache.c | 47 +++++++++++++++++++++++++++++++++++++
>  5 files changed, 66 insertions(+)
>  create mode 100644 arch/arm64/lib/uaccess_flushcache.c

It looks like Catalin applied part of this patch but forgot to add
arch/arm64/lib/uaccess_flushcache.c

      Arnd

* Re: [PATCH 6/6] arm64: uaccess: Implement *_flushcache variants
  2017-08-10 10:58     ` Arnd Bergmann
@ 2017-08-10 14:12       ` Catalin Marinas
  -1 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2017-08-10 14:12 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Mark Rutland, linux-nvdimm, Will Deacon, Robin Murphy, Linux ARM

On Thu, Aug 10, 2017 at 12:58:45PM +0200, Arnd Bergmann wrote:
> On Tue, Jul 25, 2017 at 12:55 PM, Robin Murphy <robin.murphy@arm.com> wrote:
> > Implement the set of copy functions with guarantees of a clean cache
> > upon completion necessary to support the pmem driver.
> >
> > Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> > ---
> >  arch/arm64/Kconfig                  |  1 +
> >  arch/arm64/include/asm/string.h     |  4 ++++
> >  arch/arm64/include/asm/uaccess.h    | 12 ++++++++++
> >  arch/arm64/lib/Makefile             |  2 ++
> >  arch/arm64/lib/uaccess_flushcache.c | 47 +++++++++++++++++++++++++++++++++++++
> >  5 files changed, 66 insertions(+)
> >  create mode 100644 arch/arm64/lib/uaccess_flushcache.c
> 
> It looks like Catalin applied part of this patch but forgot to add
> arch/arm64/lib/uaccess_flushcache.c

Added it now, thanks.

-- 
Catalin

end of thread, other threads:[~2017-08-10 14:12 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-25 10:55 [PATCH 0/6] arm64 pmem support Robin Murphy
2017-07-25 10:55 ` [PATCH 1/6] arm64: mm: Fix set_memory_valid() declaration Robin Murphy
2017-07-25 10:55 ` [PATCH 2/6] arm64: Convert __inval_cache_range() to area-based Robin Murphy
2017-07-25 10:55 ` [PATCH 3/6] arm64: Expose DC CVAP to userspace Robin Murphy
2017-07-25 10:55 ` [PATCH 4/6] arm64: Handle trapped DC CVAP Robin Murphy
2017-07-25 10:55 ` [PATCH 5/6] arm64: Implement pmem API support Robin Murphy
2017-08-04 15:25   ` Catalin Marinas
2017-08-04 17:43     ` Robin Murphy
2017-08-04 18:09       ` Dan Williams
2017-08-04 18:35         ` Robin Murphy
2017-08-04 19:36           ` Dan Williams
2017-08-07 18:33     ` Will Deacon
2017-07-25 10:55 ` [PATCH 6/6] arm64: uaccess: Implement *_flushcache variants Robin Murphy
2017-08-07 18:32   ` Will Deacon
2017-08-10 10:58   ` Arnd Bergmann
2017-08-10 14:12     ` Catalin Marinas
2017-08-07 18:34 ` [PATCH 0/6] arm64 pmem support Will Deacon
