All of lore.kernel.org
* [PATCH v3 0/3]  support pmem on arm64
@ 2016-07-15  2:46 ` Kwangwoo Lee
  0 siblings, 0 replies; 18+ messages in thread
From: Kwangwoo Lee @ 2016-07-15  2:46 UTC (permalink / raw)
  To: linux-arm-kernel, linux-nvdimm, Catalin Marinas, Will Deacon,
	Mark Rutland, Ross Zwisler, Dan Williams, Vishal Verma
  Cc: Kwangwoo Lee, linux-kernel, Woosuk Chung

This patch set adds support for the pmem driver on the arm64 architecture,
which can be used with NVDIMMs (Non-Volatile DIMMs). It has been tested on
QEMU with NVDIMM ACPI/NFIT support on the AArch64 virt platform.

Until the pmem changes posted to the nvdimm list, which assume support for
ADR (Asynchronous DRAM Refresh) or direct flushing based on the flush hints
provided by ACPI/NFIT, are merged, this pmem implementation on arm64 can
serve as a way to evaluate pmem on the arm64 architecture.

Change log:

v3)
do not use an access helper in arch_memcpy_to_pmem().
split the cache-related code into a separate patch set.
fix some comments in pmem.h.

v2)
rewrite the functions assuming a MEMREMAP_WB mapping.
rewrite the comments for arm64 in pmem.h.
add __clean_dcache_area() to clean cache lines to the PoC.

v1)
add pmem support codes.

Kwangwoo Lee (3):
  arm64: mm: add __clean_dcache_area()
  arm64: mm: add mmio_flush_range() to support pmem
  arm64: pmem: add pmem support codes

 arch/arm64/Kconfig                  |   2 +
 arch/arm64/include/asm/cacheflush.h |   3 +
 arch/arm64/include/asm/pmem.h       | 143 ++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/cache.S               |  18 +++++
 4 files changed, 166 insertions(+)
 create mode 100644 arch/arm64/include/asm/pmem.h

-- 
2.5.0

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm


* [PATCH v3 1/3] arm64: mm: add __clean_dcache_area()
  2016-07-15  2:46 ` Kwangwoo Lee
@ 2016-07-15  2:46   ` Kwangwoo Lee
  -1 siblings, 0 replies; 18+ messages in thread
From: Kwangwoo Lee @ 2016-07-15  2:46 UTC (permalink / raw)
  To: linux-arm-kernel, linux-nvdimm, Catalin Marinas, Will Deacon,
	Mark Rutland, Ross Zwisler, Dan Williams, Vishal Verma
  Cc: Kwangwoo Lee, linux-kernel, Woosuk Chung

Ensure that D-cache lines are cleaned to the PoC (Point of Coherency).

This function is called by arch_wb_cache_pmem() to clean the cache lines
while keeping the data in the cache for subsequent accesses.

Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
---
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/mm/cache.S               | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index c64268d..903a94f 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -68,6 +68,7 @@
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
+extern void __clean_dcache_area(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 6df0706..5a350e4 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -93,6 +93,24 @@ ENTRY(__flush_dcache_area)
 ENDPIPROC(__flush_dcache_area)
 
 /*
+ *	__clean_dcache_area(kaddr, size)
+ *
+ * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	are cleaned to the PoC.
+ *
+ *	- kaddr   - kernel address
+ *	- size    - size in question
+ */
+ENTRY(__clean_dcache_area)
+alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
+	dcache_by_line_op cvac, sy, x0, x1, x2, x3
+alternative_else
+	dcache_by_line_op civac, sy, x0, x1, x2, x3
+alternative_endif
+	ret
+ENDPROC(__clean_dcache_area)
+
+/*
  *	__clean_dcache_area_pou(kaddr, size)
  *
  * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
-- 
2.5.0


* [PATCH v3 2/3] arm64: mm: add mmio_flush_range() to support pmem
  2016-07-15  2:46 ` Kwangwoo Lee
@ 2016-07-15  2:46   ` Kwangwoo Lee
  -1 siblings, 0 replies; 18+ messages in thread
From: Kwangwoo Lee @ 2016-07-15  2:46 UTC (permalink / raw)
  To: linux-arm-kernel, linux-nvdimm, Catalin Marinas, Will Deacon,
	Mark Rutland, Ross Zwisler, Dan Williams, Vishal Verma
  Cc: Kwangwoo Lee, linux-kernel, Woosuk Chung

To enable pmem on arm64, mmio_flush_range() is required for the build to
succeed. When the NVDIMM is used in BLK mode rather than pmem mode, it is
called from acpi_nfit_blk_single_io() via nd_blk_make_request() on reads
with the NFIT_BLK_READ_FLUSH flag set.

The function cleans and invalidates the cache lines for the given addr and
size arguments. Thus, it can be mapped to __flush_dcache_area().

Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
---
 arch/arm64/Kconfig                  | 1 +
 arch/arm64/include/asm/cacheflush.h | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4f43622..12546ce 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -15,6 +15,7 @@ config ARM64
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
+	select ARCH_HAS_MMIO_FLUSH
 	select ARM_AMBA
 	select ARM_ARCH_TIMER
 	select ARM_GIC
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 903a94f..fba18e4 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -134,6 +134,8 @@ static inline void __flush_icache_all(void)
  */
 #define flush_icache_page(vma,page)	do { } while (0)
 
+#define mmio_flush_range(addr, size)	__flush_dcache_area(addr, size)
+
 /*
  * Not required on AArch64 (PIPT or VIPT non-aliasing D-cache).
  */
-- 
2.5.0


* [PATCH v3 3/3] arm64: pmem: add pmem support codes
  2016-07-15  2:46 ` Kwangwoo Lee
@ 2016-07-15  2:46   ` Kwangwoo Lee
  -1 siblings, 0 replies; 18+ messages in thread
From: Kwangwoo Lee @ 2016-07-15  2:46 UTC (permalink / raw)
  To: linux-arm-kernel, linux-nvdimm, Catalin Marinas, Will Deacon,
	Mark Rutland, Ross Zwisler, Dan Williams, Vishal Verma
  Cc: Kwangwoo Lee, linux-kernel, Woosuk Chung

This patch adds pmem support to the arm64 platform. The limitation of the
current implementation is that the persistence of pmem on NVDIMMs is not
yet guaranteed on arm64.

The pmem driver expects persistence to be guaranteed by arch_wmb_pmem(),
but the PoP (Point of Persistence) will only be supported from ARMv8.2
with the DC CVAP instruction. Until then, __arch_has_wmb_pmem() returns
false and the following warning is shown:

[    6.250487] nd_pmem namespace0.0: unable to guarantee persistence of writes
[    6.305000] pmem0: detected capacity change from 0 to 1073741824
...
[   29.215249] EXT4-fs (pmem0): DAX enabled. Warning: EXPERIMENTAL, use at your own risk
[   29.308960] EXT4-fs (pmem0): mounted filesystem with ordered data mode. Opts: dax

Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
---
 arch/arm64/Kconfig            |   1 +
 arch/arm64/include/asm/pmem.h | 143 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)
 create mode 100644 arch/arm64/include/asm/pmem.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 12546ce..e14fd31 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -16,6 +16,7 @@ config ARM64
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAS_MMIO_FLUSH
+	select ARCH_HAS_PMEM_API
 	select ARM_AMBA
 	select ARM_ARCH_TIMER
 	select ARM_GIC
diff --git a/arch/arm64/include/asm/pmem.h b/arch/arm64/include/asm/pmem.h
new file mode 100644
index 0000000..0bcfd87
--- /dev/null
+++ b/arch/arm64/include/asm/pmem.h
@@ -0,0 +1,143 @@
+/*
+ * Based on arch/x86/include/asm/pmem.h
+ *
+ * Copyright(c) 2016 SK hynix Inc. Kwangwoo Lee <kwangwoo.lee@sk.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+#ifndef __ASM_PMEM_H__
+#define __ASM_PMEM_H__
+
+#ifdef CONFIG_ARCH_HAS_PMEM_API
+#include <linux/uaccess.h>
+#include <asm/cacheflush.h>
+
+/**
+ * arch_memcpy_to_pmem - copy data to persistent memory
+ * @dst: destination buffer for the copy
+ * @src: source buffer for the copy
+ * @n: length of the copy in bytes
+ *
+ * Copy data to persistent memory media. If ARCH_HAS_PMEM_API is defined,
+ * then MEMREMAP_WB is used for memremap() during probe. A subsequent
+ * arch_wmb_pmem() needs to guarantee durability.
+ */
+static inline void arch_memcpy_to_pmem(void __pmem *dst, const void *src,
+		size_t n)
+{
+	memcpy((void __force *) dst, src, n);
+	__flush_dcache_area(dst, n);
+}
+
+static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src,
+		size_t n)
+{
+	memcpy(dst, (void __force *) src, n);
+	return 0;
+}
+
+/**
+ * arch_wmb_pmem - synchronize writes to persistent memory
+ *
+ * After a series of arch_memcpy_to_pmem() operations this needs to be called to
+ * ensure that written data is durable on persistent memory media.
+ */
+static inline void arch_wmb_pmem(void)
+{
+	/* pmem writes have been done in arch_memcpy_to_pmem() */
+	wmb();
+
+	/*
+	 * ARMv8.2 will support DC CVAP to ensure the Point of Persistence;
+	 * this is the place for an API like __clean_dcache_area_pop().
+	 */
+}
+
+/**
+ * arch_wb_cache_pmem - write back a cache range
+ * @vaddr:	virtual start address
+ * @size:	number of bytes to write back
+ *
+ * Write back a cache range. Leave data in cache for performance of next access.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline void arch_wb_cache_pmem(void __pmem *addr, size_t size)
+{
+	/*
+	 * Just clean the cache to the PoC. The data remains in the cache for
+	 * the next access. arch_wmb_pmem() needs to be the point that ensures
+	 * persistence in the current implementation.
+	 */
+	__clean_dcache_area(addr, size);
+}
+
+/**
+ * arch_copy_from_iter_pmem - copy data from an iterator to PMEM
+ * @addr:	PMEM destination address
+ * @bytes:	number of bytes to copy
+ * @i:		iterator with source data
+ *
+ * Copy data from the iterator 'i' to the PMEM buffer starting at 'addr'.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline size_t arch_copy_from_iter_pmem(void __pmem *addr, size_t bytes,
+		struct iov_iter *i)
+{
+	void *vaddr = (void __force *)addr;
+	size_t len;
+
+	/*
+	 * ARCH_HAS_NOCACHE_UACCESS is not defined and the default mapping is
+	 * MEMREMAP_WB. Instead of using copy_from_iter_nocache(), use cacheable
+	 * version and call arch_wb_cache_pmem().
+	 */
+	len = copy_from_iter(vaddr, bytes, i);
+
+	arch_wb_cache_pmem(addr, bytes);
+
+	return len;
+}
+
+/**
+ * arch_clear_pmem - zero a PMEM memory range
+ * @addr:	virtual start address
+ * @size:	number of bytes to zero
+ *
+ * Write zeros into the memory range starting at 'addr' for 'size' bytes.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline void arch_clear_pmem(void __pmem *addr, size_t size)
+{
+	void *vaddr = (void __force *)addr;
+
+	memset(vaddr, 0, size);
+	arch_wb_cache_pmem(addr, size);
+}
+
+/**
+ * arch_invalidate_pmem - invalidate a PMEM memory range
+ * @addr:	virtual start address
+ * @size:	number of bytes to invalidate
+ *
+ * After finishing ARS(Address Range Scrubbing), clean and invalidate the
+ * address range.
+ */
+static inline void arch_invalidate_pmem(void __pmem *addr, size_t size)
+{
+	__flush_dcache_area(addr, size);
+}
+
+static inline bool __arch_has_wmb_pmem(void)
+{
+	/* return false until arch_wmb_pmem() guarantees the PoP on ARMv8.2. */
+	return false;
+}
+#endif /* CONFIG_ARCH_HAS_PMEM_API */
+#endif /* __ASM_PMEM_H__ */
-- 
2.5.0

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 3/3] arm64: pmem: add pmem support codes
@ 2016-07-15  2:46   ` Kwangwoo Lee
  0 siblings, 0 replies; 18+ messages in thread
From: Kwangwoo Lee @ 2016-07-15  2:46 UTC (permalink / raw)
  To: linux-arm-kernel, linux-nvdimm, Catalin Marinas, Will Deacon,
	Mark Rutland, Ross Zwisler, Dan Williams, Vishal Verma
  Cc: Kwangwoo Lee, Woosuk Chung, Hyunchul Kim, linux-kernel

This patch adds support pmem on arm64 platform. The limitation of
current implementation is that the persistency of pmem on NVDIMM
is not guaranteed on arm64 yet.

pmem driver expects that the persistency need to be guaranteed in
arch_wmb_pmem(), but the PoP(Point of Persistency) is going to be
supported on ARMv8.2 with DC CVAP instruction. Until then,
__arch_has_wmb_pmem() will return false and shows warning message.

[    6.250487] nd_pmem namespace0.0: unable to guarantee persistence of writes
[    6.305000] pmem0: detected capacity change from 0 to 1073741824
...
[   29.215249] EXT4-fs (pmem0): DAX enabled. Warning: EXPERIMENTAL, use at your own risk
[   29.308960] EXT4-fs (pmem0): mounted filesystem with ordered data mode. Opts: dax

Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
---
 arch/arm64/Kconfig            |   1 +
 arch/arm64/include/asm/pmem.h | 143 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)
 create mode 100644 arch/arm64/include/asm/pmem.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 12546ce..e14fd31 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -16,6 +16,7 @@ config ARM64
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAS_MMIO_FLUSH
+	select ARCH_HAS_PMEM_API
 	select ARM_AMBA
 	select ARM_ARCH_TIMER
 	select ARM_GIC
diff --git a/arch/arm64/include/asm/pmem.h b/arch/arm64/include/asm/pmem.h
new file mode 100644
index 0000000..0bcfd87
--- /dev/null
+++ b/arch/arm64/include/asm/pmem.h
@@ -0,0 +1,143 @@
+/*
+ * Based on arch/x86/include/asm/pmem.h
+ *
+ * Copyright(c) 2016 SK hynix Inc. Kwangwoo Lee <kwangwoo.lee@sk.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+#ifndef __ASM_PMEM_H__
+#define __ASM_PMEM_H__
+
+#ifdef CONFIG_ARCH_HAS_PMEM_API
+#include <linux/uaccess.h>
+#include <asm/cacheflush.h>
+
+/**
+ * arch_memcpy_to_pmem - copy data to persistent memory
+ * @dst: destination buffer for the copy
+ * @src: source buffer for the copy
+ * @n: length of the copy in bytes
+ *
+ * Copy data to persistent memory media. if ARCH_HAS_PMEM_API is defined,
+ * then MEMREMAP_WB is used to memremap() during probe. A subsequent
+ * arch_wmb_pmem() need to guarantee durability.
+ */
+static inline void arch_memcpy_to_pmem(void __pmem *dst, const void *src,
+		size_t n)
+{
+	memcpy((void __force *) dst, src, n);
+	__flush_dcache_area(dst, n);
+}
+
+static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src,
+		size_t n)
+{
+	memcpy(dst, (void __force *) src, n);
+	return 0;
+}
+
+/**
+ * arch_wmb_pmem - synchronize writes to persistent memory
+ *
+ * After a series of arch_memcpy_to_pmem() operations this need to be called to
+ * ensure that written data is durable on persistent memory media.
+ */
+static inline void arch_wmb_pmem(void)
+{
+	/* pmem writes has been done in arch_memcpy_to_pmem() */
+	wmb();
+
+	/*
+	 * ARMv8.2 will support DC CVAP to ensure Point-of-Persistency and here
+	 * is the point for the API like __clean_dcache_area_pop().
+	 */
+}
+
+/**
+ * arch_wb_cache_pmem - write back a cache range
+ * @vaddr:	virtual start address
+ * @size:	number of bytes to write back
+ *
+ * Write back a cache range. Leave data in cache for performance of next access.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline void arch_wb_cache_pmem(void __pmem *addr, size_t size)
+{
+	/*
+	 * Just clean cache to PoC. The data in cache is remained to use the
+	 * next access. arch_wmb_pmem() need to be the point to ensure the
+	 * persistency under the current implementation.
+	 */
+	__clean_dcache_area(addr, size);
+}
+
+/**
+ * arch_copy_from_iter_pmem - copy data from an iterator to PMEM
+ * @addr:	PMEM destination address
+ * @bytes:	number of bytes to copy
+ * @i:		iterator with source data
+ *
+ * Copy data from the iterator 'i' to the PMEM buffer starting at 'addr'.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline size_t arch_copy_from_iter_pmem(void __pmem *addr, size_t bytes,
+		struct iov_iter *i)
+{
+	void *vaddr = (void __force *)addr;
+	size_t len;
+
+	/*
+	 * ARCH_HAS_NOCACHE_UACCESS is not defined and the default mapping is
+	 * MEMREMAP_WB. Instead of using copy_from_iter_nocache(), use cacheable
+	 * version and call arch_wb_cache_pmem().
+	 */
+	len = copy_from_iter(vaddr, bytes, i);
+
+	arch_wb_cache_pmem(addr, bytes);
+
+	return len;
+}
+
+/**
+ * arch_clear_pmem - zero a PMEM memory range
+ * @addr:	virtual start address
+ * @size:	number of bytes to zero
+ *
+ * Write zeros into the memory range starting at 'addr' for 'size' bytes.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline void arch_clear_pmem(void __pmem *addr, size_t size)
+{
+	void *vaddr = (void __force *)addr;
+
+	memset(vaddr, 0, size);
+	arch_wb_cache_pmem(addr, size);
+}
+
+/**
+ * arch_invalidate_pmem - invalidate a PMEM memory range
+ * @addr:	virtual start address
+ * @size:	number of bytes to zero
+ *
+ * After finishing ARS(Address Range Scrubbing), clean and invalidate the
+ * address range.
+ */
+static inline void arch_invalidate_pmem(void __pmem *addr, size_t size)
+{
+	__flush_dcache_area(addr, size);
+}
+
+static inline bool __arch_has_wmb_pmem(void)
+{
+	/* return false until arch_wmb_pmem() guarantee PoP on ARMv8.2. */
+	return false;
+}
+#endif /* CONFIG_ARCH_HAS_PMEM_API */
+#endif /* __ASM_PMEM_H__ */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 3/3] arm64: pmem: add pmem support codes
@ 2016-07-15  2:46   ` Kwangwoo Lee
  0 siblings, 0 replies; 18+ messages in thread
From: Kwangwoo Lee @ 2016-07-15  2:46 UTC (permalink / raw)
  To: linux-arm-kernel

This patch adds support pmem on arm64 platform. The limitation of
current implementation is that the persistency of pmem on NVDIMM
is not guaranteed on arm64 yet.

pmem driver expects that the persistency need to be guaranteed in
arch_wmb_pmem(), but the PoP(Point of Persistency) is going to be
supported on ARMv8.2 with DC CVAP instruction. Until then,
__arch_has_wmb_pmem() will return false and shows warning message.

[    6.250487] nd_pmem namespace0.0: unable to guarantee persistence of writes
[    6.305000] pmem0: detected capacity change from 0 to 1073741824
...
[   29.215249] EXT4-fs (pmem0): DAX enabled. Warning: EXPERIMENTAL, use at your own risk
[   29.308960] EXT4-fs (pmem0): mounted filesystem with ordered data mode. Opts: dax

Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
---
 arch/arm64/Kconfig            |   1 +
 arch/arm64/include/asm/pmem.h | 143 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)
 create mode 100644 arch/arm64/include/asm/pmem.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 12546ce..e14fd31 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -16,6 +16,7 @@ config ARM64
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAS_MMIO_FLUSH
+	select ARCH_HAS_PMEM_API
 	select ARM_AMBA
 	select ARM_ARCH_TIMER
 	select ARM_GIC
diff --git a/arch/arm64/include/asm/pmem.h b/arch/arm64/include/asm/pmem.h
new file mode 100644
index 0000000..0bcfd87
--- /dev/null
+++ b/arch/arm64/include/asm/pmem.h
@@ -0,0 +1,143 @@
+/*
+ * Based on arch/x86/include/asm/pmem.h
+ *
+ * Copyright(c) 2016 SK hynix Inc. Kwangwoo Lee <kwangwoo.lee@sk.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+#ifndef __ASM_PMEM_H__
+#define __ASM_PMEM_H__
+
+#ifdef CONFIG_ARCH_HAS_PMEM_API
+#include <linux/uaccess.h>
+#include <asm/cacheflush.h>
+
+/**
+ * arch_memcpy_to_pmem - copy data to persistent memory
+ * @dst: destination buffer for the copy
+ * @src: source buffer for the copy
+ * @n: length of the copy in bytes
+ *
+ * Copy data to persistent memory media. If ARCH_HAS_PMEM_API is defined,
+ * then MEMREMAP_WB is used for memremap() during probe. A subsequent
+ * arch_wmb_pmem() call needs to guarantee durability.
+ */
+static inline void arch_memcpy_to_pmem(void __pmem *dst, const void *src,
+		size_t n)
+{
+	memcpy((void __force *) dst, src, n);
+	__flush_dcache_area(dst, n);
+}
+
+static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src,
+		size_t n)
+{
+	memcpy(dst, (void __force *) src, n);
+	return 0;
+}
+
+/**
+ * arch_wmb_pmem - synchronize writes to persistent memory
+ *
+ * After a series of arch_memcpy_to_pmem() operations, this needs to be called
+ * to ensure that written data is durable on persistent memory media.
+ */
+static inline void arch_wmb_pmem(void)
+{
+	/* pmem writes have been done in arch_memcpy_to_pmem() */
+	wmb();
+
+	/*
+	 * ARMv8.2 will support DC CVAP to ensure the Point of Persistence, and
+	 * this is where an API like __clean_dcache_area_pop() would be called.
+	 */
+}
+
+/**
+ * arch_wb_cache_pmem - write back a cache range
+ * @addr:	virtual start address
+ * @size:	number of bytes to write back
+ *
+ * Write back a cache range. Leave the data in the cache to speed up the next access.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline void arch_wb_cache_pmem(void __pmem *addr, size_t size)
+{
+	/*
+	 * Just clean the cache to the PoC. The data remains in the cache for
+	 * the next access. Under the current implementation, arch_wmb_pmem()
+	 * needs to be the point that ensures persistence.
+	 */
+	__clean_dcache_area(addr, size);
+}
+
+/**
+ * arch_copy_from_iter_pmem - copy data from an iterator to PMEM
+ * @addr:	PMEM destination address
+ * @bytes:	number of bytes to copy
+ * @i:		iterator with source data
+ *
+ * Copy data from the iterator 'i' to the PMEM buffer starting at 'addr'.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline size_t arch_copy_from_iter_pmem(void __pmem *addr, size_t bytes,
+		struct iov_iter *i)
+{
+	void *vaddr = (void __force *)addr;
+	size_t len;
+
+	/*
+	 * ARCH_HAS_NOCACHE_UACCESS is not defined and the default mapping is
+	 * MEMREMAP_WB. Instead of copy_from_iter_nocache(), use the cacheable
+	 * version and call arch_wb_cache_pmem() afterwards.
+	 */
+	len = copy_from_iter(vaddr, bytes, i);
+
+	arch_wb_cache_pmem(addr, bytes);
+
+	return len;
+}
+
+/**
+ * arch_clear_pmem - zero a PMEM memory range
+ * @addr:	virtual start address
+ * @size:	number of bytes to zero
+ *
+ * Write zeros into the memory range starting at 'addr' for 'size' bytes.
+ * This function requires explicit ordering with an arch_wmb_pmem() call.
+ */
+static inline void arch_clear_pmem(void __pmem *addr, size_t size)
+{
+	void *vaddr = (void __force *)addr;
+
+	memset(vaddr, 0, size);
+	arch_wb_cache_pmem(addr, size);
+}
+
+/**
+ * arch_invalidate_pmem - invalidate a PMEM memory range
+ * @addr:	virtual start address
+ * @size:	number of bytes to invalidate
+ *
+ * After ARS (Address Range Scrubbing) finishes, clean and invalidate the
+ * address range.
+ */
+static inline void arch_invalidate_pmem(void __pmem *addr, size_t size)
+{
+	__flush_dcache_area(addr, size);
+}
+
+static inline bool __arch_has_wmb_pmem(void)
+{
+	/* return false until arch_wmb_pmem() guarantees PoP on ARMv8.2. */
+	return false;
+}
+#endif /* CONFIG_ARCH_HAS_PMEM_API */
+#endif /* __ASM_PMEM_H__ */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/3] arm64: mm: add __clean_dcache_area()
  2016-07-15  2:46   ` Kwangwoo Lee
  (?)
@ 2016-07-21 16:11     ` Will Deacon
  -1 siblings, 0 replies; 18+ messages in thread
From: Will Deacon @ 2016-07-21 16:11 UTC (permalink / raw)
  To: Kwangwoo Lee
  Cc: Mark Rutland, linux-nvdimm, Catalin Marinas, linux-kernel,
	Woosuk Chung, linux-arm-kernel

On Fri, Jul 15, 2016 at 11:46:20AM +0900, Kwangwoo Lee wrote:
> Ensure D-cache lines are cleaned to the PoC(Point of Coherency).
> 
> This function is called by arch_wb_cache_pmem() to clean the cache lines
> and remain the data in cache for the next access.
> 
> Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
> ---
>  arch/arm64/include/asm/cacheflush.h |  1 +
>  arch/arm64/mm/cache.S               | 18 ++++++++++++++++++
>  2 files changed, 19 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index c64268d..903a94f 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -68,6 +68,7 @@
>  extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
>  extern void flush_icache_range(unsigned long start, unsigned long end);
>  extern void __flush_dcache_area(void *addr, size_t len);
> +extern void __clean_dcache_area(void *addr, size_t len);
>  extern void __clean_dcache_area_pou(void *addr, size_t len);
>  extern long __flush_cache_user_range(unsigned long start, unsigned long end);
>  
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 6df0706..5a350e4 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -93,6 +93,24 @@ ENTRY(__flush_dcache_area)
>  ENDPIPROC(__flush_dcache_area)
>  
>  /*
> + *	__clean_dcache_area(kaddr, size)
> + *
> + * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> + * 	are cleaned to the PoC.
> + *
> + *	- kaddr   - kernel address
> + *	- size    - size in question
> + */
> +ENTRY(__clean_dcache_area)
> +alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
> +	dcache_by_line_op cvac, sy, x0, x1, x2, x3
> +alternative_else
> +	dcache_by_line_op civac, sy, x0, x1, x2, x3
> +alternative_endif
> +	ret
> +ENDPROC(__clean_dcache_area)

This looks functionally equivalent to __dma_clean_range. How about we:

  1. Convert the __dma_* routines to use dcache_by_line
  2. Introduce __clean_dcache_area_poc as a fallthrough to __dma_clean_range
  3. Use __clean_dcache_area_poc for the pmem stuff (with some parameter
     marshalling in the macro).

Will
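[Editor's note] A rough sketch of the suggested fallthrough, in the style
of cache.S. This is untested and the names (e.g. __dma_clean_area taking
an (addr, size) pair after the parameter marshalling) are assumptions, not
code that existed at the time of this thread:

```asm
/*
 * Sketch only: let the PoC clean entry fall through into the
 * (converted) DMA clean routine, so there is a single implementation.
 */
ENTRY(__clean_dcache_area_poc)
	/* FALLTHROUGH */
ENTRY(__dma_clean_area)
alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
	dcache_by_line_op cvac, sy, x0, x1, x2, x3
alternative_else
	dcache_by_line_op civac, sy, x0, x1, x2, x3
alternative_endif
	ret
ENDPIPROC(__clean_dcache_area_poc)
ENDPROC(__dma_clean_area)
```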
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm

^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH v3 1/3] arm64: mm: add __clean_dcache_area()
  2016-07-21 16:11     ` Will Deacon
  (?)
@ 2016-07-22  7:28       ` kwangwoo.lee
  -1 siblings, 0 replies; 18+ messages in thread
From: kwangwoo.lee @ 2016-07-22  7:28 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, linux-nvdimm, Catalin Marinas, linux-kernel,
	woosuk.chung, linux-arm-kernel

Hi Will,

> -----Original Message-----
> From: Will Deacon [mailto:will.deacon@arm.com]
> Sent: Friday, July 22, 2016 1:12 AM
> To: 이광우(LEE KWANGWOO) MS SW
> Cc: linux-arm-kernel@lists.infradead.org; linux-nvdimm@lists.01.org; Catalin Marinas; Mark Rutland;
> Ross Zwisler; Dan Williams; Vishal Verma; 정우석(CHUNG WOO SUK) MS SW; 김현철(KIM HYUNCHUL) MS SW;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 1/3] arm64: mm: add __clean_dcache_area()
> 
> On Fri, Jul 15, 2016 at 11:46:20AM +0900, Kwangwoo Lee wrote:
> > Ensure D-cache lines are cleaned to the PoC(Point of Coherency).
> >
> > This function is called by arch_wb_cache_pmem() to clean the cache lines
> > and remain the data in cache for the next access.
> >
> > Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
> > ---
> >  arch/arm64/include/asm/cacheflush.h |  1 +
> >  arch/arm64/mm/cache.S               | 18 ++++++++++++++++++
> >  2 files changed, 19 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index c64268d..903a94f 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -68,6 +68,7 @@
> >  extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> >  extern void flush_icache_range(unsigned long start, unsigned long end);
> >  extern void __flush_dcache_area(void *addr, size_t len);
> > +extern void __clean_dcache_area(void *addr, size_t len);
> >  extern void __clean_dcache_area_pou(void *addr, size_t len);
> >  extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> >
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 6df0706..5a350e4 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -93,6 +93,24 @@ ENTRY(__flush_dcache_area)
> >  ENDPIPROC(__flush_dcache_area)
> >
> >  /*
> > + *	__clean_dcache_area(kaddr, size)
> > + *
> > + * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> > + * 	are cleaned to the PoC.
> > + *
> > + *	- kaddr   - kernel address
> > + *	- size    - size in question
> > + */
> > +ENTRY(__clean_dcache_area)
> > +alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
> > +	dcache_by_line_op cvac, sy, x0, x1, x2, x3
> > +alternative_else
> > +	dcache_by_line_op civac, sy, x0, x1, x2, x3
> > +alternative_endif
> > +	ret
> > +ENDPROC(__clean_dcache_area)
> 
> This looks functionally equivalent to __dma_clean_range. How about we:
> 
>   1. Convert the __dma_* routines to use dcache_by_line
>   2. Introduce __clean_dcache_area_poc as a fallthrough to __dma_clean_range
>   3. Use __clean_dcache_area_poc for the pmem stuff (with some parameter
>      marshalling in the macro).

OK. I'll revise the patch following your comment in the next round. Thanks for the comment!

> Will

Best Regards,
Kwangwoo Lee

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2016-07-22  7:29 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-15  2:46 [PATCH v3 0/3] support pmem on arm64 Kwangwoo Lee
2016-07-15  2:46 ` Kwangwoo Lee
2016-07-15  2:46 ` Kwangwoo Lee
2016-07-15  2:46 ` [PATCH v3 1/3] arm64: mm: add __clean_dcache_area() Kwangwoo Lee
2016-07-15  2:46   ` Kwangwoo Lee
2016-07-15  2:46   ` Kwangwoo Lee
2016-07-21 16:11   ` Will Deacon
2016-07-21 16:11     ` Will Deacon
2016-07-21 16:11     ` Will Deacon
2016-07-22  7:28     ` kwangwoo.lee
2016-07-22  7:28       ` kwangwoo.lee at sk.com
2016-07-22  7:28       ` kwangwoo.lee
2016-07-15  2:46 ` [PATCH v3 2/3] arm64: mm: add mmio_flush_range() to support pmem Kwangwoo Lee
2016-07-15  2:46   ` Kwangwoo Lee
2016-07-15  2:46   ` Kwangwoo Lee
2016-07-15  2:46 ` [PATCH v3 3/3] arm64: pmem: add pmem support codes Kwangwoo Lee
2016-07-15  2:46   ` Kwangwoo Lee
2016-07-15  2:46   ` Kwangwoo Lee

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.