* [PATCH v4 0/2] arm64: Fix kcsan test_barrier fail and panic
@ 2022-05-23 11:31 ` Kefeng Wang
0 siblings, 0 replies; 28+ messages in thread
From: Kefeng Wang @ 2022-05-23 11:31 UTC (permalink / raw)
To: elver, catalin.marinas, will, linux-arm-kernel, linux-kernel,
mark.rutland, Jonathan Corbet
Cc: linux-doc, arnd, Kefeng Wang
Fix the selftest and kcsan_test module failures seen when KCSAN_STRICT
and KCSAN_WEAK_MEMORY are enabled on ARM64.
v4:
- Use 2 spaces after a sentence-ending '.', suggested by Marco
- Collect Ack/Review
v3:
- Update dma_mb()'s description and add the generic definition; the
  asm-generic change is also moved into patch 1, suggested by Marco.
v2:
- Add documentation about dma_mb(), suggested by Mike and Will.
- Drop the Fixes tag and update the changelog, suggested by Mike.
Kefeng Wang (2):
asm-generic: Add memory barrier dma_mb()
arm64: kcsan: Support detecting more missing memory barriers
Documentation/memory-barriers.txt | 11 ++++++-----
arch/arm64/include/asm/barrier.h | 12 ++++++------
include/asm-generic/barrier.h | 8 ++++++++
3 files changed, 20 insertions(+), 11 deletions(-)
--
2.35.3
* [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-05-23 11:31 ` Kefeng Wang
@ 2022-05-23 11:31 ` Kefeng Wang
0 siblings, 0 replies; 28+ messages in thread
From: Kefeng Wang @ 2022-05-23 11:31 UTC (permalink / raw)
To: elver, catalin.marinas, will, linux-arm-kernel, linux-kernel,
mark.rutland, Jonathan Corbet
Cc: linux-doc, arnd, Kefeng Wang
The memory barrier dma_mb() was introduced by commit a76a37777f2c
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer").
It is used to ensure that prior memory accesses by a CPU (both reads
and writes) are ordered w.r.t. a subsequent MMIO write.
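The ordering described above can be sketched in userspace C. This is a hedged illustration, not the kernel's code: `dma_mb()` and `writel()` are stubbed, and the names `fake_mmio_reg` and `post_descriptor` are hypothetical.

```c
/* Hedged sketch: how a driver might use dma_mb() to order descriptor
 * writes before an MMIO doorbell write.  Kernel primitives are stubbed
 * so this compiles in userspace; in real kernel code the barrier comes
 * from <asm/barrier.h> and the MMIO accessor from <linux/io.h>.
 */
#include <assert.h>
#include <stdint.h>

static volatile uint32_t fake_mmio_reg;          /* stand-in device register */

#define dma_mb()  __sync_synchronize()           /* stub for the real barrier */
static void writel(uint32_t val, volatile uint32_t *addr) { *addr = val; }

struct desc { uint32_t addr, len, flags; };

static void post_descriptor(struct desc *d, uint32_t buf, uint32_t len)
{
	d->addr  = buf;                          /* fill the descriptor ...   */
	d->len   = len;
	d->flags = 1;                            /* ... and pass ownership    */
	dma_mb();                                /* order prior reads+writes  */
	writel(1, &fake_mmio_reg);               /* ... before the MMIO write */
}
```

Without the barrier between the descriptor stores and the doorbell store, a weakly ordered CPU could make the doorbell visible first.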
Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
Documentation/memory-barriers.txt | 11 ++++++-----
include/asm-generic/barrier.h | 8 ++++++++
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b12df9137e1c..832b5d36e279 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
(*) dma_wmb();
(*) dma_rmb();
+ (*) dma_mb();
These are for use with consistent memory to guarantee the ordering
of writes or reads of shared memory accessible to both the CPU and a
@@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
The dma_rmb() allows us guarantee the device has released ownership
before we read the data from the descriptor, and the dma_wmb() allows
us to guarantee the data is written to the descriptor before the device
- can see it now has ownership. Note that, when using writel(), a prior
- wmb() is not needed to guarantee that the cache coherent memory writes
- have completed before writing to the MMIO region. The cheaper
- writel_relaxed() does not provide this guarantee and must not be used
- here.
+ can see it now has ownership. The dma_mb() implies both a dma_rmb() and
+ a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
+ to guarantee that the cache coherent memory writes have completed before
+ writing to the MMIO region. The cheaper writel_relaxed() does not provide
+ this guarantee and must not be used here.
See the subsection "Kernel I/O barrier effects" for more information on
relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index fd7e8fbaeef1..961f4d88f9ef 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -38,6 +38,10 @@
#define wmb() do { kcsan_wmb(); __wmb(); } while (0)
#endif
+#ifdef __dma_mb
+#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
+#endif
+
#ifdef __dma_rmb
#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
#endif
@@ -65,6 +69,10 @@
#define wmb() mb()
#endif
+#ifndef dma_mb
+#define dma_mb() mb()
+#endif
+
#ifndef dma_rmb
#define dma_rmb() rmb()
#endif
--
2.35.3
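The layering this patch relies on can be sketched as a userspace stub. This is a hedged approximation of the asm-generic pattern, with the KCSAN hook replaced by a counter so the wrapping is observable; the variable `kcsan_mb_calls` is hypothetical.

```c
/* Hedged sketch (not the kernel's code): the arch supplies a raw
 * __dma_mb() primitive (arm64's is dmb(osh)), and the generic header
 * wraps it with the KCSAN hook -- or falls back to mb() when the arch
 * provides no primitive of its own.
 */
#include <assert.h>

static int kcsan_mb_calls;                  /* stand-in for kcsan_mb() */
#define kcsan_mb()  ((void)kcsan_mb_calls++)
#define __dma_mb()  __sync_synchronize()    /* "arch" primitive stub */

/* generic layer, mirroring include/asm-generic/barrier.h */
#ifdef __dma_mb
#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
#endif

#ifndef dma_mb
#define dma_mb() mb()                       /* fallback for other arches */
#endif
```

Every caller of `dma_mb()` thus gets the instrumentation and the real barrier together, which is exactly why arm64's unprefixed definitions had to move out of the way.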
* [PATCH v4 2/2] arm64: kcsan: Support detecting more missing memory barriers
2022-05-23 11:31 ` Kefeng Wang
@ 2022-05-23 11:31 ` Kefeng Wang
0 siblings, 0 replies; 28+ messages in thread
From: Kefeng Wang @ 2022-05-23 11:31 UTC (permalink / raw)
To: elver, catalin.marinas, will, linux-arm-kernel, linux-kernel,
mark.rutland, Jonathan Corbet
Cc: linux-doc, arnd, Kefeng Wang
The series "kcsan: Support detecting a subset of missing memory
barriers"[1] introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY, which make
KCSAN detect more missing memory barriers.  However, arm64 does not
have KCSAN instrumentation for its barriers, so the new selftest
test_barrier() and the test cases for memory barrier instrumentation
in the kcsan_test module fail, and the selftest even panics.
Let's prefix all barriers with __ on arm64, so that asm-generic/barrier.h
defines the final instrumented version of these barriers, which fixes
the above issues.
Note that barrier instrumentation can be disabled via __no_kcsan with
appropriate compiler support (and not just with objtool help); see
commit bd3d5bd1a0ad ("kcsan: Support WEAK_MEMORY with Clang where no
objtool support exists"), which adds disable_sanitizer_instrumentation
to the __no_kcsan attribute so that all sanitizer instrumentation is
removed fully (with Clang 14.0).  Meanwhile, GCC does the same thing
with no_sanitize.
[1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/
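The compiler-attribute selection behind __no_kcsan can be sketched as follows. This is a hedged approximation for illustration only; the real definition lives in the kernel's compiler headers, and the function name `read_flag_unchecked` is hypothetical.

```c
/* Hedged sketch of how __no_kcsan might be wired up, per the note above:
 * Clang >= 14 has disable_sanitizer_instrumentation (strips ALL sanitizer
 * instrumentation, barriers included); GCC opts out of the TSAN machinery
 * KCSAN piggybacks on via no_sanitize("thread").
 */
#include <assert.h>

#ifndef __has_attribute
#define __has_attribute(x) 0
#endif

#if defined(__clang__) && __has_attribute(disable_sanitizer_instrumentation)
#define __no_kcsan __attribute__((disable_sanitizer_instrumentation))
#elif defined(__GNUC__) && !defined(__clang__) && __GNUC__ >= 8
#define __no_kcsan __attribute__((no_sanitize("thread")))
#else
#define __no_kcsan            /* no instrumentation to strip */
#endif

__no_kcsan static int read_flag_unchecked(const int *p)
{
	return *p;            /* no KCSAN checks or barrier hooks emitted here */
}
```

The attribute only matters when the translation unit is actually built with the sanitizer enabled; otherwise it is a no-op.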
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
arch/arm64/include/asm/barrier.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 9f3e2c3d2ca0..2cfc4245d2e2 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -50,13 +50,13 @@
#define pmr_sync() do {} while (0)
#endif
-#define mb() dsb(sy)
-#define rmb() dsb(ld)
-#define wmb() dsb(st)
+#define __mb() dsb(sy)
+#define __rmb() dsb(ld)
+#define __wmb() dsb(st)
-#define dma_mb() dmb(osh)
-#define dma_rmb() dmb(oshld)
-#define dma_wmb() dmb(oshst)
+#define __dma_mb() dmb(osh)
+#define __dma_rmb() dmb(oshld)
+#define __dma_wmb() dmb(oshst)
#define io_stop_wc() dgh()
--
2.35.3
* Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-05-23 11:31 ` Kefeng Wang
@ 2022-05-23 11:35 ` Marco Elver
0 siblings, 0 replies; 28+ messages in thread
From: Marco Elver @ 2022-05-23 11:35 UTC (permalink / raw)
To: Kefeng Wang
Cc: catalin.marinas, will, linux-arm-kernel, linux-kernel,
mark.rutland, Jonathan Corbet, linux-doc, arnd, Paul E. McKenney,
Peter Zijlstra
On Mon, 23 May 2022 at 13:21, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
> The memory barrier dma_mb() is introduced by commit a76a37777f2c
> ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> which is used to ensure that prior (both reads and writes) accesses
> to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
>
> Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> Documentation/memory-barriers.txt | 11 ++++++-----
> include/asm-generic/barrier.h | 8 ++++++++
> 2 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index b12df9137e1c..832b5d36e279 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
>
> (*) dma_wmb();
> (*) dma_rmb();
> + (*) dma_mb();
>
> These are for use with consistent memory to guarantee the ordering
> of writes or reads of shared memory accessible to both the CPU and a
> @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
> The dma_rmb() allows us guarantee the device has released ownership
> before we read the data from the descriptor, and the dma_wmb() allows
> us to guarantee the data is written to the descriptor before the device
> - can see it now has ownership. Note that, when using writel(), a prior
> - wmb() is not needed to guarantee that the cache coherent memory writes
> - have completed before writing to the MMIO region. The cheaper
> - writel_relaxed() does not provide this guarantee and must not be used
> - here.
> + can see it now has ownership. The dma_mb() implies both a dma_rmb() and
> + a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
> + to guarantee that the cache coherent memory writes have completed before
> + writing to the MMIO region. The cheaper writel_relaxed() does not provide
> + this guarantee and must not be used here.
>
> See the subsection "Kernel I/O barrier effects" for more information on
> relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index fd7e8fbaeef1..961f4d88f9ef 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -38,6 +38,10 @@
> #define wmb() do { kcsan_wmb(); __wmb(); } while (0)
> #endif
>
> +#ifdef __dma_mb
> +#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
> +#endif
> +
> #ifdef __dma_rmb
> #define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
> #endif
> @@ -65,6 +69,10 @@
> #define wmb() mb()
> #endif
>
> +#ifndef dma_mb
> +#define dma_mb() mb()
> +#endif
> +
> #ifndef dma_rmb
> #define dma_rmb() rmb()
> #endif
> --
> 2.35.3
>
* Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-05-23 11:31 ` Kefeng Wang
@ 2022-05-23 11:38 ` Mark Rutland
0 siblings, 0 replies; 28+ messages in thread
From: Mark Rutland @ 2022-05-23 11:38 UTC (permalink / raw)
To: Kefeng Wang
Cc: elver, catalin.marinas, will, linux-arm-kernel, linux-kernel,
Jonathan Corbet, linux-doc, arnd
On Mon, May 23, 2022 at 07:31:25PM +0800, Kefeng Wang wrote:
> The memory barrier dma_mb() is introduced by commit a76a37777f2c
> ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> which is used to ensure that prior (both reads and writes) accesses
> to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
>
> Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
FWIW, this looks sane to me so:
Acked-by: Mark Rutland <mark.rutland@arm.com>
I'll leave the final say to Will, as I assume this'll go via the arm64 tree and
he'll be the one picking this up.
Mark.
> ---
> Documentation/memory-barriers.txt | 11 ++++++-----
> include/asm-generic/barrier.h | 8 ++++++++
> 2 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index b12df9137e1c..832b5d36e279 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
>
> (*) dma_wmb();
> (*) dma_rmb();
> + (*) dma_mb();
>
> These are for use with consistent memory to guarantee the ordering
> of writes or reads of shared memory accessible to both the CPU and a
> @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
> The dma_rmb() allows us guarantee the device has released ownership
> before we read the data from the descriptor, and the dma_wmb() allows
> us to guarantee the data is written to the descriptor before the device
> - can see it now has ownership. Note that, when using writel(), a prior
> - wmb() is not needed to guarantee that the cache coherent memory writes
> - have completed before writing to the MMIO region. The cheaper
> - writel_relaxed() does not provide this guarantee and must not be used
> - here.
> + can see it now has ownership. The dma_mb() implies both a dma_rmb() and
> + a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
> + to guarantee that the cache coherent memory writes have completed before
> + writing to the MMIO region. The cheaper writel_relaxed() does not provide
> + this guarantee and must not be used here.
>
> See the subsection "Kernel I/O barrier effects" for more information on
> relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index fd7e8fbaeef1..961f4d88f9ef 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -38,6 +38,10 @@
> #define wmb() do { kcsan_wmb(); __wmb(); } while (0)
> #endif
>
> +#ifdef __dma_mb
> +#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
> +#endif
> +
> #ifdef __dma_rmb
> #define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
> #endif
> @@ -65,6 +69,10 @@
> #define wmb() mb()
> #endif
>
> +#ifndef dma_mb
> +#define dma_mb() mb()
> +#endif
> +
> #ifndef dma_rmb
> #define dma_rmb() rmb()
> #endif
> --
> 2.35.3
>
* Re: [PATCH v4 2/2] arm64: kcsan: Support detecting more missing memory barriers
2022-05-23 11:31 ` Kefeng Wang
@ 2022-05-23 14:16 ` Mark Rutland
0 siblings, 0 replies; 28+ messages in thread
From: Mark Rutland @ 2022-05-23 14:16 UTC (permalink / raw)
To: Kefeng Wang
Cc: elver, catalin.marinas, will, linux-arm-kernel, linux-kernel,
Jonathan Corbet, linux-doc, arnd
On Mon, May 23, 2022 at 07:31:26PM +0800, Kefeng Wang wrote:
> As "kcsan: Support detecting a subset of missing memory barriers"[1]
> introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY which make kcsan detects
> more missing memory barrier, but arm64 don't have KCSAN instrumentation
> for barriers, so the new selftest test_barrier() and test cases for
> memory barrier instrumentation in kcsan_test module will fail, even
> panic on selftest.
>
> Let's prefix all barriers with __ on arm64, as asm-generic/barriers.h
> defined the final instrumented version of these barriers, which will
> fix the above issues.
>
> Note, barrier instrumentation that can be disabled via __no_kcsan with
> appropriate compiler-support (and not just with objtool help), see
> commit bd3d5bd1a0ad ("kcsan: Support WEAK_MEMORY with Clang where no
> objtool support exists"), it adds disable_sanitizer_instrumentation to
> __no_kcsan attribute which will remove all sanitizer instrumentation fully
> (with Clang 14.0). Meanwhile, GCC does the same thing with no_sanitize.
>
> [1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/
>
> Acked-by: Marco Elver <elver@google.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Having built this with GCC 12.1.0 and LLVM 14.0.0, I think this patch itself
doesn't introduce any new problems, and logically makes sense. With that in
mind:
Acked-by: Mark Rutland <mark.rutland@arm.com>
As an aside, having scanned the resulting vmlinux with objdump, there are
plenty of latent issues where we get KCSAN instrumentation where we don't want
it (e.g. early/late in arm64's entry-common.o). The bulk of those are due to
missing `noinstr` or `__always_inline`, which we'll need to fix up.
Thanks,
Mark.
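A scan like the one described above can be sketched as a small shell filter. This is a hedged sketch assuming the KCSAN/TSAN runtime entry points are named `__kcsan_*` or `__tsan_*` (as in current kernels); the demo input is a canned, hypothetical disassembly fragment.

```shell
# Hedged sketch: flag KCSAN/TSAN runtime calls in a disassembly.
# With a built kernel you would feed it real output:
#     objdump -d vmlinux | kcsan_calls
kcsan_calls() { grep -E '__(kcsan|tsan)_'; }

# Demo on a canned (hypothetical) disassembly fragment:
printf 'ffff0000:\tbl <__kcsan_check_access>\nffff0004:\tmov x0, x1\n' | kcsan_calls
```

Lines that survive the filter in functions expected to run before instrumentation is safe (e.g. early entry code) point at the missing annotations Mark mentions.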
> ---
> arch/arm64/include/asm/barrier.h | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 9f3e2c3d2ca0..2cfc4245d2e2 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -50,13 +50,13 @@
> #define pmr_sync() do {} while (0)
> #endif
>
> -#define mb() dsb(sy)
> -#define rmb() dsb(ld)
> -#define wmb() dsb(st)
> +#define __mb() dsb(sy)
> +#define __rmb() dsb(ld)
> +#define __wmb() dsb(st)
>
> -#define dma_mb() dmb(osh)
> -#define dma_rmb() dmb(oshld)
> -#define dma_wmb() dmb(oshst)
> +#define __dma_mb() dmb(osh)
> +#define __dma_rmb() dmb(oshld)
> +#define __dma_wmb() dmb(oshst)
>
> #define io_stop_wc() dgh()
>
> --
> 2.35.3
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 2/2] arm64: kcsan: Support detecting more missing memory barriers
@ 2022-05-23 14:16 ` Mark Rutland
0 siblings, 0 replies; 28+ messages in thread
From: Mark Rutland @ 2022-05-23 14:16 UTC (permalink / raw)
To: Kefeng Wang
Cc: elver, catalin.marinas, will, linux-arm-kernel, linux-kernel,
Jonathan Corbet, linux-doc, arnd
On Mon, May 23, 2022 at 07:31:26PM +0800, Kefeng Wang wrote:
> As "kcsan: Support detecting a subset of missing memory barriers"[1]
> introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY which make kcsan detects
> more missing memory barrier, but arm64 don't have KCSAN instrumentation
> for barriers, so the new selftest test_barrier() and test cases for
> memory barrier instrumentation in kcsan_test module will fail, even
> panic on selftest.
>
> Let's prefix all barriers with __ on arm64, as asm-generic/barriers.h
> defined the final instrumented version of these barriers, which will
> fix the above issues.
>
> Note, barrier instrumentation that can be disabled via __no_kcsan with
> appropriate compiler-support (and not just with objtool help), see
> commit bd3d5bd1a0ad ("kcsan: Support WEAK_MEMORY with Clang where no
> objtool support exists"), it adds disable_sanitizer_instrumentation to
> __no_kcsan attribute which will remove all sanitizer instrumentation fully
> (with Clang 14.0). Meanwhile, GCC does the same thing with no_sanitize.
>
> [1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/
>
> Acked-by: Marco Elver <elver@google.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Having built this with GCC 12.1.0 and LLVM 14.0.0, I think this patch itself
doesn't introduce any new problems, and logically makes sense. With that in
mind:
Acked-by: Mark Rutland <mark.rutland@arm.com>
As an aside, having scanned the resulting vmlinux with objdump, there are
plenty of latent issues where we get KCSAN instrumentation where we don't want
it (e.g. early/late in arm64's entry-common.o). The bulk of those are due to
missing `noinstr` or `__always_inline`, which we'll need to fix up.
Thanks,
Mark.
> ---
> arch/arm64/include/asm/barrier.h | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 9f3e2c3d2ca0..2cfc4245d2e2 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -50,13 +50,13 @@
> #define pmr_sync() do {} while (0)
> #endif
>
> -#define mb() dsb(sy)
> -#define rmb() dsb(ld)
> -#define wmb() dsb(st)
> +#define __mb() dsb(sy)
> +#define __rmb() dsb(ld)
> +#define __wmb() dsb(st)
>
> -#define dma_mb() dmb(osh)
> -#define dma_rmb() dmb(oshld)
> -#define dma_wmb() dmb(oshst)
> +#define __dma_mb() dmb(osh)
> +#define __dma_rmb() dmb(oshld)
> +#define __dma_wmb() dmb(oshst)
>
> #define io_stop_wc() dgh()
>
> --
> 2.35.3
>
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 2/2] arm64: kcsan: Support detecting more missing memory barriers
2022-05-23 14:16 ` Mark Rutland
@ 2022-06-14 3:20 ` Kefeng Wang
-1 siblings, 0 replies; 28+ messages in thread
From: Kefeng Wang @ 2022-06-14 3:20 UTC (permalink / raw)
To: Mark Rutland
Cc: elver, catalin.marinas, will, linux-arm-kernel, linux-kernel,
Jonathan Corbet, linux-doc, arnd
Hi Will and Catalin, kindly ping...
On 2022/5/23 22:16, Mark Rutland wrote:
> On Mon, May 23, 2022 at 07:31:26PM +0800, Kefeng Wang wrote:
>> The series "kcsan: Support detecting a subset of missing memory barriers"[1]
>> introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY, which lets KCSAN detect
>> more missing memory barriers, but arm64 does not have KCSAN instrumentation
>> for barriers, so the new test_barrier() selftest and the test cases for
>> memory barrier instrumentation in the kcsan_test module fail, and the
>> selftest even panics.
>>
>> Prefix all barriers with __ on arm64, so that asm-generic/barrier.h
>> defines the final instrumented version of these barriers, which fixes
>> the above issues.
>>
>> Note that barrier instrumentation can be disabled via __no_kcsan with
>> appropriate compiler support (and not just with objtool help); see
>> commit bd3d5bd1a0ad ("kcsan: Support WEAK_MEMORY with Clang where no
>> objtool support exists"), which adds disable_sanitizer_instrumentation
>> to the __no_kcsan attribute to remove all sanitizer instrumentation
>> fully (with Clang 14.0). GCC does the same thing with no_sanitize.
>>
>> [1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/
>>
>> Acked-by: Marco Elver <elver@google.com>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Having built this with GCC 12.1.0 and LLVM 14.0.0, I think this patch itself
> doesn't introduce any new problems, and logically makes sense. With that in
> mind:
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> As an aside, having scanned the resulting vmlinux with objdump, there are
> plenty of latent issues where we get KCSAN instrumentation where we don't want
> it (e.g. early/late in arm64's entry-common.o). The bulk of those are due to
> missing `noinstr` or `__always_inline`, which we'll need to fix up.
>
> Thanks,
> Mark.
>
>> ---
>> arch/arm64/include/asm/barrier.h | 12 ++++++------
>> 1 file changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
>> index 9f3e2c3d2ca0..2cfc4245d2e2 100644
>> --- a/arch/arm64/include/asm/barrier.h
>> +++ b/arch/arm64/include/asm/barrier.h
>> @@ -50,13 +50,13 @@
>> #define pmr_sync() do {} while (0)
>> #endif
>>
>> -#define mb() dsb(sy)
>> -#define rmb() dsb(ld)
>> -#define wmb() dsb(st)
>> +#define __mb() dsb(sy)
>> +#define __rmb() dsb(ld)
>> +#define __wmb() dsb(st)
>>
>> -#define dma_mb() dmb(osh)
>> -#define dma_rmb() dmb(oshld)
>> -#define dma_wmb() dmb(oshst)
>> +#define __dma_mb() dmb(osh)
>> +#define __dma_rmb() dmb(oshld)
>> +#define __dma_wmb() dmb(oshst)
>>
>> #define io_stop_wc() dgh()
>>
>> --
>> 2.35.3
>>
> .
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-05-23 11:35 ` Marco Elver
@ 2022-06-16 23:13 ` Paul E. McKenney
-1 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2022-06-16 23:13 UTC (permalink / raw)
To: Marco Elver
Cc: Kefeng Wang, catalin.marinas, will, linux-arm-kernel,
linux-kernel, mark.rutland, Jonathan Corbet, linux-doc, arnd,
Peter Zijlstra
On Mon, May 23, 2022 at 01:35:27PM +0200, Marco Elver wrote:
> On Mon, 23 May 2022 at 13:21, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> >
> > The memory barrier dma_mb() is introduced by commit a76a37777f2c
> > ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> > which is used to ensure that prior (both reads and writes) accesses
> > to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
> >
> > Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
> > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> Reviewed-by: Marco Elver <elver@google.com>
Just checking... Did these ever get picked up? It was suggested
that they go up via the arm64 tree, if I remember correctly.
Thanx, Paul
> > ---
> > Documentation/memory-barriers.txt | 11 ++++++-----
> > include/asm-generic/barrier.h | 8 ++++++++
> > 2 files changed, 14 insertions(+), 5 deletions(-)
> >
> > diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> > index b12df9137e1c..832b5d36e279 100644
> > --- a/Documentation/memory-barriers.txt
> > +++ b/Documentation/memory-barriers.txt
> > @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
> >
> > (*) dma_wmb();
> > (*) dma_rmb();
> > + (*) dma_mb();
> >
> > These are for use with consistent memory to guarantee the ordering
> > of writes or reads of shared memory accessible to both the CPU and a
> > @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
> > The dma_rmb() allows us guarantee the device has released ownership
> > before we read the data from the descriptor, and the dma_wmb() allows
> > us to guarantee the data is written to the descriptor before the device
> > - can see it now has ownership. Note that, when using writel(), a prior
> > - wmb() is not needed to guarantee that the cache coherent memory writes
> > - have completed before writing to the MMIO region. The cheaper
> > - writel_relaxed() does not provide this guarantee and must not be used
> > - here.
> > + can see it now has ownership. The dma_mb() implies both a dma_rmb() and
> > + a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
> > + to guarantee that the cache coherent memory writes have completed before
> > + writing to the MMIO region. The cheaper writel_relaxed() does not provide
> > + this guarantee and must not be used here.
> >
> > See the subsection "Kernel I/O barrier effects" for more information on
> > relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
> > diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> > index fd7e8fbaeef1..961f4d88f9ef 100644
> > --- a/include/asm-generic/barrier.h
> > +++ b/include/asm-generic/barrier.h
> > @@ -38,6 +38,10 @@
> > #define wmb() do { kcsan_wmb(); __wmb(); } while (0)
> > #endif
> >
> > +#ifdef __dma_mb
> > +#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
> > +#endif
> > +
> > #ifdef __dma_rmb
> > #define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
> > #endif
> > @@ -65,6 +69,10 @@
> > #define wmb() mb()
> > #endif
> >
> > +#ifndef dma_mb
> > +#define dma_mb() mb()
> > +#endif
> > +
> > #ifndef dma_rmb
> > #define dma_rmb() rmb()
> > #endif
> > --
> > 2.35.3
> >
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-06-16 23:13 ` Paul E. McKenney
@ 2022-06-17 10:18 ` Marco Elver
-1 siblings, 0 replies; 28+ messages in thread
From: Marco Elver @ 2022-06-17 10:18 UTC (permalink / raw)
To: paulmck
Cc: Kefeng Wang, catalin.marinas, will, linux-arm-kernel,
linux-kernel, mark.rutland, Jonathan Corbet, linux-doc, arnd,
Peter Zijlstra
On Fri, 17 Jun 2022 at 01:13, Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Mon, May 23, 2022 at 01:35:27PM +0200, Marco Elver wrote:
> > On Mon, 23 May 2022 at 13:21, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> > >
> > > The memory barrier dma_mb() is introduced by commit a76a37777f2c
> > > ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> > > which is used to ensure that prior (both reads and writes) accesses
> > > to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
> > >
> > > Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
> > > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> >
> > Reviewed-by: Marco Elver <elver@google.com>
>
> Just checking... Did these ever get picked up? It was suggested
> that they go up via the arm64 tree, if I remember correctly.
I don't see them in -next, and as far as I can tell, they're not in
the arm64 tree.
Thanks,
-- Marco
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-06-17 10:18 ` Marco Elver
@ 2022-06-19 9:45 ` Catalin Marinas
-1 siblings, 0 replies; 28+ messages in thread
From: Catalin Marinas @ 2022-06-19 9:45 UTC (permalink / raw)
To: Marco Elver
Cc: paulmck, Kefeng Wang, will, linux-arm-kernel, linux-kernel,
mark.rutland, Jonathan Corbet, linux-doc, arnd, Peter Zijlstra
On Fri, Jun 17, 2022 at 12:18:41PM +0200, Marco Elver wrote:
> On Fri, 17 Jun 2022 at 01:13, Paul E. McKenney <paulmck@kernel.org> wrote:
> > On Mon, May 23, 2022 at 01:35:27PM +0200, Marco Elver wrote:
> > > On Mon, 23 May 2022 at 13:21, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> > > >
> > > > The memory barrier dma_mb() is introduced by commit a76a37777f2c
> > > > ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> > > > which is used to ensure that prior (both reads and writes) accesses
> > > > to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
> > > >
> > > > Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
> > > > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> > >
> > > Reviewed-by: Marco Elver <elver@google.com>
> >
> > Just checking... Did these ever get picked up? It was suggested
> > that they go up via the arm64 tree, if I remember correctly.
>
> I don't see them in -next, and as far as I can tell, they're not in
> the arm64 tree.
Since v4 was posted during the merge window, it hasn't been queued for
5.19-rc1. I normally only merge patches with a Fixes tag during the -rc
period (though there are some exceptions). Mark commented on v1 that
such a tag isn't necessary, so I thought I'd leave it for the 5.20 merge
window.
That said, the diffstat is small, so if it helps having this in 5.19, I
can queue it for -rc4.
--
Catalin
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-06-19 9:45 ` Catalin Marinas
@ 2022-06-20 21:02 ` Paul E. McKenney
-1 siblings, 0 replies; 28+ messages in thread
From: Paul E. McKenney @ 2022-06-20 21:02 UTC (permalink / raw)
To: Catalin Marinas
Cc: Marco Elver, Kefeng Wang, will, linux-arm-kernel, linux-kernel,
mark.rutland, Jonathan Corbet, linux-doc, arnd, Peter Zijlstra
On Sun, Jun 19, 2022 at 10:45:22AM +0100, Catalin Marinas wrote:
> On Fri, Jun 17, 2022 at 12:18:41PM +0200, Marco Elver wrote:
> > On Fri, 17 Jun 2022 at 01:13, Paul E. McKenney <paulmck@kernel.org> wrote:
> > > On Mon, May 23, 2022 at 01:35:27PM +0200, Marco Elver wrote:
> > > > On Mon, 23 May 2022 at 13:21, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> > > > >
> > > > > The memory barrier dma_mb() is introduced by commit a76a37777f2c
> > > > > ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> > > > > which is used to ensure that prior (both reads and writes) accesses
> > > > > to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
> > > > >
> > > > > Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
> > > > > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> > > >
> > > > Reviewed-by: Marco Elver <elver@google.com>
> > >
> > > Just checking... Did these ever get picked up? It was suggested
> > > that they go up via the arm64 tree, if I remember correctly.
> >
> > I don't see them in -next, and as far as I can tell, they're not in
> > the arm64 tree.
>
> Since v4 was posted during the merging window, it hasn't been queued for
> 5.19-rc1. I normally only merge patches with a Fixes tag during the -rc
> period (though there are some exceptions). Mark commented in v1 that
> such tag isn't necessary, so I thought I'd leave it for the 5.20 merging
> window.
>
> That said, the diffstat is small, so if it helps having this in 5.19, I
> can queue it for -rc4.
As long as it is not going to be lost, I am good. ;-)
So v5.20 is fine.
Thanx, Paul
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 2/2] arm64: kcsan: Support detecting more missing memory barriers
2022-05-23 11:31 ` Kefeng Wang
@ 2022-06-21 10:46 ` Catalin Marinas
-1 siblings, 0 replies; 28+ messages in thread
From: Catalin Marinas @ 2022-06-21 10:46 UTC (permalink / raw)
To: Kefeng Wang
Cc: elver, will, linux-arm-kernel, linux-kernel, mark.rutland,
Jonathan Corbet, linux-doc, arnd
On Mon, May 23, 2022 at 07:31:26PM +0800, Kefeng Wang wrote:
> The series "kcsan: Support detecting a subset of missing memory barriers"[1]
> introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY, which lets KCSAN detect
> more missing memory barriers, but arm64 does not have KCSAN instrumentation
> for barriers, so the new test_barrier() selftest and the test cases for
> memory barrier instrumentation in the kcsan_test module fail, and the
> selftest even panics.
>
> Prefix all barriers with __ on arm64, so that asm-generic/barrier.h
> defines the final instrumented version of these barriers, which fixes
> the above issues.
>
> Note that barrier instrumentation can be disabled via __no_kcsan with
> appropriate compiler support (and not just with objtool help); see
> commit bd3d5bd1a0ad ("kcsan: Support WEAK_MEMORY with Clang where no
> objtool support exists"), which adds disable_sanitizer_instrumentation
> to the __no_kcsan attribute to remove all sanitizer instrumentation
> fully (with Clang 14.0). GCC does the same thing with no_sanitize.
>
> [1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/
>
> Acked-by: Marco Elver <elver@google.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
I'll leave the series to Will to queue for 5.20.
Thanks.
--
Catalin
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v4 0/2] arm64: Fix kcsan test_barrier fail and panic
2022-05-23 11:31 ` Kefeng Wang
@ 2022-06-23 19:31 ` Will Deacon
-1 siblings, 0 replies; 28+ messages in thread
From: Will Deacon @ 2022-06-23 19:31 UTC (permalink / raw)
To: elver, Jonathan Corbet, linux-kernel, Kefeng Wang, mark.rutland,
catalin.marinas, linux-arm-kernel
Cc: kernel-team, Will Deacon, arnd, linux-doc
On Mon, 23 May 2022 19:31:24 +0800, Kefeng Wang wrote:
> Fix selftest and kcsan_test() module fail when KCSAN_STRICT
> and KCSAN_WEAK_MEMORY enabled on ARM64.
>
> v4:
> - Use 2 spaces after a sentence-ending '.', suggested by Marco
> - Collect Ack/Review
>
> [...]
Applied to arm64 (for-next/kcsan), thanks!
[1/2] asm-generic: Add memory barrier dma_mb()
https://git.kernel.org/arm64/c/ed59dfd9509d
[2/2] arm64: kcsan: Support detecting more missing memory barriers
https://git.kernel.org/arm64/c/4d09caec2fab
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev
^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
2022-05-23 11:26 Kefeng Wang
@ 2022-05-23 11:26 ` Kefeng Wang
0 siblings, 0 replies; 28+ messages in thread
From: Kefeng Wang @ 2022-05-23 11:26 UTC (permalink / raw)
To: catalin.marinas, will, akpm, linux-arm-kernel, linux-kernel
Cc: linux-mm, hch, arnd, anshuman.khandual, Kefeng Wang
The memory barrier dma_mb() was introduced by commit a76a37777f2c
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer");
it ensures that prior memory accesses by a CPU (both reads and writes)
are ordered w.r.t. a subsequent MMIO write.
Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
Documentation/memory-barriers.txt | 11 ++++++-----
include/asm-generic/barrier.h | 8 ++++++++
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b12df9137e1c..832b5d36e279 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
(*) dma_wmb();
(*) dma_rmb();
+ (*) dma_mb();
These are for use with consistent memory to guarantee the ordering
of writes or reads of shared memory accessible to both the CPU and a
@@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
The dma_rmb() allows us guarantee the device has released ownership
before we read the data from the descriptor, and the dma_wmb() allows
us to guarantee the data is written to the descriptor before the device
- can see it now has ownership. Note that, when using writel(), a prior
- wmb() is not needed to guarantee that the cache coherent memory writes
- have completed before writing to the MMIO region. The cheaper
- writel_relaxed() does not provide this guarantee and must not be used
- here.
+ can see it now has ownership. The dma_mb() implies both a dma_rmb() and
+ a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
+ to guarantee that the cache coherent memory writes have completed before
+ writing to the MMIO region. The cheaper writel_relaxed() does not provide
+ this guarantee and must not be used here.
See the subsection "Kernel I/O barrier effects" for more information on
relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index fd7e8fbaeef1..961f4d88f9ef 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -38,6 +38,10 @@
#define wmb() do { kcsan_wmb(); __wmb(); } while (0)
#endif
+#ifdef __dma_mb
+#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
+#endif
+
#ifdef __dma_rmb
#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
#endif
@@ -65,6 +69,10 @@
#define wmb() mb()
#endif
+#ifndef dma_mb
+#define dma_mb() mb()
+#endif
+
#ifndef dma_rmb
#define dma_rmb() rmb()
#endif
--
2.35.3
^ permalink raw reply related [flat|nested] 28+ messages in thread
end of thread, other threads:[~2022-06-23 19:42 UTC | newest]
Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-23 11:31 [PATCH v4 0/2] arm64: Fix kcsan test_barrier fail and panic Kefeng Wang
2022-05-23 11:31 ` [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb() Kefeng Wang
2022-05-23 11:35 ` Marco Elver
2022-06-16 23:13 ` Paul E. McKenney
2022-06-17 10:18 ` Marco Elver
2022-06-19 9:45 ` Catalin Marinas
2022-06-20 21:02 ` Paul E. McKenney
2022-05-23 11:38 ` Mark Rutland
2022-05-23 11:31 ` [PATCH v4 2/2] arm64: kcsan: Support detecting more missing memory barriers Kefeng Wang
2022-05-23 14:16 ` Mark Rutland
2022-06-14 3:20 ` Kefeng Wang
2022-06-21 10:46 ` Catalin Marinas
2022-06-23 19:31 ` [PATCH v4 0/2] arm64: Fix kcsan test_barrier fail and panic Will Deacon
-- strict thread matches above, loose matches on Subject: below --
2022-05-23 11:26 Kefeng Wang
2022-05-23 11:26 ` [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb() Kefeng Wang