linux-arch.vger.kernel.org archive mirror
* [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores
@ 2024-04-11 16:46 Jason Gunthorpe
  2024-04-11 16:46 ` [PATCH v3 1/6] x86: Stop using weak symbols for __iowrite32_copy() Jason Gunthorpe
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-11 16:46 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

mlx5 has a built in self-test at driver startup to evaluate if the
platform supports write combining to generate a 64 byte PCIe TLP or
not. This has proven necessary because a lot of common scenarios end up
with broken write combining (especially inside virtual machines) and there
is no other way to learn this information.

This self test has been consistently failing on new ARM64 CPU
designs (specifically with NVIDIA Grace's implementation of Neoverse
V2). The C loop around writeq() generates some pretty terrible ARM64
assembly, but historically this has worked on a lot of existing ARM64 CPUs
till now.

We see it succeed about 1 time in 10,000 on the worst affected
systems. The CPU architects speculate that the load instructions
interspersed with the stores make the test unreliable.

Arrange things so that ARM64 uses a predictable inline assembly block
of 8 STR instructions.

Catalin suggested implementing this in terms of the obscure
__iowrite64_copy() interface which was long ago added to optimize write
combining stores on Pathscale RDMA HW for x86. These copy routines have
the advantage of requiring the caller to supply alignment which allows an
optimal assembly implementation.
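
For reference, here is a minimal, hypothetical caller showing how the
interface is meant to be used -- the names are made up for illustration,
but the contract (naturally aligned buffers, count in units of the access
size, so 8 means one 64 byte store block) matches the real users later in
this series:

	#include <linux/io.h>

	/* Push one 64 byte descriptor through a WC mapped doorbell page. */
	static void post_wc_doorbell(void __iomem *wc_db, const u64 desc[8])
	{
		/* Both pointers must be 8 byte aligned; count is in u64 units */
		__iowrite64_copy(wc_db, desc, 8);
	}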

This is a good suggestion because it turns out that S390 has much the same
problem and already uses __iowrite64_copy() to try to make its WC
operations work.

The first several patches modernize and improve the performance of
__iowriteXX_copy() so that an ARM64 implementation can be provided which
relies on __builtin_constant_p to generate fast inlined assembly code in a
few common cases.
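
As a rough sketch of the shape used in patch 4: constant counts are routed
to an arch inline that emits a fixed block of STR instructions (plus the
DGH hint), while everything else takes the out-of-line copy loop:

	#define __iowrite64_copy(to, from, count)                  \
		(__builtin_constant_p(count) ?                     \
			 __const_iowrite64_copy(to, from, count) : \
			 __iowrite64_copy_full(to, from, count))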

It looks ack'd enough now so I plan to take this through the RDMA tree.

v3:
 - Rebase to 6.9-rc3
 - Fix copy & paste typo in __const_memcpy_toio_aligned64() to use __raw_writeq()
v2: https://lore.kernel.org/r/0-v1-38290193eace+5-mlx5_arm_wc_jgg@nvidia.com
 - Rework everything to use __iowrite64_copy().
 - Don't use STP since that is not reliably supported in ARM VMs
 - New patches to tidy up __iowriteXX_copy() on x86 and s390
v1: https://lore.kernel.org/r/cover.1700766072.git.leon@kernel.org

Jason Gunthorpe (6):
  x86: Stop using weak symbols for __iowrite32_copy()
  s390: Implement __iowrite32_copy()
  s390: Stop using weak symbols for __iowrite64_copy()
  arm64/io: Provide a WC friendly __iowriteXX_copy()
  net: hns3: Remove io_stop_wc() calls after __iowrite64_copy()
  IB/mlx5: Use __iowrite64_copy() for write combining stores

 arch/arm64/include/asm/io.h                   | 132 ++++++++++++++++++
 arch/arm64/kernel/io.c                        |  42 ++++++
 arch/s390/include/asm/io.h                    |  15 ++
 arch/s390/pci/pci.c                           |   6 -
 arch/x86/include/asm/io.h                     |  17 +++
 arch/x86/lib/Makefile                         |   1 -
 arch/x86/lib/iomap_copy_64.S                  |  15 --
 drivers/infiniband/hw/mlx5/mem.c              |   8 +-
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   |   4 -
 include/linux/io.h                            |   8 +-
 lib/iomap_copy.c                              |  13 +-
 11 files changed, 222 insertions(+), 39 deletions(-)
 delete mode 100644 arch/x86/lib/iomap_copy_64.S


base-commit: fec50db7033ea478773b159e0e2efb135270e3b7
-- 
2.43.2



* [PATCH v3 1/6] x86: Stop using weak symbols for __iowrite32_copy()
  2024-04-11 16:46 [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
@ 2024-04-11 16:46 ` Jason Gunthorpe
  2024-04-11 20:24   ` Arnd Bergmann
  2024-04-11 16:46 ` [PATCH v3 2/6] s390: Implement __iowrite32_copy() Jason Gunthorpe
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-11 16:46 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

Start switching iomap_copy routines over to use #define and arch provided
inline/macro functions instead of weak symbols.

Inline functions allow more compiler optimization and this is often a
driver hot path.

x86 has the only weak implementation for __iowrite32_copy(), so replace it
with a static inline containing the same single-instruction inline
assembly. The compiler can now place the count directly in %ecx rather
than always emitting the extra "mov %edx,%ecx".

Remove the now-unused iomap_copy_64.S.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 arch/x86/include/asm/io.h    | 17 +++++++++++++++++
 arch/x86/lib/Makefile        |  1 -
 arch/x86/lib/iomap_copy_64.S | 15 ---------------
 include/linux/io.h           |  5 ++++-
 lib/iomap_copy.c             |  6 +++---
 5 files changed, 24 insertions(+), 20 deletions(-)
 delete mode 100644 arch/x86/lib/iomap_copy_64.S

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 294cd2a4081812..4b99ed326b1748 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -209,6 +209,23 @@ void memset_io(volatile void __iomem *, int, size_t);
 #define memcpy_toio memcpy_toio
 #define memset_io memset_io
 
+#ifdef CONFIG_X86_64
+/*
+ * Commit 0f07496144c2 ("[PATCH] Add faster __iowrite32_copy routine for
+ * x86_64") says that circa 2006 rep movsl is noticeably faster than a copy
+ * loop.
+ */
+static inline void __iowrite32_copy(void __iomem *to, const void *from,
+				    size_t count)
+{
+	asm volatile("rep ; movsl"
+		     : "=&c"(count), "=&D"(to), "=&S"(from)
+		     : "0"(count), "1"(to), "2"(from)
+		     : "memory");
+}
+#define __iowrite32_copy __iowrite32_copy
+#endif
+
 /*
  * ISA space is 'always mapped' on a typical x86 system, no need to
  * explicitly ioremap() it. The fact that the ISA IO space is mapped
diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index 6da73513f02668..98583a9dbab337 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -53,7 +53,6 @@ ifneq ($(CONFIG_X86_CMPXCHG64),y)
         lib-y += atomic64_386_32.o
 endif
 else
-        obj-y += iomap_copy_64.o
 ifneq ($(CONFIG_GENERIC_CSUM),y)
         lib-y += csum-partial_64.o csum-copy_64.o csum-wrappers_64.o
 endif
diff --git a/arch/x86/lib/iomap_copy_64.S b/arch/x86/lib/iomap_copy_64.S
deleted file mode 100644
index 6ff2f56cb0f71a..00000000000000
--- a/arch/x86/lib/iomap_copy_64.S
+++ /dev/null
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright 2006 PathScale, Inc.  All Rights Reserved.
- */
-
-#include <linux/linkage.h>
-
-/*
- * override generic version in lib/iomap_copy.c
- */
-SYM_FUNC_START(__iowrite32_copy)
-	movl %edx,%ecx
-	rep movsl
-	RET
-SYM_FUNC_END(__iowrite32_copy)
diff --git a/include/linux/io.h b/include/linux/io.h
index 235ba7d80a8f0d..ce86120ce9d526 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -16,7 +16,10 @@
 struct device;
 struct resource;
 
-__visible void __iowrite32_copy(void __iomem *to, const void *from, size_t count);
+#ifndef __iowrite32_copy
+void __iowrite32_copy(void __iomem *to, const void *from, size_t count);
+#endif
+
 void __ioread32_copy(void *to, const void __iomem *from, size_t count);
 void __iowrite64_copy(void __iomem *to, const void *from, size_t count);
 
diff --git a/lib/iomap_copy.c b/lib/iomap_copy.c
index 5de7c04e05ef56..8ddcbb53507dfe 100644
--- a/lib/iomap_copy.c
+++ b/lib/iomap_copy.c
@@ -16,9 +16,8 @@
  * time.  Order of access is not guaranteed, nor is a memory barrier
  * performed afterwards.
  */
-void __attribute__((weak)) __iowrite32_copy(void __iomem *to,
-					    const void *from,
-					    size_t count)
+#ifndef __iowrite32_copy
+void __iowrite32_copy(void __iomem *to, const void *from, size_t count)
 {
 	u32 __iomem *dst = to;
 	const u32 *src = from;
@@ -28,6 +27,7 @@ void __attribute__((weak)) __iowrite32_copy(void __iomem *to,
 		__raw_writel(*src++, dst++);
 }
 EXPORT_SYMBOL_GPL(__iowrite32_copy);
+#endif
 
 /**
  * __ioread32_copy - copy data from MMIO space, in 32-bit units
-- 
2.43.2



* [PATCH v3 2/6] s390: Implement __iowrite32_copy()
  2024-04-11 16:46 [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
  2024-04-11 16:46 ` [PATCH v3 1/6] x86: Stop using weak symbols for __iowrite32_copy() Jason Gunthorpe
@ 2024-04-11 16:46 ` Jason Gunthorpe
  2024-04-11 16:46 ` [PATCH v3 3/6] s390: Stop using weak symbols for __iowrite64_copy() Jason Gunthorpe
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-11 16:46 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

It is trivial to implement an inline to do this, so provide it in the s390
headers. Like the 64 bit version it should just invoke zpci_memcpy_toio()
with the correct size.

Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 arch/s390/include/asm/io.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h
index 4453ad7c11aced..00704fc8a54b30 100644
--- a/arch/s390/include/asm/io.h
+++ b/arch/s390/include/asm/io.h
@@ -73,6 +73,14 @@ static inline void ioport_unmap(void __iomem *p)
 #define __raw_writel	zpci_write_u32
 #define __raw_writeq	zpci_write_u64
 
+/* combine single writes by using store-block insn */
+static inline void __iowrite32_copy(void __iomem *to, const void *from,
+				    size_t count)
+{
+	zpci_memcpy_toio(to, from, count * 4);
+}
+#define __iowrite32_copy __iowrite32_copy
+
 #endif /* CONFIG_PCI */
 
 #include <asm-generic/io.h>
-- 
2.43.2



* [PATCH v3 3/6] s390: Stop using weak symbols for __iowrite64_copy()
  2024-04-11 16:46 [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
  2024-04-11 16:46 ` [PATCH v3 1/6] x86: Stop using weak symbols for __iowrite32_copy() Jason Gunthorpe
  2024-04-11 16:46 ` [PATCH v3 2/6] s390: Implement __iowrite32_copy() Jason Gunthorpe
@ 2024-04-11 16:46 ` Jason Gunthorpe
  2024-04-11 20:23   ` Arnd Bergmann
  2024-04-11 16:46 ` [PATCH v3 4/6] arm64/io: Provide a WC friendly __iowriteXX_copy() Jason Gunthorpe
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-11 16:46 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

Complete switching the __iowriteXX_copy() routines over to use #define and
arch provided inline/macro functions instead of weak symbols.

S390 has an implementation that simply calls another memcpy
function. Inline this so the callers don't have to do two jumps.

Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 arch/s390/include/asm/io.h | 7 +++++++
 arch/s390/pci/pci.c        | 6 ------
 include/linux/io.h         | 3 +++
 lib/iomap_copy.c           | 7 +++----
 4 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h
index 00704fc8a54b30..0fbc992d7a5ea7 100644
--- a/arch/s390/include/asm/io.h
+++ b/arch/s390/include/asm/io.h
@@ -81,6 +81,13 @@ static inline void __iowrite32_copy(void __iomem *to, const void *from,
 }
 #define __iowrite32_copy __iowrite32_copy
 
+static inline void __iowrite64_copy(void __iomem *to, const void *from,
+				    size_t count)
+{
+	zpci_memcpy_toio(to, from, count * 8);
+}
+#define __iowrite64_copy __iowrite64_copy
+
 #endif /* CONFIG_PCI */
 
 #include <asm-generic/io.h>
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index 26afde0d1ed34c..0de0f6e405b51e 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -250,12 +250,6 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
 	return 0;
 }
 
-/* combine single writes by using store-block insn */
-void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
-{
-	zpci_memcpy_toio(to, from, count * 8);
-}
-
 void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
 			   unsigned long prot)
 {
diff --git a/include/linux/io.h b/include/linux/io.h
index ce86120ce9d526..42e132808f0035 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -21,7 +21,10 @@ void __iowrite32_copy(void __iomem *to, const void *from, size_t count);
 #endif
 
 void __ioread32_copy(void *to, const void __iomem *from, size_t count);
+
+#ifndef __iowrite64_copy
 void __iowrite64_copy(void __iomem *to, const void *from, size_t count);
+#endif
 
 #ifdef CONFIG_MMU
 int ioremap_page_range(unsigned long addr, unsigned long end,
diff --git a/lib/iomap_copy.c b/lib/iomap_copy.c
index 8ddcbb53507dfe..2fd5712fb7c02b 100644
--- a/lib/iomap_copy.c
+++ b/lib/iomap_copy.c
@@ -60,9 +60,8 @@ EXPORT_SYMBOL_GPL(__ioread32_copy);
  * time.  Order of access is not guaranteed, nor is a memory barrier
  * performed afterwards.
  */
-void __attribute__((weak)) __iowrite64_copy(void __iomem *to,
-					    const void *from,
-					    size_t count)
+#ifndef __iowrite64_copy
+void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
 {
 #ifdef CONFIG_64BIT
 	u64 __iomem *dst = to;
@@ -75,5 +74,5 @@ void __attribute__((weak)) __iowrite64_copy(void __iomem *to,
 	__iowrite32_copy(to, from, count * 2);
 #endif
 }
-
 EXPORT_SYMBOL_GPL(__iowrite64_copy);
+#endif
-- 
2.43.2



* [PATCH v3 4/6] arm64/io: Provide a WC friendly __iowriteXX_copy()
  2024-04-11 16:46 [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
                   ` (2 preceding siblings ...)
  2024-04-11 16:46 ` [PATCH v3 3/6] s390: Stop using weak symbols for __iowrite64_copy() Jason Gunthorpe
@ 2024-04-11 16:46 ` Jason Gunthorpe
  2024-04-11 16:46 ` [PATCH v3 5/6] net: hns3: Remove io_stop_wc() calls after __iowrite64_copy() Jason Gunthorpe
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-11 16:46 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

The kernel provides driver support for using write combining IO memory
through the __iowriteXX_copy() API which is commonly used as an optional
optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.

iomap_copy.c provides a generic implementation as a simple 4/8 byte at a
time copy loop that has worked well with past ARM64 CPUs, giving a high
frequency of large TLPs being successfully formed.

However, modern ARM64 CPUs are quite sensitive to how the write combining
CPU HW is operated and a compiler-generated loop with intermixed
loads/stores is not sufficient to frequently generate a large TLP. The CPUs
would like to see the entire TLP generated by consecutive store
instructions from registers. Compilers like gcc tend to intermix loads and
stores and have poor code generation, in part because ARM64's writeq()
only ever emits the base-register "[xN]" addressing mode. However, even
with that resolved, compilers like clang still do not generate good code.

This means that on modern ARM64 CPUs the rate at which __iowriteXX_copy()
successfully generates large TLPs is very small (less than 1 in 10,000
tries), to the point that the use of WC is pointless.

Implement __iowrite32/64_copy() specifically for ARM64 and use inline
assembly to build consecutive blocks of STR instructions. Provide direct
support for 64/32/16 large TLP generation in this manner. Optimize for
common constant lengths so that the compiler can directly inline the store
blocks.

This brings the frequency of large TLP generation up to a high level that
is comparable with older CPU generations.

As the __iowriteXX_copy() family of APIs is intended for use with WC,
incorporate the DGH hint directly into the functions.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 arch/arm64/include/asm/io.h | 132 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/io.c      |  42 ++++++++++++
 2 files changed, 174 insertions(+)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 8d825522c55c84..4ff0ae3f6d6690 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -139,6 +139,138 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
 #define memcpy_fromio(a,c,l)	__memcpy_fromio((a),(c),(l))
 #define memcpy_toio(c,a,l)	__memcpy_toio((c),(a),(l))
 
+/*
+ * The ARM64 iowrite implementation is intended to support drivers that want to
+ * use write combining. For instance PCI drivers using write combining with a 64
+ * byte __iowrite64_copy() expect to get a 64 byte MemWr TLP on the PCIe bus.
+ *
+ * Newer ARM cores have sensitive write combining buffers, it is important that
+ * the stores be contiguous blocks of store instructions. Normal memcpy
+ * approaches have a very low chance to generate write combining.
+ *
+ * Since this is the only API on ARM64 that should be used with write combining
+ * it also integrates the DGH hint which is supposed to lower the latency to
+ * emit the large TLP from the CPU.
+ */
+
+static inline void __const_memcpy_toio_aligned32(volatile u32 __iomem *to,
+						 const u32 *from, size_t count)
+{
+	switch (count) {
+	case 8:
+		asm volatile("str %w0, [%8, #4 * 0]\n"
+			     "str %w1, [%8, #4 * 1]\n"
+			     "str %w2, [%8, #4 * 2]\n"
+			     "str %w3, [%8, #4 * 3]\n"
+			     "str %w4, [%8, #4 * 4]\n"
+			     "str %w5, [%8, #4 * 5]\n"
+			     "str %w6, [%8, #4 * 6]\n"
+			     "str %w7, [%8, #4 * 7]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
+			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
+		break;
+	case 4:
+		asm volatile("str %w0, [%4, #4 * 0]\n"
+			     "str %w1, [%4, #4 * 1]\n"
+			     "str %w2, [%4, #4 * 2]\n"
+			     "str %w3, [%4, #4 * 3]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "r"(to));
+		break;
+	case 2:
+		asm volatile("str %w0, [%2, #4 * 0]\n"
+			     "str %w1, [%2, #4 * 1]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "r"(to));
+		break;
+	case 1:
+		__raw_writel(*from, to);
+		break;
+	default:
+		BUILD_BUG();
+	}
+}
+
+void __iowrite32_copy_full(void __iomem *to, const void *from, size_t count);
+
+static inline void __const_iowrite32_copy(void __iomem *to, const void *from,
+					  size_t count)
+{
+	if (count == 8 || count == 4 || count == 2 || count == 1) {
+		__const_memcpy_toio_aligned32(to, from, count);
+		dgh();
+	} else {
+		__iowrite32_copy_full(to, from, count);
+	}
+}
+
+#define __iowrite32_copy(to, from, count)                  \
+	(__builtin_constant_p(count) ?                     \
+		 __const_iowrite32_copy(to, from, count) : \
+		 __iowrite32_copy_full(to, from, count))
+
+static inline void __const_memcpy_toio_aligned64(volatile u64 __iomem *to,
+						 const u64 *from, size_t count)
+{
+	switch (count) {
+	case 8:
+		asm volatile("str %x0, [%8, #8 * 0]\n"
+			     "str %x1, [%8, #8 * 1]\n"
+			     "str %x2, [%8, #8 * 2]\n"
+			     "str %x3, [%8, #8 * 3]\n"
+			     "str %x4, [%8, #8 * 4]\n"
+			     "str %x5, [%8, #8 * 5]\n"
+			     "str %x6, [%8, #8 * 6]\n"
+			     "str %x7, [%8, #8 * 7]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
+			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
+		break;
+	case 4:
+		asm volatile("str %x0, [%4, #8 * 0]\n"
+			     "str %x1, [%4, #8 * 1]\n"
+			     "str %x2, [%4, #8 * 2]\n"
+			     "str %x3, [%4, #8 * 3]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "r"(to));
+		break;
+	case 2:
+		asm volatile("str %x0, [%2, #8 * 0]\n"
+			     "str %x1, [%2, #8 * 1]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "r"(to));
+		break;
+	case 1:
+		__raw_writeq(*from, to);
+		break;
+	default:
+		BUILD_BUG();
+	}
+}
+
+void __iowrite64_copy_full(void __iomem *to, const void *from, size_t count);
+
+static inline void __const_iowrite64_copy(void __iomem *to, const void *from,
+					  size_t count)
+{
+	if (count == 8 || count == 4 || count == 2 || count == 1) {
+		__const_memcpy_toio_aligned64(to, from, count);
+		dgh();
+	} else {
+		__iowrite64_copy_full(to, from, count);
+	}
+}
+
+#define __iowrite64_copy(to, from, count)                  \
+	(__builtin_constant_p(count) ?                     \
+		 __const_iowrite64_copy(to, from, count) : \
+		 __iowrite64_copy_full(to, from, count))
+
 /*
  * I/O memory mapping functions.
  */
diff --git a/arch/arm64/kernel/io.c b/arch/arm64/kernel/io.c
index aa7a4ec6a3ae6f..ef48089fbfe1a4 100644
--- a/arch/arm64/kernel/io.c
+++ b/arch/arm64/kernel/io.c
@@ -37,6 +37,48 @@ void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
 }
 EXPORT_SYMBOL(__memcpy_fromio);
 
+/*
+ * This generates a memcpy that works on a from/to address which is aligned to
+ * bits. Count is in terms of the number of bits sized quantities to copy. It
+ * optimizes to use the STR groupings when possible so that it is WC friendly.
+ */
+#define memcpy_toio_aligned(to, from, count, bits)                        \
+	({                                                                \
+		volatile u##bits __iomem *_to = to;                       \
+		const u##bits *_from = from;                              \
+		size_t _count = count;                                    \
+		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
+                                                                          \
+		for (; _from < _end_from; _from += 8, _to += 8)           \
+			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
+		if ((_count % 8) >= 4) {                                  \
+			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
+			_from += 4;                                       \
+			_to += 4;                                         \
+		}                                                         \
+		if ((_count % 4) >= 2) {                                  \
+			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
+			_from += 2;                                       \
+			_to += 2;                                         \
+		}                                                         \
+		if (_count % 2)                                           \
+			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
+	})
+
+void __iowrite64_copy_full(void __iomem *to, const void *from, size_t count)
+{
+	memcpy_toio_aligned(to, from, count, 64);
+	dgh();
+}
+EXPORT_SYMBOL(__iowrite64_copy_full);
+
+void __iowrite32_copy_full(void __iomem *to, const void *from, size_t count)
+{
+	memcpy_toio_aligned(to, from, count, 32);
+	dgh();
+}
+EXPORT_SYMBOL(__iowrite32_copy_full);
+
 /*
  * Copy data from "real" memory space to IO memory space.
  */
-- 
2.43.2



* [PATCH v3 5/6] net: hns3: Remove io_stop_wc() calls after __iowrite64_copy()
  2024-04-11 16:46 [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
                   ` (3 preceding siblings ...)
  2024-04-11 16:46 ` [PATCH v3 4/6] arm64/io: Provide a WC friendly __iowriteXX_copy() Jason Gunthorpe
@ 2024-04-11 16:46 ` Jason Gunthorpe
  2024-04-11 16:46 ` [PATCH v3 6/6] IB/mlx5: Use __iowrite64_copy() for write combining stores Jason Gunthorpe
  2024-04-23  0:18 ` [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
  6 siblings, 0 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-11 16:46 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

Now that the ARM64 arch implementation does the DGH as part of
__iowrite64_copy(), there is no reason to open code this in drivers.

Reviewed-by: Jijie Shao <shaojijie@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 19668a8d22f76a..04b9e86363f8fc 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2068,8 +2068,6 @@ static void hns3_tx_push_bd(struct hns3_enet_ring *ring, int num)
 	__iowrite64_copy(ring->tqp->mem_base, desc,
 			 (sizeof(struct hns3_desc) * HNS3_MAX_PUSH_BD_NUM) /
 			 HNS3_BYTES_PER_64BIT);
-
-	io_stop_wc();
 }
 
 static void hns3_tx_mem_doorbell(struct hns3_enet_ring *ring)
@@ -2088,8 +2086,6 @@ static void hns3_tx_mem_doorbell(struct hns3_enet_ring *ring)
 	u64_stats_update_begin(&ring->syncp);
 	ring->stats.tx_mem_doorbell += ring->pending_buf;
 	u64_stats_update_end(&ring->syncp);
-
-	io_stop_wc();
 }
 
 static void hns3_tx_doorbell(struct hns3_enet_ring *ring, int num,
-- 
2.43.2



* [PATCH v3 6/6] IB/mlx5: Use __iowrite64_copy() for write combining stores
  2024-04-11 16:46 [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
                   ` (4 preceding siblings ...)
  2024-04-11 16:46 ` [PATCH v3 5/6] net: hns3: Remove io_stop_wc() calls after __iowrite64_copy() Jason Gunthorpe
@ 2024-04-11 16:46 ` Jason Gunthorpe
  2024-04-16  8:29   ` Leon Romanovsky
  2024-04-23  0:18 ` [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
  6 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-11 16:46 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

mlx5 has a built in self-test at driver startup to evaluate if the
platform supports write combining to generate a 64 byte PCIe TLP or
not. This has proven necessary because a lot of common scenarios end up
with broken write combining (especially inside virtual machines) and there
is no other way to learn this information.

This self test has been consistently failing on new ARM64 CPU
designs (specifically with NVIDIA Grace's implementation of Neoverse
V2). The C loop around writeq() generates some pretty terrible ARM64
assembly, but historically this has worked on a lot of existing ARM64 CPUs
till now.

We see it succeed about 1 time in 10,000 on the worst affected
systems. The CPU architects speculate that the load instructions
interspersed with the stores make the WC buffers statistically flush too
often and thus the generation of large TLPs becomes infrequent. This makes
the boot up test unreliable in that it indicates no write-combining; however,
userspace would be fine since it uses an ST4 instruction.

Further, S390 has similar issues where only the special zpci_memcpy_toio()
will actually generate large TLPs, and the open coded loop does not
trigger it at all.

Fix both ARM64 and S390 by switching to __iowrite64_copy(), which now
provides architecture-specific variants that have a high chance of
generating a large TLP with write combining. x86 continues to use a
similar writeq loop in the generic __iowrite64_copy().

Fixes: 11f552e21755 ("IB/mlx5: Test write combining support")
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mem.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mem.c b/drivers/infiniband/hw/mlx5/mem.c
index 96ffbbaf0a73d1..5a22be14d958f2 100644
--- a/drivers/infiniband/hw/mlx5/mem.c
+++ b/drivers/infiniband/hw/mlx5/mem.c
@@ -30,6 +30,7 @@
  * SOFTWARE.
  */
 
+#include <linux/io.h>
 #include <rdma/ib_umem_odp.h>
 #include "mlx5_ib.h"
 #include <linux/jiffies.h>
@@ -108,7 +109,6 @@ static int post_send_nop(struct mlx5_ib_dev *dev, struct ib_qp *ibqp, u64 wr_id,
 	__be32 mmio_wqe[16] = {};
 	unsigned long flags;
 	unsigned int idx;
-	int i;
 
 	if (unlikely(dev->mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR))
 		return -EIO;
@@ -148,10 +148,8 @@ static int post_send_nop(struct mlx5_ib_dev *dev, struct ib_qp *ibqp, u64 wr_id,
 	 * we hit doorbell
 	 */
 	wmb();
-	for (i = 0; i < 8; i++)
-		mlx5_write64(&mmio_wqe[i * 2],
-			     bf->bfreg->map + bf->offset + i * 8);
-	io_stop_wc();
+	__iowrite64_copy(bf->bfreg->map + bf->offset, mmio_wqe,
+			 sizeof(mmio_wqe) / 8);
 
 	bf->offset ^= bf->buf_size;
 
-- 
2.43.2



* Re: [PATCH v3 3/6] s390: Stop using weak symbols for __iowrite64_copy()
  2024-04-11 16:46 ` [PATCH v3 3/6] s390: Stop using weak symbols for __iowrite64_copy() Jason Gunthorpe
@ 2024-04-11 20:23   ` Arnd Bergmann
  0 siblings, 0 replies; 11+ messages in thread
From: Arnd Bergmann @ 2024-04-11 20:23 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, Andrew Morton,
	Christian Borntraeger, Borislav Petkov, Dave Hansen,
	David S . Miller, Eric Dumazet, Gerald Schaefer, Vasily Gorbik,
	Heiko Carstens, H. Peter Anvin, Justin Stitt, Jakub Kicinski,
	Leon Romanovsky, linux-rdma, linux-s390, llvm, Ingo Molnar,
	Bill Wendling, Nathan Chancellor, Nick Desaulniers, Netdev,
	Paolo Abeni, Salil Mehta, Sven Schnelle, Thomas Gleixner, x86,
	Yisen Zhuang
  Cc: Catalin Marinas, Leon Romanovsky, Linux-Arch, linux-arm-kernel,
	Mark Rutland, Michael Guralnik, patches, Niklas Schnelle,
	Jijie Shao, Will Deacon

On Thu, Apr 11, 2024, at 18:46, Jason Gunthorpe wrote:
> Complete switching the __iowriteXX_copy() routines over to use #define and
> arch provided inline/macro functions instead of weak symbols.
>
> S390 has an implementation that simply calls another memcpy
> function. Inline this so the callers don't have to do two jumps.
>
> Acked-by: Niklas Schnelle <schnelle@linux.ibm.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  arch/s390/include/asm/io.h | 7 +++++++
>  arch/s390/pci/pci.c        | 6 ------
>  include/linux/io.h         | 3 +++
>  lib/iomap_copy.c           | 7 +++----
>  4 files changed, 13 insertions(+), 10 deletions(-)

For the common code bits:

Acked-by: Arnd Bergmann <arnd@arndb.de>

> -void __attribute__((weak)) __iowrite64_copy(void __iomem *to,
> -					    const void *from,
> -					    size_t count)
> +#ifndef __iowrite64_copy
> +void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
>  {

I'm always happy to see __weak functions get cleaned up.

      Arnd


* Re: [PATCH v3 1/6] x86: Stop using weak symbols for __iowrite32_copy()
  2024-04-11 16:46 ` [PATCH v3 1/6] x86: Stop using weak symbols for __iowrite32_copy() Jason Gunthorpe
@ 2024-04-11 20:24   ` Arnd Bergmann
  0 siblings, 0 replies; 11+ messages in thread
From: Arnd Bergmann @ 2024-04-11 20:24 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, Andrew Morton,
	Christian Borntraeger, Borislav Petkov, Dave Hansen,
	David S . Miller, Eric Dumazet, Gerald Schaefer, Vasily Gorbik,
	Heiko Carstens, H. Peter Anvin, Justin Stitt, Jakub Kicinski,
	Leon Romanovsky, linux-rdma, linux-s390, llvm, Ingo Molnar,
	Bill Wendling, Nathan Chancellor, Nick Desaulniers, Netdev,
	Paolo Abeni, Salil Mehta, Sven Schnelle, Thomas Gleixner, x86,
	Yisen Zhuang
  Cc: Catalin Marinas, Leon Romanovsky, Linux-Arch, linux-arm-kernel,
	Mark Rutland, Michael Guralnik, patches, Niklas Schnelle,
	Jijie Shao, Will Deacon

On Thu, Apr 11, 2024, at 18:46, Jason Gunthorpe wrote:
>  arch/x86/include/asm/io.h    | 17 +++++++++++++++++
>  arch/x86/lib/Makefile        |  1 -
>  arch/x86/lib/iomap_copy_64.S | 15 ---------------
>  include/linux/io.h           |  5 ++++-
>  lib/iomap_copy.c             |  6 +++---
>  5 files changed, 24 insertions(+), 20 deletions(-)
>  delete mode 100644 arch/x86/lib/iomap_copy_64.S

Acked-by: Arnd Bergmann <arnd@arndb.de>


* Re: [PATCH v3 6/6] IB/mlx5: Use __iowrite64_copy() for write combining stores
  2024-04-11 16:46 ` [PATCH v3 6/6] IB/mlx5: Use __iowrite64_copy() for write combining stores Jason Gunthorpe
@ 2024-04-16  8:29   ` Leon Romanovsky
  0 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2024-04-16  8:29 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, linux-rdma, linux-s390, llvm,
	Ingo Molnar, Bill Wendling, Nathan Chancellor, Nick Desaulniers,
	netdev, Paolo Abeni, Salil Mehta, Sven Schnelle, Thomas Gleixner,
	x86, Yisen Zhuang, Arnd Bergmann, Catalin Marinas, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

On Thu, Apr 11, 2024 at 01:46:19PM -0300, Jason Gunthorpe wrote:
> mlx5 has a built in self-test at driver startup to evaluate if the
> platform supports write combining to generate a 64 byte PCIe TLP or
> not. This has proven necessary because a lot of common scenarios end up
> with broken write combining (especially inside virtual machines) and there
> is no other way to learn this information.
> 
> This self test has been consistently failing on new ARM64 CPU
> designs (specifically with NVIDIA Grace's implementation of Neoverse
> V2). The C loop around writeq() generates some pretty terrible ARM64
> assembly, but historically this has worked on a lot of existing ARM64 CPUs
> till now.
> 
> We see it succeed about 1 time in 10,000 on the worst affected
> systems. The CPU architects speculate that the load instructions
> interspersed with the stores make the WC buffers statistically flush too
> often and thus the generation of large TLPs becomes infrequent. This makes
> the boot up test unreliable in that it indicates no write-combining; however,
> userspace would be fine since it uses an ST4 instruction.
> 
> Further, S390 has similar issues where only the special zpci_memcpy_toio()
> will actually generate large TLPs, and the open coded loop does not
> trigger it at all.
> 
> Fix both ARM64 and S390 by switching to __iowrite64_copy(), which now
> provides architecture-specific variants that have a high chance of
> generating a large TLP with write combining. x86 continues to use a
> similar writeq loop in the generic __iowrite64_copy().
> 
> Fixes: 11f552e21755 ("IB/mlx5: Test write combining support")
> Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/infiniband/hw/mlx5/mem.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 

Thanks,
Acked-by: Leon Romanovsky <leonro@nvidia.com>


* Re: [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores
  2024-04-11 16:46 [PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores Jason Gunthorpe
                   ` (5 preceding siblings ...)
  2024-04-11 16:46 ` [PATCH v3 6/6] IB/mlx5: Use __iowrite64_copy() for write combining stores Jason Gunthorpe
@ 2024-04-23  0:18 ` Jason Gunthorpe
  6 siblings, 0 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2024-04-23  0:18 UTC (permalink / raw)
  To: Alexander Gordeev, Andrew Morton, Christian Borntraeger,
	Borislav Petkov, Dave Hansen, David S. Miller, Eric Dumazet,
	Gerald Schaefer, Vasily Gorbik, Heiko Carstens, H. Peter Anvin,
	Justin Stitt, Jakub Kicinski, Leon Romanovsky, linux-rdma,
	linux-s390, llvm, Ingo Molnar, Bill Wendling, Nathan Chancellor,
	Nick Desaulniers, netdev, Paolo Abeni, Salil Mehta,
	Sven Schnelle, Thomas Gleixner, x86, Yisen Zhuang
  Cc: Arnd Bergmann, Catalin Marinas, Leon Romanovsky, linux-arch,
	linux-arm-kernel, Mark Rutland, Michael Guralnik, patches,
	Niklas Schnelle, Jijie Shao, Will Deacon

On Thu, Apr 11, 2024 at 01:46:13PM -0300, Jason Gunthorpe wrote:
> Jason Gunthorpe (6):
>   x86: Stop using weak symbols for __iowrite32_copy()
>   s390: Implement __iowrite32_copy()
>   s390: Stop using weak symbols for __iowrite64_copy()
>   arm64/io: Provide a WC friendly __iowriteXX_copy()
>   net: hns3: Remove io_stop_wc() calls after __iowrite64_copy()
>   IB/mlx5: Use __iowrite64_copy() for write combining stores

Applied to rdma's for-next, thanks all

Jason

