* [PATCH v4 0/2] RISC-V: Probe for misaligned access speed
@ 2023-08-18 19:41 ` Evan Green
  0 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2023-08-18 19:41 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Randy Dunlap, Heiko Stuebner, linux-doc, Björn Töpel,
	Conor Dooley, Guo Ren, Jisheng Zhang, linux-riscv,
	Samuel Holland, Sia Jee Heng, Marc Zyngier, Masahiro Yamada,
	Evan Green, Greentime Hu, Simon Hosie, Andrew Jones, Albert Ou,
	Alexandre Ghiti, Ley Foon Tan, Paul Walmsley, Anup Patel,
	Jonathan Corbet, linux-kernel, Xianting Tian, David Laight,
	Palmer Dabbelt, Andy Chiu


The current setting for the hwprobe bit indicating misaligned access
speed is controlled by a vendor-specific feature probe function. This is
essentially a per-SoC table we have to maintain on behalf of each vendor
going forward. Let's convert that instead to something we detect at
runtime.

We have two assembly routines at the heart of our probe: one that
does a bunch of word-sized accesses (without aligning its input buffer),
and the other that does byte accesses. If we can move a larger number of
bytes using misaligned word accesses than we can with the same amount of
time doing byte accesses, then we can declare misaligned accesses as
"fast".

The tradeoff for reducing this maintenance burden is boot time. We spend
4-6 jiffies per core doing this measurement (0-2 waiting for a jiffy
edge, and 4 on the measurement itself). The timing loop was based on
raid6_choose_gen(), which uses (16+1)*N jiffies (where N is the number
of algorithms). By taking only the fastest iteration out of all
attempts for use in the comparison, variance between runs is very low.
On my THead C906, it looks like this:

[    0.047563] cpu0: Ratio of byte access time to unaligned word access is 4.34, unaligned accesses are fast

Several others have chimed in with results from slow machines using the
older algorithm, which took all runs into account, including noise like
interrupts. Even with that variation, the results indicate that in all
cases (fast, slow, and emulated) the measured numbers are nowhere near
each other (always several times apart).
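
For anyone wanting to consume the result from userspace, a rough
(untested) sketch using the hwprobe interface documented in
Documentation/riscv/hwprobe.rst could look like the following. It
assumes a uapi header install new enough to provide __NR_riscv_hwprobe
and <asm/hwprobe.h>:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>   /* struct riscv_hwprobe + key/value macros */

    int main(void)
    {
            struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

            /* cpusetsize == 0 and cpus == NULL means "all online CPUs". */
            if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
                    return 1;

            switch (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) {
            case RISCV_HWPROBE_MISALIGNED_FAST:
                    puts("unaligned accesses are fast");
                    break;
            case RISCV_HWPROBE_MISALIGNED_SLOW:
                    puts("unaligned accesses are slow");
                    break;
            default:
                    puts("unaligned access speed unknown/emulated/unsupported");
                    break;
            }
            return 0;
    }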


Changes in v4:
 - Avoid the bare 64-bit divide which fails to link on 32-bit systems,
   use div_u64() (Palmer, buildrobot)

Changes in v3:
 - Fix documentation indentation (Conor)
 - Rename __copy_..._unaligned() to __riscv_copy_..._unaligned() (Conor)
 - Renamed c0,c1 to start_cycles, end_cycles (Conor)
 - Renamed j0,j1 to start_jiffies, now
 - Renamed check_unaligned_access0() to
   check_unaligned_access_boot_cpu() (Conor)

Changes in v2:
 - Explain more in the commit message (Conor)
 - Use a new algorithm that looks for the fastest run (David)
 - Clarify documentation further (David and Conor)
 - Unify around a single word, "unaligned" (Conor)
 - Align asm operands, and other misc whitespace changes (Conor)

Evan Green (2):
  RISC-V: Probe for unaligned access speed
  RISC-V: alternative: Remove feature_probe_func

 Documentation/riscv/hwprobe.rst      |  11 ++-
 arch/riscv/errata/thead/errata.c     |   8 ---
 arch/riscv/include/asm/alternative.h |   5 --
 arch/riscv/include/asm/cpufeature.h  |   2 +
 arch/riscv/kernel/Makefile           |   1 +
 arch/riscv/kernel/alternative.c      |  19 -----
 arch/riscv/kernel/copy-unaligned.S   |  71 ++++++++++++++++++
 arch/riscv/kernel/copy-unaligned.h   |  13 ++++
 arch/riscv/kernel/cpufeature.c       | 104 +++++++++++++++++++++++++++
 arch/riscv/kernel/smpboot.c          |   3 +-
 10 files changed, 198 insertions(+), 39 deletions(-)
 create mode 100644 arch/riscv/kernel/copy-unaligned.S
 create mode 100644 arch/riscv/kernel/copy-unaligned.h

-- 
2.34.1


* [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-08-18 19:41 ` Evan Green
@ 2023-08-18 19:41   ` Evan Green
  -1 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2023-08-18 19:41 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Heiko Stuebner, linux-doc, Björn Töpel, Conor Dooley,
	Guo Ren, Evan Green, Jisheng Zhang, linux-riscv, Jonathan Corbet,
	Sia Jee Heng, Marc Zyngier, Masahiro Yamada, Greentime Hu,
	Simon Hosie, Andrew Jones, Albert Ou, Alexandre Ghiti,
	Ley Foon Tan, Paul Walmsley, Anup Patel, linux-kernel,
	Xianting Tian, David Laight, Palmer Dabbelt, Andy Chiu

Rather than deferring unaligned access speed determinations to a vendor
function, let's probe them and find out how fast they are. If we
determine that an unaligned word access is faster than N byte accesses,
mark the hardware's unaligned access as "fast". Otherwise, we mark
accesses as slow.

The algorithm itself runs for a fixed amount of jiffies. Within each
iteration it attempts to time a single loop, and then keeps only the best
(fastest) loop it saw. This algorithm was found to have lower variance from
run to run than my first attempt, which counted the total number of
iterations that could be done in that fixed amount of jiffies. By taking
only the best iteration in the loop, assuming at least one loop wasn't
perturbed by an interrupt, we eliminate the effects of interrupts and
other "warm up" factors like branch prediction. The only downside is it
depends on having an rdtime granular and accurate enough to measure a
single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
leave the unaligned setting at UNKNOWN.
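
Condensed, the word-copy measurement pass in check_unaligned_access()
below looks roughly like this (simplified from the actual patch; the
byte-copy pass and the error handling have the same shape):

    best = -1ULL;
    start_jiffies = jiffies;
    while ((now = jiffies) == start_jiffies)      /* wait for a jiffy edge */
            cpu_relax();

    while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
            start = get_cycles64();
            mb();             /* keep the cycle reads ordered WRT the copy */
            __riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
            mb();
            end = get_cycles64();
            if ((end - start) < best)
                    best = end - start;           /* keep only the fastest run */
    }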

There is a slight change in user-visible behavior here. Previously, all
boards except the THead C906 reported misaligned access speed of
UNKNOWN. C906 reported FAST. With this change, since we're now measuring
misaligned access speed on each hart, all RISC-V systems will have this
key set as either FAST or SLOW.

Currently, we don't have a way to confidently measure the difference between
SLOW and EMULATED, so we label anything not fast as SLOW. This will
mislabel some systems that are actually EMULATED as SLOW. When we get
support for delegating misaligned access traps to the kernel (as opposed
to the firmware quietly handling it), we can explicitly test in Linux to
see if unaligned accesses trap. Those systems will start to report
EMULATED, though older (today's) systems without that new SBI mechanism
will continue to report SLOW.

I've updated the documentation for those hwprobe values to reflect
this, specifically: SLOW may or may not be emulated by software, and FAST
means being faster than equivalent byte accesses. The change
in documentation is accurate with respect to both the former and current
behavior.

Signed-off-by: Evan Green <evan@rivosinc.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>

---

Changes in v4:
 - Avoid the bare 64-bit divide which fails to link on 32-bit systems,
   use div_u64() (Palmer, buildrobot)

Changes in v3:
 - Fix documentation indentation (Conor)
 - Rename __copy_..._unaligned() to __riscv_copy_..._unaligned() (Conor)
 - Renamed c0,c1 to start_cycles, end_cycles (Conor)
 - Renamed j0,j1 to start_jiffies, now
 - Renamed check_unaligned_access0() to
   check_unaligned_access_boot_cpu() (Conor)

Changes in v2:
 - Explain more in the commit message (Conor)
 - Use a new algorithm that looks for the fastest run (David)
 - Clarify documentation further (David and Conor)
 - Unify around a single word, "unaligned" (Conor)
 - Align asm operands, and other misc whitespace changes (Conor)

 Documentation/riscv/hwprobe.rst     |  11 ++-
 arch/riscv/include/asm/cpufeature.h |   2 +
 arch/riscv/kernel/Makefile          |   1 +
 arch/riscv/kernel/copy-unaligned.S  |  71 +++++++++++++++++++
 arch/riscv/kernel/copy-unaligned.h  |  13 ++++
 arch/riscv/kernel/cpufeature.c      | 104 ++++++++++++++++++++++++++++
 arch/riscv/kernel/smpboot.c         |   2 +
 7 files changed, 198 insertions(+), 6 deletions(-)
 create mode 100644 arch/riscv/kernel/copy-unaligned.S
 create mode 100644 arch/riscv/kernel/copy-unaligned.h

diff --git a/Documentation/riscv/hwprobe.rst b/Documentation/riscv/hwprobe.rst
index 19165ebd82ba..f63fd05f1a73 100644
--- a/Documentation/riscv/hwprobe.rst
+++ b/Documentation/riscv/hwprobe.rst
@@ -87,13 +87,12 @@ The following keys are defined:
     emulated via software, either in or below the kernel.  These accesses are
     always extremely slow.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are supported
-    in hardware, but are slower than the cooresponding aligned accesses
-    sequences.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are slower
+    than equivalent byte accesses.  Misaligned accesses may be supported
+    directly in hardware, or trapped and emulated by software.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are supported
-    in hardware and are faster than the cooresponding aligned accesses
-    sequences.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are faster
+    than equivalent byte accesses.
 
   * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNSUPPORTED`: Misaligned accesses are
     not supported at all and will generate a misaligned address fault.
diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 23fed53b8815..d0345bd659c9 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -30,4 +30,6 @@ DECLARE_PER_CPU(long, misaligned_access_speed);
 /* Per-cpu ISA extensions. */
 extern struct riscv_isainfo hart_isa[NR_CPUS];
 
+void check_unaligned_access(int cpu);
+
 #endif
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 506cc4a9a45a..7e6c464cdfe9 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -38,6 +38,7 @@ extra-y += vmlinux.lds
 obj-y	+= head.o
 obj-y	+= soc.o
 obj-$(CONFIG_RISCV_ALTERNATIVE) += alternative.o
+obj-y	+= copy-unaligned.o
 obj-y	+= cpu.o
 obj-y	+= cpufeature.o
 obj-y	+= entry.o
diff --git a/arch/riscv/kernel/copy-unaligned.S b/arch/riscv/kernel/copy-unaligned.S
new file mode 100644
index 000000000000..cfdecfbaad62
--- /dev/null
+++ b/arch/riscv/kernel/copy-unaligned.S
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Rivos Inc. */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+	.text
+
+/* void __riscv_copy_words_unaligned(void *, const void *, size_t) */
+/* Performs a memcpy without aligning buffers, using word loads and stores. */
+/* Note: The size is truncated to a multiple of 8 * SZREG */
+ENTRY(__riscv_copy_words_unaligned)
+	andi  a4, a2, ~((8*SZREG)-1)
+	beqz  a4, 2f
+	add   a3, a1, a4
+1:
+	REG_L a4,       0(a1)
+	REG_L a5,   SZREG(a1)
+	REG_L a6, 2*SZREG(a1)
+	REG_L a7, 3*SZREG(a1)
+	REG_L t0, 4*SZREG(a1)
+	REG_L t1, 5*SZREG(a1)
+	REG_L t2, 6*SZREG(a1)
+	REG_L t3, 7*SZREG(a1)
+	REG_S a4,       0(a0)
+	REG_S a5,   SZREG(a0)
+	REG_S a6, 2*SZREG(a0)
+	REG_S a7, 3*SZREG(a0)
+	REG_S t0, 4*SZREG(a0)
+	REG_S t1, 5*SZREG(a0)
+	REG_S t2, 6*SZREG(a0)
+	REG_S t3, 7*SZREG(a0)
+	addi  a0, a0, 8*SZREG
+	addi  a1, a1, 8*SZREG
+	bltu  a1, a3, 1b
+
+2:
+	ret
+END(__riscv_copy_words_unaligned)
+
+/* void __riscv_copy_bytes_unaligned(void *, const void *, size_t) */
+/* Performs a memcpy without aligning buffers, using only byte accesses. */
+/* Note: The size is truncated to a multiple of 8 */
+ENTRY(__riscv_copy_bytes_unaligned)
+	andi a4, a2, ~(8-1)
+	beqz a4, 2f
+	add  a3, a1, a4
+1:
+	lb   a4, 0(a1)
+	lb   a5, 1(a1)
+	lb   a6, 2(a1)
+	lb   a7, 3(a1)
+	lb   t0, 4(a1)
+	lb   t1, 5(a1)
+	lb   t2, 6(a1)
+	lb   t3, 7(a1)
+	sb   a4, 0(a0)
+	sb   a5, 1(a0)
+	sb   a6, 2(a0)
+	sb   a7, 3(a0)
+	sb   t0, 4(a0)
+	sb   t1, 5(a0)
+	sb   t2, 6(a0)
+	sb   t3, 7(a0)
+	addi a0, a0, 8
+	addi a1, a1, 8
+	bltu a1, a3, 1b
+
+2:
+	ret
+END(__riscv_copy_bytes_unaligned)
diff --git a/arch/riscv/kernel/copy-unaligned.h b/arch/riscv/kernel/copy-unaligned.h
new file mode 100644
index 000000000000..e3d70d35b708
--- /dev/null
+++ b/arch/riscv/kernel/copy-unaligned.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 Rivos, Inc.
+ */
+#ifndef __RISCV_KERNEL_COPY_UNALIGNED_H
+#define __RISCV_KERNEL_COPY_UNALIGNED_H
+
+#include <linux/types.h>
+
+void __riscv_copy_words_unaligned(void *dst, const void *src, size_t size);
+void __riscv_copy_bytes_unaligned(void *dst, const void *src, size_t size);
+
+#endif /* __RISCV_KERNEL_COPY_UNALIGNED_H */
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 71fb840ee246..72bbaf355067 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -19,12 +19,19 @@
 #include <asm/cacheflush.h>
 #include <asm/cpufeature.h>
 #include <asm/hwcap.h>
+#include <asm/hwprobe.h>
 #include <asm/patch.h>
 #include <asm/processor.h>
 #include <asm/vector.h>
 
+#include "copy-unaligned.h"
+
 #define NUM_ALPHA_EXTS ('z' - 'a' + 1)
 
+#define MISALIGNED_ACCESS_JIFFIES_LG2 1
+#define MISALIGNED_BUFFER_SIZE 0x4000
+#define MISALIGNED_COPY_SIZE ((MISALIGNED_BUFFER_SIZE / 2) - 0x80)
+
 unsigned long elf_hwcap __read_mostly;
 
 /* Host ISA bitmap */
@@ -555,6 +562,103 @@ unsigned long riscv_get_elf_hwcap(void)
 	return hwcap;
 }
 
+void check_unaligned_access(int cpu)
+{
+	u64 start_cycles, end_cycles;
+	u64 word_cycles;
+	u64 byte_cycles;
+	int ratio;
+	unsigned long start_jiffies, now;
+	struct page *page;
+	void *dst;
+	void *src;
+	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
+
+	page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE));
+	if (!page) {
+		pr_warn("Can't alloc pages to measure memcpy performance");
+		return;
+	}
+
+	/* Make an unaligned destination buffer. */
+	dst = (void *)((unsigned long)page_address(page) | 0x1);
+	/* Unalign src as well, but differently (off by 1 + 2 = 3). */
+	src = dst + (MISALIGNED_BUFFER_SIZE / 2);
+	src += 2;
+	word_cycles = -1ULL;
+	/* Do a warmup. */
+	__riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+	preempt_disable();
+	start_jiffies = jiffies;
+	while ((now = jiffies) == start_jiffies)
+		cpu_relax();
+
+	/*
+	 * For a fixed amount of time, repeatedly try the function, and take
+	 * the best time in cycles as the measurement.
+	 */
+	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+		start_cycles = get_cycles64();
+		/* Ensure the CSR read can't reorder WRT to the copy. */
+		mb();
+		__riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+		/* Ensure the copy ends before the end time is snapped. */
+		mb();
+		end_cycles = get_cycles64();
+		if ((end_cycles - start_cycles) < word_cycles)
+			word_cycles = end_cycles - start_cycles;
+	}
+
+	byte_cycles = -1ULL;
+	__riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+	start_jiffies = jiffies;
+	while ((now = jiffies) == start_jiffies)
+		cpu_relax();
+
+	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+		start_cycles = get_cycles64();
+		mb();
+		__riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+		mb();
+		end_cycles = get_cycles64();
+		if ((end_cycles - start_cycles) < byte_cycles)
+			byte_cycles = end_cycles - start_cycles;
+	}
+
+	preempt_enable();
+
+	/* Don't divide by zero. */
+	if (!word_cycles || !byte_cycles) {
+		pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned access speed\n",
+			cpu);
+
+		goto out;
+	}
+
+	if (word_cycles < byte_cycles)
+		speed = RISCV_HWPROBE_MISALIGNED_FAST;
+
+	ratio = div_u64((byte_cycles * 100), word_cycles);
+	pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
+		cpu,
+		ratio / 100,
+		ratio % 100,
+		(speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
+
+	per_cpu(misaligned_access_speed, cpu) = speed;
+
+out:
+	__free_pages(page, get_order(MISALIGNED_BUFFER_SIZE));
+}
+
+static int check_unaligned_access_boot_cpu(void)
+{
+	check_unaligned_access(0);
+	return 0;
+}
+
+arch_initcall(check_unaligned_access_boot_cpu);
+
 #ifdef CONFIG_RISCV_ALTERNATIVE
 /*
  * Alternative patch sites consider 48 bits when determining when to patch
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
index f4d6acb38dd0..00ddbd2364dc 100644
--- a/arch/riscv/kernel/smpboot.c
+++ b/arch/riscv/kernel/smpboot.c
@@ -26,6 +26,7 @@
 #include <linux/sched/task_stack.h>
 #include <linux/sched/mm.h>
 #include <asm/cpu_ops.h>
+#include <asm/cpufeature.h>
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/numa.h>
@@ -245,6 +246,7 @@ asmlinkage __visible void smp_callin(void)
 
 	numa_add_cpu(curr_cpuid);
 	set_cpu_online(curr_cpuid, 1);
+	check_unaligned_access(curr_cpuid);
 	probe_vendor_features(curr_cpuid);
 
 	if (has_vector()) {
-- 
2.34.1


* [PATCH v4 2/2] RISC-V: alternative: Remove feature_probe_func
  2023-08-18 19:41 ` Evan Green
@ 2023-08-18 19:41   ` Evan Green
  -1 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2023-08-18 19:41 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Anup Patel, Albert Ou, Heiko Stuebner, Samuel Holland,
	Ley Foon Tan, Marc Zyngier, Randy Dunlap, linux-kernel,
	Conor Dooley, David Laight, Palmer Dabbelt, Evan Green,
	Jisheng Zhang, Paul Walmsley, Greentime Hu, Simon Hosie,
	linux-riscv, Andrew Jones

Now that we measure unaligned memory copy speed and make that
determination generically, there are no more users of the vendor
feature_probe_func(). While it will probably need to come back at some
point, there are no users right now, so let's remove it until it's
needed.

Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>

---

(no changes since v1)

 arch/riscv/errata/thead/errata.c     |  8 --------
 arch/riscv/include/asm/alternative.h |  5 -----
 arch/riscv/kernel/alternative.c      | 19 -------------------
 arch/riscv/kernel/smpboot.c          |  1 -
 4 files changed, 33 deletions(-)

diff --git a/arch/riscv/errata/thead/errata.c b/arch/riscv/errata/thead/errata.c
index be84b14f0118..0554ed4bf087 100644
--- a/arch/riscv/errata/thead/errata.c
+++ b/arch/riscv/errata/thead/errata.c
@@ -120,11 +120,3 @@ void thead_errata_patch_func(struct alt_entry *begin, struct alt_entry *end,
 	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
 		local_flush_icache_all();
 }
-
-void thead_feature_probe_func(unsigned int cpu,
-			      unsigned long archid,
-			      unsigned long impid)
-{
-	if ((archid == 0) && (impid == 0))
-		per_cpu(misaligned_access_speed, cpu) = RISCV_HWPROBE_MISALIGNED_FAST;
-}
diff --git a/arch/riscv/include/asm/alternative.h b/arch/riscv/include/asm/alternative.h
index 6a41537826a7..58ccd2f8cab7 100644
--- a/arch/riscv/include/asm/alternative.h
+++ b/arch/riscv/include/asm/alternative.h
@@ -30,7 +30,6 @@
 #define ALT_OLD_PTR(a)			__ALT_PTR(a, old_offset)
 #define ALT_ALT_PTR(a)			__ALT_PTR(a, alt_offset)
 
-void probe_vendor_features(unsigned int cpu);
 void __init apply_boot_alternatives(void);
 void __init apply_early_boot_alternatives(void);
 void apply_module_alternatives(void *start, size_t length);
@@ -53,15 +52,11 @@ void thead_errata_patch_func(struct alt_entry *begin, struct alt_entry *end,
 			     unsigned long archid, unsigned long impid,
 			     unsigned int stage);
 
-void thead_feature_probe_func(unsigned int cpu, unsigned long archid,
-			      unsigned long impid);
-
 void riscv_cpufeature_patch_func(struct alt_entry *begin, struct alt_entry *end,
 				 unsigned int stage);
 
 #else /* CONFIG_RISCV_ALTERNATIVE */
 
-static inline void probe_vendor_features(unsigned int cpu) { }
 static inline void apply_boot_alternatives(void) { }
 static inline void apply_early_boot_alternatives(void) { }
 static inline void apply_module_alternatives(void *start, size_t length) { }
diff --git a/arch/riscv/kernel/alternative.c b/arch/riscv/kernel/alternative.c
index 6b75788c18e6..85056153fa23 100644
--- a/arch/riscv/kernel/alternative.c
+++ b/arch/riscv/kernel/alternative.c
@@ -27,8 +27,6 @@ struct cpu_manufacturer_info_t {
 	void (*patch_func)(struct alt_entry *begin, struct alt_entry *end,
 				  unsigned long archid, unsigned long impid,
 				  unsigned int stage);
-	void (*feature_probe_func)(unsigned int cpu, unsigned long archid,
-				   unsigned long impid);
 };
 
 static void riscv_fill_cpu_mfr_info(struct cpu_manufacturer_info_t *cpu_mfr_info)
@@ -43,7 +41,6 @@ static void riscv_fill_cpu_mfr_info(struct cpu_manufacturer_info_t *cpu_mfr_info
 	cpu_mfr_info->imp_id = sbi_get_mimpid();
 #endif
 
-	cpu_mfr_info->feature_probe_func = NULL;
 	switch (cpu_mfr_info->vendor_id) {
 #ifdef CONFIG_ERRATA_SIFIVE
 	case SIFIVE_VENDOR_ID:
@@ -53,7 +50,6 @@ static void riscv_fill_cpu_mfr_info(struct cpu_manufacturer_info_t *cpu_mfr_info
 #ifdef CONFIG_ERRATA_THEAD
 	case THEAD_VENDOR_ID:
 		cpu_mfr_info->patch_func = thead_errata_patch_func;
-		cpu_mfr_info->feature_probe_func = thead_feature_probe_func;
 		break;
 #endif
 	default:
@@ -143,20 +139,6 @@ void riscv_alternative_fix_offsets(void *alt_ptr, unsigned int len,
 	}
 }
 
-/* Called on each CPU as it starts */
-void probe_vendor_features(unsigned int cpu)
-{
-	struct cpu_manufacturer_info_t cpu_mfr_info;
-
-	riscv_fill_cpu_mfr_info(&cpu_mfr_info);
-	if (!cpu_mfr_info.feature_probe_func)
-		return;
-
-	cpu_mfr_info.feature_probe_func(cpu,
-					cpu_mfr_info.arch_id,
-					cpu_mfr_info.imp_id);
-}
-
 /*
  * This is called very early in the boot process (directly after we run
  * a feature detect on the boot CPU). No need to worry about other CPUs
@@ -211,7 +193,6 @@ void __init apply_boot_alternatives(void)
 	/* If called on non-boot cpu things could go wrong */
 	WARN_ON(smp_processor_id() != 0);
 
-	probe_vendor_features(0);
 	_apply_alternatives((struct alt_entry *)__alt_start,
 			    (struct alt_entry *)__alt_end,
 			    RISCV_ALTERNATIVES_BOOT);
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
index 00ddbd2364dc..1b8da4e40a4d 100644
--- a/arch/riscv/kernel/smpboot.c
+++ b/arch/riscv/kernel/smpboot.c
@@ -247,7 +247,6 @@ asmlinkage __visible void smp_callin(void)
 	numa_add_cpu(curr_cpuid);
 	set_cpu_online(curr_cpuid, 1);
 	check_unaligned_access(curr_cpuid);
-	probe_vendor_features(curr_cpuid);
 
 	if (has_vector()) {
 		if (riscv_v_setup_vsize())
-- 
2.34.1


* Re: [PATCH v4 0/2] RISC-V: Probe for misaligned access speed
  2023-08-18 19:41 ` Evan Green
@ 2023-08-30 20:30   ` patchwork-bot+linux-riscv
  -1 siblings, 0 replies; 30+ messages in thread
From: patchwork-bot+linux-riscv @ 2023-08-30 20:30 UTC (permalink / raw)
  To: Evan Green
  Cc: linux-riscv, palmer, rdunlap, heiko, linux-doc, bjorn,
	conor.dooley, guoren, jszhang, samuel, jeeheng.sia, maz,
	masahiroy, greentime.hu, shosie, ajones, aou, alexghiti,
	leyfoon.tan, paul.walmsley, apatel, corbet, linux-kernel,
	xianting.tian, David.Laight, palmer, andy.chiu

Hello:

This series was applied to riscv/linux.git (for-next)
by Palmer Dabbelt <palmer@rivosinc.com>:

On Fri, 18 Aug 2023 12:41:34 -0700 you wrote:
> The current setting for the hwprobe bit indicating misaligned access
> speed is controlled by a vendor-specific feature probe function. This is
> essentially a per-SoC table we have to maintain on behalf of each vendor
> going forward. Let's convert that instead to something we detect at
> runtime.
> 
> We have two assembly routines at the heart of our probe: one that
> does a bunch of word-sized accesses (without aligning its input buffer),
> and the other that does byte accesses. If we can move a larger number of
> bytes using misaligned word accesses than we can with the same amount of
> time doing byte accesses, then we can declare misaligned accesses as
> "fast".
> 
> [...]

Here is the summary with links:
  - [v4,1/2] RISC-V: Probe for unaligned access speed
    https://git.kernel.org/riscv/c/b98673c5b037
  - [v4,2/2] RISC-V: alternative: Remove feature_probe_func
    https://git.kernel.org/riscv/c/b6e3f6e009a1

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-08-18 19:41   ` Evan Green
@ 2023-09-13 12:36     ` Geert Uytterhoeven
  -1 siblings, 0 replies; 30+ messages in thread
From: Geert Uytterhoeven @ 2023-09-13 12:36 UTC (permalink / raw)
  To: Evan Green
  Cc: Palmer Dabbelt, Heiko Stuebner, linux-doc, Björn Töpel,
	Conor Dooley, Guo Ren, Jisheng Zhang, linux-riscv,
	Jonathan Corbet, Sia Jee Heng, Marc Zyngier, Masahiro Yamada,
	Greentime Hu, Simon Hosie, Andrew Jones, Albert Ou,
	Alexandre Ghiti, Ley Foon Tan, Paul Walmsley, Anup Patel,
	linux-kernel, Xianting Tian, David Laight, Palmer Dabbelt,
	Andy Chiu

Hi Evan,

On Fri, Aug 18, 2023 at 9:44 PM Evan Green <evan@rivosinc.com> wrote:
> Rather than deferring unaligned access speed determinations to a vendor
> function, let's probe them and find out how fast they are. If we
> determine that an unaligned word access is faster than N byte accesses,
> mark the hardware's unaligned access as "fast". Otherwise, we mark
> accesses as slow.
>
> The algorithm itself runs for a fixed amount of jiffies. Within each
> iteration it attempts to time a single loop, and then keeps only the best
> (fastest) loop it saw. This algorithm was found to have lower variance from
> run to run than my first attempt, which counted the total number of
> iterations that could be done in that fixed amount of jiffies. By taking
> only the best iteration in the loop, assuming at least one loop wasn't
> perturbed by an interrupt, we eliminate the effects of interrupts and
> other "warm up" factors like branch prediction. The only downside is it
> depends on having an rdtime granular and accurate enough to measure a
> single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
> leave the unaligned setting at UNKNOWN.
>
> There is a slight change in user-visible behavior here. Previously, all
> boards except the THead C906 reported misaligned access speed of
> UNKNOWN. C906 reported FAST. With this change, since we're now measuring
> misaligned access speed on each hart, all RISC-V systems will have this
> key set as either FAST or SLOW.
>
> Currently, we don't have a way to confidently measure the difference between
> SLOW and EMULATED, so we label anything not fast as SLOW. This will
> mislabel some systems that are actually EMULATED as SLOW. When we get
> support for delegating misaligned access traps to the kernel (as opposed
> to the firmware quietly handling it), we can explicitly test in Linux to
> see if unaligned accesses trap. Those systems will start to report
> EMULATED, though older (today's) systems without that new SBI mechanism
> will continue to report SLOW.
>
> I've updated the documentation for those hwprobe values to reflect
> this, specifically: SLOW may or may not be emulated by software, and FAST
> means being faster than equivalent byte accesses. The change
> in documentation is accurate with respect to both the former and current
> behavior.
>
> Signed-off-by: Evan Green <evan@rivosinc.com>
> Acked-by: Conor Dooley <conor.dooley@microchip.com>

Thanks for your patch, which is now commit 584ea6564bcaead2 ("RISC-V:
Probe for unaligned access speed") in v6.6-rc1.

On the boards I have, I get:

    rzfive:
        cpu0: Ratio of byte access time to unaligned word access is 1.05, unaligned accesses are fast

    icicle:
        cpu1: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow
        cpu2: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow
        cpu3: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow
        cpu0: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow

    k210:
        cpu1: Ratio of byte access time to unaligned word access is 0.02, unaligned accesses are slow
        cpu0: Ratio of byte access time to unaligned word access is 0.02, unaligned accesses are slow

    starlight:
        cpu1: Ratio of byte access time to unaligned word access is 0.01, unaligned accesses are slow
        cpu0: Ratio of byte access time to unaligned word access is 0.02, unaligned accesses are slow

    vexriscv/orangecrab:
        cpu0: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow

I am a bit surprised by the near-zero values.  Are these expected?
Thanks!

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
@ 2023-09-13 12:36     ` Geert Uytterhoeven
  0 siblings, 0 replies; 30+ messages in thread
From: Geert Uytterhoeven @ 2023-09-13 12:36 UTC (permalink / raw)
  To: Evan Green
  Cc: Palmer Dabbelt, Heiko Stuebner, linux-doc, Björn Töpel,
	Conor Dooley, Guo Ren, Jisheng Zhang, linux-riscv,
	Jonathan Corbet, Sia Jee Heng, Marc Zyngier, Masahiro Yamada,
	Greentime Hu, Simon Hosie, Andrew Jones, Albert Ou,
	Alexandre Ghiti, Ley Foon Tan, Paul Walmsley, Anup Patel,
	linux-kernel, Xianting Tian, David Laight, Palmer Dabbelt,
	Andy Chiu

Hi Evan,

On Fri, Aug 18, 2023 at 9:44 PM Evan Green <evan@rivosinc.com> wrote:
> Rather than deferring unaligned access speed determinations to a vendor
> function, let's probe them and find out how fast they are. If we
> determine that an unaligned word access is faster than N byte accesses,
> mark the hardware's unaligned access as "fast". Otherwise, we mark
> accesses as slow.
>
> The algorithm itself runs for a fixed amount of jiffies. Within each
> iteration it attempts to time a single loop, and then keeps only the best
> (fastest) loop it saw. This algorithm was found to have lower variance from
> run to run than my first attempt, which counted the total number of
> iterations that could be done in that fixed amount of jiffies. By taking
> only the best iteration in the loop, assuming at least one loop wasn't
> perturbed by an interrupt, we eliminate the effects of interrupts and
> other "warm up" factors like branch prediction. The only downside is it
> depends on having an rdtime granular and accurate enough to measure a
> single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
> leave the unaligned setting at UNKNOWN.
>
> There is a slight change in user-visible behavior here. Previously, all
> boards except the THead C906 reported misaligned access speed of
> UNKNOWN. C906 reported FAST. With this change, since we're now measuring
> misaligned access speed on each hart, all RISC-V systems will have this
> key set as either FAST or SLOW.
>
> Currently, we don't have a way to confidently measure the difference between
> SLOW and EMULATED, so we label anything not fast as SLOW. This will
> mislabel some systems that are actually EMULATED as SLOW. When we get
> support for delegating misaligned access traps to the kernel (as opposed
> to the firmware quietly handling it), we can explicitly test in Linux to
> see if unaligned accesses trap. Those systems will start to report
> EMULATED, though older (today's) systems without that new SBI mechanism
> will continue to report SLOW.
>
> I've updated the documentation for those hwprobe values to reflect
> this, specifically: SLOW may or may not be emulated by software, and FAST
> represents means being faster than equivalent byte accesses. The change
> in documentation is accurate with respect to both the former and current
> behavior.
>
> Signed-off-by: Evan Green <evan@rivosinc.com>
> Acked-by: Conor Dooley <conor.dooley@microchip.com>

Thanks for your patch, which is now commit 584ea6564bcaead2 ("RISC-V:
Probe for unaligned access speed") in v6.6-rc1.

On the boards I have, I get:

    rzfive:
        cpu0: Ratio of byte access time to unaligned word access is 1.05, unaligned accesses are fast

    icicle:

        cpu1: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow
        cpu2: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow
        cpu3: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow

        cpu0: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow

    k210:

        cpu1: Ratio of byte access time to unaligned word access is 0.02, unaligned accesses are slow
        cpu0: Ratio of byte access time to unaligned word access is 0.02, unaligned accesses are slow

    starlight:

        cpu1: Ratio of byte access time to unaligned word access is 0.01, unaligned accesses are slow
        cpu0: Ratio of byte access time to unaligned word access is 0.02, unaligned accesses are slow

    vexriscv/orangecrab:

        cpu0: Ratio of byte access time to unaligned word access is 0.00, unaligned accesses are slow

I am a bit surprised by the near-zero values.  Are these expected?
Thanks!

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-13 12:36     ` Geert Uytterhoeven
@ 2023-09-13 17:46       ` Evan Green
  -1 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2023-09-13 17:46 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Palmer Dabbelt, Heiko Stuebner, linux-doc, Björn Töpel,
	Conor Dooley, Guo Ren, Jisheng Zhang, linux-riscv,
	Jonathan Corbet, Sia Jee Heng, Marc Zyngier, Masahiro Yamada,
	Greentime Hu, Simon Hosie, Andrew Jones, Albert Ou,
	Alexandre Ghiti, Ley Foon Tan, Paul Walmsley, Anup Patel,
	linux-kernel, Xianting Tian, David Laight, Palmer Dabbelt,
	Andy Chiu

On Wed, Sep 13, 2023 at 5:36 AM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> Hi Evan,
>
> On Fri, Aug 18, 2023 at 9:44 PM Evan Green <evan@rivosinc.com> wrote:
> > Rather than deferring unaligned access speed determinations to a vendor
> > function, let's probe them and find out how fast they are. If we
> > determine that an unaligned word access is faster than N byte accesses,
> > mark the hardware's unaligned access as "fast". Otherwise, we mark
> > accesses as slow.
> >
> > The algorithm itself runs for a fixed amount of jiffies. Within each
> > iteration it attempts to time a single loop, and then keeps only the best
> > (fastest) loop it saw. This algorithm was found to have lower variance from
> > run to run than my first attempt, which counted the total number of
> > iterations that could be done in that fixed amount of jiffies. By taking
> > only the best iteration in the loop, assuming at least one loop wasn't
> > perturbed by an interrupt, we eliminate the effects of interrupts and
> > other "warm up" factors like branch prediction. The only downside is it
> > depends on having an rdtime granular and accurate enough to measure a
> > single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
> > leave the unaligned setting at UNKNOWN.
> >
> > There is a slight change in user-visible behavior here. Previously, all
> > boards except the THead C906 reported misaligned access speed of
> > UNKNOWN. C906 reported FAST. With this change, since we're now measuring
> > misaligned access speed on each hart, all RISC-V systems will have this
> > key set as either FAST or SLOW.
> >
> > Currently, we don't have a way to confidently measure the difference between
> > SLOW and EMULATED, so we label anything not fast as SLOW. This will
> > mislabel some systems that are actually EMULATED as SLOW. When we get
> > support for delegating misaligned access traps to the kernel (as opposed
> > to the firmware quietly handling it), we can explicitly test in Linux to
> > see if unaligned accesses trap. Those systems will start to report
> > EMULATED, though older (today's) systems without that new SBI mechanism
> > will continue to report SLOW.
> >
> > I've updated the documentation for those hwprobe values to reflect
> > this, specifically: SLOW may or may not be emulated by software, and FAST
> > means being faster than equivalent byte accesses. The change
> > in documentation is accurate with respect to both the former and current
> > behavior.
> >
> > Signed-off-by: Evan Green <evan@rivosinc.com>
> > Acked-by: Conor Dooley <conor.dooley@microchip.com>
>
> Thanks for your patch, which is now commit 584ea6564bcaead2 ("RISC-V:
> Probe for unaligned access speed") in v6.6-rc1.
>
> On the boards I have, I get:
>
>     rzfive:
>         cpu0: Ratio of byte access time to unaligned word access is
> 1.05, unaligned accesses are fast

Hrm, I'm a little surprised to be seeing this number come out so close
to 1. If you reboot a few times, what kind of variance do you get on
this?

>
>     icicle:
>
>         cpu1: Ratio of byte access time to unaligned word access is
> 0.00, unaligned accesses are slow
>         cpu2: Ratio of byte access time to unaligned word access is
> 0.00, unaligned accesses are slow
>         cpu3: Ratio of byte access time to unaligned word access is
> 0.00, unaligned accesses are slow
>
>         cpu0: Ratio of byte access time to unaligned word access is
> 0.00, unaligned accesses are slow
>
>     k210:
>
>         cpu1: Ratio of byte access time to unaligned word access is
> 0.02, unaligned accesses are slow
>         cpu0: Ratio of byte access time to unaligned word access is
> 0.02, unaligned accesses are slow
>
>     starlight:
>
>         cpu1: Ratio of byte access time to unaligned word access is
> 0.01, unaligned accesses are slow
>         cpu0: Ratio of byte access time to unaligned word access is
> 0.02, unaligned accesses are slow
>
>     vexriscv/orangecrab:
>
>         cpu0: Ratio of byte access time to unaligned word access is
> 0.00, unaligned accesses are slow
>
> I am a bit surprised by the near-zero values.  Are these expected?
> Thanks!

This could be expected, if firmware is trapping the unaligned accesses
and coming out >100x slower than a native access. If you're interested
in getting a little more resolution, you could try to print a few more
decimal places with something like (sorry gmail mangles the whitespace
on this):

diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 1cfbba65d11a..2c094037658a 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -632,11 +632,11 @@ void check_unaligned_access(int cpu)
        if (word_cycles < byte_cycles)
                speed = RISCV_HWPROBE_MISALIGNED_FAST;

-       ratio = div_u64((byte_cycles * 100), word_cycles);
-       pr_info("cpu%d: Ratio of byte access time to unaligned word
access is %d.%02d, unaligned accesses are %s\n",
+       ratio = div_u64((byte_cycles * 100000), word_cycles);
+       pr_info("cpu%d: Ratio of byte access time to unaligned word
access is %d.%05d, unaligned accesses are %s\n",
                cpu,
-               ratio / 100,
-               ratio % 100,
+               ratio / 100000,
+               ratio % 100000,
                (speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");

        per_cpu(misaligned_access_speed, cpu) = speed;

If you did, I'd be interested to see the results.
-Evan

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-13 17:46       ` Evan Green
@ 2023-09-14  7:32         ` Geert Uytterhoeven
  -1 siblings, 0 replies; 30+ messages in thread
From: Geert Uytterhoeven @ 2023-09-14  7:32 UTC (permalink / raw)
  To: Evan Green
  Cc: Palmer Dabbelt, Heiko Stuebner, linux-doc, Björn Töpel,
	Conor Dooley, Guo Ren, Jisheng Zhang, linux-riscv,
	Jonathan Corbet, Sia Jee Heng, Marc Zyngier, Masahiro Yamada,
	Greentime Hu, Simon Hosie, Andrew Jones, Albert Ou,
	Alexandre Ghiti, Ley Foon Tan, Paul Walmsley, Anup Patel,
	linux-kernel, Xianting Tian, David Laight, Palmer Dabbelt,
	Andy Chiu

Hi Evan,

On Wed, Sep 13, 2023 at 7:46 PM Evan Green <evan@rivosinc.com> wrote:
> On Wed, Sep 13, 2023 at 5:36 AM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > On Fri, Aug 18, 2023 at 9:44 PM Evan Green <evan@rivosinc.com> wrote:
> > > Rather than deferring unaligned access speed determinations to a vendor
> > > function, let's probe them and find out how fast they are. If we
> > > determine that an unaligned word access is faster than N byte accesses,
> > > mark the hardware's unaligned access as "fast". Otherwise, we mark
> > > accesses as slow.
> > >
> > > The algorithm itself runs for a fixed amount of jiffies. Within each
> > > iteration it attempts to time a single loop, and then keeps only the best
> > > (fastest) loop it saw. This algorithm was found to have lower variance from
> > > run to run than my first attempt, which counted the total number of
> > > iterations that could be done in that fixed amount of jiffies. By taking
> > > only the best iteration in the loop, assuming at least one loop wasn't
> > > perturbed by an interrupt, we eliminate the effects of interrupts and
> > > other "warm up" factors like branch prediction. The only downside is it
> > > depends on having an rdtime granular and accurate enough to measure a
> > > single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
> > > leave the unaligned setting at UNKNOWN.
> > >
> > > There is a slight change in user-visible behavior here. Previously, all
> > > boards except the THead C906 reported misaligned access speed of
> > > UNKNOWN. C906 reported FAST. With this change, since we're now measuring
> > > misaligned access speed on each hart, all RISC-V systems will have this
> > > key set as either FAST or SLOW.
> > >
> > > Currently, we don't have a way to confidently measure the difference between
> > > SLOW and EMULATED, so we label anything not fast as SLOW. This will
> > > mislabel some systems that are actually EMULATED as SLOW. When we get
> > > support for delegating misaligned access traps to the kernel (as opposed
> > > to the firmware quietly handling it), we can explicitly test in Linux to
> > > see if unaligned accesses trap. Those systems will start to report
> > > EMULATED, though older (today's) systems without that new SBI mechanism
> > > will continue to report SLOW.
> > >
> > > I've updated the documentation for those hwprobe values to reflect
> > > this, specifically: SLOW may or may not be emulated by software, and FAST
> > > means being faster than equivalent byte accesses. The change
> > > in documentation is accurate with respect to both the former and current
> > > behavior.
> > >
> > > Signed-off-by: Evan Green <evan@rivosinc.com>
> > > Acked-by: Conor Dooley <conor.dooley@microchip.com>
> >
> > Thanks for your patch, which is now commit 584ea6564bcaead2 ("RISC-V:
> > Probe for unaligned access speed") in v6.6-rc1.
> >
> > On the boards I have, I get:
> >
> >     rzfive:
> >         cpu0: Ratio of byte access time to unaligned word access is
> > 1.05, unaligned accesses are fast
>
> Hrm, I'm a little surprised to be seeing this number come out so close
> to 1. If you reboot a few times, what kind of variance do you get on
> this?

Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)

> >     icicle:
> >
> >         cpu1: Ratio of byte access time to unaligned word access is
> > 0.00, unaligned accesses are slow
> >         cpu2: Ratio of byte access time to unaligned word access is
> > 0.00, unaligned accesses are slow
> >         cpu3: Ratio of byte access time to unaligned word access is
> > 0.00, unaligned accesses are slow
> >
> >         cpu0: Ratio of byte access time to unaligned word access is
> > 0.00, unaligned accesses are slow

cpu1: Ratio of byte access time to unaligned word access is 0.00344, unaligned accesses are slow
cpu2: Ratio of byte access time to unaligned word access is 0.00343, unaligned accesses are slow
cpu3: Ratio of byte access time to unaligned word access is 0.00343, unaligned accesses are slow
cpu0: Ratio of byte access time to unaligned word access is 0.00340, unaligned accesses are slow

> >     k210:
> >
> >         cpu1: Ratio of byte access time to unaligned word access is
> > 0.02, unaligned accesses are slow
> >         cpu0: Ratio of byte access time to unaligned word access is
> > 0.02, unaligned accesses are slow

cpu1: Ratio of byte access time to unaligned word access is 0.02392, unaligned accesses are slow
cpu0: Ratio of byte access time to unaligned word access is 0.02084, unaligned accesses are slow

> >     starlight:
> >
> >         cpu1: Ratio of byte access time to unaligned word access is
> > 0.01, unaligned accesses are slow
> >         cpu0: Ratio of byte access time to unaligned word access is
> > 0.02, unaligned accesses are slow

cpu1: Ratio of byte access time to unaligned word access is 0.01872, unaligned accesses are slow
cpu0: Ratio of byte access time to unaligned word access is 0.01930, unaligned accesses are slow

> >     vexriscv/orangecrab:
> >
> >         cpu0: Ratio of byte access time to unaligned word access is
> > 0.00, unaligned accesses are slow

cpu0: Ratio of byte access time to unaligned word access is 0.00417, unaligned accesses are slow

> > I am a bit surprised by the near-zero values.  Are these expected?
>
> This could be expected, if firmware is trapping the unaligned accesses
> and coming out >100x slower than a native access. If you're interested
> in getting a little more resolution, you could try to print a few more
> decimal places with something like (sorry gmail mangles the whitespace
> on this):

Looks like you need to add one digit to get anything useful on half of the
systems.

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-14  7:32         ` Geert Uytterhoeven
@ 2023-09-14  8:46           ` David Laight
  -1 siblings, 0 replies; 30+ messages in thread
From: David Laight @ 2023-09-14  8:46 UTC (permalink / raw)
  To: 'Geert Uytterhoeven', Evan Green
  Cc: Palmer Dabbelt, Heiko Stuebner, linux-doc, Björn Töpel,
	Conor Dooley, Guo Ren, Jisheng Zhang, linux-riscv,
	Jonathan Corbet, Sia Jee Heng, Marc Zyngier, Masahiro Yamada,
	Greentime Hu, Simon Hosie, Andrew Jones, Albert Ou,
	Alexandre Ghiti, Ley Foon Tan, Paul Walmsley, Anup Patel,
	linux-kernel, Xianting Tian, Palmer Dabbelt, Andy Chiu

From: Geert Uytterhoeven
> Sent: 14 September 2023 08:33
...
> > >     rzfive:
> > >         cpu0: Ratio of byte access time to unaligned word access is
> > > 1.05, unaligned accesses are fast
> >
> > Hrm, I'm a little surprised to be seeing this number come out so close
> > to 1. If you reboot a few times, what kind of variance do you get on
> > this?
> 
> Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)

Would that match zero overhead unless the access crosses a
cache line boundary?
(I can't remember whether the test is using increasing addresses.)

...
> > >     vexriscv/orangecrab:
> > >
> > >         cpu0: Ratio of byte access time to unaligned word access is
> > > 0.00, unaligned accesses are slow
> 
> cpu0: Ratio of byte access time to unaligned word access is 0.00417,
> unaligned accesses are slow
> 
> > > I am a bit surprised by the near-zero values.  Are these expected?
> >
> > This could be expected, if firmware is trapping the unaligned accesses
> > and coming out >100x slower than a native access. If you're interested
> > in getting a little more resolution, you could try to print a few more
> > decimal places with something like (sorry gmail mangles the whitespace
> > on this):

I'd expect one of three possible values:
- 1.0x: Basically zero cost except for cache line/page boundaries.
- ~2: Hardware does two reads and merges the values.
- >100: Trap fixed up in software.

I'd think the '2' case could be considered fast.
You only need to time one access to see if it was a fault.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-14  8:46           ` David Laight
@ 2023-09-14 15:01             ` Evan Green
  -1 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2023-09-14 15:01 UTC (permalink / raw)
  To: David Laight
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Heiko Stuebner, linux-doc,
	Björn Töpel, Conor Dooley, Guo Ren, Jisheng Zhang,
	linux-riscv, Jonathan Corbet, Sia Jee Heng, Marc Zyngier,
	Masahiro Yamada, Greentime Hu, Simon Hosie, Andrew Jones,
	Albert Ou, Alexandre Ghiti, Ley Foon Tan, Paul Walmsley,
	Anup Patel, linux-kernel, Xianting Tian, Palmer Dabbelt,
	Andy Chiu

On Thu, Sep 14, 2023 at 1:47 AM David Laight <David.Laight@aculab.com> wrote:
>
> From: Geert Uytterhoeven
> > Sent: 14 September 2023 08:33
> ...
> > > >     rzfive:
> > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > 1.05, unaligned accesses are fast
> > >
> > > Hrm, I'm a little surprised to be seeing this number come out so close
> > > to 1. If you reboot a few times, what kind of variance do you get on
> > > this?
> >
> > Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)
>
> Would that match zero overhead unless the access crosses a
> cache line boundary?
> (I can't remember whether the test is using increasing addresses.)

Yes, the test does use increasing addresses; it copies across 4 pages.
We start with a warmup, so caching effects beyond L1 are largely not
taken into account.

>
> ...
> > > >     vexriscv/orangecrab:
> > > >
> > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > 0.00, unaligned accesses are slow
> >
> > cpu0: Ratio of byte access time to unaligned word access is 0.00417,
> > unaligned accesses are slow
> >
> > > > I am a bit surprised by the near-zero values.  Are these expected?
> > >
> > > This could be expected, if firmware is trapping the unaligned accesses
> > > and coming out >100x slower than a native access. If you're interested
> > > in getting a little more resolution, you could try to print a few more
> > > decimal places with something like (sorry gmail mangles the whitespace
> > > on this):
>
> I'd expect one of three possible values:
> - 1.0x: Basically zero cost except for cache line/page boundaries.
> - ~2: Hardware does two reads and merges the values.
> - >100: Trap fixed up in software.
>
> I'd think the '2' case could be considered fast.
> You only need to time one access to see if it was a fault.

We're comparing misaligned word accesses with byte accesses of the
same total size. So 1.0 means a misaligned word load is basically no
different from the equivalent eight byte loads. The goal was to help
people that are
forced to do odd loads and stores decide whether they are better off
moving by bytes or by misaligned words. (In contrast, the answer to
"should I do a misaligned word load or an aligned word load" is
generally always "do the aligned one if you can", so comparing those
two things didn't seem as useful).
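
If it helps make that concrete, a minimal userspace sketch of the
intended consumer (untested, and assuming the uAPI from this series:
struct riscv_hwprobe, RISCV_HWPROBE_KEY_CPUPERF_0 and the
RISCV_HWPROBE_MISALIGNED_* values from <asm/hwprobe.h>, invoked via the
raw syscall since no libc wrapper is assumed) would look something like:

#include <asm/hwprobe.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Nonzero if misaligned word copies are worth using on this system;
 * on any error we just fall back to byte/aligned copies. */
static int misaligned_words_are_fast(void)
{
        struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

        /* cpu_count == 0 / cpus == NULL asks about all online harts. */
        if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0)
                return 0;
        if (pair.key < 0)       /* kernel didn't recognize the key */
                return 0;

        return (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) ==
               RISCV_HWPROBE_MISALIGNED_FAST;
}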

We opted for 1.0 as a cutoff, since even at 1.05, you get a boost from
doing misaligned word loads over byte copies. I asked about the
variance because I don't want to see machines that change their mind
from boot to boot. I originally considered trying to create a "gray
zone" where the answer goes back to UNKNOWN, but in the end that just
moves the fiddly point rather than really eliminating it.

You're right that in theory we just need one perfect access to test,
but testing only once makes it susceptible to hiccups. We went with
doing it many times in a fixed period and taking the minimum to
hopefully remove noise like NMI-like things, branch prediction misses,
or cache eviction.
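
The take-the-minimum trick is easy to play with in userspace, too. A
rough sketch (untested, using clock_gettime() and memcpy() purely for
illustration; the kernel test instead times its own misaligned-word and
byte copy loops against rdtime, and a real benchmark would also have to
stop the compiler from optimizing the copy away) would be:

#include <stdint.h>
#include <string.h>
#include <time.h>

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Time the same copy many times and keep only the fastest run:
 * interrupts, cold caches and predictor warm-up can only make a run
 * slower, never faster, so the minimum approximates the quiet cost. */
static uint64_t best_copy_time(void *dst, const void *src, size_t len,
                               int runs)
{
        uint64_t best = UINT64_MAX;

        for (int i = 0; i < runs; i++) {
                uint64_t start = now_ns();

                memcpy(dst, src, len);

                uint64_t elapsed = now_ns() - start;
                if (elapsed < best)
                        best = elapsed;
        }
        return best;
}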

Geert,
Thanks for providing the numbers. Yes, we could add another digit to
the print. Though if you already know you're at least 100x slower,
maybe knowing exactly how much slower isn't super meaningful, just
very much avoid unaligned accesses on these systems :). Hopefully over
time the number of systems like this will dwindle.

-Evan

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-14 15:01             ` Evan Green
@ 2023-09-14 15:55               ` David Laight
  -1 siblings, 0 replies; 30+ messages in thread
From: David Laight @ 2023-09-14 15:55 UTC (permalink / raw)
  To: 'Evan Green'
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Heiko Stuebner, linux-doc,
	Björn Töpel, Conor Dooley, Guo Ren, Jisheng Zhang,
	linux-riscv, Jonathan Corbet, Sia Jee Heng, Marc Zyngier,
	Masahiro Yamada, Greentime Hu, Simon Hosie, Andrew Jones,
	Albert Ou, Alexandre Ghiti, Ley Foon Tan, Paul Walmsley,
	Anup Patel, linux-kernel, Xianting Tian, Palmer Dabbelt,
	Andy Chiu

From: Evan Green
> Sent: 14 September 2023 16:01
> 
> On Thu, Sep 14, 2023 at 1:47 AM David Laight <David.Laight@aculab.com> wrote:
> >
> > From: Geert Uytterhoeven
> > > Sent: 14 September 2023 08:33
> > ...
> > > > >     rzfive:
> > > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > > 1.05, unaligned accesses are fast
> > > >
> > > > Hrm, I'm a little surprised to be seeing this number come out so close
> > > > to 1. If you reboot a few times, what kind of variance do you get on
> > > > this?
> > >
> > > Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)
> >
> > Would that match zero overhead unless the access crosses a
> > cache line boundary?
> > (I can't remember whether the test is using increasing addresses.)
> 
> Yes, the test does use increasing addresses, it copies across 4 pages.
> We start with a warmup, so caching effects beyond L1 are largely not
> taken into account.

That seems entirely excessive.
If you want to avoid data cache issues (which you probably do)
then just repeating a single access would almost certainly
suffice.
Repeatedly using a short buffer (say 256 bytes) won't add
much loop overhead.
Although you may want to do a test that avoids transfers
that cross cache line and especially page boundaries.
Either of those could easily be much slower than a read
that is entirely within a cache line.

...
> > > > >     vexriscv/orangecrab:
> > > > >
> > > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > > 0.00, unaligned accesses are slow
> > >
> > > cpu0: Ratio of byte access time to unaligned word access is 0.00417,
> > > unaligned accesses are slow
> > >
> > > > > I am a bit surprised by the near-zero values.  Are these expected?
> > > >
> > > > This could be expected, if firmware is trapping the unaligned accesses
> > > > and coming out >100x slower than a native access. If you're interested
> > > > in getting a little more resolution, you could try to print a few more
> > > > decimal places with something like (sorry gmail mangles the whitespace
> > > > on this):
> >
> > I'd expect one of three possible values:
> > - 1.0x: Basically zero cost except for cache line/page boundaries.
> > - ~2: Hardware does two reads and merges the values.
> > - >100: Trap fixed up in software.
> >
> > I'd think the '2' case could be considered fast.
> > You only need to time one access to see if it was a fault.
> 
> We're comparing misaligned word accesses with byte accesses of the
> same total size. So 1.0 means a misaligned load is basically no
> different from 8 byte loads. The goal was to help people that are
> forced to do odd loads and stores decide whether they are better off
> moving by bytes or by misaligned words. (In contrast, the answer to
> "should I do a misaligned word load or an aligned word load" is
> generally always "do the aligned one if you can", so comparing those
> two things didn't seem as useful).

Ah, I'd have compared the cost of aligned accesses with misaligned ones.
That would tell you whether you really need to avoid them.
The cost of byte and aligned word accesses should be much the same
(for each access that is) - if not you've got a real bottleneck.

If a misaligned access is 8 times slower than an aligned one
it is still 'quite slow'.
I'd definitely call that 8 not 1 - even if you treat it as 'fast'.

For comparison you (well I) can write x86-64 asm for the ip-checksum
loop that will execute 1 memory read every clock (8 bytes/clock).
It is very slightly slower for misaligned buffers, but by less
than 1 clock per cache line.
That's what I'd call 1.0 :-)

I'd expect even simple hardware to do misaligned reads as two
reads and then merge the data - so it should really be no slower
than two separate aligned reads.
Since you'd expect a cpu to do an L1 data cache read every clock
(probably pipelined) the misaligned read should just add 1 clock.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-14 15:55               ` David Laight
@ 2023-09-14 16:36                 ` Evan Green
  -1 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2023-09-14 16:36 UTC (permalink / raw)
  To: David Laight
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Heiko Stuebner, linux-doc,
	Björn Töpel, Conor Dooley, Guo Ren, Jisheng Zhang,
	linux-riscv, Jonathan Corbet, Sia Jee Heng, Marc Zyngier,
	Masahiro Yamada, Greentime Hu, Simon Hosie, Andrew Jones,
	Albert Ou, Alexandre Ghiti, Ley Foon Tan, Paul Walmsley,
	Anup Patel, linux-kernel, Xianting Tian, Palmer Dabbelt,
	Andy Chiu

On Thu, Sep 14, 2023 at 8:55 AM David Laight <David.Laight@aculab.com> wrote:
>
> From: Evan Green
> > Sent: 14 September 2023 16:01
> >
> > On Thu, Sep 14, 2023 at 1:47 AM David Laight <David.Laight@aculab.com> wrote:
> > >
> > > From: Geert Uytterhoeven
> > > > Sent: 14 September 2023 08:33
> > > ...
> > > > > >     rzfive:
> > > > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > > > 1.05, unaligned accesses are fast
> > > > >
> > > > > Hrm, I'm a little surprised to be seeing this number come out so close
> > > > > to 1. If you reboot a few times, what kind of variance do you get on
> > > > > this?
> > > >
> > > > Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)
> > >
> > > Would that match zero overhead unless the access crosses a
> > > cache line boundary?
> > > (I can't remember whether the test is using increasing addresses.)
> >
> > Yes, the test does use increasing addresses, it copies across 4 pages.
> > We start with a warmup, so caching effects beyond L1 are largely not
> > taken into account.
>
> That seems entirely excessive.
> If you want to avoid data cache issues (which you probably do)
> then just repeating a single access would almost certainly
> suffice.
> Repeatedly using a short buffer (say 256 bytes) won't add
> much loop overhead.
> Although you may want to do a test that avoids transfers
> that cross cache line and especially page boundaries.
> Either of those could easily be much slower than a read
> that is entirely within a cache line.

We won't be faulting on any of these pages, and they should remain in
the TLB, so I don't expect many page boundary specific effects. If
there is a steep penalty for misaligned loads across a cache line,
such that it's worse than doing byte accesses, I want the test results
to be dinged for that.

>
> ...
> > > > > >     vexriscv/orangecrab:
> > > > > >
> > > > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > > > 0.00, unaligned accesses are slow
> > > >
> > > > cpu0: Ratio of byte access time to unaligned word access is 0.00417,
> > > > unaligned accesses are slow
> > > >
> > > > > > I am a bit surprised by the near-zero values.  Are these expected?
> > > > >
> > > > > This could be expected, if firmware is trapping the unaligned accesses
> > > > > and coming out >100x slower than a native access. If you're interested
> > > > > in getting a little more resolution, you could try to print a few more
> > > > > decimal places with something like (sorry gmail mangles the whitespace
> > > > > on this):
> > >
> > > I'd expect one of three possible values:
> > > - 1.0x: Basically zero cost except for cache line/page boundaries.
> > > - ~2: Hardware does two reads and merges the values.
> > > - >100: Trap fixed up in software.
> > >
> > > I'd think the '2' case could be considered fast.
> > > You only need to time one access to see if it was a fault.
> >
> > We're comparing misaligned word accesses with byte accesses of the
> > same total size. So 1.0 means a misaligned load is basically no
> > different from 8 byte loads. The goal was to help people that are
> > forced to do odd loads and stores decide whether they are better off
> > moving by bytes or by misaligned words. (In contrast, the answer to
> > "should I do a misaligned word load or an aligned word load" is
> > generally always "do the aligned one if you can", so comparing those
> > two things didn't seem as useful).
>
> Ah, I'd have compared the cost of aligned accesses with misaligned ones.
> That would tell you whether you really need to avoid them.
> The cost of byte and aligned word accesses should be much the same
> (for each access that is) - if not you've got a real bottleneck.
>
> If a misaligned access is 8 times slower than an aligned one
> it is still 'quite slow'.
> I'd definitely call that 8 not 1 - even if you treat it as 'fast'.

The number itself isn't exported or saved anywhere; it's just printed
as a diagnostic explanation of the final fast/slow designation.

Misaligned word loads are never going to be faster than aligned ones,
and aren't really going to be equal either. It's also generally not
something that causes software a lot of angst: we align most of our
buffers and structures with help from the compiler, and generally do
an aligned access whenever possible. It's the times when we're forced
to do odd sizes or accesses we know are already misaligned that this
hwprobe bit was designed to help. In those cases, users are forced to
decide if they should do a misaligned word access or byte accesses, so
we aim to provide that result.
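
As a purely illustrative (untested) consumer, a copy helper that gets
handed odd, possibly-misaligned buffers could pick its inner loop once
from that result:

#include <stddef.h>
#include <string.h>

/* Cached at startup from the hwprobe query sketched earlier in the
 * thread. */
static int misaligned_fast;

static void copy_bytes(unsigned char *dst, const unsigned char *src,
                       size_t n)
{
        while (n--)
                *dst++ = *src++;
}

/* Move by word-sized chunks even when src/dst aren't word aligned.
 * memcpy() of one word is the portable way to express the (possibly
 * misaligned) word move; whether the compiler really emits a single
 * load/store depends on the target and flags. */
static void copy_words_unaligned(unsigned char *dst,
                                 const unsigned char *src, size_t n)
{
        while (n >= sizeof(unsigned long)) {
                unsigned long tmp;

                memcpy(&tmp, src, sizeof(tmp));
                memcpy(dst, &tmp, sizeof(tmp));
                src += sizeof(tmp);
                dst += sizeof(tmp);
                n -= sizeof(tmp);
        }
        copy_bytes(dst, src, n);
}

static void copy_odd(unsigned char *dst, const unsigned char *src, size_t n)
{
        if (misaligned_fast)
                copy_words_unaligned(dst, src, n);
        else
                copy_bytes(dst, src, n);
}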

If there's a use case for knowing "misaligned accesses are exactly as
fast as aligned ones", we could detect this threshold in the same
test, and add another hwprobe bit for it.

-Evan

>
> For comparison you (well I) can write x86-64 asm for the ip-checksum
> loop that will execute 1 memory read every clock (8 bytes/clock).
> It is very slightly slower for misaligned buffers, but by less
> than 1 clock per cache line.
> That's what I'd call 1.0 :-)
>
> I'd expect even simple hardware to do misaligned reads as two
> reads and then merge the data - so should really be no slower
> than two separate aligned reads.
> Since you'd expect a cpu to do an L1 data cache read every clock
> (probably pipelined) the misaligned read should just add 1 clock.
>
>         David
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-14 16:36                 ` Evan Green
@ 2023-09-15  7:57                   ` David Laight
  -1 siblings, 0 replies; 30+ messages in thread
From: David Laight @ 2023-09-15  7:57 UTC (permalink / raw)
  To: 'Evan Green'
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Heiko Stuebner, linux-doc,
	Björn Töpel, Conor Dooley, Guo Ren, Jisheng Zhang,
	linux-riscv, Jonathan Corbet, Sia Jee Heng, Marc Zyngier,
	Masahiro Yamada, Greentime Hu, Simon Hosie, Andrew Jones,
	Albert Ou, Alexandre Ghiti, Ley Foon Tan, Paul Walmsley,
	Anup Patel, linux-kernel, Xianting Tian, Palmer Dabbelt,
	Andy Chiu

From: Evan Green
> Sent: 14 September 2023 17:37
> 
> On Thu, Sep 14, 2023 at 8:55 AM David Laight <David.Laight@aculab.com> wrote:
> >
> > From: Evan Green
> > > Sent: 14 September 2023 16:01
> > >
> > > On Thu, Sep 14, 2023 at 1:47 AM David Laight <David.Laight@aculab.com> wrote:
> > > >
> > > > From: Geert Uytterhoeven
> > > > > Sent: 14 September 2023 08:33
> > > > ...
> > > > > > >     rzfive:
> > > > > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > > > > 1.05, unaligned accesses are fast
> > > > > >
> > > > > > Hrm, I'm a little surprised to be seeing this number come out so close
> > > > > > to 1. If you reboot a few times, what kind of variance do you get on
> > > > > > this?
> > > > >
> > > > > Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)
> > > >
> > > > Would that match zero overhead unless the access crosses a
> > > > cache line boundary?
> > > > (I can't remember whether the test is using increasing addresses.)
> > >
> > > Yes, the test does use increasing addresses, it copies across 4 pages.
> > > We start with a warmup, so caching effects beyond L1 are largely not
> > > taken into account.
> >
> > That seems entirely excessive.
> > If you want to avoid data cache issues (which you probably do)
> > then just repeating a single access would almost certainly
> > suffice.
> > Repeatedly using a short buffer (say 256 bytes) won't add
> > much loop overhead.
> > Although you may want to do a test that avoids transfers
> > that cross cache line and especially page boundaries.
> > Either of those could easily be much slower than a read
> > that is entirely within a cache line.
> 
> We won't be faulting on any of these pages, and they should remain in
> the TLB, so I don't expect many page boundary specific effects. If
> there is a steep penalty for misaligned loads across a cache line,
> such that it's worse than doing byte accesses, I want the test results
> to be dinged for that.

That is an entirely different issue.

Are you absolutely certain that the reason 8 byte loads take
as long as a 64-bit mis-aligned load isn't because the entire
test is limited by L1 cache fills?

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-15  7:57                   ` David Laight
@ 2023-09-15 16:47                     ` Evan Green
  -1 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2023-09-15 16:47 UTC (permalink / raw)
  To: David Laight
  Cc: Geert Uytterhoeven, Palmer Dabbelt, Heiko Stuebner, linux-doc,
	Björn Töpel, Conor Dooley, Guo Ren, Jisheng Zhang,
	linux-riscv, Jonathan Corbet, Sia Jee Heng, Marc Zyngier,
	Masahiro Yamada, Greentime Hu, Simon Hosie, Andrew Jones,
	Albert Ou, Alexandre Ghiti, Ley Foon Tan, Paul Walmsley,
	Anup Patel, linux-kernel, Xianting Tian, Palmer Dabbelt,
	Andy Chiu

On Fri, Sep 15, 2023 at 12:57 AM David Laight <David.Laight@aculab.com> wrote:
>
> From: Evan Green
> > Sent: 14 September 2023 17:37
> >
> > On Thu, Sep 14, 2023 at 8:55 AM David Laight <David.Laight@aculab.com> wrote:
> > >
> > > From: Evan Green
> > > > Sent: 14 September 2023 16:01
> > > >
> > > > On Thu, Sep 14, 2023 at 1:47 AM David Laight <David.Laight@aculab.com> wrote:
> > > > >
> > > > > From: Geert Uytterhoeven
> > > > > > Sent: 14 September 2023 08:33
> > > > > ...
> > > > > > > >     rzfive:
> > > > > > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > > > > > 1.05, unaligned accesses are fast
> > > > > > >
> > > > > > > Hrm, I'm a little surprised to be seeing this number come out so close
> > > > > > > to 1. If you reboot a few times, what kind of variance do you get on
> > > > > > > this?
> > > > > >
> > > > > > Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)
> > > > >
> > > > > Would that match zero overhead unless the access crosses a
> > > > > cache line boundary?
> > > > > (I can't remember whether the test is using increasing addresses.)
> > > >
> > > > Yes, the test does use increasing addresses, it copies across 4 pages.
> > > > We start with a warmup, so caching effects beyond L1 are largely not
> > > > taken into account.
> > >
> > > That seems entirely excessive.
> > > If you want to avoid data cache issues (which you probably do)
> > > then just repeating a single access would almost certainly
> > > suffice.
> > > Repeatedly using a short buffer (say 256 bytes) won't add
> > > much loop overhead.
> > > Although you may want to do a test that avoids transfers
> > > that cross cache line and especially page boundaries.
> > > Either of those could easily be much slower than a read
> > > that is entirely within a cache line.
> >
> > We won't be faulting on any of these pages, and they should remain in
> > the TLB, so I don't expect many page boundary specific effects. If
> > there is a steep penalty for misaligned loads across a cache line,
> > such that it's worse than doing byte accesses, I want the test results
> > to be dinged for that.
>
> That is an entirely different issue.
>
> Are you absolutely certain that the reason 8 byte loads take
> as long as a 64-bit mis-aligned load isn't because the entire
> test is limited by L1 cache fills?

Fair question. I hacked up a little code [1] to retry the test at
several different sizes, as well as printing out the best and worst
times. I only have one piece of real hardware, the THead C906, which
has a 32KB L1 D-cache.

Here are the results at various sizes, starting with the original:
[    0.047556] cpu0: Ratio of byte access time to unaligned word access is 4.35, unaligned accesses are fast
[    0.047578] EVAN size 0x1f80 word cycles best 69 worst 29e, byte cycles best 1c9 worst 3b7
[    0.071549] cpu0: Ratio of byte access time to unaligned word access is 4.29, unaligned accesses are fast
[    0.071566] EVAN size 0x1000 word cycles best 36 worst 210, byte cycles best e8 worst 2b2
[    0.095540] cpu0: Ratio of byte access time to unaligned word access is 4.14, unaligned accesses are fast
[    0.095556] EVAN size 0x200 word cycles best 7 worst 1d9, byte cycles best 1d worst 1d5
[    0.119539] cpu0: Ratio of byte access time to unaligned word access is 5.00, unaligned accesses are fast
[    0.119555] EVAN size 0x100 word cycles best 3 worst 1a8, byte cycles best f worst 1b5
[    0.143538] cpu0: Ratio of byte access time to unaligned word access is 3.50, unaligned accesses are fast
[    0.143556] EVAN size 0x80 word cycles best 2 worst 1a5, byte cycles best 7 worst 1aa

[1] https://pastebin.com/uwwU2CVn

I don't see any cliffs as the numbers get smaller, so it seems to me
there are no working set issues. Geert, it might be interesting to see
these same results on the rzfive. The thing that made me uncomfortable
with the smaller buffer sizes is that they start to bump up against the
resolution of the timer. Another option would have been to time
several iterations, but I went with the larger buffer instead as I'd
hoped it would minimize other overhead like the function calls, branch
prediction, C loop management, etc.
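
For anyone who hasn't read the patch, the measurement is shaped roughly
like the sketch below (simplified pseudo-kernel C, from memory;
word_copy()/byte_copy(), MEASURE_JIFFIES and the FAST/SLOW shorthand are
stand-ins, not the real identifiers). The key point is that each sample
times one whole copy of the buffer and only the fastest sample survives,
which is why timer granularity matters once the buffer gets small:

	u64 word_best = U64_MAX, byte_best = U64_MAX;
	unsigned long timeout;
	u64 start, delta;

	word_copy(dst, src, size);			/* warm up caches */
	timeout = jiffies + MEASURE_JIFFIES;		/* fixed-length run */
	while (time_before(jiffies, timeout)) {
		start = csr_read(CSR_TIME);		/* rdtime */
		word_copy(dst, src, size);
		delta = csr_read(CSR_TIME) - start;
		if (delta < word_best)
			word_best = delta;
	}

	/* ...identical loop with byte_copy() filling byte_best... */

	if (!word_best || !byte_best)
		return;		/* rdtime too coarse: leave UNKNOWN */

	speed = (byte_best > word_best) ? FAST : SLOW;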

-Evan

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-09-14  7:32         ` Geert Uytterhoeven
@ 2023-10-19  6:37           ` Geert Uytterhoeven
  -1 siblings, 0 replies; 30+ messages in thread
From: Geert Uytterhoeven @ 2023-10-19  6:37 UTC (permalink / raw)
  To: Lad, Prabhakar
  Cc: Palmer Dabbelt, Heiko Stuebner, linux-doc, Björn Töpel,
	Conor Dooley, Guo Ren, Jisheng Zhang, linux-riscv,
	Jonathan Corbet, Sia Jee Heng, Marc Zyngier, Masahiro Yamada,
	Greentime Hu, Simon Hosie, Andrew Jones, Albert Ou,
	Alexandre Ghiti, Ley Foon Tan, Paul Walmsley, Anup Patel,
	linux-kernel, Xianting Tian, David Laight, Palmer Dabbelt,
	Andy Chiu, Evan Green

Hi Prabhakar,

On Thu, Sep 14, 2023 at 9:32 AM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> On Wed, Sep 13, 2023 at 7:46 PM Evan Green <evan@rivosinc.com> wrote:
> > On Wed, Sep 13, 2023 at 5:36 AM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > > On Fri, Aug 18, 2023 at 9:44 PM Evan Green <evan@rivosinc.com> wrote:
> > > > Rather than deferring unaligned access speed determinations to a vendor
> > > > function, let's probe them and find out how fast they are. If we
> > > > determine that an unaligned word access is faster than N byte accesses,
> > > > mark the hardware's unaligned access as "fast". Otherwise, we mark
> > > > accesses as slow.
> > > >
> > > > The algorithm itself runs for a fixed amount of jiffies. Within each
> > > > iteration it attempts to time a single loop, and then keeps only the best
> > > > (fastest) loop it saw. This algorithm was found to have lower variance from
> > > > run to run than my first attempt, which counted the total number of
> > > > iterations that could be done in that fixed amount of jiffies. By taking
> > > > only the best iteration in the loop, assuming at least one loop wasn't
> > > > perturbed by an interrupt, we eliminate the effects of interrupts and
> > > > other "warm up" factors like branch prediction. The only downside is it
> > > > depends on having an rdtime granular and accurate enough to measure a
> > > > single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
> > > > leave the unaligned setting at UNKNOWN.
> > > >
> > > > There is a slight change in user-visible behavior here. Previously, all
> > > > boards except the THead C906 reported misaligned access speed of
> > > > UNKNOWN. C906 reported FAST. With this change, since we're now measuring
> > > > misaligned access speed on each hart, all RISC-V systems will have this
> > > > key set as either FAST or SLOW.
> > > >
> > > > Currently, we don't have a way to confidently measure the difference between
> > > > SLOW and EMULATED, so we label anything not fast as SLOW. This will
> > > > mislabel some systems that are actually EMULATED as SLOW. When we get
> > > > support for delegating misaligned access traps to the kernel (as opposed
> > > > to the firmware quietly handling it), we can explicitly test in Linux to
> > > > see if unaligned accesses trap. Those systems will start to report
> > > > EMULATED, though older (today's) systems without that new SBI mechanism
> > > > will continue to report SLOW.
> > > >
> > > > I've updated the documentation for those hwprobe values to reflect
> > > > this, specifically: SLOW may or may not be emulated by software, and FAST
> > > > means being faster than equivalent byte accesses. The change
> > > > in documentation is accurate with respect to both the former and current
> > > > behavior.
> > > >
> > > > Signed-off-by: Evan Green <evan@rivosinc.com>
> > > > Acked-by: Conor Dooley <conor.dooley@microchip.com>
> > >
> > > Thanks for your patch, which is now commit 584ea6564bcaead2 ("RISC-V:
> > > Probe for unaligned access speed") in v6.6-rc1.
> > >
> > > On the boards I have, I get:
> > >
> > >     rzfive:
> > >         cpu0: Ratio of byte access time to unaligned word access is
> > > 1.05, unaligned accesses are fast
> >
> > Hrm, I'm a little surprised to be seeing this number come out so close
> > to 1. If you reboot a few times, what kind of variance do you get on
> > this?
>
> Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)

After upgrading the firmware from [1] to [2], this changed to
"0.00, unaligned accesses are slow".

[1] RZ-Five-ETH
    U-Boot 2020.10-g611c657e43 (Aug 26 2022 - 11:29:06 +0100)

[2] OpenSBI v1.3-75-g3cf0ea4
    U-Boot 2023.01-00209-g1804c8ab17 (Oct 04 2023 - 13:18:01 +0100)

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 1/2] RISC-V: Probe for unaligned access speed
  2023-10-19  6:37           ` Geert Uytterhoeven
@ 2023-10-19  7:51             ` Lad, Prabhakar
  -1 siblings, 0 replies; 30+ messages in thread
From: Lad, Prabhakar @ 2023-10-19  7:51 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Lad, Prabhakar, Palmer Dabbelt, Heiko Stuebner, linux-doc,
	Björn Töpel, Conor Dooley, Guo Ren, Jisheng Zhang,
	linux-riscv, Jonathan Corbet, Sia Jee Heng, Marc Zyngier,
	Masahiro Yamada, Greentime Hu, Simon Hosie, Andrew Jones,
	Albert Ou, Alexandre Ghiti, Ley Foon Tan, Paul Walmsley,
	Anup Patel, linux-kernel, Xianting Tian, David Laight,
	Palmer Dabbelt, Andy Chiu, Evan Green

Hi Geert,

On Thu, Oct 19, 2023 at 7:40 AM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> Hi Prabhakar,
>
> On Thu, Sep 14, 2023 at 9:32 AM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > On Wed, Sep 13, 2023 at 7:46 PM Evan Green <evan@rivosinc.com> wrote:
> > > On Wed, Sep 13, 2023 at 5:36 AM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > > > On Fri, Aug 18, 2023 at 9:44 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > Rather than deferring unaligned access speed determinations to a vendor
> > > > > function, let's probe them and find out how fast they are. If we
> > > > > determine that an unaligned word access is faster than N byte accesses,
> > > > > mark the hardware's unaligned access as "fast". Otherwise, we mark
> > > > > accesses as slow.
> > > > >
> > > > > The algorithm itself runs for a fixed amount of jiffies. Within each
> > > > > iteration it attempts to time a single loop, and then keeps only the best
> > > > > (fastest) loop it saw. This algorithm was found to have lower variance from
> > > > > run to run than my first attempt, which counted the total number of
> > > > > iterations that could be done in that fixed amount of jiffies. By taking
> > > > > only the best iteration in the loop, assuming at least one loop wasn't
> > > > > perturbed by an interrupt, we eliminate the effects of interrupts and
> > > > > other "warm up" factors like branch prediction. The only downside is it
> > > > > depends on having an rdtime granular and accurate enough to measure a
> > > > > single copy. If we ever manage to complete a loop in 0 rdtime ticks, we
> > > > > leave the unaligned setting at UNKNOWN.
> > > > >
> > > > > There is a slight change in user-visible behavior here. Previously, all
> > > > > boards except the THead C906 reported misaligned access speed of
> > > > > UNKNOWN. C906 reported FAST. With this change, since we're now measuring
> > > > > misaligned access speed on each hart, all RISC-V systems will have this
> > > > > key set as either FAST or SLOW.
> > > > >
> > > > > Currently, we don't have a way to confidently measure the difference between
> > > > > SLOW and EMULATED, so we label anything not fast as SLOW. This will
> > > > > mislabel some systems that are actually EMULATED as SLOW. When we get
> > > > > support for delegating misaligned access traps to the kernel (as opposed
> > > > > to the firmware quietly handling it), we can explicitly test in Linux to
> > > > > see if unaligned accesses trap. Those systems will start to report
> > > > > EMULATED, though older (today's) systems without that new SBI mechanism
> > > > > will continue to report SLOW.
> > > > >
> > > > > I've updated the documentation for those hwprobe values to reflect
> > > > > this, specifically: SLOW may or may not be emulated by software, and FAST
> > > > > means being faster than equivalent byte accesses. The change
> > > > > in documentation is accurate with respect to both the former and current
> > > > > behavior.
> > > > >
> > > > > Signed-off-by: Evan Green <evan@rivosinc.com>
> > > > > Acked-by: Conor Dooley <conor.dooley@microchip.com>
> > > >
> > > > Thanks for your patch, which is now commit 584ea6564bcaead2 ("RISC-V:
> > > > Probe for unaligned access speed") in v6.6-rc1.
> > > >
> > > > On the boards I have, I get:
> > > >
> > > >     rzfive:
> > > >         cpu0: Ratio of byte access time to unaligned word access is
> > > > 1.05, unaligned accesses are fast
> > >
> > > Hrm, I'm a little surprised to be seeing this number come out so close
> > > to 1. If you reboot a few times, what kind of variance do you get on
> > > this?
> >
> > Rock-solid at 1.05 (even with increased resolution: 1.05853 on 3 tries)
>
> After upgrading the firmware from [1] to [2], this changed to
> "0.00, unaligned accesses are slow".
>
> [1] RZ-Five-ETH
>     U-Boot 2020.10-g611c657e43 (Aug 26 2022 - 11:29:06 +0100)
>
> [2] OpenSBI v1.3-75-g3cf0ea4
>     U-Boot 2023.01-00209-g1804c8ab17 (Oct 04 2023 - 13:18:01 +0100)
>
Thanks, let me go through the changes.

Cheers,
Prabhakar

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2023-10-19  7:51 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-18 19:41 [PATCH v4 0/2] RISC-V: Probe for misaligned access speed Evan Green
2023-08-18 19:41 ` Evan Green
2023-08-18 19:41 ` [PATCH v4 1/2] RISC-V: Probe for unaligned " Evan Green
2023-08-18 19:41   ` Evan Green
2023-09-13 12:36   ` Geert Uytterhoeven
2023-09-13 12:36     ` Geert Uytterhoeven
2023-09-13 17:46     ` Evan Green
2023-09-13 17:46       ` Evan Green
2023-09-14  7:32       ` Geert Uytterhoeven
2023-09-14  7:32         ` Geert Uytterhoeven
2023-09-14  8:46         ` David Laight
2023-09-14  8:46           ` David Laight
2023-09-14 15:01           ` Evan Green
2023-09-14 15:01             ` Evan Green
2023-09-14 15:55             ` David Laight
2023-09-14 15:55               ` David Laight
2023-09-14 16:36               ` Evan Green
2023-09-14 16:36                 ` Evan Green
2023-09-15  7:57                 ` David Laight
2023-09-15  7:57                   ` David Laight
2023-09-15 16:47                   ` Evan Green
2023-09-15 16:47                     ` Evan Green
2023-10-19  6:37         ` Geert Uytterhoeven
2023-10-19  6:37           ` Geert Uytterhoeven
2023-10-19  7:51           ` Lad, Prabhakar
2023-10-19  7:51             ` Lad, Prabhakar
2023-08-18 19:41 ` [PATCH v4 2/2] RISC-V: alternative: Remove feature_probe_func Evan Green
2023-08-18 19:41   ` Evan Green
2023-08-30 20:30 ` [PATCH v4 0/2] RISC-V: Probe for misaligned access speed patchwork-bot+linux-riscv
2023-08-30 20:30   ` patchwork-bot+linux-riscv
