* [PATCH v3 0/1] cputlb: Make store_helper less fragile to compiler optimizations
From: Richard Henderson @ 2020-08-13 20:40 UTC
To: qemu-devel
This is the patch I posted in reply to Shu-Chun Weng's v2 at
https://lists.nongnu.org/archive/html/qemu-devel/2020-07/msg07589.html
with the patch comment adjusted. The patch itself got an official R-b
from Alex, and an informal ack from Shu-Chun.
I plan to include this in tcg-next for 5.2.
r~
Richard Henderson (1):
cputlb: Make store_helper less fragile to compiler optimizations
accel/tcg/cputlb.c | 138 ++++++++++++++++++++++++++-------------------
1 file changed, 79 insertions(+), 59 deletions(-)
--
2.25.1
* [PATCH v3 1/1] cputlb: Make store_helper less fragile to compiler optimizations
From: Richard Henderson @ 2020-08-13 20:40 UTC
To: qemu-devel; +Cc: Alex Bennée, Shu-Chun Weng
This has no functional change.
The current function structure is:
    inline QEMU_ALWAYS_INLINE
    store_memop() {
        switch () {
        ...
        default:
            qemu_build_not_reached();
        }
    }

    inline QEMU_ALWAYS_INLINE
    store_helper() {
        ...
        if (span_two_pages_or_io) {
            ...
            helper_ret_stb_mmu();
        }
        store_memop();
    }

    helper_ret_stb_mmu() {
        store_helper();
    }
GCC generates a compile-time error when an always_inline function
cannot be inlined; Clang does not. Nor does Clang prioritize the
inlining of always_inline functions. Both of these are arguably bugs.
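For illustration, here is a minimal hypothetical case (not QEMU code)
of an always_inline function that is self-recursive. GCC refuses to
compile it, reporting a "recursive inlining" failure, while Clang
generally accepts it and simply emits the recursive call out of line:

    /* Hypothetical example, not from QEMU: always_inline on a
     * self-recursive function.  GCC rejects this with a "recursive
     * inlining" error; Clang compiles it, leaving the inner call
     * as an ordinary out-of-line call. */
    static inline __attribute__((always_inline))
    unsigned fact(unsigned n)
    {
        return n <= 1 ? 1 : n * fact(n - 1);
    }

    unsigned factorial(unsigned n)
    {
        return fact(n);
    }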
Both `store_memop` and `store_helper` need to be inlined so that
constant propagation can eliminate the `qemu_build_not_reached` call.
However, if the compiler instead chooses to inline helper_ret_stb_mmu
into store_helper, then store_helper is now self-recursive and the
compiler is no longer able to propagate the constant in the same way.
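The shape of both the problem and the fix can be seen in this
standalone sketch; every name in it is a hypothetical stand-in,
not the actual QEMU function:

    #include <stdint.h>

    /* Stand-in for qemu_build_not_reached(): if this call survives
     * into the object file, the link fails, proving that constant
     * propagation did not eliminate the unreachable switch arm. */
    extern void build_bug_not_reached(void);

    void __attribute__((noinline))
    store_byte(uint8_t *p, uint64_t val);

    static inline __attribute__((always_inline))
    void store_common(uint8_t *p, uint64_t val, int size)
    {
        /* Stand-in for the span_two_pages_or_io slow path:
         * fall back to a byte-at-a-time store. */
        if (size > 1 && ((uintptr_t)p & (size - 1))) {
            for (int i = 0; i < size; i++) {
                store_byte(p + i, val >> (i * 8));
            }
            return;
        }
        /* 'size' is a compile-time constant at every call site,
         * so exactly one arm survives and the default arm must
         * be proven dead. */
        switch (size) {
        case 1:
            *p = val;
            break;
        case 2:
            p[0] = val;
            p[1] = val >> 8;
            break;
        default:
            build_bug_not_reached();
        }
    }

    /* noinline keeps the compiler from inlining this back into
     * store_common's slow path, which would make store_common
     * self-recursive after inlining and defeat the constant
     * propagation above. */
    void __attribute__((noinline))
    store_byte(uint8_t *p, uint64_t val)
    {
        store_common(p, val, 1);
    }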
This problem does not reproduce at current QEMU head, but was
reproducible at v4.2.0 with `clang-10 -O2 -fexperimental-new-pass-manager`.
The inline recursion problem can be fixed solely by marking
helper_ret_stb_mmu as noinline, so the compiler does not make an
incorrect decision about which functions to inline.
In addition, extract store_helper_unaligned as a noinline subroutine
that can be shared by all of the helpers. This saves about 6k of
code size in an optimized x86_64 build.
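As a concrete check of the byte-extraction shifts used by the new
unaligned slow path, this standalone sketch (made-up value, not QEMU
code) prints the bytes each loop would store for the 4-byte value
0x11223344:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t val = 0x11223344;
        int size = 4;

        /* Big-endian extract: most-significant byte first.
         * i = 0,1,2,3 -> shift 24,16,8,0 -> prints 11 22 33 44. */
        for (int i = 0; i < size; ++i) {
            printf("%02x ", (uint8_t)(val >> (((size - 1) * 8) - (i * 8))));
        }
        printf("\n");

        /* Little-endian extract: least-significant byte first.
         * i = 0,1,2,3 -> shift 0,8,16,24 -> prints 44 33 22 11. */
        for (int i = 0; i < size; ++i) {
            printf("%02x ", (uint8_t)(val >> (i * 8)));
        }
        printf("\n");
        return 0;
    }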
Reported-by: Shu-Chun Weng <scw@google.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 138 ++++++++++++++++++++++++++-------------------
1 file changed, 79 insertions(+), 59 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5698292749..7e603d6666 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2009,6 +2009,80 @@ store_memop(void *haddr, uint64_t val, MemOp op)
}
}
+static void __attribute__((noinline))
+store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val,
+ uintptr_t retaddr, size_t size, uintptr_t mmu_idx,
+ bool big_endian)
+{
+ const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
+ uintptr_t index, index2;
+ CPUTLBEntry *entry, *entry2;
+ target_ulong page2, tlb_addr, tlb_addr2;
+ TCGMemOpIdx oi;
+ size_t size2;
+ int i;
+
+ /*
+ * Ensure the second page is in the TLB. Note that the first page
+ * is already guaranteed to be filled, and that the second page
+ * cannot evict the first.
+ */
+ page2 = (addr + size) & TARGET_PAGE_MASK;
+ size2 = (addr + size) & ~TARGET_PAGE_MASK;
+ index2 = tlb_index(env, mmu_idx, page2);
+ entry2 = tlb_entry(env, mmu_idx, page2);
+
+ tlb_addr2 = tlb_addr_write(entry2);
+ if (!tlb_hit_page(tlb_addr2, page2)) {
+ if (!victim_tlb_hit(env, mmu_idx, index2, tlb_off, page2)) {
+ tlb_fill(env_cpu(env), page2, size2, MMU_DATA_STORE,
+ mmu_idx, retaddr);
+ index2 = tlb_index(env, mmu_idx, page2);
+ entry2 = tlb_entry(env, mmu_idx, page2);
+ }
+ tlb_addr2 = tlb_addr_write(entry2);
+ }
+
+ index = tlb_index(env, mmu_idx, addr);
+ entry = tlb_entry(env, mmu_idx, addr);
+ tlb_addr = tlb_addr_write(entry);
+
+ /*
+ * Handle watchpoints. Since this may trap, all checks
+ * must happen before any store.
+ */
+ if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
+ cpu_check_watchpoint(env_cpu(env), addr, size - size2,
+ env_tlb(env)->d[mmu_idx].iotlb[index].attrs,
+ BP_MEM_WRITE, retaddr);
+ }
+ if (unlikely(tlb_addr2 & TLB_WATCHPOINT)) {
+ cpu_check_watchpoint(env_cpu(env), page2, size2,
+ env_tlb(env)->d[mmu_idx].iotlb[index2].attrs,
+ BP_MEM_WRITE, retaddr);
+ }
+
+ /*
+ * XXX: not efficient, but simple.
+ * This loop must go in the forward direction to avoid issues
+ * with self-modifying code in Windows 64-bit.
+ */
+ oi = make_memop_idx(MO_UB, mmu_idx);
+ if (big_endian) {
+ for (i = 0; i < size; ++i) {
+ /* Big-endian extract. */
+ uint8_t val8 = val >> (((size - 1) * 8) - (i * 8));
+ helper_ret_stb_mmu(env, addr + i, val8, oi, retaddr);
+ }
+ } else {
+ for (i = 0; i < size; ++i) {
+ /* Little-endian extract. */
+ uint8_t val8 = val >> (i * 8);
+ helper_ret_stb_mmu(env, addr + i, val8, oi, retaddr);
+ }
+ }
+}
+
static inline void QEMU_ALWAYS_INLINE
store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
TCGMemOpIdx oi, uintptr_t retaddr, MemOp op)
@@ -2097,64 +2171,9 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
if (size > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
>= TARGET_PAGE_SIZE)) {
- int i;
- uintptr_t index2;
- CPUTLBEntry *entry2;
- target_ulong page2, tlb_addr2;
- size_t size2;
-
do_unaligned_access:
- /*
- * Ensure the second page is in the TLB. Note that the first page
- * is already guaranteed to be filled, and that the second page
- * cannot evict the first.
- */
- page2 = (addr + size) & TARGET_PAGE_MASK;
- size2 = (addr + size) & ~TARGET_PAGE_MASK;
- index2 = tlb_index(env, mmu_idx, page2);
- entry2 = tlb_entry(env, mmu_idx, page2);
- tlb_addr2 = tlb_addr_write(entry2);
- if (!tlb_hit_page(tlb_addr2, page2)) {
- if (!victim_tlb_hit(env, mmu_idx, index2, tlb_off, page2)) {
- tlb_fill(env_cpu(env), page2, size2, MMU_DATA_STORE,
- mmu_idx, retaddr);
- index2 = tlb_index(env, mmu_idx, page2);
- entry2 = tlb_entry(env, mmu_idx, page2);
- }
- tlb_addr2 = tlb_addr_write(entry2);
- }
-
- /*
- * Handle watchpoints. Since this may trap, all checks
- * must happen before any store.
- */
- if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
- cpu_check_watchpoint(env_cpu(env), addr, size - size2,
- env_tlb(env)->d[mmu_idx].iotlb[index].attrs,
- BP_MEM_WRITE, retaddr);
- }
- if (unlikely(tlb_addr2 & TLB_WATCHPOINT)) {
- cpu_check_watchpoint(env_cpu(env), page2, size2,
- env_tlb(env)->d[mmu_idx].iotlb[index2].attrs,
- BP_MEM_WRITE, retaddr);
- }
-
- /*
- * XXX: not efficient, but simple.
- * This loop must go in the forward direction to avoid issues
- * with self-modifying code in Windows 64-bit.
- */
- for (i = 0; i < size; ++i) {
- uint8_t val8;
- if (memop_big_endian(op)) {
- /* Big-endian extract. */
- val8 = val >> (((size - 1) * 8) - (i * 8));
- } else {
- /* Little-endian extract. */
- val8 = val >> (i * 8);
- }
- helper_ret_stb_mmu(env, addr + i, val8, oi, retaddr);
- }
+ store_helper_unaligned(env, addr, val, retaddr, size,
+ mmu_idx, memop_big_endian(op));
return;
}
@@ -2162,8 +2181,9 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
store_memop(haddr, val, op);
}
-void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
- TCGMemOpIdx oi, uintptr_t retaddr)
+void __attribute__((noinline))
+helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
+ TCGMemOpIdx oi, uintptr_t retaddr)
{
store_helper(env, addr, val, oi, retaddr, MO_UB);
}
--
2.25.1
* Re: [PATCH v3 1/1] cputlb: Make store_helper less fragile to compiler optimizations
From: Shu-Chun Weng @ 2020-08-14 20:10 UTC
To: Richard Henderson; +Cc: qemu-devel, Alex Bennée
Can confirm this fixed the build in our configuration. Thank you.
Shu-Chun