* [Qemu-devel] [RFC PATCH 0/2] Attempt to clean-up softmmu templates
@ 2016-01-08 15:53 Alex Bennée
  2016-01-08 15:53 ` [Qemu-devel] [RFC PATCH 1/2] softmmu_template: add smmu_helper, convert VICTIM_TLB_HIT Alex Bennée
  2016-01-08 15:53 ` [Qemu-devel] [RFC PATCH 2/2] softmmu: simplify helper_*_st_name with smmu_helper(do_unl_store) Alex Bennée
  0 siblings, 2 replies; 4+ messages in thread
From: Alex Bennée @ 2016-01-08 15:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: Alex Bennée, jani.kokkonen, claudio.fontana, a.rigo

Hi,

While reviewing Alvise's LL/SC patches we were discussing how to avoid
duplication in some of the re-factoring work. The softmmu_template.h
code has a lot of duplication in it due to the separate BE and LE
helpers. By pushing code into an inline helper we can let the compiler
do the hard work of optimising away unused branches while still keeping
broadly the same generated code.
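
The pattern, reduced to a standalone sketch (extract_byte and the
wrappers below are illustrative names, not code from the QEMU tree):
one inline helper takes a constant flag and each thin wrapper passes a
literal, so the compiler can discard the leg that wrapper never uses:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One shared helper replaces two near-identical BE/LE copies.  The
 * little_endian flag is a compile-time constant at every call site,
 * so the optimiser can drop the branch and the unused leg entirely. */
static inline uint8_t extract_byte(bool little_endian, uint32_t val, int i)
{
    if (little_endian) {
        return val >> (i * 8);                 /* little-endian extract */
    }
    return val >> (((4 - 1) * 8) - (i * 8));   /* big-endian extract */
}

/* Thin wrappers, standing in for the generated LE/BE helpers. */
static uint8_t extract_byte_le(uint32_t val, int i)
{
    return extract_byte(true, val, i);
}

static uint8_t extract_byte_be(uint32_t val, int i)
{
    return extract_byte(false, val, i);
}

int main(void)
{
    uint32_t v = 0x11223344;
    printf("LE byte 0: %02x, BE byte 0: %02x\n",
           extract_byte_le(v, 0), extract_byte_be(v, 0));
    return 0;
}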

The VICTIM_TLB_HIT conversion is a proof of concept that only slightly
changes the code ordering in probe_write. The do_unl_store() conversion
changes a bit more, as removing the goto means the inline code is
expanded twice. This can be fixed.

If this RFC seems a sane way to go then I can look at properly
re-factoring the code to remove duplication and maybe make the code
easier to follow and experiment with as well.

Alex Bennée (1):
  softmmu_template: add smmu_helper, convert VICTIM_TLB_HIT

Alvise Rigo (1):
  softmmu: simplify helper_*_st_name with smmu_helper(do_unl_store)

 softmmu_template.h | 150 ++++++++++++++++++++++++++++++-----------------------
 1 file changed, 85 insertions(+), 65 deletions(-)

-- 
2.6.4

* [Qemu-devel] [RFC PATCH 1/2] softmmu_template: add smmu_helper, convert VICTIM_TLB_HIT
  2016-01-08 15:53 [Qemu-devel] [RFC PATCH 0/2] Attempt to clean-up softmmu templates Alex Bennée
@ 2016-01-08 15:53 ` Alex Bennée
  2016-01-08 15:53 ` [Qemu-devel] [RFC PATCH 2/2] softmmu: simplify helper_*_st_name with smmu_helper(do_unl_store) Alex Bennée
  1 sibling, 0 replies; 4+ messages in thread
From: Alex Bennée @ 2016-01-08 15:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Crosthwaite, claudio.fontana, a.rigo, Paolo Bonzini,
	jani.kokkonen, Alex Bennée, Richard Henderson

This lays the groundwork for a re-factoring of the softmmu template
code. The patch introduces inline "smmu_helper" functions where common
(or almost common) code can be placed. Arguments that the compiler
picks up as constant can then be used to eliminate legs of code in the
inline fragments.

There is a minor wrinkle: we need a unique name for each inline
fragment because the template is included multiple times. The
smmu_helper macro does the appropriate glue magic for this.
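
For example, taking SUFFIX as l and MMUSUFFIX as _mmu (assumed values
standing in for whichever inclusion of softmmu_template.h is being
compiled), the name pastes together as sketched below; glue()/xglue()
are redefined here only so the sketch stands alone:

#include <stdio.h>

/* Token-pasting helpers in the style QEMU uses. */
#define xglue(x, y) x ## y
#define glue(x, y) xglue(x, y)

/* Per-instantiation suffixes; illustrative values for one inclusion. */
#define SUFFIX l
#define MMUSUFFIX _mmu

/* The macro introduced by the patch. */
#define smmu_helper(name) glue(glue(glue(_smmu_helper_, SUFFIX), MMUSUFFIX), name)

/* smmu_helper(victim_tlb_hit) therefore names this function ... */
static int smmu_helper(victim_tlb_hit)(void)
{
    return 42;
}

int main(void)
{
    /* ... and the pasted-together identifier can be spelled out directly. */
    printf("%d %d\n", smmu_helper(victim_tlb_hit)(),
           _smmu_helper_l_mmuvictim_tlb_hit());
    return 0;
}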

I've tested the result; there is no change in functionality. Comparing
the objdump of cputlb.o shows minimal changes in probe_write;
everything else is identical.

TODO: explain probe_write changes

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 softmmu_template.h | 75 +++++++++++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 29 deletions(-)

diff --git a/softmmu_template.h b/softmmu_template.h
index 6803890..0074bd7 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -116,30 +116,47 @@
 # define helper_te_st_name  helper_le_st_name
 #endif
 
-/* macro to check the victim tlb */
-#define VICTIM_TLB_HIT(ty)                                                    \
-({                                                                            \
-    /* we are about to do a page table walk. our last hope is the             \
-     * victim tlb. try to refill from the victim tlb before walking the       \
-     * page table. */                                                         \
-    int vidx;                                                                 \
-    CPUIOTLBEntry tmpiotlb;                                                   \
-    CPUTLBEntry tmptlb;                                                       \
-    for (vidx = CPU_VTLB_SIZE-1; vidx >= 0; --vidx) {                         \
-        if (env->tlb_v_table[mmu_idx][vidx].ty == (addr & TARGET_PAGE_MASK)) {\
-            /* found entry in victim tlb, swap tlb and iotlb */               \
-            tmptlb = env->tlb_table[mmu_idx][index];                          \
-            env->tlb_table[mmu_idx][index] = env->tlb_v_table[mmu_idx][vidx]; \
-            env->tlb_v_table[mmu_idx][vidx] = tmptlb;                         \
-            tmpiotlb = env->iotlb[mmu_idx][index];                            \
-            env->iotlb[mmu_idx][index] = env->iotlb_v[mmu_idx][vidx];         \
-            env->iotlb_v[mmu_idx][vidx] = tmpiotlb;                           \
-            break;                                                            \
-        }                                                                     \
-    }                                                                         \
-    /* return true when there is a vtlb hit, i.e. vidx >=0 */                 \
-    vidx >= 0;                                                                \
-})
+/* Inline helper functions for SoftMMU
+ *
+ * These functions help reduce code duplication in the various main
+ * helper functions. Constant arguments (like endian state) will allow
+ * the compiler to skip code which is never called in a given inline.
+ */
+
+#define smmu_helper(name) glue(glue(glue(_smmu_helper_, SUFFIX), MMUSUFFIX),name)
+
+static inline int smmu_helper(victim_tlb_hit) (const bool is_read, CPUArchState *env,
+                                               unsigned mmu_idx, int index,
+                                               target_ulong addr)
+{
+    /* we are about to do a page table walk. our last hope is the
+     * victim tlb. try to refill from the victim tlb before walking the
+     * page table. */
+    int vidx;
+    CPUIOTLBEntry tmpiotlb;
+    CPUTLBEntry tmptlb;
+    for (vidx = CPU_VTLB_SIZE-1; vidx >= 0; --vidx) {
+        bool match;
+        if (is_read) {
+            match = env->tlb_v_table[mmu_idx][vidx].ADDR_READ == (addr & TARGET_PAGE_MASK);
+        } else {
+            match = env->tlb_v_table[mmu_idx][vidx].addr_write == (addr & TARGET_PAGE_MASK);
+        }
+
+        if (match) {
+            /* found entry in victim tlb, swap tlb and iotlb */
+            tmptlb = env->tlb_table[mmu_idx][index];
+            env->tlb_table[mmu_idx][index] = env->tlb_v_table[mmu_idx][vidx];
+            env->tlb_v_table[mmu_idx][vidx] = tmptlb;
+            tmpiotlb = env->iotlb[mmu_idx][index];
+            env->iotlb[mmu_idx][index] = env->iotlb_v[mmu_idx][vidx];
+            env->iotlb_v[mmu_idx][vidx] = tmpiotlb;
+            break;
+        }
+    }
+    /* return true when there is a vtlb hit, i.e. vidx >=0 */
+    return vidx >= 0;
+}
 
 #ifndef SOFTMMU_CODE_ACCESS
 static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
@@ -185,7 +202,7 @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(ADDR_READ)) {
+        if (!smmu_helper(victim_tlb_hit)(true, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                      mmu_idx, retaddr);
         }
@@ -269,7 +286,7 @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(ADDR_READ)) {
+        if (!smmu_helper(victim_tlb_hit)(true, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                      mmu_idx, retaddr);
         }
@@ -389,7 +406,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(addr_write)) {
+        if (!smmu_helper(victim_tlb_hit)(false, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
         }
         tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
@@ -469,7 +486,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(addr_write)) {
+        if (!smmu_helper(victim_tlb_hit)(false, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
         }
         tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
@@ -542,7 +559,7 @@ void probe_write(CPUArchState *env, target_ulong addr, int mmu_idx,
     if ((addr & TARGET_PAGE_MASK)
         != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
         /* TLB entry is for a different page */
-        if (!VICTIM_TLB_HIT(addr_write)) {
+        if (!smmu_helper(victim_tlb_hit)(false, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
         }
     }
-- 
2.6.4

* [Qemu-devel] [RFC PATCH 2/2] softmmu: simplify helper_*_st_name with smmu_helper(do_unl_store)
  2016-01-08 15:53 [Qemu-devel] [RFC PATCH 0/2] Attempt to clean-up softmmu templates Alex Bennée
  2016-01-08 15:53 ` [Qemu-devel] [RFC PATCH 1/2] softmmu_template: add smmu_helper, convert VICTIM_TLB_HIT Alex Bennée
@ 2016-01-08 15:53 ` Alex Bennée
  2016-01-19 15:53   ` alvise rigo
  1 sibling, 1 reply; 4+ messages in thread
From: Alex Bennée @ 2016-01-08 15:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Crosthwaite, claudio.fontana, a.rigo, Paolo Bonzini,
	jani.kokkonen, Alex Bennée, Richard Henderson

From: Alvise Rigo <a.rigo@virtualopensystems.com>

Attempting to simplify the helper_*_st_name functions, wrap the
do_unaligned_access code into a shared inline function. As this also
removes the goto statement, the inline code is expanded twice in each
helper.

Suggested-by: Jani Kokkonen <jani.kokkonen@huawei.com>
Suggested-by: Claudio Fontana <claudio.fontana@huawei.com>
CC: Alvise Rigo <a.rigo@virtualopensystems.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v2
  - based on original patch from Alvise
  - uses a single shared inline function to reduce duplication
---
 softmmu_template.h | 75 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 39 insertions(+), 36 deletions(-)

diff --git a/softmmu_template.h b/softmmu_template.h
index 0074bd7..ac0b4ac 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -159,6 +159,39 @@ static inline int smmu_helper(victim_tlb_hit) (const bool is_read, CPUArchState
 }
 
 #ifndef SOFTMMU_CODE_ACCESS
+
+static inline void smmu_helper(do_unl_store)(CPUArchState *env,
+                                             bool little_endian,
+                                             DATA_TYPE val,
+                                             target_ulong addr,
+                                             TCGMemOpIdx oi,
+                                             unsigned mmu_idx,
+                                             uintptr_t retaddr)
+{
+    int i;
+
+    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+    /* Note: relies on the fact that tlb_fill() does not remove the
+     * previous page from the TLB cache.  */
+    for (i = DATA_SIZE - 1; i >= 0; i--) {
+        uint8_t val8;
+        if (little_endian) {
+            /* Little-endian extract.  */
+            val8 = val >> (i * 8);
+        } else {
+            /* Big-endian extract.  */
+            val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
+        }
+        /* Note the adjustment at the beginning of the function.
+           Undo that for the recursion.  */
+        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
+                                        oi, retaddr + GETPC_ADJ);
+    }
+}
+
 static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
                                               CPUIOTLBEntry *iotlbentry,
                                               target_ulong addr,
@@ -416,7 +449,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
+            return;
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
 
@@ -431,23 +465,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                      >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache.  */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Little-endian extract.  */
-            uint8_t val8 = val >> (i * 8);
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion.  */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
         return;
     }
 
@@ -496,7 +514,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
+            return;
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
 
@@ -511,23 +530,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                      >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache.  */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Big-endian extract.  */
-            uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion.  */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
         return;
     }
 
-- 
2.6.4

* Re: [Qemu-devel] [RFC PATCH 2/2] softmmu: simplify helper_*_st_name with smmu_helper(do_unl_store)
  2016-01-08 15:53 ` [Qemu-devel] [RFC PATCH 2/2] softmmu: simplify helper_*_st_name with smmu_helper(do_unl_store) Alex Bennée
@ 2016-01-19 15:53   ` alvise rigo
  0 siblings, 0 replies; 4+ messages in thread
From: alvise rigo @ 2016-01-19 15:53 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Peter Crosthwaite, Claudio Fontana, QEMU Developers,
	Paolo Bonzini, Jani Kokkonen, Richard Henderson

On Fri, Jan 8, 2016 at 4:53 PM, Alex Bennée <alex.bennee@linaro.org> wrote:
> From: Alvise Rigo <a.rigo@virtualopensystems.com>
>
> Attempting to simplify the helper_*_st_name functions, wrap the
> do_unaligned_access code into a shared inline function. As this also
> removes the goto statement, the inline code is expanded twice in each
> helper.
>
> Suggested-by: Jani Kokkonen <jani.kokkonen@huawei.com>
> Suggested-by: Claudio Fontana <claudio.fontana@huawei.com>
> CC: Alvise Rigo <a.rigo@virtualopensystems.com>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>
> ---
> v2
>   - based on original patch from Alvise
>   - uses a single shared inline function to reduce duplication
> ---
>  softmmu_template.h | 75 ++++++++++++++++++++++++++++--------------------------
>  1 file changed, 39 insertions(+), 36 deletions(-)
>
> diff --git a/softmmu_template.h b/softmmu_template.h
> index 0074bd7..ac0b4ac 100644
> --- a/softmmu_template.h
> +++ b/softmmu_template.h
> @@ -159,6 +159,39 @@ static inline int smmu_helper(victim_tlb_hit) (const bool is_read, CPUArchState
>  }
>
>  #ifndef SOFTMMU_CODE_ACCESS
> +
> +static inline void smmu_helper(do_unl_store)(CPUArchState *env,
> +                                             bool little_endian,
> +                                             DATA_TYPE val,
> +                                             target_ulong addr,
> +                                             TCGMemOpIdx oi,
> +                                             unsigned mmu_idx,
> +                                             uintptr_t retaddr)
> +{
> +    int i;
> +
> +    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
> +        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
> +                             mmu_idx, retaddr);
> +    }
> +    /* Note: relies on the fact that tlb_fill() does not remove the
> +     * previous page from the TLB cache.  */
> +    for (i = DATA_SIZE - 1; i >= 0; i--) {
> +        uint8_t val8;
> +        if (little_endian) {
> +            /* Little-endian extract.  */
> +            val8 = val >> (i * 8);
> +        } else {
> +            /* Big-endian extract.  */
> +            val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
> +        }
> +        /* Note the adjustment at the beginning of the function.
> +           Undo that for the recursion.  */
> +        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
> +                                        oi, retaddr + GETPC_ADJ);
> +    }
> +}
> +
>  static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
>                                                CPUIOTLBEntry *iotlbentry,
>                                                target_ulong addr,
> @@ -416,7 +449,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>      if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
>          CPUIOTLBEntry *iotlbentry;
>          if ((addr & (DATA_SIZE - 1)) != 0) {
> -            goto do_unaligned_access;
> +            smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
> +            return;
>          }
>          iotlbentry = &env->iotlb[mmu_idx][index];
>
> @@ -431,23 +465,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>      if (DATA_SIZE > 1
>          && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>                       >= TARGET_PAGE_SIZE)) {
> -        int i;
> -    do_unaligned_access:
> -        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
> -            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
> -                                 mmu_idx, retaddr);
> -        }
> -        /* XXX: not efficient, but simple */
> -        /* Note: relies on the fact that tlb_fill() does not remove the
> -         * previous page from the TLB cache.  */
> -        for (i = DATA_SIZE - 1; i >= 0; i--) {
> -            /* Little-endian extract.  */
> -            uint8_t val8 = val >> (i * 8);
> -            /* Note the adjustment at the beginning of the function.
> -               Undo that for the recursion.  */
> -            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
> -                                            oi, retaddr + GETPC_ADJ);
> -        }
> +        smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
>          return;
>      }
>
> @@ -496,7 +514,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>      if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
>          CPUIOTLBEntry *iotlbentry;
>          if ((addr & (DATA_SIZE - 1)) != 0) {
> -            goto do_unaligned_access;
> +            smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
> +            return;
>          }
>          iotlbentry = &env->iotlb[mmu_idx][index];
>
> @@ -511,23 +530,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>      if (DATA_SIZE > 1
>          && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>                       >= TARGET_PAGE_SIZE)) {
> -        int i;
> -    do_unaligned_access:
> -        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
> -            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
> -                                 mmu_idx, retaddr);
> -        }
> -        /* XXX: not efficient, but simple */
> -        /* Note: relies on the fact that tlb_fill() does not remove the
> -         * previous page from the TLB cache.  */
> -        for (i = DATA_SIZE - 1; i >= 0; i--) {
> -            /* Big-endian extract.  */
> -            uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
> -            /* Note the adjustment at the beginning of the function.
> -               Undo that for the recursion.  */
> -            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
> -                                            oi, retaddr + GETPC_ADJ);
> -        }
> +        smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
>          return;
>      }
>
> --
> 2.6.4
>

This approach makes sense to me, given that only the relevant leg of
the *if* statement actually ends up in the generated code, depending on
the (constant) value of little_endian.

What does not convince me is that we are not imposing the inlining, but
relying on the compiler's optimizations to do it. I wonder whether this
will always happen, even with other compilers (clang).
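
For reference, the inlining can be imposed with the GCC/clang
always_inline attribute; a minimal sketch (the attribute spelling is
standard for both compilers, whether QEMU should wrap it in a macro of
its own is left open, and pick() is an illustrative function, not code
from the patch):

#include <stdbool.h>
#include <stdio.h>

/* always_inline guarantees the helper is expanded at every call site,
 * even when the compiler would not otherwise choose to inline it.
 * Combined with a literal flag argument, the optimiser can then drop
 * the unused leg of the if().  This is the raw GCC/clang spelling,
 * not a QEMU macro. */
static inline __attribute__((always_inline))
int pick(bool use_first, int a, int b)
{
    if (use_first) {
        return a;
    }
    return b;
}

int main(void)
{
    /* use_first is a constant at each call site, so only one leg of
     * the helper survives per expansion once the optimiser runs. */
    printf("%d %d\n", pick(true, 1, 2), pick(false, 1, 2));
    return 0;
}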

alvise
