* [Qemu-devel] [PATCH v2 0/2] target-mips: Add support for misaligned accesses
@ 2015-05-11 11:30 Yongbok Kim
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 1/2] target-mips: Misaligned memory accesses for R6 Yongbok Kim
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA Yongbok Kim
  0 siblings, 2 replies; 10+ messages in thread
From: Yongbok Kim @ 2015-05-11 11:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, leon.alrae

This patch set adds support for misaligned memory accesses in the MIPS Architecture
Release 6 and the MIPS SIMD Architecture (MSA).

The behaviour, semantics, and architecture specifications of misaligned memory
accesses are described in:
MIPS Architecture For Programmers Volume I-A: Introduction to the MIPS64
Architecture, Appendix B Misaligned Memory Accesses.
Available at http://www.imgtec.com/mips/architectures/mips64.asp

Regards,
Yongbok

v2:
* Removed re-translation in the mips_cpu_do_unaligned_access() (Peter)
* Check address validity only when an MSA access spans two pages (Leon)
* Introduced a misaligned flag to indicate that an MSA ld/st is in progress; it is
  used to allow misaligned accesses in the mips_cpu_do_unaligned_access() callback.
  This is crucial for supporting MSA misaligned accesses on Release 5 cores.

Yongbok Kim (2):
  target-mips: Misaligned memory accesses for R6
  target-mips: Misaligned memory accesses for MSA

 target-mips/cpu.h            |    5 +++
 target-mips/helper.c         |   33 ++++++++++++++++++
 target-mips/op_helper.c      |   74 ++++++++++++++++++++++++++++++++---------
 target-mips/translate.c      |    4 ++
 target-mips/translate_init.c |    2 +-
 5 files changed, 100 insertions(+), 18 deletions(-)

-- 
1.7.5.4

* [Qemu-devel] [PATCH v2 1/2] target-mips: Misaligned memory accesses for R6
  2015-05-11 11:30 [Qemu-devel] [PATCH v2 0/2] target-mips: Add support for misaligned accesses Yongbok Kim
@ 2015-05-11 11:30 ` Yongbok Kim
  2015-05-11 13:00   ` Andreas Färber
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA Yongbok Kim
  1 sibling, 1 reply; 10+ messages in thread
From: Yongbok Kim @ 2015-05-11 11:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, leon.alrae

Release 6 requires misaligned memory access support for all ordinary memory
access instructions (for example, LW/SW, LWC1/SWC1). However, misaligned
support is not provided for certain special memory accesses such as atomics
(for example, LL/SC).

Allow misaligned accesses in the mips_cpu_do_unaligned_access() callback if
it is an R6 core. As the LL/SC helper functions already check alignment
themselves, simply allowing all accesses for R6 is sufficient.

Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
---
 target-mips/op_helper.c      |    7 +++++++
 target-mips/translate_init.c |    2 +-
 2 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/target-mips/op_helper.c b/target-mips/op_helper.c
index 73a8e45..58f02cf 100644
--- a/target-mips/op_helper.c
+++ b/target-mips/op_helper.c
@@ -2215,6 +2215,13 @@ void mips_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
     int error_code = 0;
     int excp;
 
+    if (env->insn_flags & ISA_MIPS32R6) {
+        /* Release 6 provides support for misaligned memory access for
+         * all ordinary memory reference instructions
+         * */
+        return;
+    }
+
     env->CP0_BadVAddr = addr;
 
     if (access_type == MMU_DATA_STORE) {
diff --git a/target-mips/translate_init.c b/target-mips/translate_init.c
index 85a65e7..ec54fef 100644
--- a/target-mips/translate_init.c
+++ b/target-mips/translate_init.c
@@ -607,7 +607,7 @@ static const mips_def_t mips_defs[] =
     },
     {
         /* A generic CPU supporting MIPS64 Release 6 ISA.
-           FIXME: Support IEEE 754-2008 FP and misaligned memory accesses.
+           FIXME: Support IEEE 754-2008 FP.
                   Eventually this should be replaced by a real CPU model. */
         .name = "MIPS64R6-generic",
         .CP0_PRid = 0x00010000,
-- 
1.7.5.4

* [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA
  2015-05-11 11:30 [Qemu-devel] [PATCH v2 0/2] target-mips: Add support for misaligned accesses Yongbok Kim
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 1/2] target-mips: Misaligned memory accesses for R6 Yongbok Kim
@ 2015-05-11 11:30 ` Yongbok Kim
  2015-05-11 13:12   ` Andreas Färber
                     ` (2 more replies)
  1 sibling, 3 replies; 10+ messages in thread
From: Yongbok Kim @ 2015-05-11 11:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, leon.alrae

MIPS SIMD Architecture vector loads and stores require misalignment support.
An MSA memory access should behave as an atomic operation, so if an access
spans two pages, the validity of all of its addresses has to be checked first.

Introduce a misaligned flag to indicate that an MSA ld/st is in progress; it
is used to allow misaligned accesses in the mips_cpu_do_unaligned_access()
callback. This is crucial for supporting MSA misaligned accesses on Release 5
cores.

Further clean-up: use the mmu_idx from cpu_mmu_index() instead of calculating
it from the hflags.

Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
---
 target-mips/cpu.h       |    5 +++
 target-mips/helper.c    |   33 ++++++++++++++++++++++
 target-mips/op_helper.c |   69 ++++++++++++++++++++++++++++++++++------------
 target-mips/translate.c |    4 +++
 4 files changed, 93 insertions(+), 18 deletions(-)

diff --git a/target-mips/cpu.h b/target-mips/cpu.h
index f9d2b4c..1bd1229 100644
--- a/target-mips/cpu.h
+++ b/target-mips/cpu.h
@@ -211,6 +211,9 @@ struct TCState {
         MSACSR_FS_MASK)
 
     float_status msa_fp_status;
+#if !defined(CONFIG_USER_ONLY)
+    bool misaligned; /* indicates misaligned access is allowed */
+#endif
 };
 
 typedef struct CPUMIPSState CPUMIPSState;
@@ -760,6 +763,8 @@ int mips_cpu_handle_mmu_fault(CPUState *cpu, vaddr address, int rw,
 void r4k_invalidate_tlb (CPUMIPSState *env, int idx, int use_extra);
 hwaddr cpu_mips_translate_address (CPUMIPSState *env, target_ulong address,
 		                               int rw);
+bool cpu_mips_validate_msa_block_access(CPUMIPSState *env, target_ulong addr,
+                                        int rw, int mmu_idx);
 #endif
 target_ulong exception_resume_pc (CPUMIPSState *env);
 
diff --git a/target-mips/helper.c b/target-mips/helper.c
index 8e3204a..951aea8 100644
--- a/target-mips/helper.c
+++ b/target-mips/helper.c
@@ -391,6 +391,37 @@ hwaddr cpu_mips_translate_address(CPUMIPSState *env, target_ulong address, int r
     }
 }
 
+bool cpu_mips_validate_msa_block_access(CPUMIPSState *env, target_ulong addr,
+                                        int rw, int mmu_idx)
+{
+    target_ulong vaddr = addr & TARGET_PAGE_MASK;
+    target_ulong badvaddr = addr;
+
+    CPUState *cs = CPU(mips_env_get_cpu(env));
+    int ret;
+
+    ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
+    if (ret != TLBRET_MATCH) {
+        /* calling raise_mmu_exeception again to correct badvaddr */
+        raise_mmu_exception(env, badvaddr, rw, ret);
+        return false;
+    }
+    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
+                 >= TARGET_PAGE_SIZE)) {
+        vaddr += TARGET_PAGE_SIZE;
+        ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
+        if (ret != TLBRET_MATCH) {
+            if (ret != TLBRET_BADADDR) {
+                badvaddr = vaddr;
+            }
+            /* calling raise_mmu_exeception again to correct badvaddr */
+            raise_mmu_exception(env, badvaddr, rw, ret);
+            return false;
+        }
+    }
+    return true;
+}
+
 static const char * const excp_names[EXCP_LAST + 1] = {
     [EXCP_RESET] = "reset",
     [EXCP_SRESET] = "soft reset",
@@ -497,6 +528,8 @@ void mips_cpu_do_interrupt(CPUState *cs)
         qemu_log("%s enter: PC " TARGET_FMT_lx " EPC " TARGET_FMT_lx " %s exception\n",
                  __func__, env->active_tc.PC, env->CP0_EPC, name);
     }
+
+    env->active_tc.misaligned = false;
     if (cs->exception_index == EXCP_EXT_INTERRUPT &&
         (env->hflags & MIPS_HFLAG_DM)) {
         cs->exception_index = EXCP_DINT;
diff --git a/target-mips/op_helper.c b/target-mips/op_helper.c
index 58f02cf..b3b8d52 100644
--- a/target-mips/op_helper.c
+++ b/target-mips/op_helper.c
@@ -47,7 +47,9 @@ static inline void QEMU_NORETURN do_raise_exception_err(CPUMIPSState *env,
         /* now we have a real cpu fault */
         cpu_restore_state(cs, pc);
     }
-
+#if !defined(CONFIG_USER_ONLY)
+    env->active_tc.misaligned = false;
+#endif
     cpu_loop_exit(cs);
 }
 
@@ -2215,9 +2217,12 @@ void mips_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
     int error_code = 0;
     int excp;
 
-    if (env->insn_flags & ISA_MIPS32R6) {
+    if ((env->insn_flags & ISA_MIPS32R6) || (env->active_tc.misaligned)) {
         /* Release 6 provides support for misaligned memory access for
          * all ordinary memory reference instructions
+         *
+         * MIPS SIMD Architecture vector loads and stores support misalignment
+         * memory access
          * */
         return;
     }
@@ -3571,33 +3576,47 @@ void helper_msa_ld_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
     wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
     target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
     int i;
+    int mmu_idx = cpu_mmu_index(env);
+
+#if !defined(CONFIG_USER_ONLY)
+    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
+                 >= TARGET_PAGE_SIZE)) {
+        if (!cpu_mips_validate_msa_block_access(env, addr, MMU_DATA_LOAD,
+                                                mmu_idx)) {
+            CPUState *cs = CPU(mips_env_get_cpu(env));
+            helper_raise_exception_err(env, cs->exception_index,
+                                       env->error_code);
+            return;
+        }
+    }
+    env->active_tc.misaligned = true;
+#endif
 
     switch (df) {
     case DF_BYTE:
         for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {
-            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE),
-                                env->hflags & MIPS_HFLAG_KSU);
+            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE), mmu_idx);
         }
         break;
     case DF_HALF:
         for (i = 0; i < DF_ELEMENTS(DF_HALF); i++) {
-            pwd->h[i] = do_lhu(env, addr + (i << DF_HALF),
-                                env->hflags & MIPS_HFLAG_KSU);
+            pwd->h[i] = do_lhu(env, addr + (i << DF_HALF), mmu_idx);
         }
         break;
     case DF_WORD:
         for (i = 0; i < DF_ELEMENTS(DF_WORD); i++) {
-            pwd->w[i] = do_lw(env, addr + (i << DF_WORD),
-                                env->hflags & MIPS_HFLAG_KSU);
+            pwd->w[i] = do_lw(env, addr + (i << DF_WORD), mmu_idx);
         }
         break;
     case DF_DOUBLE:
         for (i = 0; i < DF_ELEMENTS(DF_DOUBLE); i++) {
-            pwd->d[i] = do_ld(env, addr + (i << DF_DOUBLE),
-                                env->hflags & MIPS_HFLAG_KSU);
+            pwd->d[i] = do_ld(env, addr + (i << DF_DOUBLE), mmu_idx);
         }
         break;
     }
+#if !defined(CONFIG_USER_ONLY)
+    env->active_tc.misaligned = false;
+#endif
 }
 
 void helper_msa_st_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
@@ -3606,31 +3625,45 @@ void helper_msa_st_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
     wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
     target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
     int i;
+    int mmu_idx = cpu_mmu_index(env);
+
+#if !defined(CONFIG_USER_ONLY)
+    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
+                 >= TARGET_PAGE_SIZE)) {
+        if (!cpu_mips_validate_msa_block_access(env, addr, MMU_DATA_STORE,
+                                                mmu_idx)) {
+            CPUState *cs = CPU(mips_env_get_cpu(env));
+            helper_raise_exception_err(env, cs->exception_index,
+                                       env->error_code);
+            return;
+        }
+    }
+    env->active_tc.misaligned = true;
+#endif
 
     switch (df) {
     case DF_BYTE:
         for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {
-            do_sb(env, addr + (i << DF_BYTE), pwd->b[i],
-                    env->hflags & MIPS_HFLAG_KSU);
+            do_sb(env, addr + (i << DF_BYTE), pwd->b[i], mmu_idx);
         }
         break;
     case DF_HALF:
         for (i = 0; i < DF_ELEMENTS(DF_HALF); i++) {
-            do_sh(env, addr + (i << DF_HALF), pwd->h[i],
-                    env->hflags & MIPS_HFLAG_KSU);
+            do_sh(env, addr + (i << DF_HALF), pwd->h[i], mmu_idx);
         }
         break;
     case DF_WORD:
         for (i = 0; i < DF_ELEMENTS(DF_WORD); i++) {
-            do_sw(env, addr + (i << DF_WORD), pwd->w[i],
-                    env->hflags & MIPS_HFLAG_KSU);
+            do_sw(env, addr + (i << DF_WORD), pwd->w[i], mmu_idx);
         }
         break;
     case DF_DOUBLE:
         for (i = 0; i < DF_ELEMENTS(DF_DOUBLE); i++) {
-            do_sd(env, addr + (i << DF_DOUBLE), pwd->d[i],
-                    env->hflags & MIPS_HFLAG_KSU);
+            do_sd(env, addr + (i << DF_DOUBLE), pwd->d[i], mmu_idx);
         }
         break;
     }
+#if !defined(CONFIG_USER_ONLY)
+    env->active_tc.misaligned = false;
+#endif
 }
diff --git a/target-mips/translate.c b/target-mips/translate.c
index fd063a2..674b1cb 100644
--- a/target-mips/translate.c
+++ b/target-mips/translate.c
@@ -19641,6 +19641,10 @@ void cpu_state_reset(CPUMIPSState *env)
     restore_rounding_mode(env);
     restore_flush_mode(env);
     cs->exception_index = EXCP_NONE;
+
+#ifndef CONFIG_USER_ONLY
+    env->active_tc.misaligned = false;
+#endif
 }
 
 void restore_state_to_opc(CPUMIPSState *env, TranslationBlock *tb, int pc_pos)
-- 
1.7.5.4

* Re: [Qemu-devel] [PATCH v2 1/2] target-mips: Misaligned memory accesses for R6
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 1/2] target-mips: Misaligned memory accesses for R6 Yongbok Kim
@ 2015-05-11 13:00   ` Andreas Färber
  0 siblings, 0 replies; 10+ messages in thread
From: Andreas Färber @ 2015-05-11 13:00 UTC (permalink / raw)
  To: Yongbok Kim, qemu-devel; +Cc: peter.maydell, leon.alrae

On 11.05.2015 at 13:30, Yongbok Kim wrote:
> Release 6 requires misaligned memory access support for all ordinary memory
> access instructions (for example, LW/SW, LWC1/SWC1). However, misaligned
> support is not provided for certain special memory accesses such as atomics
> (for example, LL/SC).
> 
> Allow misaligned accesses in the mips_cpu_do_unaligned_access() callback if
> it is an R6 core. As the LL/SC helper functions already check alignment
> themselves, simply allowing all accesses for R6 is sufficient.
> 
> Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
> ---
>  target-mips/op_helper.c      |    7 +++++++
>  target-mips/translate_init.c |    2 +-
>  2 files changed, 8 insertions(+), 1 deletions(-)

Reviewed-by: Andreas Färber <afaerber@suse.de>

Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton; HRB
21284 (AG Nürnberg)

* Re: [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA Yongbok Kim
@ 2015-05-11 13:12   ` Andreas Färber
  2015-05-11 13:15   ` Yongbok Kim
  2015-05-12  9:43   ` Leon Alrae
  2 siblings, 0 replies; 10+ messages in thread
From: Andreas Färber @ 2015-05-11 13:12 UTC (permalink / raw)
  To: Yongbok Kim, qemu-devel; +Cc: peter.maydell, leon.alrae

On 11.05.2015 at 13:30, Yongbok Kim wrote:
> MIPS SIMD Architecture vector loads and stores require misalignment support.
> An MSA memory access should behave as an atomic operation, so if an access
> spans two pages, the validity of all of its addresses has to be checked first.
> 
> Introduce a misaligned flag to indicate that an MSA ld/st is in progress; it
> is used to allow misaligned accesses in the mips_cpu_do_unaligned_access()
> callback. This is crucial for supporting MSA misaligned accesses on Release 5
> cores.
> 
> Further clean-up: use the mmu_idx from cpu_mmu_index() instead of calculating
> it from the hflags.
> 
> Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
> ---
>  target-mips/cpu.h       |    5 +++
>  target-mips/helper.c    |   33 ++++++++++++++++++++++
>  target-mips/op_helper.c |   69 ++++++++++++++++++++++++++++++++++------------
>  target-mips/translate.c |    4 +++
>  4 files changed, 93 insertions(+), 18 deletions(-)
> 
> diff --git a/target-mips/cpu.h b/target-mips/cpu.h
> index f9d2b4c..1bd1229 100644
> --- a/target-mips/cpu.h
> +++ b/target-mips/cpu.h
> @@ -211,6 +211,9 @@ struct TCState {
>          MSACSR_FS_MASK)
>  
>      float_status msa_fp_status;
> +#if !defined(CONFIG_USER_ONLY)
> +    bool misaligned; /* indicates misaligned access is allowed */
> +#endif
>  };
>  
>  typedef struct CPUMIPSState CPUMIPSState;
> @@ -760,6 +763,8 @@ int mips_cpu_handle_mmu_fault(CPUState *cpu, vaddr address, int rw,
>  void r4k_invalidate_tlb (CPUMIPSState *env, int idx, int use_extra);
>  hwaddr cpu_mips_translate_address (CPUMIPSState *env, target_ulong address,
>  		                               int rw);
> +bool cpu_mips_validate_msa_block_access(CPUMIPSState *env, target_ulong addr,
> +                                        int rw, int mmu_idx);

Can you please adopt the new style of using a mips_cpu_... prefix and a
MIPSCPU *cpu argument? You're dealing with conversions both at the call
site and inside the implementation anyway.
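
For example, the prototype could then look something like this (exact name is
up to you):

bool mips_cpu_validate_msa_block_access(MIPSCPU *cpu, target_ulong addr,
                                        int rw, int mmu_idx);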

>  #endif
>  target_ulong exception_resume_pc (CPUMIPSState *env);
>  
> diff --git a/target-mips/helper.c b/target-mips/helper.c
> index 8e3204a..951aea8 100644
> --- a/target-mips/helper.c
> +++ b/target-mips/helper.c
> @@ -391,6 +391,37 @@ hwaddr cpu_mips_translate_address(CPUMIPSState *env, target_ulong address, int r
>      }
>  }
>  
> +bool cpu_mips_validate_msa_block_access(CPUMIPSState *env, target_ulong addr,
> +                                        int rw, int mmu_idx)
> +{
> +    target_ulong vaddr = addr & TARGET_PAGE_MASK;
> +    target_ulong badvaddr = addr;
> +
> +    CPUState *cs = CPU(mips_env_get_cpu(env));

This becomes CPU(cpu) then.

CPUMIPSState *env = &cpu->env;

> +    int ret;
> +
> +    ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
> +    if (ret != TLBRET_MATCH) {
> +        /* calling raise_mmu_exeception again to correct badvaddr */
> +        raise_mmu_exception(env, badvaddr, rw, ret);
> +        return false;
> +    }
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {
> +        vaddr += TARGET_PAGE_SIZE;
> +        ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
> +        if (ret != TLBRET_MATCH) {
> +            if (ret != TLBRET_BADADDR) {
> +                badvaddr = vaddr;
> +            }
> +            /* calling raise_mmu_exeception again to correct badvaddr */
> +            raise_mmu_exception(env, badvaddr, rw, ret);
> +            return false;
> +        }
> +    }
> +    return true;
> +}
> +
>  static const char * const excp_names[EXCP_LAST + 1] = {
>      [EXCP_RESET] = "reset",
>      [EXCP_SRESET] = "soft reset",
> @@ -497,6 +528,8 @@ void mips_cpu_do_interrupt(CPUState *cs)
>          qemu_log("%s enter: PC " TARGET_FMT_lx " EPC " TARGET_FMT_lx " %s exception\n",
>                   __func__, env->active_tc.PC, env->CP0_EPC, name);
>      }
> +
> +    env->active_tc.misaligned = false;
>      if (cs->exception_index == EXCP_EXT_INTERRUPT &&
>          (env->hflags & MIPS_HFLAG_DM)) {
>          cs->exception_index = EXCP_DINT;
> diff --git a/target-mips/op_helper.c b/target-mips/op_helper.c
> index 58f02cf..b3b8d52 100644
> --- a/target-mips/op_helper.c
> +++ b/target-mips/op_helper.c
> @@ -47,7 +47,9 @@ static inline void QEMU_NORETURN do_raise_exception_err(CPUMIPSState *env,
>          /* now we have a real cpu fault */
>          cpu_restore_state(cs, pc);
>      }
> -
> +#if !defined(CONFIG_USER_ONLY)
> +    env->active_tc.misaligned = false;
> +#endif
>      cpu_loop_exit(cs);
>  }
>  
> @@ -2215,9 +2217,12 @@ void mips_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
>      int error_code = 0;
>      int excp;
>  
> -    if (env->insn_flags & ISA_MIPS32R6) {
> +    if ((env->insn_flags & ISA_MIPS32R6) || (env->active_tc.misaligned)) {
>          /* Release 6 provides support for misaligned memory access for
>           * all ordinary memory reference instructions
> +         *
> +         * MIPS SIMD Architecture vector loads and stores support misalignment
> +         * memory access
>           * */
>          return;
>      }
> @@ -3571,33 +3576,47 @@ void helper_msa_ld_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
>      wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
>      target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
>      int i;
> +    int mmu_idx = cpu_mmu_index(env);
> +
> +#if !defined(CONFIG_USER_ONLY)
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {
> +        if (!cpu_mips_validate_msa_block_access(env, addr, MMU_DATA_LOAD,
> +                                                mmu_idx)) {
> +            CPUState *cs = CPU(mips_env_get_cpu(env));

MIPSCPU *cpu = mips_env_get_cpu(env);
CPUState *cs = CPU(cpu);

Regards,
Andreas

> +            helper_raise_exception_err(env, cs->exception_index,
> +                                       env->error_code);
> +            return;
> +        }
> +    }
> +    env->active_tc.misaligned = true;
> +#endif
>  
>      switch (df) {
>      case DF_BYTE:
>          for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {
> -            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE), mmu_idx);
>          }
>          break;
>      case DF_HALF:
>          for (i = 0; i < DF_ELEMENTS(DF_HALF); i++) {
> -            pwd->h[i] = do_lhu(env, addr + (i << DF_HALF),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->h[i] = do_lhu(env, addr + (i << DF_HALF), mmu_idx);
>          }
>          break;
>      case DF_WORD:
>          for (i = 0; i < DF_ELEMENTS(DF_WORD); i++) {
> -            pwd->w[i] = do_lw(env, addr + (i << DF_WORD),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->w[i] = do_lw(env, addr + (i << DF_WORD), mmu_idx);
>          }
>          break;
>      case DF_DOUBLE:
>          for (i = 0; i < DF_ELEMENTS(DF_DOUBLE); i++) {
> -            pwd->d[i] = do_ld(env, addr + (i << DF_DOUBLE),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->d[i] = do_ld(env, addr + (i << DF_DOUBLE), mmu_idx);
>          }
>          break;
>      }
> +#if !defined(CONFIG_USER_ONLY)
> +    env->active_tc.misaligned = false;
> +#endif
>  }
>  
>  void helper_msa_st_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
> @@ -3606,31 +3625,45 @@ void helper_msa_st_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
>      wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
>      target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
>      int i;
> +    int mmu_idx = cpu_mmu_index(env);
> +
> +#if !defined(CONFIG_USER_ONLY)
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {
> +        if (!cpu_mips_validate_msa_block_access(env, addr, MMU_DATA_STORE,
> +                                                mmu_idx)) {
> +            CPUState *cs = CPU(mips_env_get_cpu(env));
> +            helper_raise_exception_err(env, cs->exception_index,
> +                                       env->error_code);
> +            return;
> +        }
> +    }
> +    env->active_tc.misaligned = true;
> +#endif
>  
>      switch (df) {
>      case DF_BYTE:
>          for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {
> -            do_sb(env, addr + (i << DF_BYTE), pwd->b[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sb(env, addr + (i << DF_BYTE), pwd->b[i], mmu_idx);
>          }
>          break;
>      case DF_HALF:
>          for (i = 0; i < DF_ELEMENTS(DF_HALF); i++) {
> -            do_sh(env, addr + (i << DF_HALF), pwd->h[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sh(env, addr + (i << DF_HALF), pwd->h[i], mmu_idx);
>          }
>          break;
>      case DF_WORD:
>          for (i = 0; i < DF_ELEMENTS(DF_WORD); i++) {
> -            do_sw(env, addr + (i << DF_WORD), pwd->w[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sw(env, addr + (i << DF_WORD), pwd->w[i], mmu_idx);
>          }
>          break;
>      case DF_DOUBLE:
>          for (i = 0; i < DF_ELEMENTS(DF_DOUBLE); i++) {
> -            do_sd(env, addr + (i << DF_DOUBLE), pwd->d[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sd(env, addr + (i << DF_DOUBLE), pwd->d[i], mmu_idx);
>          }
>          break;
>      }
> +#if !defined(CONFIG_USER_ONLY)
> +    env->active_tc.misaligned = false;
> +#endif
>  }
> diff --git a/target-mips/translate.c b/target-mips/translate.c
> index fd063a2..674b1cb 100644
> --- a/target-mips/translate.c
> +++ b/target-mips/translate.c
> @@ -19641,6 +19641,10 @@ void cpu_state_reset(CPUMIPSState *env)
>      restore_rounding_mode(env);
>      restore_flush_mode(env);
>      cs->exception_index = EXCP_NONE;
> +
> +#ifndef CONFIG_USER_ONLY
> +    env->active_tc.misaligned = false;
> +#endif
>  }
>  
>  void restore_state_to_opc(CPUMIPSState *env, TranslationBlock *tb, int pc_pos)
> 


-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton; HRB
21284 (AG Nürnberg)

* Re: [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA Yongbok Kim
  2015-05-11 13:12   ` Andreas Färber
@ 2015-05-11 13:15   ` Yongbok Kim
  2015-05-11 13:52     ` Leon Alrae
  2015-05-12  9:54     ` Peter Maydell
  2015-05-12  9:43   ` Leon Alrae
  2 siblings, 2 replies; 10+ messages in thread
From: Yongbok Kim @ 2015-05-11 13:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, leon.alrae

On 11/05/2015 12:30, Yongbok Kim wrote:
> MIPS SIMD Architecture vector loads and stores require misalignment support.
> An MSA memory access should behave as an atomic operation, so if an access
> spans two pages, the validity of all of its addresses has to be checked first.
>
> Introduce a misaligned flag to indicate that an MSA ld/st is in progress; it
> is used to allow misaligned accesses in the mips_cpu_do_unaligned_access()
> callback. This is crucial for supporting MSA misaligned accesses on Release 5
> cores.
>
> Further clean-up: use the mmu_idx from cpu_mmu_index() instead of calculating
> it from the hflags.
>
> Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
> ---
>  target-mips/cpu.h       |    5 +++
>  target-mips/helper.c    |   33 ++++++++++++++++++++++
>  target-mips/op_helper.c |   69 ++++++++++++++++++++++++++++++++++------------
>  target-mips/translate.c |    4 +++
>  4 files changed, 93 insertions(+), 18 deletions(-)
>
> diff --git a/target-mips/cpu.h b/target-mips/cpu.h
> index f9d2b4c..1bd1229 100644
> --- a/target-mips/cpu.h
> +++ b/target-mips/cpu.h
> @@ -211,6 +211,9 @@ struct TCState {
>          MSACSR_FS_MASK)
>  
>      float_status msa_fp_status;
> +#if !defined(CONFIG_USER_ONLY)
> +    bool misaligned; /* indicates misaligned access is allowed */
> +#endif
>  };
>  
>  typedef struct CPUMIPSState CPUMIPSState;
> @@ -760,6 +763,8 @@ int mips_cpu_handle_mmu_fault(CPUState *cpu, vaddr address, int rw,
>  void r4k_invalidate_tlb (CPUMIPSState *env, int idx, int use_extra);
>  hwaddr cpu_mips_translate_address (CPUMIPSState *env, target_ulong address,
>  		                               int rw);
> +bool cpu_mips_validate_msa_block_access(CPUMIPSState *env, target_ulong addr,
> +                                        int rw, int mmu_idx);
>  #endif
>  target_ulong exception_resume_pc (CPUMIPSState *env);
>  
> diff --git a/target-mips/helper.c b/target-mips/helper.c
> index 8e3204a..951aea8 100644
> --- a/target-mips/helper.c
> +++ b/target-mips/helper.c
> @@ -391,6 +391,37 @@ hwaddr cpu_mips_translate_address(CPUMIPSState *env, target_ulong address, int r
>      }
>  }
>  
> +bool cpu_mips_validate_msa_block_access(CPUMIPSState *env, target_ulong addr,
> +                                        int rw, int mmu_idx)
> +{
> +    target_ulong vaddr = addr & TARGET_PAGE_MASK;
> +    target_ulong badvaddr = addr;
> +
> +    CPUState *cs = CPU(mips_env_get_cpu(env));
> +    int ret;
> +
> +    ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
> +    if (ret != TLBRET_MATCH) {
> +        /* calling raise_mmu_exeception again to correct badvaddr */
> +        raise_mmu_exception(env, badvaddr, rw, ret);
> +        return false;
> +    }
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {
> +        vaddr += TARGET_PAGE_SIZE;
> +        ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
> +        if (ret != TLBRET_MATCH) {
> +            if (ret != TLBRET_BADADDR) {
> +                badvaddr = vaddr;
> +            }
> +            /* calling raise_mmu_exeception again to correct badvaddr */
> +            raise_mmu_exception(env, badvaddr, rw, ret);
> +            return false;
> +        }
> +    }
> +    return true;
> +}
> +
>  static const char * const excp_names[EXCP_LAST + 1] = {
>      [EXCP_RESET] = "reset",
>      [EXCP_SRESET] = "soft reset",
> @@ -497,6 +528,8 @@ void mips_cpu_do_interrupt(CPUState *cs)
>          qemu_log("%s enter: PC " TARGET_FMT_lx " EPC " TARGET_FMT_lx " %s exception\n",
>                   __func__, env->active_tc.PC, env->CP0_EPC, name);
>      }
> +
> +    env->active_tc.misaligned = false;
>      if (cs->exception_index == EXCP_EXT_INTERRUPT &&
>          (env->hflags & MIPS_HFLAG_DM)) {
>          cs->exception_index = EXCP_DINT;
> diff --git a/target-mips/op_helper.c b/target-mips/op_helper.c
> index 58f02cf..b3b8d52 100644
> --- a/target-mips/op_helper.c
> +++ b/target-mips/op_helper.c
> @@ -47,7 +47,9 @@ static inline void QEMU_NORETURN do_raise_exception_err(CPUMIPSState *env,
>          /* now we have a real cpu fault */
>          cpu_restore_state(cs, pc);
>      }
> -
> +#if !defined(CONFIG_USER_ONLY)
> +    env->active_tc.misaligned = false;
> +#endif
>      cpu_loop_exit(cs);
>  }
>  
> @@ -2215,9 +2217,12 @@ void mips_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
>      int error_code = 0;
>      int excp;
>  
> -    if (env->insn_flags & ISA_MIPS32R6) {
> +    if ((env->insn_flags & ISA_MIPS32R6) || (env->active_tc.misaligned)) {
>          /* Release 6 provides support for misaligned memory access for
>           * all ordinary memory reference instructions
> +         *
> +         * MIPS SIMD Architecture vector loads and stores support misalignment
> +         * memory access
>           * */
>          return;
>      }
> @@ -3571,33 +3576,47 @@ void helper_msa_ld_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
>      wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
>      target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
>      int i;
> +    int mmu_idx = cpu_mmu_index(env);
> +
> +#if !defined(CONFIG_USER_ONLY)
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {
> +        if (!cpu_mips_validate_msa_block_access(env, addr, MMU_DATA_LOAD,
> +                                                mmu_idx)) {
> +            CPUState *cs = CPU(mips_env_get_cpu(env));
> +            helper_raise_exception_err(env, cs->exception_index,
> +                                       env->error_code);
> +            return;
> +        }
> +    }
> +    env->active_tc.misaligned = true;
> +#endif
>  
>      switch (df) {
>      case DF_BYTE:
>          for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {
> -            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE), mmu_idx);
>          }
>          break;
>      case DF_HALF:
>          for (i = 0; i < DF_ELEMENTS(DF_HALF); i++) {
> -            pwd->h[i] = do_lhu(env, addr + (i << DF_HALF),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->h[i] = do_lhu(env, addr + (i << DF_HALF), mmu_idx);
>          }
>          break;
>      case DF_WORD:
>          for (i = 0; i < DF_ELEMENTS(DF_WORD); i++) {
> -            pwd->w[i] = do_lw(env, addr + (i << DF_WORD),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->w[i] = do_lw(env, addr + (i << DF_WORD), mmu_idx);
>          }
>          break;
>      case DF_DOUBLE:
>          for (i = 0; i < DF_ELEMENTS(DF_DOUBLE); i++) {
> -            pwd->d[i] = do_ld(env, addr + (i << DF_DOUBLE),
> -                                env->hflags & MIPS_HFLAG_KSU);
> +            pwd->d[i] = do_ld(env, addr + (i << DF_DOUBLE), mmu_idx);
>          }
>          break;
>      }
> +#if !defined(CONFIG_USER_ONLY)
> +    env->active_tc.misaligned = false;
> +#endif
>  }
>  
>  void helper_msa_st_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
> @@ -3606,31 +3625,45 @@ void helper_msa_st_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
>      wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
>      target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
>      int i;
> +    int mmu_idx = cpu_mmu_index(env);
> +
> +#if !defined(CONFIG_USER_ONLY)
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {
> +        if (!cpu_mips_validate_msa_block_access(env, addr, MMU_DATA_STORE,
> +                                                mmu_idx)) {
> +            CPUState *cs = CPU(mips_env_get_cpu(env));
> +            helper_raise_exception_err(env, cs->exception_index,
> +                                       env->error_code);
> +            return;
> +        }
> +    }
> +    env->active_tc.misaligned = true;
> +#endif
>  
>      switch (df) {
>      case DF_BYTE:
>          for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {
> -            do_sb(env, addr + (i << DF_BYTE), pwd->b[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sb(env, addr + (i << DF_BYTE), pwd->b[i], mmu_idx);
>          }
>          break;
>      case DF_HALF:
>          for (i = 0; i < DF_ELEMENTS(DF_HALF); i++) {
> -            do_sh(env, addr + (i << DF_HALF), pwd->h[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sh(env, addr + (i << DF_HALF), pwd->h[i], mmu_idx);
>          }
>          break;
>      case DF_WORD:
>          for (i = 0; i < DF_ELEMENTS(DF_WORD); i++) {
> -            do_sw(env, addr + (i << DF_WORD), pwd->w[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sw(env, addr + (i << DF_WORD), pwd->w[i], mmu_idx);
>          }
>          break;
>      case DF_DOUBLE:
>          for (i = 0; i < DF_ELEMENTS(DF_DOUBLE); i++) {
> -            do_sd(env, addr + (i << DF_DOUBLE), pwd->d[i],
> -                    env->hflags & MIPS_HFLAG_KSU);
> +            do_sd(env, addr + (i << DF_DOUBLE), pwd->d[i], mmu_idx);
>          }
>          break;
>      }
> +#if !defined(CONFIG_USER_ONLY)
> +    env->active_tc.misaligned = false;
> +#endif
>  }
> diff --git a/target-mips/translate.c b/target-mips/translate.c
> index fd063a2..674b1cb 100644
> --- a/target-mips/translate.c
> +++ b/target-mips/translate.c
> @@ -19641,6 +19641,10 @@ void cpu_state_reset(CPUMIPSState *env)
>      restore_rounding_mode(env);
>      restore_flush_mode(env);
>      cs->exception_index = EXCP_NONE;
> +
> +#ifndef CONFIG_USER_ONLY
> +    env->active_tc.misaligned = false;
> +#endif
>  }
>  
>  void restore_state_to_opc(CPUMIPSState *env, TranslationBlock *tb, int pc_pos)

Hi
I have implemented this with a flag, which isn't that nice.

The thing is that misaligned accesses from MSA LD/ST should be allowed on R5 cores
while they are not allowed for any other instruction.
Therefore we need to know which type of instruction is triggering the misaligned access.

Initially I tried to fetch the instruction in the mips_cpu_do_unaligned_access() callback,
but in certain cases where the LD/ST address and the PC have the same TLB index it goes wrong.

I also tried adding an extra mmu_idx to avoid this problem, but that still requires a flag, as it is
not possible to pass an mmu_idx to cpu_{ld,st}XX_XXX() (cpu_{ld,st}XX_XXX() call cpu_mmu_index() to get the mmu_idx).

I could use the host address directly via {ld,st}xx_p(), but then MMIO would still be left to be solved.
Perhaps another flag just for the R5 + MSA + MMIO case.

I might be able to change all the generic load/store macros such as cpu_ldst_template.h and
softmmu_template.h to pass the misalignment information.
However, that would be a huge amount of work impacting all the architectures.

Do you have any other thoughts or suggestions on this? Or is this flag a necessary evil?

Regards,
Yongbok

* Re: [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA
  2015-05-11 13:15   ` Yongbok Kim
@ 2015-05-11 13:52     ` Leon Alrae
  2015-05-12  9:54     ` Peter Maydell
  1 sibling, 0 replies; 10+ messages in thread
From: Leon Alrae @ 2015-05-11 13:52 UTC (permalink / raw)
  To: Yongbok Kim, qemu-devel; +Cc: peter.maydell

Hi Yongbok,

On 11/05/2015 14:15, Yongbok Kim wrote:
> Hi
> I have implemented this with a flag, which isn't that nice.
> 
> The thing is that misaligned accesses from MSA LD/ST should be allowed on R5 cores
> while they are not allowed for any other instruction.
> Therefore we need to know which type of instruction is triggering the misaligned access.
> 
> Initially I tried to fetch the instruction in the mips_cpu_do_unaligned_access() callback,
> but in certain cases where the LD/ST address and the PC have the same TLB index it goes wrong.
> 
> I also tried adding an extra mmu_idx to avoid this problem, but that still requires a flag, as it is
> not possible to pass an mmu_idx to cpu_{ld,st}XX_XXX() (cpu_{ld,st}XX_XXX() call cpu_mmu_index() to get the mmu_idx).
> 
> I could use the host address directly via {ld,st}xx_p(), but then MMIO would still be left to be solved.
> Perhaps another flag just for the R5 + MSA + MMIO case.
> 
> I might be able to change all the generic load/store macros such as cpu_ldst_template.h and
> softmmu_template.h to pass the misalignment information.
> However, that would be a huge amount of work impacting all the architectures.
> 
> Do you have any other thoughts or suggestions on this? Or is this flag a necessary evil?

I haven't reviewed this patch yet, but have you considered always using
byte-by-byte accesses for misaligned MSA loads/stores? The flag wouldn't
be required, and I also suspect we would still benefit from the fast path.
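
Roughly what I have in mind for, say, helper_msa_ld_df() (an untested sketch,
glossing over element byte ordering):

    if (addr & ((1 << df) - 1)) {
        /* Misaligned vector access: fall back to plain byte loads.
         * Byte accesses can never be misaligned, so no flag is needed. */
        for (i = 0; i < MSA_WRLEN / 8; i++) {
            pwd->b[i] = do_lbu(env, addr + i, mmu_idx);
        }
        return;
    }
    /* aligned: keep the existing per-element fast path below */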

Thanks,
Leon

* Re: [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA
  2015-05-11 11:30 ` [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA Yongbok Kim
  2015-05-11 13:12   ` Andreas Färber
  2015-05-11 13:15   ` Yongbok Kim
@ 2015-05-12  9:43   ` Leon Alrae
  2 siblings, 0 replies; 10+ messages in thread
From: Leon Alrae @ 2015-05-12  9:43 UTC (permalink / raw)
  To: Yongbok Kim, qemu-devel; +Cc: peter.maydell

On 11/05/2015 12:30, Yongbok Kim wrote:
> @@ -391,6 +391,37 @@ hwaddr cpu_mips_translate_address(CPUMIPSState *env, target_ulong address, int r
>      }
>  }
>  
> +bool cpu_mips_validate_msa_block_access(CPUMIPSState *env, target_ulong addr,
> +                                        int rw, int mmu_idx)
> +{
> +    target_ulong vaddr = addr & TARGET_PAGE_MASK;

This deserves a more descriptive name, maybe "page_addr"?

> +    target_ulong badvaddr = addr;
> +
> +    CPUState *cs = CPU(mips_env_get_cpu(env));
> +    int ret;
> +
> +    ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
> +    if (ret != TLBRET_MATCH) {
> +        /* calling raise_mmu_exeception again to correct badvaddr */
> +        raise_mmu_exception(env, badvaddr, rw, ret);

mips_cpu_handle_mmu_fault() already calls raise_mmu_exception(), where the
appropriate registers get updated. Why call it again here?

> +        return false;
> +    }
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {

This isn’t required; you already do this check before calling this function.

> +        vaddr += TARGET_PAGE_SIZE;
> +        ret = mips_cpu_handle_mmu_fault(cs, vaddr, rw, mmu_idx);
> +        if (ret != TLBRET_MATCH) {
> +            if (ret != TLBRET_BADADDR) {
> +                badvaddr = vaddr;
> +            }
> +            /* calling raise_mmu_exeception again to correct badvaddr */
> +            raise_mmu_exception(env, badvaddr, rw, ret);
> +            return false;
> +        }
> +    }
> +    return true;
> +}
> +
>  static const char * const excp_names[EXCP_LAST + 1] = {
>      [EXCP_RESET] = "reset",
>      [EXCP_SRESET] = "soft reset",
> @@ -3571,33 +3576,47 @@ void helper_msa_ld_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
>      wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
>      target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
>      int i;
> +    int mmu_idx = cpu_mmu_index(env);
> +
> +#if !defined(CONFIG_USER_ONLY)
> +    if (unlikely(((addr & ~TARGET_PAGE_MASK) + MSA_WRLEN - 1)
> +                 >= TARGET_PAGE_SIZE)) {

This should be MSA_WRLEN / 8: MSA_WRLEN is in bits, but the span check here
needs the access width in bytes.

> +        if (!cpu_mips_validate_msa_block_access(env, addr, MMU_DATA_LOAD,
> +                                                mmu_idx)) {
> +            CPUState *cs = CPU(mips_env_get_cpu(env));
> +            helper_raise_exception_err(env, cs->exception_index,
> +                                       env->error_code);

Wouldn’t it be better to fold it into cpu_mips_validate_msa_block_access()?

Thanks,
Leon

* Re: [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA
  2015-05-11 13:15   ` Yongbok Kim
  2015-05-11 13:52     ` Leon Alrae
@ 2015-05-12  9:54     ` Peter Maydell
  2015-05-12 15:38       ` Richard Henderson
  1 sibling, 1 reply; 10+ messages in thread
From: Peter Maydell @ 2015-05-12  9:54 UTC (permalink / raw)
  To: Yongbok Kim, Richard Henderson; +Cc: Leon Alrae, QEMU Developers

On 11 May 2015 at 14:15, Yongbok Kim <yongbok.kim@imgtec.com> wrote:
> The thing is that misaligned accesses from MSA LD/ST should be allowed on R5 cores
> while they are not allowed for any other instruction.
> Therefore we need to know which type of instruction is triggering the misaligned access.
>
> Initially I tried to fetch the instruction in the mips_cpu_do_unaligned_access() callback,
> but in certain cases where the LD/ST address and the PC have the same TLB index it goes wrong.
>
> I also tried adding an extra mmu_idx to avoid this problem, but that still requires a flag, as it is
> not possible to pass an mmu_idx to cpu_{ld,st}XX_XXX() (cpu_{ld,st}XX_XXX() call cpu_mmu_index() to get the mmu_idx).
>
> I could use the host address directly via {ld,st}xx_p(), but then MMIO would still be left to be solved.
> Perhaps another flag just for the R5 + MSA + MMIO case.
>
> I might be able to change all the generic load/store macros such as cpu_ldst_template.h and
> softmmu_template.h to pass the misalignment information.
> However, that would be a huge amount of work impacting all the architectures.

Ideally it would be nice to have support in TCG so that a frontend
could output a TCG load/store op with a flag for "unaligned access
OK" or not. ARM also has this issue of some load/stores wanting to
do alignment traps and some not.
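
For illustration only (the flag names and temporaries below are made up;
nothing like them exists in TCGMemOp today), a front end could then write:

    /* hypothetical per-access alignment control via TCGMemOp flags */
    tcg_gen_qemu_ld_tl(t0, taddr, ctx->mem_idx, MO_TEUL | MO_ALIGN);  /* trap on misalignment */
    tcg_gen_qemu_ld_tl(t0, taddr, ctx->mem_idx, MO_TEUL | MO_UNALN);  /* misalignment allowed */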

-- PMM

* Re: [Qemu-devel] [PATCH v2 2/2] target-mips: Misaligned memory accesses for MSA
  2015-05-12  9:54     ` Peter Maydell
@ 2015-05-12 15:38       ` Richard Henderson
  0 siblings, 0 replies; 10+ messages in thread
From: Richard Henderson @ 2015-05-12 15:38 UTC (permalink / raw)
  To: Peter Maydell, Yongbok Kim; +Cc: Leon Alrae, QEMU Developers

On 05/12/2015 02:54 AM, Peter Maydell wrote:
> Ideally it would be nice to have support in TCG so that a frontend
> could output a TCG load/store op with a flag for "unaligned access
> OK" or not. ARM also has this issue of some load/stores wanting to
> do alignment traps and some not.

Yes, that would be ideal.

As I was looking at softmmu_template.h for Peter C this morning, I was
wondering about that possibility, since he would need to hook
cpu_unaligned_access and the #ifdef ALIGNED_ONLY would need to go away.

What we can't afford is yet another parameter to the helpers.  So I turn my eye
to the mmu_idx parameter, of which we're only using a couple of bits.

What if we merge mmu_idx with TCGMemOp into a single parameter at the tcg-op.h
interface? That saves a tiny bit of space within the tcg opcode buffer. We'd
have to teach each backend to pull them apart when generating code, but that's
trivial. And in the end, the helpers would have all the information about the
access that the code generator had.

Then we add an "aligned" bit to TCGMemOp and use it instead of ifdef ALIGNED_ONLY.
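
As a rough sketch of the packing (the names below are invented purely to
illustrate the idea, reusing the existing TCGMemOp from tcg/tcg.h):

    /* Pack a TCGMemOp and an mmu_idx into one parameter for the helpers.
     * Sketch only: assumes mmu_idx fits in the low 4 bits. */
    typedef uint32_t TCGMemOpIdx;

    static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned mmu_idx)
    {
        return (op << 4) | mmu_idx;
    }

    static inline TCGMemOp get_memop(TCGMemOpIdx oi)
    {
        return oi >> 4;
    }

    static inline unsigned get_mmuidx(TCGMemOpIdx oi)
    {
        return oi & 15;
    }

The backends would pull the two halves apart with get_memop()/get_mmuidx()
when generating code, and the helpers would see the whole TCGMemOp, including
the "aligned" bit, for each access.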

Thoughts?


r~
