From: Greg Bellows <greg.bellows@linaro.org>
Date: Mon, 26 Jan 2015 08:40:14 -0600
Subject: Re: [Qemu-devel] [PATCH 05/11] target-arm: Use correct mmu_idx for unprivileged loads and stores
To: Peter Maydell <peter.maydell@linaro.org>
Cc: "Edgar E. Iglesias", Andrew Jones, Alex Bennée, QEMU Developers, Patch Tracking
In-Reply-To: <1422037228-5363-6-git-send-email-peter.maydell@linaro.org>

On Fri, Jan 23, 2015 at 12:20 PM, Peter Maydell <peter.maydell@linaro.org> wrote:
> The MMU index to use for unprivileged loads and stores is more
> complicated than we currently implement:
>  * for A64, it should be "if at EL1, access as if EL0; otherwise
>    access at current EL"
>  * for A32/T32, it should be "if EL2, UNPREDICTABLE; otherwise
>    access as if at EL0".

The wording between the specs appears to be almost identical; curious why the handling is different?

> In both cases, if we want to make the access for Secure EL0
> this is not the same mmu_idx as for Non-Secure EL0.
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
>  target-arm/translate-a64.c | 19 ++++++++++++++++++-
>  target-arm/translate.c     | 26 ++++++++++++++++++++++++--
>  2 files changed, 42 insertions(+), 3 deletions(-)
>
> diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
> index 96f14ff..acf4b16 100644
> --- a/target-arm/translate-a64.c
> +++ b/target-arm/translate-a64.c
> @@ -123,6 +123,23 @@ void a64_translate_init(void)
>  #endif
>  }
>
> +static inline ARMMMUIdx get_a64_user_mem_index(DisasContext *s)
> +{
> +    /* Return the mmu_idx to use for A64 "unprivileged load/store" insns:
> +     *  if EL1, access as if EL0; otherwise access at current EL
> +     */
> +    switch (s->mmu_idx) {
> +    case ARMMMUIdx_S12NSE1:
> +        return ARMMMUIdx_S12NSE0;
> +    case ARMMMUIdx_S1SE1:
> +        return ARMMMUIdx_S1SE0;
> +    case ARMMMUIdx_S2NS:
> +        g_assert_not_reached();
> +    default:
> +        return s->mmu_idx;
> +    }
> +}
> +
>  void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
>                              fprintf_function cpu_fprintf, int flags)
>  {
> @@ -2107,7 +2124,7 @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn)
>          }
>      } else {
>          TCGv_i64 tcg_rt = cpu_reg(s, rt);
> -        int memidx = is_unpriv ? MMU_USER_IDX : get_mem_index(s);
> +        int memidx = is_unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
>
>          if (is_store) {
>              do_gpr_st_memidx(s, tcg_rt, tcg_addr, size, memidx);
> diff --git a/target-arm/translate.c b/target-arm/translate.c
> index 7163649..715f65d 100644
> --- a/target-arm/translate.c
> +++ b/target-arm/translate.c
> @@ -113,6 +113,28 @@ void arm_translate_init(void)
>      a64_translate_init();
>  }
>
> +static inline ARMMMUIdx get_a32_user_mem_index(DisasContext *s)
> +{
> +    /* Return the mmu_idx to use for A32/T32 "unprivileged load/store"
> +     * insns:
> +     *  if PL2, UNPREDICTABLE (we choose to implement as if PL0)
> +     *  otherwise, access as if at PL0.
> +     */
> +    switch (s->mmu_idx) {
> +    case ARMMMUIdx_S1E2:        /* this one is UNPREDICTABLE */
> +    case ARMMMUIdx_S12NSE0:
> +    case ARMMMUIdx_S12NSE1:
> +        return ARMMMUIdx_S12NSE0;
> +    case ARMMMUIdx_S1E3:
> +    case ARMMMUIdx_S1SE0:
> +    case ARMMMUIdx_S1SE1:
> +        return ARMMMUIdx_S1SE0;
> +    case ARMMMUIdx_S2NS:
> +    default:
> +        g_assert_not_reached();
> +    }
> +}
> +
>  static inline TCGv_i32 load_cpu_offset(int offset)
>  {
>      TCGv_i32 tmp = tcg_temp_new_i32();
> @@ -8793,7 +8815,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
>              tmp2 = load_reg(s, rn);
>              if ((insn & 0x01200000) == 0x00200000) {
>                  /* ldrt/strt */
> -                i = MMU_USER_IDX;
> +                i = get_a32_user_mem_index(s);
>              } else {
>                  i = get_mem_index(s);
>              }
> @@ -10173,7 +10195,7 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw
>                  break;
>              case 0xe: /* User privilege.  */
>                  tcg_gen_addi_i32(addr, addr, imm);
> -                memidx = MMU_USER_IDX;
> +                memidx = get_a32_user_mem_index(s);
>                  break;
>              case 0x9: /* Post-decrement.  */
>                  imm = -imm;
> --
> 1.9.1
>

Otherwise,

Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
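
The EL/PL to mmu_idx remapping described above can also be illustrated with a small standalone C sketch. The enum below is a simplified stand-in for QEMU's ARMMMUIdx values (names taken from the diff; the numeric values here are arbitrary), and the two helpers mirror the quoted switch statements.

/* Standalone sketch of the unprivileged-access mmu_idx remapping.
 * Simplified stand-in enum; not the actual QEMU definitions. */
#include <stdio.h>
#include <stdlib.h>

typedef enum {
    S12NSE0, /* Non-secure EL0 */
    S12NSE1, /* Non-secure EL1 */
    S1E2,    /* Hyp (EL2) */
    S1E3,    /* Monitor (EL3) */
    S1SE0,   /* Secure EL0 */
    S1SE1,   /* Secure EL1 */
    S2NS     /* Stage 2: never the CPU's mmu_idx for these insns */
} MMUIdx;

/* A64: if at EL1, access as if EL0; otherwise access at the current EL. */
static MMUIdx a64_user_mem_index(MMUIdx cur)
{
    switch (cur) {
    case S12NSE1:
        return S12NSE0;
    case S1SE1:
        return S1SE0;
    case S2NS:
        abort();
    default:
        return cur;
    }
}

/* A32/T32: always an EL0/PL0 index; EL2 is UNPREDICTABLE, treated as PL0. */
static MMUIdx a32_user_mem_index(MMUIdx cur)
{
    switch (cur) {
    case S1E2:
    case S12NSE0:
    case S12NSE1:
        return S12NSE0;
    case S1E3:
    case S1SE0:
    case S1SE1:
        return S1SE0;
    default:
        abort();
    }
}

int main(void)
{
    /* Secure and Non-secure EL1 drop to *different* EL0 indexes. */
    printf("NS EL1 -> A64 %d, A32 %d\n",
           a64_user_mem_index(S12NSE1), a32_user_mem_index(S12NSE1));
    printf("S  EL1 -> A64 %d, A32 %d\n",
           a64_user_mem_index(S1SE1), a32_user_mem_index(S1SE1));
    /* At EL3 the A64 access stays at the current EL, while A32 drops to PL0. */
    printf("EL3    -> A64 %d, A32 %d\n",
           a64_user_mem_index(S1E3), a32_user_mem_index(S1E3));
    return 0;
}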