* [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO
@ 2020-02-03 14:46 Richard Henderson
  2020-02-03 14:46 ` [PATCH v3 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
                   ` (19 more replies)
  0 siblings, 20 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Based-on: <20200201192916.31796-1-richard.henderson@linaro.org>
("[v6] target/arm: Implement ARMv8.1-VHE")

Version 3 cleans up masking values that go into PSTATE/CPSR,
adding 6 new patches for that purpose.


r~


Richard Henderson (20):
  target/arm: Add arm_mmu_idx_is_stage1_of_2
  target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
  target/arm: Add isar_feature tests for PAN + ATS1E1
  target/arm: Move LOR regdefs to file scope
  target/arm: Split out aarch32_cpsr_valid_mask
  target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
  target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
  target/arm: Remove CPSR_RESERVED
  target/arm: Tidy msr_mask
  target/arm: Introduce aarch64_pstate_valid_mask
  target/arm: Update MSR access for PAN
  target/arm: Update arm_mmu_idx_el for PAN
  target/arm: Enforce PAN semantics in get_S1prot
  target/arm: Set PAN bit as required on exception entry
  target/arm: Implement ATS1E1 system registers
  target/arm: Enable ARMv8.2-ATS1E1 in -cpu max
  target/arm: Add ID_AA64MMFR2_EL1
  target/arm: Update MSR access to UAO
  target/arm: Implement UAO semantics
  target/arm: Enable ARMv8.2-UAO in -cpu max

 target/arm/cpu-param.h     |   2 +-
 target/arm/cpu.h           |  95 ++++++++---
 target/arm/internals.h     |  85 ++++++++++
 target/arm/cpu.c           |   4 +
 target/arm/cpu64.c         |   9 ++
 target/arm/helper-a64.c    |   6 +-
 target/arm/helper.c        | 314 ++++++++++++++++++++++++++++---------
 target/arm/kvm64.c         |   2 +
 target/arm/op_helper.c     |  14 +-
 target/arm/translate-a64.c |  31 ++++
 target/arm/translate.c     |  49 +++---
 11 files changed, 490 insertions(+), 121 deletions(-)

-- 
2.20.1




* [PATCH v3 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
@ 2020-02-03 14:46 ` Richard Henderson
  2020-02-03 14:46 ` [PATCH v3 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled Richard Henderson
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee, Philippe Mathieu-Daudé

Use a common predicate for querying stage1-ness.
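
For illustration, the check used during a stage 1 page table walk can
now be phrased like this (a sketch only; s1_walk_uses_stage2 is an
invented wrapper, the real call sites are in the hunks below):

    static bool s1_walk_uses_stage2(CPUARMState *env, ARMMMUIdx mmu_idx)
    {
        return arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
               !regime_translation_disabled(env, ARMMMUIdx_Stage2);
    }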

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Rename from arm_mmu_idx_is_stage1 to arm_mmu_idx_is_stage1_of_2
---
 target/arm/internals.h | 18 ++++++++++++++++++
 target/arm/helper.c    |  8 +++-----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 6d4a942bde..1f8ee5f573 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1034,6 +1034,24 @@ static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
 #endif
 
+/**
+ * arm_mmu_idx_is_stage1_of_2:
+ * @mmu_idx: The ARMMMUIdx to test
+ *
+ * Return true if @mmu_idx is a NOTLB mmu_idx that is the
+ * first stage of a two stage regime.
+ */
+static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
+{
+    switch (mmu_idx) {
+    case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_Stage1_E1:
+        return true;
+    default:
+        return false;
+    }
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 70b10428c5..852fd71dcc 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3261,8 +3261,7 @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
         bool take_exc = false;
 
         if (fi.s1ptw && current_el == 1 && !arm_is_secure(env)
-            && (mmu_idx == ARMMMUIdx_Stage1_E1 ||
-                mmu_idx == ARMMMUIdx_Stage1_E0)) {
+            && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
             /*
              * Synchronous stage 2 fault on an access made as part of the
              * translation table walk for AT S1E0* or AT S1E1* insn
@@ -9294,8 +9293,7 @@ static inline bool regime_translation_disabled(CPUARMState *env,
         }
     }
 
-    if ((env->cp15.hcr_el2 & HCR_DC) &&
-        (mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1)) {
+    if ((env->cp15.hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
         /* HCR.DC means SCTLR_EL1.M behaves as 0 */
         return true;
     }
@@ -9604,7 +9602,7 @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, MemTxAttrs txattrs,
                                ARMMMUFaultInfo *fi)
 {
-    if ((mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1) &&
+    if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
         !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
         target_ulong s2size;
         hwaddr s2pa;
-- 
2.20.1




* [PATCH v3 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
  2020-02-03 14:46 ` [PATCH v3 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
@ 2020-02-03 14:46 ` Richard Henderson
  2020-02-03 14:46 ` [PATCH v3 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1 Richard Henderson
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

To implement PAN, we will want to swap, for short periods
of time, to a different privileged mmu_idx.  In addition,
we cannot handle this by simply flushing the TLB when PAN
is toggled, because the AT* instructions have both PAN and
PAN-less versions.

Add the ARMMMUIdx*_PAN constants where necessary next to
the corresponding ARMMMUIdx* constant.
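
As a rough sketch of how the new indexes end up being selected
(simplified; PSTATE_PAN and the real selection in arm_mmu_idx_el come
later in this series, and pick_el1_mmu_idx is an invented name):

    static ARMMMUIdx pick_el1_mmu_idx(CPUARMState *env)
    {
        bool pan = env->pstate & PSTATE_PAN;

        if (arm_is_secure_below_el3(env)) {
            return pan ? ARMMMUIdx_SE10_1_PAN : ARMMMUIdx_SE10_1;
        }
        return pan ? ARMMMUIdx_E10_1_PAN : ARMMMUIdx_E10_1;
    }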

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu-param.h     |  2 +-
 target/arm/cpu.h           | 33 ++++++++++++++-------
 target/arm/internals.h     |  9 ++++++
 target/arm/helper.c        | 60 +++++++++++++++++++++++++++++++-------
 target/arm/translate-a64.c |  3 ++
 target/arm/translate.c     |  2 ++
 6 files changed, 87 insertions(+), 22 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 18ac562346..d593b60b28 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -29,6 +29,6 @@
 # define TARGET_PAGE_BITS_MIN  10
 #endif
 
-#define NB_MMU_MODES 9
+#define NB_MMU_MODES 12
 
 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 0b3036c484..c63bceaaa5 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2751,20 +2751,24 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *  5. we want to be able to use the TLB for accesses done as part of a
  *     stage1 page table walk, rather than having to walk the stage2 page
  *     table over and over.
+ *  6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
+ *     Never (PAN) bit within PSTATE.
  *
  * This gives us the following list of cases:
  *
  * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
  * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
+ * NS EL1 EL1&0 stage 1+2 +PAN
  * NS EL0 EL2&0
- * NS EL2 EL2&0
+ * NS EL2 EL2&0 +PAN
  * NS EL2 (aka NS PL2)
  * S EL0 EL1&0 (aka S PL0)
  * S EL1 EL1&0 (not used if EL3 is 32 bit)
+ * S EL1 EL1&0 +PAN
  * S EL3 (aka S PL1)
  * NS EL1&0 stage 2
  *
- * for a total of 9 different mmu_idx.
+ * for a total of 12 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
  * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
@@ -2819,19 +2823,22 @@ typedef enum ARMMMUIdx {
     /*
      * A-profile.
      */
-    ARMMMUIdx_E10_0 =  0 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E20_0 =  1 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_0      =  0 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_0      =  1 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_E10_1 =  2 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1      =  2 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1_PAN  =  3 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_E2 =     3 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E20_2 =  4 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E2         =  4 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2      =  5 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2_PAN  =  6 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_SE10_0 = 5 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1 = 6 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE3 =    7 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_0     = 7 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_1     = 8 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_1_PAN = 9 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE3        = 10 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_Stage2 = 8 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Stage2     = 11 | ARM_MMU_IDX_A,
 
     /*
      * These are not allocated TLBs and are used only for AT system
@@ -2839,6 +2846,7 @@ typedef enum ARMMMUIdx {
      */
     ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
 
     /*
      * M-profile.
@@ -2864,10 +2872,13 @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(E10_0),
     TO_CORE_BIT(E20_0),
     TO_CORE_BIT(E10_1),
+    TO_CORE_BIT(E10_1_PAN),
     TO_CORE_BIT(E2),
     TO_CORE_BIT(E20_2),
+    TO_CORE_BIT(E20_2_PAN),
     TO_CORE_BIT(SE10_0),
     TO_CORE_BIT(SE10_1),
+    TO_CORE_BIT(SE10_1_PAN),
     TO_CORE_BIT(SE3),
     TO_CORE_BIT(Stage2),
 
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 1f8ee5f573..6be8b2d1a9 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -843,12 +843,16 @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         return true;
     default:
         return false;
@@ -861,10 +865,13 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E2:
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_MPrivNegPri:
@@ -875,6 +882,7 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE3:
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
     case ARMMMUIdx_MSPrivNegPri:
     case ARMMMUIdx_MSUserNegPri:
     case ARMMMUIdx_MSPriv:
@@ -1046,6 +1054,7 @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
         return true;
     default:
         return false;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 852fd71dcc..739d2d4cc5 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -671,6 +671,7 @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     tlb_flush_by_mmuidx(cs,
                         ARMMMUIdxBit_E10_1 |
+                        ARMMMUIdxBit_E10_1_PAN |
                         ARMMMUIdxBit_E10_0 |
                         ARMMMUIdxBit_Stage2);
 }
@@ -682,6 +683,7 @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     tlb_flush_by_mmuidx_all_cpus_synced(cs,
                                         ARMMMUIdxBit_E10_1 |
+                                        ARMMMUIdxBit_E10_1_PAN |
                                         ARMMMUIdxBit_E10_0 |
                                         ARMMMUIdxBit_Stage2);
 }
@@ -2700,6 +2702,7 @@ static int gt_phys_redir_timeridx(CPUARMState *env)
     switch (arm_mmu_idx(env)) {
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         return GTIMER_HYP;
     default:
         return GTIMER_PHYS;
@@ -2711,6 +2714,7 @@ static int gt_virt_redir_timeridx(CPUARMState *env)
     switch (arm_mmu_idx(env)) {
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         return GTIMER_HYPVIRT;
     default:
         return GTIMER_VIRT;
@@ -3337,7 +3341,9 @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
         format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
 
         if (arm_feature(env, ARM_FEATURE_EL2)) {
-            if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
+            if (mmu_idx == ARMMMUIdx_E10_0 ||
+                mmu_idx == ARMMMUIdx_E10_1 ||
+                mmu_idx == ARMMMUIdx_E10_1_PAN) {
                 format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
             } else {
                 format64 |= arm_current_el(env) == 2;
@@ -3797,7 +3803,9 @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
     if (extract64(raw_read(env, ri) ^ value, 48, 16) &&
         (arm_hcr_el2_eff(env) & HCR_E2H)) {
         tlb_flush_by_mmuidx(env_cpu(env),
-                            ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0);
+                            ARMMMUIdxBit_E20_2 |
+                            ARMMMUIdxBit_E20_2_PAN |
+                            ARMMMUIdxBit_E20_0);
     }
     raw_write(env, ri, value);
 }
@@ -3815,6 +3823,7 @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     if (raw_read(env, ri) != value) {
         tlb_flush_by_mmuidx(cs,
                             ARMMMUIdxBit_E10_1 |
+                            ARMMMUIdxBit_E10_1_PAN |
                             ARMMMUIdxBit_E10_0 |
                             ARMMMUIdxBit_Stage2);
         raw_write(env, ri, value);
@@ -4175,12 +4184,18 @@ static int vae1_tlbmask(CPUARMState *env)
 {
     /* Since we exclude secure first, we may read HCR_EL2 directly. */
     if (arm_is_secure_below_el3(env)) {
-        return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
+        return ARMMMUIdxBit_SE10_1 |
+               ARMMMUIdxBit_SE10_1_PAN |
+               ARMMMUIdxBit_SE10_0;
     } else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
                == (HCR_E2H | HCR_TGE)) {
-        return ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0;
+        return ARMMMUIdxBit_E20_2 |
+               ARMMMUIdxBit_E20_2_PAN |
+               ARMMMUIdxBit_E20_0;
     } else {
-        return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
+        return ARMMMUIdxBit_E10_1 |
+               ARMMMUIdxBit_E10_1_PAN |
+               ARMMMUIdxBit_E10_0;
     }
 }
 
@@ -4214,18 +4229,28 @@ static int alle1_tlbmask(CPUARMState *env)
      * stage 1 translations.
      */
     if (arm_is_secure_below_el3(env)) {
-        return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
+        return ARMMMUIdxBit_SE10_1 |
+               ARMMMUIdxBit_SE10_1_PAN |
+               ARMMMUIdxBit_SE10_0;
     } else if (arm_feature(env, ARM_FEATURE_EL2)) {
-        return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0 | ARMMMUIdxBit_Stage2;
+        return ARMMMUIdxBit_E10_1 |
+               ARMMMUIdxBit_E10_1_PAN |
+               ARMMMUIdxBit_E10_0 |
+               ARMMMUIdxBit_Stage2;
     } else {
-        return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
+        return ARMMMUIdxBit_E10_1 |
+               ARMMMUIdxBit_E10_1_PAN |
+               ARMMMUIdxBit_E10_0;
     }
 }
 
 static int alle2_tlbmask(CPUARMState *env)
 {
     /* TODO: ARMv8.4-SecEL2 */
-    return ARMMMUIdxBit_E20_0 | ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E2;
+    return ARMMMUIdxBit_E20_0 |
+           ARMMMUIdxBit_E20_2 |
+           ARMMMUIdxBit_E20_2_PAN |
+           ARMMMUIdxBit_E2;
 }
 
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -9215,6 +9240,7 @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_E2:
         return 2;
@@ -9223,10 +9249,13 @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE10_0:
         return arm_el_is_aa64(env, 3) ? 1 : 3;
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_MPrivNegPri:
     case ARMMMUIdx_MUserNegPri:
     case ARMMMUIdx_MPriv:
@@ -9342,6 +9371,8 @@ static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
         return ARMMMUIdx_Stage1_E0;
     case ARMMMUIdx_E10_1:
         return ARMMMUIdx_Stage1_E1;
+    case ARMMMUIdx_E10_1_PAN:
+        return ARMMMUIdx_Stage1_E1_PAN;
     default:
         return mmu_idx;
     }
@@ -9388,6 +9419,7 @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
         return false;
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
         g_assert_not_reached();
     }
 }
@@ -11280,7 +11312,9 @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                    target_ulong *page_size,
                    ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
 {
-    if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
+    if (mmu_idx == ARMMMUIdx_E10_0 ||
+        mmu_idx == ARMMMUIdx_E10_1 ||
+        mmu_idx == ARMMMUIdx_E10_1_PAN) {
         /* Call ourselves recursively to do the stage 1 and then stage 2
          * translations.
          */
@@ -11807,10 +11841,13 @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE10_0:
         return 0;
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         return 1;
     case ARMMMUIdx_E2:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         return 2;
     case ARMMMUIdx_SE3:
         return 3;
@@ -12027,11 +12064,14 @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     /* TODO: ARMv8.2-UAO */
     switch (mmu_idx) {
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         /* TODO: ARMv8.3-NV */
         flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
         break;
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         /* TODO: ARMv8.4-SecEL2 */
         /*
          * Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 6e82486884..49631c2340 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -124,12 +124,15 @@ static int get_a64_user_mem_index(DisasContext *s)
          */
         switch (useridx) {
         case ARMMMUIdx_E10_1:
+        case ARMMMUIdx_E10_1_PAN:
             useridx = ARMMMUIdx_E10_0;
             break;
         case ARMMMUIdx_E20_2:
+        case ARMMMUIdx_E20_2_PAN:
             useridx = ARMMMUIdx_E20_0;
             break;
         case ARMMMUIdx_SE10_1:
+        case ARMMMUIdx_SE10_1_PAN:
             useridx = ARMMMUIdx_SE10_0;
             break;
         default:
diff --git a/target/arm/translate.c b/target/arm/translate.c
index e11a5871d0..d58c328e08 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -155,10 +155,12 @@ static inline int get_a32_user_mem_index(DisasContext *s)
     case ARMMMUIdx_E2:        /* this one is UNPREDICTABLE */
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
         return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
     case ARMMMUIdx_SE3:
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
     case ARMMMUIdx_MUser:
     case ARMMMUIdx_MPriv:
-- 
2.20.1




* [PATCH v3 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
  2020-02-03 14:46 ` [PATCH v3 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
  2020-02-03 14:46 ` [PATCH v3 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled Richard Henderson
@ 2020-02-03 14:46 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 04/20] target/arm: Move LOR regdefs to file scope Richard Henderson
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Include definitions for all of the bits in ID_MMFR3.
We already have a definition for ID_AA64MMFR1.PAN.
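
These are consumed in the usual cpu_isar_feature() style; roughly
(a sketch, the exact call sites are in later patches of this series):

    if (cpu_isar_feature(aa64_pan, cpu)) {
        /* e.g. register the PAN pseudo-register (patch 11) */
    }
    if (cpu_isar_feature(aa64_ats1e1, cpu)) {
        /* e.g. register the AT S1E1RP / S1E1WP operations (patch 15) */
    }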

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index c63bceaaa5..08b2f5d73e 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1727,6 +1727,15 @@ FIELD(ID_ISAR6, FHM, 8, 4)
 FIELD(ID_ISAR6, SB, 12, 4)
 FIELD(ID_ISAR6, SPECRES, 16, 4)
 
+FIELD(ID_MMFR3, CMAINTVA, 0, 4)
+FIELD(ID_MMFR3, CMAINTSW, 4, 4)
+FIELD(ID_MMFR3, BPMAINT, 8, 4)
+FIELD(ID_MMFR3, MAINTBCST, 12, 4)
+FIELD(ID_MMFR3, PAN, 16, 4)
+FIELD(ID_MMFR3, COHWALK, 20, 4)
+FIELD(ID_MMFR3, CMEMSZ, 24, 4)
+FIELD(ID_MMFR3, SUPERSEC, 28, 4)
+
 FIELD(ID_MMFR4, SPECSEI, 0, 4)
 FIELD(ID_MMFR4, AC2, 4, 4)
 FIELD(ID_MMFR4, XNX, 8, 4)
@@ -3443,6 +3452,16 @@ static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
     return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 4;
 }
 
+static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) != 0;
+}
+
+static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -3602,6 +3621,16 @@ static inline bool isar_feature_aa64_lor(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, LO) != 0;
 }
 
+static inline bool isar_feature_aa64_pan(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) != 0;
+}
+
+static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
+}
+
 static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
-- 
2.20.1




* [PATCH v3 04/20] target/arm: Move LOR regdefs to file scope
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (2 preceding siblings ...)
  2020-02-03 14:46 ` [PATCH v3 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1 Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

For static const regdefs, file scope is preferred.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 57 +++++++++++++++++++++++----------------------
 1 file changed, 29 insertions(+), 28 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 739d2d4cc5..795ef727d0 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6343,6 +6343,35 @@ static CPAccessResult access_lor_other(CPUARMState *env,
     return access_lor_ns(env);
 }
 
+/*
+ * A trivial implementation of ARMv8.1-LOR leaves all of these
+ * registers fixed at 0, which indicates that there are zero
+ * supported Limited Ordering regions.
+ */
+static const ARMCPRegInfo lor_reginfo[] = {
+    { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
+      .access = PL1_R, .accessfn = access_lorid,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    REGINFO_SENTINEL
+};
+
 #ifdef TARGET_AARCH64
 static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
@@ -7577,34 +7606,6 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     }
 
     if (cpu_isar_feature(aa64_lor, cpu)) {
-        /*
-         * A trivial implementation of ARMv8.1-LOR leaves all of these
-         * registers fixed at 0, which indicates that there are zero
-         * supported Limited Ordering regions.
-         */
-        static const ARMCPRegInfo lor_reginfo[] = {
-            { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
-              .access = PL1_R, .accessfn = access_lorid,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            REGINFO_SENTINEL
-        };
         define_arm_cp_regs(cpu, lor_reginfo);
     }
 
-- 
2.20.1




* [PATCH v3 05/20] target/arm: Split out aarch32_cpsr_valid_mask
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (3 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 04/20] target/arm: Move LOR regdefs to file scope Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:26   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 06/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask Richard Henderson
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Split this helper out of msr_mask in translate.c.  At the same time,
transform the negative reductive logic to positive accumulative logic.
It will be usable along the exception paths.
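
The helper is then usable both at translate time and at run time along
the exception paths, e.g. (fragments taken from the following patches):

    /* translate time, fields cached in DisasContext */
    mask &= aarch32_cpsr_valid_mask(s->features, s->isar);

    /* run time, on an aarch32 exception return */
    mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
    cpsr_write(env, val, mask, CPSRWriteExceptionReturn);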

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h | 24 ++++++++++++++++++++++++
 target/arm/translate.c | 17 +++--------------
 2 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 6be8b2d1a9..0569c96fd9 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1061,6 +1061,30 @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
     }
 }
 
+static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
+                                               const ARMISARegisters *id)
+{
+    uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV;
+
+    if ((features >> ARM_FEATURE_V4T) & 1) {
+        valid |= CPSR_T;
+    }
+    if ((features >> ARM_FEATURE_V5) & 1) {
+        valid |= CPSR_Q; /* V5TE in reality*/
+    }
+    if ((features >> ARM_FEATURE_V6) & 1) {
+        valid |= CPSR_E | CPSR_GE;
+    }
+    if ((features >> ARM_FEATURE_THUMB2) & 1) {
+        valid |= CPSR_IT;
+    }
+    if (isar_feature_jazelle(id)) {
+        valid |= CPSR_J;
+    }
+
+    return valid;
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/translate.c b/target/arm/translate.c
index d58c328e08..032f7074cb 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -2747,22 +2747,11 @@ static uint32_t msr_mask(DisasContext *s, int flags, int spsr)
         mask |= 0xff000000;
 
     /* Mask out undefined bits.  */
-    mask &= ~CPSR_RESERVED;
-    if (!arm_dc_feature(s, ARM_FEATURE_V4T)) {
-        mask &= ~CPSR_T;
-    }
-    if (!arm_dc_feature(s, ARM_FEATURE_V5)) {
-        mask &= ~CPSR_Q; /* V5TE in reality*/
-    }
-    if (!arm_dc_feature(s, ARM_FEATURE_V6)) {
-        mask &= ~(CPSR_E | CPSR_GE);
-    }
-    if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
-        mask &= ~CPSR_IT;
-    }
+    mask &= aarch32_cpsr_valid_mask(s->features, s->isar);
+
     /* Mask out execution state and reserved bits.  */
     if (!spsr) {
-        mask &= ~(CPSR_EXEC | CPSR_RESERVED);
+        mask &= ~CPSR_EXEC;
     }
     /* Mask out privileged bits.  */
     if (IS_USER(s))
-- 
2.20.1




* [PATCH v3 06/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (4 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:32   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 07/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return Richard Henderson
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
Replace it with aarch32_cpsr_valid_mask, which also takes
into account bits that the cpu does not support.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h       | 2 --
 target/arm/op_helper.c | 5 ++++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 08b2f5d73e..694b074298 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1209,8 +1209,6 @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_USER (CPSR_NZCV | CPSR_Q | CPSR_GE)
 /* Execution state bits.  MRS read as zero, MSR writes ignored.  */
 #define CPSR_EXEC (CPSR_T | CPSR_IT | CPSR_J | CPSR_IL)
-/* Mask of bits which may be set by exception return copying them from SPSR */
-#define CPSR_ERET_MASK (~CPSR_RESERVED)
 
 /* Bit definitions for M profile XPSR. Most are the same as CPSR. */
 #define XPSR_EXCP 0x1ffU
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index 27d16ad9ad..acf1815ea3 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -400,11 +400,14 @@ void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
 /* Write the CPSR for a 32-bit exception return */
 void HELPER(cpsr_write_eret)(CPUARMState *env, uint32_t val)
 {
+    uint32_t mask;
+
     qemu_mutex_lock_iothread();
     arm_call_pre_el_change_hook(env_archcpu(env));
     qemu_mutex_unlock_iothread();
 
-    cpsr_write(env, val, CPSR_ERET_MASK, CPSRWriteExceptionReturn);
+    mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
+    cpsr_write(env, val, mask, CPSRWriteExceptionReturn);
 
     /* Generated code has already stored the new PC value, but
      * without masking out its low bits, because which bits need
-- 
2.20.1




* [PATCH v3 07/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (5 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 06/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:33   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 08/20] target/arm: Remove CPSR_RESERVED Richard Henderson
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Using ~0 as the mask on the aarch64->aarch32 exception return
was even less correct than the CPSR_ERET_MASK that we had used
on the aarch32->aarch32 exception return.  Use
aarch32_cpsr_valid_mask here as well.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-a64.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index bf45f8a785..0c9feba392 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -959,7 +959,7 @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
 {
     int cur_el = arm_current_el(env);
     unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
-    uint32_t spsr = env->banked_spsr[spsr_idx];
+    uint32_t mask, spsr = env->banked_spsr[spsr_idx];
     int new_el;
     bool return_to_aa64 = (spsr & PSTATE_nRW) == 0;
 
@@ -1014,7 +1014,8 @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
          * will sort the register banks out for us, and we've already
          * caught all the bad-mode cases in el_from_spsr().
          */
-        cpsr_write(env, spsr, ~0, CPSRWriteRaw);
+        mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
+        cpsr_write(env, spsr, mask, CPSRWriteRaw);
         if (!arm_singlestep_active(env)) {
             env->uncached_cpsr &= ~PSTATE_SS;
         }
-- 
2.20.1




* [PATCH v3 08/20] target/arm: Remove CPSR_RESERVED
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (6 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 07/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:36   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 09/20] target/arm: Tidy msr_mask Richard Henderson
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

The only remaining use was in op_helper.c.  Use PSTATE_SS
directly, and move the commentary so that it is more obvious
what is going on.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h       | 6 ------
 target/arm/op_helper.c | 9 ++++++++-
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 694b074298..c6dff1d55b 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1186,12 +1186,6 @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_IT_2_7 (0xfc00U)
 #define CPSR_GE (0xfU << 16)
 #define CPSR_IL (1U << 20)
-/* Note that the RESERVED bits include bit 21, which is PSTATE_SS in
- * an AArch64 SPSR but RES0 in AArch32 SPSR and CPSR. In QEMU we use
- * env->uncached_cpsr bit 21 to store PSTATE.SS when executing in AArch32,
- * where it is live state but not accessible to the AArch32 code.
- */
-#define CPSR_RESERVED (0x7U << 21)
 #define CPSR_J (1U << 24)
 #define CPSR_IT_0_1 (3U << 25)
 #define CPSR_Q (1U << 27)
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index acf1815ea3..af3020b78f 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -387,7 +387,14 @@ void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)
 
 uint32_t HELPER(cpsr_read)(CPUARMState *env)
 {
-    return cpsr_read(env) & ~(CPSR_EXEC | CPSR_RESERVED);
+    /*
+     * We store the ARMv8 PSTATE.SS bit in env->uncached_cpsr.
+     * This is convenient for populating SPSR_ELx, but must be
+     * hidden from aarch32 mode, where it is not visible.
+     *
+     * TODO: ARMv8.4-DIT -- need to move SS somewhere else.
+     */
+    return cpsr_read(env) & ~(CPSR_EXEC | PSTATE_SS);
 }
 
 void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
-- 
2.20.1




* [PATCH v3 09/20] target/arm: Tidy msr_mask
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (7 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 08/20] target/arm: Remove CPSR_RESERVED Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:40   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 10/20] target/arm: Introduce aarch64_pstate_valid_mask Richard Henderson
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

The CPSR_USER mask for IS_USER already avoids all of the RES0
bits as per aarch32_cpsr_valid_mask.  Fix up the formatting.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate.c | 42 ++++++++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index 032f7074cb..2b3bfcf7ca 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -2734,28 +2734,34 @@ static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
 /* Return the mask of PSR bits set by a MSR instruction.  */
 static uint32_t msr_mask(DisasContext *s, int flags, int spsr)
 {
-    uint32_t mask;
+    uint32_t mask = 0;
 
-    mask = 0;
-    if (flags & (1 << 0))
+    if (flags & (1 << 0)) {
         mask |= 0xff;
-    if (flags & (1 << 1))
-        mask |= 0xff00;
-    if (flags & (1 << 2))
-        mask |= 0xff0000;
-    if (flags & (1 << 3))
-        mask |= 0xff000000;
-
-    /* Mask out undefined bits.  */
-    mask &= aarch32_cpsr_valid_mask(s->features, s->isar);
-
-    /* Mask out execution state and reserved bits.  */
-    if (!spsr) {
-        mask &= ~CPSR_EXEC;
     }
-    /* Mask out privileged bits.  */
-    if (IS_USER(s))
+    if (flags & (1 << 1)) {
+        mask |= 0xff00;
+    }
+    if (flags & (1 << 2)) {
+        mask |= 0xff0000;
+    }
+    if (flags & (1 << 3)) {
+        mask |= 0xff000000;
+    }
+
+    if (IS_USER(s)) {
+        /* Mask out privileged bits.  */
         mask &= CPSR_USER;
+    } else {
+        /* Mask out undefined bits.  */
+        mask &= aarch32_cpsr_valid_mask(s->features, s->isar);
+
+        /* Mask out execution state and reserved bits.  */
+        if (!spsr) {
+            mask &= ~CPSR_EXEC;
+        }
+    }
+
     return mask;
 }
 
-- 
2.20.1




* [PATCH v3 10/20] target/arm: Introduce aarch64_pstate_valid_mask
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (8 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 09/20] target/arm: Tidy msr_mask Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:43   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 11/20] target/arm: Update MSR access for PAN Richard Henderson
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Use this along the exception return path, where we previously
accepted any values.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h  | 12 ++++++++++++
 target/arm/helper-a64.c |  1 +
 2 files changed, 13 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 0569c96fd9..034d98ad53 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1085,6 +1085,18 @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
     return valid;
 }
 
+static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
+{
+    uint32_t valid;
+
+    valid = PSTATE_M | PSTATE_DAIF | PSTATE_IL | PSTATE_SS | PSTATE_NZCV;
+    if (isar_feature_aa64_bti(id)) {
+        valid |= PSTATE_BTYPE;
+    }
+
+    return valid;
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index 0c9feba392..509ae93069 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -1032,6 +1032,7 @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
                       cur_el, new_el, env->regs[15]);
     } else {
         env->aarch64 = 1;
+        spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar);
         pstate_write(env, spsr);
         if (!arm_singlestep_active(env)) {
             env->pstate &= ~PSTATE_SS;
-- 
2.20.1




* [PATCH v3 11/20] target/arm: Update MSR access for PAN
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (9 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 10/20] target/arm: Introduce aarch64_pstate_valid_mask Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:49   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 12/20] target/arm: Update arm_mmu_idx_el " Richard Henderson
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr, and writes
from el0 are ignored.

Since v8.0, the bits formerly covered by CPSR_RESERVED
have been allocated.  We are not yet implementing
ARMv8.0-SSBS or ARMv8.4-DIT, so those bits are simply
not added to the valid mask and thus remain RES0.
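
For reference, the "writes from el0 are ignored" behaviour on aarch32
falls out of the existing msr_mask() logic rather than anything added
here (fragment from patch 9; CPSR_USER is only NZCV, Q and GE):

    if (IS_USER(s)) {
        /* CPSR_PAN is not in CPSR_USER, so it drops out of the mask. */
        mask &= CPSR_USER;
    }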

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Move regdef to file scope; merge patch for CPSR_RESERVED:
    do not remove CPSR_SSBS from CPSR_RESERVED yet, mask PAN
    from CPSR if feature not enabled (pmm).
v3: Update for cpsr_valid_mask etc.
---
 target/arm/cpu.h           |  2 ++
 target/arm/internals.h     |  6 ++++++
 target/arm/helper.c        | 21 +++++++++++++++++++++
 target/arm/translate-a64.c | 14 ++++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index c6dff1d55b..65a0ef8cd6 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1186,6 +1186,7 @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_IT_2_7 (0xfc00U)
 #define CPSR_GE (0xfU << 16)
 #define CPSR_IL (1U << 20)
+#define CPSR_PAN (1U << 22)
 #define CPSR_J (1U << 24)
 #define CPSR_IT_0_1 (3U << 25)
 #define CPSR_Q (1U << 27)
@@ -1250,6 +1251,7 @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_BTYPE (3U << 10)
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
+#define PSTATE_PAN (1U << 22)
 #define PSTATE_V (1U << 28)
 #define PSTATE_C (1U << 29)
 #define PSTATE_Z (1U << 30)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 034d98ad53..f6709a2b08 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1081,6 +1081,9 @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
     if (isar_feature_jazelle(id)) {
         valid |= CPSR_J;
     }
+    if (isar_feature_aa32_pan(id)) {
+        valid |= CPSR_PAN;
+    }
 
     return valid;
 }
@@ -1093,6 +1096,9 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_bti(id)) {
         valid |= PSTATE_BTYPE;
     }
+    if (isar_feature_aa64_pan(id)) {
+        valid |= PSTATE_PAN;
+    }
 
     return valid;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 795ef727d0..90a22921dc 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4163,6 +4163,24 @@ static void aa64_daif_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->daif = value & PSTATE_DAIF;
 }
 
+static uint64_t aa64_pan_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->pstate & PSTATE_PAN;
+}
+
+static void aa64_pan_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_PAN) | (value & PSTATE_PAN);
+}
+
+static const ARMCPRegInfo pan_reginfo = {
+    .name = "PAN", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 3,
+    .type = ARM_CP_NO_RAW, .access = PL1_RW,
+    .readfn = aa64_pan_read, .writefn = aa64_pan_write
+};
+
 static CPAccessResult aa64_cacheop_access(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
@@ -7608,6 +7626,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_lor, cpu)) {
         define_arm_cp_regs(cpu, lor_reginfo);
     }
+    if (cpu_isar_feature(aa64_pan, cpu)) {
+        define_one_arm_cp_reg(cpu, &pan_reginfo);
+    }
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 49631c2340..d8ba240a15 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1602,6 +1602,20 @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
         s->base.is_jmp = DISAS_NEXT;
         break;
 
+    case 0x04: /* PAN */
+        if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
+            goto do_unallocated;
+        }
+        if (crm & 1) {
+            set_pstate_bits(PSTATE_PAN);
+        } else {
+            clear_pstate_bits(PSTATE_PAN);
+        }
+        t1 = tcg_const_i32(s->current_el);
+        gen_helper_rebuild_hflags_a64(cpu_env, t1);
+        tcg_temp_free_i32(t1);
+        break;
+
     case 0x05: /* SPSel */
         if (s->current_el == 0) {
             goto do_unallocated;
-- 
2.20.1




* [PATCH v3 12/20] target/arm: Update arm_mmu_idx_el for PAN
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (10 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 11/20] target/arm: Update MSR access for PAN Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 13/20] target/arm: Enforce PAN semantics in get_S1prot Richard Henderson
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Examine the PAN bit for EL1, EL2, and Secure EL1 to
determine if it applies.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 90a22921dc..638abe6af0 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11904,13 +11904,22 @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         return ARMMMUIdx_E10_0;
     case 1:
         if (arm_is_secure_below_el3(env)) {
+            if (env->pstate & PSTATE_PAN) {
+                return ARMMMUIdx_SE10_1_PAN;
+            }
             return ARMMMUIdx_SE10_1;
         }
+        if (env->pstate & PSTATE_PAN) {
+            return ARMMMUIdx_E10_1_PAN;
+        }
         return ARMMMUIdx_E10_1;
     case 2:
         /* TODO: ARMv8.4-SecEL2 */
         /* Note that TGE does not apply at EL2.  */
         if ((env->cp15.hcr_el2 & HCR_E2H) && arm_el_is_aa64(env, 2)) {
+            if (env->pstate & PSTATE_PAN) {
+                return ARMMMUIdx_E20_2_PAN;
+            }
             return ARMMMUIdx_E20_2;
         }
         return ARMMMUIdx_E2;
-- 
2.20.1




* [PATCH v3 13/20] target/arm: Enforce PAN semantics in get_S1prot
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (11 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 12/20] target/arm: Update arm_mmu_idx_el " Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.
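
Concretely, with the VMSAv8-64 "simple" AP[2:1] model this works out
to roughly the following (an illustrative summary, not code from the
patch):

    /* AP[2:1] = 00  EL1 RW, no EL0 access -> unaffected by PAN
     * AP[2:1] = 01  EL1 RW, EL0 RW        -> prot == 0 under a *_PAN mmu_idx
     * AP[2:1] = 10  EL1 RO, no EL0 access -> unaffected by PAN
     * AP[2:1] = 11  EL1 RO, EL0 RO        -> prot == 0 under a *_PAN mmu_idx
     */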

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h | 13 +++++++++++++
 target/arm/helper.c    |  3 +++
 2 files changed, 16 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index f6709a2b08..4a139644b5 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -893,6 +893,19 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     }
 }
 
+static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
+{
+    switch (mmu_idx) {
+    case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_E10_1_PAN:
+    case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_SE10_1_PAN:
+        return true;
+    default:
+        return false;
+    }
+}
+
 /* Return the FSR value for a debug exception (watchpoint, hardware
  * breakpoint or BKPT insn) targeting the specified exception level.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 638abe6af0..18e4cbb63c 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -9578,6 +9578,9 @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
     if (is_user) {
         prot_rw = user_rw;
     } else {
+        if (user_rw && regime_is_pan(env, mmu_idx)) {
+            return 0;
+        }
         prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
     }
 
-- 
2.20.1




* [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (12 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 13/20] target/arm: Enforce PAN semantics in get_S1prot Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 18:01   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 15/20] target/arm: Implement ATS1E1 system registers Richard Henderson
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

The PAN bit is preserved, or set as per SCTLR_ELx.SPAN,
plus several other conditions listed in the ARM ARM.
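
The rule being implemented is roughly the following (pseudo-code
sketch; target_el, hcr_e2h_tge, old_pan and sctlr stand in for the
values read in the hunks below):

    new_pan = old_pan;
    if ((target_el == 1 || (target_el == 2 && hcr_e2h_tge)) &&
        !(sctlr & SCTLR_SPAN)) {
        new_pan = 1;
    }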

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Tidy preservation of CPSR_PAN in take_aarch32_exception (pmm).
---
 target/arm/helper.c | 40 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 18e4cbb63c..4c0eb7e7d9 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -8772,8 +8772,12 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
                                    uint32_t mask, uint32_t offset,
                                    uint32_t newpc)
 {
+    int new_el;
+
     /* Change the CPU state so as to actually take the exception. */
     switch_mode(env, new_mode);
+    new_el = arm_current_el(env);
+
     /*
      * For exceptions taken to AArch32 we must clear the SS bit in both
      * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
@@ -8786,7 +8790,7 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
     env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode;
     /* Set new mode endianness */
     env->uncached_cpsr &= ~CPSR_E;
-    if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) {
+    if (env->cp15.sctlr_el[new_el] & SCTLR_EE) {
         env->uncached_cpsr |= CPSR_E;
     }
     /* J and IL must always be cleared for exception entry */
@@ -8797,6 +8801,12 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
         env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0;
         env->elr_el[2] = env->regs[15];
     } else {
+        /* CPSR.PAN is preserved unless target is EL1 and SCTLR.SPAN == 0. */
+        if (cpu_isar_feature(aa64_pan, env_archcpu(env))
+            && new_el == 1
+            && !(env->cp15.sctlr_el[1] & SCTLR_SPAN)) {
+            env->uncached_cpsr |= CPSR_PAN;
+        }
         /*
          * this is a lie, as there was no c1_sys on V4T/V5, but who cares
          * and we should just guard the thumb mode on V4
@@ -9059,6 +9069,7 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     unsigned int new_el = env->exception.target_el;
     target_ulong addr = env->cp15.vbar_el[new_el];
     unsigned int new_mode = aarch64_pstate_mode(new_el, true);
+    unsigned int old_mode;
     unsigned int cur_el = arm_current_el(env);
 
     /*
@@ -9138,20 +9149,43 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     }
 
     if (is_a64(env)) {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = pstate_read(env);
+        old_mode = pstate_read(env);
         aarch64_save_sp(env, arm_current_el(env));
         env->elr_el[new_el] = env->pc;
     } else {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = cpsr_read(env);
+        old_mode = cpsr_read(env);
         env->elr_el[new_el] = env->regs[15];
 
         aarch64_sync_32_to_64(env);
 
         env->condexec_bits = 0;
     }
+    env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode;
+
     qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n",
                   env->elr_el[new_el]);
 
+    if (cpu_isar_feature(aa64_pan, cpu)) {
+        /* The value of PSTATE.PAN is normally preserved, except when ... */
+        new_mode |= old_mode & PSTATE_PAN;
+        switch (new_el) {
+        case 2:
+            /* ... the target is EL2 with HCR_EL2.{E2H,TGE} == '11' ...  */
+            if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE))
+                != (HCR_E2H | HCR_TGE)) {
+                break;
+            }
+            /* fall through */
+        case 1:
+            /* ... the target is EL1 ... */
+            /* ... and SCTLR_ELx.SPAN == 0, then set to 1.  */
+            if ((env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0) {
+                new_mode |= PSTATE_PAN;
+            }
+            break;
+        }
+    }
+
     pstate_write(env, PSTATE_DAIF | new_mode);
     env->aarch64 = 1;
     aarch64_restore_sp(env, new_el);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread
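
A condensed restatement of the rule the two hunks above implement, written as a
hedged standalone helper (the helper name and argument list are invented for
illustration; the CPUARMState fields, bit masks and arm_hcr_el2_eff() are the
ones the patch already uses):

    /* Sketch: what PSTATE.PAN should read as immediately after exception
     * entry.  'to_el' is the EL the exception is taken to, 'old_pan' the
     * pre-exception value of the bit.
     */
    static int pan_after_entry(CPUARMState *env, int to_el, int old_pan)
    {
        bool span = env->cp15.sctlr_el[to_el] & SCTLR_SPAN;
        bool e2h_tge = (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE))
                       == (HCR_E2H | HCR_TGE);

        if (to_el == 1 || (to_el == 2 && e2h_tge)) {
            return span ? old_pan : 1;  /* SPAN == 0 forces PAN to 1 */
        }
        return old_pan;                 /* otherwise PAN is preserved */
    }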

* [PATCH v3 15/20] target/arm: Implement ATS1E1 system registers
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (13 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max Richard Henderson
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

This is a minor enhancement over ARMv8.1-PAN.
The *_PAN mmu_idx are used with the existing do_ats_write.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Move regdefs to file scope (pmm).
---
 target/arm/helper.c | 56 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 4c0eb7e7d9..e69cde801f 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3409,16 +3409,21 @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 
     switch (ri->opc2 & 6) {
     case 0:
-        /* stage 1 current state PL1: ATS1CPR, ATS1CPW */
+        /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_SE3;
             break;
         case 2:
-            mmu_idx = ARMMMUIdx_Stage1_E1;
-            break;
+            g_assert(!secure);  /* TODO: ARMv8.4-SecEL2 */
+            /* fall through */
         case 1:
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
+                mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+                           : ARMMMUIdx_Stage1_E1_PAN);
+            } else {
+                mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            }
             break;
         default:
             g_assert_not_reached();
@@ -3487,8 +3492,13 @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     switch (ri->opc2 & 6) {
     case 0:
         switch (ri->opc1) {
-        case 0: /* AT S1E1R, AT S1E1W */
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+        case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
+            if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
+                mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+                           : ARMMMUIdx_Stage1_E1_PAN);
+            } else {
+                mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            }
             break;
         case 4: /* AT S1E2R, AT S1E2W */
             mmu_idx = ARMMMUIdx_E2;
@@ -6692,6 +6702,32 @@ static const ARMCPRegInfo vhe_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+#ifndef CONFIG_USER_ONLY
+static const ARMCPRegInfo ats1e1_reginfo[] = {
+    { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write64 },
+    { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write64 },
+    REGINFO_SENTINEL
+};
+
+static const ARMCPRegInfo ats1cp_reginfo[] = {
+    { .name = "ATS1CPRP",
+      .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write },
+    { .name = "ATS1CPWP",
+      .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write },
+    REGINFO_SENTINEL
+};
+#endif
+
 void register_cp_regs_for_features(ARMCPU *cpu)
 {
     /* Register all the coprocessor registers based on feature bits */
@@ -7629,6 +7665,14 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_pan, cpu)) {
         define_one_arm_cp_reg(cpu, &pan_reginfo);
     }
+#ifndef CONFIG_USER_ONLY
+    if (cpu_isar_feature(aa64_ats1e1, cpu)) {
+        define_arm_cp_regs(cpu, ats1e1_reginfo);
+    }
+    if (cpu_isar_feature(aa32_ats1e1, cpu)) {
+        define_arm_cp_regs(cpu, ats1cp_reginfo);
+    }
+#endif
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread
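
A usage note on the AT S1E1RP/S1E1WP registers defined above: they let EL1
software ask for a stage-1 translation that honours PSTATE.PAN, which is what
the *_PAN mmu_idx selection implements.  A minimal guest-side sketch, assuming
code running at EL1, an assembler that knows the ARMv8.2-ATS1E1 names, and the
architected PAR_EL1 layout (bit 0 = F, translation aborted):

    /* Illustrative EL1 probe: would PAN block a privileged read of this
     * user page?  Not taken from the patch; guest-side code only.
     */
    static int read_blocked_by_pan(const void *user_va)
    {
        unsigned long par;

        asm volatile("at s1e1rp, %0" : : "r"(user_va));
        asm volatile("isb; mrs %0, par_el1" : "=r"(par));

        return par & 1;     /* PAR_EL1.F set => translation faulted */
    }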

* [PATCH v3 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (14 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 15/20] target/arm: Implement ATS1E1 system registers Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 17/20] target/arm: Add ID_AA64MMFR2_EL1 Richard Henderson
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

This includes enablement of ARMv8.1-PAN.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.c   | 4 ++++
 target/arm/cpu64.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index b0762a76c4..de733aceeb 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -2709,6 +2709,10 @@ static void arm_max_initfn(Object *obj)
             t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
             cpu->isar.mvfr2 = t;
 
+            t = cpu->id_mmfr3;
+            t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+            cpu->id_mmfr3 = t;
+
             t = cpu->id_mmfr4;
             t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
             cpu->id_mmfr4 = t;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index c80fb5fd43..57fbc5eade 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -673,6 +673,7 @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
         t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
         t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
+        t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
         cpu->isar.id_aa64mmfr1 = t;
 
         /* Replicate the same data to the 32-bit id registers.  */
@@ -693,6 +694,10 @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
         cpu->isar.id_isar6 = u;
 
+        u = cpu->id_mmfr3;
+        u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
+        cpu->id_mmfr3 = u;
+
         /*
          * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
          * so do not set MVFR1.FPHP.  Strictly speaking this is not legal,
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread
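
For reference, guest software discovers this the same way the patch advertises
it: ID_AA64MMFR1_EL1.PAN (bits [23:20], per the FIELD() definition already in
cpu.h) reads as 2 once ATS1E1 is present.  A small illustrative EL1 probe (the
function itself is made up):

    static int have_ats1e1(void)
    {
        unsigned long mmfr1;

        asm volatile("mrs %0, id_aa64mmfr1_el1" : "=r"(mmfr1));
        /* 0 = none, 1 = PAN, 2 = PAN + ATS1E1 (AT S1E1RP/S1E1WP) */
        return ((mmfr1 >> 20) & 0xf) >= 2;
    }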

* [PATCH v3 17/20] target/arm: Add ID_AA64MMFR2_EL1
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (15 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 18/20] target/arm: Update MSR access to UAO Richard Henderson
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Add definitions for all of the fields, up to ARMv8.5.
Convert the existing RESERVED register to a full register.
Query KVM for the value of the register for the host.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h    | 17 +++++++++++++++++
 target/arm/helper.c |  4 ++--
 target/arm/kvm64.c  |  2 ++
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 65a0ef8cd6..71879393c2 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -871,6 +871,7 @@ struct ARMCPU {
         uint64_t id_aa64pfr1;
         uint64_t id_aa64mmfr0;
         uint64_t id_aa64mmfr1;
+        uint64_t id_aa64mmfr2;
     } isar;
     uint32_t midr;
     uint32_t revidr;
@@ -1803,6 +1804,22 @@ FIELD(ID_AA64MMFR1, PAN, 20, 4)
 FIELD(ID_AA64MMFR1, SPECSEI, 24, 4)
 FIELD(ID_AA64MMFR1, XNX, 28, 4)
 
+FIELD(ID_AA64MMFR2, CNP, 0, 4)
+FIELD(ID_AA64MMFR2, UAO, 4, 4)
+FIELD(ID_AA64MMFR2, LSM, 8, 4)
+FIELD(ID_AA64MMFR2, IESB, 12, 4)
+FIELD(ID_AA64MMFR2, VARANGE, 16, 4)
+FIELD(ID_AA64MMFR2, CCIDX, 20, 4)
+FIELD(ID_AA64MMFR2, NV, 24, 4)
+FIELD(ID_AA64MMFR2, ST, 28, 4)
+FIELD(ID_AA64MMFR2, AT, 32, 4)
+FIELD(ID_AA64MMFR2, IDS, 36, 4)
+FIELD(ID_AA64MMFR2, FWB, 40, 4)
+FIELD(ID_AA64MMFR2, TTL, 48, 4)
+FIELD(ID_AA64MMFR2, BBM, 52, 4)
+FIELD(ID_AA64MMFR2, EVT, 56, 4)
+FIELD(ID_AA64MMFR2, E0PD, 60, 4)
+
 FIELD(ID_DFR0, COPDBG, 0, 4)
 FIELD(ID_DFR0, COPSDBG, 4, 4)
 FIELD(ID_DFR0, MMAPDBG, 8, 4)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index e69cde801f..a48f37dc05 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -7082,11 +7082,11 @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .access = PL1_R, .type = ARM_CP_CONST,
               .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.id_aa64mmfr1 },
-            { .name = "ID_AA64MMFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
+            { .name = "ID_AA64MMFR2_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
               .accessfn = access_aa64_tid3,
-              .resetvalue = 0 },
+              .resetvalue = cpu->isar.id_aa64mmfr2 },
             { .name = "ID_AA64MMFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index fb21ab9e73..3bae9e4a66 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -549,6 +549,8 @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
                               ARM64_SYS_REG(3, 0, 0, 7, 0));
         err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr1,
                               ARM64_SYS_REG(3, 0, 0, 7, 1));
+        err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr2,
+                              ARM64_SYS_REG(3, 0, 0, 7, 2));
 
         /*
          * Note that if AArch32 support is not present in the host,
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread
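
The field layout added here is also what a guest reads to detect the UAO
support added later in the series.  An illustrative EL1 probe, with the shift
taken from FIELD(ID_AA64MMFR2, UAO, 4, 4) above (the helper name is invented):

    static int have_uao(void)
    {
        unsigned long mmfr2;

        asm volatile("mrs %0, id_aa64mmfr2_el1" : "=r"(mmfr2));
        return ((mmfr2 >> 4) & 0xf) != 0;   /* ID_AA64MMFR2_EL1.UAO */
    }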

* [PATCH v3 18/20] target/arm: Update MSR access to UAO
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (16 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 17/20] target/arm: Add ID_AA64MMFR2_EL1 Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-07 17:52   ` Peter Maydell
  2020-02-03 14:47 ` [PATCH v3 19/20] target/arm: Implement UAO semantics Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max Richard Henderson
  19 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Move reginfo to file scope; avoid setting uao from spsr
    when the feature is not enabled (pmm).
v3: Update for aarch64_pstate_valid_mask
---
 target/arm/cpu.h           |  6 ++++++
 target/arm/internals.h     |  3 +++
 target/arm/helper.c        | 21 +++++++++++++++++++++
 target/arm/translate-a64.c | 14 ++++++++++++++
 4 files changed, 44 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 71879393c2..e943ffe8a9 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1253,6 +1253,7 @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
 #define PSTATE_PAN (1U << 22)
+#define PSTATE_UAO (1U << 23)
 #define PSTATE_V (1U << 28)
 #define PSTATE_C (1U << 29)
 #define PSTATE_Z (1U << 30)
@@ -3642,6 +3643,11 @@ static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
 }
 
+static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
+}
+
 static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 4a139644b5..58c4d707c5 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1112,6 +1112,9 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_pan(id)) {
         valid |= PSTATE_PAN;
     }
+    if (isar_feature_aa64_uao(id)) {
+        valid |= PSTATE_UAO;
+    }
 
     return valid;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index a48f37dc05..d847b0f40b 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4191,6 +4191,24 @@ static const ARMCPRegInfo pan_reginfo = {
     .readfn = aa64_pan_read, .writefn = aa64_pan_write
 };
 
+static uint64_t aa64_uao_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->pstate & PSTATE_UAO;
+}
+
+static void aa64_uao_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_UAO) | (value & PSTATE_UAO);
+}
+
+static const ARMCPRegInfo uao_reginfo = {
+    .name = "UAO", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 4,
+    .type = ARM_CP_NO_RAW, .access = PL1_RW,
+    .readfn = aa64_uao_read, .writefn = aa64_uao_write
+};
+
 static CPAccessResult aa64_cacheop_access(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
@@ -7673,6 +7691,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, ats1cp_reginfo);
     }
 #endif
+    if (cpu_isar_feature(aa64_uao, cpu)) {
+        define_one_arm_cp_reg(cpu, &uao_reginfo);
+    }
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d8ba240a15..7c26c3bfeb 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1602,6 +1602,20 @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
         s->base.is_jmp = DISAS_NEXT;
         break;
 
+    case 0x03: /* UAO */
+        if (!dc_isar_feature(aa64_uao, s) || s->current_el == 0) {
+            goto do_unallocated;
+        }
+        if (crm & 1) {
+            set_pstate_bits(PSTATE_UAO);
+        } else {
+            clear_pstate_bits(PSTATE_UAO);
+        }
+        t1 = tcg_const_i32(s->current_el);
+        gen_helper_rebuild_hflags_a64(cpu_env, t1);
+        tcg_temp_free_i32(t1);
+        break;
+
     case 0x04: /* PAN */
         if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
             goto do_unallocated;
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread
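
Usage-wise, this gives EL1 and above both the MSR (immediate) form handled in
translate-a64.c and an MRS/MSR view of PSTATE.UAO through the new regdef.  A
hedged guest-side sketch (the named forms need an assembler that knows the
ARMv8.2 UAO name; PSTATE_UAO sits in bit 23, matching the patch):

    /* Illustrative: toggle PSTATE.UAO from EL1 and read it back. */
    static int set_uao(int on)
    {
        unsigned long uao;

        if (on) {
            asm volatile("msr uao, #1" ::: "memory");
        } else {
            asm volatile("msr uao, #0" ::: "memory");
        }
        asm volatile("mrs %0, uao" : "=r"(uao));
        return (uao >> 23) & 1;
    }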

* [PATCH v3 19/20] target/arm: Implement UAO semantics
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (17 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 18/20] target/arm: Update MSR access to UAO Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  2020-02-03 14:47 ` [PATCH v3 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max Richard Henderson
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

We need only override the current condition under which
TBFLAG_A64.UNPRIV is set.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 41 +++++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index d847b0f40b..b24a6a6526 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -12194,28 +12194,29 @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     }
 
     /* Compute the condition for using AccType_UNPRIV for LDTR et al. */
-    /* TODO: ARMv8.2-UAO */
-    switch (mmu_idx) {
-    case ARMMMUIdx_E10_1:
-    case ARMMMUIdx_E10_1_PAN:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-        /* TODO: ARMv8.3-NV */
-        flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
-        break;
-    case ARMMMUIdx_E20_2:
-    case ARMMMUIdx_E20_2_PAN:
-        /* TODO: ARMv8.4-SecEL2 */
-        /*
-         * Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
-         * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
-         */
-        if (env->cp15.hcr_el2 & HCR_TGE) {
+    if (!(env->pstate & PSTATE_UAO)) {
+        switch (mmu_idx) {
+        case ARMMMUIdx_E10_1:
+        case ARMMMUIdx_E10_1_PAN:
+        case ARMMMUIdx_SE10_1:
+        case ARMMMUIdx_SE10_1_PAN:
+            /* TODO: ARMv8.3-NV */
             flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+            break;
+        case ARMMMUIdx_E20_2:
+        case ARMMMUIdx_E20_2_PAN:
+            /* TODO: ARMv8.4-SecEL2 */
+            /*
+             * Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
+             * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
+             */
+            if (env->cp15.hcr_el2 & HCR_TGE) {
+                flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+            }
+            break;
+        default:
+            break;
         }
-        break;
-    default:
-        break;
     }
 
     return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread
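
What this buys a guest kernel: with PSTATE.UAO set, the unprivileged
load/store forms (LDTR, STTR and friends) stop being checked as EL0 accesses
and behave as ordinary privileged accesses, which is the hook Linux uses for
its set_fs(KERNEL_DS)-style paths on ARMv8.2 hardware.  A hedged illustration
of the instruction involved (guest-side, not from the patch; real kernels wrap
the load in an exception-table fixup):

    /* Read one byte through an unprivileged load.  With PSTATE.UAO == 0
     * the access is checked as if made from EL0; with UAO == 1 it is
     * treated as a normal privileged load.
     */
    static int get_user_byte(const unsigned char *uaddr, unsigned char *val)
    {
        unsigned long tmp;

        asm volatile("ldtrb %w0, [%1]" : "=r"(tmp) : "r"(uaddr));
        *val = tmp;
        return 0;
    }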

* [PATCH v3 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max
  2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (18 preceding siblings ...)
  2020-02-03 14:47 ` [PATCH v3 19/20] target/arm: Implement UAO semantics Richard Henderson
@ 2020-02-03 14:47 ` Richard Henderson
  19 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-03 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu64.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 57fbc5eade..1359564c55 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -676,6 +676,10 @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
         cpu->isar.id_aa64mmfr1 = t;
 
+        t = cpu->isar.id_aa64mmfr2;
+        t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
+        cpu->isar.id_aa64mmfr2 = t;
+
         /* Replicate the same data to the 32-bit id registers.  */
         u = cpu->isar.id_isar5;
         u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 05/20] target/arm: Split out aarch32_cpsr_valid_mask
  2020-02-03 14:47 ` [PATCH v3 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
@ 2020-02-07 17:26   ` Peter Maydell
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:26 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Split this helper out of msr_mask in translate.c.  At the same time,
> transform the negative reductive logic to positive accumulative logic.
> It will be usable along the exception paths.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/internals.h | 24 ++++++++++++++++++++++++
>  target/arm/translate.c | 17 +++--------------
>  2 files changed, 27 insertions(+), 14 deletions(-)
>
> diff --git a/target/arm/internals.h b/target/arm/internals.h
> index 6be8b2d1a9..0569c96fd9 100644
> --- a/target/arm/internals.h
> +++ b/target/arm/internals.h
> @@ -1061,6 +1061,30 @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
>      }
>  }
>
> +static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
> +                                               const ARMISARegisters *id)
> +{
> +    uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV;
> +
> +    if ((features >> ARM_FEATURE_V4T) & 1) {
> +        valid |= CPSR_T;
> +    }
> +    if ((features >> ARM_FEATURE_V5) & 1) {
> +        valid |= CPSR_Q; /* V5TE in reality*/
> +    }
> +    if ((features >> ARM_FEATURE_V6) & 1) {
> +        valid |= CPSR_E | CPSR_GE;
> +    }
> +    if ((features >> ARM_FEATURE_THUMB2) & 1) {
> +        valid |= CPSR_IT;
> +    }
> +    if (isar_feature_jazelle(id)) {
> +        valid |= CPSR_J;
> +    }

This is a behaviour-change rather than just refactoring:
we used to unconditionally allow the J bit through,
and now we only do so if the isar feature bit is set.

> +    return valid;
> +}
> +
>  /*
>   * Parameters of a given virtual address, as extracted from the
>   * translation control register (TCR) for a given regime.
> diff --git a/target/arm/translate.c b/target/arm/translate.c
> index d58c328e08..032f7074cb 100644
> --- a/target/arm/translate.c
> +++ b/target/arm/translate.c
> @@ -2747,22 +2747,11 @@ static uint32_t msr_mask(DisasContext *s, int flags, int spsr)
>          mask |= 0xff000000;
>
>      /* Mask out undefined bits.  */
> -    mask &= ~CPSR_RESERVED;
> -    if (!arm_dc_feature(s, ARM_FEATURE_V4T)) {
> -        mask &= ~CPSR_T;
> -    }
> -    if (!arm_dc_feature(s, ARM_FEATURE_V5)) {
> -        mask &= ~CPSR_Q; /* V5TE in reality*/
> -    }
> -    if (!arm_dc_feature(s, ARM_FEATURE_V6)) {
> -        mask &= ~(CPSR_E | CPSR_GE);
> -    }
> -    if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
> -        mask &= ~CPSR_IT;
> -    }
> +    mask &= aarch32_cpsr_valid_mask(s->features, s->isar);
> +
>      /* Mask out execution state and reserved bits.  */

This comment no longer matches the code it's referring to.

>      if (!spsr) {
> -        mask &= ~(CPSR_EXEC | CPSR_RESERVED);
> +        mask &= ~CPSR_EXEC;
>      }
>      /* Mask out privileged bits.  */
>      if (IS_USER(s))
> --

Otherwise
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 06/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
  2020-02-03 14:47 ` [PATCH v3 06/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask Richard Henderson
@ 2020-02-07 17:32   ` Peter Maydell
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:32 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
> The function also takes into account bits that the cpu
> does not support.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/cpu.h       | 2 --
>  target/arm/op_helper.c | 5 ++++-
>  2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index 08b2f5d73e..694b074298 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -1209,8 +1209,6 @@ void pmu_init(ARMCPU *cpu);
>  #define CPSR_USER (CPSR_NZCV | CPSR_Q | CPSR_GE)
>  /* Execution state bits.  MRS read as zero, MSR writes ignored.  */
>  #define CPSR_EXEC (CPSR_T | CPSR_IT | CPSR_J | CPSR_IL)
> -/* Mask of bits which may be set by exception return copying them from SPSR */
> -#define CPSR_ERET_MASK (~CPSR_RESERVED)
>
>  /* Bit definitions for M profile XPSR. Most are the same as CPSR. */
>  #define XPSR_EXCP 0x1ffU
> diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
> index 27d16ad9ad..acf1815ea3 100644
> --- a/target/arm/op_helper.c
> +++ b/target/arm/op_helper.c
> @@ -400,11 +400,14 @@ void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
>  /* Write the CPSR for a 32-bit exception return */
>  void HELPER(cpsr_write_eret)(CPUARMState *env, uint32_t val)
>  {
> +    uint32_t mask;
> +
>      qemu_mutex_lock_iothread();
>      arm_call_pre_el_change_hook(env_archcpu(env));
>      qemu_mutex_unlock_iothread();
>
> -    cpsr_write(env, val, CPSR_ERET_MASK, CPSRWriteExceptionReturn);
> +    mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
> +    cpsr_write(env, val, mask, CPSRWriteExceptionReturn);
>
>      /* Generated code has already stored the new PC value, but
>       * without masking out its low bits, because which bits need
> --

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 07/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
  2020-02-03 14:47 ` [PATCH v3 07/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return Richard Henderson
@ 2020-02-07 17:33   ` Peter Maydell
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:33 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Using ~0 as the mask on the aarch64->aarch32 exception return
> was not even as correct as the CPSR_ERET_MASK that we had used
> on the aarch32->aarch32 exception return.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 08/20] target/arm: Remove CPSR_RESERVED
  2020-02-03 14:47 ` [PATCH v3 08/20] target/arm: Remove CPSR_RESERVED Richard Henderson
@ 2020-02-07 17:36   ` Peter Maydell
  2020-02-08  8:26     ` Richard Henderson
  0 siblings, 1 reply; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:36 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The only remaining use was in op_helper.c.  Use PSTATE_SS
> directly, and move the commentary so that it is more obvious
> what is going on.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/cpu.h       | 6 ------
>  target/arm/op_helper.c | 9 ++++++++-
>  2 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index 694b074298..c6dff1d55b 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -1186,12 +1186,6 @@ void pmu_init(ARMCPU *cpu);
>  #define CPSR_IT_2_7 (0xfc00U)
>  #define CPSR_GE (0xfU << 16)
>  #define CPSR_IL (1U << 20)
> -/* Note that the RESERVED bits include bit 21, which is PSTATE_SS in
> - * an AArch64 SPSR but RES0 in AArch32 SPSR and CPSR. In QEMU we use
> - * env->uncached_cpsr bit 21 to store PSTATE.SS when executing in AArch32,
> - * where it is live state but not accessible to the AArch32 code.
> - */
> -#define CPSR_RESERVED (0x7U << 21)
>  #define CPSR_J (1U << 24)
>  #define CPSR_IT_0_1 (3U << 25)
>  #define CPSR_Q (1U << 27)
> diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
> index acf1815ea3..af3020b78f 100644
> --- a/target/arm/op_helper.c
> +++ b/target/arm/op_helper.c
> @@ -387,7 +387,14 @@ void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)
>
>  uint32_t HELPER(cpsr_read)(CPUARMState *env)
>  {
> -    return cpsr_read(env) & ~(CPSR_EXEC | CPSR_RESERVED);
> +    /*
> +     * We store the ARMv8 PSTATE.SS bit in env->uncached_cpsr.
> +     * This is convenient for populating SPSR_ELx, but must be
> +     * hidden from aarch32 mode, where it is not visible.
> +     *
> +     * TODO: ARMv8.4-DIT -- need to move SS somewhere else.
> +     */
> +    return cpsr_read(env) & ~(CPSR_EXEC | PSTATE_SS);

So previously we were masking out [23:21], and now we only mask
out [21]. Is this OK because we've now masked everywhere that
might have been able to write non-zero to [23:22] ?

(regarding the TODO comment, I guess the obvious place would
be env->pstate.)

>  }
>
>  void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
> --
> 2.20.1

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 09/20] target/arm: Tidy msr_mask
  2020-02-03 14:47 ` [PATCH v3 09/20] target/arm: Tidy msr_mask Richard Henderson
@ 2020-02-07 17:40   ` Peter Maydell
  2020-02-08  8:29     ` Richard Henderson
  0 siblings, 1 reply; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:40 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The CPSR_USER mask for IS_USER already avoids all of the RES0
> bits as per aarch32_cpsr_valid_mask.  Fix up the formatting.

CPSR_USER includes CPSR_Q and CPSR_GE, which might be RES0
depending on feature bit settings.

Diff made a bit of a mess of this patch -- I think it would
be easier to understand if the reformatting to add {} was
separate from the code change.

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 10/20] target/arm: Introduce aarch64_pstate_valid_mask
  2020-02-03 14:47 ` [PATCH v3 10/20] target/arm: Introduce aarch64_pstate_valid_mask Richard Henderson
@ 2020-02-07 17:43   ` Peter Maydell
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:43 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Use this along the exception return path, where we previously
> accepted any values.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---


Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 11/20] target/arm: Update MSR access for PAN
  2020-02-03 14:47 ` [PATCH v3 11/20] target/arm: Update MSR access for PAN Richard Henderson
@ 2020-02-07 17:49   ` Peter Maydell
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:49 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> For aarch64, there's a dedicated msr (imm, reg) insn.
> For aarch32, this is done via msr to cpsr; and writes
> from el0 are ignored.
>
> Since v8.0, the CPSR_RESERVED bits have been allocated.
> We are not yet implementing ARMv8.0-SSBS or ARMv8.4-DIT,
> so retain CPSR_RESERVED for now, so that the bits remain RES0.

...we removed CPSR_RESERVED in patch 8...

>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v2: Move regdef to file scope; merge patch for CPSR_RESERVED:
>     do not remove CPSR_SSBS from CPSR_RESERVED yet, mask PAN
>     from CPSR if feature not enabled (pmm).
> v3: Update for cpsr_valid_mask etc.
> ---
>  target/arm/cpu.h           |  2 ++
>  target/arm/internals.h     |  6 ++++++
>  target/arm/helper.c        | 21 +++++++++++++++++++++
>  target/arm/translate-a64.c | 14 ++++++++++++++
>  4 files changed, 43 insertions(+)

Other than fixing up the commit message

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread
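
For context on the "dedicated msr (imm, reg) insn" mentioned in the quoted
commit message, the guest-side pattern built on top of it looks like the
sketch below, mirroring the UAO case later in the series.  Hedged: the helper
is invented, and the named PAN pstate field needs an ARMv8.1-aware assembler.

    /* Illustrative: the uaccess-style bracket an EL1 kernel builds on PAN. */
    static void copy_from_user_area(char *dst, const char *src, unsigned long n)
    {
        asm volatile("msr pan, #0" ::: "memory");   /* open the user window */
        while (n--) {
            *dst++ = *src++;
        }
        asm volatile("msr pan, #1" ::: "memory");   /* close it again */
    }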

* Re: [PATCH v3 18/20] target/arm: Update MSR access to UAO
  2020-02-03 14:47 ` [PATCH v3 18/20] target/arm: Update MSR access to UAO Richard Henderson
@ 2020-02-07 17:52   ` Peter Maydell
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 17:52 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v2: Move reginfo to file scope; avoid setting uao from spsr
>     when the feature is not enabled (pmm).
> v3: Update for aarch64_pstate_valid_mask
> ---
>  target/arm/cpu.h           |  6 ++++++
>  target/arm/internals.h     |  3 +++
>  target/arm/helper.c        | 21 +++++++++++++++++++++
>  target/arm/translate-a64.c | 14 ++++++++++++++
>  4 files changed, 44 insertions(+)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry
  2020-02-03 14:47 ` [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
@ 2020-02-07 18:01   ` Peter Maydell
  2020-02-08  8:45     ` Richard Henderson
  0 siblings, 1 reply; 34+ messages in thread
From: Peter Maydell @ 2020-02-07 18:01 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Mon, 3 Feb 2020 at 14:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The PAN bit is preserved, or set as per SCTLR_ELx.SPAN,
> plus several other conditions listed in the ARM ARM.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v2: Tidy preservation of CPSR_PAN in take_aarch32_exception (pmm).
> ---
>  target/arm/helper.c | 40 +++++++++++++++++++++++++++++++++++++---
>  1 file changed, 37 insertions(+), 3 deletions(-)
>
> diff --git a/target/arm/helper.c b/target/arm/helper.c
> index 18e4cbb63c..4c0eb7e7d9 100644
> --- a/target/arm/helper.c
> +++ b/target/arm/helper.c
> @@ -8772,8 +8772,12 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
>                                     uint32_t mask, uint32_t offset,
>                                     uint32_t newpc)
>  {
> +    int new_el;
> +
>      /* Change the CPU state so as to actually take the exception. */
>      switch_mode(env, new_mode);
> +    new_el = arm_current_el(env);
> +
>      /*
>       * For exceptions taken to AArch32 we must clear the SS bit in both
>       * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
> @@ -8786,7 +8790,7 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
>      env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode;
>      /* Set new mode endianness */
>      env->uncached_cpsr &= ~CPSR_E;
> -    if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) {
> +    if (env->cp15.sctlr_el[new_el] & SCTLR_EE) {
>          env->uncached_cpsr |= CPSR_E;
>      }
>      /* J and IL must always be cleared for exception entry */
> @@ -8797,6 +8801,12 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
>          env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0;
>          env->elr_el[2] = env->regs[15];
>      } else {
> +        /* CPSR.PAN is preserved unless target is EL1 and SCTLR.SPAN == 0. */
> +        if (cpu_isar_feature(aa64_pan, env_archcpu(env))
> +            && new_el == 1
> +            && !(env->cp15.sctlr_el[1] & SCTLR_SPAN)) {
> +            env->uncached_cpsr |= CPSR_PAN;
> +        }

This doesn't catch the "taking exception to EL3 and AArch32 is EL3"
case, which is also supposed to honour SCTLR.SPAN.

Given where this code is, we know we're taking an exception to
AArch32 and that we're not going to Hyp mode, so in fact every
case where we get here is one where we should honour SCTLR.SPAN
and I think we can just drop the "new_el == 1" part of the condition.

Otherwise
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 08/20] target/arm: Remove CPSR_RESERVED
  2020-02-07 17:36   ` Peter Maydell
@ 2020-02-08  8:26     ` Richard Henderson
  0 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-08  8:26 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Alex Bennée, QEMU Developers

On 2/7/20 5:36 PM, Peter Maydell wrote:
>> -    return cpsr_read(env) & ~(CPSR_EXEC | CPSR_RESERVED);
>> +    /*
>> +     * We store the ARMv8 PSTATE.SS bit in env->uncached_cpsr.
>> +     * This is convenient for populating SPSR_ELx, but must be
>> +     * hidden from aarch32 mode, where it is not visible.
>> +     *
>> +     * TODO: ARMv8.4-DIT -- need to move SS somewhere else.
>> +     */
>> +    return cpsr_read(env) & ~(CPSR_EXEC | PSTATE_SS);
> 
> So previously we were masking out [23:21], and now we only mask
> out [21]. Is this OK because we've now masked everywhere that
> might have been able to write non-zero to [23:22] ?

Yes.

On the chance that I've missed one, we'll now call anything that fails to do so
a bug there, and not here.  ;-)

> (regarding the TODO comment, I guess the obvious place would
> be env->pstate.)

That was my thought too: env->pstate & PSTATE_SS would be where we leave
that bit all of the time, even when the rest of pstate is inactive in aa32 mode.


r~


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 09/20] target/arm: Tidy msr_mask
  2020-02-07 17:40   ` Peter Maydell
@ 2020-02-08  8:29     ` Richard Henderson
  0 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-08  8:29 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Alex Bennée, QEMU Developers

On 2/7/20 5:40 PM, Peter Maydell wrote:
> On Mon, 3 Feb 2020 at 14:47, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> The CPSR_USER mask for IS_USER already avoids all of the RES0
>> bits as per aarch32_cpsr_valid_mask.  Fix up the formatting.
> 
> CPSR_USER includes CPSR_Q and CPSR_GE, which might be RES0
> depending on feature bit settings.

Oops, yes.

> Diff made a bit of a mess of this patch -- I think it would
> be easier to understand if the reformatting to add {} was
> separate from the code change.

Because of the above, I'll probably drop all of this except for the formatting fix.


r~


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry
  2020-02-07 18:01   ` Peter Maydell
@ 2020-02-08  8:45     ` Richard Henderson
  2020-02-08  9:27       ` Richard Henderson
  0 siblings, 1 reply; 34+ messages in thread
From: Richard Henderson @ 2020-02-08  8:45 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Alex Bennée, QEMU Developers

On 2/7/20 6:01 PM, Peter Maydell wrote:
>> +        /* CPSR.PAN is preserved unless target is EL1 and SCTLR.SPAN == 0. */
>> +        if (cpu_isar_feature(aa64_pan, env_archcpu(env))
>> +            && new_el == 1
>> +            && !(env->cp15.sctlr_el[1] & SCTLR_SPAN)) {
>> +            env->uncached_cpsr |= CPSR_PAN;
>> +        }
> This doesn't catch the "taking exception to EL3 and AArch32 is EL3"
> case, which is also supposed to honour SCTLR.SPAN.
> 
> Given where this code is, we know we're taking an exception to
> AArch32 and that we're not going to Hyp mode, so in fact every
> case where we get here is one where we should honour SCTLR.SPAN
> and I think we can just drop the "new_el == 1" part of the condition.

Presumably that becomes env->cp15.sctlr_el[new_el] as well, so that we get the
secure version of the sctlr.


r~


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry
  2020-02-08  8:45     ` Richard Henderson
@ 2020-02-08  9:27       ` Richard Henderson
  0 siblings, 0 replies; 34+ messages in thread
From: Richard Henderson @ 2020-02-08  9:27 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Alex Bennée, QEMU Developers

On 2/8/20 8:45 AM, Richard Henderson wrote:
> On 2/7/20 6:01 PM, Peter Maydell wrote:
>>> +        /* CPSR.PAN is preserved unless target is EL1 and SCTLR.SPAN == 0. */
>>> +        if (cpu_isar_feature(aa64_pan, env_archcpu(env))
>>> +            && new_el == 1
>>> +            && !(env->cp15.sctlr_el[1] & SCTLR_SPAN)) {
>>> +            env->uncached_cpsr |= CPSR_PAN;
>>> +        }
>> This doesn't catch the "taking exception to EL3 and AArch32 is EL3"
>> case, which is also supposed to honour SCTLR.SPAN.
>>
>> Given where this code is, we know we're taking an exception to
>> AArch32 and that we're not going to Hyp mode, so in fact every
>> case where we get here is one where we should honour SCTLR.SPAN
>> and I think we can just drop the "new_el == 1" part of the condition.
> 
> Presumably that becomes env->cp15.sctlr_el[new_el] as well, so that we get the
> secure version of the sctlr.

Actually, there's another clause that I missed before:

  # When the target of the exception is EL3, from Non-secure
  # state, this bit is set to 0 regardless
  # of the value of the Secure SCTLR.SPAN bit.

See G8.2.33.  Will fix for v4.


r~


^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2020-02-08  9:28 UTC | newest]

Thread overview: 34+ messages
2020-02-03 14:46 [PATCH v3 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
2020-02-03 14:46 ` [PATCH v3 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
2020-02-03 14:46 ` [PATCH v3 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled Richard Henderson
2020-02-03 14:46 ` [PATCH v3 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1 Richard Henderson
2020-02-03 14:47 ` [PATCH v3 04/20] target/arm: Move LOR regdefs to file scope Richard Henderson
2020-02-03 14:47 ` [PATCH v3 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
2020-02-07 17:26   ` Peter Maydell
2020-02-03 14:47 ` [PATCH v3 06/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask Richard Henderson
2020-02-07 17:32   ` Peter Maydell
2020-02-03 14:47 ` [PATCH v3 07/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return Richard Henderson
2020-02-07 17:33   ` Peter Maydell
2020-02-03 14:47 ` [PATCH v3 08/20] target/arm: Remove CPSR_RESERVED Richard Henderson
2020-02-07 17:36   ` Peter Maydell
2020-02-08  8:26     ` Richard Henderson
2020-02-03 14:47 ` [PATCH v3 09/20] target/arm: Tidy msr_mask Richard Henderson
2020-02-07 17:40   ` Peter Maydell
2020-02-08  8:29     ` Richard Henderson
2020-02-03 14:47 ` [PATCH v3 10/20] target/arm: Introduce aarch64_pstate_valid_mask Richard Henderson
2020-02-07 17:43   ` Peter Maydell
2020-02-03 14:47 ` [PATCH v3 11/20] target/arm: Update MSR access for PAN Richard Henderson
2020-02-07 17:49   ` Peter Maydell
2020-02-03 14:47 ` [PATCH v3 12/20] target/arm: Update arm_mmu_idx_el " Richard Henderson
2020-02-03 14:47 ` [PATCH v3 13/20] target/arm: Enforce PAN semantics in get_S1prot Richard Henderson
2020-02-03 14:47 ` [PATCH v3 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
2020-02-07 18:01   ` Peter Maydell
2020-02-08  8:45     ` Richard Henderson
2020-02-08  9:27       ` Richard Henderson
2020-02-03 14:47 ` [PATCH v3 15/20] target/arm: Implement ATS1E1 system registers Richard Henderson
2020-02-03 14:47 ` [PATCH v3 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max Richard Henderson
2020-02-03 14:47 ` [PATCH v3 17/20] target/arm: Add ID_AA64MMFR2_EL1 Richard Henderson
2020-02-03 14:47 ` [PATCH v3 18/20] target/arm: Update MSR access to UAO Richard Henderson
2020-02-07 17:52   ` Peter Maydell
2020-02-03 14:47 ` [PATCH v3 19/20] target/arm: Implement UAO semantics Richard Henderson
2020-02-03 14:47 ` [PATCH v3 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max Richard Henderson
