qemu-devel.nongnu.org archive mirror
* [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO
@ 2020-02-08 12:57 Richard Henderson
  2020-02-08 12:57 ` [PATCH v4 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
                   ` (20 more replies)
  0 siblings, 21 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Based-on: https://git.linaro.org/people/peter.maydell/qemu-arm.git/log/?h=target-arm.next

Version 4 incorporates the feedback on v3.  In particular:
  * Split out CPSR_J masking to its own patch.
  * Merge trivial braces formatting fixes into patch 5.
  * Drop "Tidy msr_mask" patch, leaving CPSR_USER handling alone.
  * Fixes for EL3 in "Set PAN bit as required on exception entry".

Patches without review:

  0005-target-arm-Split-out-aarch32_cpsr_valid_mask.patch
  0006-target-arm-Mask-CPSR_J-when-Jazelle-is-not-enable.patch
  0009-target-arm-Remove-CPSR_RESERVED.patch
  0014-target-arm-Set-PAN-bit-as-required-on-exception-e.patch


r~


Richard Henderson (20):
  target/arm: Add arm_mmu_idx_is_stage1_of_2
  target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
  target/arm: Add isar_feature tests for PAN + ATS1E1
  target/arm: Move LOR regdefs to file scope
  target/arm: Split out aarch32_cpsr_valid_mask
  target/arm: Mask CPSR_J when Jazelle is not enabled
  target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
  target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
  target/arm: Remove CPSR_RESERVED
  target/arm: Introduce aarch64_pstate_valid_mask
  target/arm: Update MSR access for PAN
  target/arm: Update arm_mmu_idx_el for PAN
  target/arm: Enforce PAN semantics in get_S1prot
  target/arm: Set PAN bit as required on exception entry
  target/arm: Implement ATS1E1 system registers
  target/arm: Enable ARMv8.2-ATS1E1 in -cpu max
  target/arm: Add ID_AA64MMFR2_EL1
  target/arm: Update MSR access to UAO
  target/arm: Implement UAO semantics
  target/arm: Enable ARMv8.2-UAO in -cpu max

 target/arm/cpu-param.h     |   2 +-
 target/arm/cpu.h           |  95 ++++++++---
 target/arm/internals.h     |  85 ++++++++++
 target/arm/cpu.c           |   4 +
 target/arm/cpu64.c         |   9 +
 target/arm/helper-a64.c    |   6 +-
 target/arm/helper.c        | 327 +++++++++++++++++++++++++++++--------
 target/arm/kvm64.c         |   2 +
 target/arm/op_helper.c     |  14 +-
 target/arm/translate-a64.c |  31 ++++
 target/arm/translate.c     |  42 +++--
 11 files changed, 499 insertions(+), 118 deletions(-)

-- 
2.20.1




* [PATCH v4 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
@ 2020-02-08 12:57 ` Richard Henderson
  2020-02-08 12:57 ` [PATCH v4 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled Richard Henderson
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee, Philippe Mathieu-Daudé

Use a common predicate for querying stage1-ness.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Rename from arm_mmu_idx_is_stage1 to arm_mmu_idx_is_stage1_of_2
---
 target/arm/internals.h | 18 ++++++++++++++++++
 target/arm/helper.c    |  8 +++-----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 6d4a942bde..1f8ee5f573 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1034,6 +1034,24 @@ static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
 #endif
 
+/**
+ * arm_mmu_idx_is_stage1_of_2:
+ * @mmu_idx: The ARMMMUIdx to test
+ *
+ * Return true if @mmu_idx is a NOTLB mmu_idx that is the
+ * first stage of a two stage regime.
+ */
+static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
+{
+    switch (mmu_idx) {
+    case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_Stage1_E1:
+        return true;
+    default:
+        return false;
+    }
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 7d15d5c933..57dc7a307c 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3261,8 +3261,7 @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
         bool take_exc = false;
 
         if (fi.s1ptw && current_el == 1 && !arm_is_secure(env)
-            && (mmu_idx == ARMMMUIdx_Stage1_E1 ||
-                mmu_idx == ARMMMUIdx_Stage1_E0)) {
+            && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
             /*
              * Synchronous stage 2 fault on an access made as part of the
              * translation table walk for AT S1E0* or AT S1E1* insn
@@ -9285,8 +9284,7 @@ static inline bool regime_translation_disabled(CPUARMState *env,
         }
     }
 
-    if ((env->cp15.hcr_el2 & HCR_DC) &&
-        (mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1)) {
+    if ((env->cp15.hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
         /* HCR.DC means SCTLR_EL1.M behaves as 0 */
         return true;
     }
@@ -9595,7 +9593,7 @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, MemTxAttrs txattrs,
                                ARMMMUFaultInfo *fi)
 {
-    if ((mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1) &&
+    if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
         !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
         target_ulong s2size;
         hwaddr s2pa;
-- 
2.20.1




* [PATCH v4 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
  2020-02-08 12:57 ` [PATCH v4 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
@ 2020-02-08 12:57 ` Richard Henderson
  2020-02-08 12:57 ` [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1 Richard Henderson
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

To implement PAN, we will want to swap, for short periods
of time, to a different privileged mmu_idx.  We cannot
handle this with TLB flushing alone, because the AT*
instructions have both PAN and PAN-less versions.

Add the ARMMMUIdx*_PAN constants where necessary next to
the corresponding ARMMMUIdx* constant.
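
As an illustration only (PSTATE_PAN and the actual selection logic
arrive later in this series), the new indexes are intended to be
chosen based on PSTATE.PAN roughly like this sketch:

    /* Sketch, not part of this patch: pick the PAN variant of the
     * NS EL1&0 regime when PSTATE.PAN is set.
     */
    static ARMMMUIdx sketch_el10_mmu_idx(CPUARMState *env)
    {
        if (env->pstate & PSTATE_PAN) {
            return ARMMMUIdx_E10_1_PAN;
        }
        return ARMMMUIdx_E10_1;
    }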

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu-param.h     |  2 +-
 target/arm/cpu.h           | 33 ++++++++++++++-------
 target/arm/internals.h     |  9 ++++++
 target/arm/helper.c        | 60 +++++++++++++++++++++++++++++++-------
 target/arm/translate-a64.c |  3 ++
 target/arm/translate.c     |  2 ++
 6 files changed, 87 insertions(+), 22 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 18ac562346..d593b60b28 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -29,6 +29,6 @@
 # define TARGET_PAGE_BITS_MIN  10
 #endif
 
-#define NB_MMU_MODES 9
+#define NB_MMU_MODES 12
 
 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 0b3036c484..c63bceaaa5 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2751,20 +2751,24 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *  5. we want to be able to use the TLB for accesses done as part of a
  *     stage1 page table walk, rather than having to walk the stage2 page
  *     table over and over.
+ *  6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
+ *     Never (PAN) bit within PSTATE.
  *
  * This gives us the following list of cases:
  *
  * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
  * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
+ * NS EL1 EL1&0 stage 1+2 +PAN
  * NS EL0 EL2&0
- * NS EL2 EL2&0
+ * NS EL2 EL2&0 +PAN
  * NS EL2 (aka NS PL2)
  * S EL0 EL1&0 (aka S PL0)
  * S EL1 EL1&0 (not used if EL3 is 32 bit)
+ * S EL1 EL1&0 +PAN
  * S EL3 (aka S PL1)
  * NS EL1&0 stage 2
  *
- * for a total of 9 different mmu_idx.
+ * for a total of 12 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
  * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
@@ -2819,19 +2823,22 @@ typedef enum ARMMMUIdx {
     /*
      * A-profile.
      */
-    ARMMMUIdx_E10_0 =  0 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E20_0 =  1 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_0      =  0 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_0      =  1 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_E10_1 =  2 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1      =  2 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1_PAN  =  3 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_E2 =     3 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E20_2 =  4 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E2         =  4 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2      =  5 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2_PAN  =  6 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_SE10_0 = 5 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1 = 6 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE3 =    7 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_0     = 7 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_1     = 8 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_1_PAN = 9 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE3        = 10 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_Stage2 = 8 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Stage2     = 11 | ARM_MMU_IDX_A,
 
     /*
      * These are not allocated TLBs and are used only for AT system
@@ -2839,6 +2846,7 @@ typedef enum ARMMMUIdx {
      */
     ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
 
     /*
      * M-profile.
@@ -2864,10 +2872,13 @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(E10_0),
     TO_CORE_BIT(E20_0),
     TO_CORE_BIT(E10_1),
+    TO_CORE_BIT(E10_1_PAN),
     TO_CORE_BIT(E2),
     TO_CORE_BIT(E20_2),
+    TO_CORE_BIT(E20_2_PAN),
     TO_CORE_BIT(SE10_0),
     TO_CORE_BIT(SE10_1),
+    TO_CORE_BIT(SE10_1_PAN),
     TO_CORE_BIT(SE3),
     TO_CORE_BIT(Stage2),
 
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 1f8ee5f573..6be8b2d1a9 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -843,12 +843,16 @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         return true;
     default:
         return false;
@@ -861,10 +865,13 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E2:
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_MPrivNegPri:
@@ -875,6 +882,7 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE3:
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
     case ARMMMUIdx_MSPrivNegPri:
     case ARMMMUIdx_MSUserNegPri:
     case ARMMMUIdx_MSPriv:
@@ -1046,6 +1054,7 @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
         return true;
     default:
         return false;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 57dc7a307c..bfd6c0d04b 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -671,6 +671,7 @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     tlb_flush_by_mmuidx(cs,
                         ARMMMUIdxBit_E10_1 |
+                        ARMMMUIdxBit_E10_1_PAN |
                         ARMMMUIdxBit_E10_0 |
                         ARMMMUIdxBit_Stage2);
 }
@@ -682,6 +683,7 @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     tlb_flush_by_mmuidx_all_cpus_synced(cs,
                                         ARMMMUIdxBit_E10_1 |
+                                        ARMMMUIdxBit_E10_1_PAN |
                                         ARMMMUIdxBit_E10_0 |
                                         ARMMMUIdxBit_Stage2);
 }
@@ -2700,6 +2702,7 @@ static int gt_phys_redir_timeridx(CPUARMState *env)
     switch (arm_mmu_idx(env)) {
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         return GTIMER_HYP;
     default:
         return GTIMER_PHYS;
@@ -2711,6 +2714,7 @@ static int gt_virt_redir_timeridx(CPUARMState *env)
     switch (arm_mmu_idx(env)) {
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         return GTIMER_HYPVIRT;
     default:
         return GTIMER_VIRT;
@@ -3337,7 +3341,9 @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
         format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
 
         if (arm_feature(env, ARM_FEATURE_EL2)) {
-            if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
+            if (mmu_idx == ARMMMUIdx_E10_0 ||
+                mmu_idx == ARMMMUIdx_E10_1 ||
+                mmu_idx == ARMMMUIdx_E10_1_PAN) {
                 format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
             } else {
                 format64 |= arm_current_el(env) == 2;
@@ -3797,7 +3803,9 @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
     if (extract64(raw_read(env, ri) ^ value, 48, 16) &&
         (arm_hcr_el2_eff(env) & HCR_E2H)) {
         tlb_flush_by_mmuidx(env_cpu(env),
-                            ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0);
+                            ARMMMUIdxBit_E20_2 |
+                            ARMMMUIdxBit_E20_2_PAN |
+                            ARMMMUIdxBit_E20_0);
     }
     raw_write(env, ri, value);
 }
@@ -3815,6 +3823,7 @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     if (raw_read(env, ri) != value) {
         tlb_flush_by_mmuidx(cs,
                             ARMMMUIdxBit_E10_1 |
+                            ARMMMUIdxBit_E10_1_PAN |
                             ARMMMUIdxBit_E10_0 |
                             ARMMMUIdxBit_Stage2);
         raw_write(env, ri, value);
@@ -4175,12 +4184,18 @@ static int vae1_tlbmask(CPUARMState *env)
 {
     /* Since we exclude secure first, we may read HCR_EL2 directly. */
     if (arm_is_secure_below_el3(env)) {
-        return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
+        return ARMMMUIdxBit_SE10_1 |
+               ARMMMUIdxBit_SE10_1_PAN |
+               ARMMMUIdxBit_SE10_0;
     } else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
                == (HCR_E2H | HCR_TGE)) {
-        return ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0;
+        return ARMMMUIdxBit_E20_2 |
+               ARMMMUIdxBit_E20_2_PAN |
+               ARMMMUIdxBit_E20_0;
     } else {
-        return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
+        return ARMMMUIdxBit_E10_1 |
+               ARMMMUIdxBit_E10_1_PAN |
+               ARMMMUIdxBit_E10_0;
     }
 }
 
@@ -4214,18 +4229,28 @@ static int alle1_tlbmask(CPUARMState *env)
      * stage 1 translations.
      */
     if (arm_is_secure_below_el3(env)) {
-        return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
+        return ARMMMUIdxBit_SE10_1 |
+               ARMMMUIdxBit_SE10_1_PAN |
+               ARMMMUIdxBit_SE10_0;
     } else if (arm_feature(env, ARM_FEATURE_EL2)) {
-        return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0 | ARMMMUIdxBit_Stage2;
+        return ARMMMUIdxBit_E10_1 |
+               ARMMMUIdxBit_E10_1_PAN |
+               ARMMMUIdxBit_E10_0 |
+               ARMMMUIdxBit_Stage2;
     } else {
-        return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
+        return ARMMMUIdxBit_E10_1 |
+               ARMMMUIdxBit_E10_1_PAN |
+               ARMMMUIdxBit_E10_0;
     }
 }
 
 static int e2_tlbmask(CPUARMState *env)
 {
     /* TODO: ARMv8.4-SecEL2 */
-    return ARMMMUIdxBit_E20_0 | ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E2;
+    return ARMMMUIdxBit_E20_0 |
+           ARMMMUIdxBit_E20_2 |
+           ARMMMUIdxBit_E20_2_PAN |
+           ARMMMUIdxBit_E2;
 }
 
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -9206,6 +9231,7 @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_E2:
         return 2;
@@ -9214,10 +9240,13 @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE10_0:
         return arm_el_is_aa64(env, 3) ? 1 : 3;
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_MPrivNegPri:
     case ARMMMUIdx_MUserNegPri:
     case ARMMMUIdx_MPriv:
@@ -9333,6 +9362,8 @@ static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
         return ARMMMUIdx_Stage1_E0;
     case ARMMMUIdx_E10_1:
         return ARMMMUIdx_Stage1_E1;
+    case ARMMMUIdx_E10_1_PAN:
+        return ARMMMUIdx_Stage1_E1_PAN;
     default:
         return mmu_idx;
     }
@@ -9379,6 +9410,7 @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
         return false;
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
         g_assert_not_reached();
     }
 }
@@ -11271,7 +11303,9 @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                    target_ulong *page_size,
                    ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
 {
-    if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
+    if (mmu_idx == ARMMMUIdx_E10_0 ||
+        mmu_idx == ARMMMUIdx_E10_1 ||
+        mmu_idx == ARMMMUIdx_E10_1_PAN) {
         /* Call ourselves recursively to do the stage 1 and then stage 2
          * translations.
          */
@@ -11798,10 +11832,13 @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE10_0:
         return 0;
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         return 1;
     case ARMMMUIdx_E2:
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         return 2;
     case ARMMMUIdx_SE3:
         return 3;
@@ -12018,11 +12055,14 @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     /* TODO: ARMv8.2-UAO */
     switch (mmu_idx) {
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         /* TODO: ARMv8.3-NV */
         flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
         break;
     case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
         /* TODO: ARMv8.4-SecEL2 */
         /*
          * Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 6e82486884..49631c2340 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -124,12 +124,15 @@ static int get_a64_user_mem_index(DisasContext *s)
          */
         switch (useridx) {
         case ARMMMUIdx_E10_1:
+        case ARMMMUIdx_E10_1_PAN:
             useridx = ARMMMUIdx_E10_0;
             break;
         case ARMMMUIdx_E20_2:
+        case ARMMMUIdx_E20_2_PAN:
             useridx = ARMMMUIdx_E20_0;
             break;
         case ARMMMUIdx_SE10_1:
+        case ARMMMUIdx_SE10_1_PAN:
             useridx = ARMMMUIdx_SE10_0;
             break;
         default:
diff --git a/target/arm/translate.c b/target/arm/translate.c
index e11a5871d0..d58c328e08 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -155,10 +155,12 @@ static inline int get_a32_user_mem_index(DisasContext *s)
     case ARMMMUIdx_E2:        /* this one is UNPREDICTABLE */
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
         return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
     case ARMMMUIdx_SE3:
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
         return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
     case ARMMMUIdx_MUser:
     case ARMMMUIdx_MPriv:
-- 
2.20.1




* [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
  2020-02-08 12:57 ` [PATCH v4 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
  2020-02-08 12:57 ` [PATCH v4 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled Richard Henderson
@ 2020-02-08 12:57 ` Richard Henderson
  2020-02-14 11:28   ` Peter Maydell
  2020-02-08 12:58 ` [PATCH v4 04/20] target/arm: Move LOR regdefs to file scope Richard Henderson
                   ` (17 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Include definitions for all of the bits in ID_MMFR3.
We already have a definition for ID_AA64MMFR1.PAN.
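
As a usage sketch only (this hunk is not part of this patch; it
mirrors how a later patch in this series consumes the new tests):

    /* Sketch: gate registration of the PAN pseudo-register on the
     * new aa64_pan feature test.
     */
    if (cpu_isar_feature(aa64_pan, cpu)) {
        define_one_arm_cp_reg(cpu, &pan_reginfo);
    }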

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index c63bceaaa5..08b2f5d73e 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1727,6 +1727,15 @@ FIELD(ID_ISAR6, FHM, 8, 4)
 FIELD(ID_ISAR6, SB, 12, 4)
 FIELD(ID_ISAR6, SPECRES, 16, 4)
 
+FIELD(ID_MMFR3, CMAINTVA, 0, 4)
+FIELD(ID_MMFR3, CMAINTSW, 4, 4)
+FIELD(ID_MMFR3, BPMAINT, 8, 4)
+FIELD(ID_MMFR3, MAINTBCST, 12, 4)
+FIELD(ID_MMFR3, PAN, 16, 4)
+FIELD(ID_MMFR3, COHWALK, 20, 4)
+FIELD(ID_MMFR3, CMEMSZ, 24, 4)
+FIELD(ID_MMFR3, SUPERSEC, 28, 4)
+
 FIELD(ID_MMFR4, SPECSEI, 0, 4)
 FIELD(ID_MMFR4, AC2, 4, 4)
 FIELD(ID_MMFR4, XNX, 8, 4)
@@ -3443,6 +3452,16 @@ static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
     return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 4;
 }
 
+static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) != 0;
+}
+
+static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -3602,6 +3621,16 @@ static inline bool isar_feature_aa64_lor(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, LO) != 0;
 }
 
+static inline bool isar_feature_aa64_pan(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) != 0;
+}
+
+static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
+}
+
 static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
-- 
2.20.1




* [PATCH v4 04/20] target/arm: Move LOR regdefs to file scope
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (2 preceding siblings ...)
  2020-02-08 12:57 ` [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1 Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

For static const regdefs, file scope is preferred.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 57 +++++++++++++++++++++++----------------------
 1 file changed, 29 insertions(+), 28 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index bfd6c0d04b..e4f17c7e83 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6334,6 +6334,35 @@ static CPAccessResult access_lor_other(CPUARMState *env,
     return access_lor_ns(env);
 }
 
+/*
+ * A trivial implementation of ARMv8.1-LOR leaves all of these
+ * registers fixed at 0, which indicates that there are zero
+ * supported Limited Ordering regions.
+ */
+static const ARMCPRegInfo lor_reginfo[] = {
+    { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
+      .access = PL1_RW, .accessfn = access_lor_other,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
+      .access = PL1_R, .accessfn = access_lorid,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    REGINFO_SENTINEL
+};
+
 #ifdef TARGET_AARCH64
 static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
@@ -7568,34 +7597,6 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     }
 
     if (cpu_isar_feature(aa64_lor, cpu)) {
-        /*
-         * A trivial implementation of ARMv8.1-LOR leaves all of these
-         * registers fixed at 0, which indicates that there are zero
-         * supported Limited Ordering regions.
-         */
-        static const ARMCPRegInfo lor_reginfo[] = {
-            { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
-              .access = PL1_RW, .accessfn = access_lor_other,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
-              .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
-              .access = PL1_R, .accessfn = access_lorid,
-              .type = ARM_CP_CONST, .resetvalue = 0 },
-            REGINFO_SENTINEL
-        };
         define_arm_cp_regs(cpu, lor_reginfo);
     }
 
-- 
2.20.1




* [PATCH v4 05/20] target/arm: Split out aarch32_cpsr_valid_mask
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (3 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 04/20] target/arm: Move LOR regdefs to file scope Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-11 18:06   ` Peter Maydell
  2020-02-08 12:58 ` [PATCH v4 06/20] target/arm: Mask CPSR_J when Jazelle is not enabled Richard Henderson
                   ` (15 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Split this helper out of msr_mask in translate.c.  At the same time,
transform the negative reductive logic to positive accumulative logic.
It will be usable along the exception paths.

While touching msr_mask, fix up formatting.
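
As a sketch of the intended call pattern along an exception path
(the real call sites are added by later patches in this series):

    uint32_t mask = aarch32_cpsr_valid_mask(env->features,
                                            &env_archcpu(env)->isar);
    cpsr_write(env, val, mask, CPSRWriteExceptionReturn);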

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v4: Keep CPSR_J unconditionally in this patch.
    Fix all formatting in msr_mask.
---
 target/arm/internals.h | 21 +++++++++++++++++++++
 target/arm/translate.c | 40 +++++++++++++++++-----------------------
 2 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 6be8b2d1a9..4d4896fcdc 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1061,6 +1061,27 @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
     }
 }
 
+static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
+                                               const ARMISARegisters *id)
+{
+    uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV | CPSR_J;
+
+    if ((features >> ARM_FEATURE_V4T) & 1) {
+        valid |= CPSR_T;
+    }
+    if ((features >> ARM_FEATURE_V5) & 1) {
+        valid |= CPSR_Q; /* V5TE in reality*/
+    }
+    if ((features >> ARM_FEATURE_V6) & 1) {
+        valid |= CPSR_E | CPSR_GE;
+    }
+    if ((features >> ARM_FEATURE_THUMB2) & 1) {
+        valid |= CPSR_IT;
+    }
+
+    return valid;
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/translate.c b/target/arm/translate.c
index d58c328e08..20f89ace2f 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -2734,39 +2734,33 @@ static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
 /* Return the mask of PSR bits set by a MSR instruction.  */
 static uint32_t msr_mask(DisasContext *s, int flags, int spsr)
 {
-    uint32_t mask;
+    uint32_t mask = 0;
 
-    mask = 0;
-    if (flags & (1 << 0))
+    if (flags & (1 << 0)) {
         mask |= 0xff;
-    if (flags & (1 << 1))
+    }
+    if (flags & (1 << 1)) {
         mask |= 0xff00;
-    if (flags & (1 << 2))
+    }
+    if (flags & (1 << 2)) {
         mask |= 0xff0000;
-    if (flags & (1 << 3))
+    }
+    if (flags & (1 << 3)) {
         mask |= 0xff000000;
+    }
 
-    /* Mask out undefined bits.  */
-    mask &= ~CPSR_RESERVED;
-    if (!arm_dc_feature(s, ARM_FEATURE_V4T)) {
-        mask &= ~CPSR_T;
-    }
-    if (!arm_dc_feature(s, ARM_FEATURE_V5)) {
-        mask &= ~CPSR_Q; /* V5TE in reality*/
-    }
-    if (!arm_dc_feature(s, ARM_FEATURE_V6)) {
-        mask &= ~(CPSR_E | CPSR_GE);
-    }
-    if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
-        mask &= ~CPSR_IT;
-    }
-    /* Mask out execution state and reserved bits.  */
+    /* Mask out undefined and reserved bits.  */
+    mask &= aarch32_cpsr_valid_mask(s->features, s->isar);
+
+    /* Mask out execution state.  */
     if (!spsr) {
-        mask &= ~(CPSR_EXEC | CPSR_RESERVED);
+        mask &= ~CPSR_EXEC;
     }
+
     /* Mask out privileged bits.  */
-    if (IS_USER(s))
+    if (IS_USER(s)) {
         mask &= CPSR_USER;
+    }
     return mask;
 }
 
-- 
2.20.1




* [PATCH v4 06/20] target/arm: Mask CPSR_J when Jazelle is not enabled
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (4 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-11 18:07   ` Peter Maydell
  2020-02-08 12:58 ` [PATCH v4 07/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask Richard Henderson
                   ` (14 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

The J bit signals Jazelle mode, and so of course is RES0
when the feature is not enabled.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v4: Split out from aarch32_cpsr_valid_mask creation in previous patch.
---
 target/arm/internals.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 4d4896fcdc..0569c96fd9 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1064,7 +1064,7 @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
 static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
                                                const ARMISARegisters *id)
 {
-    uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV | CPSR_J;
+    uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV;
 
     if ((features >> ARM_FEATURE_V4T) & 1) {
         valid |= CPSR_T;
@@ -1078,6 +1078,9 @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
     if ((features >> ARM_FEATURE_THUMB2) & 1) {
         valid |= CPSR_IT;
     }
+    if (isar_feature_jazelle(id)) {
+        valid |= CPSR_J;
+    }
 
     return valid;
 }
-- 
2.20.1




* [PATCH v4 07/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (5 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 06/20] target/arm: Mask CPSR_J when Jazelle is not enabled Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 08/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return Richard Henderson
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
The replacement, aarch32_cpsr_valid_mask, also takes into
account bits that the cpu does not support.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h       | 2 --
 target/arm/op_helper.c | 5 ++++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 08b2f5d73e..694b074298 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1209,8 +1209,6 @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_USER (CPSR_NZCV | CPSR_Q | CPSR_GE)
 /* Execution state bits.  MRS read as zero, MSR writes ignored.  */
 #define CPSR_EXEC (CPSR_T | CPSR_IT | CPSR_J | CPSR_IL)
-/* Mask of bits which may be set by exception return copying them from SPSR */
-#define CPSR_ERET_MASK (~CPSR_RESERVED)
 
 /* Bit definitions for M profile XPSR. Most are the same as CPSR. */
 #define XPSR_EXCP 0x1ffU
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index 27d16ad9ad..acf1815ea3 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -400,11 +400,14 @@ void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
 /* Write the CPSR for a 32-bit exception return */
 void HELPER(cpsr_write_eret)(CPUARMState *env, uint32_t val)
 {
+    uint32_t mask;
+
     qemu_mutex_lock_iothread();
     arm_call_pre_el_change_hook(env_archcpu(env));
     qemu_mutex_unlock_iothread();
 
-    cpsr_write(env, val, CPSR_ERET_MASK, CPSRWriteExceptionReturn);
+    mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
+    cpsr_write(env, val, mask, CPSRWriteExceptionReturn);
 
     /* Generated code has already stored the new PC value, but
      * without masking out its low bits, because which bits need
-- 
2.20.1




* [PATCH v4 08/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (6 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 07/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 09/20] target/arm: Remove CPSR_RESERVED Richard Henderson
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Using ~0 as the mask on the aarch64->aarch32 exception return
was not even as correct as the CPSR_ERET_MASK that we had used
on the aarch32->aarch32 exception return.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-a64.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index bf45f8a785..0c9feba392 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -959,7 +959,7 @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
 {
     int cur_el = arm_current_el(env);
     unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
-    uint32_t spsr = env->banked_spsr[spsr_idx];
+    uint32_t mask, spsr = env->banked_spsr[spsr_idx];
     int new_el;
     bool return_to_aa64 = (spsr & PSTATE_nRW) == 0;
 
@@ -1014,7 +1014,8 @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
          * will sort the register banks out for us, and we've already
          * caught all the bad-mode cases in el_from_spsr().
          */
-        cpsr_write(env, spsr, ~0, CPSRWriteRaw);
+        mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
+        cpsr_write(env, spsr, mask, CPSRWriteRaw);
         if (!arm_singlestep_active(env)) {
             env->uncached_cpsr &= ~PSTATE_SS;
         }
-- 
2.20.1




* [PATCH v4 09/20] target/arm: Remove CPSR_RESERVED
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (7 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 08/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-11 18:07   ` Peter Maydell
  2020-02-08 12:58 ` [PATCH v4 10/20] target/arm: Introduce aarch64_pstate_valid_mask Richard Henderson
                   ` (11 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

The only remaining use was in op_helper.c.  Use PSTATE_SS
directly, and move the commentary so that it is more obvious
what is going on.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h       | 6 ------
 target/arm/op_helper.c | 9 ++++++++-
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 694b074298..c6dff1d55b 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1186,12 +1186,6 @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_IT_2_7 (0xfc00U)
 #define CPSR_GE (0xfU << 16)
 #define CPSR_IL (1U << 20)
-/* Note that the RESERVED bits include bit 21, which is PSTATE_SS in
- * an AArch64 SPSR but RES0 in AArch32 SPSR and CPSR. In QEMU we use
- * env->uncached_cpsr bit 21 to store PSTATE.SS when executing in AArch32,
- * where it is live state but not accessible to the AArch32 code.
- */
-#define CPSR_RESERVED (0x7U << 21)
 #define CPSR_J (1U << 24)
 #define CPSR_IT_0_1 (3U << 25)
 #define CPSR_Q (1U << 27)
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index acf1815ea3..af3020b78f 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -387,7 +387,14 @@ void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)
 
 uint32_t HELPER(cpsr_read)(CPUARMState *env)
 {
-    return cpsr_read(env) & ~(CPSR_EXEC | CPSR_RESERVED);
+    /*
+     * We store the ARMv8 PSTATE.SS bit in env->uncached_cpsr.
+     * This is convenient for populating SPSR_ELx, but must be
+     * hidden from aarch32 mode, where it is not visible.
+     *
+     * TODO: ARMv8.4-DIT -- need to move SS somewhere else.
+     */
+    return cpsr_read(env) & ~(CPSR_EXEC | PSTATE_SS);
 }
 
 void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
-- 
2.20.1




* [PATCH v4 10/20] target/arm: Introduce aarch64_pstate_valid_mask
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (8 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 09/20] target/arm: Remove CPSR_RESERVED Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 11/20] target/arm: Update MSR access for PAN Richard Henderson
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Use this along the exception return path, where we previously
accepted any values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h  | 12 ++++++++++++
 target/arm/helper-a64.c |  1 +
 2 files changed, 13 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 0569c96fd9..034d98ad53 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1085,6 +1085,18 @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
     return valid;
 }
 
+static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
+{
+    uint32_t valid;
+
+    valid = PSTATE_M | PSTATE_DAIF | PSTATE_IL | PSTATE_SS | PSTATE_NZCV;
+    if (isar_feature_aa64_bti(id)) {
+        valid |= PSTATE_BTYPE;
+    }
+
+    return valid;
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index 0c9feba392..509ae93069 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -1032,6 +1032,7 @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
                       cur_el, new_el, env->regs[15]);
     } else {
         env->aarch64 = 1;
+        spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar);
         pstate_write(env, spsr);
         if (!arm_singlestep_active(env)) {
             env->pstate &= ~PSTATE_SS;
-- 
2.20.1




* [PATCH v4 11/20] target/arm: Update MSR access for PAN
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (9 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 10/20] target/arm: Introduce aarch64_pstate_valid_mask Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 12/20] target/arm: Update arm_mmu_idx_el " Richard Henderson
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr.  Writes from el0
are ignored, which is already handled by the CPSR_USER mask.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Move regdef to file scope; merge patch for CPSR_RESERVED:
    do not remove CPSR_SSBS from CPSR_RESERVED yet, mask PAN
    from CPSR if feature not enabled (pmm).
v3: Update for cpsr_valid_mask etc.
---
 target/arm/cpu.h           |  2 ++
 target/arm/internals.h     |  6 ++++++
 target/arm/helper.c        | 21 +++++++++++++++++++++
 target/arm/translate-a64.c | 14 ++++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index c6dff1d55b..65a0ef8cd6 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1186,6 +1186,7 @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_IT_2_7 (0xfc00U)
 #define CPSR_GE (0xfU << 16)
 #define CPSR_IL (1U << 20)
+#define CPSR_PAN (1U << 22)
 #define CPSR_J (1U << 24)
 #define CPSR_IT_0_1 (3U << 25)
 #define CPSR_Q (1U << 27)
@@ -1250,6 +1251,7 @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_BTYPE (3U << 10)
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
+#define PSTATE_PAN (1U << 22)
 #define PSTATE_V (1U << 28)
 #define PSTATE_C (1U << 29)
 #define PSTATE_Z (1U << 30)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 034d98ad53..f6709a2b08 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1081,6 +1081,9 @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
     if (isar_feature_jazelle(id)) {
         valid |= CPSR_J;
     }
+    if (isar_feature_aa32_pan(id)) {
+        valid |= CPSR_PAN;
+    }
 
     return valid;
 }
@@ -1093,6 +1096,9 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_bti(id)) {
         valid |= PSTATE_BTYPE;
     }
+    if (isar_feature_aa64_pan(id)) {
+        valid |= PSTATE_PAN;
+    }
 
     return valid;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index e4f17c7e83..058fb23959 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4163,6 +4163,24 @@ static void aa64_daif_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->daif = value & PSTATE_DAIF;
 }
 
+static uint64_t aa64_pan_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->pstate & PSTATE_PAN;
+}
+
+static void aa64_pan_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_PAN) | (value & PSTATE_PAN);
+}
+
+static const ARMCPRegInfo pan_reginfo = {
+    .name = "PAN", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 3,
+    .type = ARM_CP_NO_RAW, .access = PL1_RW,
+    .readfn = aa64_pan_read, .writefn = aa64_pan_write
+};
+
 static CPAccessResult aa64_cacheop_access(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
@@ -7599,6 +7617,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_lor, cpu)) {
         define_arm_cp_regs(cpu, lor_reginfo);
     }
+    if (cpu_isar_feature(aa64_pan, cpu)) {
+        define_one_arm_cp_reg(cpu, &pan_reginfo);
+    }
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 49631c2340..d8ba240a15 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1602,6 +1602,20 @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
         s->base.is_jmp = DISAS_NEXT;
         break;
 
+    case 0x04: /* PAN */
+        if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
+            goto do_unallocated;
+        }
+        if (crm & 1) {
+            set_pstate_bits(PSTATE_PAN);
+        } else {
+            clear_pstate_bits(PSTATE_PAN);
+        }
+        t1 = tcg_const_i32(s->current_el);
+        gen_helper_rebuild_hflags_a64(cpu_env, t1);
+        tcg_temp_free_i32(t1);
+        break;
+
     case 0x05: /* SPSel */
         if (s->current_el == 0) {
             goto do_unallocated;
-- 
2.20.1




* [PATCH v4 12/20] target/arm: Update arm_mmu_idx_el for PAN
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (10 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 11/20] target/arm: Update MSR access for PAN Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 13/20] target/arm: Enforce PAN semantics in get_S1prot Richard Henderson
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Examine the PAN bit for EL1, EL2, and Secure EL1 to
determine if it applies.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 058fb23959..f6a600aa00 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11895,13 +11895,22 @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         return ARMMMUIdx_E10_0;
     case 1:
         if (arm_is_secure_below_el3(env)) {
+            if (env->pstate & PSTATE_PAN) {
+                return ARMMMUIdx_SE10_1_PAN;
+            }
             return ARMMMUIdx_SE10_1;
         }
+        if (env->pstate & PSTATE_PAN) {
+            return ARMMMUIdx_E10_1_PAN;
+        }
         return ARMMMUIdx_E10_1;
     case 2:
         /* TODO: ARMv8.4-SecEL2 */
         /* Note that TGE does not apply at EL2.  */
         if ((env->cp15.hcr_el2 & HCR_E2H) && arm_el_is_aa64(env, 2)) {
+            if (env->pstate & PSTATE_PAN) {
+                return ARMMMUIdx_E20_2_PAN;
+            }
             return ARMMMUIdx_E20_2;
         }
         return ARMMMUIdx_E2;
-- 
2.20.1




* [PATCH v4 13/20] target/arm: Enforce PAN semantics in get_S1prot
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (11 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 12/20] target/arm: Update arm_mmu_idx_el " Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.
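
For reference, the resulting logic in get_S1prot, reassembled from
the hunk below: a privileged access under a *_PAN mmu_idx gets no
permissions at all for any page that EL0 could also access, so it
takes a permission fault, while pages without EL0 access behave as
before.

    if (is_user) {
        prot_rw = user_rw;
    } else {
        if (user_rw && regime_is_pan(env, mmu_idx)) {
            return 0;
        }
        prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
    }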

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h | 13 +++++++++++++
 target/arm/helper.c    |  3 +++
 2 files changed, 16 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index f6709a2b08..4a139644b5 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -893,6 +893,19 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     }
 }
 
+static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
+{
+    switch (mmu_idx) {
+    case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_E10_1_PAN:
+    case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_SE10_1_PAN:
+        return true;
+    default:
+        return false;
+    }
+}
+
 /* Return the FSR value for a debug exception (watchpoint, hardware
  * breakpoint or BKPT insn) targeting the specified exception level.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index f6a600aa00..178757d271 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -9569,6 +9569,9 @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
     if (is_user) {
         prot_rw = user_rw;
     } else {
+        if (user_rw && regime_is_pan(env, mmu_idx)) {
+            return 0;
+        }
         prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
     }
 
-- 
2.20.1




* [PATCH v4 14/20] target/arm: Set PAN bit as required on exception entry
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (12 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 13/20] target/arm: Enforce PAN semantics in get_S1prot Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-11 18:09   ` Peter Maydell
  2020-02-08 12:58 ` [PATCH v4 15/20] target/arm: Implement ATS1E1 system registers Richard Henderson
                   ` (6 subsequent siblings)
  20 siblings, 1 reply; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

On exception entry the PAN bit is either preserved or set
according to SCTLR_ELx.SPAN, subject to several other
conditions listed in the ARM ARM.
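
As a sketch only (not part of the patch), the AArch64 entry rule
implemented below amounts to: PSTATE.PAN is preserved, and is
additionally forced to 1 when this hypothetical predicate holds:

    static bool sketch_force_pan_on_entry(CPUARMState *env, int new_el)
    {
        /* Target EL2 only counts with HCR_EL2.{E2H,TGE} == '11'. */
        if (new_el == 2 &&
            (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE))
            != (HCR_E2H | HCR_TGE)) {
            return false;
        }
        /* Otherwise the target must be EL1 or EL2 ... */
        if (new_el != 1 && new_el != 2) {
            return false;
        }
        /* ... with the target's SCTLR_ELx.SPAN clear. */
        return (env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0;
    }

The AArch32 path differs slightly: entry to EL3 from non-secure
state clears CPSR.PAN instead.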

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Tidy preservation of CPSR_PAN in take_aarch32_exception (pmm).
v4: Fix exception entry to EL3.
---
 target/arm/helper.c | 53 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 50 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 178757d271..de16ce79ad 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -8763,8 +8763,12 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
                                    uint32_t mask, uint32_t offset,
                                    uint32_t newpc)
 {
+    int new_el;
+
     /* Change the CPU state so as to actually take the exception. */
     switch_mode(env, new_mode);
+    new_el = arm_current_el(env);
+
     /*
      * For exceptions taken to AArch32 we must clear the SS bit in both
      * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
@@ -8777,7 +8781,7 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
     env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode;
     /* Set new mode endianness */
     env->uncached_cpsr &= ~CPSR_E;
-    if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) {
+    if (env->cp15.sctlr_el[new_el] & SCTLR_EE) {
         env->uncached_cpsr |= CPSR_E;
     }
     /* J and IL must always be cleared for exception entry */
@@ -8788,6 +8792,25 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
         env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0;
         env->elr_el[2] = env->regs[15];
     } else {
+        /* CPSR.PAN is normally preserved unless...  */
+        if (cpu_isar_feature(aa64_pan, env_archcpu(env))) {
+            switch (new_el) {
+            case 3:
+                if (!arm_is_secure_below_el3(env)) {
+                    /* ... the target is EL3, from non-secure state.  */
+                    env->uncached_cpsr &= ~CPSR_PAN;
+                    break;
+                }
+                /* ... the target is EL3, from secure state ... */
+                /* fall through */
+            case 1:
+                /* ... the target is EL1 and SCTLR.SPAN is 0.  */
+                if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPAN)) {
+                    env->uncached_cpsr |= CPSR_PAN;
+                }
+                break;
+            }
+        }
         /*
          * this is a lie, as there was no c1_sys on V4T/V5, but who cares
          * and we should just guard the thumb mode on V4
@@ -9050,6 +9073,7 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     unsigned int new_el = env->exception.target_el;
     target_ulong addr = env->cp15.vbar_el[new_el];
     unsigned int new_mode = aarch64_pstate_mode(new_el, true);
+    unsigned int old_mode;
     unsigned int cur_el = arm_current_el(env);
 
     /*
@@ -9129,20 +9153,43 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     }
 
     if (is_a64(env)) {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = pstate_read(env);
+        old_mode = pstate_read(env);
         aarch64_save_sp(env, arm_current_el(env));
         env->elr_el[new_el] = env->pc;
     } else {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = cpsr_read(env);
+        old_mode = cpsr_read(env);
         env->elr_el[new_el] = env->regs[15];
 
         aarch64_sync_32_to_64(env);
 
         env->condexec_bits = 0;
     }
+    env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode;
+
     qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n",
                   env->elr_el[new_el]);
 
+    if (cpu_isar_feature(aa64_pan, cpu)) {
+        /* The value of PSTATE.PAN is normally preserved, except when ... */
+        new_mode |= old_mode & PSTATE_PAN;
+        switch (new_el) {
+        case 2:
+            /* ... the target is EL2 with HCR_EL2.{E2H,TGE} == '11' ...  */
+            if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE))
+                != (HCR_E2H | HCR_TGE)) {
+                break;
+            }
+            /* fall through */
+        case 1:
+            /* ... the target is EL1 ... */
+            /* ... and SCTLR_ELx.SPAN == 0, then set to 1.  */
+            if ((env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0) {
+                new_mode |= PSTATE_PAN;
+            }
+            break;
+        }
+    }
+
     pstate_write(env, PSTATE_DAIF | new_mode);
     env->aarch64 = 1;
     aarch64_restore_sp(env, new_el);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 15/20] target/arm: Implement ATS1E1 system registers
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (13 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max Richard Henderson
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

This is a minor enhancement over ARMv8.1-PAN.
The *_PAN mmu_idx values are used with the existing do_ats_write().
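
For context, a guest exercises these operations with the AT instruction and
reads the translation result back from PAR_EL1.  A minimal EL1 usage sketch
(assuming an ARMv8.2-aware toolchain; not part of this patch):

    #include <stdint.h>

    /* Probe 'va' with the PAN-checked stage 1 lookup; must run at EL1. */
    static uint64_t probe_s1e1rp(uint64_t va)
    {
        uint64_t par;

        asm volatile("at s1e1rp, %1\n\t"
                     "isb\n\t"
                     "mrs %0, par_el1"
                     : "=r"(par) : "r"(va) : "memory");
        return par;    /* PAR_EL1.F indicates whether the access would fault */
    }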

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Move regdefs to file scope (pmm).
---
 target/arm/helper.c | 56 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index de16ce79ad..d99661d4ea 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3409,16 +3409,21 @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 
     switch (ri->opc2 & 6) {
     case 0:
-        /* stage 1 current state PL1: ATS1CPR, ATS1CPW */
+        /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_SE3;
             break;
         case 2:
-            mmu_idx = ARMMMUIdx_Stage1_E1;
-            break;
+            g_assert(!secure);  /* TODO: ARMv8.4-SecEL2 */
+            /* fall through */
         case 1:
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
+                mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+                           : ARMMMUIdx_Stage1_E1_PAN);
+            } else {
+                mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            }
             break;
         default:
             g_assert_not_reached();
@@ -3487,8 +3492,13 @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     switch (ri->opc2 & 6) {
     case 0:
         switch (ri->opc1) {
-        case 0: /* AT S1E1R, AT S1E1W */
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+        case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
+            if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
+                mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+                           : ARMMMUIdx_Stage1_E1_PAN);
+            } else {
+                mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            }
             break;
         case 4: /* AT S1E2R, AT S1E2W */
             mmu_idx = ARMMMUIdx_E2;
@@ -6683,6 +6693,32 @@ static const ARMCPRegInfo vhe_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+#ifndef CONFIG_USER_ONLY
+static const ARMCPRegInfo ats1e1_reginfo[] = {
+    { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write64 },
+    { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write64 },
+    REGINFO_SENTINEL
+};
+
+static const ARMCPRegInfo ats1cp_reginfo[] = {
+    { .name = "ATS1CPRP",
+      .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write },
+    { .name = "ATS1CPWP",
+      .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write },
+    REGINFO_SENTINEL
+};
+#endif
+
 void register_cp_regs_for_features(ARMCPU *cpu)
 {
     /* Register all the coprocessor registers based on feature bits */
@@ -7620,6 +7656,14 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_pan, cpu)) {
         define_one_arm_cp_reg(cpu, &pan_reginfo);
     }
+#ifndef CONFIG_USER_ONLY
+    if (cpu_isar_feature(aa64_ats1e1, cpu)) {
+        define_arm_cp_regs(cpu, ats1e1_reginfo);
+    }
+    if (cpu_isar_feature(aa32_ats1e1, cpu)) {
+        define_arm_cp_regs(cpu, ats1cp_reginfo);
+    }
+#endif
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (14 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 15/20] target/arm: Implement ATS1E1 system registers Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 17/20] target/arm: Add ID_AA64MMFR2_EL1 Richard Henderson
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Setting the ID register PAN field to 2 enables ARMv8.2-ATS1E1,
which in turn includes enablement of ARMv8.1-PAN.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.c   | 4 ++++
 target/arm/cpu64.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index b0762a76c4..de733aceeb 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -2709,6 +2709,10 @@ static void arm_max_initfn(Object *obj)
             t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
             cpu->isar.mvfr2 = t;
 
+            t = cpu->id_mmfr3;
+            t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+            cpu->id_mmfr3 = t;
+
             t = cpu->id_mmfr4;
             t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
             cpu->id_mmfr4 = t;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index c80fb5fd43..57fbc5eade 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -673,6 +673,7 @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
         t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
         t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
+        t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
         cpu->isar.id_aa64mmfr1 = t;
 
         /* Replicate the same data to the 32-bit id registers.  */
@@ -693,6 +694,10 @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
         cpu->isar.id_isar6 = u;
 
+        u = cpu->id_mmfr3;
+        u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
+        cpu->id_mmfr3 = u;
+
         /*
          * FIXME: We do not support ARMv8.2-fp16 for AArch32 yet,
          * so do not set MVFR1.FPHP.  Strictly speaking this is not legal,
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 17/20] target/arm: Add ID_AA64MMFR2_EL1
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (15 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 18/20] target/arm: Update MSR access to UAO Richard Henderson
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Add definitions for all of the fields, up to ARMv8.5.
Convert the existing RESERVED register to a full register.
Query KVM for the value of the register for the host.
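
For readers unfamiliar with the FIELD()/FIELD_EX64() helpers used with these
definitions, they reduce to a shift-and-mask extraction.  A plain-C model
(not the actual QEMU macros) of reading, say, the UAO field at bits [7:4]:

    #include <stdint.h>

    static inline uint64_t extract_field(uint64_t reg,
                                         unsigned shift, unsigned len)
    {
        return (reg >> shift) & ((1ull << len) - 1);
    }

    static inline int has_uao(uint64_t id_aa64mmfr2)
    {
        /* Roughly FIELD_EX64(id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0. */
        return extract_field(id_aa64mmfr2, 4, 4) != 0;
    }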

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h    | 17 +++++++++++++++++
 target/arm/helper.c |  4 ++--
 target/arm/kvm64.c  |  2 ++
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 65a0ef8cd6..71879393c2 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -871,6 +871,7 @@ struct ARMCPU {
         uint64_t id_aa64pfr1;
         uint64_t id_aa64mmfr0;
         uint64_t id_aa64mmfr1;
+        uint64_t id_aa64mmfr2;
     } isar;
     uint32_t midr;
     uint32_t revidr;
@@ -1803,6 +1804,22 @@ FIELD(ID_AA64MMFR1, PAN, 20, 4)
 FIELD(ID_AA64MMFR1, SPECSEI, 24, 4)
 FIELD(ID_AA64MMFR1, XNX, 28, 4)
 
+FIELD(ID_AA64MMFR2, CNP, 0, 4)
+FIELD(ID_AA64MMFR2, UAO, 4, 4)
+FIELD(ID_AA64MMFR2, LSM, 8, 4)
+FIELD(ID_AA64MMFR2, IESB, 12, 4)
+FIELD(ID_AA64MMFR2, VARANGE, 16, 4)
+FIELD(ID_AA64MMFR2, CCIDX, 20, 4)
+FIELD(ID_AA64MMFR2, NV, 24, 4)
+FIELD(ID_AA64MMFR2, ST, 28, 4)
+FIELD(ID_AA64MMFR2, AT, 32, 4)
+FIELD(ID_AA64MMFR2, IDS, 36, 4)
+FIELD(ID_AA64MMFR2, FWB, 40, 4)
+FIELD(ID_AA64MMFR2, TTL, 48, 4)
+FIELD(ID_AA64MMFR2, BBM, 52, 4)
+FIELD(ID_AA64MMFR2, EVT, 56, 4)
+FIELD(ID_AA64MMFR2, E0PD, 60, 4)
+
 FIELD(ID_DFR0, COPDBG, 0, 4)
 FIELD(ID_DFR0, COPSDBG, 4, 4)
 FIELD(ID_DFR0, MMAPDBG, 8, 4)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index d99661d4ea..d29722d8ac 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -7073,11 +7073,11 @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .access = PL1_R, .type = ARM_CP_CONST,
               .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.id_aa64mmfr1 },
-            { .name = "ID_AA64MMFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
+            { .name = "ID_AA64MMFR2_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
               .accessfn = access_aa64_tid3,
-              .resetvalue = 0 },
+              .resetvalue = cpu->isar.id_aa64mmfr2 },
             { .name = "ID_AA64MMFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index fb21ab9e73..3bae9e4a66 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -549,6 +549,8 @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
                               ARM64_SYS_REG(3, 0, 0, 7, 0));
         err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr1,
                               ARM64_SYS_REG(3, 0, 0, 7, 1));
+        err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr2,
+                              ARM64_SYS_REG(3, 0, 0, 7, 2));
 
         /*
          * Note that if AArch32 support is not present in the host,
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 18/20] target/arm: Update MSR access to UAO
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (16 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 17/20] target/arm: Add ID_AA64MMFR2_EL1 Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 19/20] target/arm: Implement UAO semantics Richard Henderson
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Move reginfo to file scope; avoid setting uao from spsr
    when the feature is not enabled (pmm).
v3: Update for aarch64_pstate_valid_mask
---
 target/arm/cpu.h           |  6 ++++++
 target/arm/internals.h     |  3 +++
 target/arm/helper.c        | 21 +++++++++++++++++++++
 target/arm/translate-a64.c | 14 ++++++++++++++
 4 files changed, 44 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 71879393c2..e943ffe8a9 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1253,6 +1253,7 @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
 #define PSTATE_PAN (1U << 22)
+#define PSTATE_UAO (1U << 23)
 #define PSTATE_V (1U << 28)
 #define PSTATE_C (1U << 29)
 #define PSTATE_Z (1U << 30)
@@ -3642,6 +3643,11 @@ static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
 }
 
+static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
+}
+
 static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 4a139644b5..58c4d707c5 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1112,6 +1112,9 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_pan(id)) {
         valid |= PSTATE_PAN;
     }
+    if (isar_feature_aa64_uao(id)) {
+        valid |= PSTATE_UAO;
+    }
 
     return valid;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index d29722d8ac..11a5f0be52 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4191,6 +4191,24 @@ static const ARMCPRegInfo pan_reginfo = {
     .readfn = aa64_pan_read, .writefn = aa64_pan_write
 };
 
+static uint64_t aa64_uao_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->pstate & PSTATE_UAO;
+}
+
+static void aa64_uao_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_UAO) | (value & PSTATE_UAO);
+}
+
+static const ARMCPRegInfo uao_reginfo = {
+    .name = "UAO", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 4,
+    .type = ARM_CP_NO_RAW, .access = PL1_RW,
+    .readfn = aa64_uao_read, .writefn = aa64_uao_write
+};
+
 static CPAccessResult aa64_cacheop_access(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
@@ -7664,6 +7682,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, ats1cp_reginfo);
     }
 #endif
+    if (cpu_isar_feature(aa64_uao, cpu)) {
+        define_one_arm_cp_reg(cpu, &uao_reginfo);
+    }
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d8ba240a15..7c26c3bfeb 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1602,6 +1602,20 @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
         s->base.is_jmp = DISAS_NEXT;
         break;
 
+    case 0x03: /* UAO */
+        if (!dc_isar_feature(aa64_uao, s) || s->current_el == 0) {
+            goto do_unallocated;
+        }
+        if (crm & 1) {
+            set_pstate_bits(PSTATE_UAO);
+        } else {
+            clear_pstate_bits(PSTATE_UAO);
+        }
+        t1 = tcg_const_i32(s->current_el);
+        gen_helper_rebuild_hflags_a64(cpu_env, t1);
+        tcg_temp_free_i32(t1);
+        break;
+
     case 0x04: /* PAN */
         if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
             goto do_unallocated;
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 19/20] target/arm: Implement UAO semantics
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (17 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 18/20] target/arm: Update MSR access to UAO Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-08 12:58 ` [PATCH v4 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max Richard Henderson
  2020-02-11 18:41 ` [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Peter Maydell
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

We need only override the current condition under which
TBFLAG_A64.UNPRIV is set.
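
In other words, when PSTATE.UAO is set the LDTR/STTR family behaves like
plain LDR/STR, so the UNPRIV hflag must not be set.  Conceptually (a sketch
only; regime_may_be_unpriv() is a hypothetical stand-in for the mmu_idx
switch in the patch):

    if (!(env->pstate & PSTATE_UAO) && regime_may_be_unpriv(env, mmu_idx)) {
        flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
    }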

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 41 +++++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 11a5f0be52..366dbcf460 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -12198,28 +12198,29 @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     }
 
     /* Compute the condition for using AccType_UNPRIV for LDTR et al. */
-    /* TODO: ARMv8.2-UAO */
-    switch (mmu_idx) {
-    case ARMMMUIdx_E10_1:
-    case ARMMMUIdx_E10_1_PAN:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-        /* TODO: ARMv8.3-NV */
-        flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
-        break;
-    case ARMMMUIdx_E20_2:
-    case ARMMMUIdx_E20_2_PAN:
-        /* TODO: ARMv8.4-SecEL2 */
-        /*
-         * Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
-         * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
-         */
-        if (env->cp15.hcr_el2 & HCR_TGE) {
+    if (!(env->pstate & PSTATE_UAO)) {
+        switch (mmu_idx) {
+        case ARMMMUIdx_E10_1:
+        case ARMMMUIdx_E10_1_PAN:
+        case ARMMMUIdx_SE10_1:
+        case ARMMMUIdx_SE10_1_PAN:
+            /* TODO: ARMv8.3-NV */
             flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+            break;
+        case ARMMMUIdx_E20_2:
+        case ARMMMUIdx_E20_2_PAN:
+            /* TODO: ARMv8.4-SecEL2 */
+            /*
+             * Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
+             * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
+             */
+            if (env->cp15.hcr_el2 & HCR_TGE) {
+                flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+            }
+            break;
+        default:
+            break;
         }
-        break;
-    default:
-        break;
     }
 
     return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (18 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 19/20] target/arm: Implement UAO semantics Richard Henderson
@ 2020-02-08 12:58 ` Richard Henderson
  2020-02-11 18:41 ` [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Peter Maydell
  20 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-08 12:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alex.bennee

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu64.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 57fbc5eade..1359564c55 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -676,6 +676,10 @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
         cpu->isar.id_aa64mmfr1 = t;
 
+        t = cpu->isar.id_aa64mmfr2;
+        t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
+        cpu->isar.id_aa64mmfr2 = t;
+
         /* Replicate the same data to the 32-bit id registers.  */
         u = cpu->isar.id_isar5;
         u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 05/20] target/arm: Split out aarch32_cpsr_valid_mask
  2020-02-08 12:58 ` [PATCH v4 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
@ 2020-02-11 18:06   ` Peter Maydell
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2020-02-11 18:06 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Sat, 8 Feb 2020 at 12:58, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Split this helper out of msr_mask in translate.c.  At the same time,
> transform the negative reductive logic to positive accumulative logic.
> It will be usable along the exception paths.
>
> While touching msr_mask, fix up formatting.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v4: Keep CPSR_J unconditionally in this patch.
>     Fix all formatting in msr_mask.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/20] target/arm: Mask CPSR_J when Jazelle is not enabled
  2020-02-08 12:58 ` [PATCH v4 06/20] target/arm: Mask CPSR_J when Jazelle is not enabled Richard Henderson
@ 2020-02-11 18:07   ` Peter Maydell
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2020-02-11 18:07 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Sat, 8 Feb 2020 at 12:58, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The J bit signals Jazelle mode, and so of course is RES0
> when the feature is not enabled.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v4: Split out from aarch32_cpsr_valid_mask creation in previous patch.
> ---
>  target/arm/internals.h | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 09/20] target/arm: Remove CPSR_RESERVED
  2020-02-08 12:58 ` [PATCH v4 09/20] target/arm: Remove CPSR_RESERVED Richard Henderson
@ 2020-02-11 18:07   ` Peter Maydell
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2020-02-11 18:07 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Sat, 8 Feb 2020 at 12:58, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The only remaining use was in op_helper.c.  Use PSTATE_SS
> directly, and move the commentary so that it is more obvious
> what is going on.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/cpu.h       | 6 ------
>  target/arm/op_helper.c | 9 ++++++++-
>  2 files changed, 8 insertions(+), 7 deletions(-)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 14/20] target/arm: Set PAN bit as required on exception entry
  2020-02-08 12:58 ` [PATCH v4 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
@ 2020-02-11 18:09   ` Peter Maydell
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2020-02-11 18:09 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Sat, 8 Feb 2020 at 12:58, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The PAN bit is preserved, or set as per SCTLR_ELx.SPAN,
> plus several other conditions listed in the ARM ARM.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v2: Tidy preservation of CPSR_PAN in take_aarch32_exception (pmm).
> v4: Fix exception entry to EL3.
> ---
>  target/arm/helper.c | 53 ++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 50 insertions(+), 3 deletions(-)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO
  2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
                   ` (19 preceding siblings ...)
  2020-02-08 12:58 ` [PATCH v4 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max Richard Henderson
@ 2020-02-11 18:41 ` Peter Maydell
  20 siblings, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2020-02-11 18:41 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Sat, 8 Feb 2020 at 12:58, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Based-on: https://git.linaro.org/people/peter.maydell/qemu-arm.git/log/?h=target-arm.next
>
> Version 4 incorporates the feedback on v3.  In particular:
>   * Split out CPSR_J masking to its own patch.
>   * Merge trivial braces formatting fixes into patch 5.
>   * Drop "Tidy msr_mask" patch, leaving CPSR_USER handling alone.
>   * Fixes for EL3 in "Set PAN bit as required on exception entry".
>
> Patches without review:
>
>   0005-target-arm-Split-out-aarch32_cpsr_valid_mask.patch
>   0006-target-arm-Mask-CPSR_J-when-Jazelle-is-not-enable.patch
>   0009-target-arm-Remove-CPSR_RESERVED.patch
>   0014-target-arm-Set-PAN-bit-as-required-on-exception-e.patch



Applied to target-arm.next, thanks.

-- PMM


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1
  2020-02-08 12:57 ` [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1 Richard Henderson
@ 2020-02-14 11:28   ` Peter Maydell
  2020-02-14 16:20     ` Peter Maydell
  2020-02-14 18:19     ` Richard Henderson
  0 siblings, 2 replies; 29+ messages in thread
From: Peter Maydell @ 2020-02-14 11:28 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Sat, 8 Feb 2020 at 12:58, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Include definitions for all of the bits in ID_MMFR3.
> We already have a definition for ID_AA64MMFR1.PAN.
>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


> @@ -3443,6 +3452,16 @@ static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
>      return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 4;
>  }
>
> +static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
> +{
> +    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) != 0;
> +}
> +
> +static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
> +{
> +    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
> +}

Didn't spot this before it hit master, but these feature
test functions are looking at id->mvfr0, which is MVFR0, not
MMFR3 !

Also they're using FIELD_EX64 on a 32-bit register: is there
a reason for that?

thanks
-- PMM


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1
  2020-02-14 11:28   ` Peter Maydell
@ 2020-02-14 16:20     ` Peter Maydell
  2020-02-14 18:19     ` Richard Henderson
  1 sibling, 0 replies; 29+ messages in thread
From: Peter Maydell @ 2020-02-14 16:20 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Alex Bennée, QEMU Developers

On Fri, 14 Feb 2020 at 11:28, Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Sat, 8 Feb 2020 at 12:58, Richard Henderson
> <richard.henderson@linaro.org> wrote:
> >
> > Include definitions for all of the bits in ID_MMFR3.
> > We already have a definition for ID_AA64MMFR1.PAN.
> >
> > Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> > Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
> > Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>
>
> > @@ -3443,6 +3452,16 @@ static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
> >      return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 4;
> >  }
> >
> > +static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
> > +{
> > +    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) != 0;
> > +}
> > +
> > +static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
> > +{
> > +    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
> > +}
>
> Didn't spot this before it hit master, but these feature
> test functions are looking at id->mvfr0, which is MVFR0, not
> MMFR3 !
>
> Also they're using FIELD_EX64 on a 32-bit register: is there
> a reason for that?

I've been fiddling with the ID register stuff anyway,
so I've written a patch that addresses these things.
Due out in v2 of my PMU patchset.

thanks
-- PMM


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1
  2020-02-14 11:28   ` Peter Maydell
  2020-02-14 16:20     ` Peter Maydell
@ 2020-02-14 18:19     ` Richard Henderson
  1 sibling, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2020-02-14 18:19 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Alex Bennée, QEMU Developers

On 2/14/20 3:28 AM, Peter Maydell wrote:
>> +static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
>> +{
>> +    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) != 0;
>> +}
>> +
>> +static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
>> +{
>> +    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
>> +}
> 
> Didn't spot this before it hit master, but these feature
> test functions are looking at id->mvfr0, which is MVFR0, not
> MMFR3 !
> 
> Also they're using FIELD_EX64 on a 32-bit register: is there
> a reason for that?

Nope, both mistakes.  Will fix, if you haven't done so already.
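
For reference, the fix presumably reads ID_MMFR3 with the 32-bit accessor,
along these lines (a sketch only, assuming id_mmfr3 is made available in
ARMISARegisters; not the committed patch):

    static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) != 0;
    }

    static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) >= 2;
    }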


r~


^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2020-02-14 18:25 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-08 12:57 [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Richard Henderson
2020-02-08 12:57 ` [PATCH v4 01/20] target/arm: Add arm_mmu_idx_is_stage1_of_2 Richard Henderson
2020-02-08 12:57 ` [PATCH v4 02/20] target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled Richard Henderson
2020-02-08 12:57 ` [PATCH v4 03/20] target/arm: Add isar_feature tests for PAN + ATS1E1 Richard Henderson
2020-02-14 11:28   ` Peter Maydell
2020-02-14 16:20     ` Peter Maydell
2020-02-14 18:19     ` Richard Henderson
2020-02-08 12:58 ` [PATCH v4 04/20] target/arm: Move LOR regdefs to file scope Richard Henderson
2020-02-08 12:58 ` [PATCH v4 05/20] target/arm: Split out aarch32_cpsr_valid_mask Richard Henderson
2020-02-11 18:06   ` Peter Maydell
2020-02-08 12:58 ` [PATCH v4 06/20] target/arm: Mask CPSR_J when Jazelle is not enabled Richard Henderson
2020-02-11 18:07   ` Peter Maydell
2020-02-08 12:58 ` [PATCH v4 07/20] target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask Richard Henderson
2020-02-08 12:58 ` [PATCH v4 08/20] target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return Richard Henderson
2020-02-08 12:58 ` [PATCH v4 09/20] target/arm: Remove CPSR_RESERVED Richard Henderson
2020-02-11 18:07   ` Peter Maydell
2020-02-08 12:58 ` [PATCH v4 10/20] target/arm: Introduce aarch64_pstate_valid_mask Richard Henderson
2020-02-08 12:58 ` [PATCH v4 11/20] target/arm: Update MSR access for PAN Richard Henderson
2020-02-08 12:58 ` [PATCH v4 12/20] target/arm: Update arm_mmu_idx_el " Richard Henderson
2020-02-08 12:58 ` [PATCH v4 13/20] target/arm: Enforce PAN semantics in get_S1prot Richard Henderson
2020-02-08 12:58 ` [PATCH v4 14/20] target/arm: Set PAN bit as required on exception entry Richard Henderson
2020-02-11 18:09   ` Peter Maydell
2020-02-08 12:58 ` [PATCH v4 15/20] target/arm: Implement ATS1E1 system registers Richard Henderson
2020-02-08 12:58 ` [PATCH v4 16/20] target/arm: Enable ARMv8.2-ATS1E1 in -cpu max Richard Henderson
2020-02-08 12:58 ` [PATCH v4 17/20] target/arm: Add ID_AA64MMFR2_EL1 Richard Henderson
2020-02-08 12:58 ` [PATCH v4 18/20] target/arm: Update MSR access to UAO Richard Henderson
2020-02-08 12:58 ` [PATCH v4 19/20] target/arm: Implement UAO semantics Richard Henderson
2020-02-08 12:58 ` [PATCH v4 20/20] target/arm: Enable ARMv8.2-UAO in -cpu max Richard Henderson
2020-02-11 18:41 ` [PATCH v4 00/20] target/arm: Implement PAN, ATS1E1, UAO Peter Maydell
