From: "Alex Bennée" <alex.bennee@linaro.org>
To: Richard Henderson <richard.henderson@linaro.org>
Cc: peter.maydell@linaro.org, qemu-devel@nongnu.org
Subject: Re: [PATCH v4 18/40] target/arm: Reorganize ARMMMUIdx
Date: Wed, 04 Dec 2019 13:44:20 +0000
Message-ID: <87pnh48atn.fsf@linaro.org>
In-Reply-To: <20191203022937.1474-19-richard.henderson@linaro.org>


Richard Henderson <richard.henderson@linaro.org> writes:

> Prepare for, but do not yet implement, the EL2&0 regime.
> This involves adding the new MMUIdx enumerators and adjusting
> some of the MMUIdx related predicates to match.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  target/arm/cpu-param.h |   2 +-
>  target/arm/cpu.h       | 128 ++++++++++++++++++-----------------------
>  target/arm/internals.h |  37 +++++++++++-
>  target/arm/helper.c    |  66 ++++++++++++++++++---
>  target/arm/translate.c |   1 -
>  5 files changed, 150 insertions(+), 84 deletions(-)
>
> diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
> index 6e6948e960..18ac562346 100644
> --- a/target/arm/cpu-param.h
> +++ b/target/arm/cpu-param.h
> @@ -29,6 +29,6 @@
>  # define TARGET_PAGE_BITS_MIN  10
>  #endif
>  
> -#define NB_MMU_MODES 8
> +#define NB_MMU_MODES 9
>  
>  #endif
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index 015301e93a..bf8eb57e3a 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -2778,7 +2778,9 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
>   *  + NonSecure EL1 & 0 stage 1
>   *  + NonSecure EL1 & 0 stage 2
>   *  + NonSecure EL2
> - *  + Secure EL1 & EL0
> + *  + NonSecure EL2 & 0   (ARMv8.1-VHE)
> + *  + Secure EL0
> + *  + Secure EL1
>   *  + Secure EL3
>   * If EL3 is 32-bit:
>   *  + NonSecure PL1 & 0 stage 1
> @@ -2788,8 +2790,9 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
>   * (reminder: for 32 bit EL3, Secure PL1 is *EL3*, not EL1.)
>   *
>   * For QEMU, an mmu_idx is not quite the same as a translation regime because:
> - *  1. we need to split the "EL1 & 0" regimes into two mmu_idxes, because they
> - *     may differ in access permissions even if the VA->PA map is the same
> + *  1. we need to split the "EL1 & 0" and "EL2 & 0" regimes into two mmu_idxes,
> + *     because they may differ in access permissions even if the VA->PA map is
> + *     the same
>   *  2. we want to cache in our TLB the full VA->IPA->PA lookup for a stage 1+2
>   *     translation, which means that we have one mmu_idx that deals with two
>   *     concatenated translation regimes [this sort of combined s1+2 TLB is
> @@ -2801,19 +2804,23 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
>   *  4. we can also safely fold together the "32 bit EL3" and "64 bit EL3"
>   *     translation regimes, because they map reasonably well to each other
>   *     and they can't both be active at the same time.
> - * This gives us the following list of mmu_idx values:
> + *  5. we want to be able to use the TLB for accesses done as part of a
> + *     stage1 page table walk, rather than having to walk the stage2 page
> + *     table over and over.
>   *
> - * NS EL0 (aka NS PL0) stage 1+2
> - * NS EL1 (aka NS PL1) stage 1+2
> + * This gives us the following list of cases:
> + *
> + * NS EL0 (aka NS PL0) EL1&0 stage 1+2
> + * NS EL1 (aka NS PL1) EL1&0 stage 1+2
> + * NS EL0 EL2&0
> + * NS EL2 EL2&0
>   * NS EL2 (aka NS PL2)
> - * S EL3 (aka S PL1)
>   * S EL0 (aka S PL0)
>   * S EL1 (not used if EL3 is 32 bit)
> - * NS EL0+1 stage 2
> + * S EL3 (aka S PL1)
> + * NS EL0&1 stage 2
>   *
> - * (The last of these is an mmu_idx because we want to be able to use the TLB
> - * for the accesses done as part of a stage 1 page table walk, rather than
> - * having to walk the stage 2 page table over and over.)
> + * for a total of 9 different mmu_idx.
>   *
>   * R profile CPUs have an MPU, but can use the same set of MMU indexes
>   * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
> @@ -2851,26 +2858,47 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
>   * For M profile we arrange them to have a bit for priv, a bit for negpri
>   * and a bit for secure.
>   */
> -#define ARM_MMU_IDX_A 0x10 /* A profile */
> -#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
> -#define ARM_MMU_IDX_M 0x40 /* M profile */
> +#define ARM_MMU_IDX_A     0x10  /* A profile */
> +#define ARM_MMU_IDX_NOTLB 0x20  /* does not have a TLB */
> +#define ARM_MMU_IDX_M     0x40  /* M profile */
>  
> -/* meanings of the bits for M profile mmu idx values */
> -#define ARM_MMU_IDX_M_PRIV 0x1
> +/* Meanings of the bits for M profile mmu idx values */
> +#define ARM_MMU_IDX_M_PRIV   0x1
>  #define ARM_MMU_IDX_M_NEGPRI 0x2
> -#define ARM_MMU_IDX_M_S 0x4
> +#define ARM_MMU_IDX_M_S      0x4  /* Secure */
>  
> -#define ARM_MMU_IDX_TYPE_MASK (~0x7)
> -#define ARM_MMU_IDX_COREIDX_MASK 0x7
> +#define ARM_MMU_IDX_TYPE_MASK \
> +    (ARM_MMU_IDX_A | ARM_MMU_IDX_M | ARM_MMU_IDX_NOTLB)
> +#define ARM_MMU_IDX_COREIDX_MASK 0xf
>  
>  typedef enum ARMMMUIdx {
> +    /*
> +     * A-profile.
> +     */
>      ARMMMUIdx_EL10_0 = 0 | ARM_MMU_IDX_A,
> -    ARMMMUIdx_EL10_1 = 1 | ARM_MMU_IDX_A,
> -    ARMMMUIdx_E2 = 2 | ARM_MMU_IDX_A,
> -    ARMMMUIdx_SE3 = 3 | ARM_MMU_IDX_A,
> -    ARMMMUIdx_SE0 = 4 | ARM_MMU_IDX_A,
> -    ARMMMUIdx_SE1 = 5 | ARM_MMU_IDX_A,
> -    ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
> +    ARMMMUIdx_EL20_0 = 1 | ARM_MMU_IDX_A,
> +
> +    ARMMMUIdx_EL10_1 = 2 | ARM_MMU_IDX_A,
> +
> +    ARMMMUIdx_E2 =     3 | ARM_MMU_IDX_A,
> +    ARMMMUIdx_EL20_2 = 4 | ARM_MMU_IDX_A,
> +
> +    ARMMMUIdx_SE0 =    5 | ARM_MMU_IDX_A,
> +    ARMMMUIdx_SE1 =    6 | ARM_MMU_IDX_A,
> +    ARMMMUIdx_SE3 =    7 | ARM_MMU_IDX_A,
> +
> +    ARMMMUIdx_Stage2 = 8 | ARM_MMU_IDX_A,
> +
> +    /*
> +     * These are not allocated TLBs and are used only for AT system
> +     * instructions or for the first stage of an S12 page table walk.
> +     */
> +    ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
> +    ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
> +
> +    /*
> +     * M-profile.
> +     */
>      ARMMMUIdx_MUser = ARM_MMU_IDX_M,
>      ARMMMUIdx_MPriv = ARM_MMU_IDX_M | ARM_MMU_IDX_M_PRIV,
>      ARMMMUIdx_MUserNegPri = ARMMMUIdx_MUser | ARM_MMU_IDX_M_NEGPRI,
> @@ -2879,11 +2907,6 @@ typedef enum ARMMMUIdx {
>      ARMMMUIdx_MSPriv = ARMMMUIdx_MPriv | ARM_MMU_IDX_M_S,
>      ARMMMUIdx_MSUserNegPri = ARMMMUIdx_MUserNegPri | ARM_MMU_IDX_M_S,
>      ARMMMUIdx_MSPrivNegPri = ARMMMUIdx_MPrivNegPri | ARM_MMU_IDX_M_S,
> -    /* Indexes below here don't have TLBs and are used only for AT system
> -     * instructions or for the first stage of an S12 page table walk.
> -     */
> -    ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
> -    ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
>  } ARMMMUIdx;
>  
>  /*
> @@ -2895,8 +2918,10 @@ typedef enum ARMMMUIdx {
>  
>  typedef enum ARMMMUIdxBit {
>      TO_CORE_BIT(EL10_0),
> +    TO_CORE_BIT(EL20_0),
>      TO_CORE_BIT(EL10_1),
>      TO_CORE_BIT(E2),
> +    TO_CORE_BIT(EL20_2),
>      TO_CORE_BIT(SE0),
>      TO_CORE_BIT(SE1),
>      TO_CORE_BIT(SE3),
> @@ -2916,49 +2941,6 @@ typedef enum ARMMMUIdxBit {
>  
>  #define MMU_USER_IDX 0
>  
> -static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
> -{
> -    return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
> -}
> -
> -static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
> -{
> -    if (arm_feature(env, ARM_FEATURE_M)) {
> -        return mmu_idx | ARM_MMU_IDX_M;
> -    } else {
> -        return mmu_idx | ARM_MMU_IDX_A;
> -    }
> -}
> -
> -/* Return the exception level we're running at if this is our mmu_idx */
> -static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
> -{
> -    switch (mmu_idx & ARM_MMU_IDX_TYPE_MASK) {
> -    case ARM_MMU_IDX_A:
> -        return mmu_idx & 3;
> -    case ARM_MMU_IDX_M:
> -        return mmu_idx & ARM_MMU_IDX_M_PRIV;
> -    default:
> -        g_assert_not_reached();
> -    }
> -}
> -
> -/*
> - * Return the MMU index for a v7M CPU with all relevant information
> - * manually specified.
> - */
> -ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
> -                              bool secstate, bool priv, bool negpri);
> -
> -/* Return the MMU index for a v7M CPU in the specified security and
> - * privilege state.
> - */
> -ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
> -                                                bool secstate, bool priv);
> -
> -/* Return the MMU index for a v7M CPU in the specified security state */
> -ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
> -
>  /**
>   * cpu_mmu_index:
>   * @env: The cpu environment
> diff --git a/target/arm/internals.h b/target/arm/internals.h
> index aee54dc105..d73615064c 100644
> --- a/target/arm/internals.h
> +++ b/target/arm/internals.h
> @@ -769,6 +769,39 @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>                        MMUAccessType access_type, int mmu_idx,
>                        bool probe, uintptr_t retaddr);
>  
> +static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
> +{
> +    return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
> +}
> +
> +static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
> +{
> +    if (arm_feature(env, ARM_FEATURE_M)) {
> +        return mmu_idx | ARM_MMU_IDX_M;
> +    } else {
> +        return mmu_idx | ARM_MMU_IDX_A;
> +    }
> +}
> +
> +int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);
> +
> +/*
> + * Return the MMU index for a v7M CPU with all relevant information
> + * manually specified.
> + */
> +ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
> +                              bool secstate, bool priv, bool negpri);
> +
> +/*
> + * Return the MMU index for a v7M CPU in the specified security and
> + * privilege state.
> + */
> +ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
> +                                                bool secstate, bool priv);
> +
> +/* Return the MMU index for a v7M CPU in the specified security state */
> +ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
> +
>  /* Return true if the stage 1 translation regime is using LPAE format page
>   * tables */
>  bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx);
> @@ -810,6 +843,8 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
>      switch (mmu_idx) {
>      case ARMMMUIdx_EL10_0:
>      case ARMMMUIdx_EL10_1:
> +    case ARMMMUIdx_EL20_0:
> +    case ARMMMUIdx_EL20_2:
>      case ARMMMUIdx_Stage1_E0:
>      case ARMMMUIdx_Stage1_E1:
>      case ARMMMUIdx_E2:
> @@ -819,9 +854,9 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
>      case ARMMMUIdx_MPriv:
>      case ARMMMUIdx_MUser:
>          return false;
> -    case ARMMMUIdx_SE3:
>      case ARMMMUIdx_SE0:
>      case ARMMMUIdx_SE1:
> +    case ARMMMUIdx_SE3:
>      case ARMMMUIdx_MSPrivNegPri:
>      case ARMMMUIdx_MSUserNegPri:
>      case ARMMMUIdx_MSPriv:
> diff --git a/target/arm/helper.c b/target/arm/helper.c
> index ec5c7fa325..f86285ffbe 100644
> --- a/target/arm/helper.c
> +++ b/target/arm/helper.c
> @@ -8561,9 +8561,11 @@ void arm_cpu_do_interrupt(CPUState *cs)
>  #endif /* !CONFIG_USER_ONLY */
>  
>  /* Return the exception level which controls this address translation regime */
> -static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
> +static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
>  {
>      switch (mmu_idx) {
> +    case ARMMMUIdx_EL20_0:
> +    case ARMMMUIdx_EL20_2:
>      case ARMMMUIdx_Stage2:
>      case ARMMMUIdx_E2:
>          return 2;
> @@ -8574,6 +8576,8 @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
>      case ARMMMUIdx_SE1:
>      case ARMMMUIdx_Stage1_E0:
>      case ARMMMUIdx_Stage1_E1:
> +    case ARMMMUIdx_EL10_0:
> +    case ARMMMUIdx_EL10_1:
>      case ARMMMUIdx_MPrivNegPri:
>      case ARMMMUIdx_MUserNegPri:
>      case ARMMMUIdx_MPriv:
> @@ -8675,10 +8679,14 @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
>   */
>  static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
>  {
> -    if (mmu_idx == ARMMMUIdx_EL10_0 || mmu_idx == ARMMMUIdx_EL10_1) {
> -        mmu_idx += (ARMMMUIdx_Stage1_E0 - ARMMMUIdx_EL10_0);
> +    switch (mmu_idx) {
> +    case ARMMMUIdx_EL10_0:
> +        return ARMMMUIdx_Stage1_E0;
> +    case ARMMMUIdx_EL10_1:
> +        return ARMMMUIdx_Stage1_E1;
> +    default:
> +        return mmu_idx;
>      }
> -    return mmu_idx;
>  }
>  
>  /* Return true if the translation regime is using LPAE format page tables */
> @@ -8711,6 +8719,7 @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
>  {
>      switch (mmu_idx) {
>      case ARMMMUIdx_SE0:
> +    case ARMMMUIdx_EL20_0:
>      case ARMMMUIdx_Stage1_E0:
>      case ARMMMUIdx_MUser:
>      case ARMMMUIdx_MSUser:
> @@ -11136,6 +11145,31 @@ int fp_exception_el(CPUARMState *env, int cur_el)
>      return 0;
>  }
>  
> +/* Return the exception level we're running at if this is our mmu_idx */
> +int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
> +{
> +    if (mmu_idx & ARM_MMU_IDX_M) {
> +        return mmu_idx & ARM_MMU_IDX_M_PRIV;
> +    }
> +
> +    switch (mmu_idx) {
> +    case ARMMMUIdx_EL10_0:
> +    case ARMMMUIdx_EL20_0:
> +    case ARMMMUIdx_SE0:
> +        return 0;
> +    case ARMMMUIdx_EL10_1:
> +    case ARMMMUIdx_SE1:
> +        return 1;
> +    case ARMMMUIdx_E2:
> +    case ARMMMUIdx_EL20_2:
> +        return 2;
> +    case ARMMMUIdx_SE3:
> +        return 3;
> +    default:
> +        g_assert_not_reached();
> +    }
> +}
> +
>  #ifndef CONFIG_TCG
>  ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
>  {
> @@ -11149,10 +11183,26 @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
>          return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);
>      }
>  
> -    if (el < 2 && arm_is_secure_below_el3(env)) {
> -        return ARMMMUIdx_SE0 + el;
> -    } else {
> -        return ARMMMUIdx_EL10_0 + el;
> +    switch (el) {
> +    case 0:
> +        /* TODO: ARMv8.1-VHE */
> +        if (arm_is_secure_below_el3(env)) {
> +            return ARMMMUIdx_SE0;
> +        }
> +        return ARMMMUIdx_EL10_0;
> +    case 1:
> +        if (arm_is_secure_below_el3(env)) {
> +            return ARMMMUIdx_SE1;
> +        }
> +        return ARMMMUIdx_EL10_1;
> +    case 2:
> +        /* TODO: ARMv8.1-VHE */
> +        /* TODO: ARMv8.4-SecEL2 */
> +        return ARMMMUIdx_E2;
> +    case 3:
> +        return ARMMMUIdx_SE3;
> +    default:
> +        g_assert_not_reached();
>      }
>  }
>  
> diff --git a/target/arm/translate.c b/target/arm/translate.c
> index cd757165e1..b7f726e733 100644
> --- a/target/arm/translate.c
> +++ b/target/arm/translate.c
> @@ -172,7 +172,6 @@ static inline int get_a32_user_mem_index(DisasContext *s)
>      case ARMMMUIdx_MSUserNegPri:
>      case ARMMMUIdx_MSPrivNegPri:
>          return arm_to_core_mmu_idx(ARMMMUIdx_MSUserNegPri);
> -    case ARMMMUIdx_Stage2:
>      default:
>          g_assert_not_reached();
>      }
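
One closing note, mostly thinking out loud: the hunk that widens
ARM_MMU_IDX_COREIDX_MASK from 0x7 to 0xf is what keeps ARMMMUIdx_Stage2
(now 8 | ARM_MMU_IDX_A) from aliasing onto EL10_0's softmmu slot, and it
is also why NB_MMU_MODES bumps from 8 to 9. A rough standalone sketch I
used to sanity-check the encoding; the constants are copied from the
hunks above rather than the real headers, so treat it as illustrative
only:

  #include <assert.h>
  #include <stdio.h>

  /* Values copied from the patch above, illustrative only. */
  #define ARM_MMU_IDX_A            0x10
  #define ARM_MMU_IDX_COREIDX_MASK 0xf    /* was 0x7 before this patch */

  enum {
      EL10_0 = 0 | ARM_MMU_IDX_A,
      EL20_0 = 1 | ARM_MMU_IDX_A,
      EL10_1 = 2 | ARM_MMU_IDX_A,
      E2     = 3 | ARM_MMU_IDX_A,
      EL20_2 = 4 | ARM_MMU_IDX_A,
      SE0    = 5 | ARM_MMU_IDX_A,
      SE1    = 6 | ARM_MMU_IDX_A,
      SE3    = 7 | ARM_MMU_IDX_A,
      Stage2 = 8 | ARM_MMU_IDX_A,
  };

  int main(void)
  {
      /* The core (softmmu) index is the low nibble: nine values, 0..8,
       * hence NB_MMU_MODES going from 8 to 9. */
      assert((Stage2 & ARM_MMU_IDX_COREIDX_MASK) == 8);

      /* With the old 3-bit mask, Stage2 would have collided with EL10_0. */
      assert((Stage2 & 0x7) == (EL10_0 & 0x7));

      printf("Stage2 core index: %d\n", Stage2 & ARM_MMU_IDX_COREIDX_MASK);
      return 0;
  }

Builds cleanly and the asserts hold, FWIW.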


-- 
Alex Bennée


Thread overview: 98+ messages
2019-12-03  2:28 [PATCH v4 00/40] target/arm: Implement ARMv8.1-VHE Richard Henderson
2019-12-03  2:28 ` [PATCH v4 01/40] target/arm: Define isar_feature_aa64_vh Richard Henderson
2019-12-03  2:28 ` [PATCH v4 02/40] target/arm: Enable HCR_E2H for VHE Richard Henderson
2019-12-03  2:29 ` [PATCH v4 03/40] target/arm: Add CONTEXTIDR_EL2 Richard Henderson
2019-12-03  2:29 ` [PATCH v4 04/40] target/arm: Add TTBR1_EL2 Richard Henderson
2019-12-10  9:14   ` Laurent Desnogues
2019-12-03  2:29 ` [PATCH v4 05/40] target/arm: Update CNTVCT_EL0 for VHE Richard Henderson
2019-12-03  2:29 ` [PATCH v4 06/40] target/arm: Split out vae1_tlbmask, vmalle1_tlbmask Richard Henderson
2019-12-03  6:25   ` Philippe Mathieu-Daudé
2019-12-03 22:01     ` Richard Henderson
2019-12-03  2:29 ` [PATCH v4 07/40] target/arm: Simplify tlb_force_broadcast alternatives Richard Henderson
2019-12-03  2:29 ` [PATCH v4 08/40] target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_* Richard Henderson
2019-12-04 10:38   ` Alex Bennée
2019-12-06 15:45   ` Peter Maydell
2019-12-06 18:00     ` Richard Henderson
2019-12-06 18:01       ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 09/40] target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2 Richard Henderson
2019-12-04 10:40   ` Alex Bennée
2019-12-06 15:46   ` Peter Maydell
2019-12-06 18:05     ` Richard Henderson
2019-12-03  2:29 ` [PATCH v4 10/40] target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E* Richard Henderson
2019-12-04 11:00   ` Alex Bennée
2019-12-06 15:47   ` Peter Maydell
2019-12-06 18:20     ` Richard Henderson
2019-12-03  2:29 ` [PATCH v4 11/40] target/arm: Rename ARMMMUIdx_S1SE* to ARMMMUIdx_SE* Richard Henderson
2019-12-04 11:01   ` Alex Bennée
2019-12-06 15:47   ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 12/40] target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3 Richard Henderson
2019-12-04 11:02   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 13/40] target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2 Richard Henderson
2019-12-04 11:03   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 14/40] target/arm: Recover 4 bits from TBFLAGs Richard Henderson
2019-12-04 11:43   ` Alex Bennée
2019-12-04 14:27     ` Richard Henderson
2019-12-04 15:53       ` Alex Bennée
2019-12-04 16:19         ` Richard Henderson
2019-12-03  2:29 ` [PATCH v4 15/40] target/arm: Expand TBFLAG_ANY.MMUIDX to 4 bits Richard Henderson
2019-12-04 11:48   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 16/40] target/arm: Rearrange ARMMMUIdxBit Richard Henderson
2019-12-04 11:56   ` Alex Bennée
2019-12-04 16:01   ` Philippe Mathieu-Daudé
2019-12-03  2:29 ` [PATCH v4 17/40] target/arm: Tidy ARMMMUIdx m-profile definitions Richard Henderson
2019-12-03  6:27   ` Philippe Mathieu-Daudé
2019-12-03  2:29 ` [PATCH v4 18/40] target/arm: Reorganize ARMMMUIdx Richard Henderson
2019-12-04 13:44   ` Alex Bennée [this message]
2019-12-03  2:29 ` [PATCH v4 19/40] target/arm: Add regime_has_2_ranges Richard Henderson
2019-12-04 14:16   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 20/40] target/arm: Update arm_mmu_idx for VHE Richard Henderson
2019-12-04 14:37   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 21/40] target/arm: Update arm_sctlr " Richard Henderson
2019-12-03  2:29 ` [PATCH v4 22/40] target/arm: Update aa64_zva_access for EL2 Richard Henderson
2019-12-04 15:01   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 23/40] target/arm: Update ctr_el0_access " Richard Henderson
2019-12-04 16:11   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 24/40] target/arm: Add the hypervisor virtual counter Richard Henderson
2019-12-03  2:29 ` [PATCH v4 25/40] target/arm: Update timer access for VHE Richard Henderson
2019-12-04 18:35   ` Alex Bennée
2019-12-03  2:29 ` [PATCH v4 26/40] target/arm: Update define_one_arm_cp_reg_with_opaque " Richard Henderson
2019-12-04 18:58   ` Alex Bennée
2019-12-04 19:47     ` Richard Henderson
2019-12-04 22:38       ` Alex Bennée
2019-12-05 15:09         ` Richard Henderson
2019-12-06 15:53   ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 27/40] target/arm: Add VHE system register redirection and aliasing Richard Henderson
2019-12-06 17:24   ` Peter Maydell
2019-12-06 18:36     ` Richard Henderson
2019-12-06 18:41       ` Peter Maydell
2019-12-06 18:53         ` Richard Henderson
2019-12-03  2:29 ` [PATCH v4 28/40] target/arm: Add VHE timer " Richard Henderson
2019-12-06 17:33   ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 29/40] target/arm: Flush tlb for ASID changes in EL2&0 translation regime Richard Henderson
2019-12-06 17:05   ` Peter Maydell
2020-01-28  0:04     ` Richard Henderson
2019-12-03  2:29 ` [PATCH v4 30/40] target/arm: Flush tlbs for E2&0 " Richard Henderson
2019-12-06 17:14   ` Peter Maydell
2020-01-29 17:05     ` Richard Henderson
2019-12-03  2:29 ` [PATCH v4 31/40] target/arm: Update arm_phys_excp_target_el for TGE Richard Henderson
2019-12-06 16:59   ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 32/40] target/arm: Update {fp,sve}_exception_el for VHE Richard Henderson
2019-12-06 16:50   ` [PATCH v4 32/40] target/arm: Update {fp, sve}_exception_el " Peter Maydell
2019-12-03  2:29 ` [PATCH v4 33/40] target/arm: check TGE and E2H flags for EL0 pauth traps Richard Henderson
2019-12-06 16:08   ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 34/40] target/arm: Update get_a64_user_mem_index for VHE Richard Henderson
2019-12-06 16:46   ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 35/40] target/arm: Update arm_cpu_do_interrupt_aarch64 " Richard Henderson
2019-12-06 16:03   ` Peter Maydell
2019-12-06 18:51     ` Richard Henderson
2019-12-06 19:15       ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 36/40] target/arm: Enable ARMv8.1-VHE in -cpu max Richard Henderson
2019-12-06 15:57   ` Peter Maydell
2019-12-03  2:29 ` [PATCH v4 37/40] target/arm: Move arm_excp_unmasked to cpu.c Richard Henderson
2019-12-03  6:28   ` Philippe Mathieu-Daudé
2019-12-03  2:29 ` [PATCH v4 38/40] target/arm: Pass more cpu state to arm_excp_unmasked Richard Henderson
2019-12-03  6:29   ` Philippe Mathieu-Daudé
2019-12-03  2:29 ` [PATCH v4 39/40] target/arm: Use bool for unmasked in arm_excp_unmasked Richard Henderson
2019-12-03  6:30   ` Philippe Mathieu-Daudé
2019-12-03  2:29 ` [PATCH v4 40/40] target/arm: Raise only one interrupt in arm_cpu_exec_interrupt Richard Henderson
2019-12-06 15:57   ` Peter Maydell
