qemu-devel.nongnu.org archive mirror
* [PATCH v3 0/3] target/arm: Add support for FEAT_TLBIOS and FEAT_TLBIRANGE
@ 2021-03-10  0:29 Rebecca Cran
  2021-03-10  0:29 ` [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE Rebecca Cran
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Rebecca Cran @ 2021-03-10  0:29 UTC (permalink / raw)
  To: Peter Maydell, Richard Henderson, qemu-arm; +Cc: Rebecca Cran, qemu-devel

ARMv8.4 adds the mandatory FEAT_TLBIOS and FEAT_TLBIRANGE. They provide
TLBI maintenance instructions that extend to the Outer Shareable domain
and that apply to a range of input addresses.
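
For reference, the range instructions take their operand in Xt with this
layout (a sketch of the Armv8.4 encoding, in C comment form):

  /*
   * TLBI RVA* operand:
   *   ASID[63:48] TG[47:46] SCALE[45:44] NUM[43:39] TTL[38:37] BaseADDR[36:0]
   *
   * The range covered is (NUM + 1) * 2^(5*SCALE + 1) translation granules
   * (granule size selected by TG), starting at BaseADDR << granule_shift.
   */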
    
Changes from v2 to v3:

o Change the functions in cputlb.c to do a full flush. This should
  only be a short-term implementation.

Testing:
  o Ran scripts/checkpatch.pl: it flags the new functions in exec-all.h,
    but I think that's acceptable?
  o Built all targets
  o Ran test code that executed the new instructions
  o Ran "make test"

Rebecca Cran (3):
  target/arm: Add support for FEAT_TLBIRANGE
  target/arm: Add support for FEAT_TLBIOS
  target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type

 accel/tcg/cputlb.c      |  22 ++
 include/exec/exec-all.h |  41 +++
 target/arm/cpu.h        |  11 +
 target/arm/cpu64.c      |   1 +
 target/arm/helper.c     | 323 ++++++++++++++++++++
 5 files changed, 398 insertions(+)

-- 
2.26.2




* [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-10  0:29 [PATCH v3 0/3] target/arm: Add support for FEAT_TLBIOS and FEAT_TLBIRANGE Rebecca Cran
@ 2021-03-10  0:29 ` Rebecca Cran
  2021-03-10 19:24   ` Richard Henderson
  2021-03-10  0:29 ` [PATCH v3 2/3] target/arm: Add support for FEAT_TLBIOS Rebecca Cran
  2021-03-10  0:29 ` [PATCH v3 3/3] target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type Rebecca Cran
  2 siblings, 1 reply; 13+ messages in thread
From: Rebecca Cran @ 2021-03-10  0:29 UTC (permalink / raw)
  To: Peter Maydell, Richard Henderson, qemu-arm; +Cc: Rebecca Cran, qemu-devel

ARMv8.4 adds the mandatory FEAT_TLBIRANGE. It provides TLBI
maintenance instructions that apply to a range of input addresses.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
---
 accel/tcg/cputlb.c      |  22 ++
 include/exec/exec-all.h |  41 ++++
 target/arm/cpu.h        |   5 +
 target/arm/helper.c     | 248 ++++++++++++++++++++
 4 files changed, 316 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 8a7b779270a4..233fe302c236 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -623,6 +623,28 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     tlb_flush_page_by_mmuidx(cpu, addr, ALL_MMUIDX_BITS);
 }
 
+void tlb_flush_page_range_by_mmuidx(CPUState *cpu, target_ulong addr,
+                                    unsigned int num_pages, uint16_t idxmap)
+{
+    /*
+     * We currently do a full flush, but for performance this should be
+     * updated to only flush the requested pages, taking TBI into account.
+     */
+    tlb_flush_by_mmuidx(cpu, idxmap);
+}
+
+void tlb_flush_page_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
+                                                    target_ulong addr,
+                                                    unsigned int num_pages,
+                                                    uint16_t idxmap)
+{
+    /*
+     * We currently do a full flush, but for performance this should be
+     * updated to only flush the requested pages, taking TBI into account.
+     */
+    tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, idxmap);
+}
+
 void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong addr,
                                        uint16_t idxmap)
 {
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 6b036cae8f65..91232cd48d22 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -212,6 +212,35 @@ void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
  */
 void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu, target_ulong addr,
                                               uint16_t idxmap);
+/**
+ * tlb_flush_page_range_by_mmuidx:
+ * @cpu: CPU whose TLB should be flushed
+ * @addr: virtual address of start of page range to be flushed
+ * @num_pages: the number of pages to be flushed
+ * @idxmap: bitmap of MMU indexes to flush
+ *
+ * Flush a range of pages from the TLB of the specified CPU, for the specified
+ * MMU indexes.
+ */
+void tlb_flush_page_range_by_mmuidx(CPUState *cpu, target_ulong addr,
+                                    unsigned int num_pages, uint16_t idxmap);
+/**
+ * tlb_flush_page_range_by_mmuidx_all_cpus_synced:
+ * @cpu: Originating CPU of the flush
+ * @addr: virtual address of start of page range to be flushed
+ * @num_pages: the number of pages to be flushed
+ * @idxmap: bitmap of MMU indexes to flush
+ *
+ * Flush a range of pages from the TLB of all CPUs, for the specified MMU
+ * indexes, like tlb_flush_page_by_mmuidx_all_cpus, except that the source
+ * vCPU's work is scheduled as safe work, meaning all flushes will be
+ * complete once the source vCPU's safe work is complete. This will
+ * depend on when the guest's translation ends the TB.
+ */
+void tlb_flush_page_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
+                                                    target_ulong addr,
+                                                    unsigned int num_pages,
+                                                    uint16_t idxmap);
 /**
  * tlb_flush_by_mmuidx:
  * @cpu: CPU whose TLB should be flushed
@@ -313,6 +342,18 @@ static inline void tlb_flush_page_all_cpus_synced(CPUState *src,
                                                   target_ulong addr)
 {
 }
+static inline void tlb_flush_page_range_by_mmuidx(CPUState *cpu,
+                                                  target_ulong addr,
+                                                  unsigned int num_pages,
+                                                  uint16_t idxmap)
+{
+}
+static inline void tlb_flush_page_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
+                                                                  target_ulong addr,
+                                                                  unsigned int num_pages,
+                                                                  uint16_t idxmap)
+{
+}
 static inline void tlb_flush(CPUState *cpu)
 {
 }
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 193a49ec7fac..32b78a4ef587 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4038,6 +4038,11 @@ static inline bool isar_feature_aa64_pauth_arch(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, APA) != 0;
 }
 
+static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
+}
+
 static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 904b0927cd22..ec81586d90dd 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4759,6 +4759,171 @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                                   ARMMMUIdxBit_SE3, bits);
 }
 
+#ifdef TARGET_AARCH64
+static unsigned int tlbi_aa64_range_get_num_pages(CPUARMState *env,
+                                                  uint64_t value,
+                                                  uint64_t addr)
+{
+    unsigned int page_size;
+    unsigned int page_shift;
+    unsigned int page_size_granule;
+    uint64_t num;
+    uint64_t scale;
+    uint64_t exponent;
+    uint64_t high_addr;
+
+    num = (value >> 39) & 0xF;
+    scale = (value >> 44) & 0x3;
+    page_size_granule = (value >> 46) & 0x3;
+
+    switch (page_size_granule) {
+    case 1:
+        page_size = 4096;
+        page_shift = 12;
+        break;
+    case 2:
+        page_size = 16384;
+        page_shift = 14;
+        break;
+    case 3:
+        page_size = 65536;
+        page_shift = 16;
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR, "Invalid page size granule %d\n",
+                      page_size_granule);
+
+        raise_exception(env, EXCP_UDEF, syn_uncategorized(),
+                        exception_target_el(env));
+
+        break;
+    }
+
+    exponent = (5 * scale) + 1;
+    high_addr = addr + (((num + 1) << exponent) * page_size);
+
+    return (high_addr - addr) >> page_shift;
+}
+
+static void tlbi_aa64_rvae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  uint64_t value)
+{
+    /*
+     * Invalidate by VA range, EL1&0.
+     * Currently handles all of RVAE1, RVAAE1, RVAALE1 and RVALE1,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    CPUState *cs = env_cpu(env);
+    int mask = vae1_tlbmask(env);
+    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
+    unsigned int num_pages = tlbi_aa64_range_get_num_pages(env, value, addr);
+
+    if (tlb_force_broadcast(env)) {
+        tlb_flush_page_range_by_mmuidx_all_cpus_synced(cs, addr, num_pages,
+                                                       mask);
+    } else {
+        tlb_flush_page_range_by_mmuidx(cs, addr, num_pages, mask);
+    }
+}
+
+static void tlbi_aa64_rvae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    /*
+     * Invalidate by VA range, Inner/Outer Shareable EL1&0.
+     * Currently handles all of RVAE1IS, RVAE1OS, RVAAE1IS, RVAAE1OS,
+     * RVAALE1IS, RVAALE1OS, RVALE1IS and RVALE1OS, since we don't support
+     * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
+     * shareable specific flushes.
+     */
+    CPUState *cs = env_cpu(env);
+    int mask = vae1_tlbmask(env);
+    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
+    unsigned int num_pages = tlbi_aa64_range_get_num_pages(env, value, addr);
+
+    tlb_flush_page_range_by_mmuidx_all_cpus_synced(cs, addr, num_pages, mask);
+}
+
+static void tlbi_aa64_rvae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  uint64_t value)
+{
+    /*
+     * Invalidate by VA range, EL2.
+     * Currently handles all of RVAE2, RVAAE2, RVAALE2 and RVALE2,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    CPUState *cs = env_cpu(env);
+    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
+    unsigned int num_pages = tlbi_aa64_range_get_num_pages(env, value, addr);
+
+    if (tlb_force_broadcast(env)) {
+        tlb_flush_page_range_by_mmuidx_all_cpus_synced(cs, addr, num_pages,
+                                                       ARMMMUIdxBit_E2);
+    } else {
+        tlb_flush_page_range_by_mmuidx(cs, addr, num_pages, ARMMMUIdxBit_E2);
+    }
+}
+
+static void tlbi_aa64_rvae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    /*
+     * Invalidate by VA range, Inner/Outer Shareable, EL2.
+     * Currently handles all of RVAE2IS, RVAE2OS, RVAAE2IS, RVAAE2OS,
+     * RVAALE2IS, RVAALE2OS, RVALE2IS and RVALE2OS, since we don't support
+     * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
+     * shareable specific flushes.
+     */
+    CPUState *cs = env_cpu(env);
+    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
+    unsigned int num_pages = tlbi_aa64_range_get_num_pages(env, value, addr);
+
+    tlb_flush_page_range_by_mmuidx_all_cpus_synced(cs, addr, num_pages,
+                                                   ARMMMUIdxBit_E2);
+}
+
+static void tlbi_aa64_rvae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  uint64_t value)
+{
+    /*
+     * Invalidate by VA range, EL3.
+     * Currently handles all of RVAE3, RVAAE3, RVAALE3 and RVALE3,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    CPUState *cs = env_cpu(env);
+    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
+    unsigned int num_pages = tlbi_aa64_range_get_num_pages(env, value, addr);
+
+    if (tlb_force_broadcast(env)) {
+        tlb_flush_page_range_by_mmuidx_all_cpus_synced(cs, addr, num_pages,
+                                                       ARMMMUIdxBit_SE3);
+    } else {
+        tlb_flush_page_range_by_mmuidx(cs, addr, num_pages, ARMMMUIdxBit_SE3);
+    }
+}
+
+static void tlbi_aa64_rvae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    /*
+     * Invalidate by VA range, EL3, Inner/Outer Shareable.
+     * Currently handles all of RVAE3IS, RVAE3OS, RVAAE3IS, RVAAE3OS,
+     * RVAALE3IS, RVAALE3OS, RVALE3IS, and RVALE3OS, since we don't support
+     * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
+     * specific flushes.
+     */
+    CPUState *cs = env_cpu(env);
+    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
+    unsigned int num_pages = tlbi_aa64_range_get_num_pages(env, value, addr);
+
+    tlb_flush_page_range_by_mmuidx_all_cpus_synced(cs, addr, num_pages,
+                                                   ARMMMUIdxBit_SE3);
+}
+#endif
+
 static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                       bool isread)
 {
@@ -6920,6 +7085,86 @@ static const ARMCPRegInfo pauth_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+static const ARMCPRegInfo tlbirange_reginfo[] = {
+    { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+    { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+   { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+    { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+    { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1_write },
+    { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1_write },
+   { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1_write },
+    { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1_write },
+    { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
+      .access = PL2_W, .type = ARM_CP_NOP },
+    { .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
+      .access = PL2_W, .type = ARM_CP_NOP },
+    { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae2is_write },
+   { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae2is_write },
+    { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
+      .access = PL2_W, .type = ARM_CP_NOP },
+   { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
+      .access = PL2_W, .type = ARM_CP_NOP },
+    { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae2_write },
+   { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae2_write },
+   { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
+      .access = PL3_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae3is_write },
+   { .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5,
+      .access = PL3_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae3is_write },
+   { .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1,
+      .access = PL3_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae3_write },
+   { .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
+      .access = PL3_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae3_write },
+    REGINFO_SENTINEL
+};
+
 static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     Error *err = NULL;
@@ -8289,6 +8534,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_rndr, cpu)) {
         define_arm_cp_regs(cpu, rndr_reginfo);
     }
+    if (cpu_isar_feature(aa64_tlbirange, cpu)) {
+        define_arm_cp_regs(cpu, tlbirange_reginfo);
+    }
 #ifndef CONFIG_USER_ONLY
     /* Data Cache clean instructions up to PoP */
     if (cpu_isar_feature(aa64_dcpop, cpu)) {
-- 
2.26.2




* [PATCH v3 2/3] target/arm: Add support for FEAT_TLBIOS
  2021-03-10  0:29 [PATCH v3 0/3] target/arm: Add support for FEAT_TLBIOS and FEAT_TLBIRANGE Rebecca Cran
  2021-03-10  0:29 ` [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE Rebecca Cran
@ 2021-03-10  0:29 ` Rebecca Cran
  2021-03-10 19:32   ` Richard Henderson
  2021-03-10  0:29 ` [PATCH v3 3/3] target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type Rebecca Cran
  2 siblings, 1 reply; 13+ messages in thread
From: Rebecca Cran @ 2021-03-10  0:29 UTC (permalink / raw)
  To: Peter Maydell, Richard Henderson, qemu-arm; +Cc: Rebecca Cran, qemu-devel

ARMv8.4 adds the mandatory FEAT_TLBIOS. It provides TLBI
maintenance instructions that extend to the Outer Shareable domain.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
---
 target/arm/cpu.h    |  6 ++
 target/arm/helper.c | 75 ++++++++++++++++++++
 2 files changed, 81 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 32b78a4ef587..cf0801994a5f 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4043,6 +4043,12 @@ static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
 }
 
+static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 1 ||
+           FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
+}
+
 static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index ec81586d90dd..b1f634b0c897 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -7165,6 +7165,78 @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+static const ARMCPRegInfo tlbios_reginfo[] = {
+    { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_vmalle1is_write },
+    { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_vmalle1is_write },
+    { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+    { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+   { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+    { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
+      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae1is_write },
+    { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_alle2is_write },
+   { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_alle1is_write },
+    { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_alle1is_write },
+    { .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0,
+      .access = PL2_W, .type = ARM_CP_NOP },
+    { .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3,
+      .access = PL2_W, .type = ARM_CP_NOP },
+    { .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4,
+      .access = PL2_W, .type = ARM_CP_NOP },
+    { .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7,
+      .access = PL2_W, .type = ARM_CP_NOP },
+   { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae2is_write },
+   { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
+      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae2is_write },
+    { .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0,
+      .access = PL3_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_alle3is_write },
+   { .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1,
+      .access = PL3_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae3is_write },
+   { .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5,
+      .access = PL3_W, .type = ARM_CP_NO_RAW,
+      .writefn = tlbi_aa64_rvae3is_write },
+    REGINFO_SENTINEL
+};
+
 static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     Error *err = NULL;
@@ -8537,6 +8609,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_tlbirange, cpu)) {
         define_arm_cp_regs(cpu, tlbirange_reginfo);
     }
+    if (cpu_isar_feature(aa64_tlbios, cpu)) {
+        define_arm_cp_regs(cpu, tlbios_reginfo);
+    }
 #ifndef CONFIG_USER_ONLY
     /* Data Cache clean instructions up to PoP */
     if (cpu_isar_feature(aa64_dcpop, cpu)) {
-- 
2.26.2




* [PATCH v3 3/3] target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type
  2021-03-10  0:29 [PATCH v3 0/3] target/arm: Add support for FEAT_TLBIOS and FEAT_TLBIRANGE Rebecca Cran
  2021-03-10  0:29 ` [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE Rebecca Cran
  2021-03-10  0:29 ` [PATCH v3 2/3] target/arm: Add support for FEAT_TLBIOS Rebecca Cran
@ 2021-03-10  0:29 ` Rebecca Cran
  2021-03-10 19:34   ` Richard Henderson
  2 siblings, 1 reply; 13+ messages in thread
From: Rebecca Cran @ 2021-03-10  0:29 UTC (permalink / raw)
  To: Peter Maydell, Richard Henderson, qemu-arm; +Cc: Rebecca Cran, qemu-devel

Indicate support for FEAT_TLBIOS and FEAT_TLBIRANGE by setting
ID_AA64ISAR0.TLB to 2 for the max AARCH64 CPU type.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
---
 target/arm/cpu64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index f0a9e968c9c1..e34a6a6174fe 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -651,6 +651,7 @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
         t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
         t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
+        t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2);
         t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
         cpu->isar.id_aa64isar0 = t;
 
-- 
2.26.2




* Re: [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-10  0:29 ` [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE Rebecca Cran
@ 2021-03-10 19:24   ` Richard Henderson
  2021-03-10 21:59     ` Rebecca Cran
                       ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Richard Henderson @ 2021-03-10 19:24 UTC (permalink / raw)
  To: Rebecca Cran, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/9/21 6:29 PM, Rebecca Cran wrote:
> +void tlb_flush_page_range_by_mmuidx(CPUState *cpu, target_ulong addr,
> +                                    unsigned int num_pages, uint16_t idxmap)

I am not keen on this interface.  I think you should take either start+end 
addresses (inclusive) or start+length (in bytes).

Using num_pages, and as an unsigned int, seems too easy to fail when applied to 
a different guest.

> +{
> +  /*
> +   * We currently do a full flush, but for performance this should be
> +   * updated to only flush the requested pages, taking TBI into account.
> +   */
> +    tlb_flush_by_mmuidx(cpu, idxmap);
> +}

And if you're going to cop out like this, you might as well just do it in 
target/arm and not add these new interfaces at all.

> +#ifdef TARGET_AARCH64
> +static unsigned int tlbi_aa64_range_get_num_pages(CPUARMState *env,
> +                                                  uint64_t value,
> +                                                  uint64_t addr)
> +{
> +    unsigned int page_size;
> +    unsigned int page_shift;
> +    unsigned int page_size_granule;
> +    uint64_t num;
> +    uint64_t scale;
> +    uint64_t exponent;
> +    uint64_t high_addr;
> +
> +    num = (value >> 39) & 0xF;
> +    scale = (value >> 44) & 0x3;
> +    page_size_granule = (value >> 46) & 0x3;

extract64()
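
For example (untested; note that NUM is bits [43:39], i.e. 5 bits, so the
0xF mask above also looks one bit too narrow):

    num = extract64(value, 39, 5);
    scale = extract64(value, 44, 2);
    page_size_granule = extract64(value, 46, 2);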

> +
> +    switch (page_size_granule) {
> +    case 1:
> +      page_size = 4096;
> +      page_shift = 12;
> +      break;
> +    case 2:
> +      page_size = 16384;
> +      page_shift = 14;
> +      break;
> +    case 3:
> +      page_size = 65536;
> +      page_shift = 16;
> +      break;
> +    default:
> +      qemu_log_mask(LOG_GUEST_ERROR, "Invalid page size granule %d\n",
> +                    page_size_granule);
> +
> +      raise_exception(env, EXCP_UDEF, syn_uncategorized(),
> +                      exception_target_el(env));

You can't raise an exception from here, because you don't have all of the 
context for unwinding the tcg state, which you cannot access from within 
the callback of an ARMCPRegInfo.

The manual says that if TG does not correspond to the granule size of the 
actual translation then "the architecture does not require that the instruction 
invalidates any entries".  "Reserved" can be safely assumed to "not 
correspond", so I think you could just as easily return 0 here, after logging 
the guest error.
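
I.e. something like (untested):

    default:
        qemu_log_mask(LOG_GUEST_ERROR, "Invalid page size granule %d\n",
                      page_size_granule);
        return 0;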


> +    high_addr = addr + (((num + 1) << exponent) * page_size);
> +
> +    return (high_addr - addr) >> page_shift;

I'll note that it would be much easier for you to return a byte value for the 
length, and that you don't actually need to pass in addr at all.
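
Roughly (an untested sketch; the function name and the page-shift helper
split out of the TG switch above are hypothetical):

    static uint64_t tlbi_aa64_range_get_length(CPUARMState *env,
                                               uint64_t value)
    {
        uint64_t num = extract64(value, 39, 5);
        uint64_t scale = extract64(value, 44, 2);
        unsigned page_shift = tlbi_aa64_range_get_page_shift(env, value);

        /* (NUM + 1) * 2^(5*SCALE + 1) granules, expressed in bytes */
        return (num + 1) << (5 * scale + 1 + page_shift);
    }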

> +    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;

The manual does not explicitly say, but I'm certain that this should be a 
signed address, when regime_has_2_ranges().  Otherwise it would be impossible 
to flush a range of kernel addresses.

But all of this is moot if we're just going to flush all pages.  At which point 
you might as well simply re-use tlbi_aa64_vmalle1_write et al.  Place your TODO 
comment in front of tlbirange_reginfo[] instead of buried n-levels further down.
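
E.g. (sketch), until a real range flush exists the range ops could simply 
alias the existing full-flush writefns:

    { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
      .access = PL1_W, .type = ARM_CP_NO_RAW,
      .writefn = tlbi_aa64_vmalle1is_write },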


r~



* Re: [PATCH v3 2/3] target/arm: Add support for FEAT_TLBIOS
  2021-03-10  0:29 ` [PATCH v3 2/3] target/arm: Add support for FEAT_TLBIOS Rebecca Cran
@ 2021-03-10 19:32   ` Richard Henderson
  0 siblings, 0 replies; 13+ messages in thread
From: Richard Henderson @ 2021-03-10 19:32 UTC (permalink / raw)
  To: Rebecca Cran, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/9/21 6:29 PM, Rebecca Cran wrote:
> +static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
> +{
> +    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 1 ||
> +           FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
> +}

The correct test here is != 0.  See D13.1.3
Principles of the ID scheme for fields in ID registers.
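
I.e. (sketch):

    static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) != 0;
    }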


r~



* Re: [PATCH v3 3/3] target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type
  2021-03-10  0:29 ` [PATCH v3 3/3] target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type Rebecca Cran
@ 2021-03-10 19:34   ` Richard Henderson
  0 siblings, 0 replies; 13+ messages in thread
From: Richard Henderson @ 2021-03-10 19:34 UTC (permalink / raw)
  To: Rebecca Cran, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/9/21 6:29 PM, Rebecca Cran wrote:
> @@ -651,6 +651,7 @@ static void aarch64_max_initfn(Object *obj)
>           t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
>           t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
>           t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
> +        t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2);

Just add /* FEAT_TLBIRANGE */ on the line here.
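
I.e.:

    t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */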
Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~



* Re: [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-10 19:24   ` Richard Henderson
@ 2021-03-10 21:59     ` Rebecca Cran
  2021-03-15 18:34     ` Rebecca Cran
  2021-03-16  6:20     ` Rebecca Cran
  2 siblings, 0 replies; 13+ messages in thread
From: Rebecca Cran @ 2021-03-10 21:59 UTC (permalink / raw)
  To: Richard Henderson, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/10/21 12:24 PM, Richard Henderson wrote:
> On 3/9/21 6:29 PM, Rebecca Cran wrote:
>> +void tlb_flush_page_range_by_mmuidx(CPUState *cpu, target_ulong addr,
>> +                                    unsigned int num_pages, uint16_t 
>> idxmap)
> 
> I am not keen on this interface.  I think you should take either 
> start+end addresses (inclusive) or start+length (in bytes).
> 
> Using num_pages, and as an unsigned int, seems too easy to fail when 
> applied to a different guest.
> 
>> +{
>> +  /*
>> +   * We currently do a full flush, but for performance this should be
>> +   * updated to only flush the requested pages, taking TBI into account.
>> +   */
>> +    tlb_flush_by_mmuidx(cpu, idxmap);
>> +}
> 
> And if you're going to cop-out like this, you might as well just do it 
> in target/arm and not add these new interfaces at all.
> 
>> +#ifdef TARGET_AARCH64
>> +static unsigned int tlbi_aa64_range_get_num_pages(CPUARMState *env,
>> +                                                  uint64_t value,
>> +                                                  uint64_t addr)
>> +{
>> +    unsigned int page_size;
>> +    unsigned int page_shift;
>> +    unsigned int page_size_granule;
>> +    uint64_t num;
>> +    uint64_t scale;
>> +    uint64_t exponent;
>> +    uint64_t high_addr;
>> +
>> +    num = (value >> 39) & 0xF;
>> +    scale = (value >> 44) & 0x3;
>> +    page_size_granule = (value >> 46) & 0x3;
> 
> extract64()
> 
>> +
>> +    switch (page_size_granule) {
>> +    case 1:
>> +      page_size = 4096;
>> +      page_shift = 12;
>> +      break;
>> +    case 2:
>> +      page_size = 16384;
>> +      page_shift = 14;
>> +      break;
>> +    case 3:
>> +      page_size = 65536;
>> +      page_shift = 16;
>> +      break;
>> +    default:
>> +      qemu_log_mask(LOG_GUEST_ERROR, "Invalid page size granule %d\n",
>> +                    page_size_granule);
>> +
>> +      raise_exception(env, EXCP_UDEF, syn_uncategorized(),
>> +                      exception_target_el(env));
> 
> You can't raise an exception from here, because you don't have all of 
> the context for unwinding the tcg state.  Which you cannot access from 
> within the callback of an ARMCPRegInfo.
> 
> The manual says that if TG does not correspond to the granule size of 
> the actual translation then "the architecture does not require that the 
> instruction invalidates any entries".  "Reserved" can be safely assumed 
> to "not correspond", so I think you could just as easily return 0 here, 
> after logging the guest error.
> 
> 
>> +    high_addr = addr + (((num + 1) << exponent) * page_size);
>> +
>> +    return (high_addr - addr) >> page_shift;
> 
> I'll note that it would be much easier for you to return a byte value 
> for the length, and that you don't actually need to pass in addr at all.
> 
>> +    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
> 
> The manual does not explicitly say, but I'm certain that this should be 
> a signed address, when regime_has_2_ranges().  Otherwise it would be 
> impossible to flush a range of kernel addresses.
> 
> But all of this is moot if we're just going to flush all pages.  At 
> which point you might as well simply re-use tlbi_aa64_vmalle1_write et 
> al.  Place your TODO comment in front of tlbirange_reginfo[] instead of 
> buried n-levels further down.

Thanks for the comments. I'll continue working on the full/proper 
implementation (including changing the interface to remove num_pages) 
and send out a v4.

-- 
Rebecca Cran



* Re: [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-10 19:24   ` Richard Henderson
  2021-03-10 21:59     ` Rebecca Cran
@ 2021-03-15 18:34     ` Rebecca Cran
  2021-03-15 18:42       ` Richard Henderson
  2021-03-16  6:20     ` Rebecca Cran
  2 siblings, 1 reply; 13+ messages in thread
From: Rebecca Cran @ 2021-03-15 18:34 UTC (permalink / raw)
  To: Richard Henderson, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/10/21 12:24 PM, Richard Henderson wrote:
> On 3/9/21 6:29 PM, Rebecca Cran wrote:
>> +void tlb_flush_page_range_by_mmuidx(CPUState *cpu, target_ulong addr,
>> +                                    unsigned int num_pages, uint16_t 
>> idxmap)
> 
> I am not keen on this interface.  I think you should take either 
> start+end addresses (inclusive) or start+length (in bytes).
> 
> Using num_pages, and as an unsigned int, seems too easy to fail when 
> applied to a different guest.

Do you mean pushing the knowledge of the number of pages to invalidate 
down to cputlb.c? Because I'm thinking there has to be a loop somewhere 
that invalidates each page if a full flush isn't being done?

-- 
Rebecca Cran



* Re: [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-15 18:34     ` Rebecca Cran
@ 2021-03-15 18:42       ` Richard Henderson
  0 siblings, 0 replies; 13+ messages in thread
From: Richard Henderson @ 2021-03-15 18:42 UTC (permalink / raw)
  To: Rebecca Cran, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/15/21 12:34 PM, Rebecca Cran wrote:
> On 3/10/21 12:24 PM, Richard Henderson wrote:
>> On 3/9/21 6:29 PM, Rebecca Cran wrote:
>>> +void tlb_flush_page_range_by_mmuidx(CPUState *cpu, target_ulong addr,
>>> +                                    unsigned int num_pages, uint16_t idxmap)
>>
>> I am not keen on this interface.  I think you should take either start+end 
>> addresses (inclusive) or start+length (in bytes).
>>
>> Using num_pages, and as an unsigned int, seems too easy to fail when applied 
>> to a different guest.
> 
> Do you mean pushing the knowledge of the number of pages to invalidate down to 
> cputlb.c?

Yes.

In particular, your interface does not allow a single call to invalidate 1/2 of 
the total address space.  Because the type for num_pages isn't large enough.

There's nothing else in the cputlb interface that is page-based, except for 
"flush one page", and I thought that either

   target_ulong addr, target_ulong length, unsigned bits

would be a clearer interface to use.
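
I.e. a prototype along the lines of (function name hypothetical):

   void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
                                  target_ulong length, uint16_t idxmap,
                                  unsigned bits);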

> Because I'm thinking there has to be a loop somewhere that 
> invalidates each page if a full flush isn't being done?

Certainly.

r~



* Re: [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-10 19:24   ` Richard Henderson
  2021-03-10 21:59     ` Rebecca Cran
  2021-03-15 18:34     ` Rebecca Cran
@ 2021-03-16  6:20     ` Rebecca Cran
  2021-03-16 15:09       ` Richard Henderson
  2 siblings, 1 reply; 13+ messages in thread
From: Rebecca Cran @ 2021-03-16  6:20 UTC (permalink / raw)
  To: Richard Henderson, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/10/21 12:24 PM, Richard Henderson wrote:
> On 3/9/21 6:29 PM, Rebecca Cran wrote:

>> +    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
> 
> The manual does not explicitly say, but I'm certain that this should be 
> a signed address, when regime_has_2_ranges().  Otherwise it would be 
> impossible to flush a range of kernel addresses.

I see other functions have

uint64_t pageaddr = sextract(value...);

Would that be sufficient here too, or do we need to check 
regime_has_2_ranges()?

-- 
Rebecca Cran



* Re: [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-16  6:20     ` Rebecca Cran
@ 2021-03-16 15:09       ` Richard Henderson
  2021-03-16 15:51         ` Rebecca Cran
  0 siblings, 1 reply; 13+ messages in thread
From: Richard Henderson @ 2021-03-16 15:09 UTC (permalink / raw)
  To: Rebecca Cran, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/16/21 12:20 AM, Rebecca Cran wrote:
> On 3/10/21 12:24 PM, Richard Henderson wrote:
>> On 3/9/21 6:29 PM, Rebecca Cran wrote:
> 
>>> +    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
>>
>> The manual does not explicitly say, but I'm certain that this should be a 
>> signed address, when regime_has_2_ranges().  Otherwise it would be impossible 
>> to flush a range of kernel addresses.
> 
> I see other functions have
> 
> uint64_t pageaddr = sextract(value...);
> 
> Would that be sufficient here too, or do we need to check regime_has_2_ranges()?

We need to check the regime.
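
Something like this (untested; assumes regime_has_2_ranges() from 
target/arm/internals.h with mmuidx being the regime's ARMMMUIdx, and note 
BaseADDR is 37 bits, [36:0], so the 0xFFFFFFFFF mask in the patch is one 
bit short):

    uint64_t pageaddr;

    if (regime_has_2_ranges(mmuidx)) {
        /* sign-extend so TTBR1 (kernel) ranges can be flushed too */
        pageaddr = (uint64_t)sextract64(value, 0, 37) << TARGET_PAGE_BITS;
    } else {
        pageaddr = extract64(value, 0, 37) << TARGET_PAGE_BITS;
    }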

r~




* Re: [PATCH v3 1/3] target/arm: Add support for FEAT_TLBIRANGE
  2021-03-16 15:09       ` Richard Henderson
@ 2021-03-16 15:51         ` Rebecca Cran
  0 siblings, 0 replies; 13+ messages in thread
From: Rebecca Cran @ 2021-03-16 15:51 UTC (permalink / raw)
  To: Richard Henderson, Peter Maydell, qemu-arm; +Cc: qemu-devel

On 3/16/21 9:09 AM, Richard Henderson wrote:
> On 3/16/21 12:20 AM, Rebecca Cran wrote:
>> On 3/10/21 12:24 PM, Richard Henderson wrote:
>>> On 3/9/21 6:29 PM, Rebecca Cran wrote:
>>
>>>> +    uint64_t addr = (value & 0xFFFFFFFFFUL) << TARGET_PAGE_BITS;
>>>
>>> The manual does not explicitly say, but I'm certain that this should 
>>> be a signed address, when regime_has_2_ranges().  Otherwise it would 
>>> be impossible to flush a range of kernel addresses.
>>
>> I see other functions have
>>
>> uint64_t pageaddr = sextract(value...);
>>
>> Would that be sufficient here too, or do we need to check 
>> regime_has_2_ranges()?
> 
> We need to check the regime.

Thanks. I've just sent out a v4 patch, which I hope is closer to being 
correct.

-- 
Rebecca Cran


