* [PATCH v2 0/7] xen/arm: TLB flush helpers rework
@ 2019-05-08 16:15 ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii_Anisov

Hi all,

I spent the last few months looking at Xen boot and memory management
to make it simpler, more efficient and also more compliant with respect
to the Arm Arm.

The full rework is quite substantial (already 150 patches and I haven't
finished yet!), so I am planning to send it in smaller parts over the
next few weeks.

In this first part, I focus on reworking how we flush the TLBs in Xen.

Cheers,

Julien Grall (7):
  xen/arm: mm: Consolidate setting SCTLR_EL2.WXN in a single place
  xen/arm: Remove flush_xen_text_tlb_local()
  xen/arm: tlbflush: Clarify the TLB helpers name
  xen/arm: page: Clarify the Xen TLBs helpers name
  xen/arm: Gather all TLB flush helpers in tlbflush.h
  xen/arm: tlbflush: Rework TLB helpers
  xen/arm: mm: Flush the TLBs even if a mapping failed in
    create_xen_entries

 xen/arch/arm/mm.c                    | 69 ++++++++++++++++++++++-----------
 xen/arch/arm/p2m.c                   |  6 +--
 xen/arch/arm/smp.c                   |  2 +-
 xen/arch/arm/traps.c                 |  2 +-
 xen/include/asm-arm/arm32/flushtlb.h | 71 +++++++++++++++++++---------------
 xen/include/asm-arm/arm32/page.h     | 48 ++++-------------------
 xen/include/asm-arm/arm64/flushtlb.h | 75 ++++++++++++++++++++----------------
 xen/include/asm-arm/arm64/page.h     | 49 +++--------------------
 xen/include/asm-arm/flushtlb.h       | 38 ++++++++++++++++++
 xen/include/asm-arm/page.h           | 38 ------------------
 10 files changed, 184 insertions(+), 214 deletions(-)

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel



* [PATCH v2 1/7] xen/arm: mm: Consolidate setting SCTLR_EL2.WXN in a single place
@ 2019-05-08 16:15   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii Anisov

The logic to set SCTLR_EL2.WXN is the same for the boot CPU and
non-boot CPUs. So introduce a function to set the bit and flush the TLBs.

This new function will help us to document and update the logic in a
single place.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v2:
        - Fix typo in the commit message
        - Add Andrii's reviewed-by
---
 xen/arch/arm/mm.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 01ae2cccc0..93ad118183 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -601,6 +601,19 @@ void __init remove_early_mappings(void)
     flush_xen_data_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
 }
 
+/*
+ * After boot, Xen page-tables should not contain mapping that are both
+ * Writable and eXecutables.
+ *
+ * This should be called on each CPU to enforce the policy.
+ */
+static void xen_pt_enforce_wnx(void)
+{
+    WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
+    /* Flush everything after setting WXN bit. */
+    flush_xen_text_tlb_local();
+}
+
 extern void switch_ttbr(uint64_t ttbr);
 
 /* Clear a translation table and clean & invalidate the cache */
@@ -702,10 +715,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     clear_table(boot_second);
     clear_table(boot_third);
 
-    /* From now on, no mapping may be both writable and executable. */
-    WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
-    /* Flush everything after setting WXN bit. */
-    flush_xen_text_tlb_local();
+    xen_pt_enforce_wnx();
 
 #ifdef CONFIG_ARM_32
     per_cpu(xen_pgtable, 0) = cpu0_pgtable;
@@ -777,9 +787,7 @@ int init_secondary_pagetables(int cpu)
 /* MMU setup for secondary CPUS (which already have paging enabled) */
 void mmu_init_secondary_cpu(void)
 {
-    /* From now on, no mapping may be both writable and executable. */
-    WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
-    flush_xen_text_tlb_local();
+    xen_pt_enforce_wnx();
 }
 
 #ifdef CONFIG_ARM_32
-- 
2.11.0




* [PATCH v2 2/7] xen/arm: Remove flush_xen_text_tlb_local()
@ 2019-05-08 16:15   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii Anisov

The function flush_xen_text_tlb_local() has been misused and results
in invalidating the instruction cache more often than necessary.

For instance, there is no need to invalidate the instruction cache
when setting SCTLR_EL2.WXN.

There is effectively only one caller (i.e. free_init_memory()) that
needs to invalidate the instruction cache.

So rather than keeping the function flush_xen_text_tlb_local() around,
replace it with a call to flush_xen_data_tlb_local() and explicitly
flush the instruction cache when necessary.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v2:
        - Add Andrii's reviewed-by
---
 xen/arch/arm/mm.c                | 17 ++++++++++++++---
 xen/include/asm-arm/arm32/page.h | 23 +++++++++--------------
 xen/include/asm-arm/arm64/page.h | 21 +++++----------------
 3 files changed, 28 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 93ad118183..dfbe39c70a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -610,8 +610,12 @@ void __init remove_early_mappings(void)
 static void xen_pt_enforce_wnx(void)
 {
     WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
-    /* Flush everything after setting WXN bit. */
-    flush_xen_text_tlb_local();
+    /*
+     * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized
+     * before flushing the TLBs.
+     */
+    isb();
+    flush_xen_data_tlb_local();
 }
 
 extern void switch_ttbr(uint64_t ttbr);
@@ -1123,7 +1127,7 @@ static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
         }
         write_pte(xen_xenmap + i, pte);
     }
-    flush_xen_text_tlb_local();
+    flush_xen_data_tlb_local();
 }
 
 /* Release all __init and __initdata ranges to be reused */
@@ -1136,6 +1140,13 @@ void free_init_memory(void)
     uint32_t *p;
 
     set_pte_flags_on_range(__init_begin, len, mg_rw);
+
+    /*
+     * From now on, init will not be used for execution anymore,
+     * so nuke the instruction cache to remove entries related to init.
+     */
+    invalidate_icache_local();
+
 #ifdef CONFIG_ARM_32
     /* udf instruction i.e (see A8.8.247 in ARM DDI 0406C.c) */
     insn = 0xe7f000f0;
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index ea4b312c70..40a77daa9d 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -46,24 +46,19 @@ static inline void invalidate_icache(void)
 }
 
 /*
- * Flush all hypervisor mappings from the TLB and branch predictor of
- * the local processor.
- *
- * This is needed after changing Xen code mappings.
- *
- * The caller needs to issue the necessary DSB and D-cache flushes
- * before calling flush_xen_text_tlb.
+ * Invalidate all instruction caches on the local processor to PoU.
+ * We also need to flush the branch predictor for ARMv7 as it may be
+ * architecturally visible to the software (see B2.2.4 in ARM DDI 0406C.b).
  */
-static inline void flush_xen_text_tlb_local(void)
+static inline void invalidate_icache_local(void)
 {
     asm volatile (
-        "isb;"                        /* Ensure synchronization with previous changes to text */
-        CMD_CP32(TLBIALLH)            /* Flush hypervisor TLB */
-        CMD_CP32(ICIALLU)             /* Flush I-cache */
-        CMD_CP32(BPIALL)              /* Flush branch predictor */
-        "dsb;"                        /* Ensure completion of TLB+BP flush */
-        "isb;"
+        CMD_CP32(ICIALLU)       /* Flush I-cache. */
+        CMD_CP32(BPIALL)        /* Flush branch predictor. */
         : : : "memory");
+
+    dsb(nsh);                   /* Ensure completion of the flush I-cache */
+    isb();                      /* Synchronize fetched instruction stream. */
 }
 
 /*
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 23d778154d..6c36d0210f 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -37,23 +37,12 @@ static inline void invalidate_icache(void)
     isb();
 }
 
-/*
- * Flush all hypervisor mappings from the TLB of the local processor.
- *
- * This is needed after changing Xen code mappings.
- *
- * The caller needs to issue the necessary DSB and D-cache flushes
- * before calling flush_xen_text_tlb.
- */
-static inline void flush_xen_text_tlb_local(void)
+/* Invalidate all instruction caches on the local processor to PoU */
+static inline void invalidate_icache_local(void)
 {
-    asm volatile (
-        "isb;"       /* Ensure synchronization with previous changes to text */
-        "tlbi   alle2;"                 /* Flush hypervisor TLB */
-        "ic     iallu;"                 /* Flush I-cache */
-        "dsb    sy;"                    /* Ensure completion of TLB flush */
-        "isb;"
-        : : : "memory");
+    asm volatile ("ic iallu");
+    dsb(nsh);               /* Ensure completion of the I-cache flush */
+    isb();
 }
 
 /*
-- 
2.11.0




* [PATCH v2 3/7] xen/arm: tlbflush: Clarify the TLB helpers name
@ 2019-05-08 16:15   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii Anisov

The TLB helpers in the header tlbflush.h are currently quite confusing
to use: their names may lead one to think they are dealing with the
hypervisor's TLBs while they actually deal with guest TLBs.

Rename them to make it clearer that we are dealing with guest TLBs.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v2:
        - Add Andrii's reviewed-by
---
 xen/arch/arm/p2m.c                   | 6 +++---
 xen/arch/arm/smp.c                   | 2 +-
 xen/arch/arm/traps.c                 | 2 +-
 xen/include/asm-arm/arm32/flushtlb.h | 8 ++++----
 xen/include/asm-arm/arm64/flushtlb.h | 8 ++++----
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c38bd7e16e..92c2413f20 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -151,7 +151,7 @@ void p2m_restore_state(struct vcpu *n)
      * when running multiple vCPU of the same domain on a single pCPU.
      */
     if ( *last_vcpu_ran != INVALID_VCPU_ID && *last_vcpu_ran != n->vcpu_id )
-        flush_tlb_local();
+        flush_guest_tlb_local();
 
     *last_vcpu_ran = n->vcpu_id;
 }
@@ -196,7 +196,7 @@ static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
         isb();
     }
 
-    flush_tlb();
+    flush_guest_tlb();
 
     if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
     {
@@ -1969,7 +1969,7 @@ static void setup_virt_paging_one(void *data)
         WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2);
         isb();
 
-        flush_tlb_all_local();
+        flush_all_guests_tlb_local();
     }
 }
 
diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
index 62f57f0ba2..ce1fcc8ef9 100644
--- a/xen/arch/arm/smp.c
+++ b/xen/arch/arm/smp.c
@@ -8,7 +8,7 @@
 void flush_tlb_mask(const cpumask_t *mask)
 {
     /* No need to IPI other processors on ARM, the processor takes care of it. */
-    flush_tlb_all();
+    flush_all_guests_tlb();
 }
 
 void smp_send_event_check_mask(const cpumask_t *mask)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index d8b9a8a0f0..1aba970415 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1924,7 +1924,7 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
          * still be inaccurate.
          */
         if ( !is_data )
-            flush_tlb_local();
+            flush_guest_tlb_local();
 
         rc = gva_to_ipa(gva, &gpa, GV2M_READ);
         /*
diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index bbcc82f490..22e100eccf 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_ARM32_FLUSHTLB_H__
 
 /* Flush local TLBs, current VMID only */
-static inline void flush_tlb_local(void)
+static inline void flush_guest_tlb_local(void)
 {
     dsb(sy);
 
@@ -13,7 +13,7 @@ static inline void flush_tlb_local(void)
 }
 
 /* Flush inner shareable TLBs, current VMID only */
-static inline void flush_tlb(void)
+static inline void flush_guest_tlb(void)
 {
     dsb(sy);
 
@@ -24,7 +24,7 @@ static inline void flush_tlb(void)
 }
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_tlb_all_local(void)
+static inline void flush_all_guests_tlb_local(void)
 {
     dsb(sy);
 
@@ -35,7 +35,7 @@ static inline void flush_tlb_all_local(void)
 }
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_tlb_all(void)
+static inline void flush_all_guests_tlb(void)
 {
     dsb(sy);
 
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index 942f2d3992..adbbd5c522 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_ARM64_FLUSHTLB_H__
 
 /* Flush local TLBs, current VMID only */
-static inline void flush_tlb_local(void)
+static inline void flush_guest_tlb_local(void)
 {
     asm volatile(
         "dsb sy;"
@@ -13,7 +13,7 @@ static inline void flush_tlb_local(void)
 }
 
 /* Flush innershareable TLBs, current VMID only */
-static inline void flush_tlb(void)
+static inline void flush_guest_tlb(void)
 {
     asm volatile(
         "dsb sy;"
@@ -24,7 +24,7 @@ static inline void flush_tlb(void)
 }
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_tlb_all_local(void)
+static inline void flush_all_guests_tlb_local(void)
 {
     asm volatile(
         "dsb sy;"
@@ -35,7 +35,7 @@ static inline void flush_tlb_all_local(void)
 }
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_tlb_all(void)
+static inline void flush_all_guests_tlb(void)
 {
     asm volatile(
         "dsb sy;"
-- 
2.11.0




* [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-08 16:16   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii Anisov

Now that we have dropped flush_xen_text_tlb_local(), we have only one
set of helpers acting on Xen TLBs. Their naming is quite confusing
because the TLB instructions used will act on both Data and Instruction
TLBs.

Take the opportunity to rework the documentation, which can be
confusing to read as it doesn't match the implementation.

Lastly, switch from unsigned long to vaddr_t as the functions
technically deal with virtual addresses.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v2:
        - Add Andrii's reviewed-by
---
 xen/arch/arm/mm.c                | 18 +++++++++---------
 xen/include/asm-arm/arm32/page.h | 15 +++++----------
 xen/include/asm-arm/arm64/page.h | 15 +++++----------
 xen/include/asm-arm/page.h       | 28 ++++++++++++++--------------
 4 files changed, 33 insertions(+), 43 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index dfbe39c70a..8ee828d445 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -335,7 +335,7 @@ void set_fixmap(unsigned map, mfn_t mfn, unsigned int flags)
     pte.pt.table = 1; /* 4k mappings always have this bit set */
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_xen_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 /* Remove a mapping from a fixmap entry */
@@ -343,7 +343,7 @@ void clear_fixmap(unsigned map)
 {
     lpae_t pte = {0};
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_xen_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 /* Create Xen's mappings of memory.
@@ -377,7 +377,7 @@ static void __init create_mappings(lpae_t *second,
         write_pte(p + i, pte);
         pte.pt.base += 1 << LPAE_SHIFT;
     }
-    flush_xen_data_tlb_local();
+    flush_xen_tlb_local();
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
@@ -455,7 +455,7 @@ void *map_domain_page(mfn_t mfn)
      * We may not have flushed this specific subpage at map time,
      * since we only flush the 4k page not the superpage
      */
-    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
+    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
 
     return (void *)va;
 }
@@ -598,7 +598,7 @@ void __init remove_early_mappings(void)
     write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
     write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START + SZ_2M),
               pte);
-    flush_xen_data_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
+    flush_xen_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
 }
 
 /*
@@ -615,7 +615,7 @@ static void xen_pt_enforce_wnx(void)
      * before flushing the TLBs.
      */
     isb();
-    flush_xen_data_tlb_local();
+    flush_xen_tlb_local();
 }
 
 extern void switch_ttbr(uint64_t ttbr);
@@ -879,7 +879,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
         vaddr += FIRST_SIZE;
     }
 
-    flush_xen_data_tlb_local();
+    flush_xen_tlb_local();
 }
 #endif
 
@@ -1052,7 +1052,7 @@ static int create_xen_entries(enum xenmap_operation op,
                 BUG();
         }
     }
-    flush_xen_data_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
+    flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
 
     rc = 0;
 
@@ -1127,7 +1127,7 @@ static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
         }
         write_pte(xen_xenmap + i, pte);
     }
-    flush_xen_data_tlb_local();
+    flush_xen_tlb_local();
 }
 
 /* Release all __init and __initdata ranges to be reused */
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index 40a77daa9d..0b41b9214b 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
     isb();                      /* Synchronize fetched instruction stream. */
 }
 
-/*
- * Flush all hypervisor mappings from the data TLB of the local
- * processor. This is not sufficient when changing code mappings or
- * for self modifying code.
- */
-static inline void flush_xen_data_tlb_local(void)
+/* Flush all hypervisor mappings from the TLB of the local processor. */
+static inline void flush_xen_tlb_local(void)
 {
     asm volatile("dsb;" /* Ensure preceding are visible */
                  CMD_CP32(TLBIALLH)
@@ -76,14 +72,13 @@ static inline void flush_xen_data_tlb_local(void)
 }
 
 /* Flush TLB of local processor for address va. */
-static inline void __flush_xen_data_tlb_one_local(vaddr_t va)
+static inline void __flush_xen_tlb_one_local(vaddr_t va)
 {
     asm volatile(STORE_CP32(0, TLBIMVAH) : : "r" (va) : "memory");
 }
 
-/* Flush TLB of all processors in the inner-shareable domain for
- * address va. */
-static inline void __flush_xen_data_tlb_one(vaddr_t va)
+/* Flush TLB of all processors in the inner-shareable domain for address va. */
+static inline void __flush_xen_tlb_one(vaddr_t va)
 {
     asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory");
 }
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 6c36d0210f..31d04ecf76 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -45,12 +45,8 @@ static inline void invalidate_icache_local(void)
     isb();
 }
 
-/*
- * Flush all hypervisor mappings from the data TLB of the local
- * processor. This is not sufficient when changing code mappings or
- * for self modifying code.
- */
-static inline void flush_xen_data_tlb_local(void)
+/* Flush all hypervisor mappings from the TLB of the local processor. */
+static inline void flush_xen_tlb_local(void)
 {
     asm volatile (
         "dsb    sy;"                    /* Ensure visibility of PTE writes */
@@ -61,14 +57,13 @@ static inline void flush_xen_data_tlb_local(void)
 }
 
 /* Flush TLB of local processor for address va. */
-static inline void  __flush_xen_data_tlb_one_local(vaddr_t va)
+static inline void  __flush_xen_tlb_one_local(vaddr_t va)
 {
     asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
 }
 
-/* Flush TLB of all processors in the inner-shareable domain for
- * address va. */
-static inline void __flush_xen_data_tlb_one(vaddr_t va)
+/* Flush TLB of all processors in the inner-shareable domain for address va. */
+static inline void __flush_xen_tlb_one(vaddr_t va)
 {
     asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
 }
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 1a1713ce02..195345e24a 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -234,18 +234,18 @@ static inline int clean_and_invalidate_dcache_va_range
 } while (0)
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB of the
- * local processor. This is not sufficient when changing code mappings
- * or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the TLB of the local
+ * processor.
  */
-static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
-                                                     unsigned long size)
+static inline void flush_xen_tlb_range_va_local(vaddr_t va,
+                                                unsigned long size)
 {
-    unsigned long end = va + size;
+    vaddr_t end = va + size;
+
     dsb(sy); /* Ensure preceding are visible */
     while ( va < end )
     {
-        __flush_xen_data_tlb_one_local(va);
+        __flush_xen_tlb_one_local(va);
         va += PAGE_SIZE;
     }
     dsb(sy); /* Ensure completion of the TLB flush */
@@ -253,18 +253,18 @@ static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB of all
- * processors in the inner-shareable domain. This is not sufficient
- * when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the TLB of all
+ * processors in the inner-shareable domain.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va,
-                                               unsigned long size)
+static inline void flush_xen_tlb_range_va(vaddr_t va,
+                                          unsigned long size)
 {
-    unsigned long end = va + size;
+    vaddr_t end = va + size;
+
     dsb(sy); /* Ensure preceding are visible */
     while ( va < end )
     {
-        __flush_xen_data_tlb_one(va);
+        __flush_xen_tlb_one(va);
         va += PAGE_SIZE;
     }
     dsb(sy); /* Ensure completion of the TLB flush */
-- 
2.11.0



* [PATCH v2 5/7] xen/arm: Gather all TLB flush helpers in tlbflush.h
@ 2019-05-08 16:16   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii Anisov

At the moment, TLB helpers are scattered across 2 headers: page.h (for
Xen TLB helpers) and tlbflush.h (for guest TLB helpers).

This patch gathers all of them in tlbflush.h. This will help to make the
helpers uniform and to update their logic in follow-up patches.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v2:
        - Add Andrii's reviewed-by
---
 xen/include/asm-arm/arm32/flushtlb.h | 22 +++++++++++++++++++++
 xen/include/asm-arm/arm32/page.h     | 22 ---------------------
 xen/include/asm-arm/arm64/flushtlb.h | 23 ++++++++++++++++++++++
 xen/include/asm-arm/arm64/page.h     | 23 ----------------------
 xen/include/asm-arm/flushtlb.h       | 38 ++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/page.h           | 38 ------------------------------------
 6 files changed, 83 insertions(+), 83 deletions(-)

diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index 22e100eccf..b629db61cb 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -45,6 +45,28 @@ static inline void flush_all_guests_tlb(void)
     isb();
 }
 
+/* Flush all hypervisor mappings from the TLB of the local processor. */
+static inline void flush_xen_tlb_local(void)
+{
+    asm volatile("dsb;" /* Ensure preceding are visible */
+                 CMD_CP32(TLBIALLH)
+                 "dsb;" /* Ensure completion of the TLB flush */
+                 "isb;"
+                 : : : "memory");
+}
+
+/* Flush TLB of local processor for address va. */
+static inline void __flush_xen_tlb_one_local(vaddr_t va)
+{
+    asm volatile(STORE_CP32(0, TLBIMVAH) : : "r" (va) : "memory");
+}
+
+/* Flush TLB of all processors in the inner-shareable domain for address va. */
+static inline void __flush_xen_tlb_one(vaddr_t va)
+{
+    asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory");
+}
+
 #endif /* __ASM_ARM_ARM32_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index 0b41b9214b..715a9e4fef 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -61,28 +61,6 @@ static inline void invalidate_icache_local(void)
     isb();                      /* Synchronize fetched instruction stream. */
 }
 
-/* Flush all hypervisor mappings from the TLB of the local processor. */
-static inline void flush_xen_tlb_local(void)
-{
-    asm volatile("dsb;" /* Ensure preceding are visible */
-                 CMD_CP32(TLBIALLH)
-                 "dsb;" /* Ensure completion of the TLB flush */
-                 "isb;"
-                 : : : "memory");
-}
-
-/* Flush TLB of local processor for address va. */
-static inline void __flush_xen_tlb_one_local(vaddr_t va)
-{
-    asm volatile(STORE_CP32(0, TLBIMVAH) : : "r" (va) : "memory");
-}
-
-/* Flush TLB of all processors in the inner-shareable domain for address va. */
-static inline void __flush_xen_tlb_one(vaddr_t va)
-{
-    asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory");
-}
-
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index adbbd5c522..2fed34b2ec 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -45,6 +45,29 @@ static inline void flush_all_guests_tlb(void)
         : : : "memory");
 }
 
+/* Flush all hypervisor mappings from the TLB of the local processor. */
+static inline void flush_xen_tlb_local(void)
+{
+    asm volatile (
+        "dsb    sy;"                    /* Ensure visibility of PTE writes */
+        "tlbi   alle2;"                 /* Flush hypervisor TLB */
+        "dsb    sy;"                    /* Ensure completion of TLB flush */
+        "isb;"
+        : : : "memory");
+}
+
+/* Flush TLB of local processor for address va. */
+static inline void  __flush_xen_tlb_one_local(vaddr_t va)
+{
+    asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
+}
+
+/* Flush TLB of all processors in the inner-shareable domain for address va. */
+static inline void __flush_xen_tlb_one(vaddr_t va)
+{
+    asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
+}
+
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 31d04ecf76..0cba266373 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -45,29 +45,6 @@ static inline void invalidate_icache_local(void)
     isb();
 }
 
-/* Flush all hypervisor mappings from the TLB of the local processor. */
-static inline void flush_xen_tlb_local(void)
-{
-    asm volatile (
-        "dsb    sy;"                    /* Ensure visibility of PTE writes */
-        "tlbi   alle2;"                 /* Flush hypervisor TLB */
-        "dsb    sy;"                    /* Ensure completion of TLB flush */
-        "isb;"
-        : : : "memory");
-}
-
-/* Flush TLB of local processor for address va. */
-static inline void  __flush_xen_tlb_one_local(vaddr_t va)
-{
-    asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
-
-/* Flush TLB of all processors in the inner-shareable domain for address va. */
-static inline void __flush_xen_tlb_one(vaddr_t va)
-{
-    asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
-
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
index 83ff9fa8b3..ab1aae5c90 100644
--- a/xen/include/asm-arm/flushtlb.h
+++ b/xen/include/asm-arm/flushtlb.h
@@ -28,6 +28,44 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
 /* Flush specified CPUs' TLBs */
 void flush_tlb_mask(const cpumask_t *mask);
 
+/*
+ * Flush a range of VA's hypervisor mappings from the TLB of the local
+ * processor.
+ */
+static inline void flush_xen_tlb_range_va_local(vaddr_t va,
+                                                unsigned long size)
+{
+    vaddr_t end = va + size;
+
+    dsb(sy); /* Ensure preceding are visible */
+    while ( va < end )
+    {
+        __flush_xen_tlb_one_local(va);
+        va += PAGE_SIZE;
+    }
+    dsb(sy); /* Ensure completion of the TLB flush */
+    isb();
+}
+
+/*
+ * Flush a range of VA's hypervisor mappings from the TLB of all
+ * processors in the inner-shareable domain.
+ */
+static inline void flush_xen_tlb_range_va(vaddr_t va,
+                                          unsigned long size)
+{
+    vaddr_t end = va + size;
+
+    dsb(sy); /* Ensure preceding are visible */
+    while ( va < end )
+    {
+        __flush_xen_tlb_one(va);
+        va += PAGE_SIZE;
+    }
+    dsb(sy); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 #endif /* __ASM_ARM_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 195345e24a..2bcdb0f1a5 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -233,44 +233,6 @@ static inline int clean_and_invalidate_dcache_va_range
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
-/*
- * Flush a range of VA's hypervisor mappings from the TLB of the local
- * processor.
- */
-static inline void flush_xen_tlb_range_va_local(vaddr_t va,
-                                                unsigned long size)
-{
-    vaddr_t end = va + size;
-
-    dsb(sy); /* Ensure preceding are visible */
-    while ( va < end )
-    {
-        __flush_xen_tlb_one_local(va);
-        va += PAGE_SIZE;
-    }
-    dsb(sy); /* Ensure completion of the TLB flush */
-    isb();
-}
-
-/*
- * Flush a range of VA's hypervisor mappings from the TLB of all
- * processors in the inner-shareable domain.
- */
-static inline void flush_xen_tlb_range_va(vaddr_t va,
-                                          unsigned long size)
-{
-    vaddr_t end = va + size;
-
-    dsb(sy); /* Ensure preceding are visible */
-    while ( va < end )
-    {
-        __flush_xen_tlb_one(va);
-        va += PAGE_SIZE;
-    }
-    dsb(sy); /* Ensure completion of the TLB flush */
-    isb();
-}
-
 /* Flush the dcache for an entire page. */
 void flush_page_to_ram(unsigned long mfn, bool sync_icache);
 
-- 
2.11.0



             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
-/*
- * Flush a range of VA's hypervisor mappings from the TLB of the local
- * processor.
- */
-static inline void flush_xen_tlb_range_va_local(vaddr_t va,
-                                                unsigned long size)
-{
-    vaddr_t end = va + size;
-
-    dsb(sy); /* Ensure preceding are visible */
-    while ( va < end )
-    {
-        __flush_xen_tlb_one_local(va);
-        va += PAGE_SIZE;
-    }
-    dsb(sy); /* Ensure completion of the TLB flush */
-    isb();
-}
-
-/*
- * Flush a range of VA's hypervisor mappings from the TLB of all
- * processors in the inner-shareable domain.
- */
-static inline void flush_xen_tlb_range_va(vaddr_t va,
-                                          unsigned long size)
-{
-    vaddr_t end = va + size;
-
-    dsb(sy); /* Ensure preceding are visible */
-    while ( va < end )
-    {
-        __flush_xen_tlb_one(va);
-        va += PAGE_SIZE;
-    }
-    dsb(sy); /* Ensure completion of the TLB flush */
-    isb();
-}
-
 /* Flush the dcache for an entire page. */
 void flush_page_to_ram(unsigned long mfn, bool sync_icache);
 
-- 
2.11.0



* [PATCH v2 6/7] xen/arm: tlbflush: Rework TLB helpers
@ 2019-05-08 16:16   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii Anisov

All the TLB helpers that invalidate all the TLB entries use the same
pattern:
    DSB SY
    TLBI ...
    DSB SY
    ISB

This pattern follows the one recommended by the Arm Arm to ensure
visibility of updates to translation tables (see K11.5.2 in ARM DDI
0487D.b).

We have been a bit too eager in Xen and used system-wide DSBs when the
scope can be limited to the inner-shareable domain.

Furthermore, the first DSB can be restricted further to stores in the
inner-shareable domain. This is because the DSB is only there to ensure
the visibility of the updates to the translation tables.

Lastly, documentation is lacking in most of the TLB helpers.

Rather than trying to update the helpers one by one, this patch
introduces a per-arch macro to generate the TLB helpers. This will make
it easier to update the helpers and their documentation in the future.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v2:
        - Update the reference to the Arm Arm to the latest spec
        - Add Andrii's reviewed-by
---
 xen/include/asm-arm/arm32/flushtlb.h | 73 ++++++++++++++--------------------
 xen/include/asm-arm/arm64/flushtlb.h | 76 +++++++++++++++---------------------
 2 files changed, 60 insertions(+), 89 deletions(-)

diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index b629db61cb..9085e65011 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -1,59 +1,44 @@
 #ifndef __ASM_ARM_ARM32_FLUSHTLB_H__
 #define __ASM_ARM_ARM32_FLUSHTLB_H__
 
-/* Flush local TLBs, current VMID only */
-static inline void flush_guest_tlb_local(void)
-{
-    dsb(sy);
-
-    WRITE_CP32((uint32_t) 0, TLBIALL);
-
-    dsb(sy);
-    isb();
+/*
+ * Every invalidation operation uses the following pattern:
+ *
+ * DSB ISHST        // Ensure prior page-tables updates have completed
+ * TLBI...          // Invalidate the TLB
+ * DSB ISH          // Ensure the TLB invalidation has completed
+ * ISB              // See explanation below
+ *
+ * For Xen page-tables the ISB will discard any instructions fetched
+ * from the old mappings.
+ *
+ * For the Stage-2 page-tables the ISB ensures the completion of the DSB
+ * (and therefore the TLB invalidation) before continuing. So we know
+ * the TLBs cannot contain an entry for a mapping we may have removed.
+ */
+#define TLB_HELPER(name, tlbop) \
+static inline void name(void)   \
+{                               \
+    dsb(ishst);                 \
+    WRITE_CP32(0, tlbop);       \
+    dsb(ish);                   \
+    isb();                      \
 }
 
-/* Flush inner shareable TLBs, current VMID only */
-static inline void flush_guest_tlb(void)
-{
-    dsb(sy);
-
-    WRITE_CP32((uint32_t) 0, TLBIALLIS);
+/* Flush local TLBs, current VMID only */
+TLB_HELPER(flush_guest_tlb_local, TLBIALL);
 
-    dsb(sy);
-    isb();
-}
+/* Flush inner shareable TLBs, current VMID only */
+TLB_HELPER(flush_guest_tlb, TLBIALLIS);
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_all_guests_tlb_local(void)
-{
-    dsb(sy);
-
-    WRITE_CP32((uint32_t) 0, TLBIALLNSNH);
-
-    dsb(sy);
-    isb();
-}
+TLB_HELPER(flush_all_guests_tlb_local, TLBIALLNSNH);
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_all_guests_tlb(void)
-{
-    dsb(sy);
-
-    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
-
-    dsb(sy);
-    isb();
-}
+TLB_HELPER(flush_all_guests_tlb, TLBIALLNSNHIS);
 
 /* Flush all hypervisor mappings from the TLB of the local processor. */
-static inline void flush_xen_tlb_local(void)
-{
-    asm volatile("dsb;" /* Ensure preceding are visible */
-                 CMD_CP32(TLBIALLH)
-                 "dsb;" /* Ensure completion of the TLB flush */
-                 "isb;"
-                 : : : "memory");
-}
+TLB_HELPER(flush_xen_tlb_local, TLBIALLH);
 
 /* Flush TLB of local processor for address va. */
 static inline void __flush_xen_tlb_one_local(vaddr_t va)
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index 2fed34b2ec..ceec59542e 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -1,60 +1,46 @@
 #ifndef __ASM_ARM_ARM64_FLUSHTLB_H__
 #define __ASM_ARM_ARM64_FLUSHTLB_H__
 
-/* Flush local TLBs, current VMID only */
-static inline void flush_guest_tlb_local(void)
-{
-    asm volatile(
-        "dsb sy;"
-        "tlbi vmalls12e1;"
-        "dsb sy;"
-        "isb;"
-        : : : "memory");
+/*
+ * Every invalidation operation uses the following pattern:
+ *
+ * DSB ISHST        // Ensure prior page-tables updates have completed
+ * TLBI...          // Invalidate the TLB
+ * DSB ISH          // Ensure the TLB invalidation has completed
+ * ISB              // See explanation below
+ *
+ * For Xen page-tables the ISB will discard any instructions fetched
+ * from the old mappings.
+ *
+ * For the Stage-2 page-tables the ISB ensures the completion of the DSB
+ * (and therefore the TLB invalidation) before continuing. So we know
+ * the TLBs cannot contain an entry for a mapping we may have removed.
+ */
+#define TLB_HELPER(name, tlbop) \
+static inline void name(void)   \
+{                               \
+    asm volatile(               \
+        "dsb  ishst;"           \
+        "tlbi "  # tlbop  ";"   \
+        "dsb  ish;"             \
+        "isb;"                  \
+        : : : "memory");        \
 }
 
+/* Flush local TLBs, current VMID only. */
+TLB_HELPER(flush_guest_tlb_local, vmalls12e1);
+
 /* Flush innershareable TLBs, current VMID only */
-static inline void flush_guest_tlb(void)
-{
-    asm volatile(
-        "dsb sy;"
-        "tlbi vmalls12e1is;"
-        "dsb sy;"
-        "isb;"
-        : : : "memory");
-}
+TLB_HELPER(flush_guest_tlb, vmalls12e1is);
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_all_guests_tlb_local(void)
-{
-    asm volatile(
-        "dsb sy;"
-        "tlbi alle1;"
-        "dsb sy;"
-        "isb;"
-        : : : "memory");
-}
+TLB_HELPER(flush_all_guests_tlb_local, alle1);
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_all_guests_tlb(void)
-{
-    asm volatile(
-        "dsb sy;"
-        "tlbi alle1is;"
-        "dsb sy;"
-        "isb;"
-        : : : "memory");
-}
+TLB_HELPER(flush_all_guests_tlb, alle1is);
 
 /* Flush all hypervisor mappings from the TLB of the local processor. */
-static inline void flush_xen_tlb_local(void)
-{
-    asm volatile (
-        "dsb    sy;"                    /* Ensure visibility of PTE writes */
-        "tlbi   alle2;"                 /* Flush hypervisor TLB */
-        "dsb    sy;"                    /* Ensure completion of TLB flush */
-        "isb;"
-        : : : "memory");
-}
+TLB_HELPER(flush_xen_tlb_local, alle2);
 
 /* Flush TLB of local processor for address va. */
 static inline void  __flush_xen_tlb_one_local(vaddr_t va)
-- 
2.11.0



* [PATCH v2 7/7] xen/arm: mm: Flush the TLBs even if a mapping failed in create_xen_entries
@ 2019-05-08 16:16   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-08 16:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr_Tyshchenko, Julien Grall, Stefano Stabellini, Andrii Anisov

At the moment, create_xen_entries will only flush the TLBs if the full
range has successfully been updated. This may leave stale entries in
the TLBs if we fail to update some of them.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v2:
        - Add Andrii's reviewed-by
---
 xen/arch/arm/mm.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 8ee828d445..9d584e4cbf 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -984,7 +984,7 @@ static int create_xen_entries(enum xenmap_operation op,
                               unsigned long nr_mfns,
                               unsigned int flags)
 {
-    int rc;
+    int rc = 0;
     unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
     lpae_t pte, *entry;
     lpae_t *third = NULL;
@@ -1013,7 +1013,8 @@ static int create_xen_entries(enum xenmap_operation op,
                 {
                     printk("%s: trying to replace an existing mapping addr=%lx mfn=%"PRI_mfn"\n",
                            __func__, addr, mfn_x(mfn));
-                    return -EINVAL;
+                    rc = -EINVAL;
+                    goto out;
                 }
                 if ( op == RESERVE )
                     break;
@@ -1030,7 +1031,8 @@ static int create_xen_entries(enum xenmap_operation op,
                 {
                     printk("%s: trying to %s a non-existing mapping addr=%lx\n",
                            __func__, op == REMOVE ? "remove" : "modify", addr);
-                    return -EINVAL;
+                    rc = -EINVAL;
+                    goto out;
                 }
                 if ( op == REMOVE )
                     pte.bits = 0;
@@ -1043,7 +1045,8 @@ static int create_xen_entries(enum xenmap_operation op,
                     {
                         printk("%s: Incorrect combination for addr=%lx\n",
                                __func__, addr);
-                        return -EINVAL;
+                        rc = -EINVAL;
+                        goto out;
                     }
                 }
                 write_pte(entry, pte);
@@ -1052,11 +1055,14 @@ static int create_xen_entries(enum xenmap_operation op,
                 BUG();
         }
     }
+out:
+    /*
+     * Flush the TLBs even in case of failure because we may have
+     * partially modified the PT. This will prevent any unexpected
+     * behavior afterwards.
+     */
     flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
 
-    rc = 0;
-
-out:
     return rc;
 }
 
-- 
2.11.0



* Re: [PATCH v2 1/7] xen/arm: mm: Consolidate setting SCTLR_EL2.WXN in a single place
@ 2019-05-09 19:52     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 19:52 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> The logic to set SCTLR_EL2.WXN is the same for the boot CPU and
> non-boot CPU. So introduce a function to set the bit and clear TLBs.
> 
> This new function will help us to document and update the logic in a
> single place.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Fix typo in the commit message
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/mm.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 01ae2cccc0..93ad118183 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -601,6 +601,19 @@ void __init remove_early_mappings(void)
>      flush_xen_data_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
>  }
>  
> +/*
> + * After boot, Xen page-tables should not contain mapping that are both
> + * Writable and eXecutables.
> + *
> + * This should be called on each CPU to enforce the policy.
> + */
> +static void xen_pt_enforce_wnx(void)
> +{
> +    WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
> +    /* Flush everything after setting WXN bit. */
> +    flush_xen_text_tlb_local();
> +}
> +
>  extern void switch_ttbr(uint64_t ttbr);
>  
>  /* Clear a translation table and clean & invalidate the cache */
> @@ -702,10 +715,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>      clear_table(boot_second);
>      clear_table(boot_third);
>  
> -    /* From now on, no mapping may be both writable and executable. */
> -    WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
> -    /* Flush everything after setting WXN bit. */
> -    flush_xen_text_tlb_local();
> +    xen_pt_enforce_wnx();
>  
>  #ifdef CONFIG_ARM_32
>      per_cpu(xen_pgtable, 0) = cpu0_pgtable;
> @@ -777,9 +787,7 @@ int init_secondary_pagetables(int cpu)
>  /* MMU setup for secondary CPUS (which already have paging enabled) */
>  void mmu_init_secondary_cpu(void)
>  {
> -    /* From now on, no mapping may be both writable and executable. */
> -    WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
> -    flush_xen_text_tlb_local();
> +    xen_pt_enforce_wnx();
>  }
>  
>  #ifdef CONFIG_ARM_32
> -- 
> 2.11.0
> 


* Re: [PATCH v2 2/7] xen/arm: Remove flush_xen_text_tlb_local()
@ 2019-05-09 20:03     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 20:03 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> The function flush_xen_text_tlb_local() has been misused and will result
> to invalidate the instruction cache more than necessary.
> 
> For instance, there are no need to invalidate the instruction cache if
                       ^ is


> we are setting SCTLR_EL2.WXN.
> 
> There are effectively only one caller (i.e free_init_memory() would
        ^ is

> who need to invalidate the instruction cache.
  ^ would who / who would

> 
> So rather than keeping around the function flush_xen_text_tlb_local()
> around, replace it with call to flush_xen_tlb_local() and explicitely
  ^ remove


> flush the cache when necessary.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> 
> ---
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/mm.c                | 17 ++++++++++++++---
>  xen/include/asm-arm/arm32/page.h | 23 +++++++++--------------
>  xen/include/asm-arm/arm64/page.h | 21 +++++----------------
>  3 files changed, 28 insertions(+), 33 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 93ad118183..dfbe39c70a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -610,8 +610,12 @@ void __init remove_early_mappings(void)
>  static void xen_pt_enforce_wnx(void)
>  {
>      WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
> -    /* Flush everything after setting WXN bit. */
> -    flush_xen_text_tlb_local();
> +    /*
> +     * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized
> +     * before flushing the TLBs.
> +     */
> +    isb();
> +    flush_xen_data_tlb_local();
>  }
>  
>  extern void switch_ttbr(uint64_t ttbr);
> @@ -1123,7 +1127,7 @@ static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>          }
>          write_pte(xen_xenmap + i, pte);
>      }
> -    flush_xen_text_tlb_local();
> +    flush_xen_data_tlb_local();

I think it would make sense to move the remaining call to
flush_xen_data_tlb_local from set_pte_flags_on_range to free_init_memory
before the call to invalidate_icache_local. What do you think?
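Something like this (rough, uncompiled sketch only, based on the hunks above; the poison/free tail of the function is elided):

```c
/*
 * Suggested ordering in free_init_memory(): rewrite the PTEs, flush the
 * TLBs once here, and only then invalidate the I-cache.
 * set_pte_flags_on_range() would lose its trailing flush.
 */
void free_init_memory(void)
{
    unsigned long len = __init_end - __init_begin;

    set_pte_flags_on_range(__init_begin, len, mg_rw); /* no flush inside */

    flush_xen_data_tlb_local(); /* moved from set_pte_flags_on_range() */

    /*
     * From now on, init will not be used for execution anymore,
     * so nuke the instruction cache to remove entries related to init.
     */
    invalidate_icache_local();

    /* ... poison and free the init sections as before ... */
}
```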


>  }
>  
>  /* Release all __init and __initdata ranges to be reused */
> @@ -1136,6 +1140,13 @@ void free_init_memory(void)
>      uint32_t *p;
>  
>      set_pte_flags_on_range(__init_begin, len, mg_rw);
> +
> +    /*
> +     * From now on, init will not be used for execution anymore,
> +     * so nuke the instruction cache to remove entries related to init.
> +     */
> +    invalidate_icache_local();
> +
>  #ifdef CONFIG_ARM_32
>      /* udf instruction i.e (see A8.8.247 in ARM DDI 0406C.c) */
>      insn = 0xe7f000f0;
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index ea4b312c70..40a77daa9d 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -46,24 +46,19 @@ static inline void invalidate_icache(void)
>  }
>  
>  /*
> - * Flush all hypervisor mappings from the TLB and branch predictor of
> - * the local processor.
> - *
> - * This is needed after changing Xen code mappings.
> - *
> - * The caller needs to issue the necessary DSB and D-cache flushes
> - * before calling flush_xen_text_tlb.
> + * Invalidate all instruction caches on the local processor to PoU.
> + * We also need to flush the branch predictor for ARMv7 as it may be
> + * architecturally visible to the software (see B2.2.4 in ARM DDI 0406C.b).
>   */
> -static inline void flush_xen_text_tlb_local(void)
> +static inline void invalidate_icache_local(void)
>  {
>      asm volatile (
> -        "isb;"                        /* Ensure synchronization with previous changes to text */
> -        CMD_CP32(TLBIALLH)            /* Flush hypervisor TLB */
> -        CMD_CP32(ICIALLU)             /* Flush I-cache */
> -        CMD_CP32(BPIALL)              /* Flush branch predictor */
> -        "dsb;"                        /* Ensure completion of TLB+BP flush */
> -        "isb;"
> +        CMD_CP32(ICIALLU)       /* Flush I-cache. */
> +        CMD_CP32(BPIALL)        /* Flush branch predictor. */
>          : : : "memory");
> +
> +    dsb(nsh);                   /* Ensure completion of the flush I-cache */
> +    isb();                      /* Synchronize fetched instruction stream. */
>  }
>  
>  /*
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 23d778154d..6c36d0210f 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -37,23 +37,12 @@ static inline void invalidate_icache(void)
>      isb();
>  }
>  
> -/*
> - * Flush all hypervisor mappings from the TLB of the local processor.
> - *
> - * This is needed after changing Xen code mappings.
> - *
> - * The caller needs to issue the necessary DSB and D-cache flushes
> - * before calling flush_xen_text_tlb.
> - */
> -static inline void flush_xen_text_tlb_local(void)
> +/* Invalidate all instruction caches on the local processor to PoU */
> +static inline void invalidate_icache_local(void)
>  {
> -    asm volatile (
> -        "isb;"       /* Ensure synchronization with previous changes to text */
> -        "tlbi   alle2;"                 /* Flush hypervisor TLB */
> -        "ic     iallu;"                 /* Flush I-cache */
> -        "dsb    sy;"                    /* Ensure completion of TLB flush */
> -        "isb;"
> -        : : : "memory");
> +    asm volatile ("ic iallu");
> +    dsb(nsh);               /* Ensure completion of the I-cache flush */
> +    isb();
>  }
>  
>  /*
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 3/7] xen/arm: tlbflush: Clarify the TLB helpers name
@ 2019-05-09 20:05     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 20:05 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> TLB helpers in the headers tlbflush.h are currently quite confusing to
> use the name may lead to think they are dealing with hypervisors TLBs
> while they actually deal with guest TLBs.
> 
> Rename them to make it clearer that we are dealing with guest TLBs.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/p2m.c                   | 6 +++---
>  xen/arch/arm/smp.c                   | 2 +-
>  xen/arch/arm/traps.c                 | 2 +-
>  xen/include/asm-arm/arm32/flushtlb.h | 8 ++++----
>  xen/include/asm-arm/arm64/flushtlb.h | 8 ++++----
>  5 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c38bd7e16e..92c2413f20 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -151,7 +151,7 @@ void p2m_restore_state(struct vcpu *n)
>       * when running multiple vCPU of the same domain on a single pCPU.
>       */
>      if ( *last_vcpu_ran != INVALID_VCPU_ID && *last_vcpu_ran != n->vcpu_id )
> -        flush_tlb_local();
> +        flush_guest_tlb_local();
>  
>      *last_vcpu_ran = n->vcpu_id;
>  }
> @@ -196,7 +196,7 @@ static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
>          isb();
>      }
>  
> -    flush_tlb();
> +    flush_guest_tlb();
>  
>      if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
>      {
> @@ -1969,7 +1969,7 @@ static void setup_virt_paging_one(void *data)
>          WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2);
>          isb();
>  
> -        flush_tlb_all_local();
> +        flush_all_guests_tlb_local();
>      }
>  }
>  
> diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
> index 62f57f0ba2..ce1fcc8ef9 100644
> --- a/xen/arch/arm/smp.c
> +++ b/xen/arch/arm/smp.c
> @@ -8,7 +8,7 @@
>  void flush_tlb_mask(const cpumask_t *mask)
>  {
>      /* No need to IPI other processors on ARM, the processor takes care of it. */
> -    flush_tlb_all();
> +    flush_all_guests_tlb();
>  }
>  
>  void smp_send_event_check_mask(const cpumask_t *mask)
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index d8b9a8a0f0..1aba970415 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1924,7 +1924,7 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
>           * still be inaccurate.
>           */
>          if ( !is_data )
> -            flush_tlb_local();
> +            flush_guest_tlb_local();
>  
>          rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>          /*
> diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
> index bbcc82f490..22e100eccf 100644
> --- a/xen/include/asm-arm/arm32/flushtlb.h
> +++ b/xen/include/asm-arm/arm32/flushtlb.h
> @@ -2,7 +2,7 @@
>  #define __ASM_ARM_ARM32_FLUSHTLB_H__
>  
>  /* Flush local TLBs, current VMID only */
> -static inline void flush_tlb_local(void)
> +static inline void flush_guest_tlb_local(void)
>  {
>      dsb(sy);
>  
> @@ -13,7 +13,7 @@ static inline void flush_tlb_local(void)
>  }
>  
>  /* Flush inner shareable TLBs, current VMID only */
> -static inline void flush_tlb(void)
> +static inline void flush_guest_tlb(void)
>  {
>      dsb(sy);
>  
> @@ -24,7 +24,7 @@ static inline void flush_tlb(void)
>  }
>  
>  /* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_tlb_all_local(void)
> +static inline void flush_all_guests_tlb_local(void)
>  {
>      dsb(sy);
>  
> @@ -35,7 +35,7 @@ static inline void flush_tlb_all_local(void)
>  }
>  
>  /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_tlb_all(void)
> +static inline void flush_all_guests_tlb(void)
>  {
>      dsb(sy);
>  
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index 942f2d3992..adbbd5c522 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -2,7 +2,7 @@
>  #define __ASM_ARM_ARM64_FLUSHTLB_H__
>  
>  /* Flush local TLBs, current VMID only */
> -static inline void flush_tlb_local(void)
> +static inline void flush_guest_tlb_local(void)
>  {
>      asm volatile(
>          "dsb sy;"
> @@ -13,7 +13,7 @@ static inline void flush_tlb_local(void)
>  }
>  
>  /* Flush innershareable TLBs, current VMID only */
> -static inline void flush_tlb(void)
> +static inline void flush_guest_tlb(void)
>  {
>      asm volatile(
>          "dsb sy;"
> @@ -24,7 +24,7 @@ static inline void flush_tlb(void)
>  }
>  
>  /* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_tlb_all_local(void)
> +static inline void flush_all_guests_tlb_local(void)
>  {
>      asm volatile(
>          "dsb sy;"
> @@ -35,7 +35,7 @@ static inline void flush_tlb_all_local(void)
>  }
>  
>  /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_tlb_all(void)
> +static inline void flush_all_guests_tlb(void)
>  {
>      asm volatile(
>          "dsb sy;"
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-09 20:13     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 20:13 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> Now that we dropped flush_xen_text_tlb_local(), we have only one set of
> helpers acting on Xen TLBs. There naming are quite confusing because the
> TLB instructions used will act on both Data and Instruction TLBs.
> 
> Take the opportunity to rework the documentation that can be confusing
> to read as they don't match the implementation.
> 
> Lastly, switch from unsigned lont to vaddr_t as the function technically
                               ^ long

One comment about the in-code comments below.


> deal with virtual address.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> 
> ---
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/mm.c                | 18 +++++++++---------
>  xen/include/asm-arm/arm32/page.h | 15 +++++----------
>  xen/include/asm-arm/arm64/page.h | 15 +++++----------
>  xen/include/asm-arm/page.h       | 28 ++++++++++++++--------------
>  4 files changed, 33 insertions(+), 43 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index dfbe39c70a..8ee828d445 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -335,7 +335,7 @@ void set_fixmap(unsigned map, mfn_t mfn, unsigned int flags)
>      pte.pt.table = 1; /* 4k mappings always have this bit set */
>      pte.pt.xn = 1;
>      write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
> -    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
> +    flush_xen_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
>  }
>  
>  /* Remove a mapping from a fixmap entry */
> @@ -343,7 +343,7 @@ void clear_fixmap(unsigned map)
>  {
>      lpae_t pte = {0};
>      write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
> -    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
> +    flush_xen_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
>  }
>  
>  /* Create Xen's mappings of memory.
> @@ -377,7 +377,7 @@ static void __init create_mappings(lpae_t *second,
>          write_pte(p + i, pte);
>          pte.pt.base += 1 << LPAE_SHIFT;
>      }
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  
>  #ifdef CONFIG_DOMAIN_PAGE
> @@ -455,7 +455,7 @@ void *map_domain_page(mfn_t mfn)
>       * We may not have flushed this specific subpage at map time,
>       * since we only flush the 4k page not the superpage
>       */
> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
> +    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
>  
>      return (void *)va;
>  }
> @@ -598,7 +598,7 @@ void __init remove_early_mappings(void)
>      write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
>      write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START + SZ_2M),
>                pte);
> -    flush_xen_data_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
> +    flush_xen_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
>  }
>  
>  /*
> @@ -615,7 +615,7 @@ static void xen_pt_enforce_wnx(void)
>       * before flushing the TLBs.
>       */
>      isb();
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  
>  extern void switch_ttbr(uint64_t ttbr);
> @@ -879,7 +879,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>          vaddr += FIRST_SIZE;
>      }
>  
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  #endif
>  
> @@ -1052,7 +1052,7 @@ static int create_xen_entries(enum xenmap_operation op,
>                  BUG();
>          }
>      }
> -    flush_xen_data_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
> +    flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
>  
>      rc = 0;
>  
> @@ -1127,7 +1127,7 @@ static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>          }
>          write_pte(xen_xenmap + i, pte);
>      }
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  
>  /* Release all __init and __initdata ranges to be reused */
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index 40a77daa9d..0b41b9214b 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
>      isb();                      /* Synchronize fetched instruction stream. */
>  }
>  
> -/*
> - * Flush all hypervisor mappings from the data TLB of the local
> - * processor. This is not sufficient when changing code mappings or
> - * for self modifying code.
> - */
> -static inline void flush_xen_data_tlb_local(void)
> +/* Flush all hypervisor mappings from the TLB of the local processor. */

I realize that the statement "This is not sufficient when changing code
mappings or for self modifying code" is not quite accurate, but I would
prefer not to remove it completely. It would be good to retain a warning
somewhere about the IC being needed when changing Xen's own mappings. Maybe
on top of invalidate_icache_local? 
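For instance (arm32 side, uncompiled sketch only; the helper body is the one from this patch, with the retained warning folded into its comment):

```c
/*
 * Invalidate all instruction caches on the local processor to PoU.
 * We also need to flush the branch predictor for ARMv7 as it may be
 * architecturally visible to the software (see B2.2.4 in ARM DDI 0406C.b).
 *
 * Note: a TLB flush alone is not sufficient when changing code mappings
 * or for self-modifying code; callers must also invalidate the I-cache.
 */
static inline void invalidate_icache_local(void)
{
    asm volatile (
        CMD_CP32(ICIALLU)       /* Flush I-cache. */
        CMD_CP32(BPIALL)        /* Flush branch predictor. */
        : : : "memory");

    dsb(nsh);                   /* Ensure completion of the I-cache flush. */
    isb();                      /* Synchronize fetched instruction stream. */
}
```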


> +static inline void flush_xen_tlb_local(void)
>  {
>      asm volatile("dsb;" /* Ensure preceding are visible */
>                   CMD_CP32(TLBIALLH)
> @@ -76,14 +72,13 @@ static inline void flush_xen_data_tlb_local(void)
>  }
>  
>  /* Flush TLB of local processor for address va. */
> -static inline void __flush_xen_data_tlb_one_local(vaddr_t va)
> +static inline void __flush_xen_tlb_one_local(vaddr_t va)
>  {
>      asm volatile(STORE_CP32(0, TLBIMVAH) : : "r" (va) : "memory");
>  }
>  
> -/* Flush TLB of all processors in the inner-shareable domain for
> - * address va. */
> -static inline void __flush_xen_data_tlb_one(vaddr_t va)
> +/* Flush TLB of all processors in the inner-shareable domain for address va. */
> +static inline void __flush_xen_tlb_one(vaddr_t va)
>  {
>      asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory");
>  }
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 6c36d0210f..31d04ecf76 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -45,12 +45,8 @@ static inline void invalidate_icache_local(void)
>      isb();
>  }
>  
> -/*
> - * Flush all hypervisor mappings from the data TLB of the local
> - * processor. This is not sufficient when changing code mappings or
> - * for self modifying code.
> - */
> -static inline void flush_xen_data_tlb_local(void)
> +/* Flush all hypervisor mappings from the TLB of the local processor. */
> +static inline void flush_xen_tlb_local(void)
>  {
>      asm volatile (
>          "dsb    sy;"                    /* Ensure visibility of PTE writes */
> @@ -61,14 +57,13 @@ static inline void flush_xen_data_tlb_local(void)
>  }
>  
>  /* Flush TLB of local processor for address va. */
> -static inline void  __flush_xen_data_tlb_one_local(vaddr_t va)
> +static inline void  __flush_xen_tlb_one_local(vaddr_t va)
>  {
>      asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
>  }
>  
> -/* Flush TLB of all processors in the inner-shareable domain for
> - * address va. */
> -static inline void __flush_xen_data_tlb_one(vaddr_t va)
> +/* Flush TLB of all processors in the inner-shareable domain for address va. */
> +static inline void __flush_xen_tlb_one(vaddr_t va)
>  {
>      asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
>  }
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 1a1713ce02..195345e24a 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -234,18 +234,18 @@ static inline int clean_and_invalidate_dcache_va_range
>  } while (0)
>  
>  /*
> - * Flush a range of VA's hypervisor mappings from the data TLB of the
> - * local processor. This is not sufficient when changing code mappings
> - * or for self modifying code.
> + * Flush a range of VA's hypervisor mappings from the TLB of the local
> + * processor.
>   */
> -static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
> -                                                     unsigned long size)
> +static inline void flush_xen_tlb_range_va_local(vaddr_t va,
> +                                                unsigned long size)
>  {
> -    unsigned long end = va + size;
> +    vaddr_t end = va + size;
> +
>      dsb(sy); /* Ensure preceding are visible */
>      while ( va < end )
>      {
> -        __flush_xen_data_tlb_one_local(va);
> +        __flush_xen_tlb_one_local(va);
>          va += PAGE_SIZE;
>      }
>      dsb(sy); /* Ensure completion of the TLB flush */
> @@ -253,18 +253,18 @@ static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
>  }
>  
>  /*
> - * Flush a range of VA's hypervisor mappings from the data TLB of all
> - * processors in the inner-shareable domain. This is not sufficient
> - * when changing code mappings or for self modifying code.
> + * Flush a range of VA's hypervisor mappings from the TLB of all
> + * processors in the inner-shareable domain.
>   */
> -static inline void flush_xen_data_tlb_range_va(unsigned long va,
> -                                               unsigned long size)
> +static inline void flush_xen_tlb_range_va(vaddr_t va,
> +                                          unsigned long size)
>  {
> -    unsigned long end = va + size;
> +    vaddr_t end = va + size;
> +
>      dsb(sy); /* Ensure preceding are visible */
>      while ( va < end )
>      {
> -        __flush_xen_data_tlb_one(va);
> +        __flush_xen_tlb_one(va);
>          va += PAGE_SIZE;
>      }
>      dsb(sy); /* Ensure completion of the TLB flush */
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Xen-devel] [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-09 20:13     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 20:13 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> Now that we dropped flush_xen_text_tlb_local(), we have only one set of
> helpers acting on Xen TLBs. There naming are quite confusing because the
> TLB instructions used will act on both Data and Instruction TLBs.
> 
> Take the opportunity to rework the documentation that can be confusing
> to read as they don't match the implementation.
> 
> Lastly, switch from unsigned lont to vaddr_t as the function technically
                               ^ long

One comment about the in-code comments below.


> deal with virtual address.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> 
> ---
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/mm.c                | 18 +++++++++---------
>  xen/include/asm-arm/arm32/page.h | 15 +++++----------
>  xen/include/asm-arm/arm64/page.h | 15 +++++----------
>  xen/include/asm-arm/page.h       | 28 ++++++++++++++--------------
>  4 files changed, 33 insertions(+), 43 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index dfbe39c70a..8ee828d445 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -335,7 +335,7 @@ void set_fixmap(unsigned map, mfn_t mfn, unsigned int flags)
>      pte.pt.table = 1; /* 4k mappings always have this bit set */
>      pte.pt.xn = 1;
>      write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
> -    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
> +    flush_xen_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
>  }
>  
>  /* Remove a mapping from a fixmap entry */
> @@ -343,7 +343,7 @@ void clear_fixmap(unsigned map)
>  {
>      lpae_t pte = {0};
>      write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
> -    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
> +    flush_xen_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
>  }
>  
>  /* Create Xen's mappings of memory.
> @@ -377,7 +377,7 @@ static void __init create_mappings(lpae_t *second,
>          write_pte(p + i, pte);
>          pte.pt.base += 1 << LPAE_SHIFT;
>      }
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  
>  #ifdef CONFIG_DOMAIN_PAGE
> @@ -455,7 +455,7 @@ void *map_domain_page(mfn_t mfn)
>       * We may not have flushed this specific subpage at map time,
>       * since we only flush the 4k page not the superpage
>       */
> -    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);
> +    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
>  
>      return (void *)va;
>  }
> @@ -598,7 +598,7 @@ void __init remove_early_mappings(void)
>      write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
>      write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START + SZ_2M),
>                pte);
> -    flush_xen_data_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
> +    flush_xen_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
>  }
>  
>  /*
> @@ -615,7 +615,7 @@ static void xen_pt_enforce_wnx(void)
>       * before flushing the TLBs.
>       */
>      isb();
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  
>  extern void switch_ttbr(uint64_t ttbr);
> @@ -879,7 +879,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>          vaddr += FIRST_SIZE;
>      }
>  
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  #endif
>  
> @@ -1052,7 +1052,7 @@ static int create_xen_entries(enum xenmap_operation op,
>                  BUG();
>          }
>      }
> -    flush_xen_data_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
> +    flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
>  
>      rc = 0;
>  
> @@ -1127,7 +1127,7 @@ static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>          }
>          write_pte(xen_xenmap + i, pte);
>      }
> -    flush_xen_data_tlb_local();
> +    flush_xen_tlb_local();
>  }
>  
>  /* Release all __init and __initdata ranges to be reused */
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index 40a77daa9d..0b41b9214b 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
>      isb();                      /* Synchronize fetched instruction stream. */
>  }
>  
> -/*
> - * Flush all hypervisor mappings from the data TLB of the local
> - * processor. This is not sufficient when changing code mappings or
> - * for self modifying code.
> - */
> -static inline void flush_xen_data_tlb_local(void)
> +/* Flush all hypervisor mappings from the TLB of the local processor. */

I realize that the statement "This is not sufficient when changing code
mappings or for self modifying code" is not quite accurate, but I would
prefer not to remove it completely. It would be good to retain a warning
somewhere about the IC being needed when changing Xen's own mappings.
Maybe on top of invalidate_icache_local?


> +static inline void flush_xen_tlb_local(void)
>  {
>      asm volatile("dsb;" /* Ensure preceding are visible */
>                   CMD_CP32(TLBIALLH)
> @@ -76,14 +72,13 @@ static inline void flush_xen_data_tlb_local(void)
>  }
>  
>  /* Flush TLB of local processor for address va. */
> -static inline void __flush_xen_data_tlb_one_local(vaddr_t va)
> +static inline void __flush_xen_tlb_one_local(vaddr_t va)
>  {
>      asm volatile(STORE_CP32(0, TLBIMVAH) : : "r" (va) : "memory");
>  }
>  
> -/* Flush TLB of all processors in the inner-shareable domain for
> - * address va. */
> -static inline void __flush_xen_data_tlb_one(vaddr_t va)
> +/* Flush TLB of all processors in the inner-shareable domain for address va. */
> +static inline void __flush_xen_tlb_one(vaddr_t va)
>  {
>      asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory");
>  }
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 6c36d0210f..31d04ecf76 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -45,12 +45,8 @@ static inline void invalidate_icache_local(void)
>      isb();
>  }
>  
> -/*
> - * Flush all hypervisor mappings from the data TLB of the local
> - * processor. This is not sufficient when changing code mappings or
> - * for self modifying code.
> - */
> -static inline void flush_xen_data_tlb_local(void)
> +/* Flush all hypervisor mappings from the TLB of the local processor. */
> +static inline void flush_xen_tlb_local(void)
>  {
>      asm volatile (
>          "dsb    sy;"                    /* Ensure visibility of PTE writes */
> @@ -61,14 +57,13 @@ static inline void flush_xen_data_tlb_local(void)
>  }
>  
>  /* Flush TLB of local processor for address va. */
> -static inline void  __flush_xen_data_tlb_one_local(vaddr_t va)
> +static inline void  __flush_xen_tlb_one_local(vaddr_t va)
>  {
>      asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
>  }
>  
> -/* Flush TLB of all processors in the inner-shareable domain for
> - * address va. */
> -static inline void __flush_xen_data_tlb_one(vaddr_t va)
> +/* Flush TLB of all processors in the inner-shareable domain for address va. */
> +static inline void __flush_xen_tlb_one(vaddr_t va)
>  {
>      asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
>  }
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 1a1713ce02..195345e24a 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -234,18 +234,18 @@ static inline int clean_and_invalidate_dcache_va_range
>  } while (0)
>  
>  /*
> - * Flush a range of VA's hypervisor mappings from the data TLB of the
> - * local processor. This is not sufficient when changing code mappings
> - * or for self modifying code.
> + * Flush a range of VA's hypervisor mappings from the TLB of the local
> + * processor.
>   */
> -static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
> -                                                     unsigned long size)
> +static inline void flush_xen_tlb_range_va_local(vaddr_t va,
> +                                                unsigned long size)
>  {
> -    unsigned long end = va + size;
> +    vaddr_t end = va + size;
> +
>      dsb(sy); /* Ensure preceding are visible */
>      while ( va < end )
>      {
> -        __flush_xen_data_tlb_one_local(va);
> +        __flush_xen_tlb_one_local(va);
>          va += PAGE_SIZE;
>      }
>      dsb(sy); /* Ensure completion of the TLB flush */
> @@ -253,18 +253,18 @@ static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
>  }
>  
>  /*
> - * Flush a range of VA's hypervisor mappings from the data TLB of all
> - * processors in the inner-shareable domain. This is not sufficient
> - * when changing code mappings or for self modifying code.
> + * Flush a range of VA's hypervisor mappings from the TLB of all
> + * processors in the inner-shareable domain.
>   */
> -static inline void flush_xen_data_tlb_range_va(unsigned long va,
> -                                               unsigned long size)
> +static inline void flush_xen_tlb_range_va(vaddr_t va,
> +                                          unsigned long size)
>  {
> -    unsigned long end = va + size;
> +    vaddr_t end = va + size;
> +
>      dsb(sy); /* Ensure preceding are visible */
>      while ( va < end )
>      {
> -        __flush_xen_data_tlb_one(va);
> +        __flush_xen_tlb_one(va);
>          va += PAGE_SIZE;
>      }
>      dsb(sy); /* Ensure completion of the TLB flush */
> -- 
> 2.11.0
> 


* Re: [PATCH v2 5/7] xen/arm: Gather all TLB flush helpers in tlbflush.h
@ 2019-05-09 20:17     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 20:17 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> At the moment, TLB helpers are scattered in 2 headers: page.h (for
> Xen TLB helpers) and tlbflush.h (for guest TLB helpers).
> 
> This patch is gathering all of them in tlbflush. This will help to
> uniformize and update the logic of the helpers in follow-up patches.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/include/asm-arm/arm32/flushtlb.h | 22 +++++++++++++++++++++
>  xen/include/asm-arm/arm32/page.h     | 22 ---------------------
>  xen/include/asm-arm/arm64/flushtlb.h | 23 ++++++++++++++++++++++
>  xen/include/asm-arm/arm64/page.h     | 23 ----------------------
>  xen/include/asm-arm/flushtlb.h       | 38 ++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/page.h           | 38 ------------------------------------
>  6 files changed, 83 insertions(+), 83 deletions(-)
> 
> diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
> index 22e100eccf..b629db61cb 100644
> --- a/xen/include/asm-arm/arm32/flushtlb.h
> +++ b/xen/include/asm-arm/arm32/flushtlb.h
> @@ -45,6 +45,28 @@ static inline void flush_all_guests_tlb(void)
>      isb();
>  }
>  
> +/* Flush all hypervisor mappings from the TLB of the local processor. */
> +static inline void flush_xen_tlb_local(void)
> +{
> +    asm volatile("dsb;" /* Ensure preceding are visible */
> +                 CMD_CP32(TLBIALLH)
> +                 "dsb;" /* Ensure completion of the TLB flush */
> +                 "isb;"
> +                 : : : "memory");
> +}
> +
> +/* Flush TLB of local processor for address va. */
> +static inline void __flush_xen_tlb_one_local(vaddr_t va)
> +{
> +    asm volatile(STORE_CP32(0, TLBIMVAH) : : "r" (va) : "memory");
> +}
> +
> +/* Flush TLB of all processors in the inner-shareable domain for address va. */
> +static inline void __flush_xen_tlb_one(vaddr_t va)
> +{
> +    asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory");
> +}
> +
>  #endif /* __ASM_ARM_ARM32_FLUSHTLB_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index 0b41b9214b..715a9e4fef 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -61,28 +61,6 @@ static inline void invalidate_icache_local(void)
>      isb();                      /* Synchronize fetched instruction stream. */
>  }
>  
> -/* Flush all hypervisor mappings from the TLB of the local processor. */
> -static inline void flush_xen_tlb_local(void)
> -{
> -    asm volatile("dsb;" /* Ensure preceding are visible */
> -                 CMD_CP32(TLBIALLH)
> -                 "dsb;" /* Ensure completion of the TLB flush */
> -                 "isb;"
> -                 : : : "memory");
> -}
> -
> -/* Flush TLB of local processor for address va. */
> -static inline void __flush_xen_tlb_one_local(vaddr_t va)
> -{
> -    asm volatile(STORE_CP32(0, TLBIMVAH) : : "r" (va) : "memory");
> -}
> -
> -/* Flush TLB of all processors in the inner-shareable domain for address va. */
> -static inline void __flush_xen_tlb_one(vaddr_t va)
> -{
> -    asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory");
> -}
> -
>  /* Ask the MMU to translate a VA for us */
>  static inline uint64_t __va_to_par(vaddr_t va)
>  {
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index adbbd5c522..2fed34b2ec 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -45,6 +45,29 @@ static inline void flush_all_guests_tlb(void)
>          : : : "memory");
>  }
>  
> +/* Flush all hypervisor mappings from the TLB of the local processor. */
> +static inline void flush_xen_tlb_local(void)
> +{
> +    asm volatile (
> +        "dsb    sy;"                    /* Ensure visibility of PTE writes */
> +        "tlbi   alle2;"                 /* Flush hypervisor TLB */
> +        "dsb    sy;"                    /* Ensure completion of TLB flush */
> +        "isb;"
> +        : : : "memory");
> +}
> +
> +/* Flush TLB of local processor for address va. */
> +static inline void  __flush_xen_tlb_one_local(vaddr_t va)
> +{
> +    asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
> +}
> +
> +/* Flush TLB of all processors in the inner-shareable domain for address va. */
> +static inline void __flush_xen_tlb_one(vaddr_t va)
> +{
> +    asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
> +}
> +
>  #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 31d04ecf76..0cba266373 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -45,29 +45,6 @@ static inline void invalidate_icache_local(void)
>      isb();
>  }
>  
> -/* Flush all hypervisor mappings from the TLB of the local processor. */
> -static inline void flush_xen_tlb_local(void)
> -{
> -    asm volatile (
> -        "dsb    sy;"                    /* Ensure visibility of PTE writes */
> -        "tlbi   alle2;"                 /* Flush hypervisor TLB */
> -        "dsb    sy;"                    /* Ensure completion of TLB flush */
> -        "isb;"
> -        : : : "memory");
> -}
> -
> -/* Flush TLB of local processor for address va. */
> -static inline void  __flush_xen_tlb_one_local(vaddr_t va)
> -{
> -    asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
> -}
> -
> -/* Flush TLB of all processors in the inner-shareable domain for address va. */
> -static inline void __flush_xen_tlb_one(vaddr_t va)
> -{
> -    asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
> -}
> -
>  /* Ask the MMU to translate a VA for us */
>  static inline uint64_t __va_to_par(vaddr_t va)
>  {
> diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
> index 83ff9fa8b3..ab1aae5c90 100644
> --- a/xen/include/asm-arm/flushtlb.h
> +++ b/xen/include/asm-arm/flushtlb.h
> @@ -28,6 +28,44 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
>  /* Flush specified CPUs' TLBs */
>  void flush_tlb_mask(const cpumask_t *mask);
>  
> +/*
> + * Flush a range of VA's hypervisor mappings from the TLB of the local
> + * processor.
> + */
> +static inline void flush_xen_tlb_range_va_local(vaddr_t va,
> +                                                unsigned long size)
> +{
> +    vaddr_t end = va + size;
> +
> +    dsb(sy); /* Ensure preceding are visible */
> +    while ( va < end )
> +    {
> +        __flush_xen_tlb_one_local(va);
> +        va += PAGE_SIZE;
> +    }
> +    dsb(sy); /* Ensure completion of the TLB flush */
> +    isb();
> +}
> +
> +/*
> + * Flush a range of VA's hypervisor mappings from the TLB of all
> + * processors in the inner-shareable domain.
> + */
> +static inline void flush_xen_tlb_range_va(vaddr_t va,
> +                                          unsigned long size)
> +{
> +    vaddr_t end = va + size;
> +
> +    dsb(sy); /* Ensure preceding are visible */
> +    while ( va < end )
> +    {
> +        __flush_xen_tlb_one(va);
> +        va += PAGE_SIZE;
> +    }
> +    dsb(sy); /* Ensure completion of the TLB flush */
> +    isb();
> +}
> +
>  #endif /* __ASM_ARM_FLUSHTLB_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 195345e24a..2bcdb0f1a5 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -233,44 +233,6 @@ static inline int clean_and_invalidate_dcache_va_range
>              : : "r" (_p), "m" (*_p));                                   \
>  } while (0)
>  
> -/*
> - * Flush a range of VA's hypervisor mappings from the TLB of the local
> - * processor.
> - */
> -static inline void flush_xen_tlb_range_va_local(vaddr_t va,
> -                                                unsigned long size)
> -{
> -    vaddr_t end = va + size;
> -
> -    dsb(sy); /* Ensure preceding are visible */
> -    while ( va < end )
> -    {
> -        __flush_xen_tlb_one_local(va);
> -        va += PAGE_SIZE;
> -    }
> -    dsb(sy); /* Ensure completion of the TLB flush */
> -    isb();
> -}
> -
> -/*
> - * Flush a range of VA's hypervisor mappings from the TLB of all
> - * processors in the inner-shareable domain.
> - */
> -static inline void flush_xen_tlb_range_va(vaddr_t va,
> -                                          unsigned long size)
> -{
> -    vaddr_t end = va + size;
> -
> -    dsb(sy); /* Ensure preceding are visible */
> -    while ( va < end )
> -    {
> -        __flush_xen_tlb_one(va);
> -        va += PAGE_SIZE;
> -    }
> -    dsb(sy); /* Ensure completion of the TLB flush */
> -    isb();
> -}
> -
>  /* Flush the dcache for an entire page. */
>  void flush_page_to_ram(unsigned long mfn, bool sync_icache);
>  
> -- 
> 2.11.0
> 

* Re: [PATCH v2 2/7] xen/arm: Remove flush_xen_text_tlb_local()
@ 2019-05-09 20:17       ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-09 20:17 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko

Hi,

On 09/05/2019 21:03, Stefano Stabellini wrote:
> On Wed, 8 May 2019, Julien Grall wrote:
>> The function flush_xen_text_tlb_local() has been misused and will result
>> to invalidate the instruction cache more than necessary.
>>
>> For instance, there are no need to invalidate the instruction cache if
>                         ^ is
> 
> 
>> we are setting SCTLR_EL2.WXN.
>>
>> There are effectively only one caller (i.e free_init_memory() would
>          ^ is
> 
>> who need to invalidate the instruction cache.
>    ^ would who / who would
> 
>>
>> So rather than keeping around the function flush_xen_text_tlb_local()
>> around, replace it with call to flush_xen_tlb_local() and explicitely
>    ^ remove

I will fix the typos in the next version.

> 
> 
>> flush the cache when necessary.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
>>
>> ---
>>      Changes in v2:
>>          - Add Andrii's reviewed-by
>> ---
>>   xen/arch/arm/mm.c                | 17 ++++++++++++++---
>>   xen/include/asm-arm/arm32/page.h | 23 +++++++++--------------
>>   xen/include/asm-arm/arm64/page.h | 21 +++++----------------
>>   3 files changed, 28 insertions(+), 33 deletions(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 93ad118183..dfbe39c70a 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -610,8 +610,12 @@ void __init remove_early_mappings(void)
>>   static void xen_pt_enforce_wnx(void)
>>   {
>>       WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_WXN, SCTLR_EL2);
>> -    /* Flush everything after setting WXN bit. */
>> -    flush_xen_text_tlb_local();
>> +    /*
>> +     * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized
>> +     * before flushing the TLBs.
>> +     */
>> +    isb();
>> +    flush_xen_data_tlb_local();
>>   }
>>   
>>   extern void switch_ttbr(uint64_t ttbr);
>> @@ -1123,7 +1127,7 @@ static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>>           }
>>           write_pte(xen_xenmap + i, pte);
>>       }
>> -    flush_xen_text_tlb_local();
>> +    flush_xen_data_tlb_local();
> 
> I think it would make sense to move the remaining call to
> flush_xen_data_tlb_local from set_pte_flags_on_range to free_init_memory
> before the call to invalidate_icache_local. What do you think?

We still need the TLB flush for the two callers. The first one removes 
all TLB entries carrying the previous permissions; the second ensures 
the mappings are gone from the TLBs once removed from the page tables.

Today, it is not possible to re-use the virtual address of the init 
section, so it is arguably not necessary. However, I don't want to take 
the chance of introducing potential coherency issues if the TLB entries 
were still present when re-using the virtual address.

Cheers,

-- 
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-09 20:32       ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-09 20:32 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko

Hi,

On 09/05/2019 21:13, Stefano Stabellini wrote:
> On Wed, 8 May 2019, Julien Grall wrote:
>>   /* Release all __init and __initdata ranges to be reused */
>> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
>> index 40a77daa9d..0b41b9214b 100644
>> --- a/xen/include/asm-arm/arm32/page.h
>> +++ b/xen/include/asm-arm/arm32/page.h
>> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
>>       isb();                      /* Synchronize fetched instruction stream. */
>>   }
>>   
>> -/*
>> - * Flush all hypervisor mappings from the data TLB of the local
>> - * processor. This is not sufficient when changing code mappings or
>> - * for self modifying code.
>> - */
>> -static inline void flush_xen_data_tlb_local(void)
>> +/* Flush all hypervisor mappings from the TLB of the local processor. */
> 
> I realize that the statement "This is not sufficient when changing code
> mappings or for self modifying code" is not quite accurate, but I would
> prefer not to remove it completely. It would be good to retain a warning
> somewhere about IC been needed when changing Xen's own mappings. Maybe
> on top of invalidate_icache_local?

Can you please expand on the circumstances in which you would need to 
invalidate the instruction cache when changing Xen's own mappings?

Cheers,

-- 
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 6/7] xen/arm: tlbflush: Rework TLB helpers
@ 2019-05-09 20:32     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 20:32 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> All the TLB helpers that invalidate all the TLB entries use the same
> pattern:
>     DSB SY
>     TLBI ...
>     DSB SY
>     ISB
> 
> This pattern follows the one recommended by the Arm Arm to ensure
> visibility of updates to translation tables (see K11.5.2 in ARM DDI
> 0487D.b).
> 
> We have been a bit too eager in Xen and used system-wide DSBs when this
> can be limited to the inner-shareable domain.
> 
> Furthermore, the first DSB can be restricted further to only stores in
> the inner-shareable domain. This is because the DSB is here to ensure
> visibility of the updates to translation table walks.
> 
> Lastly, there is a lack of documentation in most of the TLB helpers.
> 
> Rather than trying to update the helpers one by one, this patch
> introduces a per-arch macro to generate the TLB helpers. This will make
> it easier to update the TLB helpers and their documentation in the future.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
>
> ---
>     Changes in v2:
>         - Update the reference to the Arm Arm to the latest spec
>         - Add Andrii's reviewed-by
> ---
>  xen/include/asm-arm/arm32/flushtlb.h | 73 ++++++++++++++--------------------
>  xen/include/asm-arm/arm64/flushtlb.h | 76 +++++++++++++++---------------------
>  2 files changed, 60 insertions(+), 89 deletions(-)
> 
> diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
> index b629db61cb..9085e65011 100644
> --- a/xen/include/asm-arm/arm32/flushtlb.h
> +++ b/xen/include/asm-arm/arm32/flushtlb.h
> @@ -1,59 +1,44 @@
>  #ifndef __ASM_ARM_ARM32_FLUSHTLB_H__
>  #define __ASM_ARM_ARM32_FLUSHTLB_H__
>  
> -/* Flush local TLBs, current VMID only */
> -static inline void flush_guest_tlb_local(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALL);
> -
> -    dsb(sy);
> -    isb();
> +/*
> + * Every invalidation operation uses the following pattern:
> + *
> + * DSB ISHST        // Ensure prior page-tables updates have completed
> + * TLBI...          // Invalidate the TLB
> + * DSB ISH          // Ensure the TLB invalidation has completed
> + * ISB              // See explanation below
> + *
> + * For Xen page-tables the ISB will discard any instructions fetched
> + * from the old mappings.
> + *
> + * For the Stage-2 page-tables the ISB ensures the completion of the DSB
> + * (and therefore the TLB invalidation) before continuing. So we know
> + * the TLBs cannot contain an entry for a mapping we may have removed.
> + */
> +#define TLB_HELPER(name, tlbop) \
> +static inline void name(void)   \
> +{                               \
> +    dsb(ishst);                 \
> +    WRITE_CP32(0, tlbop);       \
> +    dsb(ish);                   \
> +    isb();                      \
>  }
>  

Hi Julien,

I agree with what you are trying to achieve with this patch and I like
the idea of reducing code duplication. As I look at the code, I was
hoping to find a way to avoid introducing macros and use static inline
functions instead, but it doesn't look like it is possible for arm32.
There is no way to pass TLBIALLIS as a parameter to a function for
instance. It might be possible for arm64 as they are just strings, but at
that point it might be better to keep the code similar between arm32 and
arm64 having both of them as macros, instead of having one as macro and
the other as static inline.

Do you agree with me? Can you see any other ways to turn TLB_HELPER into
a static inline?



> -/* Flush inner shareable TLBs, current VMID only */
> -static inline void flush_guest_tlb(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALLIS);
> +/* Flush local TLBs, current VMID only */
> +TLB_HELPER(flush_guest_tlb_local, TLBIALL);
>  
> -    dsb(sy);
> -    isb();
> -}
> +/* Flush inner shareable TLBs, current VMID only */
> +TLB_HELPER(flush_guest_tlb, TLBIALLIS);
>  
>  /* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb_local(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALLNSNH);
> -
> -    dsb(sy);
> -    isb();
> -}
> +TLB_HELPER(flush_all_guests_tlb_local, TLBIALLNSNH);
>  
>  /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
> -
> -    dsb(sy);
> -    isb();
> -}
> +TLB_HELPER(flush_all_guests_tlb, TLBIALLNSNHIS);
>  
>  /* Flush all hypervisor mappings from the TLB of the local processor. */
> -static inline void flush_xen_tlb_local(void)
> -{
> -    asm volatile("dsb;" /* Ensure preceding are visible */
> -                 CMD_CP32(TLBIALLH)
> -                 "dsb;" /* Ensure completion of the TLB flush */
> -                 "isb;"
> -                 : : : "memory");
> -}
> +TLB_HELPER(flush_xen_tlb_local, TLBIALLH);
>  
>  /* Flush TLB of local processor for address va. */
>  static inline void __flush_xen_tlb_one_local(vaddr_t va)
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index 2fed34b2ec..ceec59542e 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -1,60 +1,46 @@
>  #ifndef __ASM_ARM_ARM64_FLUSHTLB_H__
>  #define __ASM_ARM_ARM64_FLUSHTLB_H__
>  
> -/* Flush local TLBs, current VMID only */
> -static inline void flush_guest_tlb_local(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi vmalls12e1;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> +/*
> + * Every invalidation operation uses the following pattern:
> + *
> + * DSB ISHST        // Ensure prior page-tables updates have completed
> + * TLBI...          // Invalidate the TLB
> + * DSB ISH          // Ensure the TLB invalidation has completed
> + * ISB              // See explanation below
> + *
> + * For Xen page-tables the ISB will discard any instructions fetched
> + * from the old mappings.
> + *
> + * For the Stage-2 page-tables the ISB ensures the completion of the DSB
> + * (and therefore the TLB invalidation) before continuing. So we know
> + * the TLBs cannot contain an entry for a mapping we may have removed.
> + */
> +#define TLB_HELPER(name, tlbop) \
> +static inline void name(void)   \
> +{                               \
> +    asm volatile(               \
> +        "dsb  ishst;"           \
> +        "tlbi "  # tlbop  ";"   \
> +        "dsb  ish;"             \
> +        "isb;"                  \
> +        : : : "memory");        \
>  }
>  
> +/* Flush local TLBs, current VMID only. */
> +TLB_HELPER(flush_guest_tlb_local, vmalls12e1);
> +
>  /* Flush innershareable TLBs, current VMID only */
> -static inline void flush_guest_tlb(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi vmalls12e1is;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_guest_tlb, vmalls12e1is);
>  
>  /* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb_local(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi alle1;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_all_guests_tlb_local, alle1);
>  
>  /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi alle1is;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_all_guests_tlb, alle1is);
>  
>  /* Flush all hypervisor mappings from the TLB of the local processor. */
> -static inline void flush_xen_tlb_local(void)
> -{
> -    asm volatile (
> -        "dsb    sy;"                    /* Ensure visibility of PTE writes */
> -        "tlbi   alle2;"                 /* Flush hypervisor TLB */
> -        "dsb    sy;"                    /* Ensure completion of TLB flush */
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_xen_tlb_local, alle2);
>  
>  /* Flush TLB of local processor for address va. */
>  static inline void  __flush_xen_tlb_one_local(vaddr_t va)
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 7/7] xen/arm: mm: Flush the TLBs even if a mapping failed in create_xen_entries
@ 2019-05-09 20:40     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 20:40 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> At the moment, create_xen_entries will only flush the TLBs if the full
> range has been successfully updated. This may leave unwanted
> entries in the TLBs if we fail to update some of them.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/mm.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 8ee828d445..9d584e4cbf 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -984,7 +984,7 @@ static int create_xen_entries(enum xenmap_operation op,
>                                unsigned long nr_mfns,
>                                unsigned int flags)
>  {
> -    int rc;
> +    int rc = 0;
>      unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
>      lpae_t pte, *entry;
>      lpae_t *third = NULL;
> @@ -1013,7 +1013,8 @@ static int create_xen_entries(enum xenmap_operation op,
>                  {
>                      printk("%s: trying to replace an existing mapping addr=%lx mfn=%"PRI_mfn"\n",
>                             __func__, addr, mfn_x(mfn));
> -                    return -EINVAL;
> +                    rc = -EINVAL;
> +                    goto out;
>                  }
>                  if ( op == RESERVE )
>                      break;
> @@ -1030,7 +1031,8 @@ static int create_xen_entries(enum xenmap_operation op,
>                  {
>                      printk("%s: trying to %s a non-existing mapping addr=%lx\n",
>                             __func__, op == REMOVE ? "remove" : "modify", addr);
> -                    return -EINVAL;
> +                    rc = -EINVAL;
> +                    goto out;
>                  }
>                  if ( op == REMOVE )
>                      pte.bits = 0;
> @@ -1043,7 +1045,8 @@ static int create_xen_entries(enum xenmap_operation op,
>                      {
>                          printk("%s: Incorrect combination for addr=%lx\n",
>                                 __func__, addr);
> -                        return -EINVAL;
> +                        rc = -EINVAL;
> +                        goto out;
>                      }
>                  }
>                  write_pte(entry, pte);
> @@ -1052,11 +1055,14 @@ static int create_xen_entries(enum xenmap_operation op,
>                  BUG();
>          }
>      }
> +out:
> +    /*
> +     * Flush the TLBs even in case of failure because we may have
> +     * partially modified the PT. This will prevent any unexpected
> +     * behavior afterwards.
> +     */
>      flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
>  
> -    rc = 0;
> -
> -out:
>      return rc;
>  }
>  
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 6/7] xen/arm: tlbflush: Rework TLB helpers
@ 2019-05-09 20:43       ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-09 20:43 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko



On 09/05/2019 21:32, Stefano Stabellini wrote:
> On Wed, 8 May 2019, Julien Grall wrote:
> I agree with what you are trying to achieve with this patch and I like
> the idea of reducing code duplication. As I looked at the code, I was
> hoping to find a way to avoid introducing macros and use static inline
> functions instead, but it doesn't look like that is possible for arm32:
> there is no way to pass TLBIALLIS as a parameter to a function, for
> instance. It might be possible for arm64, as the operations are just
> strings, but at that point it might be better to keep the code similar
> between arm32 and arm64 by having both of them as macros, instead of
> having one as a macro and the other as a static inline.
> 
> Do you agree with me? Can you see any other ways to turn TLB_HELPER into
> a static inline?

I really can't see how you could turn even the arm64 version into a static 
inline... Even though TLBIALLIS is a string, we are using it to generate the 
assembly. Without the help of the pre-processor, you would have to look 
at the string at runtime and generate the associated operation.

So there is no way you can do the same with a static inline unless you 
duplicate all the helpers. But that would defeat the purpose of this patch.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 6/7] xen/arm: tlbflush: Rework TLB helpers
@ 2019-05-09 21:37     ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-09 21:37 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Wed, 8 May 2019, Julien Grall wrote:
> All the TLB helpers that invalidate all the TLB entries use the same
> pattern:
>     DSB SY
>     TLBI ...
>     DSB SY
>     ISB
> 
> This pattern follows the one recommended by the Arm Arm to ensure
> visibility of updates to translation tables (see K11.5.2 in ARM DDI
> 0487D.b).
> 
> We have been a bit too eager in Xen and used system-wide DSBs when they
> can be limited to the inner-shareable domain.
> 
> Furthermore, the first DSB can be restricted further to stores in the
> inner-shareable domain. This is because that DSB is only there to ensure
> visibility of the updates to translation table walks.
> 
> Lastly, most of the TLB helpers lack documentation.
> 
> Rather than trying to update the helpers one by one, this patch
> introduces a per-arch macro to generate the TLB helpers. This will make
> it easier to update the helpers and their documentation in the future.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Update the reference to the Arm Arm to the latest spec
>         - Add Andrii's reviewed-by
> ---
>  xen/include/asm-arm/arm32/flushtlb.h | 73 ++++++++++++++--------------------
>  xen/include/asm-arm/arm64/flushtlb.h | 76 +++++++++++++++---------------------
>  2 files changed, 60 insertions(+), 89 deletions(-)
> 
> diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
> index b629db61cb..9085e65011 100644
> --- a/xen/include/asm-arm/arm32/flushtlb.h
> +++ b/xen/include/asm-arm/arm32/flushtlb.h
> @@ -1,59 +1,44 @@
>  #ifndef __ASM_ARM_ARM32_FLUSHTLB_H__
>  #define __ASM_ARM_ARM32_FLUSHTLB_H__
>  
> -/* Flush local TLBs, current VMID only */
> -static inline void flush_guest_tlb_local(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALL);
> -
> -    dsb(sy);
> -    isb();
> +/*
> + * Every invalidation operation uses the following pattern:
> + *
> + * DSB ISHST        // Ensure prior page-tables updates have completed
> + * TLBI...          // Invalidate the TLB
> + * DSB ISH          // Ensure the TLB invalidation has completed
> + * ISB              // See explanation below
> + *
> + * For Xen page-tables the ISB will discard any instructions fetched
> + * from the old mappings.
> + *
> + * For the Stage-2 page-tables the ISB ensures the completion of the DSB
> + * (and therefore the TLB invalidation) before continuing. So we know
> + * the TLBs cannot contain an entry for a mapping we may have removed.
> + */
> +#define TLB_HELPER(name, tlbop) \
> +static inline void name(void)   \
> +{                               \
> +    dsb(ishst);                 \
> +    WRITE_CP32(0, tlbop);       \
> +    dsb(ish);                   \
> +    isb();                      \
>  }
>  
> -/* Flush inner shareable TLBs, current VMID only */
> -static inline void flush_guest_tlb(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALLIS);
> +/* Flush local TLBs, current VMID only */
> +TLB_HELPER(flush_guest_tlb_local, TLBIALL);
>  
> -    dsb(sy);
> -    isb();
> -}
> +/* Flush inner shareable TLBs, current VMID only */
> +TLB_HELPER(flush_guest_tlb, TLBIALLIS);
>  
>  /* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb_local(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALLNSNH);
> -
> -    dsb(sy);
> -    isb();
> -}
> +TLB_HELPER(flush_all_guests_tlb_local, TLBIALLNSNH);
>  
>  /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb(void)
> -{
> -    dsb(sy);
> -
> -    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
> -
> -    dsb(sy);
> -    isb();
> -}
> +TLB_HELPER(flush_all_guests_tlb, TLBIALLNSNHIS);
>  
>  /* Flush all hypervisor mappings from the TLB of the local processor. */
> -static inline void flush_xen_tlb_local(void)
> -{
> -    asm volatile("dsb;" /* Ensure preceding are visible */
> -                 CMD_CP32(TLBIALLH)
> -                 "dsb;" /* Ensure completion of the TLB flush */
> -                 "isb;"
> -                 : : : "memory");
> -}
> +TLB_HELPER(flush_xen_tlb_local, TLBIALLH);
>  
>  /* Flush TLB of local processor for address va. */
>  static inline void __flush_xen_tlb_one_local(vaddr_t va)
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index 2fed34b2ec..ceec59542e 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -1,60 +1,46 @@
>  #ifndef __ASM_ARM_ARM64_FLUSHTLB_H__
>  #define __ASM_ARM_ARM64_FLUSHTLB_H__
>  
> -/* Flush local TLBs, current VMID only */
> -static inline void flush_guest_tlb_local(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi vmalls12e1;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> +/*
> + * Every invalidation operation uses the following pattern:
> + *
> + * DSB ISHST        // Ensure prior page-tables updates have completed
> + * TLBI...          // Invalidate the TLB
> + * DSB ISH          // Ensure the TLB invalidation has completed
> + * ISB              // See explanation below
> + *
> + * For Xen page-tables the ISB will discard any instructions fetched
> + * from the old mappings.
> + *
> + * For the Stage-2 page-tables the ISB ensures the completion of the DSB
> + * (and therefore the TLB invalidation) before continuing. So we know
> + * the TLBs cannot contain an entry for a mapping we may have removed.
> + */
> +#define TLB_HELPER(name, tlbop) \
> +static inline void name(void)   \
> +{                               \
> +    asm volatile(               \
> +        "dsb  ishst;"           \
> +        "tlbi "  # tlbop  ";"   \
> +        "dsb  ish;"             \
> +        "isb;"                  \
> +        : : : "memory");        \
>  }
>  
> +/* Flush local TLBs, current VMID only. */
> +TLB_HELPER(flush_guest_tlb_local, vmalls12e1);
> +
>  /* Flush innershareable TLBs, current VMID only */
> -static inline void flush_guest_tlb(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi vmalls12e1is;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_guest_tlb, vmalls12e1is);
>  
>  /* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb_local(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi alle1;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_all_guests_tlb_local, alle1);
>  
>  /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_all_guests_tlb(void)
> -{
> -    asm volatile(
> -        "dsb sy;"
> -        "tlbi alle1is;"
> -        "dsb sy;"
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_all_guests_tlb, alle1is);
>  
>  /* Flush all hypervisor mappings from the TLB of the local processor. */
> -static inline void flush_xen_tlb_local(void)
> -{
> -    asm volatile (
> -        "dsb    sy;"                    /* Ensure visibility of PTE writes */
> -        "tlbi   alle2;"                 /* Flush hypervisor TLB */
> -        "dsb    sy;"                    /* Ensure completion of TLB flush */
> -        "isb;"
> -        : : : "memory");
> -}
> +TLB_HELPER(flush_xen_tlb_local, alle2);
>  
>  /* Flush TLB of local processor for address va. */
>  static inline void  __flush_xen_tlb_one_local(vaddr_t va)
> -- 
> 2.11.0
> 



* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-09 21:46         ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-09 21:46 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko

Hi,

On 09/05/2019 21:32, Julien Grall wrote:
> Hi,
> 
> On 09/05/2019 21:13, Stefano Stabellini wrote:
>> On Wed, 8 May 2019, Julien Grall wrote:
>>>   /* Release all __init and __initdata ranges to be reused */
>>> diff --git a/xen/include/asm-arm/arm32/page.h 
>>> b/xen/include/asm-arm/arm32/page.h
>>> index 40a77daa9d..0b41b9214b 100644
>>> --- a/xen/include/asm-arm/arm32/page.h
>>> +++ b/xen/include/asm-arm/arm32/page.h
>>> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
>>>       isb();                      /* Synchronize fetched instruction 
>>> stream. */
>>>   }
>>> -/*
>>> - * Flush all hypervisor mappings from the data TLB of the local
>>> - * processor. This is not sufficient when changing code mappings or
>>> - * for self modifying code.
>>> - */
>>> -static inline void flush_xen_data_tlb_local(void)
>>> +/* Flush all hypervisor mappings from the TLB of the local 
>>> processor. */
>>
>> I realize that the statement "This is not sufficient when changing code
>> mappings or for self modifying code" is not quite accurate, but I would
>> prefer not to remove it completely. It would be good to retain a warning
> somewhere about IC being needed when changing Xen's own mappings. Maybe
>> on top of invalidate_icache_local?
> 
> Can you please expand on the circumstances in which you need to invalidate 
> the instruction cache when changing Xen's own mappings?

Reading the Armv7 (B3.11.2 in ARM DDI 0406C.c) and Armv8 (D5.11.2 in ARM 
DDI 0487D.a) manuals, most instruction caches implement the IVIPT 
extension. This means that instruction cache maintenance is required 
only after writing new data to a PA that holds instructions (see D5-2522 
in ARM DDI 0487D.a and B3.11.2 in ARM DDI 0406C.c).

The only ones that differ from that behavior are ASID- and VMID-tagged 
VIVT instruction caches, which are only present in Armv7 (I can't remember 
why they were dropped in Armv8). For those, instruction cache maintenance 
can be required when changing the translation of a virtual address to a 
physical address.

There are only a few limited places where Xen mappings can change and an 
instruction cache flush is required (namely livepatch, changing 
permissions, and freeing init). None of the others need it.

A comment on top of invalidate_icache_local() is not going to help, as 
it relies on the developer knowing which function to use. The one on top 
of the TLB flush helpers is at best misleading without a long 
explanation, at which point you are better off reading the Arm Arm.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-10 14:38           ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-10 14:38 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko

On 09/05/2019 22:46, Julien Grall wrote:
> Hi,
> 
> On 09/05/2019 21:32, Julien Grall wrote:
>> Hi,
>>
>> On 09/05/2019 21:13, Stefano Stabellini wrote:
>>> On Wed, 8 May 2019, Julien Grall wrote:
>>>>   /* Release all __init and __initdata ranges to be reused */
>>>> diff --git a/xen/include/asm-arm/arm32/page.h 
>>>> b/xen/include/asm-arm/arm32/page.h
>>>> index 40a77daa9d..0b41b9214b 100644
>>>> --- a/xen/include/asm-arm/arm32/page.h
>>>> +++ b/xen/include/asm-arm/arm32/page.h
>>>> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
>>>>       isb();                      /* Synchronize fetched instruction stream. */
>>>>   }
>>>> -/*
>>>> - * Flush all hypervisor mappings from the data TLB of the local
>>>> - * processor. This is not sufficient when changing code mappings or
>>>> - * for self modifying code.
>>>> - */
>>>> -static inline void flush_xen_data_tlb_local(void)
>>>> +/* Flush all hypervisor mappings from the TLB of the local processor. */
>>>
>>> I realize that the statement "This is not sufficient when changing code
>>> mappings or for self modifying code" is not quite accurate, but I would
>>> prefer not to remove it completely. It would be good to retain a warning
>>> somewhere about IC being needed when changing Xen's own mappings. Maybe
>>> on top of invalidate_icache_local?
>>
>> Can you please expand on the circumstances in which you need to invalidate 
>> the instruction cache when changing Xen's own mappings?
> 
> Reading the Armv7 (B3.11.2 in ARM DDI 0406C.c) and Armv8 (D5.11.2 in ARM DDI 
> 0487D.a), most of the instruction caches implement the IVIPT extension. This 
> means that instruction cache maintenance is required only after writing new data 
> to a PA that holds instructions (see D5-2522 in ARM DDI 0487D.a and B3.11.2 in 
> ARM DDI 0406C.c).
> 
> The only ones that differ from that behavior are ASID- and VMID-tagged VIVT 
> instruction caches, which are only present in Armv7 (I can't remember why they were 
> dropped in Armv8). Instruction cache maintenance can be required when changing 
> the translation of a virtual address to a physical address.

I thought about this a bit more and chatted with my team at Arm. Xen on Arm only 
supports Cortex-A7, Cortex-A15 and Brahma 15 (see the CPU ID check in arm32/head.S).

None of them actually use VIVT instruction caches. In general, VIVT caches 
are more difficult to deal with because they require more flushes. So I would be 
more inclined to refuse to boot Xen on platforms where the instruction caches 
don't implement the IVIPT extension.

I don't think that will have a major impact on users, given my point above.

Cheers,

-- 
Julien Grall



* Re: [Xen-devel] [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-10 14:38           ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-10 14:38 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko

On 09/05/2019 22:46, Julien Grall wrote:
> Hi,
> 
> On 09/05/2019 21:32, Julien Grall wrote:
>> Hi,
>>
>> On 09/05/2019 21:13, Stefano Stabellini wrote:
>>> On Wed, 8 May 2019, Julien Grall wrote:
>>>>   /* Release all __init and __initdata ranges to be reused */
>>>> diff --git a/xen/include/asm-arm/arm32/page.h 
>>>> b/xen/include/asm-arm/arm32/page.h
>>>> index 40a77daa9d..0b41b9214b 100644
>>>> --- a/xen/include/asm-arm/arm32/page.h
>>>> +++ b/xen/include/asm-arm/arm32/page.h
>>>> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
>>>>       isb();                      /* Synchronize fetched instruction stream. */
>>>>   }
>>>> -/*
>>>> - * Flush all hypervisor mappings from the data TLB of the local
>>>> - * processor. This is not sufficient when changing code mappings or
>>>> - * for self modifying code.
>>>> - */
>>>> -static inline void flush_xen_data_tlb_local(void)
>>>> +/* Flush all hypervisor mappings from the TLB of the local processor. */
>>>
>>> I realize that the statement "This is not sufficient when changing code
>>> mappings or for self modifying code" is not quite accurate, but I would
>>> prefer not to remove it completely. It would be good to retain a warning
>>> somewhere about IC been needed when changing Xen's own mappings. Maybe
>>> on top of invalidate_icache_local?
>>
>> Can you please expand in which circumstance you need to invalid the 
>> instruction cache when changing Xen's own mappings?
> 
> Reading the Armv7 (B3.11.2 in ARM DDI 0406C.c) and Armv8 (D5.11.2 in ARM DDI 
> 0487D.a), most of the instruction caches implement the IVIPT extension. This 
> means that instruction cache maintenance is required only after write new data 
> to a PA that holds instructions (see D5-2522 in ARM DDI 0487D.a and B3.11.2 in 
> ARM DDI 0406C.c).
> 
> The only one that differs with that behavior is ASID and VMID tagged VIVT 
> instruction caches which is only present in Armv7 (I can't remember why it was 
> dropped in Armv8). Instruction cache maintenance can be required when changing 
> the translation of a virtual address to a physical address.

I thought about this a bit more and chatted with my team at Arm. Xen on Arm only 
supports Cortex-A7, Cortex-A15 and Brahma 15 (see the CPU ID check in arm32/head.S).

None of them actually use VIVT instruction caches. In general, VIVT caches 
are more difficult to deal with because they require more flushes. So I would be 
more inclined to deny booting Xen on platforms where the instruction caches don't 
support the IVIPT extension.

I don't think that will have a major impact on users because of my point above.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-10 17:57             ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-10 17:57 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, nd, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

[-- Attachment #1: Type: text/plain, Size: 3524 bytes --]

On Fri, 10 May 2019, Julien Grall wrote:
> On 09/05/2019 22:46, Julien Grall wrote:
> > Hi,
> > 
> > On 09/05/2019 21:32, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 09/05/2019 21:13, Stefano Stabellini wrote:
> > > > On Wed, 8 May 2019, Julien Grall wrote:
> > > > >   /* Release all __init and __initdata ranges to be reused */
> > > > > diff --git a/xen/include/asm-arm/arm32/page.h
> > > > > b/xen/include/asm-arm/arm32/page.h
> > > > > index 40a77daa9d..0b41b9214b 100644
> > > > > --- a/xen/include/asm-arm/arm32/page.h
> > > > > +++ b/xen/include/asm-arm/arm32/page.h
> > > > > @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
> > > > >       isb();                      /* Synchronize fetched instruction
> > > > > stream. */
> > > > >   }
> > > > > -/*
> > > > > - * Flush all hypervisor mappings from the data TLB of the local
> > > > > - * processor. This is not sufficient when changing code mappings or
> > > > > - * for self modifying code.
> > > > > - */
> > > > > -static inline void flush_xen_data_tlb_local(void)
> > > > > +/* Flush all hypervisor mappings from the TLB of the local processor.
> > > > > */
> > > > 
> > > > I realize that the statement "This is not sufficient when changing code
> > > > mappings or for self modifying code" is not quite accurate, but I would
> > > > prefer not to remove it completely. It would be good to retain a warning
> > > > somewhere about IC been needed when changing Xen's own mappings. Maybe
> > > > on top of invalidate_icache_local?
> > > 
> > > Can you please expand in which circumstance you need to invalid the
> > > instruction cache when changing Xen's own mappings?
> > 
> > Reading the Armv7 (B3.11.2 in ARM DDI 0406C.c) and Armv8 (D5.11.2 in ARM DDI
> > 0487D.a), most of the instruction caches implement the IVIPT extension. This
> > means that instruction cache maintenance is required only after write new
> > data to a PA that holds instructions (see D5-2522 in ARM DDI 0487D.a and
> > B3.11.2 in ARM DDI 0406C.c).
> > 
> > The only one that differs with that behavior is ASID and VMID tagged VIVT
> > instruction caches which is only present in Armv7 (I can't remember why it
> > was dropped in Armv8). Instruction cache maintenance can be required when
> > changing the translation of a virtual address to a physical address.
> 
> I thought about this a bit more and chat with my team at Arm. Xen on Arm only
> support Cortex-A7, Cortex-A15 and Brahma 15 (see the CPU ID check in
> arm32/head.S).
> 	
> None of them are actually using VIVT instruction caches. In general, VIVT
> caches are more difficult to deal with because they require more flush. So I
> would be more incline to deny booting Xen on platform where the instruction
> caches don't support IVIVT extension.
> 
> I don't think that will have a major impact on the user because of my point
> above.

Thanks for looking this up in detail. I think there are two interesting
points here:

1) what to do with VIVT
2) what to write in the in-code comment

For 1) I think it would be OK to deny booting. For sure we need at least
a warning. Would you be able to add the warning/boot-denial as part of
this series, or at least an in-code comment?

For 2) I would like this reasoning to be captured somewhere with an
in-code comment, if nothing else as a reference to what to search for in
the Arm Arm. I don't know where the best place for it is. If
invalidate_icache_local is not a good place for the comment, please
suggest a better location.

[-- Attachment #2: Type: text/plain, Size: 157 bytes --]


* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-10 18:35               ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-10 18:35 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko

Hi,

On 10/05/2019 18:57, Stefano Stabellini wrote:
> On Fri, 10 May 2019, Julien Grall wrote:
>> On 09/05/2019 22:46, Julien Grall wrote:
>>> Hi,
>>>
>>> On 09/05/2019 21:32, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 09/05/2019 21:13, Stefano Stabellini wrote:
>>>>> On Wed, 8 May 2019, Julien Grall wrote:
>>>>>>    /* Release all __init and __initdata ranges to be reused */
>>>>>> diff --git a/xen/include/asm-arm/arm32/page.h
>>>>>> b/xen/include/asm-arm/arm32/page.h
>>>>>> index 40a77daa9d..0b41b9214b 100644
>>>>>> --- a/xen/include/asm-arm/arm32/page.h
>>>>>> +++ b/xen/include/asm-arm/arm32/page.h
>>>>>> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
>>>>>>        isb();                      /* Synchronize fetched instruction
>>>>>> stream. */
>>>>>>    }
>>>>>> -/*
>>>>>> - * Flush all hypervisor mappings from the data TLB of the local
>>>>>> - * processor. This is not sufficient when changing code mappings or
>>>>>> - * for self modifying code.
>>>>>> - */
>>>>>> -static inline void flush_xen_data_tlb_local(void)
>>>>>> +/* Flush all hypervisor mappings from the TLB of the local processor.
>>>>>> */
>>>>>
>>>>> I realize that the statement "This is not sufficient when changing code
>>>>> mappings or for self modifying code" is not quite accurate, but I would
>>>>> prefer not to remove it completely. It would be good to retain a warning
>>>>> somewhere about IC been needed when changing Xen's own mappings. Maybe
>>>>> on top of invalidate_icache_local?
>>>>
>>>> Can you please expand in which circumstance you need to invalid the
>>>> instruction cache when changing Xen's own mappings?
>>>
>>> Reading the Armv7 (B3.11.2 in ARM DDI 0406C.c) and Armv8 (D5.11.2 in ARM DDI
>>> 0487D.a), most of the instruction caches implement the IVIPT extension. This
>>> means that instruction cache maintenance is required only after write new
>>> data to a PA that holds instructions (see D5-2522 in ARM DDI 0487D.a and
>>> B3.11.2 in ARM DDI 0406C.c).
>>>
>>> The only one that differs with that behavior is ASID and VMID tagged VIVT
>>> instruction caches which is only present in Armv7 (I can't remember why it
>>> was dropped in Armv8). Instruction cache maintenance can be required when
>>> changing the translation of a virtual address to a physical address.
>>
>> I thought about this a bit more and chat with my team at Arm. Xen on Arm only
>> support Cortex-A7, Cortex-A15 and Brahma 15 (see the CPU ID check in
>> arm32/head.S).
>> 	
>> None of them are actually using VIVT instruction caches. In general, VIVT
>> caches are more difficult to deal with because they require more flush. So I
>> would be more incline to deny booting Xen on platform where the instruction
>> caches don't support IVIVT extension.
>>
>> I don't think that will have a major impact on the user because of my point
>> above.
> 
> Thanks for looking this up in details. I think there are two interesting
> points here:
> 
> 1) what to do with VIVT
> 2) what to write in the in-code comment
> 
> For 1) I think it would be OK to deny booting. For sure we need at least
> a warning. Would you be able to add the warning/boot-denial as part of
> this series, or at least an in-code comment?

I am planning to deny booting Xen on such platforms.

> 
> For 2) I would like this reasonining to be captured somewhere with a
> in-code comment, if nothing else as a reference to what to search in
> the Arm Arm. I don't know where is the best place for it. If
> invalidate_icache_local is not good place for the comment please suggest
> a better location.

I still don't understand what reasoning you want to write. If we don't 
support VIVT, then the instruction cache is very easy to maintain, i.e. 
"you flush if you modify the instructions".

I am worried that if we overdo the explanation in the code, we are 
going to confuse more than one person. So it would be better to leave 
"VIVT" out of the comment completely.

Feel free to suggest an in-code comment so we can discuss its worthiness.

Cheers,

-- 
Julien Grall

* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-20 21:01                 ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-05-20 21:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, nd, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

[-- Attachment #1: Type: text/plain, Size: 4546 bytes --]

On Fri, 10 May 2019, Julien Grall wrote:
> On 10/05/2019 18:57, Stefano Stabellini wrote:
> > On Fri, 10 May 2019, Julien Grall wrote:
> >> On 09/05/2019 22:46, Julien Grall wrote:
> >>> Hi,
> >>>
> >>> On 09/05/2019 21:32, Julien Grall wrote:
> >>>> Hi,
> >>>>
> >>>> On 09/05/2019 21:13, Stefano Stabellini wrote:
> >>>>> On Wed, 8 May 2019, Julien Grall wrote:
> >>>>>>    /* Release all __init and __initdata ranges to be reused */
> >>>>>> diff --git a/xen/include/asm-arm/arm32/page.h
> >>>>>> b/xen/include/asm-arm/arm32/page.h
> >>>>>> index 40a77daa9d..0b41b9214b 100644
> >>>>>> --- a/xen/include/asm-arm/arm32/page.h
> >>>>>> +++ b/xen/include/asm-arm/arm32/page.h
> >>>>>> @@ -61,12 +61,8 @@ static inline void invalidate_icache_local(void)
> >>>>>>        isb();                      /* Synchronize fetched instruction
> >>>>>> stream. */
> >>>>>>    }
> >>>>>> -/*
> >>>>>> - * Flush all hypervisor mappings from the data TLB of the local
> >>>>>> - * processor. This is not sufficient when changing code mappings or
> >>>>>> - * for self modifying code.
> >>>>>> - */
> >>>>>> -static inline void flush_xen_data_tlb_local(void)
> >>>>>> +/* Flush all hypervisor mappings from the TLB of the local processor.
> >>>>>> */
> >>>>>
> >>>>> I realize that the statement "This is not sufficient when changing code
> >>>>> mappings or for self modifying code" is not quite accurate, but I would
> >>>>> prefer not to remove it completely. It would be good to retain a warning
> >>>>> somewhere about IC been needed when changing Xen's own mappings. Maybe
> >>>>> on top of invalidate_icache_local?
> >>>>
> >>>> Can you please expand in which circumstance you need to invalid the
> >>>> instruction cache when changing Xen's own mappings?
> >>>
> >>> Reading the Armv7 (B3.11.2 in ARM DDI 0406C.c) and Armv8 (D5.11.2 in ARM DDI
> >>> 0487D.a), most of the instruction caches implement the IVIPT extension. This
> >>> means that instruction cache maintenance is required only after write new
> >>> data to a PA that holds instructions (see D5-2522 in ARM DDI 0487D.a and
> >>> B3.11.2 in ARM DDI 0406C.c).
> >>>
> >>> The only one that differs with that behavior is ASID and VMID tagged VIVT
> >>> instruction caches which is only present in Armv7 (I can't remember why it
> >>> was dropped in Armv8). Instruction cache maintenance can be required when
> >>> changing the translation of a virtual address to a physical address.
> >>
> >> I thought about this a bit more and chat with my team at Arm. Xen on Arm only
> >> support Cortex-A7, Cortex-A15 and Brahma 15 (see the CPU ID check in
> >> arm32/head.S).
> >> 	
> >> None of them are actually using VIVT instruction caches. In general, VIVT
> >> caches are more difficult to deal with because they require more flush. So I
> >> would be more incline to deny booting Xen on platform where the instruction
> >> caches don't support IVIVT extension.
> >>
> >> I don't think that will have a major impact on the user because of my point
> >> above.
> > 
> > Thanks for looking this up in details. I think there are two interesting
> > points here:
> > 
> > 1) what to do with VIVT
> > 2) what to write in the in-code comment
> > 
> > For 1) I think it would be OK to deny booting. For sure we need at least
> > a warning. Would you be able to add the warning/boot-denial as part of
> > this series, or at least an in-code comment?
> 
> I am planning to deny booting Xen on such platforms.
> 
> > 
> > For 2) I would like this reasonining to be captured somewhere with a
> > in-code comment, if nothing else as a reference to what to search in
> > the Arm Arm. I don't know where is the best place for it. If
> > invalidate_icache_local is not good place for the comment please suggest
> > a better location.
> 
> I still don't understand what reasoning you want to write. If we don't 
> support VIVT then the instruction cache is very easy to maintain. I.e 
> "You flush if you modify the instruction".
> 
> I am worry that if we overdo the explanation in the code, then you are 
> going to confuse more than one person. So it would be better to blank 
> out "VIVT" completely from then.
> 
> Feel free to suggest an in-code comment so we can discuss on the worthiness.

I suggest something like the following:

 /* 
  * Flush all hypervisor mappings from the TLB of the local processor. Note
  * that instruction cache maintenance might also be required when self
  * modifying Xen code, see D5-2522 in ARM DDI 0487D.a and B3.11.2 in ARM
  * DDI 0406C.c.
  */

[-- Attachment #2: Type: text/plain, Size: 157 bytes --]


* Re: [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
@ 2019-05-20 21:59                   ` Julien Grall
  0 siblings, 0 replies; 53+ messages in thread
From: Julien Grall @ 2019-05-20 21:59 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko



On 20/05/2019 22:01, Stefano Stabellini wrote:
> On Fri, 10 May 2019, Julien Grall wrote:
>> Feel free to suggest an in-code comment so we can discuss on the worthiness.
> 
> I suggest something like the following:
> 
>   /*
>    * Flush all hypervisor mappings from the TLB of the local processor. Note
>    * that instruction cache maintenance might also be required when self
>    * modifying Xen code, see D5-2522 in ARM DDI 0487D.a and B3.11.2 in ARM
>    * DDI 0406C.c.
>    */

This looks quite out of context: what is the relation between 
self-modifying code and a TLB flush?

Cheers,

-- 
Julien Grall

* Re: [Xen-devel] [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
  2019-05-20 21:59                   ` [Xen-devel] " Julien Grall
  (?)
@ 2019-06-10 20:51                   ` Stefano Stabellini
  2019-06-10 21:03                     ` Julien Grall
  -1 siblings, 1 reply; 53+ messages in thread
From: Stefano Stabellini @ 2019-06-10 20:51 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, nd, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Mon, 20 May 2019, Julien Grall wrote:
> On 20/05/2019 22:01, Stefano Stabellini wrote:
> > On Fri, 10 May 2019, Julien Grall wrote:
> >> Feel free to suggest an in-code comment so we can discuss on the worthiness.
> > 
> > I suggest something like the following:
> > 
> >   /*
> >    * Flush all hypervisor mappings from the TLB of the local processor. Note
> >    * that instruction cache maintenance might also be required when self
> >    * modifying Xen code, see D5-2522 in ARM DDI 0487D.a and B3.11.2 in ARM
> >    * DDI 0406C.c.
> >    */
> 
> This looks quite out-of-context, what is the relation between 
> self-modifying code and TLB flush?

"Flush all hypervisor mappings from the TLB of the local processor" is
the description of the function below (it cannot be seen here, but it is
the function this comment is supposed to sit on top of,
flush_xen_data_tlb_local). The rest of the comment is informative
guidance for difficult cases such as self-modifying code; it was present
in the previous version of the code and I would like to retain it. The
relation is that there is a good chance you need to do both.
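[Editorial note: putting the two together, the proposed comment would sit on
top of the helper roughly as below. This is an illustrative sketch based on
the arm64 variant of the helper being discussed; the exact body in the Xen
tree may differ.]

```c
/*
 * Flush all hypervisor mappings from the TLB of the local processor. Note
 * that instruction cache maintenance might also be required when self
 * modifying Xen code, see D5-2522 in ARM DDI 0487D.a and B3.11.2 in ARM
 * DDI 0406C.c.
 */
static inline void flush_xen_data_tlb_local(void)
{
    asm volatile (
        "dsb   sy;"     /* Ensure prior page-table updates are observable */
        "tlbi  alle2;"  /* Invalidate all EL2 (hypervisor) TLB entries, local PE */
        "dsb   sy;"     /* Wait for the invalidation to complete */
        "isb;"          /* Synchronize the instruction stream */
        : : : "memory");
}
```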


* Re: [Xen-devel] [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
  2019-06-10 20:51                   ` Stefano Stabellini
@ 2019-06-10 21:03                     ` Julien Grall
  2019-06-11 18:15                       ` Stefano Stabellini
  0 siblings, 1 reply; 53+ messages in thread
From: Julien Grall @ 2019-06-10 21:03 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Andrii Anisov, Oleksandr_Tyshchenko

Hi Stefano,

On 6/10/19 9:51 PM, Stefano Stabellini wrote:
> On Mon, 20 May 2019, Julien Grall wrote:
>> On 20/05/2019 22:01, Stefano Stabellini wrote:
>>> On Fri, 10 May 2019, Julien Grall wrote:
>>>> Feel free to suggest an in-code comment so we can discuss on the worthiness.
>>>
>>> I suggest something like the following:
>>>
>>>    /*
>>>     * Flush all hypervisor mappings from the TLB of the local processor. Note
>>>     * that instruction cache maintenance might also be required when self
>>>     * modifying Xen code, see D5-2522 in ARM DDI 0487D.a and B3.11.2 in ARM
>>>     * DDI 0406C.c.
>>>     */
>>
>> This looks quite out-of-context, what is the relation between
>> self-modifying code and TLB flush?
> 
> "Flush all hypervisor mappings from the TLB of the local processor" is
> the description of the function below (it cannot be seen here but it's
> the function on top of which this comment is supposed to be on,
> flush_xen_data_tlb_local). The rest of the comment is informative
> regarding difficult cases such as self-modifying code, which was present
> in the previous version of the code and I would like to retain. The
> relation is that there is a good chance you need to do both.
Sorry, but this doesn't make sense to me. You are unlikely to modify 
mappings when using self-modifying code. And even if you were, because the 
instruction caches implement the IVIPT extension (assuming we forbid IVIVT 
caches, as suggested by patch #1 for Arm32), there would be no need to 
maintain the cache, because the physical address would be different.

None of the self-modifying code in Xen (i.e. alternatives, livepatch) 
requires TLB maintenance. I also can't see a case where the two would be 
necessary at the same time.
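[Editorial note: for reference, the standard Arm64 sequence for publishing
newly written instructions involves only cache maintenance and barriers, with
no TLB operation, which supports the point above. A hedged sketch follows;
`cacheline_bytes` is a placeholder for the probed cache-line size, not a real
Xen identifier.]

```c
/*
 * Sketch of the canonical Arm64 sequence for making newly written
 * instructions visible to the instruction stream (cf. ARM DDI 0487).
 * No TLB maintenance is required as long as the mapping is unchanged.
 */
static void sync_new_code(unsigned long start, unsigned long end)
{
    unsigned long p;

    /* Clean the data cache to the Point of Unification. */
    for ( p = start; p < end; p += cacheline_bytes )
        asm volatile ("dc cvau, %0" : : "r" (p) : "memory");
    asm volatile ("dsb ish" : : : "memory");

    /* Invalidate the instruction cache to the Point of Unification. */
    for ( p = start; p < end; p += cacheline_bytes )
        asm volatile ("ic ivau, %0" : : "r" (p) : "memory");
    asm volatile ("dsb ish" : : : "memory");

    /* Resynchronize the instruction stream on this PE. */
    asm volatile ("isb" : : : "memory");
}
```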

Can you please give a concrete example where it would be necessary?

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
  2019-06-10 21:03                     ` Julien Grall
@ 2019-06-11 18:15                       ` Stefano Stabellini
  0 siblings, 0 replies; 53+ messages in thread
From: Stefano Stabellini @ 2019-06-11 18:15 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, nd, Stefano Stabellini, Andrii Anisov, Oleksandr_Tyshchenko

On Mon, 10 Jun 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 6/10/19 9:51 PM, Stefano Stabellini wrote:
> > On Mon, 20 May 2019, Julien Grall wrote:
> > > On 20/05/2019 22:01, Stefano Stabellini wrote:
> > > > On Fri, 10 May 2019, Julien Grall wrote:
> > > > > Feel free to suggest an in-code comment so we can discuss on the
> > > > > worthiness.
> > > > 
> > > > I suggest something like the following:
> > > > 
> > > >    /*
> > > >     * Flush all hypervisor mappings from the TLB of the local processor.
> > > > Note
> > > >     * that instruction cache maintenance might also be required when
> > > > self
> > > >     * modifying Xen code, see D5-2522 in ARM DDI 0487D.a and B3.11.2 in
> > > > ARM
> > > >     * DDI 0406C.c.
> > > >     */
> > > 
> > > This looks quite out-of-context, what is the relation between
> > > self-modifying code and TLB flush?
> > 
> > "Flush all hypervisor mappings from the TLB of the local processor" is
> > the description of the function below (it cannot be seen here but it's
> > the function on top of which this comment is supposed to be on,
> > flush_xen_data_tlb_local). The rest of the comment is informative
> > regarding difficult cases such as self-modifying code, which was present
> > in the previous version of the code and I would like to retain. The
> > relation is that there is a good chance you need to do both.
> Sorry, but this doesn't make sense to me. You are unlikely to modify
> mappings when using self-modifying code. And even if you were, because the
> instruction caches implement the IVIPT extension (assuming we forbid IVIVT
> caches, as suggested by patch #1 for Arm32), there would be no need to
> maintain the cache, because the physical address would be different.
> 
> None of the self-modifying code in Xen (i.e. alternatives, livepatch)
> requires TLB maintenance. I also can't see a case where the two would be
> necessary at the same time.
> 
> Can you please give a concrete example where it would be necessary?

Given the scarcity of IVIVT platforms out there, the unlikely usefulness
in the IVIPT case, and that this is just a comment, I don't think this
issue is worth spending more time on.

For v3 of the patch:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


end of thread, other threads:[~2019-06-11 18:15 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-08 16:15 [PATCH v2 0/7] xen/arm: TLB flush helpers rework Julien Grall
2019-05-08 16:15 ` [Xen-devel] " Julien Grall
2019-05-08 16:15 ` [PATCH v2 1/7] xen/arm: mm: Consolidate setting SCTLR_EL2.WXN in a single place Julien Grall
2019-05-08 16:15   ` [Xen-devel] " Julien Grall
2019-05-09 19:52   ` Stefano Stabellini
2019-05-09 19:52     ` [Xen-devel] " Stefano Stabellini
2019-05-08 16:15 ` [PATCH v2 2/7] xen/arm: Remove flush_xen_text_tlb_local() Julien Grall
2019-05-08 16:15   ` [Xen-devel] " Julien Grall
2019-05-09 20:03   ` Stefano Stabellini
2019-05-09 20:03     ` [Xen-devel] " Stefano Stabellini
2019-05-09 20:17     ` Julien Grall
2019-05-09 20:17       ` [Xen-devel] " Julien Grall
2019-05-08 16:15 ` [PATCH v2 3/7] xen/arm: tlbflush: Clarify the TLB helpers name Julien Grall
2019-05-08 16:15   ` [Xen-devel] " Julien Grall
2019-05-09 20:05   ` Stefano Stabellini
2019-05-09 20:05     ` [Xen-devel] " Stefano Stabellini
2019-05-08 16:16 ` [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs " Julien Grall
2019-05-08 16:16   ` [Xen-devel] " Julien Grall
2019-05-09 20:13   ` Stefano Stabellini
2019-05-09 20:13     ` [Xen-devel] " Stefano Stabellini
2019-05-09 20:32     ` Julien Grall
2019-05-09 20:32       ` [Xen-devel] " Julien Grall
2019-05-09 21:46       ` Julien Grall
2019-05-09 21:46         ` [Xen-devel] " Julien Grall
2019-05-10 14:38         ` Julien Grall
2019-05-10 14:38           ` [Xen-devel] " Julien Grall
2019-05-10 17:57           ` Stefano Stabellini
2019-05-10 17:57             ` [Xen-devel] " Stefano Stabellini
2019-05-10 18:35             ` Julien Grall
2019-05-10 18:35               ` [Xen-devel] " Julien Grall
2019-05-20 21:01               ` Stefano Stabellini
2019-05-20 21:01                 ` [Xen-devel] " Stefano Stabellini
2019-05-20 21:59                 ` Julien Grall
2019-05-20 21:59                   ` [Xen-devel] " Julien Grall
2019-06-10 20:51                   ` Stefano Stabellini
2019-06-10 21:03                     ` Julien Grall
2019-06-11 18:15                       ` Stefano Stabellini
2019-05-08 16:16 ` [PATCH v2 5/7] xen/arm: Gather all TLB flush helpers in tlbflush.h Julien Grall
2019-05-08 16:16   ` [Xen-devel] " Julien Grall
2019-05-09 20:17   ` Stefano Stabellini
2019-05-09 20:17     ` [Xen-devel] " Stefano Stabellini
2019-05-08 16:16 ` [PATCH v2 6/7] xen/arm: tlbflush: Rework TLB helpers Julien Grall
2019-05-08 16:16   ` [Xen-devel] " Julien Grall
2019-05-09 20:32   ` Stefano Stabellini
2019-05-09 20:32     ` [Xen-devel] " Stefano Stabellini
2019-05-09 20:43     ` Julien Grall
2019-05-09 20:43       ` [Xen-devel] " Julien Grall
2019-05-09 21:37   ` Stefano Stabellini
2019-05-09 21:37     ` [Xen-devel] " Stefano Stabellini
2019-05-08 16:16 ` [PATCH v2 7/7] xen/arm: mm: Flush the TLBs even if a mapping failed in create_xen_entries Julien Grall
2019-05-08 16:16   ` [Xen-devel] " Julien Grall
2019-05-09 20:40   ` Stefano Stabellini
2019-05-09 20:40     ` [Xen-devel] " Stefano Stabellini
