* [Xen-devel] [PATCH v7 0/3] x86/guest: use assisted TLB flush in guest mode
From: Roger Pau Monne @ 2020-03-19 15:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Tim Deegan, Jan Beulich,
	Roger Pau Monne

Hello,

This is the remainder of the assisted TLB flush series. This last set of
patches enables the use of the Xen assisted flush when running nested
on Xen.

Thanks, Roger.

Roger Pau Monne (3):
  x86/tlb: introduce a flush HVM ASIDs flag
  x86/tlb: allow disabling the TLB clock
  x86/tlb: use Xen L0 assisted TLB flush when available

 xen/arch/x86/flushtlb.c                | 25 +++++++++++++++++--------
 xen/arch/x86/guest/hypervisor.c        | 14 ++++++++++++++
 xen/arch/x86/guest/xen/xen.c           |  6 ++++++
 xen/arch/x86/mm/hap/hap.c              |  8 ++++----
 xen/arch/x86/mm/hap/nested_hap.c       |  2 +-
 xen/arch/x86/mm/p2m-pt.c               |  3 ++-
 xen/arch/x86/mm/paging.c               |  2 +-
 xen/arch/x86/mm/shadow/common.c        | 18 +++++++++---------
 xen/arch/x86/mm/shadow/hvm.c           |  2 +-
 xen/arch/x86/mm/shadow/multi.c         | 16 ++++++++--------
 xen/arch/x86/smp.c                     |  7 +++++++
 xen/include/asm-x86/flushtlb.h         | 23 ++++++++++++++++++++++-
 xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
 xen/include/xen/mm.h                   |  2 +-
 14 files changed, 110 insertions(+), 35 deletions(-)

-- 
2.25.0



* [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Roger Pau Monne @ 2020-03-19 15:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Tim Deegan, Jan Beulich,
	Roger Pau Monne

Introduce a specific flag to request an HVM guest linear TLB flush,
which is an ASID/VPID tickle that forces a guest linear to guest
physical TLB flush for all HVM guests.

This was previously done unconditionally in each pre_flush call, but
that's not required: HVM guests not using shadow don't require linear
TLB flushes, as Xen doesn't modify the guest page tables in that case
(i.e. when using HAP). Note that the shadow paging code already takes
care of issuing the necessary flushes when the shadow page tables are
modified.

In order to keep the previous behavior, modify all shadow code TLB
flushes to also flush the guest linear to physical TLB. I haven't
looked at each specific shadow code TLB flush in order to figure out
whether it actually requires a guest TLB flush or not, so there might
be room for improvement in that regard.

Also perform ASID/VPID flushes when modifying the p2m tables, as it's a
requirement for AMD hardware. Finally, keep the flush in
switch_cr3_cr4, as it's not clear whether code could rely on
switch_cr3_cr4 also performing a guest linear TLB flush. A following
patch can remove the ASID/VPID tickle from switch_cr3_cr4 if it's found
not to be necessary.
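
For illustration, the ASID/VPID "tickle" done by hvm_flush_guest_tlbs()
roughly amounts to the following. This is a simplified sketch of the
logic in xen/arch/x86/hvm/asid.c, not the verbatim code:

    static void hvm_asid_flush_core(void)
    {
        struct hvm_asid_data *data = &this_cpu(hvm_asid_data);

        /*
         * Bump the per-core ASID generation: every vCPU subsequently
         * scheduled on this core gets a fresh ASID/VPID at VM entry,
         * which invalidates all guest linear translations tagged with
         * the stale ASIDs.
         */
        if ( likely(++data->core_asid_generation != 0) )
            return;

        /* Generation counter wrapped: disable ASIDs for safety. */
        data->disabled = 1;
    }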

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v6:
 - Add ASID/VPID flushes when modifying the p2m.
 - Keep the ASID/VPID flush in switch_cr3_cr4.

Changes since v5:
 - Rename FLUSH_GUESTS_TLB to FLUSH_HVM_ASID_CORE.
 - Clarify commit message.
 - Define FLUSH_HVM_ASID_CORE to 0 when !CONFIG_HVM.
---
 xen/arch/x86/flushtlb.c          |  6 ++++--
 xen/arch/x86/mm/hap/hap.c        |  8 ++++----
 xen/arch/x86/mm/hap/nested_hap.c |  2 +-
 xen/arch/x86/mm/p2m-pt.c         |  3 ++-
 xen/arch/x86/mm/paging.c         |  2 +-
 xen/arch/x86/mm/shadow/common.c  | 18 +++++++++---------
 xen/arch/x86/mm/shadow/hvm.c     |  2 +-
 xen/arch/x86/mm/shadow/multi.c   | 16 ++++++++--------
 xen/include/asm-x86/flushtlb.h   |  6 ++++++
 xen/include/xen/mm.h             |  2 +-
 10 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index 03f92c23dc..c81e53c0ae 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -59,8 +59,6 @@ static u32 pre_flush(void)
         raise_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ);
 
  skip_clocktick:
-    hvm_flush_guest_tlbs();
-
     return t2;
 }
 
@@ -118,6 +116,7 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     local_irq_save(flags);
 
     t = pre_flush();
+    hvm_flush_guest_tlbs();
 
     old_cr4 = read_cr4();
     ASSERT(!(old_cr4 & X86_CR4_PCIDE) || !(old_cr4 & X86_CR4_PGE));
@@ -221,6 +220,9 @@ unsigned int flush_area_local(const void *va, unsigned int flags)
             do_tlb_flush();
     }
 
+    if ( flags & FLUSH_HVM_ASID_CORE )
+        hvm_flush_guest_tlbs();
+
     if ( flags & FLUSH_CACHE )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a6d5e39b02..004a89b4b9 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -118,7 +118,7 @@ int hap_track_dirty_vram(struct domain *d,
             p2m_change_type_range(d, begin_pfn, begin_pfn + nr,
                                   p2m_ram_rw, p2m_ram_logdirty);
 
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
             memset(dirty_bitmap, 0xff, size); /* consider all pages dirty */
         }
@@ -205,7 +205,7 @@ static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
          * to be read-only, or via hardware-assisted log-dirty.
          */
         p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
     return 0;
 }
@@ -234,7 +234,7 @@ static void hap_clean_dirty_bitmap(struct domain *d)
      * be read-only, or via hardware-assisted log-dirty.
      */
     p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 }
 
 /************************************************/
@@ -798,7 +798,7 @@ hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
 
     safe_write_pte(p, new);
     if ( old_flags & _PAGE_PRESENT )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     paging_unlock(d);
 
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index abe5958a52..9c0750be17 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -84,7 +84,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     safe_write_pte(p, new);
 
     if (old_flags & _PAGE_PRESENT)
-        flush_tlb_mask(p2m->dirty_cpumask);
+        flush_mask(p2m->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     paging_unlock(d);
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index eb66077496..fbcea181ba 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -896,7 +896,8 @@ static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
     unmap_domain_page(tab);
 
     if ( changed )
-         flush_tlb_mask(p2m->domain->dirty_cpumask);
+         flush_mask(p2m->domain->dirty_cpumask,
+                    FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 }
 
 static int p2m_pt_change_entry_type_range(struct p2m_domain *p2m,
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 469bb76429..f9d930b7a9 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -613,7 +613,7 @@ void paging_log_dirty_range(struct domain *d,
 
     p2m_unlock(p2m);
 
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 }
 
 /*
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 121ddf1255..aa750eafae 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -363,7 +363,7 @@ static int oos_remove_write_access(struct vcpu *v, mfn_t gmfn,
     }
 
     if ( ftlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     return 0;
 }
@@ -939,7 +939,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 /* See if that freed up enough space */
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
-                    flush_tlb_mask(d->dirty_cpumask);
+                    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
                     return;
                 }
             }
@@ -993,7 +993,7 @@ static void shadow_blow_tables(struct domain *d)
                                pagetable_get_mfn(v->arch.shadow_table[i]), 0);
 
     /* Make sure everyone sees the unshadowings */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 }
 
 void shadow_blow_tables_per_domain(struct domain *d)
@@ -1102,7 +1102,7 @@ mfn_t shadow_alloc(struct domain *d,
         if ( unlikely(!cpumask_empty(&mask)) )
         {
             perfc_incr(shadow_alloc_tlbflush);
-            flush_tlb_mask(&mask);
+            flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         }
         /* Now safe to clear the page for reuse */
         clear_domain_page(page_to_mfn(sp));
@@ -2290,7 +2290,7 @@ void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all)
 
     /* Need to flush TLBs now, so that linear maps are safe next time we
      * take a fault. */
-    flush_tlb_mask(d->dirty_cpumask);
+    flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     paging_unlock(d);
 }
@@ -3005,7 +3005,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
         {
             sh_remove_all_shadows_and_parents(d, mfn);
             if ( sh_remove_all_mappings(d, mfn, _gfn(gfn)) )
-                flush_tlb_mask(d->dirty_cpumask);
+                flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         }
     }
 
@@ -3045,7 +3045,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
                 }
                 omfn = mfn_add(omfn, 1);
             }
-            flush_tlb_mask(&flushmask);
+            flush_mask(&flushmask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
             if ( npte )
                 unmap_domain_page(npte);
@@ -3332,7 +3332,7 @@ int shadow_track_dirty_vram(struct domain *d,
         }
     }
     if ( flush_tlb )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     goto out;
 
 out_sl1ma:
@@ -3402,7 +3402,7 @@ bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     }
 
     /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
+    flush_mask(mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     /* Done. */
     for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 1e6024c71f..509162cdce 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -591,7 +591,7 @@ static void validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,
 
     if ( rc & SHADOW_SET_FLUSH )
         /* Need to flush TLBs to pick up shadow PT changes */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     if ( rc & SHADOW_SET_ERROR )
     {
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index b6afc0fba4..667fca96c7 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3066,7 +3066,7 @@ static int sh_page_fault(struct vcpu *v,
         perfc_incr(shadow_rm_write_flush_tlb);
         smp_wmb();
         atomic_inc(&d->arch.paging.shadow.gtable_dirty_version);
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
@@ -3575,7 +3575,7 @@ static bool sh_invlpg(struct vcpu *v, unsigned long linear)
     if ( mfn_to_page(sl1mfn)->u.sh.type
          == SH_type_fl1_shadow )
     {
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         return false;
     }
 
@@ -3810,7 +3810,7 @@ sh_update_linear_entries(struct vcpu *v)
          * table entry. But, without this change, it would fetch the wrong
          * value due to a stale TLB.
          */
-        flush_tlb_local();
+        flush_local(FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
 }
 
@@ -4011,7 +4011,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
      * (old) shadow linear maps in the writeable mapping heuristics. */
 #if GUEST_PAGING_LEVELS == 2
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
@@ -4035,7 +4035,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
             }
         }
         if ( flush )
-            flush_tlb_mask(d->dirty_cpumask);
+            flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
         /* Now install the new shadows. */
         for ( i = 0; i < 4; i++ )
         {
@@ -4056,7 +4056,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     }
 #elif GUEST_PAGING_LEVELS == 4
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
@@ -4502,7 +4502,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         }
     }
     if ( flush )
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 
     /* Remember that we've seen the guest use this interface, so we
      * can rely on it using it in future, instead of guessing at
@@ -4539,7 +4539,7 @@ static void sh_pagetable_dying(paddr_t gpa)
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);
         /* Now flush the TLB: we removed toplevel mappings. */
-        flush_tlb_mask(d->dirty_cpumask);
+        flush_mask(d->dirty_cpumask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
 
     /* Remember that we've seen the guest use this interface, so we
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 2cfe4e6e97..579dc56803 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -105,6 +105,12 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 #define FLUSH_VCPU_STATE 0x1000
  /* Flush the per-cpu root page table */
 #define FLUSH_ROOT_PGTBL 0x2000
+#ifdef CONFIG_HVM
+ /* Flush all HVM guests linear TLB (using ASID/VPID) */
+#define FLUSH_HVM_ASID_CORE 0x4000
+#else
+#define FLUSH_HVM_ASID_CORE 0
+#endif
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index d0d095d9c7..02aad43042 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -644,7 +644,7 @@ static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
     if ( !cpumask_empty(&mask) )
     {
         perfc_incr(need_flush_tlb_flush);
-        flush_tlb_mask(&mask);
+        flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
     }
 }
 
-- 
2.25.0



* [Xen-devel] [PATCH v7 2/3] x86/tlb: allow disabling the TLB clock
From: Roger Pau Monne @ 2020-03-19 15:47 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The TLB clock is helpful when running Xen on bare metal because, when
doing a TLB flush, each CPU is IPI'ed and can keep a timestamp of its
last flush.

This is not the case however when Xen is running virtualized and the
underlying hypervisor provides mechanisms to assist in performing TLB
flushes: Xen itself for example offers an HVMOP_flush_tlbs hypercall in
order to perform a TLB flush without having to IPI each CPU. When
using such mechanisms it's no longer possible to keep a timestamp of
the flushes on each CPU, as they are performed by the underlying
hypervisor.

Offer a boolean in order to signal Xen that the TLB clock shouldn't be
used. This avoids keeping timestamps of the flushes, and also forces
NEED_FLUSH to always return true.

No functional change intended, as this change doesn't introduce any
user that disables the timestamped TLB.
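
For context on why this forces NEED_FLUSH to always return true: with
the clock disabled tlbflush_current_time() returns 0, and the
NEED_FLUSH() check in asm-x86/flushtlb.h is roughly (paraphrased, not
quoted verbatim):

    static inline int NEED_FLUSH(u32 cpu_stamp, u32 lastuse_stamp)
    {
        u32 curr_time = tlbflush_current_time();

        /*
         * A clock reading of 0 unconditionally reports the CPU as
         * needing a flush; otherwise a flush is only needed if the
         * CPU's last flush predates the page's last use.
         */
        return ((curr_time == 0) ||
                ((cpu_stamp <= lastuse_stamp) &&
                 (lastuse_stamp <= curr_time)));
    }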

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu <wl@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/flushtlb.c        | 19 +++++++++++++------
 xen/include/asm-x86/flushtlb.h | 17 ++++++++++++++++-
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/flushtlb.c b/xen/arch/x86/flushtlb.c
index c81e53c0ae..22b2e84329 100644
--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -32,6 +32,9 @@
 u32 tlbflush_clock = 1U;
 DEFINE_PER_CPU(u32, tlbflush_time);
 
+/* Signals whether the TLB flush clock is in use. */
+bool __read_mostly tlb_clk_enabled = true;
+
 /*
  * pre_flush(): Increment the virtual TLB-flush clock. Returns new clock value.
  * 
@@ -82,12 +85,13 @@ static void post_flush(u32 t)
 static void do_tlb_flush(void)
 {
     unsigned long flags, cr4;
-    u32 t;
+    u32 t = 0;
 
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
 
     if ( use_invpcid )
         invpcid_flush_all();
@@ -99,7 +103,8 @@ static void do_tlb_flush(void)
     else
         write_cr3(read_cr3());
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
@@ -107,7 +112,7 @@ static void do_tlb_flush(void)
 void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
 {
     unsigned long flags, old_cr4;
-    u32 t;
+    u32 t = 0;
 
     /* Throughout this function we make this assumption: */
     ASSERT(!(cr4 & X86_CR4_PCIDE) || !(cr4 & X86_CR4_PGE));
@@ -115,7 +120,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     /* This non-reentrant function is sometimes called in interrupt context. */
     local_irq_save(flags);
 
-    t = pre_flush();
+    if ( tlb_clk_enabled )
+        t = pre_flush();
     hvm_flush_guest_tlbs();
 
     old_cr4 = read_cr4();
@@ -168,7 +174,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4)
     if ( cr4 & X86_CR4_PCIDE )
         invpcid_flush_all_nonglobals();
 
-    post_flush(t);
+    if ( tlb_clk_enabled )
+        post_flush(t);
 
     local_irq_restore(flags);
 }
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 579dc56803..724455ae0c 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -21,10 +21,21 @@ extern u32 tlbflush_clock;
 /* Time at which each CPU's TLB was last flushed. */
 DECLARE_PER_CPU(u32, tlbflush_time);
 
-#define tlbflush_current_time() tlbflush_clock
+/* TLB clock is in use. */
+extern bool tlb_clk_enabled;
+
+static inline uint32_t tlbflush_current_time(void)
+{
+    /* Returning 0 from tlbflush_current_time will always force a flush. */
+    return tlb_clk_enabled ? tlbflush_clock : 0;
+}
 
 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
+    /* Avoid the write if the TLB clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     /*
      * Prevent storing a stale time stamp, which could happen if an update
      * to tlbflush_clock plus a subsequent flush IPI happen between the
@@ -67,6 +78,10 @@ static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp)
 {
     unsigned int cpu;
 
+    /* Short-circuit: there's no need to iterate if the clock is disabled. */
+    if ( !tlb_clk_enabled )
+        return;
+
     for_each_cpu ( cpu, mask )
         if ( !NEED_FLUSH(per_cpu(tlbflush_time, cpu), page_timestamp) )
             __cpumask_clear_cpu(cpu, mask);
-- 
2.25.0



* [Xen-devel] [PATCH v7 3/3] x86/tlb: use Xen L0 assisted TLB flush when available
From: Roger Pau Monne @ 2020-03-19 15:47 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes.
This greatly increases the performance of TLB flushes when running
with a high number of vCPUs as a Xen guest, and is especially
important when running in shim mode.

The following figures are from a PV guest running `make -j32 xen` in
shim mode with 32 vCPUs and HAP.

Using x2APIC and ALLBUT shorthand:
real	4m35.973s
user	4m35.110s
sys	36m24.117s

Using L0 assisted flush:
real    1m2.596s
user    4m34.818s
sys     5m16.374s

The implementation adds a new hook to hypervisor_ops so other
enlightenments can also implement such an assisted flush simply by
filling in the hook.

Note that the Xen implementation completely ignores the dirty CPU mask
and the linear address passed in, and always performs a global TLB
flush on all vCPUs. This is a limitation of the hypercall provided by
Xen. Also note that local TLB flushes are not performed using the
assisted TLB flush, only remote ones.
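
As a purely hypothetical sketch of how another enlightenment could fill
the hook (not part of this series; the hypercall name is Hyper-V's,
everything else is illustrative):

    static int flush_tlb(const cpumask_t *mask, const void *va,
                         unsigned int order)
    {
        /*
         * A Hyper-V implementation would issue its assisted-flush
         * hypercall here (e.g. HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE)
         * and return 0 on success.
         */
        return -EOPNOTSUPP;
    }

    static const struct hypervisor_ops __initconstrel ops = {
        .name = "Hyper-V",
        /* other hooks elided */
        .flush_tlb = flush_tlb,
    };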

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu <wl@xen.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v5:
 - Clarify commit message.
 - Test for assisted flush at setup, do this for all hypervisors.
 - Return EOPNOTSUPP if assisted flush is not available.

Changes since v4:
 - Adjust order calculation.

Changes since v3:
 - Use an alternative call for the flush hook.

Changes since v1:
 - Add a L0 assisted hook to hypervisor ops.
---
 xen/arch/x86/guest/hypervisor.c        | 14 ++++++++++++++
 xen/arch/x86/guest/xen/xen.c           |  6 ++++++
 xen/arch/x86/smp.c                     |  7 +++++++
 xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
 4 files changed, 44 insertions(+)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 647cdb1367..e46de42ded 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -18,6 +18,7 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include <xen/cpumask.h>
 #include <xen/init.h>
 #include <xen/types.h>
 
@@ -51,6 +52,10 @@ void __init hypervisor_setup(void)
 {
     if ( ops.setup )
         ops.setup();
+
+    /* Check if assisted flush is available and disable the TLB clock if so. */
+    if ( !hypervisor_flush_tlb(cpumask_of(smp_processor_id()), NULL, 0) )
+        tlb_clk_enabled = false;
 }
 
 int hypervisor_ap_setup(void)
@@ -73,6 +78,15 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
         ops.e820_fixup(e820);
 }
 
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order)
+{
+    if ( ops.flush_tlb )
+        return alternative_call(ops.flush_tlb, mask, va, order);
+
+    return -EOPNOTSUPP;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index e74fd1e995..3bc01c8723 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,12 +324,18 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+{
+    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
+}
+
 static const struct hypervisor_ops __initconstrel ops = {
     .name = "Xen",
     .setup = setup,
     .ap_setup = ap_setup,
     .resume = resume,
     .e820_fixup = e820_fixup,
+    .flush_tlb = flush_tlb,
 };
 
 const struct hypervisor_ops *__init xg_probe(void)
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index bcead5d01b..1d9fec65de 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -15,6 +15,7 @@
 #include <xen/perfc.h>
 #include <xen/spinlock.h>
 #include <asm/current.h>
+#include <asm/guest.h>
 #include <asm/smp.h>
 #include <asm/mc146818rtc.h>
 #include <asm/flushtlb.h>
@@ -268,6 +269,12 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
     if ( (flags & ~FLUSH_ORDER_MASK) &&
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
+        if ( cpu_has_hypervisor &&
+             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
+                         FLUSH_ORDER_MASK)) &&
+             !hypervisor_flush_tlb(mask, va, (flags - 1) & FLUSH_ORDER_MASK) )
+            return;
+
         spin_lock(&flush_lock);
         cpumask_and(&flush_cpumask, mask, &cpu_online_map);
         cpumask_clear_cpu(cpu, &flush_cpumask);
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index ade10e74ea..77a1d21824 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -19,6 +19,8 @@
 #ifndef __X86_HYPERVISOR_H__
 #define __X86_HYPERVISOR_H__
 
+#include <xen/cpumask.h>
+
 #include <asm/e820.h>
 
 struct hypervisor_ops {
@@ -32,6 +34,8 @@ struct hypervisor_ops {
     void (*resume)(void);
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
+    /* L0 assisted TLB flush */
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
 };
 
 #ifdef CONFIG_GUEST
@@ -41,6 +45,14 @@ void hypervisor_setup(void);
 int hypervisor_ap_setup(void);
 void hypervisor_resume(void);
 void hypervisor_e820_fixup(struct e820map *e820);
+/*
+ * L0 assisted TLB flush.
+ * mask: cpumask of the dirty vCPUs that should be flushed.
+ * va: linear address to flush, or NULL for global flushes.
+ * order: order of the linear address pointed by va.
+ */
+int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                         unsigned int order);
 
 #else
 
@@ -52,6 +64,11 @@ static inline void hypervisor_setup(void) { ASSERT_UNREACHABLE(); }
 static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
+static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
+                                       unsigned int order)
+{
+    return -EOPNOTSUPP;
+}
 
 #endif  /* CONFIG_GUEST */
 
-- 
2.25.0



* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Julien Grall @ 2020-03-19 16:21 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich

Hi,

On 19/03/2020 15:47, Roger Pau Monne wrote:
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index d0d095d9c7..02aad43042 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -644,7 +644,7 @@ static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
>       if ( !cpumask_empty(&mask) )
>       {
>           perfc_incr(need_flush_tlb_flush);
> -        flush_tlb_mask(&mask);
> +        flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);

A rule of thumb is that any modification in common code may impact Arm. 
This is the case here because the flag and the "new" function are not 
defined on Arm and are therefore going to break the build.

Why can't you keep flush_tlb_mask() here?

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Roger Pau Monné @ 2020-03-19 17:38 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> Hi,
> 
> On 19/03/2020 15:47, Roger Pau Monne wrote:
> > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> > index d0d095d9c7..02aad43042 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -644,7 +644,7 @@ static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> >       if ( !cpumask_empty(&mask) )
> >       {
> >           perfc_incr(need_flush_tlb_flush);
> > -        flush_tlb_mask(&mask);
> > +        flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
> 
> A rule of thumb is any modification in common code may impact Arm. This is a
> case here because the flag and the "new" function are not defined on Arm and
> therefore going to break the build.

flush_mask is not a new function, it's just not implemented on ARM I
guess.

> Why can't you keep flush_tlb_mask() here?

Because filtered_flush_tlb_mask is used in populate_physmap, and
changes to the phymap require an ASID flush on AMD hardware.

I will send an updated version.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Julien Grall @ 2020-03-19 18:07 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel



On 19/03/2020 17:38, Roger Pau Monné wrote:
> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>> Hi,
>>
>> On 19/03/2020 15:47, Roger Pau Monne wrote:
>>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>>> index d0d095d9c7..02aad43042 100644
>>> --- a/xen/include/xen/mm.h
>>> +++ b/xen/include/xen/mm.h
>>> @@ -644,7 +644,7 @@ static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
>>>        if ( !cpumask_empty(&mask) )
>>>        {
>>>            perfc_incr(need_flush_tlb_flush);
>>> -        flush_tlb_mask(&mask);
>>> +        flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
>>
>> A rule of thumb is any modification in common code may impact Arm. This is a
>> case here because the flag and the "new" function are not defined on Arm and
>> therefore going to break the build.
> 
> flush_mask is not a new function, it's just not implemented on ARM I
> guess.

That's why I said it in "" ;).

>  >> Why can't you keep flush_tlb_mask() here?
> 
> Because filtered_flush_tlb_mask is used in populate_physmap, and
> changes to the phymap require an ASID flush on AMD hardware.

I am afraid this does not yet explain to me why flush_tlb_mask() could not 
be updated so it flushes the ASID on AMD hardware.

This would actually match the behavior of flush_tlb_mask() on Arm where 
all the guest TLBs would be removed.

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Roger Pau Monné @ 2020-03-19 18:43 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> 
> 
> On 19/03/2020 17:38, Roger Pau Monné wrote:
> > On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> >  >> Why can't you keep flush_tlb_mask() here?
> > 
> > Because filtered_flush_tlb_mask is used in populate_physmap, and
> > changes to the phymap require an ASID flush on AMD hardware.
> 
> I am afraid this does not yet explain me why flush_tlb_mask() could not be
> updated so it flush the ASID on AMD hardware.

Current behavior previous to this patch is to flush the ASIDs on
every TLB flush.

flush_tlb_mask is too widely used on x86 in places where there's no
need to flush the ASIDs. This prevents using assisted flushes (by L0)
when running nested, since those assisted flushes performed by L0
don't flush the L2 guests TLBs.

I could keep the current behavior and leave flush_tlb_mask also flushing
the ASIDs, but that seems wrong, as the function doesn't have anything in
its name that suggests it also flushes the in-guest TLBs for HVM.

I would rather prefer the default to be to not flush the
ASIDs, so that users need to specify so by passing the flag to
flush_mask.

> This would actually match the behavior of flush_tlb_mask() on Arm where all
> the guest TLBs would be removed.

That's how it used to be previous to this patch, and the whole point
is to split the ASID flushes into a separate flag, so it's not done
for every TLB flush.
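
To make the distinction concrete, after this patch callers have to
choose explicitly, e.g.:

    /* Flush Xen's own TLBs only on the CPUs in mask. */
    flush_tlb_mask(mask);

    /* Also tickle the HVM ASIDs/VPIDs (the pre-patch behavior). */
    flush_mask(mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);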

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Julien Grall @ 2020-03-19 19:07 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

Hi,

On 19/03/2020 18:43, Roger Pau Monné wrote:
> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>
>>
>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>   >> Why can't you keep flush_tlb_mask() here?
>>>
>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>> changes to the phymap require an ASID flush on AMD hardware.
>>
>> I am afraid this does not yet explain me why flush_tlb_mask() could not be
>> updated so it flush the ASID on AMD hardware.
> 
> Current behavior previous to this patch is to flush the ASIDs on
> every TLB flush.
> 
> flush_tlb_mask is too widely used on x86 in places where there's no
> need to flush the ASIDs. This prevents using assisted flushes (by L0)
> when running nested, since those assisted flushes performed by L0
> don't flush the L2 guests TLBs.
> 
> I could keep current behavior and leave flush_tlb_mask also flushing the
> ASIDs, but that seems wrong as the function doesn't have anything in
> it's name that suggests it also flushes the in-guest TLBs for HVM.

I agree the name is confusing; I had to look at the implementation to 
understand what it does.

How about renaming (or introducing) the function to 
flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?

> 
> I would rather prefer the default to be to not flush the
> ASIDs, so that users need to specify so by passing the flag to
> flusk_mask.
That's an x86 choice. For common code, I would rather not introduce those 
flags until we have another arch that makes use of them.

> 
>> This would actually match the behavior of flush_tlb_mask() on Arm where all
>> the guest TLBs would be removed.
> 
> That's how it used to be previous to this patch, and the whole point
> is to split the ASID flushes into a separate flag, so it's not done
> for every TLB flush.

Well, flush_tlb_mask() is only implemented for the benefit of common 
code. It has no other users on Arm.

It feels to me that we want a helper that will nuke all the guest TLBs 
on a given set of CPUs (see above for some name suggestions).

On x86, you could implement it using flush_mask(). On Arm, this could be 
a rename of the existing function flush_tlb_mask().

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Jan Beulich @ 2020-03-20  7:21 UTC (permalink / raw)
  To: Julien Grall, Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, xen-devel

On 19.03.2020 20:07, Julien Grall wrote:
> Hi,
> 
> On 19/03/2020 18:43, Roger Pau Monné wrote:
>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>
>>>
>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>   >> Why can't you keep flush_tlb_mask() here?
>>>>
>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>> changes to the phymap require an ASID flush on AMD hardware.
>>>
>>> I am afraid this does not yet explain me why flush_tlb_mask() could not be
>>> updated so it flush the ASID on AMD hardware.
>>
>> Current behavior previous to this patch is to flush the ASIDs on
>> every TLB flush.
>>
>> flush_tlb_mask is too widely used on x86 in places where there's no
>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>> when running nested, since those assisted flushes performed by L0
>> don't flush the L2 guests TLBs.
>>
>> I could keep current behavior and leave flush_tlb_mask also flushing the
>> ASIDs, but that seems wrong as the function doesn't have anything in
>> it's name that suggests it also flushes the in-guest TLBs for HVM.
> 
> I agree the name is confusing, I had to look at the implementation to understand what it does.
> 
> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()) ?

And this would then flush _only_ guest TLBs?

>> I would rather prefer the default to be to not flush the
>> ASIDs, so that users need to specify so by passing the flag to
>> flusk_mask.
> That's x86 choice. For common, I would rather no introduce those flags until we have another arch that make use of it.

The flags should perhaps indeed remain x86-specific, but suitable
wrappers usable from common code should exist (as you suggest
below).

Jan

>>> This would actually match the behavior of flush_tlb_mask() on Arm where all
>>> the guest TLBs would be removed.
>>
>> That's how it used to be previous to this patch, and the whole point
>> is to split the ASID flushes into a separate flag, so it's not done
>> for every TLB flush.
> 
> Well, tlb_flush_mask() is only implemented for the benefit of common code. It has no other users on Arm.
> 
> It feels to me that we want an helper that will nuke all the guest TLBs on a given set of CPUs (see above for some name suggestion).
> 
> On x86, you could implement it using flush_mask(). On Arm, this could be a rename of the existing function flush_tlb_mask().
> 
> Cheers,
> 



* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Roger Pau Monné @ 2020-03-20  9:01 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Tim Deegan, xen-devel

On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
> On 19.03.2020 20:07, Julien Grall wrote:
> > Hi,
> > 
> > On 19/03/2020 18:43, Roger Pau Monné wrote:
> >> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> >>>
> >>>
> >>> On 19/03/2020 17:38, Roger Pau Monné wrote:
> >>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> >>>>   >> Why can't you keep flush_tlb_mask() here?
> >>>>
> >>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
> >>>> changes to the phymap require an ASID flush on AMD hardware.
> >>>
> >>> I am afraid this does not yet explain me why flush_tlb_mask() could not be
> >>> updated so it flush the ASID on AMD hardware.
> >>
> >> Current behavior previous to this patch is to flush the ASIDs on
> >> every TLB flush.
> >>
> >> flush_tlb_mask is too widely used on x86 in places where there's no
> >> need to flush the ASIDs. This prevents using assisted flushes (by L0)
> >> when running nested, since those assisted flushes performed by L0
> >> don't flush the L2 guests TLBs.
> >>
> >> I could keep current behavior and leave flush_tlb_mask also flushing the
> >> ASIDs, but that seems wrong as the function doesn't have anything in
> >> it's name that suggests it also flushes the in-guest TLBs for HVM.
> > 
> > I agree the name is confusing, I had to look at the implementation to understand what it does.
> > 
> > How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()) ?
> 
> And this would then flush _only_ guest TLBs?

No, I think from Julien's proposal (if I understood it correctly)
flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
previous to this patch (flush Xen's TLBs + HVM ASIDs).

> >> I would rather prefer the default to be to not flush the
> >> ASIDs, so that users need to specify so by passing the flag to
> >> flusk_mask.
> > That's x86 choice. For common, I would rather no introduce those flags until we have another arch that make use of it.
> 
> The flags should perhaps indeed remain x86-specific, but suitable
> wrappers usable from common code should exist (as you suggest
> below).

I don't have a strong opinion re naming, are you OK with the names
proposed above?

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  1 sibling, 2 replies; 29+ messages in thread
From: Julien Grall @ 2020-03-20  9:12 UTC (permalink / raw)
  To: Roger Pau Monné, Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, xen-devel

Hi Roger,

On 20/03/2020 09:01, Roger Pau Monné wrote:
> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>> On 19.03.2020 20:07, Julien Grall wrote:
>>> Hi,
>>>
>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>    >> Why can't you keep flush_tlb_mask() here?
>>>>>>
>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>> changes to the phymap require an ASID flush on AMD hardware.
>>>>>
>>>>> I am afraid this does not yet explain me why flush_tlb_mask() could not be
>>>>> updated so it flush the ASID on AMD hardware.
>>>>
>>>> Current behavior previous to this patch is to flush the ASIDs on
>>>> every TLB flush.
>>>>
>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>> when running nested, since those assisted flushes performed by L0
>>>> don't flush the L2 guests TLBs.
>>>>
>>>> I could keep current behavior and leave flush_tlb_mask also flushing the
>>>> ASIDs, but that seems wrong as the function doesn't have anything in
>>>> it's name that suggests it also flushes the in-guest TLBs for HVM.
>>>
>>> I agree the name is confusing, I had to look at the implementation to understand what it does.
>>>
>>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()) ?
>>
>> And this would then flush _only_ guest TLBs?
> 
> No, I think from Julien's proposal (if I understood it correctly)
> flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
> previous to this patch (flush Xen's TLBs + HVM ASIDs).

It looks like there might be confusion on what "guest TLBs" means. In my 
view this means any TLBs associated directly or indirectly with the 
guest. On Arm, this would nuke:
    - guest virtual address -> guest physical address TLB entry
    - guest physical address -> host physical address TLB entry
    - guest virtual address -> host physical address TLB entry

I would assume you want something similar on x86, right?

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Roger Pau Monné @ 2020-03-20  9:42 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Fri, Mar 20, 2020 at 09:12:16AM +0000, Julien Grall wrote:
> Hi Roger,
> 
> On 20/03/2020 09:01, Roger Pau Monné wrote:
> > On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
> > > On 19.03.2020 20:07, Julien Grall wrote:
> > > > Hi,
> > > > 
> > > > On 19/03/2020 18:43, Roger Pau Monné wrote:
> > > > > On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> > > > > > 
> > > > > > 
> > > > > > On 19/03/2020 17:38, Roger Pau Monné wrote:
> > > > > > > On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> > > > > > >    >> Why can't you keep flush_tlb_mask() here?
> > > > > > > 
> > > > > > > Because filtered_flush_tlb_mask is used in populate_physmap, and
> > > > > > > changes to the phymap require an ASID flush on AMD hardware.
> > > > > > 
> > > > > > I am afraid this does not yet explain me why flush_tlb_mask() could not be
> > > > > > updated so it flush the ASID on AMD hardware.
> > > > > 
> > > > > Current behavior previous to this patch is to flush the ASIDs on
> > > > > every TLB flush.
> > > > > 
> > > > > flush_tlb_mask is too widely used on x86 in places where there's no
> > > > > need to flush the ASIDs. This prevents using assisted flushes (by L0)
> > > > > when running nested, since those assisted flushes performed by L0
> > > > > don't flush the L2 guests TLBs.
> > > > > 
> > > > > I could keep current behavior and leave flush_tlb_mask also flushing the
> > > > > ASIDs, but that seems wrong as the function doesn't have anything in
> > > > > it's name that suggests it also flushes the in-guest TLBs for HVM.
> > > > 
> > > > I agree the name is confusing, I had to look at the implementation to understand what it does.
> > > > 
> > > > How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()) ?
> > > 
> > > And this would then flush _only_ guest TLBs?
> > 
> > No, I think from Julien's proposal (if I understood it correctly)
> > flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
> > previous to this patch (flush Xen's TLBs + HVM ASIDs).
> 
> It looks like there might be confusion on what "guest TLBs" means. In my
> view this means any TLBs associated directly or indirectly with the guest.
> On Arm, this would be nuke:
>    - guest virtual address -> guest physical address TLB entry
>    - guest physical address -> host physical address TLB entry
>    - guest virtual address -> host physical address TLB entry

AFAICT ASID flush on AMD hardware will flush any of the above, while
VPID flush on Intel will only flush the first item (guest linear to
guest physical). When using EPT on Intel you need to issue EPT flushes
when modifying the p2m, which will get rid of the last two types of
cached translations (guest-physical mappings and combined mappings in
Intel speak).

So the response is 'it depends' on whether the underlying hardware is
Intel or AMD. That's why the constant was renamed from
FLUSH_HVM_GUESTS_TLB to FLUSH_HVM_ASID_CORE.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Roger Pau Monné @ 2020-03-20 10:00 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Fri, Mar 20, 2020 at 10:42:14AM +0100, Roger Pau Monné wrote:
> On Fri, Mar 20, 2020 at 09:12:16AM +0000, Julien Grall wrote:
> > Hi Roger,
> > 
> > On 20/03/2020 09:01, Roger Pau Monné wrote:
> > > On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
> > > > On 19.03.2020 20:07, Julien Grall wrote:
> > > > > Hi,
> > > > > 
> > > > > On 19/03/2020 18:43, Roger Pau Monné wrote:
> > > > > > On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On 19/03/2020 17:38, Roger Pau Monné wrote:
> > > > > > > > On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> > > > > > > >    >> Why can't you keep flush_tlb_mask() here?
> > > > > > > > 
> > > > > > > > Because filtered_flush_tlb_mask is used in populate_physmap, and
> > > > > > > > changes to the phymap require an ASID flush on AMD hardware.
> > > > > > > 
> > > > > > > I am afraid this does not yet explain me why flush_tlb_mask() could not be
> > > > > > > updated so it flush the ASID on AMD hardware.
> > > > > > 
> > > > > > Current behavior previous to this patch is to flush the ASIDs on
> > > > > > every TLB flush.
> > > > > > 
> > > > > > flush_tlb_mask is too widely used on x86 in places where there's no
> > > > > > need to flush the ASIDs. This prevents using assisted flushes (by L0)
> > > > > > when running nested, since those assisted flushes performed by L0
> > > > > > don't flush the L2 guests TLBs.
> > > > > > 
> > > > > > I could keep current behavior and leave flush_tlb_mask also flushing the
> > > > > > ASIDs, but that seems wrong as the function doesn't have anything in
> > > > > > it's name that suggests it also flushes the in-guest TLBs for HVM.
> > > > > 
> > > > > I agree the name is confusing, I had to look at the implementation to understand what it does.
> > > > > 
> > > > > How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()) ?
> > > > 
> > > > And this would then flush _only_ guest TLBs?
> > > 
> > > No, I think from Julien's proposal (if I understood it correctly)
> > > flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
> > > previous to this patch (flush Xen's TLBs + HVM ASIDs).
> > 
> > It looks like there might be confusion on what "guest TLBs" means. In my
> > view this means any TLBs associated directly or indirectly with the guest.
> > On Arm, this would be nuke:
> >    - guest virtual address -> guest physical address TLB entry
> >    - guest physical address -> host physical address TLB entry
> >    - guest virtual address -> host physical address TLB entry
> 
> AFAICT ASID flush on AMD hardware will flush any of the above, while
> VPID flush on Intel will only flush the first item (guest linear to

Sorry, doing too many things at the same time. On Intel, VPID flushes
will get rid of guest virtual to guest physical or host physical
translations, but not of guest physical to host physical ones; you
need an EPT flush to accomplish that.
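
To summarize my understanding (using Intel SDM terminology, so take
this as a rough mapping rather than an authoritative statement):

    /*
     * Which cached translations each flush type invalidates:
     *  - AMD ASID flush:   linear, guest-physical and combined
     *                      mappings (all of the above).
     *  - Intel VPID flush: linear and combined mappings only.
     *  - Intel EPT flush:  guest-physical and combined mappings.
     */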

Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
From: Julien Grall @ 2020-03-20 10:08 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

Hi,

On 20/03/2020 10:00, Roger Pau Monné wrote:
> On Fri, Mar 20, 2020 at 10:42:14AM +0100, Roger Pau Monné wrote:
>> On Fri, Mar 20, 2020 at 09:12:16AM +0000, Julien Grall wrote:
>>> Hi Roger,
>>>
>>> On 20/03/2020 09:01, Roger Pau Monné wrote:
>>>> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>>>>> On 19.03.2020 20:07, Julien Grall wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>>>>     >> Why can't you keep flush_tlb_mask() here?
>>>>>>>>>
>>>>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>>>>> changes to the physmap require an ASID flush on AMD hardware.
>>>>>>>>
>>>>>>>> I am afraid this does not yet explain to me why flush_tlb_mask() could not be
>>>>>>>> updated so it flushes the ASID on AMD hardware.
>>>>>>>
>>>>>>> Current behavior previous to this patch is to flush the ASIDs on
>>>>>>> every TLB flush.
>>>>>>>
>>>>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>>>>> when running nested, since those assisted flushes performed by L0
>>>>>>> don't flush the L2 guests TLBs.
>>>>>>>
>>>>>>> I could keep current behavior and leave flush_tlb_mask also flushing the
>>>>>>> ASIDs, but that seems wrong as the function doesn't have anything in
>>>>>>> its name that suggests it also flushes the in-guest TLBs for HVM.
>>>>>>
>>>>>> I agree the name is confusing; I had to look at the implementation to understand what it does.
>>>>>>
>>>>>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
>>>>>
>>>>> And this would then flush _only_ guest TLBs?
>>>>
>>>> No, I think from Julien's proposal (if I understood it correctly)
>>>> flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
>>>> previous to this patch (flush Xen's TLBs + HVM ASIDs).
>>>
>>> It looks like there might be confusion on what "guest TLBs" means. In my
>>> view this means any TLBs associated directly or indirectly with the guest.
>>> On Arm, this would nuke:
>>>     - guest virtual address -> guest physical address TLB entry
>>>     - guest physical address -> host physical address TLB entry
>>>     - guest virtual address -> host physical address TLB entry
>>
>> AFAICT ASID flush on AMD hardware will flush any of the above, while
>> VPID flush on Intel will only flush the first item (guest linear to
> 
> Sorry, doing too many things at the same time. On Intel, VPID flushes
> will get rid of guest virtual to guest physical or host physical
> entries, but not of guest physical to host physical ones; you need an
> EPT flush to accomplish that.
Are you suggesting that on x86, flush_tlb_mask() would not nuke the 
guest physical to host physical entries? If so, how is it meant to be safe?

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 10:08                       ` Julien Grall
@ 2020-03-20 10:24                         ` Roger Pau Monné
  2020-03-20 10:36                           ` Julien Grall
  0 siblings, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2020-03-20 10:24 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Fri, Mar 20, 2020 at 10:08:33AM +0000, Julien Grall wrote:
> Hi,
> 
> On 20/03/2020 10:00, Roger Pau Monné wrote:
> > On Fri, Mar 20, 2020 at 10:42:14AM +0100, Roger Pau Monné wrote:
> > > On Fri, Mar 20, 2020 at 09:12:16AM +0000, Julien Grall wrote:
> > > > Hi Roger,
> > > > 
> > > > On 20/03/2020 09:01, Roger Pau Monné wrote:
> > > > > On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
> > > > > > On 19.03.2020 20:07, Julien Grall wrote:
> > > > > > > Hi,
> > > > > > > 
> > > > > > > On 19/03/2020 18:43, Roger Pau Monné wrote:
> > > > > > > > On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On 19/03/2020 17:38, Roger Pau Monné wrote:
> > > > > > > > > > On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> > > > > > > > > >     >> Why can't you keep flush_tlb_mask() here?
> > > > > > > > > > 
> > > > > > > > > > Because filtered_flush_tlb_mask is used in populate_physmap, and
> > > > > > > > > > changes to the physmap require an ASID flush on AMD hardware.
> > > > > > > > > 
> > > > > > > > > I am afraid this does not yet explain to me why flush_tlb_mask() could not be
> > > > > > > > > updated so it flushes the ASID on AMD hardware.
> > > > > > > > 
> > > > > > > > Current behavior previous to this patch is to flush the ASIDs on
> > > > > > > > every TLB flush.
> > > > > > > > 
> > > > > > > > flush_tlb_mask is too widely used on x86 in places where there's no
> > > > > > > > need to flush the ASIDs. This prevents using assisted flushes (by L0)
> > > > > > > > when running nested, since those assisted flushes performed by L0
> > > > > > > > don't flush the L2 guests TLBs.
> > > > > > > > 
> > > > > > > > I could keep current behavior and leave flush_tlb_mask also flushing the
> > > > > > > > ASIDs, but that seems wrong as the function doesn't have anything in
> > > > > > > > its name that suggests it also flushes the in-guest TLBs for HVM.
> > > > > > > 
> > > > > > > I agree the name is confusing; I had to look at the implementation to understand what it does.
> > > > > > > 
> > > > > > > How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
> > > > > > 
> > > > > > And this would then flush _only_ guest TLBs?
> > > > > 
> > > > > No, I think from Julien's proposal (if I understood it correctly)
> > > > > flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
> > > > > previous to this patch (flush Xen's TLBs + HVM ASIDs).
> > > > 
> > > > It looks like there might be confusion on what "guest TLBs" means. In my
> > > > view this means any TLBs associated directly or indirectly with the guest.
> > > > On Arm, this would nuke:
> > > >     - guest virtual address -> guest physical address TLB entry
> > > >     - guest physical address -> host physical address TLB entry
> > > >     - guest virtual address -> host physical address TLB entry
> > > 
> > > AFAICT ASID flush on AMD hardware will flush any of the above, while
> > > VPID flush on Intel will only flush the first item (guest linear to
> > 
> > Sorry, doing too many things at the same time. On Intel, VPID flushes
> > will get rid of guest virtual to guest physical or host physical
> > entries, but not of guest physical to host physical ones; you need an
> > EPT flush to accomplish that.
> Are you suggesting that on x86, flush_tlb_mask() would not nuke the guest
> physical to host physical entries? If so, how is it meant to be safe?

You issue EPT flushes in that case when an EPT modification is
performed.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 10:24                         ` Roger Pau Monné
@ 2020-03-20 10:36                           ` Julien Grall
  2020-03-20 11:18                             ` Roger Pau Monné
  0 siblings, 1 reply; 29+ messages in thread
From: Julien Grall @ 2020-03-20 10:36 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

Hi,

On 20/03/2020 10:24, Roger Pau Monné wrote:
> On Fri, Mar 20, 2020 at 10:08:33AM +0000, Julien Grall wrote:
>> Hi,
>>
>> On 20/03/2020 10:00, Roger Pau Monné wrote:
>>> On Fri, Mar 20, 2020 at 10:42:14AM +0100, Roger Pau Monné wrote:
>>>> On Fri, Mar 20, 2020 at 09:12:16AM +0000, Julien Grall wrote:
>>>>> Hi Roger,
>>>>>
>>>>> On 20/03/2020 09:01, Roger Pau Monné wrote:
>>>>>> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>>>>>>> On 19.03.2020 20:07, Julien Grall wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>>>>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>>>>>>      >> Why can't you keep flush_tlb_mask() here?
>>>>>>>>>>>
>>>>>>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>>>>>>> changes to the physmap require an ASID flush on AMD hardware.
>>>>>>>>>>
>>>>>>>>>> I am afraid this does not yet explain to me why flush_tlb_mask() could not be
>>>>>>>>>> updated so it flushes the ASID on AMD hardware.
>>>>>>>>>
>>>>>>>>> Current behavior previous to this patch is to flush the ASIDs on
>>>>>>>>> every TLB flush.
>>>>>>>>>
>>>>>>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>>>>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>>>>>>> when running nested, since those assisted flushes performed by L0
>>>>>>>>> don't flush the L2 guests TLBs.
>>>>>>>>>
>>>>>>>>> I could keep current behavior and leave flush_tlb_mask also flushing the
>>>>>>>>> ASIDs, but that seems wrong as the function doesn't have anything in
>>>>>>>>> its name that suggests it also flushes the in-guest TLBs for HVM.
>>>>>>>>
>>>>>>>> I agree the name is confusing; I had to look at the implementation to understand what it does.
>>>>>>>>
>>>>>>>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
>>>>>>>
>>>>>>> And this would then flush _only_ guest TLBs?
>>>>>>
>>>>>> No, I think from Julien's proposal (if I understood it correctly)
>>>>>> flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
>>>>>> previous to this patch (flush Xen's TLBs + HVM ASIDs).
>>>>>
>>>>> It looks like there might be confusion on what "guest TLBs" means. In my
>>>>> view this means any TLBs associated directly or indirectly with the guest.
>>>>> On Arm, this would nuke:
>>>>>      - guest virtual address -> guest physical address TLB entry
>>>>>      - guest physical address -> host physical address TLB entry
>>>>>      - guest virtual address -> host physical address TLB entry
>>>>
>>>> AFAICT ASID flush on AMD hardware will flush any of the above, while
>>>> VPID flush on Intel will only flush the first item (guest linear to
>>>
>>> Sorry, doing too many things at the same time. On Intel, VPID flushes
>>> will get rid of guest virtual to guest physical or host physical
>>> entries, but not of guest physical to host physical ones; you need an
>>> EPT flush to accomplish that.
>> Are you suggesting that on x86, flush_tlb_mask() would not nuke the guest
>> physical to host physical entries? If so, how is it meant to be safe?
> 
> You issue EPT flushes in that case when an EPT modification is
> performed.

I am getting more and more confused with the goal of flush_tlb_mask() in 
common code.

Looking at the Arm code, the P2M code should already flush the guest
TLBs appropriately before giving a page back to the allocator.

So what are we trying to protect against with the call to flush_tlb_mask()?

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 10:36                           ` Julien Grall
@ 2020-03-20 11:18                             ` Roger Pau Monné
  0 siblings, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2020-03-20 11:18 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Fri, Mar 20, 2020 at 10:36:49AM +0000, Julien Grall wrote:
> Hi,
> 
> On 20/03/2020 10:24, Roger Pau Monné wrote:
> > On Fri, Mar 20, 2020 at 10:08:33AM +0000, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 20/03/2020 10:00, Roger Pau Monné wrote:
> > > > On Fri, Mar 20, 2020 at 10:42:14AM +0100, Roger Pau Monné wrote:
> > > > > On Fri, Mar 20, 2020 at 09:12:16AM +0000, Julien Grall wrote:
> > > > > > Hi Roger,
> > > > > > 
> > > > > > On 20/03/2020 09:01, Roger Pau Monné wrote:
> > > > > > > On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
> > > > > > > > On 19.03.2020 20:07, Julien Grall wrote:
> > > > > > > > > Hi,
> > > > > > > > > 
> > > > > > > > > On 19/03/2020 18:43, Roger Pau Monné wrote:
> > > > > > > > > > On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > On 19/03/2020 17:38, Roger Pau Monné wrote:
> > > > > > > > > > > > On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> > > > > > > > > > > >      >> Why can't you keep flush_tlb_mask() here?
> > > > > > > > > > > > 
> > > > > > > > > > > > Because filtered_flush_tlb_mask is used in populate_physmap, and
> > > > > > > > > > > > changes to the physmap require an ASID flush on AMD hardware.
> > > > > > > > > > > 
> > > > > > > > > > > I am afraid this does not yet explain to me why flush_tlb_mask() could not be
> > > > > > > > > > > updated so it flushes the ASID on AMD hardware.
> > > > > > > > > > 
> > > > > > > > > > Current behavior previous to this patch is to flush the ASIDs on
> > > > > > > > > > every TLB flush.
> > > > > > > > > > 
> > > > > > > > > > flush_tlb_mask is too widely used on x86 in places where there's no
> > > > > > > > > > need to flush the ASIDs. This prevents using assisted flushes (by L0)
> > > > > > > > > > when running nested, since those assisted flushes performed by L0
> > > > > > > > > > don't flush the L2 guests TLBs.
> > > > > > > > > > 
> > > > > > > > > > I could keep current behavior and leave flush_tlb_mask also flushing the
> > > > > > > > > > ASIDs, but that seems wrong as the function doesn't have anything in
> > > > > > > > > > its name that suggests it also flushes the in-guest TLBs for HVM.
> > > > > > > > > 
> > > > > > > > > I agree the name is confusing; I had to look at the implementation to understand what it does.
> > > > > > > > > 
> > > > > > > > > How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
> > > > > > > > 
> > > > > > > > And this would then flush _only_ guest TLBs?
> > > > > > > 
> > > > > > > No, I think from Julien's proposal (if I understood it correctly)
> > > > > > > flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
> > > > > > > previous to this patch (flush Xen's TLBs + HVM ASIDs).
> > > > > > 
> > > > > > It looks like there might be confusion on what "guest TLBs" means. In my
> > > > > > view this means any TLBs associated directly or indirectly with the guest.
> > > > > > On Arm, this would nuke:
> > > > > >      - guest virtual address -> guest physical address TLB entry
> > > > > >      - guest physical address -> host physical address TLB entry
> > > > > >      - guest virtual address -> host physical address TLB entry
> > > > > 
> > > > > AFAICT ASID flush on AMD hardware will flush any of the above, while
> > > > > VPID flush on Intel will only flush the first item (guest linear to
> > > > 
> > > > Sorry, doing too many things at the same time. On Intel, VPID flushes
> > > > will get rid of guest virtual to guest physical or host physical
> > > > entries, but not of guest physical to host physical ones; you need an
> > > > EPT flush to accomplish that.
> > > Are you suggesting that on x86, flush_tlb_mask() would not nuke the guest
> > > physical to host physical entries? If so, how is it meant to be safe?
> > 
> > You issue EPT flushes in that case when an EPT modification is
> > performed.
> 
> I am getting more and more confused with the goal of flush_tlb_mask() in
> common code.
> 
> Looking at the Arm code, the P2M code should already flush the guest
> TLBs appropriately before giving a page back to the allocator.
> 
> So what are we trying to protect against with the call to flush_tlb_mask()?

So on x86 there are two completely different nested page table
technologies, NPT from AMD and EPT from Intel. EPT doesn't require a
VPID flush when modifying the nested page tables (it requires an EPT
flush); AMD NPT, OTOH, requires an ASID flush when modifying the
tables, and this seems to be implemented in common code for
populate_physmap.

On x86 populate_physmap could also get rid of the flush, since the NPT
code already performs an ASID flush when modifying a nested page table
entry, but that's part of another patch.
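
As a sketch of that asymmetry (all helpers below are hypothetical
stand-ins, not Xen's actual p2m interface):

/* Placeholder: would execute an EPT flush (INVEPT) here. */
static void ept_flush_domain(void) {}

/* Placeholder: would assign fresh ASIDs, invalidating stale entries. */
static void asid_tickle_domain(void) {}

/* What a nested page table modification has to be paired with. */
static void nested_pt_entry_changed(bool is_intel_ept)
{
    if ( is_intel_ept )
        /* EPT: VPID-tagged entries stay valid; the EPT TLB must go. */
        ept_flush_domain();
    else
        /* AMD NPT: cached translations are ASID-tagged, so the ASID
         * has to be tickled for the change to take effect. */
        asid_tickle_domain();
}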

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20  9:01               ` Roger Pau Monné
  2020-03-20  9:12                 ` Julien Grall
@ 2020-03-20 13:16                 ` Jan Beulich
  2020-03-20 14:22                   ` Roger Pau Monné
  1 sibling, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2020-03-20 13:16 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Tim Deegan, xen-devel

On 20.03.2020 10:01, Roger Pau Monné wrote:
> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>> On 19.03.2020 20:07, Julien Grall wrote:
>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>   >> Why can't you keep flush_tlb_mask() here?
>>>>>>
>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>> changes to the physmap require an ASID flush on AMD hardware.
>>>>>
>>>>> I am afraid this does not yet explain to me why flush_tlb_mask() could not be
>>>>> updated so it flushes the ASID on AMD hardware.
>>>>
>>>> Current behavior previous to this patch is to flush the ASIDs on
>>>> every TLB flush.
>>>>
>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>> when running nested, since those assisted flushes performed by L0
>>>> don't flush the L2 guests TLBs.
>>>>
>>>> I could keep current behavior and leave flush_tlb_mask also flushing the
>>>> ASIDs, but that seems wrong as the function doesn't have anything in
>>>> its name that suggests it also flushes the in-guest TLBs for HVM.
>>>
>>> I agree the name is confusing; I had to look at the implementation to understand what it does.
>>>
>>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
>>
>> And this would then flush _only_ guest TLBs?
> 
> No, I think from Julien's proposal (if I understood it correctly)
> flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
> previous to this patch (flush Xen's TLBs + HVM ASIDs).
> 
>>>> I would rather prefer the default to be to not flush the
>>>> ASIDs, so that users need to specify so by passing the flag to
>>>> flush_mask.
>>> That's an x86 choice. For common code, I would rather not introduce those flags until we have another arch that makes use of them.
>>
>> The flags should perhaps indeed remain x86-specific, but suitable
>> wrappers usable from common code should exist (as you suggest
>> below).
> 
> I don't have a strong opinion re naming; are you OK with the names
> proposed above?

Well, no - imo a function named e.g. flush_tlb_all_guests_cpumask() is
not supposed to flush any host TLBs. But I'll also reply to Julien's
subsequent reply in a minute.

Jan


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20  9:12                 ` Julien Grall
  2020-03-20  9:42                   ` Roger Pau Monné
@ 2020-03-20 13:19                   ` Jan Beulich
  2020-03-20 14:17                     ` Julien Grall
  1 sibling, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2020-03-20 13:19 UTC (permalink / raw)
  To: Julien Grall, Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, xen-devel

On 20.03.2020 10:12, Julien Grall wrote:
> On 20/03/2020 09:01, Roger Pau Monné wrote:
>> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>>> On 19.03.2020 20:07, Julien Grall wrote:
>>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>>    >> Why can't you keep flush_tlb_mask() here?
>>>>>>>
>>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>>> changes to the physmap require an ASID flush on AMD hardware.
>>>>>>
>>>>>> I am afraid this does not yet explain to me why flush_tlb_mask() could not be
>>>>>> updated so it flushes the ASID on AMD hardware.
>>>>>
>>>>> Current behavior previous to this patch is to flush the ASIDs on
>>>>> every TLB flush.
>>>>>
>>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>>> when running nested, since those assisted flushes performed by L0
>>>>> don't flush the L2 guests TLBs.
>>>>>
>>>>> I could keep current behavior and leave flush_tlb_mask also flushing the
>>>>> ASIDs, but that seems wrong as the function doesn't have anything in
>>>>> its name that suggests it also flushes the in-guest TLBs for HVM.
>>>>
>>>> I agree the name is confusing; I had to look at the implementation to understand what it does.
>>>>
>>>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
>>>
>>> And this would then flush _only_ guest TLBs?
>>
>> No, I think from Julien's proposal (if I understood it correctly)
>> flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
>> previous to this patch (flush Xen's TLBs + HVM ASIDs).
> 
> It looks like there might be confusion on what "guest TLBs" means. In my view this means any TLBs associated directly or indirectly with the guest. On Arm, this would nuke:
>    - guest virtual address -> guest physical address TLB entry
>    - guest physical address -> host physical address TLB entry
>    - guest virtual address -> host physical address TLB entry
> 
> I would assume you want something similar on x86, right?

I don't think we'd want the middle of the three items you list,
but I also don't see how this would be relevant here - flushing
that is a p2m operation, not one affecting in-guest translations.

Jan


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 13:19                   ` Jan Beulich
@ 2020-03-20 14:17                     ` Julien Grall
  2020-03-20 14:56                       ` Jan Beulich
  0 siblings, 1 reply; 29+ messages in thread
From: Julien Grall @ 2020-03-20 14:17 UTC (permalink / raw)
  To: Jan Beulich, Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, xen-devel

Hi,

On 20/03/2020 13:19, Jan Beulich wrote:
> On 20.03.2020 10:12, Julien Grall wrote:
>> On 20/03/2020 09:01, Roger Pau Monné wrote:
>>> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>>>> On 19.03.2020 20:07, Julien Grall wrote:
>>>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>>>     >> Why can't you keep flush_tlb_mask() here?
>>>>>>>>
>>>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>>>> changes to the physmap require an ASID flush on AMD hardware.
>>>>>>>
>>>>>>> I am afraid this does not yet explain to me why flush_tlb_mask() could not be
>>>>>>> updated so it flushes the ASID on AMD hardware.
>>>>>>
>>>>>> Current behavior previous to this patch is to flush the ASIDs on
>>>>>> every TLB flush.
>>>>>>
>>>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>>>> when running nested, since those assisted flushes performed by L0
>>>>>> don't flush the L2 guests TLBs.
>>>>>>
>>>>>> I could keep current behavior and leave flush_tlb_mask also flushing the
>>>>>> ASIDs, but that seems wrong as the function doesn't have anything in
>>>>>> its name that suggests it also flushes the in-guest TLBs for HVM.
>>>>>
>>>>> I agree the name is confusing; I had to look at the implementation to understand what it does.
>>>>>
>>>>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
>>>>
>>>> And this would then flush _only_ guest TLBs?
>>>
>>> No, I think from Julien's proposal (if I understood it correctly)
>>> flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
>>> previous to this patch (flush Xen's TLBs + HVM ASIDs).
>>
>>> It looks like there might be confusion on what "guest TLBs" means. In my view this means any TLBs associated directly or indirectly with the guest. On Arm, this would nuke:
>>     - guest virtual address -> guest physical address TLB entry
>>     - guest physical address -> host physical address TLB entry
>>     - guest virtual address -> host physical address TLB entry
>>
>> I would assume you want something similar on x86, right?
> 
> I don't think we'd want the middle of the three items you list,
> but I also don't see how this would be relevant here - flushing
> that is a p2m operation, not one affecting in-guest translations.

Apologies if this seems obvious to you, but why would you want to only 
flush in-guest translations in common code? What are you trying to 
protect against?

At least on Arm, you don't know whether the TLBs contain split or
combined stage-2 (P2M) - stage-1 (guest PT) entries. So you have to nuke 
everything.

But this is already done as part of the P2M flush. I believe this should 
be the same on x86.

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 13:16                 ` Jan Beulich
@ 2020-03-20 14:22                   ` Roger Pau Monné
  2020-03-20 14:27                     ` Julien Grall
  0 siblings, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2020-03-20 14:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Tim Deegan, xen-devel

On Fri, Mar 20, 2020 at 02:16:47PM +0100, Jan Beulich wrote:
> On 20.03.2020 10:01, Roger Pau Monné wrote:
> > On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
> >> On 19.03.2020 20:07, Julien Grall wrote:
> >>> On 19/03/2020 18:43, Roger Pau Monné wrote:
> >>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> >>>>>
> >>>>>
> >>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
> >>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> >>>>>>   >> Why can't you keep flush_tlb_mask() here?
> >>>>>>
> >>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
> >>>>>> changes to the physmap require an ASID flush on AMD hardware.
> >>>>>
> >>>>> I am afraid this does not yet explain to me why flush_tlb_mask() could not be
> >>>>> updated so it flushes the ASID on AMD hardware.
> >>>>
> >>>> Current behavior previous to this patch is to flush the ASIDs on
> >>>> every TLB flush.
> >>>>
> >>>> flush_tlb_mask is too widely used on x86 in places where there's no
> >>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
> >>>> when running nested, since those assisted flushes performed by L0
> >>>> don't flush the L2 guests TLBs.
> >>>>
> >>>> I could keep current behavior and leave flush_tlb_mask also flushing the
> >>>> ASIDs, but that seems wrong as the function doesn't have anything in
> >>>> its name that suggests it also flushes the in-guest TLBs for HVM.
> >>>
> >>> I agree the name is confusing; I had to look at the implementation to understand what it does.
> >>>
> >>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
> >>
> >> And this would then flush _only_ guest TLBs?
> > 
> > No, I think from Julien's proposal (if I understood it correctly)
> > flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
> > previous to this patch (flush Xen's TLBs + HVM ASIDs).
> > 
> >>>> I would rather prefer the default to be to not flush the
> >>>> ASIDs, so that users need to specify so by passing the flag to
> >>>> flush_mask.
> >>> That's an x86 choice. For common code, I would rather not introduce those flags until we have another arch that makes use of them.
> >>
> >> The flags should perhaps indeed remain x86-specific, but suitable
> >> wrappers usable from common code should exist (as you suggest
> >> below).
> > 
> > I don't have a strong opinion re naming; are you OK with the names
> > proposed above?
> 
> Well, no - imo a function named e.g. flush_tlb_all_guests_cpumask() is
> not supposed to flush any host TLBs. But I'll also reply to Julien's
> subsequent reply in a minute.

It seems like the implementation of flush_tlb_mask on Arm and x86
already has different meanings, as the Arm one only flushes guest
TLBs but not Xen's.

Alternatively I could code this as:

static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
{
    cpumask_t mask;

    cpumask_copy(&mask, &cpu_online_map);
    tlbflush_filter(&mask, tlbflush_timestamp);
    if ( !cpumask_empty(&mask) )
    {
        perfc_incr(need_flush_tlb_flush);
#ifdef CONFIG_X86
        /*
         * filtered_flush_tlb_mask is used after modifying the p2m in
         * populate_physmap, Xen needs to trigger an ASID tickle as this is a
         * requirement on AMD hardware.
         */
        flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
#else
        flush_tlb_mask(&mask);
#endif
    }
}

And we can look later into getting rid of the filtered_flush_tlb_mask
calls in populate_physmap and alloc_heap_pages if they are really
unneeded, which would allow us to get rid of the function altogether.
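
Or, following the renaming idea from earlier in the thread, the
#ifdef could be pushed behind an arch-provided helper instead; a
sketch, with a purely illustrative name:

/* x86 flavour (e.g. in asm-x86/flushtlb.h): also tickle HVM ASIDs. */
static inline void arch_flush_tlb_guests_mask(const cpumask_t *mask)
{
    flush_mask(mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
}

/* Arm flavour: flush_tlb_mask() already covers the guest TLBs. */
static inline void arch_flush_tlb_guests_mask(const cpumask_t *mask)
{
    flush_tlb_mask(mask);
}

filtered_flush_tlb_mask() would then call the helper unconditionally,
keeping the CONFIG_X86 check out of common code.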

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 14:22                   ` Roger Pau Monné
@ 2020-03-20 14:27                     ` Julien Grall
  2020-03-20 14:43                       ` Julien Grall
  2020-03-20 14:49                       ` Roger Pau Monné
  0 siblings, 2 replies; 29+ messages in thread
From: Julien Grall @ 2020-03-20 14:27 UTC (permalink / raw)
  To: Roger Pau Monné, Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, xen-devel



On 20/03/2020 14:22, Roger Pau Monné wrote:
> static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> {
>      cpumask_t mask;
> 
>      cpumask_copy(&mask, &cpu_online_map);
>      tlbflush_filter(&mask, tlbflush_timestamp);
>      if ( !cpumask_empty(&mask) )
>      {
>          perfc_incr(need_flush_tlb_flush);
> #ifdef CONFIG_X86
>          /*
>           * filtered_flush_tlb_mask is used after modifying the p2m in
>           * populate_physmap, Xen needs to trigger an ASID tickle as this is a
>           * requirement on AMD hardware.
>           */

I don't think this comment is correct. populate_physmap() is only going
to add entries in the P2M and therefore a flush should not be needed.

The only reason the flush would happen in populate_physmap() is because
we allocated a page that was required to be flushed (see free.need_tlbflush).

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 14:27                     ` Julien Grall
@ 2020-03-20 14:43                       ` Julien Grall
  2020-03-20 14:52                         ` Roger Pau Monné
  2020-03-20 14:49                       ` Roger Pau Monné
  1 sibling, 1 reply; 29+ messages in thread
From: Julien Grall @ 2020-03-20 14:43 UTC (permalink / raw)
  To: Roger Pau Monné, Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, xen-devel

Hi,

On 20/03/2020 14:27, Julien Grall wrote:
> 
> 
> On 20/03/2020 14:22, Roger Pau Monné wrote:
>> static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
>> {
>>      cpumask_t mask;
>>
>>      cpumask_copy(&mask, &cpu_online_map);
>>      tlbflush_filter(&mask, tlbflush_timestamp);
>>      if ( !cpumask_empty(&mask) )
>>      {
>>          perfc_incr(need_flush_tlb_flush);
>> #ifdef CONFIG_X86
>>          /*
>>           * filtered_flush_tlb_mask is used after modifying the p2m in
>>          * populate_physmap, Xen needs to trigger an ASID tickle as this is a
>>           * requirement on AMD hardware.
>>           */
> 
> I don't think this comment is correct. populate_physmap() is only going
> to add entries in the P2M and therefore a flush should not be needed.

I should have probably said "in most of the cases..." and ...

> 
> The only reason the flush would happen in populate_physmap() is because
> we allocated a page that was required to be flushed (see free.need_tlbflush).

... extend this comment a bit more. The flush will happen when the page 
used to have an owner. So if there is no owner, there is no flush.

Therefore we can't rely on it if we really wanted to trigger an ASID 
tickle after a P2M update.

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 14:27                     ` Julien Grall
  2020-03-20 14:43                       ` Julien Grall
@ 2020-03-20 14:49                       ` Roger Pau Monné
  2020-03-20 14:59                         ` Jan Beulich
  1 sibling, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2020-03-20 14:49 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Fri, Mar 20, 2020 at 02:27:36PM +0000, Julien Grall wrote:
> 
> 
> On 20/03/2020 14:22, Roger Pau Monné wrote:
> > static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> > {
> >      cpumask_t mask;
> > 
> >      cpumask_copy(&mask, &cpu_online_map);
> >      tlbflush_filter(&mask, tlbflush_timestamp);
> >      if ( !cpumask_empty(&mask) )
> >      {
> >          perfc_incr(need_flush_tlb_flush);
> > #ifdef CONFIG_X86
> >          /*
> >           * filtered_flush_tlb_mask is used after modifying the p2m in
> >           * populate_physmap, Xen needs to trigger an ASID tickle as this is a
> >           * requirement on AMD hardware.
> >           */
> 
> I don't think this comment is correct. populate_physmap() is only going to
> add entries in the P2M and therefore a flush should not be needed.

Since this is strictly only adding entries I think you are right and
the ASID tickle could be avoided, as long as we can assert the gfn was
empty (or didn't have the valid bit set) previous to being populated.

Or that the nested page tables code already handles all this and
performs the necessary flushes.

> The only reason the flush would happen in populate_physmap() is because we
> allocated a page that was required to be flushed (see free.need_tlbflush).

I think this is related to PV guests rather than HVM ones? For HVM we
would always flush whatever is needed after removing an entry from the
page tables.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 14:43                       ` Julien Grall
@ 2020-03-20 14:52                         ` Roger Pau Monné
  0 siblings, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2020-03-20 14:52 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, Jan Beulich, xen-devel

On Fri, Mar 20, 2020 at 02:43:38PM +0000, Julien Grall wrote:
> Hi,
> 
> On 20/03/2020 14:27, Julien Grall wrote:
> > 
> > 
> > On 20/03/2020 14:22, Roger Pau Monné wrote:
> > > static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> > > {
> > >      cpumask_t mask;
> > > 
> > >      cpumask_copy(&mask, &cpu_online_map);
> > >      tlbflush_filter(&mask, tlbflush_timestamp);
> > >      if ( !cpumask_empty(&mask) )
> > >      {
> > >          perfc_incr(need_flush_tlb_flush);
> > > #ifdef CONFIG_X86
> > >          /*
> > >           * filtered_flush_tlb_mask is used after modifying the p2m in
> > >           * populate_physmap, Xen needs to trigger an ASID tickle as this is a
> > >           * requirement on AMD hardware.
> > >           */
> > 
> > I don't think this comment is correct. populate_physmap() is only going
> > to add entries in the P2M and therefore a flush should not be needed.
> 
> I should have probably said "in most of the cases..." and ...
> 
> > 
> > The only reason the flush would happen in populate_physmap() is because
> > we allocated a page that was required to be flushed (see
> > free.need_tlbflush).
> 
> ... extend this comment a bit more. The flush will happen when the page used
> to have an owner. So if there is no owner, there is no flush.
> 
> Therefore we can't rely on it if we really wanted to trigger an ASID tickle
> after a P2M update.

Right, so I can leave filtered_flush_tlb_mask as-is. Will prepare a
new patch.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 14:17                     ` Julien Grall
@ 2020-03-20 14:56                       ` Jan Beulich
  0 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2020-03-20 14:56 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Tim Deegan, xen-devel, Roger Pau Monné

On 20.03.2020 15:17, Julien Grall wrote:
> Hi,
> 
> On 20/03/2020 13:19, Jan Beulich wrote:
>> On 20.03.2020 10:12, Julien Grall wrote:
>>> On 20/03/2020 09:01, Roger Pau Monné wrote:
>>>> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>>>>> On 19.03.2020 20:07, Julien Grall wrote:
>>>>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>>>>     >> Why can't you keep flush_tlb_mask() here?
>>>>>>>>>
>>>>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>>>>> changes to the physmap require an ASID flush on AMD hardware.
>>>>>>>>
>>>>>>>> I am afraid this does not yet explain to me why flush_tlb_mask() could not be
>>>>>>>> updated so it flushes the ASID on AMD hardware.
>>>>>>>
>>>>>>> Current behavior previous to this patch is to flush the ASIDs on
>>>>>>> every TLB flush.
>>>>>>>
>>>>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>>>>> when running nested, since those assisted flushes performed by L0
>>>>>>> don't flush the L2 guests TLBs.
>>>>>>>
>>>>>>> I could keep current behavior and leave flush_tlb_mask also flushing the
>>>>>>> ASIDs, but that seems wrong as the function doesn't have anything in
>>>>>>> its name that suggests it also flushes the in-guest TLBs for HVM.
>>>>>>
>>>>>> I agree the name is confusing; I had to look at the implementation to understand what it does.
>>>>>>
>>>>>> How about renaming (or introducing) the function to flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
>>>>>
>>>>> And this would then flush _only_ guest TLBs?
>>>>
>>>> No, I think from Julien's proposal (if I understood it correctly)
>>>> flush_tlb_all_guests_mask would do what flush_tlb_mask currently does
>>>> previous to this patch (flush Xen's TLBs + HVM ASIDs).
>>>
>>> It looks like there might be confusion on what "guest TLBs" means. In my view this means any TLBs associated directly or indirectly with the guest. On Arm, this would nuke:
>>>     - guest virtual address -> guest physical address TLB entry
>>>     - guest physical address -> host physical address TLB entry
>>>     - guest virtual address -> host physical address TLB entry
>>>
>>> I would assume you want something similar on x86, right?
>>
>> I don't think we'd want the middle of the three items you list,
>> but I also don't see how this would be relevant here - flushing
>> that is a p2m operation, not one affecting in-guest translations.
> 
> Apologies if this seems obvious to you, but why would you want to only flush in-guest translations in common code? What are you trying to protect against?

I've not looked at the particular use in common code; my comment
was only about what's still in context above.

> At least on Arm, you don't know whether the TLBs contain split or combined stage-2 (P2M) - stage-1 (guest PT) entries. So you have to nuke everything.

Flushing guest mappings (or giving the appearance of doing so, by
switching ASID/PCID) is of course supposed to also flush combined
mappings. It is not supposed to flush p2m mappings, because that's
a different address space. It may well be that in the place the
function gets used flushing of everything is needed, but then - as
I think Roger has already said - the p2m part of the flushing simply
happens elsewhere on x86. (It may well be that the flush there could
be avoided and done centrally from the common code invocation
instead.)

> But this is already done as part of the P2M flush. I believe this should be the same on x86.

A p2m flush would deal with p2m and combined mappings; it
still wouldn't flush guest linear -> guest phys ones.
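
Summarizing the taxonomy of this sub-thread in a small table (my
wording, not something from the patch):

  entry class                        ASID/VPID flush   p2m (EPT/NPT) flush
  guest linear -> guest physical           yes                  no
  guest physical -> host physical          no                   yes
  combined linear -> host physical         yes                  yes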

Jan


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 14:49                       ` Roger Pau Monné
@ 2020-03-20 14:59                         ` Jan Beulich
  2020-03-20 15:05                           ` Roger Pau Monné
  0 siblings, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2020-03-20 14:59 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Tim Deegan, xen-devel

On 20.03.2020 15:49, Roger Pau Monné wrote:
> On Fri, Mar 20, 2020 at 02:27:36PM +0000, Julien Grall wrote:
>>
>>
>> On 20/03/2020 14:22, Roger Pau Monné wrote:
>>> static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
>>> {
>>>      cpumask_t mask;
>>>
>>>      cpumask_copy(&mask, &cpu_online_map);
>>>      tlbflush_filter(&mask, tlbflush_timestamp);
>>>      if ( !cpumask_empty(&mask) )
>>>      {
>>>          perfc_incr(need_flush_tlb_flush);
>>> #ifdef CONFIG_X86
>>>          /*
>>>           * filtered_flush_tlb_mask is used after modifying the p2m in
>>>           * populate_physmap, Xen needs to trigger an ASID tickle as this is a
>>>           * requirement on AMD hardware.
>>>           */
>>
>> I don't think this comment is correct. populate_physmap() is only going to
>> add entries in the P2M and therefore a flush should not be needed.
> 
> Since this is strictly only adding entries I think you are right and
> the ASID tickle could be avoided, as long as we can assert the gfn was
> empty (or didn't have the valid bit set) previous to being populated.

While this may be true for x86, it's not guaranteed in general
that non-present translations may not also be put into TLBs.
So from common code there shouldn't be assumptions like this.

Jan


* Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag
  2020-03-20 14:59                         ` Jan Beulich
@ 2020-03-20 15:05                           ` Roger Pau Monné
  0 siblings, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2020-03-20 15:05 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Tim Deegan, xen-devel

On Fri, Mar 20, 2020 at 03:59:35PM +0100, Jan Beulich wrote:
> On 20.03.2020 15:49, Roger Pau Monné wrote:
> > On Fri, Mar 20, 2020 at 02:27:36PM +0000, Julien Grall wrote:
> >>
> >>
> >> On 20/03/2020 14:22, Roger Pau Monné wrote:
> >>> static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> >>> {
> >>>      cpumask_t mask;
> >>>
> >>>      cpumask_copy(&mask, &cpu_online_map);
> >>>      tlbflush_filter(&mask, tlbflush_timestamp);
> >>>      if ( !cpumask_empty(&mask) )
> >>>      {
> >>>          perfc_incr(need_flush_tlb_flush);
> >>> #ifdef CONFIG_X86
> >>>          /*
> >>>           * filtered_flush_tlb_mask is used after modifying the p2m in
> >>>           * populate_physmap, Xen needs to trigger an ASID tickle as this is a
> >>>           * requirement on AMD hardware.
> >>>           */
> >>
> >> I don't think this comment is correct. populate_physmap() is only going to
> >> add entries in the P2M and therefore a flush should not be needed.
> > 
> > Since this is strictly only adding entries I think you are right and
> > the ASID tickle could be avoided, as long as we can assert the gfn was
> > empty (or didn't have the valid bit set) previous to being populated.
> 
> While this may be true for x86, it's not guaranteed in general
> that non-present translations may not also be put into TLBs.
> So from common code there shouldn't be assumptions like this.

But as pointed out by Julien, filtered_flush_tlb_mask is exclusively
used in combination with accumulate_tlbflush, which only cares about
the need_tlbflush field in the page struct, and hence if pages added
to the physmap didn't have an owner you won't end up calling
filtered_flush_tlb_mask at all.

So the ASID tickle must be performed somewhere else, because gating
the ASID flush on whether the page had a previous owner is not
correct.
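
For reference, the mechanism being referred to works roughly like
this (simplified sketch; in Xen the field lives in a union and the
helper signatures differ):

#include <stdbool.h>
#include <stdint.h>

struct page_info {
    uint32_t tlbflush_timestamp; /* TLB clock value when freed */
    bool need_tlbflush;          /* really u.free.need_tlbflush */
};

/*
 * Allocation side: remember the newest free-time stamp among pages
 * whose previous owner's TLB entries may still be live.
 */
static void accumulate(const struct page_info *pg,
                       bool *need_flush, uint32_t *stamp)
{
    if ( pg->need_tlbflush &&
         (!*need_flush || pg->tlbflush_timestamp > *stamp) )
    {
        *need_flush = true;
        *stamp = pg->tlbflush_timestamp;
    }
}

/*
 * The caller then flushes once for the whole batch:
 *
 *     if ( need_flush )
 *         filtered_flush_tlb_mask(stamp);
 *
 * Pages that never had an owner never set need_tlbflush, which is
 * exactly why this path alone can't carry the ASID tickle.
 */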

Thanks, Roger.


Thread overview: 29+ messages
2020-03-19 15:47 [Xen-devel] [PATCH v7 0/3] x86/guest: use assisted TLB flush in guest mode Roger Pau Monne
2020-03-19 15:47 ` [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag Roger Pau Monne
2020-03-19 16:21   ` Julien Grall
2020-03-19 17:38     ` Roger Pau Monné
2020-03-19 18:07       ` Julien Grall
2020-03-19 18:43         ` Roger Pau Monné
2020-03-19 19:07           ` Julien Grall
2020-03-20  7:21             ` Jan Beulich
2020-03-20  9:01               ` Roger Pau Monné
2020-03-20  9:12                 ` Julien Grall
2020-03-20  9:42                   ` Roger Pau Monné
2020-03-20 10:00                     ` Roger Pau Monné
2020-03-20 10:08                       ` Julien Grall
2020-03-20 10:24                         ` Roger Pau Monné
2020-03-20 10:36                           ` Julien Grall
2020-03-20 11:18                             ` Roger Pau Monné
2020-03-20 13:19                   ` Jan Beulich
2020-03-20 14:17                     ` Julien Grall
2020-03-20 14:56                       ` Jan Beulich
2020-03-20 13:16                 ` Jan Beulich
2020-03-20 14:22                   ` Roger Pau Monné
2020-03-20 14:27                     ` Julien Grall
2020-03-20 14:43                       ` Julien Grall
2020-03-20 14:52                         ` Roger Pau Monné
2020-03-20 14:49                       ` Roger Pau Monné
2020-03-20 14:59                         ` Jan Beulich
2020-03-20 15:05                           ` Roger Pau Monné
2020-03-19 15:47 ` [Xen-devel] [PATCH v7 2/3] x86/tlb: allow disabling the TLB clock Roger Pau Monne
2020-03-19 15:47 ` [Xen-devel] [PATCH v7 3/3] x86/tlb: use Xen L0 assisted TLB flush when available Roger Pau Monne
