xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH v2 0/3] Xen on Hyper-V: Implement L0 assisted TLB flush
@ 2020-02-14 12:34 Wei Liu
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb Wei Liu
                   ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Wei Liu @ 2020-02-14 12:34 UTC (permalink / raw)
  To: Xen Development List
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Roger Pau Monné

Hi all

This series is based on Roger's L0 assisted flush series.

I have done some testing with Linux on Hyper-V in a 32-vCPU VM.
All builds were done with -j32.



Building Xen on Linux:
real    0m45.376s
user    2m28.156s
sys     0m51.672s

Building Xen on Linux on Xen on Hyper-V, no assisted flush:
real    3m8.762s
user    10m46.787s
sys     30m14.492s

Building Xen on Linux on Xen on Hyper-V, with assisted flush:
real    0m44.369s
user    3m16.231s
sys     3m3.330s



Building Linux x86_64_defconfig on Linux:
real    0m59.698s
user    21m14.014s
sys     2m58.742s

Building Linux x86_64_defconfig on Linux on Xen on Hyper-V, no assisted
flush:
real    2m6.284s
user    31m18.706s
sys     20m31.106s

Building Linux x86_64_defconfig on Linux on Xen on Hyper-V, with assisted
flush:
real    1m38.968s
user    28m40.398s
sys     11m20.151s



There are various degrees of improvement depending on the workload. Xen
can perhaps be optimised a bit further because it currently doesn't pass
the address space identifier (cr3) to Hyper-V, but that requires
reworking the TLB flush APIs within Xen.
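
As a rough summary of the real (wall-clock) times above: the Xen build goes
from 3m8.762s without assisted flush to 0m44.369s with it (roughly 4.3x,
essentially back to the 0m45.376s baseline), and the Linux defconfig build
goes from 2m6.284s to 1m38.968s (roughly 1.3x).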

Wei.

Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Cc: Paul Durrant <pdurrant@amazon.com>

Wei Liu (3):
  x86/hypervisor: pass flags to hypervisor_flush_tlb
  x86/hyperv: skeleton for L0 assisted TLB flush
  x86/hyperv: L0 assisted TLB flush

 xen/arch/x86/guest/hyperv/Makefile     |   2 +
 xen/arch/x86/guest/hyperv/hyperv.c     |  17 ++
 xen/arch/x86/guest/hyperv/private.h    |  13 ++
 xen/arch/x86/guest/hyperv/tlb.c        | 211 +++++++++++++++++++++++++
 xen/arch/x86/guest/hyperv/util.c       |  74 +++++++++
 xen/arch/x86/guest/hypervisor.c        |   7 +-
 xen/arch/x86/guest/xen/xen.c           |   2 +-
 xen/arch/x86/smp.c                     |   5 +-
 xen/include/asm-x86/flushtlb.h         |   3 +
 xen/include/asm-x86/guest/hypervisor.h |  10 +-
 10 files changed, 333 insertions(+), 11 deletions(-)
 create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
 create mode 100644 xen/arch/x86/guest/hyperv/util.c

-- 
2.20.1



* [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb
  2020-02-14 12:34 [Xen-devel] [PATCH v2 0/3] Xen on Hyper-V: Implement L0 assisted TLB flush Wei Liu
@ 2020-02-14 12:34 ` Wei Liu
  2020-02-14 14:44   ` Roger Pau Monné
  2020-02-14 16:51   ` Durrant, Paul
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush Wei Liu
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 3/3] x86/hyperv: " Wei Liu
  2 siblings, 2 replies; 24+ messages in thread
From: Wei Liu @ 2020-02-14 12:34 UTC (permalink / raw)
  To: Xen Development List
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Roger Pau Monné

Hyper-V's L0 assisted flush has fine-grained control over what gets
flushed. We need all the flags available to make the best decisions
possible.

No functional change, because Xen's implementation of the hook ignores
what is passed to it.
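
As an illustration only (not part of this patch; the snippet is taken from
the Hyper-V backend added later in this series), a backend can now key off
individual flags, e.g.:

    if ( !(flags & FLUSH_TLB_GLOBAL) )
        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;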

Signed-off-by: Wei Liu <liuwe@microsoft.com>
---
v2:
1. Introduce FLUSH_TLB_FLAGS_MASK
---
 xen/arch/x86/guest/hypervisor.c        |  7 +++++--
 xen/arch/x86/guest/xen/xen.c           |  2 +-
 xen/arch/x86/smp.c                     |  5 ++---
 xen/include/asm-x86/flushtlb.h         |  3 +++
 xen/include/asm-x86/guest/hypervisor.h | 10 +++++-----
 5 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 47e938e287..6ee28c9df1 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -75,10 +75,13 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
 }
 
 int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                         unsigned int order)
+                         unsigned int flags)
 {
+    if ( flags & ~FLUSH_TLB_FLAGS_MASK )
+        return -EINVAL;
+
     if ( ops.flush_tlb )
-        return alternative_call(ops.flush_tlb, mask, va, order);
+        return alternative_call(ops.flush_tlb, mask, va, flags);
 
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index 5d3427a713..0eb1115c4d 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,7 +324,7 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
-static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int flags)
 {
     return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
 }
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 9bc925616a..2ab0e30eef 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -258,9 +258,8 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
         if ( cpu_has_hypervisor &&
-             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
-                         FLUSH_ORDER_MASK)) &&
-             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
+             !(flags & ~FLUSH_TLB_FLAGS_MASK) &&
+             !hypervisor_flush_tlb(mask, va, flags) )
         {
             if ( tlb_clk_enabled )
                 tlb_clk_enabled = false;
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 9773014320..a4de317452 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -123,6 +123,9 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
  /* Flush all HVM guests linear TLB (using ASID/VPID) */
 #define FLUSH_GUESTS_TLB 0x4000
 
+#define FLUSH_TLB_FLAGS_MASK (FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID | \
+                              FLUSH_ORDER_MASK)
+
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
 #define flush_local(flags) flush_area_local(NULL, flags)
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index 432e57c2a0..48d54735d2 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -35,7 +35,7 @@ struct hypervisor_ops {
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
     /* L0 assisted TLB flush */
-    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int flags);
 };
 
 #ifdef CONFIG_GUEST
@@ -48,11 +48,11 @@ void hypervisor_e820_fixup(struct e820map *e820);
 /*
  * L0 assisted TLB flush.
  * mask: cpumask of the dirty vCPUs that should be flushed.
- * va: linear address to flush, or NULL for global flushes.
- * order: order of the linear address pointed by va.
+ * va: linear address to flush, or NULL for entire address space.
+ * flags: flags for flushing, including the order of va.
  */
 int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                         unsigned int order);
+                         unsigned int flags);
 
 #else
 
@@ -65,7 +65,7 @@ static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
 static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                                       unsigned int order)
+                                       unsigned int flags)
 {
     return -ENOSYS;
 }
-- 
2.20.1



* [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-14 12:34 [Xen-devel] [PATCH v2 0/3] Xen on Hyper-V: Implement L0 assisted TLB flush Wei Liu
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb Wei Liu
@ 2020-02-14 12:34 ` Wei Liu
  2020-02-14 14:46   ` Roger Pau Monné
  2020-02-14 16:55   ` Durrant, Paul
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 3/3] x86/hyperv: " Wei Liu
  2 siblings, 2 replies; 24+ messages in thread
From: Wei Liu @ 2020-02-14 12:34 UTC (permalink / raw)
  To: Xen Development List
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Roger Pau Monné

Implement a basic hook for L0 assisted TLB flush. The hook checks whether
the prerequisites are met. If they are not, it returns an error code so
that the caller falls back to native flushes.

Introduce a new variable to indicate whether the hypercall page is ready.
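
For context (a sketch, not part of this patch): the generic caller in
flush_area_mask() treats any non-zero return value as "not assisted" and
falls back to the native IPI-based flush, roughly:

    if ( cpu_has_hypervisor &&
         !(flags & ~FLUSH_TLB_FLAGS_MASK) &&
         !hypervisor_flush_tlb(mask, va, flags) )
        /* L0 assisted flush done */;
    else
        /* send flush IPIs as before */;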

Signed-off-by: Wei Liu <liuwe@microsoft.com>
---
 xen/arch/x86/guest/hyperv/Makefile  |  1 +
 xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
 xen/arch/x86/guest/hyperv/private.h |  4 +++
 xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
 4 files changed, 63 insertions(+)
 create mode 100644 xen/arch/x86/guest/hyperv/tlb.c

diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
index 68170109a9..18902c33e9 100644
--- a/xen/arch/x86/guest/hyperv/Makefile
+++ b/xen/arch/x86/guest/hyperv/Makefile
@@ -1 +1,2 @@
 obj-y += hyperv.o
+obj-y += tlb.o
diff --git a/xen/arch/x86/guest/hyperv/hyperv.c b/xen/arch/x86/guest/hyperv/hyperv.c
index 70f4cd5ae0..f9d1f11ae3 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
 DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
 DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
 
+static bool __read_mostly hv_hcall_page_ready;
+
 static uint64_t generate_guest_id(void)
 {
     union hv_guest_os_id id = {};
@@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
     BUG_ON(!hypercall_msr.enable);
 
     set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
+
+    hv_hcall_page_ready = true;
 }
 
 static int setup_hypercall_pcpu_arg(void)
@@ -199,11 +203,24 @@ static void __init e820_fixup(struct e820map *e820)
         panic("Unable to reserve Hyper-V hypercall range\n");
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va,
+                     unsigned int flags)
+{
+    if ( !(ms_hyperv.hints & HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED) )
+        return -EOPNOTSUPP;
+
+    if ( !hv_hcall_page_ready || !this_cpu(hv_input_page) )
+        return -ENXIO;
+
+    return hyperv_flush_tlb(mask, va, flags);
+}
+
 static const struct hypervisor_ops __initdata ops = {
     .name = "Hyper-V",
     .setup = setup,
     .ap_setup = ap_setup,
     .e820_fixup = e820_fixup,
+    .flush_tlb = flush_tlb,
 };
 
 /*
diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
index 956eff831f..509bedaafa 100644
--- a/xen/arch/x86/guest/hyperv/private.h
+++ b/xen/arch/x86/guest/hyperv/private.h
@@ -22,10 +22,14 @@
 #ifndef __XEN_HYPERV_PRIVIATE_H__
 #define __XEN_HYPERV_PRIVIATE_H__
 
+#include <xen/cpumask.h>
 #include <xen/percpu.h>
 
 DECLARE_PER_CPU(void *, hv_input_page);
 DECLARE_PER_CPU(void *, hv_vp_assist);
 DECLARE_PER_CPU(unsigned int, hv_vp_index);
 
+int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
+                     unsigned int flags);
+
 #endif /* __XEN_HYPERV_PRIVIATE_H__  */
diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
new file mode 100644
index 0000000000..48f527229e
--- /dev/null
+++ b/xen/arch/x86/guest/hyperv/tlb.c
@@ -0,0 +1,41 @@
+/******************************************************************************
+ * arch/x86/guest/hyperv/tlb.c
+ *
+ * Support for TLB management using hypercalls
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2020 Microsoft.
+ */
+
+#include <xen/cpumask.h>
+#include <xen/errno.h>
+
+#include "private.h"
+
+int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
+                     unsigned int flags)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1



* [Xen-devel] [PATCH v2 3/3] x86/hyperv: L0 assisted TLB flush
  2020-02-14 12:34 [Xen-devel] [PATCH v2 0/3] Xen on Hyper-V: Implement L0 assisted TLB flush Wei Liu
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb Wei Liu
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush Wei Liu
@ 2020-02-14 12:34 ` Wei Liu
  2020-02-14 14:42   ` Roger Pau Monné
  2020-02-14 16:42   ` Michael Kelley
  2 siblings, 2 replies; 24+ messages in thread
From: Wei Liu @ 2020-02-14 12:34 UTC (permalink / raw)
  To: Xen Development List
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Roger Pau Monné

Implement L0 assisted TLB flush for Xen on Hyper-V. It takes advantage
of several hypercalls:

 * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST
 * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
 * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE
 * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX

Pick the most efficient hypercalls available.
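
Roughly, the selection implemented here is:

 * If every VP index in the mask fits in a plain 64-bit processor mask (or
   the mask covers all online CPUs): use HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST
   when a va and a small enough range are given, otherwise
   HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE.
 * Otherwise fall back to the _EX variants, which take a sparse VP set and
   require the HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED hint.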

Signed-off-by: Wei Liu <liuwe@microsoft.com>
---
v2:
1. Address Roger and Jan's comments re types etc.
2. Fix pointer arithmetic.
3. Misc improvement to code.
---
 xen/arch/x86/guest/hyperv/Makefile  |   1 +
 xen/arch/x86/guest/hyperv/private.h |   9 ++
 xen/arch/x86/guest/hyperv/tlb.c     | 172 +++++++++++++++++++++++++++-
 xen/arch/x86/guest/hyperv/util.c    |  74 ++++++++++++
 4 files changed, 255 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/x86/guest/hyperv/util.c

diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
index 18902c33e9..0e39410968 100644
--- a/xen/arch/x86/guest/hyperv/Makefile
+++ b/xen/arch/x86/guest/hyperv/Makefile
@@ -1,2 +1,3 @@
 obj-y += hyperv.o
 obj-y += tlb.o
+obj-y += util.o
diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
index 509bedaafa..79a77930a0 100644
--- a/xen/arch/x86/guest/hyperv/private.h
+++ b/xen/arch/x86/guest/hyperv/private.h
@@ -24,12 +24,21 @@
 
 #include <xen/cpumask.h>
 #include <xen/percpu.h>
+#include <xen/types.h>
 
 DECLARE_PER_CPU(void *, hv_input_page);
 DECLARE_PER_CPU(void *, hv_vp_assist);
 DECLARE_PER_CPU(unsigned int, hv_vp_index);
 
+static inline unsigned int hv_vp_index(unsigned int cpu)
+{
+    return per_cpu(hv_vp_index, cpu);
+}
+
 int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
                      unsigned int flags);
 
+/* Returns number of banks, -ev if error */
+int cpumask_to_vpset(struct hv_vpset *vpset, const cpumask_t *mask);
+
 #endif /* __XEN_HYPERV_PRIVIATE_H__  */
diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
index 48f527229e..f68e14f151 100644
--- a/xen/arch/x86/guest/hyperv/tlb.c
+++ b/xen/arch/x86/guest/hyperv/tlb.c
@@ -19,15 +19,185 @@
  * Copyright (c) 2020 Microsoft.
  */
 
+#include <xen/cpu.h>
 #include <xen/cpumask.h>
 #include <xen/errno.h>
 
+#include <asm/guest/hyperv.h>
+#include <asm/guest/hyperv-hcall.h>
+#include <asm/guest/hyperv-tlfs.h>
+
 #include "private.h"
 
+/*
+ * It is possible to encode up to 4096 pages using the lower 12 bits
+ * in an element of gva_list
+ */
+#define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)
+
+static unsigned int fill_gva_list(uint64_t *gva_list, const void *va,
+                                  unsigned int order)
+{
+    unsigned long start = (unsigned long)va;
+    unsigned long end = start + (PAGE_SIZE << order) - 1;
+    unsigned int n = 0;
+
+    do {
+        unsigned long remain = end - start;
+
+        gva_list[n] = start & PAGE_MASK;
+
+        /*
+         * Use lower 12 bits to encode the number of additional pages
+         * to flush
+         */
+        if ( remain >= HV_TLB_FLUSH_UNIT )
+        {
+            gva_list[n] |= ~PAGE_MASK;
+            start += HV_TLB_FLUSH_UNIT;
+        }
+        else if ( remain )
+        {
+            gva_list[n] |= (remain - 1) >> PAGE_SHIFT;
+            start = end;
+        }
+
+        n++;
+    } while ( start < end );
+
+    return n;
+}
+
+static uint64_t flush_tlb_ex(const cpumask_t *mask, const void *va,
+                             unsigned int flags)
+{
+    struct hv_tlb_flush_ex *flush = this_cpu(hv_input_page);
+    int nr_banks;
+    unsigned int max_gvas, order = flags & FLUSH_ORDER_MASK;
+    uint64_t ret;
+
+    if ( !flush || local_irq_is_enabled() )
+    {
+        ASSERT_UNREACHABLE();
+        return ~0ULL;
+    }
+
+    if ( !(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED) )
+        return ~0ULL;
+
+    flush->address_space = 0;
+    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+    if ( !(flags & FLUSH_TLB_GLOBAL) )
+        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
+
+    nr_banks = cpumask_to_vpset(&flush->hv_vp_set, mask);
+    if ( nr_banks < 0 )
+        return ~0ULL;
+
+    max_gvas =
+        (PAGE_SIZE - sizeof(*flush) - nr_banks *
+         sizeof(flush->hv_vp_set.bank_contents[0])) /
+        sizeof(uint64_t);       /* gva is represented as uint64_t */
+
+    /*
+     * Flush the entire address space if va is NULL or if there is not
+     * enough space for gva_list.
+     */
+    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > max_gvas )
+        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX, 0,
+                                  nr_banks, virt_to_maddr(flush), 0);
+    else
+    {
+        uint64_t *gva_list =
+            (uint64_t *)flush + sizeof(*flush) / sizeof(uint64_t) + nr_banks;
+        unsigned int gvas = fill_gva_list(gva_list, va, order);
+
+        BUILD_BUG_ON(sizeof(*flush) % sizeof(uint64_t));
+
+        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX,
+                                  gvas, nr_banks, virt_to_maddr(flush), 0);
+    }
+
+    return ret;
+}
+
 int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
                      unsigned int flags)
 {
-    return -EOPNOTSUPP;
+    unsigned long irq_flags;
+    struct hv_tlb_flush *flush = this_cpu(hv_input_page);
+    unsigned int max_gvas, order = flags & FLUSH_ORDER_MASK;
+    uint64_t ret;
+
+    ASSERT(flush);
+    ASSERT(!cpumask_empty(mask));
+
+    local_irq_save(irq_flags);
+
+    flush->address_space = 0;
+    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+    flush->processor_mask = 0;
+    if ( !(flags & FLUSH_TLB_GLOBAL) )
+        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
+
+    if ( cpumask_equal(mask, &cpu_online_map) )
+        flush->flags |= HV_FLUSH_ALL_PROCESSORS;
+    else
+    {
+        unsigned int cpu;
+
+        /*
+         * Normally VP indices are in ascending order and match Xen's
+         * idea of CPU ids. Check the last index to see if VP index is
+         * >= 64. If so, we can skip setting up parameters for
+         * non-applicable hypercalls without looking further.
+         */
+        if ( hv_vp_index(cpumask_last(mask)) >= 64 )
+            goto do_ex_hypercall;
+
+        for_each_cpu ( cpu, mask )
+        {
+            uint32_t vpid = hv_vp_index(cpu);
+
+            if ( vpid > ms_hyperv.max_vp_index )
+            {
+                local_irq_restore(irq_flags);
+                return -ENXIO;
+            }
+
+            if ( vpid >= 64 )
+                goto do_ex_hypercall;
+
+            __set_bit(vpid, &flush->processor_mask);
+        }
+    }
+
+    max_gvas = (PAGE_SIZE - sizeof(*flush)) / sizeof(flush->gva_list[0]);
+
+    /*
+     * Flush the entire address space if va is NULL or if there is not
+     * enough space for gva_list.
+     */
+    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > max_gvas )
+        ret = hv_do_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE,
+                              virt_to_maddr(flush), 0);
+    else
+    {
+        unsigned int gvas = fill_gva_list(flush->gva_list, va, order);
+
+        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST, gvas, 0,
+                                  virt_to_maddr(flush), 0);
+    }
+
+    goto done;
+
+ do_ex_hypercall:
+    ret = flush_tlb_ex(mask, va, flags);
+
+ done:
+    local_irq_restore(irq_flags);
+
+    return ret & HV_HYPERCALL_RESULT_MASK ? -ENXIO : 0;
 }
 
 /*
diff --git a/xen/arch/x86/guest/hyperv/util.c b/xen/arch/x86/guest/hyperv/util.c
new file mode 100644
index 0000000000..e092593746
--- /dev/null
+++ b/xen/arch/x86/guest/hyperv/util.c
@@ -0,0 +1,74 @@
+/******************************************************************************
+ * arch/x86/guest/hyperv/util.c
+ *
+ * Hyper-V utility functions
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2020 Microsoft.
+ */
+
+#include <xen/cpu.h>
+#include <xen/cpumask.h>
+#include <xen/errno.h>
+
+#include <asm/guest/hyperv.h>
+#include <asm/guest/hyperv-tlfs.h>
+
+#include "private.h"
+
+int cpumask_to_vpset(struct hv_vpset *vpset,
+                     const cpumask_t *mask)
+{
+    int nr = 1;
+    unsigned int cpu, vcpu_bank, vcpu_offset;
+    unsigned int max_banks = ms_hyperv.max_vp_index / 64;
+
+    /* Up to 64 banks can be represented by valid_bank_mask */
+    if ( max_banks >= 64 )
+        return -E2BIG;
+
+    /* Clear all banks to avoid flushing unwanted CPUs */
+    for ( vcpu_bank = 0; vcpu_bank <= max_banks; vcpu_bank++ )
+        vpset->bank_contents[vcpu_bank] = 0;
+
+    vpset->valid_bank_mask = 0;
+    vpset->format = HV_GENERIC_SET_SPARSE_4K;
+
+    for_each_cpu ( cpu, mask )
+    {
+        unsigned int vcpu = hv_vp_index(cpu);
+
+        vcpu_bank = vcpu / 64;
+        vcpu_offset = vcpu % 64;
+
+        __set_bit(vcpu_offset, &vpset->bank_contents[vcpu_bank]);
+        __set_bit(vcpu_bank, &vpset->valid_bank_mask);
+
+        if ( vcpu_bank >= nr )
+            nr = vcpu_bank + 1;
+    }
+
+    return nr;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1



* Re: [Xen-devel] [PATCH v2 3/3] x86/hyperv: L0 assisted TLB flush
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 3/3] x86/hyperv: " Wei Liu
@ 2020-02-14 14:42   ` Roger Pau Monné
  2020-02-14 16:03     ` Wei Liu
  2020-02-14 16:42   ` Michael Kelley
  1 sibling, 1 reply; 24+ messages in thread
From: Roger Pau Monné @ 2020-02-14 14:42 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Xen Development List

On Fri, Feb 14, 2020 at 12:34:30PM +0000, Wei Liu wrote:
> Implement L0 assisted TLB flush for Xen on Hyper-V. It takes advantage
> of several hypercalls:
> 
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX
> 
> Pick the most efficient hypercalls available.
> 
> Signed-off-by: Wei Liu <liuwe@microsoft.com>

Thanks! LGTM, I've just got a couple of comments below.

> ---
> v2:
> 1. Address Roger and Jan's comments re types etc.
> 2. Fix pointer arithmetic.
> 3. Misc improvement to code.
> ---
>  xen/arch/x86/guest/hyperv/Makefile  |   1 +
>  xen/arch/x86/guest/hyperv/private.h |   9 ++
>  xen/arch/x86/guest/hyperv/tlb.c     | 172 +++++++++++++++++++++++++++-
>  xen/arch/x86/guest/hyperv/util.c    |  74 ++++++++++++
>  4 files changed, 255 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/x86/guest/hyperv/util.c
> 
> diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
> index 18902c33e9..0e39410968 100644
> --- a/xen/arch/x86/guest/hyperv/Makefile
> +++ b/xen/arch/x86/guest/hyperv/Makefile
> @@ -1,2 +1,3 @@
>  obj-y += hyperv.o
>  obj-y += tlb.o
> +obj-y += util.o
> diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
> index 509bedaafa..79a77930a0 100644
> --- a/xen/arch/x86/guest/hyperv/private.h
> +++ b/xen/arch/x86/guest/hyperv/private.h
> @@ -24,12 +24,21 @@
>  
>  #include <xen/cpumask.h>
>  #include <xen/percpu.h>
> +#include <xen/types.h>
>  
>  DECLARE_PER_CPU(void *, hv_input_page);
>  DECLARE_PER_CPU(void *, hv_vp_assist);
>  DECLARE_PER_CPU(unsigned int, hv_vp_index);
>  
> +static inline unsigned int hv_vp_index(unsigned int cpu)
> +{
> +    return per_cpu(hv_vp_index, cpu);
> +}
> +
>  int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
>                       unsigned int flags);
>  
> +/* Returns number of banks, -ev if error */
> +int cpumask_to_vpset(struct hv_vpset *vpset, const cpumask_t *mask);
> +
>  #endif /* __XEN_HYPERV_PRIVIATE_H__  */
> diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
> index 48f527229e..f68e14f151 100644
> --- a/xen/arch/x86/guest/hyperv/tlb.c
> +++ b/xen/arch/x86/guest/hyperv/tlb.c
> @@ -19,15 +19,185 @@
>   * Copyright (c) 2020 Microsoft.
>   */
>  
> +#include <xen/cpu.h>
>  #include <xen/cpumask.h>
>  #include <xen/errno.h>
>  
> +#include <asm/guest/hyperv.h>
> +#include <asm/guest/hyperv-hcall.h>
> +#include <asm/guest/hyperv-tlfs.h>
> +
>  #include "private.h"
>  
> +/*
> + * It is possible to encode up to 4096 pages using the lower 12 bits
> + * in an element of gva_list
> + */
> +#define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)
> +
> +static unsigned int fill_gva_list(uint64_t *gva_list, const void *va,
> +                                  unsigned int order)
> +{
> +    unsigned long start = (unsigned long)va;
> +    unsigned long end = start + (PAGE_SIZE << order) - 1;
> +    unsigned int n = 0;
> +
> +    do {
> +        unsigned long remain = end - start;
> +
> +        gva_list[n] = start & PAGE_MASK;
> +
> +        /*
> +         * Use lower 12 bits to encode the number of additional pages
> +         * to flush
> +         */
> +        if ( remain >= HV_TLB_FLUSH_UNIT )
> +        {
> +            gva_list[n] |= ~PAGE_MASK;
> +            start += HV_TLB_FLUSH_UNIT;
> +        }
> +        else if ( remain )

remain is always going to be > 0, since the loop condition is end >
start, and hence this can be a plain else.
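
I.e. the tail of the loop body could simply be:

    else
    {
        gva_list[n] |= (remain - 1) >> PAGE_SHIFT;
        start = end;
    }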

> +        {
> +            gva_list[n] |= (remain - 1) >> PAGE_SHIFT;
> +            start = end;
> +        }
> +
> +        n++;
> +    } while ( start < end );
> +
> +    return n;
> +}
> +
> +static uint64_t flush_tlb_ex(const cpumask_t *mask, const void *va,
> +                             unsigned int flags)
> +{
> +    struct hv_tlb_flush_ex *flush = this_cpu(hv_input_page);
> +    int nr_banks;
> +    unsigned int max_gvas, order = flags & FLUSH_ORDER_MASK;
> +    uint64_t ret;
> +
> +    if ( !flush || local_irq_is_enabled() )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return ~0ULL;
> +    }
> +
> +    if ( !(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED) )
> +        return ~0ULL;
> +
> +    flush->address_space = 0;
> +    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> +    if ( !(flags & FLUSH_TLB_GLOBAL) )
> +        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
> +
> +    nr_banks = cpumask_to_vpset(&flush->hv_vp_set, mask);
> +    if ( nr_banks < 0 )
> +        return ~0ULL;

It would be nice to propagate the error code from cpumask_to_vpset,
but since the function can also return HyperV error codes this doesn't
make much sense.

> +
> +    max_gvas =
> +        (PAGE_SIZE - sizeof(*flush) - nr_banks *
> +         sizeof(flush->hv_vp_set.bank_contents[0])) /
> +        sizeof(uint64_t);       /* gva is represented as uint64_t */
> +
> +    /*
> +     * Flush the entire address space if va is NULL or if there is not
> +     * enough space for gva_list.
> +     */
> +    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > max_gvas )
> +        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX, 0,
> +                                  nr_banks, virt_to_maddr(flush), 0);

You could just return hv_do_rep_hypercall(...); here, which will avoid
the else branch below and the indentation.

> +    else
> +    {
> +        uint64_t *gva_list =
> +            (uint64_t *)flush + sizeof(*flush) / sizeof(uint64_t) + nr_banks;
> +        unsigned int gvas = fill_gva_list(gva_list, va, order);
> +
> +        BUILD_BUG_ON(sizeof(*flush) % sizeof(uint64_t));
> +
> +        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX,
> +                                  gvas, nr_banks, virt_to_maddr(flush), 0);
> +    }
> +
> +    return ret;
> +}
> +
>  int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
>                       unsigned int flags)
>  {
> -    return -EOPNOTSUPP;
> +    unsigned long irq_flags;
> +    struct hv_tlb_flush *flush = this_cpu(hv_input_page);
> +    unsigned int max_gvas, order = flags & FLUSH_ORDER_MASK;
> +    uint64_t ret;
> +
> +    ASSERT(flush);
> +    ASSERT(!cpumask_empty(mask));

I would also turn this into an if ( ... ) { ASSERT; return -EFOO; }
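
For instance (illustrative only; -EINVAL stands in for the "-EFOO" above):

    if ( !flush || cpumask_empty(mask) )
    {
        ASSERT_UNREACHABLE();
        return -EINVAL;
    }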

> +
> +    local_irq_save(irq_flags);
> +
> +    flush->address_space = 0;
> +    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> +    flush->processor_mask = 0;
> +    if ( !(flags & FLUSH_TLB_GLOBAL) )
> +        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
> +
> +    if ( cpumask_equal(mask, &cpu_online_map) )
> +        flush->flags |= HV_FLUSH_ALL_PROCESSORS;
> +    else
> +    {
> +        unsigned int cpu;
> +
> +        /*
> +         * Normally VP indices are in ascending order and match Xen's
> +         * idea of CPU ids. Check the last index to see if VP index is
> +         * >= 64. If so, we can skip setting up parameters for
> +         * non-applicable hypercalls without looking further.
> +         */
> +        if ( hv_vp_index(cpumask_last(mask)) >= 64 )
> +            goto do_ex_hypercall;
> +
> +        for_each_cpu ( cpu, mask )
> +        {
> +            uint32_t vpid = hv_vp_index(cpu);

This should be unsigned int now.

> +
> +            if ( vpid > ms_hyperv.max_vp_index )
> +            {
> +                local_irq_restore(irq_flags);
> +                return -ENXIO;
> +            }
> +
> +            if ( vpid >= 64 )
> +                goto do_ex_hypercall;
> +
> +            __set_bit(vpid, &flush->processor_mask);
> +        }

Would it make sense to abstract this as cpumask_to_processor_mask,
since you are adding cpumask_to_vpset below?
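
A rough sketch of what such a helper could look like (hypothetical name and
error values; the >= 64 case would still need to send callers down the _EX
path somehow):

    static int cpumask_to_processor_mask(uint64_t *pmask, const cpumask_t *mask)
    {
        unsigned int cpu;

        for_each_cpu ( cpu, mask )
        {
            unsigned int vpid = hv_vp_index(cpu);

            if ( vpid > ms_hyperv.max_vp_index )
                return -ENXIO;

            if ( vpid >= 64 )
                return -ERANGE; /* caller switches to the _EX hypercalls */

            __set_bit(vpid, pmask);
        }

        return 0;
    }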

> +    }
> +
> +    max_gvas = (PAGE_SIZE - sizeof(*flush)) / sizeof(flush->gva_list[0]);

You could init this at declaration, and make it const static since the
value can be calculated at compile time AFAICT. Or create a define
with it (HV_TLB_FLUSH_MAX_GVAS?). There's no need to store it on the
stack.
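
For example, something along these lines (illustrative, using the name
suggested above; gva_list entries are uint64_t):

    #define HV_TLB_FLUSH_MAX_GVAS \
        ((PAGE_SIZE - sizeof(struct hv_tlb_flush)) / sizeof(uint64_t))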

> +
> +    /*
> +     * Flush the entire address space if va is NULL or if there is not
> +     * enough space for gva_list.
> +     */
> +    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > max_gvas )
> +        ret = hv_do_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE,
> +                              virt_to_maddr(flush), 0);
> +    else
> +    {
> +        unsigned int gvas = fill_gva_list(flush->gva_list, va, order);

No need for the gvas variable, you can just call fill_gva_list at
hv_do_rep_hypercall.

> +
> +        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST, gvas, 0,
> +                                  virt_to_maddr(flush), 0);
> +    }
> +
> +    goto done;
> +
> + do_ex_hypercall:
> +    ret = flush_tlb_ex(mask, va, flags);
> +
> + done:
> +    local_irq_restore(irq_flags);
> +
> +    return ret & HV_HYPERCALL_RESULT_MASK ? -ENXIO : 0;
>  }
>  
>  /*
> diff --git a/xen/arch/x86/guest/hyperv/util.c b/xen/arch/x86/guest/hyperv/util.c
> new file mode 100644
> index 0000000000..e092593746
> --- /dev/null
> +++ b/xen/arch/x86/guest/hyperv/util.c
> @@ -0,0 +1,74 @@
> +/******************************************************************************
> + * arch/x86/guest/hyperv/util.c
> + *
> + * Hyper-V utility functions
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + *
> + * Copyright (c) 2020 Microsoft.
> + */
> +
> +#include <xen/cpu.h>
> +#include <xen/cpumask.h>
> +#include <xen/errno.h>
> +
> +#include <asm/guest/hyperv.h>
> +#include <asm/guest/hyperv-tlfs.h>
> +
> +#include "private.h"
> +
> +int cpumask_to_vpset(struct hv_vpset *vpset,
> +                     const cpumask_t *mask)
> +{
> +    int nr = 1;
> +    unsigned int cpu, vcpu_bank, vcpu_offset;
> +    unsigned int max_banks = ms_hyperv.max_vp_index / 64;
> +
> +    /* Up to 64 banks can be represented by valid_bank_mask */
> +    if ( max_banks >= 64 )
> +        return -E2BIG;
> +
> +    /* Clear all banks to avoid flushing unwanted CPUs */
> +    for ( vcpu_bank = 0; vcpu_bank <= max_banks; vcpu_bank++ )

I think this is off by one and should be vcpu_bank < max_banks? Or
else you are clearing one extra bank.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb Wei Liu
@ 2020-02-14 14:44   ` Roger Pau Monné
  2020-02-14 16:51   ` Durrant, Paul
  1 sibling, 0 replies; 24+ messages in thread
From: Roger Pau Monné @ 2020-02-14 14:44 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Xen Development List

On Fri, Feb 14, 2020 at 12:34:28PM +0000, Wei Liu wrote:
> Hyper-V's L0 assisted flush has fine-grained control over what gets
> flushed. We need all the flags available to make the best decisions
> possible.
> 
> No functional change because Xen's implementation doesn't care about
> what is passed to it.
> 
> Signed-off-by: Wei Liu <liuwe@microsoft.com>

LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just one comment below.

> ---
> v2:
> 1. Introduce FLUSH_TLB_FLAGS_MASK
> ---
>  xen/arch/x86/guest/hypervisor.c        |  7 +++++--
>  xen/arch/x86/guest/xen/xen.c           |  2 +-
>  xen/arch/x86/smp.c                     |  5 ++---
>  xen/include/asm-x86/flushtlb.h         |  3 +++
>  xen/include/asm-x86/guest/hypervisor.h | 10 +++++-----
>  5 files changed, 16 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
> index 47e938e287..6ee28c9df1 100644
> --- a/xen/arch/x86/guest/hypervisor.c
> +++ b/xen/arch/x86/guest/hypervisor.c
> @@ -75,10 +75,13 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
>  }
>  
>  int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
> -                         unsigned int order)
> +                         unsigned int flags)
>  {
> +    if ( flags & ~FLUSH_TLB_FLAGS_MASK )

I think an ASSERT_UNREACHABLE() would be good here, since you are not
supposed to call hypervisor_flush_tlb with non TLB related flags.
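
I.e. something like:

    if ( flags & ~FLUSH_TLB_FLAGS_MASK )
    {
        ASSERT_UNREACHABLE();
        return -EINVAL;
    }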

Thanks, Roger.


* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush Wei Liu
@ 2020-02-14 14:46   ` Roger Pau Monné
  2020-02-14 16:55   ` Durrant, Paul
  1 sibling, 0 replies; 24+ messages in thread
From: Roger Pau Monné @ 2020-02-14 14:46 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Xen Development List

On Fri, Feb 14, 2020 at 12:34:29PM +0000, Wei Liu wrote:
> Implement a basic hook for L0 assisted TLB flush. The hook needs to
> check if prerequisites are met. If they are not met, it returns an error
> number to fall back to native flushes.
> 
> Introduce a new variable to indicate if hypercall page is ready.
> 
> Signed-off-by: Wei Liu <liuwe@microsoft.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just one comment below.

> ---
>  xen/arch/x86/guest/hyperv/Makefile  |  1 +
>  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
>  xen/arch/x86/guest/hyperv/private.h |  4 +++
>  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
>  4 files changed, 63 insertions(+)
>  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> 
> diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
> index 68170109a9..18902c33e9 100644
> --- a/xen/arch/x86/guest/hyperv/Makefile
> +++ b/xen/arch/x86/guest/hyperv/Makefile
> @@ -1 +1,2 @@
>  obj-y += hyperv.o
> +obj-y += tlb.o
> diff --git a/xen/arch/x86/guest/hyperv/hyperv.c b/xen/arch/x86/guest/hyperv/hyperv.c
> index 70f4cd5ae0..f9d1f11ae3 100644
> --- a/xen/arch/x86/guest/hyperv/hyperv.c
> +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
>  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
>  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
>  
> +static bool __read_mostly hv_hcall_page_ready;

Since this is static, I would drop the hv_ prefix.

Thanks, Roger.


* Re: [Xen-devel] [PATCH v2 3/3] x86/hyperv: L0 assisted TLB flush
  2020-02-14 14:42   ` Roger Pau Monné
@ 2020-02-14 16:03     ` Wei Liu
  0 siblings, 0 replies; 24+ messages in thread
From: Wei Liu @ 2020-02-14 16:03 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
	Jan Beulich, Xen Development List

On Fri, Feb 14, 2020 at 03:42:17PM +0100, Roger Pau Monné wrote:
[...]
> >  #endif /* __XEN_HYPERV_PRIVIATE_H__  */
> > diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
> > index 48f527229e..f68e14f151 100644
> > --- a/xen/arch/x86/guest/hyperv/tlb.c
> > +++ b/xen/arch/x86/guest/hyperv/tlb.c
> > @@ -19,15 +19,185 @@
> >   * Copyright (c) 2020 Microsoft.
> >   */
> >  
> > +#include <xen/cpu.h>
> >  #include <xen/cpumask.h>
> >  #include <xen/errno.h>
> >  
> > +#include <asm/guest/hyperv.h>
> > +#include <asm/guest/hyperv-hcall.h>
> > +#include <asm/guest/hyperv-tlfs.h>
> > +
> >  #include "private.h"
> >  
> > +/*
> > + * It is possible to encode up to 4096 pages using the lower 12 bits
> > + * in an element of gva_list
> > + */
> > +#define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)
> > +
> > +static unsigned int fill_gva_list(uint64_t *gva_list, const void *va,
> > +                                  unsigned int order)
> > +{
> > +    unsigned long start = (unsigned long)va;
> > +    unsigned long end = start + (PAGE_SIZE << order) - 1;
> > +    unsigned int n = 0;
> > +
> > +    do {
> > +        unsigned long remain = end - start;
> > +
> > +        gva_list[n] = start & PAGE_MASK;
> > +
> > +        /*
> > +         * Use lower 12 bits to encode the number of additional pages
> > +         * to flush
> > +         */
> > +        if ( remain >= HV_TLB_FLUSH_UNIT )
> > +        {
> > +            gva_list[n] |= ~PAGE_MASK;
> > +            start += HV_TLB_FLUSH_UNIT;
> > +        }
> > +        else if ( remain )
> 
> remain is always going to be > 0, since the loop condition is end >
> start, and hence this can be a plain else.

Ack.

> 
> > +        {
> > +            gva_list[n] |= (remain - 1) >> PAGE_SHIFT;
> > +            start = end;
> > +        }
> > +
> > +        n++;
> > +    } while ( start < end );
> > +
> > +    return n;
> > +}
> > +
> > +static uint64_t flush_tlb_ex(const cpumask_t *mask, const void *va,
> > +                             unsigned int flags)
> > +{
> > +    struct hv_tlb_flush_ex *flush = this_cpu(hv_input_page);
> > +    int nr_banks;
> > +    unsigned int max_gvas, order = flags & FLUSH_ORDER_MASK;
> > +    uint64_t ret;
> > +
> > +    if ( !flush || local_irq_is_enabled() )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return ~0ULL;
> > +    }
> > +
> > +    if ( !(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED) )
> > +        return ~0ULL;
> > +
> > +    flush->address_space = 0;
> > +    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> > +    if ( !(flags & FLUSH_TLB_GLOBAL) )
> > +        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
> > +
> > +    nr_banks = cpumask_to_vpset(&flush->hv_vp_set, mask);
> > +    if ( nr_banks < 0 )
> > +        return ~0ULL;
> 
> It would be nice to propagate the error code from cpumask_to_vpset,
> but since the function can also return HyperV error codes this doesn't
> make much sense.
> 
> > +
> > +    max_gvas =
> > +        (PAGE_SIZE - sizeof(*flush) - nr_banks *
> > +         sizeof(flush->hv_vp_set.bank_contents[0])) /
> > +        sizeof(uint64_t);       /* gva is represented as uint64_t */
> > +
> > +    /*
> > +     * Flush the entire address space if va is NULL or if there is not
> > +     * enough space for gva_list.
> > +     */
> > +    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > max_gvas )
> > +        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX, 0,
> > +                                  nr_banks, virt_to_maddr(flush), 0);
> 
> You could just return hv_do_rep_hypercall(...); here, which will avoid
> the else branch below and the indentation.

Ack.

> 
> > +    else
> > +    {
> > +        uint64_t *gva_list =
> > +            (uint64_t *)flush + sizeof(*flush) / sizeof(uint64_t) + nr_banks;
> > +        unsigned int gvas = fill_gva_list(gva_list, va, order);
> > +
> > +        BUILD_BUG_ON(sizeof(*flush) % sizeof(uint64_t));
> > +
> > +        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX,
> > +                                  gvas, nr_banks, virt_to_maddr(flush), 0);
> > +    }
> > +
> > +    return ret;
> > +}
> > +
> >  int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
> >                       unsigned int flags)
> >  {
> > -    return -EOPNOTSUPP;
> > +    unsigned long irq_flags;
> > +    struct hv_tlb_flush *flush = this_cpu(hv_input_page);
> > +    unsigned int max_gvas, order = flags & FLUSH_ORDER_MASK;
> > +    uint64_t ret;
> > +
> > +    ASSERT(flush);
> > +    ASSERT(!cpumask_empty(mask));
> 
> I would also turn this into an if ( ... ) { ASSERT; return -EFOO; }

Ack.

> 
> > +
> > +    local_irq_save(irq_flags);
> > +
> > +    flush->address_space = 0;
> > +    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> > +    flush->processor_mask = 0;
> > +    if ( !(flags & FLUSH_TLB_GLOBAL) )
> > +        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
> > +
> > +    if ( cpumask_equal(mask, &cpu_online_map) )
> > +        flush->flags |= HV_FLUSH_ALL_PROCESSORS;
> > +    else
> > +    {
> > +        unsigned int cpu;
> > +
> > +        /*
> > +         * Normally VP indices are in ascending order and match Xen's
> > +         * idea of CPU ids. Check the last index to see if VP index is
> > +         * >= 64. If so, we can skip setting up parameters for
> > +         * non-applicable hypercalls without looking further.
> > +         */
> > +        if ( hv_vp_index(cpumask_last(mask)) >= 64 )
> > +            goto do_ex_hypercall;
> > +
> > +        for_each_cpu ( cpu, mask )
> > +        {
> > +            uint32_t vpid = hv_vp_index(cpu);
> 
> This should be unsigned int now.

Good catch.

> 
> > +
> > +            if ( vpid > ms_hyperv.max_vp_index )
> > +            {
> > +                local_irq_restore(irq_flags);
> > +                return -ENXIO;
> > +            }
> > +
> > +            if ( vpid >= 64 )
> > +                goto do_ex_hypercall;
> > +
> > +            __set_bit(vpid, &flush->processor_mask);
> > +        }
> 
> Would it make sense to abstract this as cpumask_to_processor_mask,
> since you are adding cpumask_to_vpset below.
> 

There is only one usage so far, so I don't think it is necessary.

> > +    }
> > +
> > +    max_gvas = (PAGE_SIZE - sizeof(*flush)) / sizeof(flush->gva_list[0]);
> 
> You could init this at declaration, and make it const static since the
> value can be calculated at compile time AFAICT. Or create a define
> with it (HV_TLB_FLUSH_MAX_GVAS?). There's no need to store it on the
> stack.

I can introduce a define, but the name you suggested is too generic. If
there is a variable-sized header, the calculation is going to be different.

> 
> > +
> > +    /*
> > +     * Flush the entire address space if va is NULL or if there is not
> > +     * enough space for gva_list.
> > +     */
> > +    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > max_gvas )
> > +        ret = hv_do_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE,
> > +                              virt_to_maddr(flush), 0);
> > +    else
> > +    {
> > +        unsigned int gvas = fill_gva_list(flush->gva_list, va, order);
> 
> No need for the gvas variable, you can just call fill_gva_list at
> hv_do_rep_hypercall.
> 

Sure.

> > +
> > +        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST, gvas, 0,
> > +                                  virt_to_maddr(flush), 0);
> > +    }
> > +
> > +    goto done;
> > +
> > + do_ex_hypercall:
> > +    ret = flush_tlb_ex(mask, va, flags);
> > +
> > + done:
> > +    local_irq_restore(irq_flags);
> > +
> > +    return ret & HV_HYPERCALL_RESULT_MASK ? -ENXIO : 0;
> >  }
> >  
> >  /*
> > diff --git a/xen/arch/x86/guest/hyperv/util.c b/xen/arch/x86/guest/hyperv/util.c
> > new file mode 100644
> > index 0000000000..e092593746
> > --- /dev/null
> > +++ b/xen/arch/x86/guest/hyperv/util.c
> > @@ -0,0 +1,74 @@
> > +/******************************************************************************
> > + * arch/x86/guest/hyperv/util.c
> > + *
> > + * Hyper-V utility functions
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> > + *
> > + * Copyright (c) 2020 Microsoft.
> > + */
> > +
> > +#include <xen/cpu.h>
> > +#include <xen/cpumask.h>
> > +#include <xen/errno.h>
> > +
> > +#include <asm/guest/hyperv.h>
> > +#include <asm/guest/hyperv-tlfs.h>
> > +
> > +#include "private.h"
> > +
> > +int cpumask_to_vpset(struct hv_vpset *vpset,
> > +                     const cpumask_t *mask)
> > +{
> > +    int nr = 1;
> > +    unsigned int cpu, vcpu_bank, vcpu_offset;
> > +    unsigned int max_banks = ms_hyperv.max_vp_index / 64;
> > +
> > +    /* Up to 64 banks can be represented by valid_bank_mask */
> > +    if ( max_banks >= 64 )
> > +        return -E2BIG;
> > +
> > +    /* Clear all banks to avoid flushing unwanted CPUs */
> > +    for ( vcpu_bank = 0; vcpu_bank <= max_banks; vcpu_bank++ )
> 
> I think this is off by one and should be vcpu_bank < max_banks? Or
> else you are clearing one extra bank.

If max_vp_index were the maximum VP index, max_banks would be an inclusive
bound (for example, max_vp_index being 63 gives 63/64 = 0), and the code
would be correct in that case.

However, when I did another round of self-review, I discovered that
the value stored in max_vp_index was actually "the maximum number of
virtual processors supported", so it wasn't really the maximum index.

This means I will need to adjust places where max_vp_index is used.

Here, max_banks is no longer an inclusive bound.

And an earlier place

 +            if ( vpid > ms_hyperv.max_vp_index )
 +            {

also requires fixing.

I will also see about changing max_{l,v}p_index to more sensible names
or add some comments to their definitions.
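
If it does turn out to hold a count rather than an index, the adjustments
could look something like this (a sketch only, not tested):

    unsigned int max_banks = DIV_ROUND_UP(ms_hyperv.max_vp_index, 64);

    for ( vcpu_bank = 0; vcpu_bank < max_banks; vcpu_bank++ )
        vpset->bank_contents[vcpu_bank] = 0;

    /* ... and in hyperv_flush_tlb(): */
    if ( vpid >= ms_hyperv.max_vp_index )
    {
        local_irq_restore(irq_flags);
        return -ENXIO;
    }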

Wei.

> 
> Thanks, Roger.


* Re: [Xen-devel] [PATCH v2 3/3] x86/hyperv: L0 assisted TLB flush
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 3/3] x86/hyperv: " Wei Liu
  2020-02-14 14:42   ` Roger Pau Monné
@ 2020-02-14 16:42   ` Michael Kelley
  2020-02-17 12:03     ` Wei Liu
  1 sibling, 1 reply; 24+ messages in thread
From: Michael Kelley @ 2020-02-14 16:42 UTC (permalink / raw)
  To: Wei Liu, Xen Development List
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
Sent: Friday, February 14, 2020 4:35 AM
> 
> Implement L0 assisted TLB flush for Xen on Hyper-V. It takes advantage
> of several hypercalls:
> 
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE
>  * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX
> 
> Pick the most efficient hypercalls available.
> 
> Signed-off-by: Wei Liu <liuwe@microsoft.com>
> ---
> v2:
> 1. Address Roger and Jan's comments re types etc.
> 2. Fix pointer arithmetic.
> 3. Misc improvement to code.
> ---
>  xen/arch/x86/guest/hyperv/Makefile  |   1 +
>  xen/arch/x86/guest/hyperv/private.h |   9 ++
>  xen/arch/x86/guest/hyperv/tlb.c     | 172 +++++++++++++++++++++++++++-
>  xen/arch/x86/guest/hyperv/util.c    |  74 ++++++++++++
>  4 files changed, 255 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/x86/guest/hyperv/util.c
> 
> diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
> index 18902c33e9..0e39410968 100644
> --- a/xen/arch/x86/guest/hyperv/Makefile
> +++ b/xen/arch/x86/guest/hyperv/Makefile
> @@ -1,2 +1,3 @@
>  obj-y += hyperv.o
>  obj-y += tlb.o
> +obj-y += util.o
> diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
> index 509bedaafa..79a77930a0 100644
> --- a/xen/arch/x86/guest/hyperv/private.h
> +++ b/xen/arch/x86/guest/hyperv/private.h
> @@ -24,12 +24,21 @@
> 
>  #include <xen/cpumask.h>
>  #include <xen/percpu.h>
> +#include <xen/types.h>
> 
>  DECLARE_PER_CPU(void *, hv_input_page);
>  DECLARE_PER_CPU(void *, hv_vp_assist);
>  DECLARE_PER_CPU(unsigned int, hv_vp_index);
> 
> +static inline unsigned int hv_vp_index(unsigned int cpu)
> +{
> +    return per_cpu(hv_vp_index, cpu);
> +}
> +
>  int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
>                       unsigned int flags);
> 
> +/* Returns number of banks, -ev if error */
> +int cpumask_to_vpset(struct hv_vpset *vpset, const cpumask_t *mask);
> +
>  #endif /* __XEN_HYPERV_PRIVIATE_H__  */
> diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
> index 48f527229e..f68e14f151 100644
> --- a/xen/arch/x86/guest/hyperv/tlb.c
> +++ b/xen/arch/x86/guest/hyperv/tlb.c
> @@ -19,15 +19,185 @@
>   * Copyright (c) 2020 Microsoft.
>   */
> 
> +#include <xen/cpu.h>
>  #include <xen/cpumask.h>
>  #include <xen/errno.h>
> 
> +#include <asm/guest/hyperv.h>
> +#include <asm/guest/hyperv-hcall.h>
> +#include <asm/guest/hyperv-tlfs.h>
> +
>  #include "private.h"
> 
> +/*
> + * It is possible to encode up to 4096 pages using the lower 12 bits
> + * in an element of gva_list
> + */
> +#define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)
> +
> +static unsigned int fill_gva_list(uint64_t *gva_list, const void *va,
> +                                  unsigned int order)
> +{
> +    unsigned long start = (unsigned long)va;
> +    unsigned long end = start + (PAGE_SIZE << order) - 1;
> +    unsigned int n = 0;
> +
> +    do {
> +        unsigned long remain = end - start;

The calculated value here isn't actually the remaining bytes in the
range to flush -- it's one less than the remaining bytes in the range
to flush because of the -1 in the calculation of 'end'.   That difference
will mess up the comparison below against HV_TLB_FLUSH_UNIT
in the case that there are exactly 4096 pages remaining to be
flushed.  It should take the "=" case, but won't.  Also, the
'-1' in 'remain - 1' in the else clause becomes unneeded, and
the 'start = end' assignment then propagates the error.

In the parallel code in Linux, if you follow the call sequence to get to
fill_gav_list(), the 'end' argument is really the address of the first byte
of the first page that isn't in the flush range (i.e., one beyond the true
'end') and so is a bit misnamed.

I think the calculation of 'end' should drop the -1, and perhaps 'end'
should be renamed.
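
Something like this for the 'end' calculation (a sketch of the suggestion
only):

    unsigned long start = (unsigned long)va;
    unsigned long end = start + (PAGE_SIZE << order); /* one past the range */

with 'remain = end - start' in the loop then being the true count of bytes
left, and the else clause adjusted to match.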

Michael

> +
> +        gva_list[n] = start & PAGE_MASK;
> +
> +        /*
> +         * Use lower 12 bits to encode the number of additional pages
> +         * to flush
> +         */
> +        if ( remain >= HV_TLB_FLUSH_UNIT )
> +        {
> +            gva_list[n] |= ~PAGE_MASK;
> +            start += HV_TLB_FLUSH_UNIT;
> +        }
> +        else if ( remain )
> +        {
> +            gva_list[n] |= (remain - 1) >> PAGE_SHIFT;
> +            start = end;
> +        }
> +
> +        n++;
> +    } while ( start < end );
> +
> +    return n;
> +}
> +



* Re: [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb Wei Liu
  2020-02-14 14:44   ` Roger Pau Monné
@ 2020-02-14 16:51   ` Durrant, Paul
  1 sibling, 0 replies; 24+ messages in thread
From: Durrant, Paul @ 2020-02-14 16:51 UTC (permalink / raw)
  To: Wei Liu, Xen Development List
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jan Beulich, Michael Kelley

> -----Original Message-----
> From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> Sent: 14 February 2020 13:34
> To: Xen Development List <xen-devel@lists.xenproject.org>
> Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Wei Liu
> <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb
> 
> Hyper-V's L0 assisted flush has fine-grained control over what gets
> flushed. We need all the flags available to make the best decisions
> possible.
> 
> No functional change because Xen's implementation doesn't care about
> what is passed to it.
> 
> Signed-off-by: Wei Liu <liuwe@microsoft.com>

Reviewed-by: Paul Durrant <pdurrant@amazon.com>

> ---
> v2:
> 1. Introduce FLUSH_TLB_FLAGS_MASK
> ---
>  xen/arch/x86/guest/hypervisor.c        |  7 +++++--
>  xen/arch/x86/guest/xen/xen.c           |  2 +-
>  xen/arch/x86/smp.c                     |  5 ++---
>  xen/include/asm-x86/flushtlb.h         |  3 +++
>  xen/include/asm-x86/guest/hypervisor.h | 10 +++++-----
>  5 files changed, 16 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/x86/guest/hypervisor.c
> b/xen/arch/x86/guest/hypervisor.c
> index 47e938e287..6ee28c9df1 100644
> --- a/xen/arch/x86/guest/hypervisor.c
> +++ b/xen/arch/x86/guest/hypervisor.c
> @@ -75,10 +75,13 @@ void __init hypervisor_e820_fixup(struct e820map
> *e820)
>  }
> 
>  int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
> -                         unsigned int order)
> +                         unsigned int flags)
>  {
> +    if ( flags & ~FLUSH_TLB_FLAGS_MASK )
> +        return -EINVAL;
> +
>      if ( ops.flush_tlb )
> -        return alternative_call(ops.flush_tlb, mask, va, order);
> +        return alternative_call(ops.flush_tlb, mask, va, flags);
> 
>      return -ENOSYS;
>  }
> diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
> index 5d3427a713..0eb1115c4d 100644
> --- a/xen/arch/x86/guest/xen/xen.c
> +++ b/xen/arch/x86/guest/xen/xen.c
> @@ -324,7 +324,7 @@ static void __init e820_fixup(struct e820map *e820)
>          pv_shim_fixup_e820(e820);
>  }
> 
> -static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int
> order)
> +static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int
> flags)
>  {
>      return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
>  }
> diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
> index 9bc925616a..2ab0e30eef 100644
> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -258,9 +258,8 @@ void flush_area_mask(const cpumask_t *mask, const void
> *va, unsigned int flags)
>           !cpumask_subset(mask, cpumask_of(cpu)) )
>      {
>          if ( cpu_has_hypervisor &&
> -             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
> -                         FLUSH_ORDER_MASK)) &&
> -             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
> +             !(flags & ~FLUSH_TLB_FLAGS_MASK) &&
> +             !hypervisor_flush_tlb(mask, va, flags) )
>          {
>              if ( tlb_clk_enabled )
>                  tlb_clk_enabled = false;
> diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-
> x86/flushtlb.h
> index 9773014320..a4de317452 100644
> --- a/xen/include/asm-x86/flushtlb.h
> +++ b/xen/include/asm-x86/flushtlb.h
> @@ -123,6 +123,9 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long
> cr4);
>   /* Flush all HVM guests linear TLB (using ASID/VPID) */
>  #define FLUSH_GUESTS_TLB 0x4000
> 
> +#define FLUSH_TLB_FLAGS_MASK (FLUSH_TLB | FLUSH_TLB_GLOBAL |
> FLUSH_VA_VALID | \
> +                              FLUSH_ORDER_MASK)
> +
>  /* Flush local TLBs/caches. */
>  unsigned int flush_area_local(const void *va, unsigned int flags);
>  #define flush_local(flags) flush_area_local(NULL, flags)
> diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-
> x86/guest/hypervisor.h
> index 432e57c2a0..48d54735d2 100644
> --- a/xen/include/asm-x86/guest/hypervisor.h
> +++ b/xen/include/asm-x86/guest/hypervisor.h
> @@ -35,7 +35,7 @@ struct hypervisor_ops {
>      /* Fix up e820 map */
>      void (*e820_fixup)(struct e820map *e820);
>      /* L0 assisted TLB flush */
> -    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int
> order);
> +    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int
> flags);
>  };
> 
>  #ifdef CONFIG_GUEST
> @@ -48,11 +48,11 @@ void hypervisor_e820_fixup(struct e820map *e820);
>  /*
>   * L0 assisted TLB flush.
>   * mask: cpumask of the dirty vCPUs that should be flushed.
> - * va: linear address to flush, or NULL for global flushes.
> - * order: order of the linear address pointed by va.
> + * va: linear address to flush, or NULL for entire address space.
> + * flags: flags for flushing, including the order of va.
>   */
>  int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
> -                         unsigned int order);
> +                         unsigned int flags);
> 
>  #else
> 
> @@ -65,7 +65,7 @@ static inline int hypervisor_ap_setup(void) { return 0;
> }
>  static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
>  static inline void hypervisor_e820_fixup(struct e820map *e820) {}
>  static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void
> *va,
> -                                       unsigned int order)
> +                                       unsigned int flags)
>  {
>      return -ENOSYS;
>  }
> --
> 2.20.1



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-14 12:34 ` [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush Wei Liu
  2020-02-14 14:46   ` Roger Pau Monné
@ 2020-02-14 16:55   ` Durrant, Paul
  2020-02-17 11:34     ` Wei Liu
  1 sibling, 1 reply; 24+ messages in thread
From: Durrant, Paul @ 2020-02-14 16:55 UTC (permalink / raw)
  To: Wei Liu, Xen Development List
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jan Beulich, Michael Kelley

> -----Original Message-----
> From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> Sent: 14 February 2020 13:34
> To: Xen Development List <xen-devel@lists.xenproject.org>
> Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> 
> Implement a basic hook for L0 assisted TLB flush. The hook needs to
> check if prerequisites are met. If they are not met, it returns an error
> number to fall back to native flushes.
> 
> Introduce a new variable to indicate if hypercall page is ready.
> 
> Signed-off-by: Wei Liu <liuwe@microsoft.com>
> ---
>  xen/arch/x86/guest/hyperv/Makefile  |  1 +
>  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
>  xen/arch/x86/guest/hyperv/private.h |  4 +++
>  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
>  4 files changed, 63 insertions(+)
>  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> 
> diff --git a/xen/arch/x86/guest/hyperv/Makefile
> b/xen/arch/x86/guest/hyperv/Makefile
> index 68170109a9..18902c33e9 100644
> --- a/xen/arch/x86/guest/hyperv/Makefile
> +++ b/xen/arch/x86/guest/hyperv/Makefile
> @@ -1 +1,2 @@
>  obj-y += hyperv.o
> +obj-y += tlb.o
> diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> b/xen/arch/x86/guest/hyperv/hyperv.c
> index 70f4cd5ae0..f9d1f11ae3 100644
> --- a/xen/arch/x86/guest/hyperv/hyperv.c
> +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
>  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
>  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> 
> +static bool __read_mostly hv_hcall_page_ready;
> +
>  static uint64_t generate_guest_id(void)
>  {
>      union hv_guest_os_id id = {};
> @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
>      BUG_ON(!hypercall_msr.enable);
> 
>      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);

Shouldn't this have at least a compiler barrier here?

  Paul

> +
> +    hv_hcall_page_ready = true;
>  }
> 
>  static int setup_hypercall_pcpu_arg(void)
> @@ -199,11 +203,24 @@ static void __init e820_fixup(struct e820map *e820)
>          panic("Unable to reserve Hyper-V hypercall range\n");
>  }
> 
> +static int flush_tlb(const cpumask_t *mask, const void *va,
> +                     unsigned int flags)
> +{
> +    if ( !(ms_hyperv.hints & HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED) )
> +        return -EOPNOTSUPP;
> +
> +    if ( !hv_hcall_page_ready || !this_cpu(hv_input_page) )
> +        return -ENXIO;
> +
> +    return hyperv_flush_tlb(mask, va, flags);
> +}
> +
>  static const struct hypervisor_ops __initdata ops = {
>      .name = "Hyper-V",
>      .setup = setup,
>      .ap_setup = ap_setup,
>      .e820_fixup = e820_fixup,
> +    .flush_tlb = flush_tlb,
>  };
> 
>  /*
> diff --git a/xen/arch/x86/guest/hyperv/private.h
> b/xen/arch/x86/guest/hyperv/private.h
> index 956eff831f..509bedaafa 100644
> --- a/xen/arch/x86/guest/hyperv/private.h
> +++ b/xen/arch/x86/guest/hyperv/private.h
> @@ -22,10 +22,14 @@
>  #ifndef __XEN_HYPERV_PRIVIATE_H__
>  #define __XEN_HYPERV_PRIVIATE_H__
> 
> +#include <xen/cpumask.h>
>  #include <xen/percpu.h>
> 
>  DECLARE_PER_CPU(void *, hv_input_page);
>  DECLARE_PER_CPU(void *, hv_vp_assist);
>  DECLARE_PER_CPU(unsigned int, hv_vp_index);
> 
> +int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
> +                     unsigned int flags);
> +
>  #endif /* __XEN_HYPERV_PRIVIATE_H__  */
> diff --git a/xen/arch/x86/guest/hyperv/tlb.c
> b/xen/arch/x86/guest/hyperv/tlb.c
> new file mode 100644
> index 0000000000..48f527229e
> --- /dev/null
> +++ b/xen/arch/x86/guest/hyperv/tlb.c
> @@ -0,0 +1,41 @@
> +/************************************************************************
> ******
> + * arch/x86/guest/hyperv/tlb.c
> + *
> + * Support for TLB management using hypercalls
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + *
> + * Copyright (c) 2020 Microsoft.
> + */
> +
> +#include <xen/cpumask.h>
> +#include <xen/errno.h>
> +
> +#include "private.h"
> +
> +int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
> +                     unsigned int flags)
> +{
> +    return -EOPNOTSUPP;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> --
> 2.20.1



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-14 16:55   ` Durrant, Paul
@ 2020-02-17 11:34     ` Wei Liu
  2020-02-17 11:40       ` Roger Pau Monné
  0 siblings, 1 reply; 24+ messages in thread
From: Wei Liu @ 2020-02-17 11:34 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Roger Pau Monné

On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > Sent: 14 February 2020 13:34
> > To: Xen Development List <xen-devel@lists.xenproject.org>
> > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > 
> > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > check if prerequisites are met. If they are not met, it returns an error
> > number to fall back to native flushes.
> > 
> > Introduce a new variable to indicate if hypercall page is ready.
> > 
> > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > ---
> >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> >  4 files changed, 63 insertions(+)
> >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > 
> > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > b/xen/arch/x86/guest/hyperv/Makefile
> > index 68170109a9..18902c33e9 100644
> > --- a/xen/arch/x86/guest/hyperv/Makefile
> > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > @@ -1 +1,2 @@
> >  obj-y += hyperv.o
> > +obj-y += tlb.o
> > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > b/xen/arch/x86/guest/hyperv/hyperv.c
> > index 70f4cd5ae0..f9d1f11ae3 100644
> > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > 
> > +static bool __read_mostly hv_hcall_page_ready;
> > +
> >  static uint64_t generate_guest_id(void)
> >  {
> >      union hv_guest_os_id id = {};
> > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> >      BUG_ON(!hypercall_msr.enable);
> > 
> >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> 
> Shouldn't this have at least a compiler barrier here?
> 

OK. I will add a write barrier here.
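
Roughly (untested sketch; assuming smp_wmb() is sufficient here, since
this only needs to order normal stores as seen by other CPUs):

    set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);

    /* Don't let the flag be set before the mapping is established */
    smp_wmb();

    hv_hcall_page_ready = true;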

Wei.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 11:34     ` Wei Liu
@ 2020-02-17 11:40       ` Roger Pau Monné
  2020-02-17 11:45         ` Wei Liu
  2020-02-17 12:01         ` Durrant, Paul
  0 siblings, 2 replies; 24+ messages in thread
From: Roger Pau Monné @ 2020-02-17 11:40 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Durrant, Paul

On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > Sent: 14 February 2020 13:34
> > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > > 
> > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > check if prerequisites are met. If they are not met, it returns an error
> > > number to fall back to native flushes.
> > > 
> > > Introduce a new variable to indicate if hypercall page is ready.
> > > 
> > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > ---
> > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > >  4 files changed, 63 insertions(+)
> > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > 
> > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > b/xen/arch/x86/guest/hyperv/Makefile
> > > index 68170109a9..18902c33e9 100644
> > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > @@ -1 +1,2 @@
> > >  obj-y += hyperv.o
> > > +obj-y += tlb.o
> > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > 
> > > +static bool __read_mostly hv_hcall_page_ready;
> > > +
> > >  static uint64_t generate_guest_id(void)
> > >  {
> > >      union hv_guest_os_id id = {};
> > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > >      BUG_ON(!hypercall_msr.enable);
> > > 
> > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > 
> > Shouldn't this have at least a compiler barrier here?
> > 
> 
> OK. I will add a write barrier here.

Hm, shouldn't such barrier be part of set_fixmap_x itself?

Note that map_pages_to_xen already performs atomic writes.

Roger.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 11:40       ` Roger Pau Monné
@ 2020-02-17 11:45         ` Wei Liu
  2020-02-17 12:00           ` Roger Pau Monné
  2020-02-17 12:01         ` Durrant, Paul
  1 sibling, 1 reply; 24+ messages in thread
From: Wei Liu @ 2020-02-17 11:45 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Durrant, Paul

On Mon, Feb 17, 2020 at 12:40:31PM +0100, Roger Pau Monné wrote:
> On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > Sent: 14 February 2020 13:34
> > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > > > 
> > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > check if prerequisites are met. If they are not met, it returns an error
> > > > number to fall back to native flushes.
> > > > 
> > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > 
> > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > ---
> > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > > >  4 files changed, 63 insertions(+)
> > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > 
> > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > index 68170109a9..18902c33e9 100644
> > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > @@ -1 +1,2 @@
> > > >  obj-y += hyperv.o
> > > > +obj-y += tlb.o
> > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > 
> > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > +
> > > >  static uint64_t generate_guest_id(void)
> > > >  {
> > > >      union hv_guest_os_id id = {};
> > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > >      BUG_ON(!hypercall_msr.enable);
> > > > 
> > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > 
> > > Shouldn't this have at least a compiler barrier here?
> > > 
> > 
> > OK. I will add a write barrier here.
> 
> Hm, shouldn't such barrier be part of set_fixmap_x itself?
> 
> Note that map_pages_to_xen already performs atomic writes.

I don't mind making things more explicit though. However unlikely, I
may end up putting something in between set_fixmap_x and setting
hcall_page_ready, I will need the barrier by then, I may as well put it
in now.

Wei.

> 
> Roger.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 11:45         ` Wei Liu
@ 2020-02-17 12:00           ` Roger Pau Monné
  2020-02-17 12:08             ` Wei Liu
  0 siblings, 1 reply; 24+ messages in thread
From: Roger Pau Monné @ 2020-02-17 12:00 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Durrant, Paul

On Mon, Feb 17, 2020 at 11:45:38AM +0000, Wei Liu wrote:
> On Mon, Feb 17, 2020 at 12:40:31PM +0100, Roger Pau Monné wrote:
> > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > -----Original Message-----
> > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > Sent: 14 February 2020 13:34
> > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > > > > 
> > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > > check if prerequisites are met. If they are not met, it returns an error
> > > > > number to fall back to native flushes.
> > > > > 
> > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > > 
> > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > ---
> > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > > > >  4 files changed, 63 insertions(+)
> > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > > 
> > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > index 68170109a9..18902c33e9 100644
> > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > @@ -1 +1,2 @@
> > > > >  obj-y += hyperv.o
> > > > > +obj-y += tlb.o
> > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > > 
> > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > +
> > > > >  static uint64_t generate_guest_id(void)
> > > > >  {
> > > > >      union hv_guest_os_id id = {};
> > > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > > >      BUG_ON(!hypercall_msr.enable);
> > > > > 
> > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > > 
> > > > Shouldn't this have at least a compiler barrier here?
> > > > 
> > > 
> > > OK. I will add a write barrier here.
> > 
> > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > 
> > Note that map_pages_to_xen already performs atomic writes.
> 
> I don't mind making things more explicit though. However unlikely, I
> may end up putting something in between set_fixmap_x and setting
> hcall_page_ready, I will need the barrier by then, I may as well put it
> in now.

IMO set_fixmap_x should have the necessary barriers (or other
synchronization methods) so that on return the address is correctly
mapped across all processors, and that it prevents the compiler from
moving accesses past it. I would consider a bug of set_fixmap_x
not having this behavior and requiring callers to do extra work in
order to ensure this.

Ie: something like the snippet below should not require an extra
barrier IMO:

set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
*((unsigned int *)fix_x_to_virt(FIX_X_HYPERV_HCALL)) = 0;

Roger.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 11:40       ` Roger Pau Monné
  2020-02-17 11:45         ` Wei Liu
@ 2020-02-17 12:01         ` Durrant, Paul
  2020-02-17 12:08           ` Roger Pau Monné
  1 sibling, 1 reply; 24+ messages in thread
From: Durrant, Paul @ 2020-02-17 12:01 UTC (permalink / raw)
  To: Roger Pau Monné, Wei Liu
  Cc: Xen Development List, Andrew Cooper, Wei Liu, Jan Beulich,
	Michael Kelley

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 17 February 2020 11:41
> To: Wei Liu <wl@xen.org>
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Xen Development List <xen-
> devel@lists.xenproject.org>; Michael Kelley <mikelley@microsoft.com>; Wei
> Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>
> Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> 
> On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > Sent: 14 February 2020 13:34
> > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB
> flush
> > > >
> > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > check if prerequisites are met. If they are not met, it returns an
> error
> > > > number to fall back to native flushes.
> > > >
> > > > Introduce a new variable to indicate if hypercall page is ready.
> > > >
> > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > ---
> > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41
> +++++++++++++++++++++++++++++
> > > >  4 files changed, 63 insertions(+)
> > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > >
> > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > index 68170109a9..18902c33e9 100644
> > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > @@ -1 +1,2 @@
> > > >  obj-y += hyperv.o
> > > > +obj-y += tlb.o
> > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > >
> > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > +
> > > >  static uint64_t generate_guest_id(void)
> > > >  {
> > > >      union hv_guest_os_id id = {};
> > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > >      BUG_ON(!hypercall_msr.enable);
> > > >
> > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > >
> > > Shouldn't this have at least a compiler barrier here?
> > >
> >
> > OK. I will add a write barrier here.
> 
> Hm, shouldn't such barrier be part of set_fixmap_x itself?
> 

Not really, for the purpose I had in mind. The hv_hcall_page_ready global is specific to this code and we need to make sure the page is actually ready before the code says it is.

  Paul

> Note that map_pages_to_xen already performs atomic writes.
> 
> Roger.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 3/3] x86/hyperv: L0 assisted TLB flush
  2020-02-14 16:42   ` Michael Kelley
@ 2020-02-17 12:03     ` Wei Liu
  0 siblings, 0 replies; 24+ messages in thread
From: Wei Liu @ 2020-02-17 12:03 UTC (permalink / raw)
  To: Michael Kelley
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Jan Beulich,
	Xen Development List, Roger Pau Monné

On Fri, Feb 14, 2020 at 04:42:47PM +0000, Michael Kelley wrote:
> From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu Sent: Friday, February 14, 2020 4:35 AM
> > 
> > Implement L0 assisted TLB flush for Xen on Hyper-V. It takes advantage
> > of several hypercalls:
> > 
> >  * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST
> >  * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
> >  * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE
> >  * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX
> > 
> > Pick the most efficient hypercalls available.
> > 
> > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > ---
> > v2:
> > 1. Address Roger and Jan's comments re types etc.
> > 2. Fix pointer arithmetic.
> > 3. Misc improvement to code.
> > ---
> >  xen/arch/x86/guest/hyperv/Makefile  |   1 +
> >  xen/arch/x86/guest/hyperv/private.h |   9 ++
> >  xen/arch/x86/guest/hyperv/tlb.c     | 172 +++++++++++++++++++++++++++-
> >  xen/arch/x86/guest/hyperv/util.c    |  74 ++++++++++++
> >  4 files changed, 255 insertions(+), 1 deletion(-)
> >  create mode 100644 xen/arch/x86/guest/hyperv/util.c
> > 
> > diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
> > index 18902c33e9..0e39410968 100644
> > --- a/xen/arch/x86/guest/hyperv/Makefile
> > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > @@ -1,2 +1,3 @@
> >  obj-y += hyperv.o
> >  obj-y += tlb.o
> > +obj-y += util.o
> > diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
> > index 509bedaafa..79a77930a0 100644
> > --- a/xen/arch/x86/guest/hyperv/private.h
> > +++ b/xen/arch/x86/guest/hyperv/private.h
> > @@ -24,12 +24,21 @@
> > 
> >  #include <xen/cpumask.h>
> >  #include <xen/percpu.h>
> > +#include <xen/types.h>
> > 
> >  DECLARE_PER_CPU(void *, hv_input_page);
> >  DECLARE_PER_CPU(void *, hv_vp_assist);
> >  DECLARE_PER_CPU(unsigned int, hv_vp_index);
> > 
> > +static inline unsigned int hv_vp_index(unsigned int cpu)
> > +{
> > +    return per_cpu(hv_vp_index, cpu);
> > +}
> > +
> >  int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
> >                       unsigned int flags);
> > 
> > +/* Returns number of banks, -ev if error */
> > +int cpumask_to_vpset(struct hv_vpset *vpset, const cpumask_t *mask);
> > +
> >  #endif /* __XEN_HYPERV_PRIVIATE_H__  */
> > diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
> > index 48f527229e..f68e14f151 100644
> > --- a/xen/arch/x86/guest/hyperv/tlb.c
> > +++ b/xen/arch/x86/guest/hyperv/tlb.c
> > @@ -19,15 +19,185 @@
> >   * Copyright (c) 2020 Microsoft.
> >   */
> > 
> > +#include <xen/cpu.h>
> >  #include <xen/cpumask.h>
> >  #include <xen/errno.h>
> > 
> > +#include <asm/guest/hyperv.h>
> > +#include <asm/guest/hyperv-hcall.h>
> > +#include <asm/guest/hyperv-tlfs.h>
> > +
> >  #include "private.h"
> > 
> > +/*
> > + * It is possible to encode up to 4096 pages using the lower 12 bits
> > + * in an element of gva_list
> > + */
> > +#define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)
> > +
> > +static unsigned int fill_gva_list(uint64_t *gva_list, const void *va,
> > +                                  unsigned int order)
> > +{
> > +    unsigned long start = (unsigned long)va;
> > +    unsigned long end = start + (PAGE_SIZE << order) - 1;
> > +    unsigned int n = 0;
> > +
> > +    do {
> > +        unsigned long remain = end - start;
> 
> The calculated value here isn't actually the remaining bytes in the
> range to flush -- it's one less than the remaining bytes in the range
> to flush because of the -1 in the calculation of 'end'.  That difference
> will mess up the comparison below against HV_TLB_FLUSH_UNIT
> in the case that there are exactly 4096 pages remaining to be
> flushed.  It should take the "=" case, but won't.  Also, the
> '-1' in 'remain - 1' in the else clause becomes unneeded, and
> the 'start = end' assignment then propagates the error.
> 
> In the parallel code in Linux, if you follow the call sequence to get to
> fill_gva_list(), the 'end' argument is really the address of the first byte
> of the first page that isn't in the flush range (i.e., one beyond the true
> 'end') and so is a bit misnamed.
> 
> I think the calculation of 'end' should drop the -1, and perhaps 'end'
> should be renamed.

Thanks for the detailed review. Let me fix this.
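
Roughly what I plan to do (untested sketch; 'end' becomes one past the
range as you suggest, and since 'remain' is then a whole number of
pages, '(remain - 1) >> PAGE_SHIFT' evaluates to the count of
additional pages beyond the first):

    unsigned long start = (unsigned long)va;
    unsigned long end = start + (PAGE_SIZE << order);
    unsigned int n = 0;

    do {
        /* 'remain' is now the exact number of bytes left to flush */
        unsigned long remain = end - start;

        gva_list[n] = start & PAGE_MASK;

        if ( remain >= HV_TLB_FLUSH_UNIT )
        {
            /* Lower 12 bits all set: this entry covers 4096 pages */
            gva_list[n] |= ~PAGE_MASK;
            start += HV_TLB_FLUSH_UNIT;
        }
        else if ( remain )
        {
            /* Encode the number of additional pages beyond the first */
            gva_list[n] |= (remain - 1) >> PAGE_SHIFT;
            start = end;
        }

        n++;
    } while ( start < end );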

Wei.

> 
> Michael
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 12:00           ` Roger Pau Monné
@ 2020-02-17 12:08             ` Wei Liu
  2020-02-17 12:13               ` Roger Pau Monné
  0 siblings, 1 reply; 24+ messages in thread
From: Wei Liu @ 2020-02-17 12:08 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Durrant, Paul

On Mon, Feb 17, 2020 at 01:00:54PM +0100, Roger Pau Monné wrote:
> On Mon, Feb 17, 2020 at 11:45:38AM +0000, Wei Liu wrote:
> > On Mon, Feb 17, 2020 at 12:40:31PM +0100, Roger Pau Monné wrote:
> > > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > > -----Original Message-----
> > > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > > Sent: 14 February 2020 13:34
> > > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > > > > > 
> > > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > > > check if prerequisites are met. If they are not met, it returns an error
> > > > > > number to fall back to native flushes.
> > > > > > 
> > > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > > > 
> > > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > > ---
> > > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > > > > >  4 files changed, 63 insertions(+)
> > > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > > > 
> > > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > index 68170109a9..18902c33e9 100644
> > > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > @@ -1 +1,2 @@
> > > > > >  obj-y += hyperv.o
> > > > > > +obj-y += tlb.o
> > > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > > > 
> > > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > > +
> > > > > >  static uint64_t generate_guest_id(void)
> > > > > >  {
> > > > > >      union hv_guest_os_id id = {};
> > > > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > > > >      BUG_ON(!hypercall_msr.enable);
> > > > > > 
> > > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > > > 
> > > > > Shouldn't this have at least a compiler barrier here?
> > > > > 
> > > > 
> > > > OK. I will add a write barrier here.
> > > 
> > > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > > 
> > > Note that map_pages_to_xen already performs atomic writes.
> > 
> > I don't mind making things more explicit though. However unlikely, I
> > may end up putting something in between set_fixmap_x and setting
> > hcall_page_ready, I will need the barrier by then, I may as well put it
> > in now.
> 
> IMO set_fixmap_x should have the necessary barriers (or other
> synchronization methods) so that on return the address is correctly
> mapped across all processors, and that it prevents the compiler from
> moving accesses past it. I would consider a bug of set_fixmap_x
> not having this behavior and requiring callers to do extra work in
> order to ensure this.
> 
> > Ie: something like the snippet below should not require an extra
> barrier IMO:
> 
> set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> *((unsigned int *)fix_x_to_virt(FIX_X_HYPERV_HCALL)) = 0;

That's different though. Compiler can't make the connection between
hcall_page_ready and the address returned by set_fixmap_x.

Wei.

> 
> Roger.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 12:01         ` Durrant, Paul
@ 2020-02-17 12:08           ` Roger Pau Monné
  2020-02-17 12:21             ` Durrant, Paul
  0 siblings, 1 reply; 24+ messages in thread
From: Roger Pau Monné @ 2020-02-17 12:08 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List

On Mon, Feb 17, 2020 at 12:01:23PM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Roger Pau Monné <roger.pau@citrix.com>
> > Sent: 17 February 2020 11:41
> > To: Wei Liu <wl@xen.org>
> > Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Xen Development List <xen-
> > devel@lists.xenproject.org>; Michael Kelley <mikelley@microsoft.com>; Wei
> > Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > <andrew.cooper3@citrix.com>
> > Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > 
> > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > -----Original Message-----
> > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > Sent: 14 February 2020 13:34
> > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB
> > flush
> > > > >
> > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > > check if prerequisites are met. If they are not met, it returns an
> > error
> > > > > number to fall back to native flushes.
> > > > >
> > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > >
> > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > ---
> > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41
> > +++++++++++++++++++++++++++++
> > > > >  4 files changed, 63 insertions(+)
> > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > >
> > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > index 68170109a9..18902c33e9 100644
> > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > @@ -1 +1,2 @@
> > > > >  obj-y += hyperv.o
> > > > > +obj-y += tlb.o
> > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > >
> > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > +
> > > > >  static uint64_t generate_guest_id(void)
> > > > >  {
> > > > >      union hv_guest_os_id id = {};
> > > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > > >      BUG_ON(!hypercall_msr.enable);
> > > > >
> > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > >
> > > > Shouldn't this have at least a compiler barrier here?
> > > >
> > >
> > > OK. I will add a write barrier here.
> > 
> > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > 
> 
> Not really, for the purpose I had in mind. The hv_hcall_page_ready global is specific to this code and we need to make sure the page is actually ready before the code says it is.

But anything that modifies the page tables should already have a
barrier if required in order to prevent accesses from being moved
ahead of it, or else things would certainly go wrong in many other
places?

Roger.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 12:08             ` Wei Liu
@ 2020-02-17 12:13               ` Roger Pau Monné
  2020-02-17 12:38                 ` Wei Liu
  0 siblings, 1 reply; 24+ messages in thread
From: Roger Pau Monné @ 2020-02-17 12:13 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Durrant, Paul

On Mon, Feb 17, 2020 at 12:08:01PM +0000, Wei Liu wrote:
> On Mon, Feb 17, 2020 at 01:00:54PM +0100, Roger Pau Monné wrote:
> > On Mon, Feb 17, 2020 at 11:45:38AM +0000, Wei Liu wrote:
> > > On Mon, Feb 17, 2020 at 12:40:31PM +0100, Roger Pau Monné wrote:
> > > > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > > > -----Original Message-----
> > > > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > > > Sent: 14 February 2020 13:34
> > > > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > > > > > > 
> > > > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > > > > check if prerequisites are met. If they are not met, it returns an error
> > > > > > > number to fall back to native flushes.
> > > > > > > 
> > > > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > > > > 
> > > > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > > > ---
> > > > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > > > > > >  4 files changed, 63 insertions(+)
> > > > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > > > > 
> > > > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > index 68170109a9..18902c33e9 100644
> > > > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > @@ -1 +1,2 @@
> > > > > > >  obj-y += hyperv.o
> > > > > > > +obj-y += tlb.o
> > > > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > > > > 
> > > > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > > > +
> > > > > > >  static uint64_t generate_guest_id(void)
> > > > > > >  {
> > > > > > >      union hv_guest_os_id id = {};
> > > > > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > > > > >      BUG_ON(!hypercall_msr.enable);
> > > > > > > 
> > > > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > > > > 
> > > > > > Shouldn't this have at least a compiler barrier here?
> > > > > > 
> > > > > 
> > > > > OK. I will add a write barrier here.
> > > > 
> > > > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > > > 
> > > > Note that map_pages_to_xen already performs atomic writes.
> > > 
> > > I don't mind making things more explicit though. However unlikely, I
> > > may end up putting something in between set_fixmap_x and setting
> > > hcall_page_ready, I will need the barrier by then, I may as well put it
> > > in now.
> > 
> > IMO set_fixmap_x should have the necessary barriers (or other
> > synchronization methods) so that on return the address is correctly
> > mapped across all processors, and that it prevents the compiler from
> > moving accesses past it. I would consider a bug of set_fixmap_x
> > not having this behavior and requiring callers to do extra work in
> > order to ensure this.
> > 
> > > Ie: something like the snippet below should not require an extra
> > barrier IMO:
> > 
> > set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > *((unsigned int *)fix_x_to_virt(FIX_X_HYPERV_HCALL)) = 0;
> 
> That's different though. Compiler can't make the connection between
> hcall_page_ready and the address returned by set_fixmap_x.

I'm not sure the compiler can make a connection between set_fixmap_x
and fix_x_to_virt either (as fix_x_to_virt is a simple mathematical
operation and FIX_X_HYPERV_HCALL is a constant known at build time).

Roger.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 12:08           ` Roger Pau Monné
@ 2020-02-17 12:21             ` Durrant, Paul
  2020-02-17 12:48               ` Wei Liu
  0 siblings, 1 reply; 24+ messages in thread
From: Durrant, Paul @ 2020-02-17 12:21 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 17 February 2020 12:08
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Wei Liu <wl@xen.org>; Xen Development List <xen-
> devel@lists.xenproject.org>; Michael Kelley <mikelley@microsoft.com>; Wei
> Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>
> Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> 
> On Mon, Feb 17, 2020 at 12:01:23PM +0000, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > Sent: 17 February 2020 11:41
> > > To: Wei Liu <wl@xen.org>
> > > Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Xen Development List <xen-
> > > devel@lists.xenproject.org>; Michael Kelley <mikelley@microsoft.com>;
> Wei
> > > Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew
> Cooper
> > > <andrew.cooper3@citrix.com>
> > > Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB
> flush
> > >
> > > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > > -----Original Message-----
> > > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > > Sent: 14 February 2020 13:34
> > > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>
> > > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB
> > > flush
> > > > > >
> > > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs
> to
> > > > > > check if prerequisites are met. If they are not met, it returns
> an
> > > error
> > > > > > number to fall back to native flushes.
> > > > > >
> > > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > > >
> > > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > > ---
> > > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41
> > > +++++++++++++++++++++++++++++
> > > > > >  4 files changed, 63 insertions(+)
> > > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > > >
> > > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > index 68170109a9..18902c33e9 100644
> > > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > @@ -1 +1,2 @@
> > > > > >  obj-y += hyperv.o
> > > > > > +obj-y += tlb.o
> > > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *,
> hv_input_page);
> > > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > > >
> > > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > > +
> > > > > >  static uint64_t generate_guest_id(void)
> > > > > >  {
> > > > > >      union hv_guest_os_id id = {};
> > > > > > @@ -119,6 +121,8 @@ static void __init
> setup_hypercall_page(void)
> > > > > >      BUG_ON(!hypercall_msr.enable);
> > > > > >
> > > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > > >
> > > > > Shouldn't this have at least a compiler barrier here?
> > > > >
> > > >
> > > > OK. I will add a write barrier here.
> > >
> > > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > >
> >
> > Not really, for the purpose I had in mind. The hv_hcall_page_ready
> global is specific to this code and we need to make sure the page is
> actually ready before the code says it is.
> 
> But anything that modifies the page tables should already have a
> barrier if required in order to prevent accesses from being moved
> ahead of it, or else things would certainly go wrong in many other
> places?

Oh. I'm not saying that we don't need a barrier there too (and more than a compiler one in that case).

  Paul

> 
> Roger.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 12:13               ` Roger Pau Monné
@ 2020-02-17 12:38                 ` Wei Liu
  0 siblings, 0 replies; 24+ messages in thread
From: Wei Liu @ 2020-02-17 12:38 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Durrant, Paul

On Mon, Feb 17, 2020 at 01:13:28PM +0100, Roger Pau Monné wrote:
> On Mon, Feb 17, 2020 at 12:08:01PM +0000, Wei Liu wrote:
> > On Mon, Feb 17, 2020 at 01:00:54PM +0100, Roger Pau Monné wrote:
> > > On Mon, Feb 17, 2020 at 11:45:38AM +0000, Wei Liu wrote:
> > > > On Mon, Feb 17, 2020 at 12:40:31PM +0100, Roger Pau Monné wrote:
> > > > > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > > > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > > > > -----Original Message-----
> > > > > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > > > > Sent: 14 February 2020 13:34
> > > > > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > > > > > > > 
> > > > > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > > > > > check if prerequisites are met. If they are not met, it returns an error
> > > > > > > > number to fall back to native flushes.
> > > > > > > > 
> > > > > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > > > > > 
> > > > > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > > > > ---
> > > > > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > > > > > > >  4 files changed, 63 insertions(+)
> > > > > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > > > > > 
> > > > > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > index 68170109a9..18902c33e9 100644
> > > > > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > @@ -1 +1,2 @@
> > > > > > > >  obj-y += hyperv.o
> > > > > > > > +obj-y += tlb.o
> > > > > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > > > > > 
> > > > > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > > > > +
> > > > > > > >  static uint64_t generate_guest_id(void)
> > > > > > > >  {
> > > > > > > >      union hv_guest_os_id id = {};
> > > > > > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > > > > > >      BUG_ON(!hypercall_msr.enable);
> > > > > > > > 
> > > > > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > > > > > 
> > > > > > > Shouldn't this have at least a compiler barrier here?
> > > > > > > 
> > > > > > 
> > > > > > OK. I will add a write barrier here.
> > > > > 
> > > > > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > > > > 
> > > > > Note that map_pages_to_xen already performs atomic writes.
> > > > 
> > > > I don't mind making things more explicit though. However unlikely it
> > > > is, I may end up putting something in between set_fixmap_x and setting
> > > > hcall_page_ready; I will need the barrier then, so I may as well put it
> > > > in now.
> > > 
> > > IMO set_fixmap_x should have the necessary barriers (or other
> > > synchronization methods) so that on return the address is correctly
> > > mapped across all processors, and that it prevents the compiler from
> > > moving accesses past it. I would consider it a bug for set_fixmap_x
> > > not to have this behavior and to require callers to do extra work in
> > > order to ensure this.
> > > 
> > > Ie: something like the snippet below should not require an extra
> > > barrier IMO:
> > > 
> > > set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > *((unsigned int *)fix_x_to_virt(FIX_X_HYPERV_HCALL)) = 0;
> > 
> > That's different though. The compiler can't make the connection
> > between hcall_page_ready and the address returned by set_fixmap_x.
> 
> I'm not sure the compiler can make a connection between set_fixmap_x
> and fix_x_to_virt either (as fix_x_to_virt is a simple mathematical
> operation and FIX_X_HYPERV_HCALL is a constant known at build time).

Oh, I misread your example, sorry.

Wei.

> 
> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 12:21             ` Durrant, Paul
@ 2020-02-17 12:48               ` Wei Liu
  2020-02-17 12:56                 ` Durrant, Paul
  0 siblings, 1 reply; 24+ messages in thread
From: Wei Liu @ 2020-02-17 12:48 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Wei Liu, Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Roger Pau Monné

On Mon, Feb 17, 2020 at 12:21:09PM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Roger Pau Monné <roger.pau@citrix.com>
> > Sent: 17 February 2020 12:08
> > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > Cc: Wei Liu <wl@xen.org>; Xen Development List <xen-
> > devel@lists.xenproject.org>; Michael Kelley <mikelley@microsoft.com>; Wei
> > Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > <andrew.cooper3@citrix.com>
> > Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> > 
> > On Mon, Feb 17, 2020 at 12:01:23PM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > > Sent: 17 February 2020 11:41
> > > > To: Wei Liu <wl@xen.org>
> > > > Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Xen Development List <xen-
> > > > devel@lists.xenproject.org>; Michael Kelley <mikelley@microsoft.com>;
> > Wei
> > > > Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew
> > Cooper
> > > > <andrew.cooper3@citrix.com>
> > > > Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB
> > flush
> > > >
> > > > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > > > -----Original Message-----
> > > > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > > > Sent: 14 February 2020 13:34
> > > > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei Liu
> > > > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné
> > <roger.pau@citrix.com>
> > > > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB
> > > > flush
> > > > > > >
> > > > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > > > > check if prerequisites are met. If they are not met, it returns an error
> > > > > > > number to fall back to native flushes.
> > > > > > >
> > > > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > > > >
> > > > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > > > ---
> > > > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > > > > > >  4 files changed, 63 insertions(+)
> > > > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > > > >
> > > > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > index 68170109a9..18902c33e9 100644
> > > > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > @@ -1 +1,2 @@
> > > > > > >  obj-y += hyperv.o
> > > > > > > +obj-y += tlb.o
> > > > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > > > >
> > > > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > > > +
> > > > > > >  static uint64_t generate_guest_id(void)
> > > > > > >  {
> > > > > > >      union hv_guest_os_id id = {};
> > > > > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > > > > >      BUG_ON(!hypercall_msr.enable);
> > > > > > >
> > > > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > > > >
> > > > > > Shouldn't this have at least a compiler barrier here?
> > > > > >
> > > > >
> > > > > OK. I will add a write barrier here.
> > > >
> > > > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > > >
> > >
> > > Not really, for the purpose I had in mind. The hv_hcall_page_ready
> > > global is specific to this code and we need to make sure the page is
> > > actually ready before the code says it is.
> > 
> > But anything that modifies the page tables should already have a
> > barrier if required in order to prevent accesses from being moved
> > ahead of it, or else things would certainly go wrong in many other
> > places?
> 
> Oh. I'm not saying that we don't need a barrier there too (and more
> than a compiler one in that case).
> 

Roger's argument is that set_fixmap_x already contains strong enough
barriers to prevent hcall_page_ready from being set before the page
table is correctly set up.

Since you asked for it, there must be something on your mind that
prompted this (maybe it is simply because you were bitten by similar
things and want to be extra sure, maybe you think it is harder to grasp
the side effect of set_fixmap_x, maybe something else).

Code is written to be read by humans after all. I would rather be more
explicit / redundant to make humans happy than to save a potential
barrier / some typing in a code path.

Wei.
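
For context, the fallback behaviour described in the quoted commit message
("returns an error number to fall back to native flushes") amounts to a hook
of roughly the following shape; the function name, signature and errno below
are assumptions for illustration, not the actual patch:

    int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
                         unsigned int flags)
    {
        /* Prerequisite: the hypercall page must be set up. */
        if ( !hv_hcall_page_ready )
            return -ENXIO;   /* any non-zero value makes the caller fall back */

        /* ... build and issue the Hyper-V flush hypercall here ... */

        return 0;
    }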

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
  2020-02-17 12:48               ` Wei Liu
@ 2020-02-17 12:56                 ` Durrant, Paul
  0 siblings, 0 replies; 24+ messages in thread
From: Durrant, Paul @ 2020-02-17 12:56 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, Michael Kelley, Jan Beulich,
	Xen Development List, Roger Pau Monné

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 17 February 2020 12:48
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Xen
> Development List <xen-devel@lists.xenproject.org>; Michael Kelley
> <mikelley@microsoft.com>; Wei Liu <liuwe@microsoft.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>
> Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush
> 
> On Mon, Feb 17, 2020 at 12:21:09PM +0000, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > Sent: 17 February 2020 12:08
> > > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > > Cc: Wei Liu <wl@xen.org>; Xen Development List <xen-
> > > devel@lists.xenproject.org>; Michael Kelley <mikelley@microsoft.com>;
> Wei
> > > Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew
> Cooper
> > > <andrew.cooper3@citrix.com>
> > > Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB
> flush
> > >
> > > On Mon, Feb 17, 2020 at 12:01:23PM +0000, Durrant, Paul wrote:
> > > > > -----Original Message-----
> > > > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > > > Sent: 17 February 2020 11:41
> > > > > To: Wei Liu <wl@xen.org>
> > > > > Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Xen Development List
> <xen-
> > > > > devel@lists.xenproject.org>; Michael Kelley
> <mikelley@microsoft.com>;
> > > Wei
> > > > > Liu <liuwe@microsoft.com>; Jan Beulich <jbeulich@suse.com>; Andrew
> > > Cooper
> > > > > <andrew.cooper3@citrix.com>
> > > > > Subject: Re: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted
> TLB
> > > flush
> > > > >
> > > > > On Mon, Feb 17, 2020 at 11:34:41AM +0000, Wei Liu wrote:
> > > > > > On Fri, Feb 14, 2020 at 04:55:44PM +0000, Durrant, Paul wrote:
> > > > > > > > -----Original Message-----
> > > > > > > > From: Wei Liu <wei.liu.xen@gmail.com> On Behalf Of Wei Liu
> > > > > > > > Sent: 14 February 2020 13:34
> > > > > > > > To: Xen Development List <xen-devel@lists.xenproject.org>
> > > > > > > > Cc: Michael Kelley <mikelley@microsoft.com>; Durrant, Paul
> > > > > > > > <pdurrant@amazon.co.uk>; Wei Liu <liuwe@microsoft.com>; Wei
> Liu
> > > > > > > > <wl@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > > > > > > <andrew.cooper3@citrix.com>; Roger Pau Monné
> > > <roger.pau@citrix.com>
> > > > > > > > Subject: [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted
> TLB
> > > > > flush
> > > > > > > >
> > > > > > > > Implement a basic hook for L0 assisted TLB flush. The hook needs to
> > > > > > > > check if prerequisites are met. If they are not met, it returns an error
> > > > > > > > number to fall back to native flushes.
> > > > > > > >
> > > > > > > > Introduce a new variable to indicate if hypercall page is ready.
> > > > > > > >
> > > > > > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > > > > > ---
> > > > > > > >  xen/arch/x86/guest/hyperv/Makefile  |  1 +
> > > > > > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
> > > > > > > >  xen/arch/x86/guest/hyperv/private.h |  4 +++
> > > > > > > >  xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
> > > > > > > >  4 files changed, 63 insertions(+)
> > > > > > > >  create mode 100644 xen/arch/x86/guest/hyperv/tlb.c
> > > > > > > >
> > > > > > > > diff --git a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > index 68170109a9..18902c33e9 100644
> > > > > > > > --- a/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > > > > > > > @@ -1 +1,2 @@
> > > > > > > >  obj-y += hyperv.o
> > > > > > > > +obj-y += tlb.o
> > > > > > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > index 70f4cd5ae0..f9d1f11ae3 100644
> > > > > > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > > > > > @@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > > > > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > > > > > >
> > > > > > > > +static bool __read_mostly hv_hcall_page_ready;
> > > > > > > > +
> > > > > > > >  static uint64_t generate_guest_id(void)
> > > > > > > >  {
> > > > > > > >      union hv_guest_os_id id = {};
> > > > > > > > @@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
> > > > > > > >      BUG_ON(!hypercall_msr.enable);
> > > > > > > >
> > > > > > > >      set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> > > > > > >
> > > > > > > Shouldn't this have at least a compiler barrier here?
> > > > > > >
> > > > > >
> > > > > > OK. I will add a write barrier here.
> > > > >
> > > > > Hm, shouldn't such barrier be part of set_fixmap_x itself?
> > > > >
> > > >
> > > > Not really, for the purpose I had in mind. The hv_hcall_page_ready
> > > > global is specific to this code and we need to make sure the page is
> > > > actually ready before the code says it is.
> > >
> > > But anything that modifies the page tables should already have a
> > > barrier if required in order to prevent accesses from being moved
> > > ahead of it, or else things would certainly go wrong in many other
> > > places?
> >
> > Oh. I'm not saying that we don't need a barrier there too (and more
> > than a compiler one in that case).
> >
> 
> Roger's argument is that set_fixmap_x already contains strong enough
> barriers to prevent hcall_page_ready from being set before the page
> table is correctly set up.
> 
> Since you asked for it, there must be something on your mind that
> prompted this (maybe it is simply because you were bitten by similar
> things and want to be extra sure, maybe you think it is harder to grasp
> the side effect of set_fixmap_x, maybe something else).
> 
> Code is written to be read by humans after all. I would rather be more
> explicit / redundant to make humans happy than to save a potential
> barrier / some typing in a code path.

If set_fixmap_x() acts as a barrier, and that is reasonably obvious (from
comments or a cursory examination of the code), then that's fine. I have
indeed been bitten by this kind of thing when writing PV drivers, so I
just wanted to ensure we didn't run into it here.

  Paul

> 
> Wei.
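
The class of bug alluded to here is the usual one with published "ready"
flags: the writer needs a barrier before setting the flag, and the reader
needs a matching one after testing it. A generic sketch, not tied to any
particular Xen or PV driver code:

    struct shared {
        unsigned int data;
        bool ready;
    };

    /* Writer: publish the data, then the flag. */
    static void publish(struct shared *s, unsigned int value)
    {
        s->data = value;
        smp_wmb();          /* order the data store before the flag store */
        s->ready = true;
    }

    /* Reader: test the flag, then pair the barrier before using the data. */
    static bool consume(const struct shared *s, unsigned int *out)
    {
        if ( !s->ready )
            return false;
        smp_rmb();          /* order the flag load before the data load */
        *out = s->data;
        return true;
    }
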

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2020-02-17 12:57 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-14 12:34 [Xen-devel] [PATCH v2 0/3] Xen on Hyper-V: Implement L0 assisted TLB flush Wei Liu
2020-02-14 12:34 ` [Xen-devel] [PATCH v2 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb Wei Liu
2020-02-14 14:44   ` Roger Pau Monné
2020-02-14 16:51   ` Durrant, Paul
2020-02-14 12:34 ` [Xen-devel] [PATCH v2 2/3] x86/hyperv: skeleton for L0 assisted TLB flush Wei Liu
2020-02-14 14:46   ` Roger Pau Monné
2020-02-14 16:55   ` Durrant, Paul
2020-02-17 11:34     ` Wei Liu
2020-02-17 11:40       ` Roger Pau Monné
2020-02-17 11:45         ` Wei Liu
2020-02-17 12:00           ` Roger Pau Monné
2020-02-17 12:08             ` Wei Liu
2020-02-17 12:13               ` Roger Pau Monné
2020-02-17 12:38                 ` Wei Liu
2020-02-17 12:01         ` Durrant, Paul
2020-02-17 12:08           ` Roger Pau Monné
2020-02-17 12:21             ` Durrant, Paul
2020-02-17 12:48               ` Wei Liu
2020-02-17 12:56                 ` Durrant, Paul
2020-02-14 12:34 ` [Xen-devel] [PATCH v2 3/3] x86/hyperv: " Wei Liu
2020-02-14 14:42   ` Roger Pau Monné
2020-02-14 16:03     ` Wei Liu
2020-02-14 16:42   ` Michael Kelley
2020-02-17 12:03     ` Wei Liu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).