* [PATCH V5 0/3] Fix VGA logdirty related display freezes with altp2m
@ 2018-11-11 14:07 Razvan Cojocaru
  2018-11-11 14:07 ` [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms Razvan Cojocaru
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-11 14:07 UTC (permalink / raw)
  To: xen-devel

This series aims to prevent the display from freezing when
enabling altp2m and switching to a new view (and assorted problems
when resizing the display). Since the last version of the series,
what was previously the second (and last) patch has been split
into two patches, the first of which concerns itself only with
rangeset allocation / deallocation / initialization.

The first patch propagates ept.ad changes to all active altp2ms,
the second one allocates and initializes a new logdirty rangeset for
each new altp2m, and the third propagates (under lock) changes to all
p2ms.

[PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms
[PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
[PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms
  2018-11-11 14:07 [PATCH V5 0/3] Fix VGA logdirty related display freezes with altp2m Razvan Cojocaru
@ 2018-11-11 14:07 ` Razvan Cojocaru
  2018-11-14  5:44   ` Tian, Kevin
  2018-11-11 14:07 ` [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms Razvan Cojocaru
  2018-11-11 14:07 ` [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early Razvan Cojocaru
  2 siblings, 1 reply; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-11 14:07 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Razvan Cojocaru,
	George Dunlap, Andrew Cooper, Jan Beulich

This patch is a prerequisite for fixing the logdirty VGA issue
(display freezes when switching to a new altp2m view early in a
domain's lifetime), but is sent separately for easier review.
The new ept_set_ad_sync() function has been added to update all
active altp2ms' ept.ad. New altp2ms will inherit the hostp2m's
ept.ad value.

The p2m_{en,dis}able_hardware_log_dirty() hostp2m locking has
been moved to the new ept_{en,dis}able_hardware_log_dirty()
functions as part of the code refactoring, while locks for the
individual altp2ms are taken in ept_set_ad_sync() (called by
ept_{en,dis}able_pml()).

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Suggested-by: George Dunlap <george.dunlap@citrix.com>
Tested-by: Tamas K Lengyel <tamas@tklengyel.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>

---
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>

---
Changes since V4:
 - Added George's ack.
---
 xen/arch/x86/mm/p2m-ept.c | 57 +++++++++++++++++++++++++++++++++++++++++++----
 xen/arch/x86/mm/p2m.c     |  8 -------
 2 files changed, 53 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 407e299..fabcd06 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -17,6 +17,7 @@
 
 #include <xen/domain_page.h>
 #include <xen/sched.h>
+#include <asm/altp2m.h>
 #include <asm/current.h>
 #include <asm/paging.h>
 #include <asm/types.h>
@@ -1222,6 +1223,34 @@ static void ept_tlb_flush(struct p2m_domain *p2m)
     ept_sync_domain_mask(p2m, p2m->domain->dirty_cpumask);
 }
 
+static void ept_set_ad_sync(struct domain *d, bool value)
+{
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    ASSERT(p2m_locked_by_me(hostp2m));
+
+    hostp2m->ept.ad = value;
+
+    if ( unlikely(altp2m_active(d)) )
+    {
+        unsigned int i;
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+        {
+            struct p2m_domain *p2m;
+
+            if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+                continue;
+
+            p2m = d->arch.altp2m_p2m[i];
+
+            p2m_lock(p2m);
+            p2m->ept.ad = value;
+            p2m_unlock(p2m);
+        }
+    }
+}
+
 static void ept_enable_pml(struct p2m_domain *p2m)
 {
     /* Domain must have been paused */
@@ -1236,7 +1265,7 @@ static void ept_enable_pml(struct p2m_domain *p2m)
         return;
 
     /* Enable EPT A/D bit for PML */
-    p2m->ept.ad = 1;
+    ept_set_ad_sync(p2m->domain, true);
     vmx_domain_update_eptp(p2m->domain);
 }
 
@@ -1248,10 +1277,28 @@ static void ept_disable_pml(struct p2m_domain *p2m)
     vmx_domain_disable_pml(p2m->domain);
 
     /* Disable EPT A/D bit */
-    p2m->ept.ad = 0;
+    ept_set_ad_sync(p2m->domain, false);
     vmx_domain_update_eptp(p2m->domain);
 }
 
+static void ept_enable_hardware_log_dirty(struct p2m_domain *p2m)
+{
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(p2m->domain);
+
+    p2m_lock(hostp2m);
+    ept_enable_pml(hostp2m);
+    p2m_unlock(hostp2m);
+}
+
+static void ept_disable_hardware_log_dirty(struct p2m_domain *p2m)
+{
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(p2m->domain);
+
+    p2m_lock(hostp2m);
+    ept_disable_pml(hostp2m);
+    p2m_unlock(hostp2m);
+}
+
 static void ept_flush_pml_buffers(struct p2m_domain *p2m)
 {
     /* Domain must have been paused */
@@ -1281,8 +1328,8 @@ int ept_p2m_init(struct p2m_domain *p2m)
 
     if ( cpu_has_vmx_pml )
     {
-        p2m->enable_hardware_log_dirty = ept_enable_pml;
-        p2m->disable_hardware_log_dirty = ept_disable_pml;
+        p2m->enable_hardware_log_dirty = ept_enable_hardware_log_dirty;
+        p2m->disable_hardware_log_dirty = ept_disable_hardware_log_dirty;
         p2m->flush_hardware_cached_dirty = ept_flush_pml_buffers;
     }
 
@@ -1390,8 +1437,10 @@ void setup_ept_dump(void)
 void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
 {
     struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
     struct ept_data *ept;
 
+    p2m->ept.ad = hostp2m->ept.ad;
     p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
     p2m->max_remapped_gfn = 0;
     ept = &p2m->ept;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a00a3c1..42b9ef4 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -360,11 +360,7 @@ void p2m_enable_hardware_log_dirty(struct domain *d)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     if ( p2m->enable_hardware_log_dirty )
-    {
-        p2m_lock(p2m);
         p2m->enable_hardware_log_dirty(p2m);
-        p2m_unlock(p2m);
-    }
 }
 
 void p2m_disable_hardware_log_dirty(struct domain *d)
@@ -372,11 +368,7 @@ void p2m_disable_hardware_log_dirty(struct domain *d)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     if ( p2m->disable_hardware_log_dirty )
-    {
-        p2m_lock(p2m);
         p2m->disable_hardware_log_dirty(p2m);
-        p2m_unlock(p2m);
-    }
 }
 
 void p2m_flush_hardware_cached_dirty(struct domain *d)
-- 
2.7.4




* [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-11 14:07 [PATCH V5 0/3] Fix VGA logdirty related display freezes with altp2m Razvan Cojocaru
  2018-11-11 14:07 ` [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms Razvan Cojocaru
@ 2018-11-11 14:07 ` Razvan Cojocaru
  2018-11-13 17:57   ` George Dunlap
  2018-11-11 14:07 ` [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early Razvan Cojocaru
  2 siblings, 1 reply; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-11 14:07 UTC (permalink / raw)
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Razvan Cojocaru

This patch is a pre-requisite for the one fixing VGA logdirty
freezes when using altp2m. It only concerns itself with the
ranges allocation / deallocation / initialization part. While
touching the code, I've switched global_logdirty from bool_t
to bool.

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>

---
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>

---
Changes since V4:
 - Always call p2m_free_logdirty() in p2m_free_one() (previously
   the call was gated on hap_enabled(p2m->domain) && cpu_has_vmx).
---
 xen/arch/x86/mm/p2m.c     | 74 ++++++++++++++++++++++++++++++++++++-----------
 xen/include/asm-x86/p2m.h |  2 +-
 2 files changed, 58 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 42b9ef4..69536c1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -59,6 +59,28 @@ static void p2m_nestedp2m_init(struct p2m_domain *p2m)
 #endif
 }
 
+static int p2m_init_logdirty(struct p2m_domain *p2m)
+{
+    if ( p2m->logdirty_ranges )
+        return 0;
+
+    p2m->logdirty_ranges = rangeset_new(p2m->domain, "log-dirty",
+                                        RANGESETF_prettyprint_hex);
+    if ( !p2m->logdirty_ranges )
+        return -ENOMEM;
+
+    return 0;
+}
+
+static void p2m_free_logdirty(struct p2m_domain *p2m)
+{
+    if ( !p2m->logdirty_ranges )
+        return;
+
+    rangeset_destroy(p2m->logdirty_ranges);
+    p2m->logdirty_ranges = NULL;
+}
+
 /* Init the datastructures for later use by the p2m code */
 static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
@@ -107,6 +129,7 @@ free_p2m:
 
 static void p2m_free_one(struct p2m_domain *p2m)
 {
+    p2m_free_logdirty(p2m);
     if ( hap_enabled(p2m->domain) && cpu_has_vmx )
         ept_p2m_uninit(p2m);
     free_cpumask_var(p2m->dirty_cpumask);
@@ -116,19 +139,19 @@ static void p2m_free_one(struct p2m_domain *p2m)
 static int p2m_init_hostp2m(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_init_one(d);
+    int rc;
 
-    if ( p2m )
-    {
-        p2m->logdirty_ranges = rangeset_new(d, "log-dirty",
-                                            RANGESETF_prettyprint_hex);
-        if ( p2m->logdirty_ranges )
-        {
-            d->arch.p2m = p2m;
-            return 0;
-        }
+    if ( !p2m )
+        return -ENOMEM;
+
+    rc = p2m_init_logdirty(p2m);
+
+    if ( !rc )
+        d->arch.p2m = p2m;
+    else
         p2m_free_one(p2m);
-    }
-    return -ENOMEM;
+
+    return rc;
 }
 
 static void p2m_teardown_hostp2m(struct domain *d)
@@ -138,7 +161,6 @@ static void p2m_teardown_hostp2m(struct domain *d)
 
     if ( p2m )
     {
-        rangeset_destroy(p2m->logdirty_ranges);
         p2m_free_one(p2m);
         d->arch.p2m = NULL;
     }
@@ -2279,6 +2301,18 @@ void p2m_flush_altp2m(struct domain *d)
     altp2m_list_unlock(d);
 }
 
+static int p2m_init_altp2m_logdirty(struct p2m_domain *p2m)
+{
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(p2m->domain);
+    int rc = p2m_init_logdirty(p2m);
+
+    if ( rc )
+        return rc;
+
+    /* The following is really just a rangeset copy. */
+    return rangeset_merge(p2m->logdirty_ranges, hostp2m->logdirty_ranges);
+}
+
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 {
     int rc = -EINVAL;
@@ -2290,8 +2324,9 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 
     if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
     {
-        p2m_init_altp2m_ept(d, idx);
-        rc = 0;
+        rc = p2m_init_altp2m_logdirty(d->arch.altp2m_p2m[idx]);
+        if ( !rc )
+            p2m_init_altp2m_ept(d, idx);
     }
 
     altp2m_list_unlock(d);
@@ -2310,9 +2345,13 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
         if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             continue;
 
-        p2m_init_altp2m_ept(d, i);
-        *idx = i;
-        rc = 0;
+        rc = p2m_init_altp2m_logdirty(d->arch.altp2m_p2m[i]);
+
+        if ( !rc )
+        {
+            p2m_init_altp2m_ept(d, i);
+            *idx = i;
+        }
 
         break;
     }
@@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
         {
             p2m_flush_table(d->arch.altp2m_p2m[idx]);
             /* Uninit and reinit ept to force TLB shootdown */
+            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
             ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
             ept_p2m_init(d->arch.altp2m_p2m[idx]);
             d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ac33f50..c7f5710 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -222,7 +222,7 @@ struct p2m_domain {
     struct rangeset   *logdirty_ranges;
 
     /* Host p2m: Global log-dirty mode enabled for the domain. */
-    bool_t             global_logdirty;
+    bool               global_logdirty;
 
     /* Host p2m: when this flag is set, don't flush all the nested-p2m 
      * tables on every host-p2m change.  The setter of this flag 
-- 
2.7.4




* [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early
  2018-11-11 14:07 [PATCH V5 0/3] Fix VGA logdirty related display freezes with altp2m Razvan Cojocaru
  2018-11-11 14:07 ` [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms Razvan Cojocaru
  2018-11-11 14:07 ` [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms Razvan Cojocaru
@ 2018-11-11 14:07 ` Razvan Cojocaru
  2018-11-14  5:55   ` Tian, Kevin
  2 siblings, 1 reply; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-11 14:07 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Razvan Cojocaru,
	George Dunlap, Andrew Cooper, Jan Beulich

When a new altp2m view is created very early during guest boot,
the display will freeze (although the guest will run normally).
This may also happen when resizing the display. The reason is the
way Xen currently (mis)handles logdirty VGA: it intentionally
misconfigures VGA pages so that they will fault.

The problem is that it only does this in the host p2m. Once we
switch to a new altp2m, the misconfigured entries will no longer
fault, so the display will not be updated.

This patch:
* updates ept_handle_misconfig() to use the active altp2m instead
  of the hostp2m;
* modifies p2m_change_entry_type_global(), p2m_memory_type_changed
  and p2m_change_type_range() to propagate their changes to all
  valid altp2ms.

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Suggested-by: George Dunlap <george.dunlap@citrix.com>

---
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>

---
Changes since V4:
 - Now ASSERT()ing that altp2m is _not_ active in
   p2m_pt_handle_deferred_changes(), with added comment.
 - p2m_memory_type_changed() and p2m_change_type_range() now
   process altp2ms with the hostp2m lock taken.
---
 xen/arch/x86/mm/p2m-ept.c |   8 ++++
 xen/arch/x86/mm/p2m-pt.c  |   8 ++++
 xen/arch/x86/mm/p2m.c     | 115 ++++++++++++++++++++++++++++++++++++++--------
 xen/include/asm-x86/p2m.h |   6 +--
 4 files changed, 114 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index fabcd06..e6fa85f 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -657,6 +657,9 @@ bool_t ept_handle_misconfig(uint64_t gpa)
     bool_t spurious;
     int rc;
 
+    if ( altp2m_active(curr->domain) )
+        p2m = p2m_get_altp2m(curr);
+
     p2m_lock(p2m);
 
     spurious = curr->arch.hvm.vmx.ept_spurious_misconfig;
@@ -1440,6 +1443,11 @@ void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
     struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
     struct ept_data *ept;
 
+    p2m->max_mapped_pfn = hostp2m->max_mapped_pfn;
+    p2m->default_access = hostp2m->default_access;
+    p2m->domain = hostp2m->domain;
+
+    p2m->global_logdirty = hostp2m->global_logdirty;
     p2m->ept.ad = hostp2m->ept.ad;
     p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
     p2m->max_remapped_gfn = 0;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 55df185..3828088 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -29,6 +29,7 @@
 #include <xen/event.h>
 #include <xen/trace.h>
 #include <public/vm_event.h>
+#include <asm/altp2m.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
@@ -464,6 +465,13 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
     struct p2m_domain *p2m = p2m_get_hostp2m(current->domain);
     int rc;
 
+    /*
+     * Should altp2m ever be enabled for NPT / shadow use, this code
+     * should be updated to make use of the active altp2m, like
+     * ept_handle_misconfig().
+     */
+    ASSERT(!altp2m_active(current->domain));
+
     p2m_lock(p2m);
     rc = do_recalc(p2m, PFN_DOWN(gpa));
     p2m_unlock(p2m);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 69536c1..c8561ba 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -277,7 +277,6 @@ int p2m_init(struct domain *d)
 int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
                           unsigned long end)
 {
-    ASSERT(p2m_is_hostp2m(p2m));
     if ( p2m->global_logdirty ||
          rangeset_contains_range(p2m->logdirty_ranges, start, end) )
         return 1;
@@ -286,31 +285,79 @@ int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
     return 0;
 }
 
+static void change_entry_type_global(struct p2m_domain *p2m,
+                                     p2m_type_t ot, p2m_type_t nt)
+{
+    p2m->change_entry_type_global(p2m, ot, nt);
+    p2m->global_logdirty = (nt == p2m_ram_logdirty);
+}
+
 void p2m_change_entry_type_global(struct domain *d,
                                   p2m_type_t ot, p2m_type_t nt)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
 
     ASSERT(ot != nt);
     ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
 
-    p2m_lock(p2m);
-    p2m->change_entry_type_global(p2m, ot, nt);
-    p2m->global_logdirty = (nt == p2m_ram_logdirty);
-    p2m_unlock(p2m);
+    p2m_lock(hostp2m);
+
+    change_entry_type_global(hostp2m, ot, nt);
+
+#ifdef CONFIG_HVM
+    if ( unlikely(altp2m_active(d)) )
+    {
+        unsigned int i;
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            {
+                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
+
+                p2m_lock(altp2m);
+                change_entry_type_global(altp2m, ot, nt);
+                p2m_unlock(altp2m);
+            }
+    }
+#endif
+
+    p2m_unlock(hostp2m);
+}
+
+#ifdef CONFIG_HVM
+/* There's already a memory_type_changed() in asm/mtrr.h. */
+static void _memory_type_changed(struct p2m_domain *p2m)
+{
+    if ( p2m->memory_type_changed )
+        p2m->memory_type_changed(p2m);
 }
 
 void p2m_memory_type_changed(struct domain *d)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
 
-    if ( p2m->memory_type_changed )
+    p2m_lock(hostp2m);
+
+    _memory_type_changed(hostp2m);
+
+    if ( unlikely(altp2m_active(d)) )
     {
-        p2m_lock(p2m);
-        p2m->memory_type_changed(p2m);
-        p2m_unlock(p2m);
+        unsigned int i;
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            {
+                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
+
+                p2m_lock(altp2m);
+                _memory_type_changed(altp2m);
+                p2m_unlock(altp2m);
+            }
     }
+
+    p2m_unlock(hostp2m);
 }
+#endif
 
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
@@ -992,18 +1039,14 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn_l,
 }
 
 /* Modify the p2m type of a range of gfns from ot to nt. */
-void p2m_change_type_range(struct domain *d, 
-                           unsigned long start, unsigned long end,
-                           p2m_type_t ot, p2m_type_t nt)
+static void change_type_range(struct p2m_domain *p2m,
+                              unsigned long start, unsigned long end,
+                              p2m_type_t ot, p2m_type_t nt)
 {
     unsigned long gfn = start;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct domain *d = p2m->domain;
     int rc = 0;
 
-    ASSERT(ot != nt);
-    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
-
-    p2m_lock(p2m);
     p2m->defer_nested_flush = 1;
 
     if ( unlikely(end > p2m->max_mapped_pfn) )
@@ -1047,7 +1090,39 @@ void p2m_change_type_range(struct domain *d,
     p2m->defer_nested_flush = 0;
     if ( nestedhvm_enabled(d) )
         p2m_flush_nestedp2m(d);
-    p2m_unlock(p2m);
+}
+
+void p2m_change_type_range(struct domain *d,
+                           unsigned long start, unsigned long end,
+                           p2m_type_t ot, p2m_type_t nt)
+{
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    ASSERT(ot != nt);
+    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
+
+    p2m_lock(hostp2m);
+
+    change_type_range(hostp2m, start, end, ot, nt);
+
+#ifdef CONFIG_HVM
+    if ( unlikely(altp2m_active(d)) )
+    {
+        unsigned int i;
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            {
+                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
+
+                p2m_lock(altp2m);
+                change_type_range(altp2m, start, end, ot, nt);
+                p2m_unlock(altp2m);
+            }
+    }
+#endif
+
+    p2m_unlock(hostp2m);
 }
 
 /*
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index c7f5710..be5b7a2 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -630,9 +630,6 @@ int p2m_finish_type_change(struct domain *d,
                            gfn_t first_gfn,
                            unsigned long max_nr);
 
-/* Report a change affecting memory types. */
-void p2m_memory_type_changed(struct domain *d);
-
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
@@ -663,6 +660,9 @@ void p2m_pod_dump_data(struct domain *d);
 
 #ifdef CONFIG_HVM
 
+/* Report a change affecting memory types. */
+void p2m_memory_type_changed(struct domain *d);
+
 /* Called by p2m code when demand-populating a PoD page */
 bool
 p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn, unsigned int order);
-- 
2.7.4




* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-11 14:07 ` [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms Razvan Cojocaru
@ 2018-11-13 17:57   ` George Dunlap
  2018-11-13 18:43     ` Razvan Cojocaru
  0 siblings, 1 reply; 19+ messages in thread
From: George Dunlap @ 2018-11-13 17:57 UTC (permalink / raw)
  To: Razvan Cojocaru, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich

On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
> This patch is a pre-requisite for the one fixing VGA logdirty
> freezes when using altp2m. It only concerns itself with the
> ranges allocation / deallocation / initialization part. While
> touching the code, I've switched global_logdirty from bool_t
> to bool.
> 
> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>

I've convinced myself that this patch is probably correct now, and as a
result I've had a chance to look a bit at the resulting code.  Which
means, unfortunately, that I'm going to be a bit annoying and ask more
questions that I didn't ask last time.

> 
> ---
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> 
> ---
> Changes since V4:
>  - Always call p2m_free_logdirty() in p2m_free_one() (previously
>    the call was gated on hap_enabled(p2m->domain) && cpu_has_vmx).
> ---
>  xen/arch/x86/mm/p2m.c     | 74 ++++++++++++++++++++++++++++++++++++-----------
>  xen/include/asm-x86/p2m.h |  2 +-
>  2 files changed, 58 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 42b9ef4..69536c1 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -59,6 +59,28 @@ static void p2m_nestedp2m_init(struct p2m_domain *p2m)
>  #endif
>  }
>  
> +static int p2m_init_logdirty(struct p2m_domain *p2m)
> +{
> +    if ( p2m->logdirty_ranges )
> +        return 0;
> +
> +    p2m->logdirty_ranges = rangeset_new(p2m->domain, "log-dirty",
> +                                        RANGESETF_prettyprint_hex);
> +    if ( !p2m->logdirty_ranges )
> +        return -ENOMEM;
> +
> +    return 0;
> +}
> +
> +static void p2m_free_logdirty(struct p2m_domain *p2m)
> +{
> +    if ( !p2m->logdirty_ranges )
> +        return;
> +
> +    rangeset_destroy(p2m->logdirty_ranges);
> +    p2m->logdirty_ranges = NULL;
> +}
> +
>  /* Init the datastructures for later use by the p2m code */
>  static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
>  {
> @@ -107,6 +129,7 @@ free_p2m:
>  
>  static void p2m_free_one(struct p2m_domain *p2m)
>  {
> +    p2m_free_logdirty(p2m);
>      if ( hap_enabled(p2m->domain) && cpu_has_vmx )
>          ept_p2m_uninit(p2m);
>      free_cpumask_var(p2m->dirty_cpumask);
> @@ -116,19 +139,19 @@ static void p2m_free_one(struct p2m_domain *p2m)
>  static int p2m_init_hostp2m(struct domain *d)
>  {
>      struct p2m_domain *p2m = p2m_init_one(d);
> +    int rc;
>  
> -    if ( p2m )
> -    {
> -        p2m->logdirty_ranges = rangeset_new(d, "log-dirty",
> -                                            RANGESETF_prettyprint_hex);
> -        if ( p2m->logdirty_ranges )
> -        {
> -            d->arch.p2m = p2m;
> -            return 0;
> -        }
> +    if ( !p2m )
> +        return -ENOMEM;
> +
> +    rc = p2m_init_logdirty(p2m);
> +
> +    if ( !rc )
> +        d->arch.p2m = p2m;
> +    else
>          p2m_free_one(p2m);
> -    }
> -    return -ENOMEM;
> +
> +    return rc;
>  }
>  
>  static void p2m_teardown_hostp2m(struct domain *d)
> @@ -138,7 +161,6 @@ static void p2m_teardown_hostp2m(struct domain *d)
>  
>      if ( p2m )
>      {
> -        rangeset_destroy(p2m->logdirty_ranges);
>          p2m_free_one(p2m);
>          d->arch.p2m = NULL;
>      }
> @@ -2279,6 +2301,18 @@ void p2m_flush_altp2m(struct domain *d)
>      altp2m_list_unlock(d);
>  }

I think everything above here could usefully be in its own patch; it
would make it easier to verify that there were no functional changes in
the refactoring.

> +static int p2m_init_altp2m_logdirty(struct p2m_domain *p2m)
> +{
> +    struct p2m_domain *hostp2m = p2m_get_hostp2m(p2m->domain);
> +    int rc = p2m_init_logdirty(p2m);
> +
> +    if ( rc )
> +        return rc;
> +
> +    /* The following is really just a rangeset copy. */
> +    return rangeset_merge(p2m->logdirty_ranges, hostp2m->logdirty_ranges);
> +}
> +
>  int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>  {
>      int rc = -EINVAL;
> @@ -2290,8 +2324,9 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>  
>      if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
>      {
> -        p2m_init_altp2m_ept(d, idx);
> -        rc = 0;
> +        rc = p2m_init_altp2m_logdirty(d->arch.altp2m_p2m[idx]);
> +        if ( !rc )
> +            p2m_init_altp2m_ept(d, idx);
>      }
>  
>      altp2m_list_unlock(d);
> @@ -2310,9 +2345,13 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
>          if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
>              continue;
>  
> -        p2m_init_altp2m_ept(d, i);
> -        *idx = i;
> -        rc = 0;
> +        rc = p2m_init_altp2m_logdirty(d->arch.altp2m_p2m[i]);
> +
> +        if ( !rc )
> +        {
> +            p2m_init_altp2m_ept(d, i);
> +            *idx = i;
> +        }

It looks like there's a 1-1 correspondence between
p2m_init_altp2m_logdirty() succeeding and calling p2m_init_altp2m_ept().
Would it make sense to combine them into the same function, maybe named
"p2m_activate_altp2m()" or something (to distinguish it from
p2m_init_altp2m())?

> @@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>          {
>              p2m_flush_table(d->arch.altp2m_p2m[idx]);
>              /* Uninit and reinit ept to force TLB shootdown */
> +            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
>              d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);

(In case I forget: also, this is called without holding the appropriate
p2m lock.)

I'm a bit suspicious of long strings of these sorts of functions in the
middle of another function.  It turns out that there are three copies of
this sequence of function calls (p2m_flush_table -> ept_p2m_uninit ->
ept_p2m_init):

* Here (p2m_destroy_altp2m_id), when the user asks for the alt2m index
to be destroyed

* In p2m_flush_altp2m(), which is called when altp2m is disabled for a
domain

* In p2m_reset_altp2m(), which is called when an entry in the hostp2m is
set to INVALID_MFN.

Presumably in p2m_reset_altp2m() we don't want to call
p2m_free_logdirty(), as the altp2m is still active and we want to keep
the logdirty ranges around.  But in p2m_flush_altp2m(), I'm pretty sure
we do want to discard them: when altp2m is enabled again,
p2m_init_logdirty() will return early, leaving the old rangesets in
place; if the hostp2m rangesets have changed between the time altp2m was
disabled and enabled again, the rangeset_merge() may have incorrect results.

At the moment we essentially have two "init" states:
* After domain creation: altp2m structures allocated, but no rangesets, etc.
* After being enabled for the first time: rangesets mirroring hostp2m,
p2m_init_altp2m_ept() initialization done

Is there any particular reason we allocate the p2m structures on domain
creation, but not logdirty range structures?  It seems like allocating
altp2m structures on-demand, rather than at domain creation time, might
make a lot of the reasoning here simpler.

 -George



* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-13 17:57   ` George Dunlap
@ 2018-11-13 18:43     ` Razvan Cojocaru
  2018-11-13 18:57       ` Razvan Cojocaru
  2018-11-14 11:58       ` George Dunlap
  0 siblings, 2 replies; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-13 18:43 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich

On 11/13/18 7:57 PM, George Dunlap wrote:
> On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
>> This patch is a pre-requisite for the one fixing VGA logdirty
>> freezes when using altp2m. It only concerns itself with the
>> ranges allocation / deallocation / initialization part. While
>> touching the code, I've switched global_logdirty from bool_t
>> to bool.
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> 
> I've convinced myself that this patch is probably correct now, and as a
> result I've had a chance to look a bit at the resulting code.  Which
> means, unfortunately, that I'm going to be a bit annoying and ask more
> questions that I didn't ask last time.

Thanks for the review, and please ask away. :)

>> ---
>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>>
>> ---
>> Changes since V4:
>>  - Always call p2m_free_logdirty() in p2m_free_one() (previously
>>    the call was gated on hap_enabled(p2m->domain) && cpu_has_vmx).
>> ---
>>  xen/arch/x86/mm/p2m.c     | 74 ++++++++++++++++++++++++++++++++++++-----------
>>  xen/include/asm-x86/p2m.h |  2 +-
>>  2 files changed, 58 insertions(+), 18 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index 42b9ef4..69536c1 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -59,6 +59,28 @@ static void p2m_nestedp2m_init(struct p2m_domain *p2m)
>>  #endif
>>  }
>>  
>> +static int p2m_init_logdirty(struct p2m_domain *p2m)
>> +{
>> +    if ( p2m->logdirty_ranges )
>> +        return 0;
>> +
>> +    p2m->logdirty_ranges = rangeset_new(p2m->domain, "log-dirty",
>> +                                        RANGESETF_prettyprint_hex);
>> +    if ( !p2m->logdirty_ranges )
>> +        return -ENOMEM;
>> +
>> +    return 0;
>> +}
>> +
>> +static void p2m_free_logdirty(struct p2m_domain *p2m)
>> +{
>> +    if ( !p2m->logdirty_ranges )
>> +        return;
>> +
>> +    rangeset_destroy(p2m->logdirty_ranges);
>> +    p2m->logdirty_ranges = NULL;
>> +}
>> +
>>  /* Init the datastructures for later use by the p2m code */
>>  static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
>>  {
>> @@ -107,6 +129,7 @@ free_p2m:
>>  
>>  static void p2m_free_one(struct p2m_domain *p2m)
>>  {
>> +    p2m_free_logdirty(p2m);
>>      if ( hap_enabled(p2m->domain) && cpu_has_vmx )
>>          ept_p2m_uninit(p2m);
>>      free_cpumask_var(p2m->dirty_cpumask);
>> @@ -116,19 +139,19 @@ static void p2m_free_one(struct p2m_domain *p2m)
>>  static int p2m_init_hostp2m(struct domain *d)
>>  {
>>      struct p2m_domain *p2m = p2m_init_one(d);
>> +    int rc;
>>  
>> -    if ( p2m )
>> -    {
>> -        p2m->logdirty_ranges = rangeset_new(d, "log-dirty",
>> -                                            RANGESETF_prettyprint_hex);
>> -        if ( p2m->logdirty_ranges )
>> -        {
>> -            d->arch.p2m = p2m;
>> -            return 0;
>> -        }
>> +    if ( !p2m )
>> +        return -ENOMEM;
>> +
>> +    rc = p2m_init_logdirty(p2m);
>> +
>> +    if ( !rc )
>> +        d->arch.p2m = p2m;
>> +    else
>>          p2m_free_one(p2m);
>> -    }
>> -    return -ENOMEM;
>> +
>> +    return rc;
>>  }
>>  
>>  static void p2m_teardown_hostp2m(struct domain *d)
>> @@ -138,7 +161,6 @@ static void p2m_teardown_hostp2m(struct domain *d)
>>  
>>      if ( p2m )
>>      {
>> -        rangeset_destroy(p2m->logdirty_ranges);
>>          p2m_free_one(p2m);
>>          d->arch.p2m = NULL;
>>      }
>> @@ -2279,6 +2301,18 @@ void p2m_flush_altp2m(struct domain *d)
>>      altp2m_list_unlock(d);
>>  }
> 
> I think everything above here could usefully be in its own patch; it
> would make it easier to verify that there were no functional changes in
> the refactoring.

Right, I'll split this patch then.

>> +static int p2m_init_altp2m_logdirty(struct p2m_domain *p2m)
>> +{
>> +    struct p2m_domain *hostp2m = p2m_get_hostp2m(p2m->domain);
>> +    int rc = p2m_init_logdirty(p2m);
>> +
>> +    if ( rc )
>> +        return rc;
>> +
>> +    /* The following is really just a rangeset copy. */
>> +    return rangeset_merge(p2m->logdirty_ranges, hostp2m->logdirty_ranges);
>> +}
>> +
>>  int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>>  {
>>      int rc = -EINVAL;
>> @@ -2290,8 +2324,9 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>>  
>>      if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
>>      {
>> -        p2m_init_altp2m_ept(d, idx);
>> -        rc = 0;
>> +        rc = p2m_init_altp2m_logdirty(d->arch.altp2m_p2m[idx]);
>> +        if ( !rc )
>> +            p2m_init_altp2m_ept(d, idx);
>>      }
>>  
>>      altp2m_list_unlock(d);
>> @@ -2310,9 +2345,13 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
>>          if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
>>              continue;
>>  
>> -        p2m_init_altp2m_ept(d, i);
>> -        *idx = i;
>> -        rc = 0;
>> +        rc = p2m_init_altp2m_logdirty(d->arch.altp2m_p2m[i]);
>> +
>> +        if ( !rc )
>> +        {
>> +            p2m_init_altp2m_ept(d, i);
>> +            *idx = i;
>> +        }
> 
> It looks like there's a 1-1 correspondence between
> p2m_init_altp2m_logdirty() succeeding and calling p2m_init_altp2m_ept().
>  Would it make sense to combine them into the same function, maybe named
> "p2m_activate_altp2m()" or something (to distinguish it from
> p2m_init_altp2m())?

Of course, that'll cut back on some redundancy. Always a good thing.

>> @@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>          {
>>              p2m_flush_table(d->arch.altp2m_p2m[idx]);
>>              /* Uninit and reinit ept to force TLB shootdown */
>> +            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
>>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
>>              d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
> 
> (In case I forget: Also, this is called without holding the appropriate
> p2m lock. )

Could you please provide more details? I had assumed that by the time
p2m_destroy_altp2m_by_id() is called it is safe to tear the altp2m
down without further precautions.

I think you're saying that I should p2m_lock(d->arch.altp2m_p2m[idx])
just for the duration of the p2m_free_logdirty() call?

> I'm a bit suspicious of long strings of these sorts of functions in the
> middle of another function.  It turns out that there are three copies of
> this sequence of function calls (p2m_flush_table -> ept_p2m_uninit ->
> ept_p2m_init):
> 
> * Here (p2m_destroy_altp2m_by_id), when the user asks for the altp2m
> index to be destroyed
> 
> * In p2m_flush_altp2m(), which is called when altp2m is disabled for a
> domain
> 
> * In p2m_reset_altp2m(), which is called when an entry in the hostp2m is
> set to INVALID_MFN.
> 
> Presumably in p2m_reset_altp2m() we don't want to call
> p2m_free_logdirty(), as the altp2m is still active and we want to keep
> the logdirty ranges around.  But in p2m_flush_altp2m(), I'm pretty sure
> we do want to discard them: when altp2m is enabled again,
> p2m_init_logdirty() will return early, leaving the old rangesets in
> place; if the hostp2m rangesets have changed between the time altp2m was
> disabled and enabled again, the rangeset_merge() may have incorrect results.

I'll call p2m_free_logdirty() in p2m_flush_altp2m() as well.

> At the moment we essentially have two "init" states:
> * After domain creation; altp2m structures allocated, but no rangesets, & c
> * After being enabled for the first time: rangesets mirroring hostp2m,
> p2m_init_altp2m_ept() initialization done
> 
> Is there any particular reason we allocate the p2m structures on domain
> creation, but not logdirty range structures?  It seems like allocating
> altp2m structures on-demand, rather than at domain creation time, might
> make a lot of the reasoning here simpler.

I assume that this question is not addressed to me, since I'm not able
to answer it - I can only assume that reduced heap usage was preferred.


Thanks,
Razvan


* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-13 18:43     ` Razvan Cojocaru
@ 2018-11-13 18:57       ` Razvan Cojocaru
  2018-11-14  8:11         ` Jan Beulich
  2018-11-14 11:58       ` George Dunlap
  1 sibling, 1 reply; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-13 18:57 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich

On 11/13/18 8:43 PM, Razvan Cojocaru wrote:
> On 11/13/18 7:57 PM, George Dunlap wrote:
>> On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
>> At the moment we essentially have two "init" states:
>> * After domain creation; altp2m structures allocated, but no rangesets, & c
>> * After being enabled for the first time: rangesets mirroring hostp2m,
>> p2m_init_altp2m_ept() initialization done
>>
>> Is there any particular reason we allocate the p2m structures on domain
>> creation, but not logdirty range structures?  It seems like allocating
>> altp2m structures on-demand, rather than at domain creation time, might
>> make a lot of the reasoning here simpler.
> 
> I assume that this question is not addressed to me, since I'm not able
> to answer it - I can only assume that reduced heap usage was preferred.

Actually I now realize that you're asking why the hostp2m rangeset is
created via paging_domain_init() in arch_domain_create() (so immediately
on domain creation) while I'm allocating the altp2m rangesets on altp2m
init.

I'm doing that to save memory, since we can have MAX_ALTP2M altp2ms
(which is currently 10), and only two active altp2ms - that means that I
would allocate 10 rangesets and only use two. In fact we're currently
only using 2 altp2ms and the hostp2m for our #VE work. That saves the
space required for 8 rangesets. If that's not much, or if you think that
the benefits of allocating them early outweigh the costs we can switch
to allocating them on domain creation, like the hostp2m, and perhaps
always keeping them in sync.


Thanks,
Razvan


* Re: [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms
  2018-11-11 14:07 ` [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms Razvan Cojocaru
@ 2018-11-14  5:44   ` Tian, Kevin
  2018-11-14  7:14     ` Razvan Cojocaru
  0 siblings, 1 reply; 19+ messages in thread
From: Tian, Kevin @ 2018-11-14  5:44 UTC (permalink / raw)
  To: Razvan Cojocaru, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Nakajima, Jun

> From: Razvan Cojocaru [mailto:rcojocaru@bitdefender.com]
> Sent: Sunday, November 11, 2018 10:07 PM
> 
> This patch is a pre-requisite for fixing the logdirty VGA issue
> (display freezes when switching to a new altp2m view early in a
> domain's lifetime), but sent separately for easier review.

it's in the series now, instead of being sent separately.

> The new ept_set_ad_sync() function has been added to update all
> active altp2ms' ept.ad. New altp2ms will inherit the hostp2m's
> ept.ad value.
> 
> The p2m_{en,dis}able_hardware_log_dirty() hostp2m locking has
> been moved to the new ept_{en,dis}able_hardware_log_dirty()
> functions as part of the code refactoring, while locks for the
> individual altp2ms are taken in ept_set_ad_sync() (called by
> ept_{en,dis}able_pml()).
> 
> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Suggested-by: George Dunlap <george.dunlap@citrix.com>
> Tested-by: Tamas K Lengyel <tamas@tklengyel.com>
> Acked-by: George Dunlap <george.dunlap@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early
  2018-11-11 14:07 ` [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early Razvan Cojocaru
@ 2018-11-14  5:55   ` Tian, Kevin
  2018-11-14  7:33     ` Razvan Cojocaru
  0 siblings, 1 reply; 19+ messages in thread
From: Tian, Kevin @ 2018-11-14  5:55 UTC (permalink / raw)
  To: Razvan Cojocaru, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Nakajima, Jun

> From: Razvan Cojocaru [mailto:rcojocaru@bitdefender.com]
> Sent: Sunday, November 11, 2018 10:07 PM
> 
> When a new altp2m view is created very early in guest boot, the
> display will freeze (although the guest will run normally). This
> may also happen on resizing the display. The reason is the way
> Xen currently (mis)handles logdirty VGA: it intentionally
> misconfigures VGA pages so that they will fault.
> 
> The problem is that it only does this in the host p2m. Once we
> switch to a new altp2m, the misconfigured entries will no longer
> fault, so the display will not be updated.
> 
> This patch:
> * updates ept_handle_misconfig() to use the active altp2m instead
>   of the hostp2m;
> * modifies p2m_change_entry_type_global(), p2m_memory_type_changed
>   and p2m_change_type_range() to propagate their changes to all
>   valid altp2ms.
> 
> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Suggested-by: George Dunlap <george.dunlap@citrix.com>
> 
> ---
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> 
> ---
> Changes since V4:
>  - Now ASSERT()ing that altp2m is _not_ active in
>    p2m_pt_handle_deferred_changes(), with added comment.
>  - p2m_memory_type_changed() and p2m_change_type_range() now
>    process altp2ms with the hostp2m lock taken.
> ---
>  xen/arch/x86/mm/p2m-ept.c |   8 ++++
>  xen/arch/x86/mm/p2m-pt.c  |   8 ++++
>  xen/arch/x86/mm/p2m.c     | 115
> ++++++++++++++++++++++++++++++++++++++--------
>  xen/include/asm-x86/p2m.h |   6 +--
>  4 files changed, 114 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index fabcd06..e6fa85f 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -657,6 +657,9 @@ bool_t ept_handle_misconfig(uint64_t gpa)
>      bool_t spurious;
>      int rc;
> 
> +    if ( altp2m_active(curr->domain) )
> +        p2m = p2m_get_altp2m(curr);
> +
>      p2m_lock(p2m);
> 
>      spurious = curr->arch.hvm.vmx.ept_spurious_misconfig;
> @@ -1440,6 +1443,11 @@ void p2m_init_altp2m_ept(struct domain *d,
> unsigned int i)
>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>      struct ept_data *ept;
> 
> +    p2m->max_mapped_pfn = hostp2m->max_mapped_pfn;
> +    p2m->default_access = hostp2m->default_access;
> +    p2m->domain = hostp2m->domain;
> +
> +    p2m->global_logdirty = hostp2m->global_logdirty;
>      p2m->ept.ad = hostp2m->ept.ad;
>      p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
>      p2m->max_remapped_gfn = 0;
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 55df185..3828088 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -29,6 +29,7 @@
>  #include <xen/event.h>
>  #include <xen/trace.h>
>  #include <public/vm_event.h>
> +#include <asm/altp2m.h>
>  #include <asm/domain.h>
>  #include <asm/page.h>
>  #include <asm/paging.h>
> @@ -464,6 +465,13 @@ int p2m_pt_handle_deferred_changes(uint64_t
> gpa)
>      struct p2m_domain *p2m = p2m_get_hostp2m(current->domain);
>      int rc;
> 
> +    /*
> +     * Should altp2m ever be enabled for NPT / shadow use, this code
> +     * should be updated to make use of the active altp2m, like
> +     * ept_handle_misconfig().
> +     */
> +    ASSERT(!altp2m_active(current->domain));
> +
>      p2m_lock(p2m);
>      rc = do_recalc(p2m, PFN_DOWN(gpa));
>      p2m_unlock(p2m);
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 69536c1..c8561ba 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -277,7 +277,6 @@ int p2m_init(struct domain *d)
>  int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
>                            unsigned long end)
>  {
> -    ASSERT(p2m_is_hostp2m(p2m));
>      if ( p2m->global_logdirty ||
>           rangeset_contains_range(p2m->logdirty_ranges, start, end) )
>          return 1;
> @@ -286,31 +285,79 @@ int p2m_is_logdirty_range(struct p2m_domain
> *p2m, unsigned long start,
>      return 0;
>  }
> 
> +static void change_entry_type_global(struct p2m_domain *p2m,
> +                                     p2m_type_t ot, p2m_type_t nt)
> +{
> +    p2m->change_entry_type_global(p2m, ot, nt);
> +    p2m->global_logdirty = (nt == p2m_ram_logdirty);
> +}
> +
>  void p2m_change_entry_type_global(struct domain *d,
>                                    p2m_type_t ot, p2m_type_t nt)
>  {
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> 
>      ASSERT(ot != nt);
>      ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> 
> -    p2m_lock(p2m);
> -    p2m->change_entry_type_global(p2m, ot, nt);
> -    p2m->global_logdirty = (nt == p2m_ram_logdirty);
> -    p2m_unlock(p2m);
> +    p2m_lock(hostp2m);
> +
> +    change_entry_type_global(hostp2m, ot, nt);
> +
> +#ifdef CONFIG_HVM
> +    if ( unlikely(altp2m_active(d)) )
> +    {
> +        unsigned int i;
> +
> +        for ( i = 0; i < MAX_ALTP2M; i++ )
> +            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> +            {
> +                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
> +
> +                p2m_lock(altp2m);
> +                change_entry_type_global(altp2m, ot, nt);
> +                p2m_unlock(altp2m);
> +            }
> +    }
> +#endif
> +
> +    p2m_unlock(hostp2m);
> +}
> +
> +#ifdef CONFIG_HVM
> +/* There's already a memory_type_changed() in asm/mtrr.h. */
> +static void _memory_type_changed(struct p2m_domain *p2m)
> +{
> +    if ( p2m->memory_type_changed )
> +        p2m->memory_type_changed(p2m);
>  }
> 
>  void p2m_memory_type_changed(struct domain *d)

why is the whole of p2m_memory_type_changed() placed under CONFIG_HVM?

>  {
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> 
> -    if ( p2m->memory_type_changed )
> +    p2m_lock(hostp2m);
> +
> +    _memory_type_changed(hostp2m);
> +
> +    if ( unlikely(altp2m_active(d)) )
>      {
> -        p2m_lock(p2m);
> -        p2m->memory_type_changed(p2m);
> -        p2m_unlock(p2m);
> +        unsigned int i;
> +
> +        for ( i = 0; i < MAX_ALTP2M; i++ )
> +            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> +            {
> +                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
> +
> +                p2m_lock(altp2m);
> +                _memory_type_changed(altp2m);
> +                p2m_unlock(altp2m);
> +            }
>      }
> +
> +    p2m_unlock(hostp2m);
>  }
> +#endif
> 
>  int p2m_set_ioreq_server(struct domain *d,
>                           unsigned int flags,
> @@ -992,18 +1039,14 @@ int p2m_change_type_one(struct domain *d,
> unsigned long gfn_l,
>  }
> 
>  /* Modify the p2m type of a range of gfns from ot to nt. */
> -void p2m_change_type_range(struct domain *d,
> -                           unsigned long start, unsigned long end,
> -                           p2m_type_t ot, p2m_type_t nt)
> +static void change_type_range(struct p2m_domain *p2m,
> +                              unsigned long start, unsigned long end,
> +                              p2m_type_t ot, p2m_type_t nt)
>  {
>      unsigned long gfn = start;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct domain *d = p2m->domain;
>      int rc = 0;
> 
> -    ASSERT(ot != nt);
> -    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> -
> -    p2m_lock(p2m);
>      p2m->defer_nested_flush = 1;
> 
>      if ( unlikely(end > p2m->max_mapped_pfn) )
> @@ -1047,7 +1090,39 @@ void p2m_change_type_range(struct domain *d,
>      p2m->defer_nested_flush = 0;
>      if ( nestedhvm_enabled(d) )
>          p2m_flush_nestedp2m(d);
> -    p2m_unlock(p2m);
> +}
> +
> +void p2m_change_type_range(struct domain *d,
> +                           unsigned long start, unsigned long end,
> +                           p2m_type_t ot, p2m_type_t nt)
> +{
> +    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> +
> +    ASSERT(ot != nt);
> +    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> +
> +    p2m_lock(hostp2m);
> +
> +    change_type_range(hostp2m, start, end, ot, nt);
> +
> +#ifdef CONFIG_HVM
> +    if ( unlikely(altp2m_active(d)) )
> +    {
> +        unsigned int i;
> +
> +        for ( i = 0; i < MAX_ALTP2M; i++ )
> +            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> +            {
> +                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
> +
> +                p2m_lock(altp2m);
> +                change_type_range(altp2m, start, end, ot, nt);
> +                p2m_unlock(altp2m);
> +            }
> +    }
> +#endif
> +
> +    p2m_unlock(hostp2m);
>  }
> 
>  /*
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index c7f5710..be5b7a2 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -630,9 +630,6 @@ int p2m_finish_type_change(struct domain *d,
>                             gfn_t first_gfn,
>                             unsigned long max_nr);
> 
> -/* Report a change affecting memory types. */
> -void p2m_memory_type_changed(struct domain *d);
> -
>  int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
>                            unsigned long end);
> 
> @@ -663,6 +660,9 @@ void p2m_pod_dump_data(struct domain *d);
> 
>  #ifdef CONFIG_HVM
> 
> +/* Report a change affecting memory types. */
> +void p2m_memory_type_changed(struct domain *d);
> +
>  /* Called by p2m code when demand-populating a PoD page */
>  bool
>  p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
> unsigned int order);
> --
> 2.7.4



* Re: [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms
  2018-11-14  5:44   ` Tian, Kevin
@ 2018-11-14  7:14     ` Razvan Cojocaru
  0 siblings, 0 replies; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-14  7:14 UTC (permalink / raw)
  To: Tian, Kevin, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Nakajima, Jun, Jan Beulich

On 11/14/18 7:44 AM, Tian, Kevin wrote:
>> From: Razvan Cojocaru [mailto:rcojocaru@bitdefender.com]
>> Sent: Sunday, November 11, 2018 10:07 PM
>>
>> This patch is a pre-requisite for fixing the logdirty VGA issue
>> (display freezes when switching to a new altp2m view early in a
>> domain's lifetime), but sent separately for easier review.
> 
> it's in the series now, instead of sending separately.

Right, sorry for that omission - although with your Reviewed-by it can
now be applied on its own and escape the series. :)

>> The new ept_set_ad_sync() function has been added to update all
>> active altp2ms' ept.ad. New altp2ms will inherit the hostp2m's
>> ept.ad value.
>>
>> The p2m_{en,dis}able_hardware_log_dirty() hostp2m locking has
>> been moved to the new ept_{en,dis}able_hardware_log_dirty()
>> functions as part of the code refactoring, while locks for the
>> individual altp2ms are taken in ept_set_ad_sync() (called by
>> ept_{en,dis}able_pml()).
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Suggested-by: George Dunlap <george.dunlap@citrix.com>
>> Tested-by: Tamas K Lengyel <tamas@tklengyel.com>
>> Acked-by: George Dunlap <george.dunlap@citrix.com>
> 
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>


Thanks,
Razvan


* Re: [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early
  2018-11-14  5:55   ` Tian, Kevin
@ 2018-11-14  7:33     ` Razvan Cojocaru
  2018-11-14  7:35       ` Tian, Kevin
  0 siblings, 1 reply; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-14  7:33 UTC (permalink / raw)
  To: Tian, Kevin, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Nakajima, Jun

On 11/14/18 7:55 AM, Tian, Kevin wrote:
>> From: Razvan Cojocaru [mailto:rcojocaru@bitdefender.com]
>> Sent: Sunday, November 11, 2018 10:07 PM
>>
>> When a new altp2m view is created very early in guest boot, the
>> display will freeze (although the guest will run normally). This
>> may also happen on resizing the display. The reason is the way
>> Xen currently (mis)handles logdirty VGA: it intentionally
>> misconfigures VGA pages so that they will fault.
>>
>> The problem is that it only does this in the host p2m. Once we
>> switch to a new altp2m, the misconfigured entries will no longer
>> fault, so the display will not be updated.
>>
>> This patch:
>> * updates ept_handle_misconfig() to use the active altp2m instead
>>   of the hostp2m;
>> * modifies p2m_change_entry_type_global(), p2m_memory_type_changed
>>   and p2m_change_type_range() to propagate their changes to all
>>   valid altp2ms.
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Suggested-by: George Dunlap <george.dunlap@citrix.com>
>>
>> ---
>> CC: Jun Nakajima <jun.nakajima@intel.com>
>> CC: Kevin Tian <kevin.tian@intel.com>
>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>>
>> ---
>> Changes since V4:
>>  - Now ASSERT()ing that altp2m is _not_ active in
>>    p2m_pt_handle_deferred_changes(), with added comment.
>>  - p2m_memory_type_changed() and p2m_change_type_range() now
>>    process altp2ms with the hostp2m lock taken.
>> ---
>>  xen/arch/x86/mm/p2m-ept.c |   8 ++++
>>  xen/arch/x86/mm/p2m-pt.c  |   8 ++++
>>  xen/arch/x86/mm/p2m.c     | 115
>> ++++++++++++++++++++++++++++++++++++++--------
>>  xen/include/asm-x86/p2m.h |   6 +--
>>  4 files changed, 114 insertions(+), 23 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>> index fabcd06..e6fa85f 100644
>> --- a/xen/arch/x86/mm/p2m-ept.c
>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> @@ -657,6 +657,9 @@ bool_t ept_handle_misconfig(uint64_t gpa)
>>      bool_t spurious;
>>      int rc;
>>
>> +    if ( altp2m_active(curr->domain) )
>> +        p2m = p2m_get_altp2m(curr);
>> +
>>      p2m_lock(p2m);
>>
>>      spurious = curr->arch.hvm.vmx.ept_spurious_misconfig;
>> @@ -1440,6 +1443,11 @@ void p2m_init_altp2m_ept(struct domain *d,
>> unsigned int i)
>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>>      struct ept_data *ept;
>>
>> +    p2m->max_mapped_pfn = hostp2m->max_mapped_pfn;
>> +    p2m->default_access = hostp2m->default_access;
>> +    p2m->domain = hostp2m->domain;
>> +
>> +    p2m->global_logdirty = hostp2m->global_logdirty;
>>      p2m->ept.ad = hostp2m->ept.ad;
>>      p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
>>      p2m->max_remapped_gfn = 0;
>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>> index 55df185..3828088 100644
>> --- a/xen/arch/x86/mm/p2m-pt.c
>> +++ b/xen/arch/x86/mm/p2m-pt.c
>> @@ -29,6 +29,7 @@
>>  #include <xen/event.h>
>>  #include <xen/trace.h>
>>  #include <public/vm_event.h>
>> +#include <asm/altp2m.h>
>>  #include <asm/domain.h>
>>  #include <asm/page.h>
>>  #include <asm/paging.h>
>> @@ -464,6 +465,13 @@ int p2m_pt_handle_deferred_changes(uint64_t
>> gpa)
>>      struct p2m_domain *p2m = p2m_get_hostp2m(current->domain);
>>      int rc;
>>
>> +    /*
>> +     * Should altp2m ever be enabled for NPT / shadow use, this code
>> +     * should be updated to make use of the active altp2m, like
>> +     * ept_handle_misconfig().
>> +     */
>> +    ASSERT(!altp2m_active(current->domain));
>> +
>>      p2m_lock(p2m);
>>      rc = do_recalc(p2m, PFN_DOWN(gpa));
>>      p2m_unlock(p2m);
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index 69536c1..c8561ba 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -277,7 +277,6 @@ int p2m_init(struct domain *d)
>>  int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
>>                            unsigned long end)
>>  {
>> -    ASSERT(p2m_is_hostp2m(p2m));
>>      if ( p2m->global_logdirty ||
>>           rangeset_contains_range(p2m->logdirty_ranges, start, end) )
>>          return 1;
>> @@ -286,31 +285,79 @@ int p2m_is_logdirty_range(struct p2m_domain
>> *p2m, unsigned long start,
>>      return 0;
>>  }
>>
>> +static void change_entry_type_global(struct p2m_domain *p2m,
>> +                                     p2m_type_t ot, p2m_type_t nt)
>> +{
>> +    p2m->change_entry_type_global(p2m, ot, nt);
>> +    p2m->global_logdirty = (nt == p2m_ram_logdirty);
>> +}
>> +
>>  void p2m_change_entry_type_global(struct domain *d,
>>                                    p2m_type_t ot, p2m_type_t nt)
>>  {
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>>
>>      ASSERT(ot != nt);
>>      ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
>>
>> -    p2m_lock(p2m);
>> -    p2m->change_entry_type_global(p2m, ot, nt);
>> -    p2m->global_logdirty = (nt == p2m_ram_logdirty);
>> -    p2m_unlock(p2m);
>> +    p2m_lock(hostp2m);
>> +
>> +    change_entry_type_global(hostp2m, ot, nt);
>> +
>> +#ifdef CONFIG_HVM
>> +    if ( unlikely(altp2m_active(d)) )
>> +    {
>> +        unsigned int i;
>> +
>> +        for ( i = 0; i < MAX_ALTP2M; i++ )
>> +            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
>> +            {
>> +                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
>> +
>> +                p2m_lock(altp2m);
>> +                change_entry_type_global(altp2m, ot, nt);
>> +                p2m_unlock(altp2m);
>> +            }
>> +    }
>> +#endif
>> +
>> +    p2m_unlock(hostp2m);
>> +}
>> +
>> +#ifdef CONFIG_HVM
>> +/* There's already a memory_type_changed() in asm/mtrr.h. */
>> +static void _memory_type_changed(struct p2m_domain *p2m)
>> +{
>> +    if ( p2m->memory_type_changed )
>> +        p2m->memory_type_changed(p2m);
>>  }
>>
>>  void p2m_memory_type_changed(struct domain *d)
> 
> why is the whole of p2m_memory_type_changed() placed under CONFIG_HVM?

It has been requested by Jan and Wei in V3:

https://lists.xenproject.org/archives/html/xen-devel/2018-10/msg02495.html


Thanks,
Razvan


* Re: [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early
  2018-11-14  7:33     ` Razvan Cojocaru
@ 2018-11-14  7:35       ` Tian, Kevin
  2018-11-14  7:43         ` Razvan Cojocaru
  0 siblings, 1 reply; 19+ messages in thread
From: Tian, Kevin @ 2018-11-14  7:35 UTC (permalink / raw)
  To: Razvan Cojocaru, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Nakajima, Jun

> From: Razvan Cojocaru [mailto:rcojocaru@bitdefender.com]
> Sent: Wednesday, November 14, 2018 3:33 PM
> 
> On 11/14/18 7:55 AM, Tian, Kevin wrote:
> >> From: Razvan Cojocaru [mailto:rcojocaru@bitdefender.com]
> >> Sent: Sunday, November 11, 2018 10:07 PM
> >>
> >> When a new altp2m view is created very early on guest boot, the
> >> display will freeze (although the guest will run normally). This
> >> may also happen on resizing the display. The reason is the way
> >> Xen currently (mis)handles logdirty VGA: it intentionally
> >> misconfigures VGA pages so that they will fault.
> >>
> >> The problem is that it only does this in the host p2m. Once we
> >> switch to a new altp2m, the misconfigured entries will no longer
> >> fault, so the display will not be updated.
> >>
> >> This patch:
> >> * updates ept_handle_misconfig() to use the active altp2m instead
> >>   of the hostp2m;
> >> * modifies p2m_change_entry_type_global(),
> p2m_memory_type_changed
> >>   and p2m_change_type_range() to propagate their changes to all
> >>   valid altp2ms.
> >>
> >> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> >> Suggested-by: George Dunlap <george.dunlap@citrix.com>
> >>
> >> ---
> >> CC: Jun Nakajima <jun.nakajima@intel.com>
> >> CC: Kevin Tian <kevin.tian@intel.com>
> >> CC: George Dunlap <george.dunlap@eu.citrix.com>
> >> CC: Jan Beulich <jbeulich@suse.com>
> >> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> >> CC: Wei Liu <wei.liu2@citrix.com>
> >>
> >> ---
> >> Changes since V4:
> >>  - Now ASSERT()ing that altp2m is _not_ active in
> >>    p2m_pt_handle_deferred_changes(), with added comment.
> >>  - p2m_memory_type_changed() and p2m_change_type_range() now
> >>    process altp2ms with the hostp2m lock taken.
> >> ---
> >>  xen/arch/x86/mm/p2m-ept.c |   8 ++++
> >>  xen/arch/x86/mm/p2m-pt.c  |   8 ++++
> >>  xen/arch/x86/mm/p2m.c     | 115
> >> ++++++++++++++++++++++++++++++++++++++--------
> >>  xen/include/asm-x86/p2m.h |   6 +--
> >>  4 files changed, 114 insertions(+), 23 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> >> index fabcd06..e6fa85f 100644
> >> --- a/xen/arch/x86/mm/p2m-ept.c
> >> +++ b/xen/arch/x86/mm/p2m-ept.c
> >> @@ -657,6 +657,9 @@ bool_t ept_handle_misconfig(uint64_t gpa)
> >>      bool_t spurious;
> >>      int rc;
> >>
> >> +    if ( altp2m_active(curr->domain) )
> >> +        p2m = p2m_get_altp2m(curr);
> >> +
> >>      p2m_lock(p2m);
> >>
> >>      spurious = curr->arch.hvm.vmx.ept_spurious_misconfig;
> >> @@ -1440,6 +1443,11 @@ void p2m_init_altp2m_ept(struct domain *d,
> >> unsigned int i)
> >>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> >>      struct ept_data *ept;
> >>
> >> +    p2m->max_mapped_pfn = hostp2m->max_mapped_pfn;
> >> +    p2m->default_access = hostp2m->default_access;
> >> +    p2m->domain = hostp2m->domain;
> >> +
> >> +    p2m->global_logdirty = hostp2m->global_logdirty;
> >>      p2m->ept.ad = hostp2m->ept.ad;
> >>      p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
> >>      p2m->max_remapped_gfn = 0;
> >> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> >> index 55df185..3828088 100644
> >> --- a/xen/arch/x86/mm/p2m-pt.c
> >> +++ b/xen/arch/x86/mm/p2m-pt.c
> >> @@ -29,6 +29,7 @@
> >>  #include <xen/event.h>
> >>  #include <xen/trace.h>
> >>  #include <public/vm_event.h>
> >> +#include <asm/altp2m.h>
> >>  #include <asm/domain.h>
> >>  #include <asm/page.h>
> >>  #include <asm/paging.h>
> >> @@ -464,6 +465,13 @@ int p2m_pt_handle_deferred_changes(uint64_t
> >> gpa)
> >>      struct p2m_domain *p2m = p2m_get_hostp2m(current->domain);
> >>      int rc;
> >>
> >> +    /*
> >> +     * Should altp2m ever be enabled for NPT / shadow use, this code
> >> +     * should be updated to make use of the active altp2m, like
> >> +     * ept_handle_misconfig().
> >> +     */
> >> +    ASSERT(!altp2m_active(current->domain));
> >> +
> >>      p2m_lock(p2m);
> >>      rc = do_recalc(p2m, PFN_DOWN(gpa));
> >>      p2m_unlock(p2m);
> >> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> >> index 69536c1..c8561ba 100644
> >> --- a/xen/arch/x86/mm/p2m.c
> >> +++ b/xen/arch/x86/mm/p2m.c
> >> @@ -277,7 +277,6 @@ int p2m_init(struct domain *d)
> >>  int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long
> start,
> >>                            unsigned long end)
> >>  {
> >> -    ASSERT(p2m_is_hostp2m(p2m));
> >>      if ( p2m->global_logdirty ||
> >>           rangeset_contains_range(p2m->logdirty_ranges, start, end) )
> >>          return 1;
> >> @@ -286,31 +285,79 @@ int p2m_is_logdirty_range(struct
> p2m_domain
> >> *p2m, unsigned long start,
> >>      return 0;
> >>  }
> >>
> >> +static void change_entry_type_global(struct p2m_domain *p2m,
> >> +                                     p2m_type_t ot, p2m_type_t nt)
> >> +{
> >> +    p2m->change_entry_type_global(p2m, ot, nt);
> >> +    p2m->global_logdirty = (nt == p2m_ram_logdirty);
> >> +}
> >> +
> >>  void p2m_change_entry_type_global(struct domain *d,
> >>                                    p2m_type_t ot, p2m_type_t nt)
> >>  {
> >> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> >> +    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> >>
> >>      ASSERT(ot != nt);
> >>      ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> >>
> >> -    p2m_lock(p2m);
> >> -    p2m->change_entry_type_global(p2m, ot, nt);
> >> -    p2m->global_logdirty = (nt == p2m_ram_logdirty);
> >> -    p2m_unlock(p2m);
> >> +    p2m_lock(hostp2m);
> >> +
> >> +    change_entry_type_global(hostp2m, ot, nt);
> >> +
> >> +#ifdef CONFIG_HVM
> >> +    if ( unlikely(altp2m_active(d)) )
> >> +    {
> >> +        unsigned int i;
> >> +
> >> +        for ( i = 0; i < MAX_ALTP2M; i++ )
> >> +            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> >> +            {
> >> +                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
> >> +
> >> +                p2m_lock(altp2m);
> >> +                change_entry_type_global(altp2m, ot, nt);
> >> +                p2m_unlock(altp2m);
> >> +            }
> >> +    }
> >> +#endif
> >> +
> >> +    p2m_unlock(hostp2m);
> >> +}
> >> +
> >> +#ifdef CONFIG_HVM
> >> +/* There's already a memory_type_changed() in asm/mtrr.h. */
> >> +static void _memory_type_changed(struct p2m_domain *p2m)
> >> +{
> >> +    if ( p2m->memory_type_changed )
> >> +        p2m->memory_type_changed(p2m);
> >>  }
> >>
> >>  void p2m_memory_type_changed(struct domain *d)
> >
> > why making whole p2m_memory_type_changed under CONFIG_HVM?
> 
> It has been requested by Jan and Wei in V3:
> 
> https://lists.xenproject.org/archives/html/xen-devel/2018-
> 10/msg02495.html
> 

then it's good to mention it in the patch description.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early
  2018-11-14  7:35       ` Tian, Kevin
@ 2018-11-14  7:43         ` Razvan Cojocaru
  0 siblings, 0 replies; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-14  7:43 UTC (permalink / raw)
  To: Tian, Kevin, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Nakajima, Jun, Jan Beulich

On 11/14/18 9:35 AM, Tian, Kevin wrote:
>>>> +#ifdef CONFIG_HVM
>>>> +/* There's already a memory_type_changed() in asm/mtrr.h. */
>>>> +static void _memory_type_changed(struct p2m_domain *p2m)
>>>> +{
>>>> +    if ( p2m->memory_type_changed )
>>>> +        p2m->memory_type_changed(p2m);
>>>>  }
>>>>
>>>>  void p2m_memory_type_changed(struct domain *d)
>>> why making whole p2m_memory_type_changed under CONFIG_HVM?
>> It has been requested by Jan and Wei in V3:
>>
>> https://lists.xenproject.org/archives/html/xen-devel/2018-
>> 10/msg02495.html
>>
> then it's good to mention it in the patch description.
> 
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

With George's comments yesterday there'll be at least one more iteration
of the series (probably minus the first patch, which seems ready to go
in) - so I'll update the patch description in the next version.


Thanks,
Razvan



* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-13 18:57       ` Razvan Cojocaru
@ 2018-11-14  8:11         ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2018-11-14  8:11 UTC (permalink / raw)
  To: Razvan Cojocaru
  Cc: George Dunlap, Andrew Cooper, Wei Liu, george.dunlap, xen-devel

>>> On 13.11.18 at 19:57, <rcojocaru@bitdefender.com> wrote:
> On 11/13/18 8:43 PM, Razvan Cojocaru wrote:
>> On 11/13/18 7:57 PM, George Dunlap wrote:
>>> Is there any particular reason we allocate the p2m structures on domain
>>> creation, but not logdirty range structures?  It seems like allocating
>>> altp2m structures on-demand, rather than at domain creation time, might
>>> make a lot of the reasoning here simpler.
>> 
>> I assume that this question is not addressed to me, since I'm not able
>> to answer it - I can only assume that having less heap used has been
>> preferred.
> 
> Actually I now realize that you're asking why the hostp2m rangeset is
> created via paging_domain_init() in arch_domain_create() (so immediately
> on domain creation) while I'm allocating the altp2m rangesets on altp2m
> init.
> 
> I'm doing that to save memory, since we can have MAX_ALTP2M altp2ms
> (which is currently 10), and only two active altp2ms - that means that I
> would allocate 10 rangesets and only use two. In fact we're currently
> only using 2 altp2ms and the hostp2m for our #VE work. That saves the
> space required for 8 rangesets. If that's not much, or if you think that
> the benefits of allocating them early outweigh the costs we can switch
> to allocating them on domain creation, like the hostp2m, and perhaps
> always keeping them in sync.

I think George's question had the opposite goal: Rather than just
allocating the rangesets on demand, why don't the entire altp2m
structures get allocated on demand?

Jan





* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-13 18:43     ` Razvan Cojocaru
  2018-11-13 18:57       ` Razvan Cojocaru
@ 2018-11-14 11:58       ` George Dunlap
  2018-11-14 12:50         ` Razvan Cojocaru
  1 sibling, 1 reply; 19+ messages in thread
From: George Dunlap @ 2018-11-14 11:58 UTC (permalink / raw)
  To: Razvan Cojocaru, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich

On 11/13/18 6:43 PM, Razvan Cojocaru wrote:
> On 11/13/18 7:57 PM, George Dunlap wrote:
>> On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
[snip]
>> I think everything above here could usefully be in its own patch; it
>> would make it easier to verify that there were no functional changes in
>> the refactoring.
> 
> Right, I'll split this patch then.

Thanks.

>>> @@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>>          {
>>>              p2m_flush_table(d->arch.altp2m_p2m[idx]);
>>>              /* Uninit and reinit ept to force TLB shootdown */
>>> +            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
>>>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>>>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
>>>              d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>>
>> (In case I forget: Also, this is called without holding the appropriate
>> p2m lock. )
> 
> Could you please provide more details? I have assumed that at the point
> of calling a function called p2m_destroy_altp2m_by_id() it should be
> safe to tear the altp2m down without further precaution.

Are you absolutely positive that at this point there's no way anywhere
else in Xen might be doing something with this p2m struct?

If so, then 1) there should be a comment there explaining why that's the
case, and 2) ideally we should refactor p2m_flush_table such that we can
call what's now p2m_flush_table_locked() without the lock.

> I think you're saying that I should p2m_lock(d->arch.altp2m_p2m[idx])
> just for the duration of the p2m_free_logdirty() call?

The same argument really goes for ept_p2m_uninit/init -- uninit actually
frees a data structure; if anyone else may be using that, you'll run
into a use-after-free bug.  (Although that really needs to be changed as
well -- freeing and re-allocating a structure just to set all the bits
is ridiculous.)

If we need locking, then I'd grab the p2m lock before p2m_flush_table()
(calling p2m_flush_table_locked() instead), and release it after the
ept_p2m_init().

I realize you didn't write this code, and so I'm not holding you
responsible for all the changes I mentioned above.  But if we're going
to add the p2m_free_logdirty() call, we do need to either grab the lock
or add a comment explaining why it's not necessary; we might as well fix
it properly at the same time.

p2m_flush_table() already grabs and releases the lock; so grabbing the
lock over all four calls won't add any more overhead (or risk of
deadlock) than what we already have.
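The locking structure suggested above can be sketched with a toy model; this is an illustration with stubbed types, not the actual Xen code, and it assumes a `p2m_flush_table_locked()` helper that does not yet exist as such at this point in the thread. Each step asserts that the caller already holds the lock:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for the Xen types/functions under discussion. */
struct p2m { bool locked; int flushed, uninit, init; };

static void p2m_lock(struct p2m *p)   { assert(!p->locked); p->locked = true; }
static void p2m_unlock(struct p2m *p) { assert(p->locked);  p->locked = false; }

/*
 * Each step asserts the lock is held, mirroring the proposal to call
 * p2m_flush_table_locked() (hypothetical name) instead of the
 * lock-taking p2m_flush_table().
 */
static void p2m_flush_table_locked(struct p2m *p) { assert(p->locked); p->flushed++; }
static void p2m_free_logdirty(struct p2m *p)      { assert(p->locked); }
static void ept_p2m_uninit(struct p2m *p)         { assert(p->locked); p->uninit++; }
static void ept_p2m_init(struct p2m *p)           { assert(p->locked); p->init++; }

/* The destroy path with the lock held across all four calls. */
static void destroy_altp2m(struct p2m *p)
{
    p2m_lock(p);
    p2m_flush_table_locked(p);
    p2m_free_logdirty(p);
    ept_p2m_uninit(p);
    ept_p2m_init(p);
    p2m_unlock(p);
}
```

Holding the single lock across the whole sequence costs nothing extra over the status quo, since p2m_flush_table() already takes and releases the same lock internally.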

>> I'm a bit suspicious of long strings of these sorts of functions in the
>> middle of another function.  It turns out that there are three copies of
>> this sequence of function calls (p2m_flush_table -> ept_p2m_uninit ->
>> ept_p2m_init):
>>
>> * Here (p2m_destroy_altp2m_id), when the user asks for the alt2m index
>> to be destroyed
>>
>> * In p2m_flush_altp2m(), which is called when altp2m is disabled for a
>> domain
>>
>> * In p2m_reset_altp2m(), which is called when an entry in the hostp2m is
>> set to INVALID_MFN.
>>
>> Presumably in p2m_reset_altp2m() we don't want to call
>> p2m_free_logdirty(), as the altp2m is still active and we want to keep
>> the logdirty ranges around.  But in p2m_flush_altp2m(), I'm pretty sure
>> we do want to discard them: when altp2m is enabled again,
>> p2m_init_logdirty() will return early, leaving the old rangesets in
>> place; if the hostp2m rangesets have changed between the time altp2m was
>> disabled and enabled again, the rangeset_merge() may have incorrect results.
> 
> I'll call p2m_free_logdirty() in p2m_flush_altp2m() as well.

I was more thinking of refactoring those two to share the same code, and
potentially having p2m_reset_altp2m() share the same code as well.  The
reason you missed the p2m_flush_altp2m() there was because of the code
duplication.

Or alternately...

>>> Is there any particular reason we allocate the p2m structures on domain
>>> creation, but not logdirty range structures?  It seems like allocating
>>> altp2m structures on-demand, rather than at domain creation time, might
>>> make a lot of the reasoning here simpler.
>>
>> I assume that this question is not addressed to me, since I'm not able
>> to answer it - I can only assume that having less heap used has been
>> preferred.

I'm asking you because you've recently been going through this code, and
probably have at least as much familiarity with it as I do.  I can't
immediately see any reason to allocate them at domain creation time.
Maybe you can and maybe you can't, but I won't know until I ask. :-)

> Actually I now realize that you're asking why the hostp2m rangeset is
> created via paging_domain_init() in arch_domain_create() (so immediately
> on domain creation) while I'm allocating the altp2m rangesets on altp2m
> init.
>
> I'm doing that to save memory, since we can have MAX_ALTP2M altp2ms
> (which is currently 10), and only two active altp2ms - that means that I
> would allocate 10 rangesets and only use two. In fact we're currently
> only using 2 altp2ms and the hostp2m for our #VE work. That saves the
> space required for 8 rangesets. If that's not much, or if you think that
> the benefits of allocating them early outweigh the costs we can switch
> to allocating them on domain creation, like the hostp2m, and perhaps
> always keeping them in sync.

On the contrary, I was thinking of leaving the altp2m_p2m[N] NULL until
it becomes used; and at that point allocating both the p2m structure and
the logdirty rangesets; and when deactivating altp2m_p2m[N], freeing
both the logdirty rangesets and the p2m structure.

One advantage of that is that we'd reduce the amount of memory used --
not just for you, but for the vast majority of people who aren't using
the altp2m functionality; the other advantage is that it simplifies the
disable / enable logic: Everything that needs to be done is done in one
place, rather than half in one place and half in another.
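The on-demand scheme described above — allocating the p2m structure and its logdirty rangesets together on first use, and freeing both together on deactivation — can be modelled roughly as follows. All names and types here are illustrative stand-ins, not Xen's actual API:

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_ALTP2M 10

/*
 * Toy model: the p2m and its logdirty rangeset are allocated together
 * on first use and freed together on disable, so neither can ever be
 * observed half-initialized.
 */
struct rangeset { int dummy; };
struct p2m { struct rangeset *logdirty; };

static struct p2m *altp2m[MAX_ALTP2M]; /* all NULL until demanded */

static struct p2m *altp2m_get_or_create(unsigned int idx)
{
    if (idx >= MAX_ALTP2M)
        return NULL;
    if (!altp2m[idx]) {
        struct p2m *p = calloc(1, sizeof(*p));

        if (!p)
            return NULL;
        p->logdirty = calloc(1, sizeof(*p->logdirty));
        if (!p->logdirty) {
            free(p);
            return NULL;
        }
        altp2m[idx] = p;
    }
    return altp2m[idx];
}

static void altp2m_destroy(unsigned int idx)
{
    if (idx < MAX_ALTP2M && altp2m[idx]) {
        free(altp2m[idx]->logdirty);
        free(altp2m[idx]);
        altp2m[idx] = NULL;
    }
}
```

Besides saving memory for domains that never use altp2m, this pairing also removes the stale-rangeset hazard mentioned earlier: a disabled view's ranges are gone, so re-enabling always starts from a fresh copy of the hostp2m's ranges.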

I don't necessarily expect you to do that refactoring, but as you're
familiar with the code, and have the most investment in its future, it
makes sense to discuss the possibilities with you. :-)

 -George



* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-14 11:58       ` George Dunlap
@ 2018-11-14 12:50         ` Razvan Cojocaru
  2018-11-14 14:00           ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-14 12:50 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich

On 11/14/18 1:58 PM, George Dunlap wrote:
> On 11/13/18 6:43 PM, Razvan Cojocaru wrote:
>> On 11/13/18 7:57 PM, George Dunlap wrote:
>>> On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
>>>> @@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>>>          {
>>>>              p2m_flush_table(d->arch.altp2m_p2m[idx]);
>>>>              /* Uninit and reinit ept to force TLB shootdown */
>>>> +            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
>>>>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>>>>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
>>>>              d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>>>
>>> (In case I forget: Also, this is called without holding the appropriate
>>> p2m lock. )
>>
>> Could you please provide more details? I have assumed that at the point
>> of calling a function called p2m_destroy_altp2m_by_id() it should be
>> safe to tear the altp2m down without further precaution.
> 
> Are you absolutely positive that at this point there's no way anywhere
> else in Xen might be doing something with this p2m struct?
> 
> If so, then 1) there should be a comment there explaining why that's the
> case, and 2) ideally we should refactor p2m_flush_table such that we can
> call what's now p2m_flush_table_locked() without the lock.

AFAICT the only place p2m_destroy_altp2m_by_id() is ever called is in
arch/x86/hvm/hvm.c's do_altp2m_op() (on HVMOP_altp2m_destroy_p2m), which
is done under the domain lock. Is that insufficient?

>> I think you're saying that I should p2m_lock(d->arch.altp2m_p2m[idx])
>> just for the duration of the p2m_free_logdirty() call?
> 
> The same argument really goes for ept_p2m_uninit/init -- uninit actually
> frees a data structure; if anyone else may be using that, you'll run
> into a use-after-free bug.  (Although that really needs to be changed as
> well -- freeing and re-allocating a structure just to set all the bits
> is ridiculous.)
> 
> If we need locking, then I'd grab the p2m lock before p2m_flush_table()
> (calling p2m_flush_table_locked() instead), and release it after the
> ept_p2m_init().
> 
> I realize you didn't write this code, and so I'm not holding you
> responsible for all the changes I mentioned above.  But if we're going
> to add the p2m_free_logdirty() call, we do need to either grab the lock
> or add a comment explaining why it's not necessary; we might as well fix
> it properly at the same time.
> 
> p2m_flush_table() already grabs and releases the lock; so grabbing the
> lock over all four calls won't add any more overhead (or risk of
> deadlock) than what we already have.

Of course, I'll use p2m_flush_table_locked().

>>> I'm a bit suspicious of long strings of these sorts of functions in the
>>> middle of another function.  It turns out that there are three copies of
>>> this sequence of function calls (p2m_flush_table -> ept_p2m_uninit ->
>>> ept_p2m_init):
>>>
>>> * Here (p2m_destroy_altp2m_id), when the user asks for the alt2m index
>>> to be destroyed
>>>
>>> * In p2m_flush_altp2m(), which is called when altp2m is disabled for a
>>> domain
>>>
>>> * In p2m_reset_altp2m(), which is called when an entry in the hostp2m is
>>> set to INVALID_MFN.
>>>
>>> Presumably in p2m_reset_altp2m() we don't want to call
>>> p2m_free_logdirty(), as the altp2m is still active and we want to keep
>>> the logdirty ranges around.  But in p2m_flush_altp2m(), I'm pretty sure
>>> we do want to discard them: when altp2m is enabled again,
>>> p2m_init_logdirty() will return early, leaving the old rangesets in
>>> place; if the hostp2m rangesets have changed between the time altp2m was
>>> disabled and enabled again, the rangeset_merge() may have incorrect results.
>>
>> I'll call p2m_free_logdirty() in p2m_flush_altp2m() as well.
> 
> I was more thinking of refactoring those two to share the same code, and
> potentially having p2m_reset_altp2m() share the same code as well.  The
> reason you missed the p2m_flush_altp2m() there was because of the code
> duplication.

Right, I'll do my best to refactor that then. TBH I'm not a big fan of
that extra verbosity either, but thought the least code churn would be
good for reviewing.

> Or alternately...
> 
>>>> Is there any particular reason we allocate the p2m structures on domain
>>>> creation, but not logdirty range structures?  It seems like allocating
>>>> altp2m structures on-demand, rather than at domain creation time, might
>>>> make a lot of the reasoning here simpler.
>>>
>>> I assume that this question is not addressed to me, since I'm not able
>>> to answer it - I can only assume that having less heap used has been
>>> preferred.
> 
> I'm asking you because you've recently been going through this code, and
> probably have at least as much familiarity with it as I do.  I can't
> immediately see any reason to allocate them at domain creation time.
> Maybe you can and maybe you can't, but I won't know until I ask. :-)

I've looked at the code closer today, and there's no reason as far as I
can tell why we shouldn't allocate altp2ms on-demand. However, changing
the code is somewhat involved at this point, since there's a lot of:

2357     if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
2358     {
2359         p2m = d->arch.altp2m_p2m[idx];
2360
2361         if ( !_atomic_read(p2m->active_vcpus) )
2362         {
2363             p2m_flush_table(d->arch.altp2m_p2m[idx]);
2364             /* Uninit and reinit ept to force TLB shootdown */
2365             ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
2366             ept_p2m_init(d->arch.altp2m_p2m[idx]);
2367             d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
2368             rc = 0;
2369         }
2370     }

going on. That is, code checking that d->arch.altp2m_eptp[idx] !=
mfn_x(INVALID_MFN), and then blindly assuming that p2m will not be NULL
and is usable.

>> Actually I now realize that you're asking why the hostp2m rangeset is
>> created via paging_domain_init() in arch_domain_create() (so immediately
>> on domain creation) while I'm allocating the altp2m rangesets on altp2m
>> init.
>>
>> I'm doing that to save memory, since we can have MAX_ALTP2M altp2ms
>> (which is currently 10), and only two active altp2ms - that means that I
>> would allocate 10 rangesets and only use two. In fact we're currently
>> only using 2 altp2ms and the hostp2m for our #VE work. That saves the
>> space required for 8 rangesets. If that's not much, or if you think that
>> the benefits of allocating them early outweigh the costs we can switch
>> to allocating them on domain creation, like the hostp2m, and perhaps
>> always keeping them in sync.
> 
> On the contrary, I was thinking of leaving the altp2m_p2m[N] NULL until
> it becomes used; and at that point allocating both the p2m structure and
> the logdirty rangesets; and when deactivating altp2m_p2m[N], freeing
> both the logdirty rangesets and the p2m structure.
> 
> One advantage of that is that we'd reduce the amount of memory used --
> not just for you, but for the vast majority of people who aren't using
> the altp2m functionality; the other advantage is that it simplifies the
> disable / enable logic: Everything that needs to be done is done in one
> place, rather than half in one place and half in another.
> 
> I don't necessarily expect you to do that refactoring, but as you're
> familiar with the code, and have the most investment in its future, it
> makes sense to discuss the possibilities with you. :-)

I agree that that's a valid optimization, and it looks worth doing.
However, the huge priority now is to get the display working since this
is completely crippling altp2m use (so quite urgent, both for Tamas and
us) - so in the interest of getting things to work I propose to, for the
time being, get this series in as soon as acceptable (that is, with the
current altp2m allocation strategy), and we'll come back later for the
allocation optimizations.

Does that sound reasonable?


Thank you,
Razvan



* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-14 12:50         ` Razvan Cojocaru
@ 2018-11-14 14:00           ` Jan Beulich
  2018-11-14 14:05             ` Razvan Cojocaru
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2018-11-14 14:00 UTC (permalink / raw)
  To: Razvan Cojocaru
  Cc: George Dunlap, Andrew Cooper, Wei Liu, george.dunlap, xen-devel

>>> On 14.11.18 at 13:50, <rcojocaru@bitdefender.com> wrote:
> On 11/14/18 1:58 PM, George Dunlap wrote:
>> On 11/13/18 6:43 PM, Razvan Cojocaru wrote:
>>> On 11/13/18 7:57 PM, George Dunlap wrote:
>>>> On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
>>>>> @@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>>>>          {
>>>>>              p2m_flush_table(d->arch.altp2m_p2m[idx]);
>>>>>              /* Uninit and reinit ept to force TLB shootdown */
>>>>> +            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
>>>>>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>>>>>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
>>>>>              d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>>>>
>>>> (In case I forget: Also, this is called without holding the appropriate
>>>> p2m lock. )
>>>
>>> Could you please provide more details? I have assumed that at the point
>>> of calling a function called p2m_destroy_altp2m_by_id() it should be
>>> safe to tear the altp2m down without further precaution.
>> 
>> Are you absolutely positive that at this point there's no way anywhere
>> else in Xen might be doing something with this p2m struct?
>> 
>> If so, then 1) there should be a comment there explaining why that's the
>> case, and 2) ideally we should refactor p2m_flush_table such that we can
>> call what's now p2m_flush_table_locked() without the lock.
> 
> AFAICT the only place p2m_destroy_altp2m_by_id() is ever called is in
> arch/x86/hvm/hvm.c's do_altp2m_op() (on HVMOP_altp2m_destroy_p2m), which
> is done under domain lock. Is that insufficient?

Holding the domain lock does not imply that nothing can happen to the
domain elsewhere. Only if both parties hold the _same_ lock is there
a guarantee of serialization between them.

>>>>> Is there any particular reason we allocate the p2m structures on domain
>>>>> creation, but not logdirty range structures?  It seems like allocating
>>>>> altp2m structures on-demand, rather than at domain creation time, might
>>>>> make a lot of the reasoning here simpler.
>>>>
>>>> I assume that this question is not addressed to me, since I'm not able
>>>> to answer it - I can only assume that having less heap used has been
>>>> preferred.
>> 
>> I'm asking you because you've recently been going through this code, and
>> probably have at least as much familiarity with it as I do.  I can't
>> immediately see any reason to allocate them at domain creation time.
>> Maybe you can and maybe you can't, but I won't know until I ask. :-)
> 
> I've looked at the code closer today, and there's no reason as far as I
> can tell why we shouldn't allocate altp2ms on-demand. However, changing
> the code is somewhat involved at this point, since there's a lot of:
> 
> 2357     if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
> 2358     {
> 2359         p2m = d->arch.altp2m_p2m[idx];
> 2360
> 2361         if ( !_atomic_read(p2m->active_vcpus) )
> 2362         {
> 2363             p2m_flush_table(d->arch.altp2m_p2m[idx]);
> 2364             /* Uninit and reinit ept to force TLB shootdown */
> 2365             ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
> 2366             ept_p2m_init(d->arch.altp2m_p2m[idx]);
> 2367             d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
> 2368             rc = 0;
> 2369         }
> 2370     }
> 
> going on. That is, code checking that d->arch.altp2m_eptp[idx] !=
> mfn_x(INVALID_MFN), and then blindly assuming that p2m will not be NULL
> and is usable.

Wouldn't the implication of George's proposal be that
d->arch.altp2m_eptp[] slots get demand-populated, too?

Jan





* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-14 14:00           ` Jan Beulich
@ 2018-11-14 14:05             ` Razvan Cojocaru
  2018-11-14 15:05               ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Razvan Cojocaru @ 2018-11-14 14:05 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, Andrew Cooper, Wei Liu, george.dunlap, xen-devel

On 11/14/18 4:00 PM, Jan Beulich wrote:
>>>> On 14.11.18 at 13:50, <rcojocaru@bitdefender.com> wrote:
>> On 11/14/18 1:58 PM, George Dunlap wrote:
>>> On 11/13/18 6:43 PM, Razvan Cojocaru wrote:
>>>> On 11/13/18 7:57 PM, George Dunlap wrote:
>>>>> On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
>>>>>> @@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>>>>>          {
>>>>>>              p2m_flush_table(d->arch.altp2m_p2m[idx]);
>>>>>>              /* Uninit and reinit ept to force TLB shootdown */
>>>>>> +            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
>>>>>>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>>>>>>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
>>>>>>              d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>>>>>
>>>>> (In case I forget: Also, this is called without holding the appropriate
>>>>> p2m lock. )
>>>>
>>>> Could you please provide more details? I have assumed that at the point
>>>> of calling a function called p2m_destroy_altp2m_by_id() it should be
>>>> safe to tear the altp2m down without further precaution.
>>>
>>> Are you absolutely positive that at this point there's no way anywhere
>>> else in Xen might be doing something with this p2m struct?
>>>
>>> If so, then 1) there should be a comment there explaining why that's the
>>> case, and 2) ideally we should refactor p2m_flush_table such that we can
>>> call what's now p2m_flush_table_locked() without the lock.
>>
>> AFAICT the only place p2m_destroy_altp2m_by_id() is ever called is in
>> arch/x86/hvm/hvm.c's do_altp2m_op() (on HVMOP_altp2m_destroy_p2m), which
>> is done under domain lock. Is that insufficient?
> 
> Holding the domain lock does not imply that nothing can happen to the
> domain elsewhere. Only if both parties hold the _same_ lock is there
> a guarantee of serialization between them.

Right, I was under the impression that for the duration of an HVMOP (or
DOMCTL) nothing moves in the domain.

In that case, we do need the locking as George has suggested.

>>>>>> Is there any particular reason we allocate the p2m structures on domain
>>>>>> creation, but not logdirty range structures?  It seems like allocating
>>>>>> altp2m structures on-demand, rather than at domain creation time, might
>>>>>> make a lot of the reasoning here simpler.
>>>>>
>>>>> I assume that this question is not addressed to me, since I'm not able
>>>>> to answer it - I can only assume that having less heap used has been
>>>>> preferred.
>>>
>>> I'm asking you because you've recently been going through this code, and
>>> probably have at least as much familiarity with it as I do.  I can't
>>> immediately see any reason to allocate them at domain creation time.
>>> Maybe you can and maybe you can't, but I won't know until I ask. :-)
>>
>> I've looked at the code closer today, and there's no reason as far as I
>> can tell why we shouldn't allocate altp2ms on-demand. However, changing
>> the code is somewhat involved at this point, since there's a lot of:
>>
>>     if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
>>     {
>>         p2m = d->arch.altp2m_p2m[idx];
>>
>>         if ( !_atomic_read(p2m->active_vcpus) )
>>         {
>>             p2m_flush_table(d->arch.altp2m_p2m[idx]);
>>             /* Uninit and reinit ept to force TLB shootdown */
>>             ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>>             ept_p2m_init(d->arch.altp2m_p2m[idx]);
>>             d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>>             rc = 0;
>>         }
>>     }
>>
>> going on. That is, code checking that d->arch.altp2m_eptp[idx] !=
>> mfn_x(INVALID_MFN), and then blindly assuming that p2m will not be NULL
>> and is usable.
> 
> Wouldn't the implication of George's proposal be that
> d->arch.altp2m_eptp[] slots get demand-populated, too?

Of course, but we must still make sure that this really does happen in
all (corner) cases, and that the two never get out of sync (with some
function setting d->arch.altp2m_p2m[idx] to NULL while leaving
d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN), for example).

My point was just that, IMHO, this requires quite a bit of testing to
make sure we get it right. We should do that, but it's also very
important to get the display problem fixed.


Thanks,
Razvan


* Re: [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms
  2018-11-14 14:05             ` Razvan Cojocaru
@ 2018-11-14 15:05               ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2018-11-14 15:05 UTC (permalink / raw)
  To: Razvan Cojocaru
  Cc: George Dunlap, Andrew Cooper, Wei Liu, george.dunlap, xen-devel

>>> On 14.11.18 at 15:05, <rcojocaru@bitdefender.com> wrote:
> On 11/14/18 4:00 PM, Jan Beulich wrote:
>>>>> On 14.11.18 at 13:50, <rcojocaru@bitdefender.com> wrote:
>>> On 11/14/18 1:58 PM, George Dunlap wrote:
>>>> On 11/13/18 6:43 PM, Razvan Cojocaru wrote:
>>>>> On 11/13/18 7:57 PM, George Dunlap wrote:
>>>>>> On 11/11/18 2:07 PM, Razvan Cojocaru wrote:
>>>>>>> @@ -2341,6 +2380,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>>>>>>          {
>>>>>>>              p2m_flush_table(d->arch.altp2m_p2m[idx]);
>>>>>>>              /* Uninit and reinit ept to force TLB shootdown */
>>>>>>> +            p2m_free_logdirty(d->arch.altp2m_p2m[idx]);
>>>>>>>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>>>>>>>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
>>>>>>>              d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>>>>>>
>>>>>> (In case I forget: Also, this is called without holding the appropriate
>>>>>> p2m lock. )
>>>>>
>>>>> Could you please provide more details? I have assumed that at the point
>>>>> of calling a function called p2m_destroy_altp2m_by_id() it should be
>>>>> safe to tear the altp2m down without further precaution.
>>>>
>>>> Are you absolutely positive that at this point there's no way anywhere
>>>> else in Xen might be doing something with this p2m struct?
>>>>
>>>> If so, then 1) there should be a comment there explaining why that's the
>>>> case, and 2) ideally we should refactor p2m_flush_table such that we can
>>>> call what's now p2m_flush_table_locked() without the lock.
>>>
>>> AFAICT the only place p2m_destroy_altp2m_by_id() is ever called is in
>>> arch/x86/hvm/hvm.c's do_altp2m_op() (on HVMOP_altp2m_destroy_p2m), which
>>> is done under domain lock. Is that insufficient?
>> 
>> Holding the domain lock does not imply that nothing can happen to the
>> domain elsewhere. Only if both parties hold the _same_ lock is there
>> a guarantee of serialization between them.
> 
> Right, I was under the impression that for the duration of an HVMOP (or
> DOMCTL) nothing moves in the domain.

Well, if you need such behavior, you need to pause the domain (as
various domctl-s actually do).

Jan




end of thread, other threads:[~2018-11-14 15:05 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-11 14:07 [PATCH V5 0/3] Fix VGA logdirty related display freezes with altp2m Razvan Cojocaru
2018-11-11 14:07 ` [PATCH V5 1/3] x86/altp2m: propagate ept.ad changes to all active altp2ms Razvan Cojocaru
2018-11-14  5:44   ` Tian, Kevin
2018-11-14  7:14     ` Razvan Cojocaru
2018-11-11 14:07 ` [PATCH V5 2/3] x86/mm: allocate logdirty_ranges for altp2ms Razvan Cojocaru
2018-11-13 17:57   ` George Dunlap
2018-11-13 18:43     ` Razvan Cojocaru
2018-11-13 18:57       ` Razvan Cojocaru
2018-11-14  8:11         ` Jan Beulich
2018-11-14 11:58       ` George Dunlap
2018-11-14 12:50         ` Razvan Cojocaru
2018-11-14 14:00           ` Jan Beulich
2018-11-14 14:05             ` Razvan Cojocaru
2018-11-14 15:05               ` Jan Beulich
2018-11-11 14:07 ` [PATCH V5 3/3] x86/altp2m: fix display frozen when switching to a new view early Razvan Cojocaru
2018-11-14  5:55   ` Tian, Kevin
2018-11-14  7:33     ` Razvan Cojocaru
2018-11-14  7:35       ` Tian, Kevin
2018-11-14  7:43         ` Razvan Cojocaru
