* [PATCH 0/6] XSA-248...251 follow-up
@ 2017-12-12 14:53 Jan Beulich
  2017-12-12 15:04 ` [PATCH 1/6] x86/shadow: drop further 32-bit relics Jan Beulich
                   ` (7 more replies)
  0 siblings, 8 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-12 14:53 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Tim Deegan

The parts of this series aren't really dependent upon one another;
they belong together solely because of their origin.

1: x86/shadow: drop further 32-bit relics
2: x86/shadow: remove pointless loops over all vCPU-s
3: x86/shadow: ignore sh_pin() failure in one more case
4: x86/shadow: widen reference count
5: x86/mm: clean up SHARED_M2P{,_ENTRY} uses
6: x86: use paging_mark_pfn_dirty()

Signed-off-by: Jan Beulich <jbeulich@suse.com>



* [PATCH 1/6] x86/shadow: drop further 32-bit relics
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
@ 2017-12-12 15:04 ` Jan Beulich
  2017-12-20  8:03   ` Tim Deegan
  2017-12-12 15:05 ` [PATCH 2/6] x86/shadow: remove pointless loops over all vCPU-s Jan Beulich
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2017-12-12 15:04 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Tim Deegan

PV guests don't ever get shadowed in other than 4-level mode anymore;
commit 5a3ce8f85e ("x86/shadow: drop stray name tags from
sh_{guest_get,map}_eff_l1e()") didn't go quite far enough (and there's
a good chance that further cleanup opportunity exists, which I simply
didn't notice).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I'm not sure if all the ASSERT()s are really useful to have.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3774,8 +3774,7 @@ sh_update_linear_entries(struct vcpu *v)
 
 #elif SHADOW_PAGING_LEVELS == 3
 
-    /* PV: XXX
-     *
+    /*
      * HVM: To give ourselves a linear map of the  shadows, we need to
      * extend a PAE shadow to 4 levels.  We do this by  having a monitor
      * l3 in slot 0 of the monitor l4 table, and  copying the PAE l3
@@ -3784,7 +3783,7 @@ sh_update_linear_entries(struct vcpu *v)
      * the shadows.
      */
 
-    if ( shadow_mode_external(d) )
+    ASSERT(shadow_mode_external(d));
     {
         /* Install copies of the shadow l3es into the monitor l2 table
          * that maps SH_LINEAR_PT_VIRT_START. */
@@ -3830,8 +3829,6 @@ sh_update_linear_entries(struct vcpu *v)
         if ( v != current )
             unmap_domain_page(ml2e);
     }
-    else
-        domain_crash(d); /* XXX */
 
 #else
 #error this should not happen
@@ -4060,12 +4057,9 @@ sh_update_cr3(struct vcpu *v, int do_loc
       * until the next CR3 write makes us refresh our cache. */
      ASSERT(v->arch.paging.shadow.guest_vtable == NULL);
 
-     if ( shadow_mode_external(d) )
-         /* Find where in the page the l3 table is */
-         guest_idx = guest_index((void *)v->arch.hvm_vcpu.guest_cr[3]);
-     else
-         /* PV guest: l3 is at the start of a page */
-         guest_idx = 0;
+     ASSERT(shadow_mode_external(d));
+     /* Find where in the page the l3 table is */
+     guest_idx = guest_index((void *)v->arch.hvm_vcpu.guest_cr[3]);
 
      // Ignore the low 2 bits of guest_idx -- they are really just
      // cache control.
@@ -4076,17 +4070,13 @@ sh_update_cr3(struct vcpu *v, int do_loc
          v->arch.paging.shadow.gl3e[i] = gl3e[i];
      unmap_domain_page(gl3e);
 #elif GUEST_PAGING_LEVELS == 2
-    if ( shadow_mode_external(d) || shadow_mode_translate(d) )
-    {
-        if ( v->arch.paging.shadow.guest_vtable )
-            unmap_domain_page_global(v->arch.paging.shadow.guest_vtable);
-        v->arch.paging.shadow.guest_vtable = map_domain_page_global(gmfn);
-        /* Does this really need map_domain_page_global?  Handle the
-         * error properly if so. */
-        BUG_ON(v->arch.paging.shadow.guest_vtable == NULL); /* XXX */
-    }
-    else
-        v->arch.paging.shadow.guest_vtable = __linear_l2_table;
+    ASSERT(shadow_mode_external(d));
+    if ( v->arch.paging.shadow.guest_vtable )
+        unmap_domain_page_global(v->arch.paging.shadow.guest_vtable);
+    v->arch.paging.shadow.guest_vtable = map_domain_page_global(gmfn);
+    /* Does this really need map_domain_page_global?  Handle the
+     * error properly if so. */
+    BUG_ON(v->arch.paging.shadow.guest_vtable == NULL); /* XXX */
 #else
 #error this should never happen
 #endif
@@ -4195,21 +4185,15 @@ sh_update_cr3(struct vcpu *v, int do_loc
     {
         make_cr3(v, pagetable_get_mfn(v->arch.monitor_table));
     }
+#if SHADOW_PAGING_LEVELS == 4
     else // not shadow_mode_external...
     {
         /* We don't support PV except guest == shadow == config levels */
-        BUG_ON(GUEST_PAGING_LEVELS != SHADOW_PAGING_LEVELS);
-#if SHADOW_PAGING_LEVELS == 3
-        /* 2-on-3 or 3-on-3: Use the PAE shadow l3 table we just fabricated.
-         * Don't use make_cr3 because (a) we know it's below 4GB, and
-         * (b) it's not necessarily page-aligned, and make_cr3 takes a pfn */
-        ASSERT(virt_to_maddr(&v->arch.paging.shadow.l3table) <= 0xffffffe0ULL);
-        v->arch.cr3 = virt_to_maddr(&v->arch.paging.shadow.l3table);
-#else
-        /* 4-on-4: Just use the shadow top-level directly */
+        BUILD_BUG_ON(GUEST_PAGING_LEVELS != SHADOW_PAGING_LEVELS);
+        /* Just use the shadow top-level directly */
         make_cr3(v, pagetable_get_mfn(v->arch.shadow_table[0]));
-#endif
     }
+#endif
 
 
     ///




* [PATCH 2/6] x86/shadow: remove pointless loops over all vCPU-s
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
  2017-12-12 15:04 ` [PATCH 1/6] x86/shadow: drop further 32-bit relics Jan Beulich
@ 2017-12-12 15:05 ` Jan Beulich
  2017-12-20  8:06   ` Tim Deegan
  2017-12-12 15:06 ` [PATCH 3/6] x86/shadow: ignore sh_pin() failure in one more case Jan Beulich
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2017-12-12 15:05 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Tim Deegan

The vCPU count can be had more directly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
In the sh_make_shadow() case the question is whether it really was
intended to count all vCPU-s, rather than e.g. only all initialized
ones. I guess the problem would be the phase before the guest
actually starts secondary processors, but that could perhaps be
covered by using ->max_vcpus if otherwise 1 would result.
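
A rough sketch of that alternative, purely hypothetical and not part
of this patch (it assumes counting only initialized vCPUs is what was
meant):

    unsigned int vcpus = 0;
    struct vcpu *v2;

    for_each_vcpu ( d, v2 )
        if ( v2->is_initialised )
            vcpus++;
    if ( vcpus <= 1 )
        vcpus = d->max_vcpus;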

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1214,13 +1214,7 @@ const u8 sh_type_to_size[] = {
  * worth to make sure we never return zero. */
 static unsigned int shadow_min_acceptable_pages(struct domain *d)
 {
-    u32 vcpu_count = 1;
-    struct vcpu *v;
-
-    for_each_vcpu(d, v)
-        vcpu_count++;
-
-    return (vcpu_count * 128);
+    return (d->max_vcpus + 1) * 128;
 }
 
 /* Dispatcher function: call the per-mode function that will unhook the
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1476,16 +1476,14 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
          * pinning l3es.  This is not very quick but it doesn't happen
          * very often. */
         struct page_info *sp, *t;
-        struct vcpu *v2;
-        int l4count = 0, vcpus = 0;
+        unsigned int l4count = 0;
+
         page_list_for_each(sp, &d->arch.paging.shadow.pinned_shadows)
         {
             if ( sp->u.sh.type == SH_type_l4_64_shadow )
                 l4count++;
         }
-        for_each_vcpu ( d, v2 )
-            vcpus++;
-        if ( l4count > 2 * vcpus )
+        if ( l4count > 2 * d->max_vcpus )
         {
             /* Unpin all the pinned l3 tables, and don't pin any more. */
             page_list_for_each_safe(sp, t, &d->arch.paging.shadow.pinned_shadows)





* [PATCH 3/6] x86/shadow: ignore sh_pin() failure in one more case
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
  2017-12-12 15:04 ` [PATCH 1/6] x86/shadow: drop further 32-bit relics Jan Beulich
  2017-12-12 15:05 ` [PATCH 2/6] x86/shadow: remove pointless loops over all vCPU-s Jan Beulich
@ 2017-12-12 15:06 ` Jan Beulich
  2017-12-20  8:08   ` Tim Deegan
  2017-12-12 15:07 ` [PATCH 4/6] x86/shadow: widen reference count Jan Beulich
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2017-12-12 15:06 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Tim Deegan

Following what we've already done in the XSA-250 fix, convert another
sh_pin() caller to no longer fail the higher level operation if pinning
fails, as pinning is a performance optimization only in those places.

Suggested-by: Tim Deegan <tim@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Would it be worth making sh_pin() return void, by adding a bool
parameter which the other call site in sh_set_toplevel_shadow() could
use to indicate a ref is available to be used (instead of acquiring a
new one)? This would allow dropping another domain_crash().
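
A hypothetical shape of that alternative, only to make the question
concrete (no such prototype exists anywhere in the tree):

    /* have_ref: caller donates a reference it already holds. */
    void sh_pin(struct domain *d, mfn_t smfn, bool have_ref);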

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3925,17 +3925,15 @@ sh_set_toplevel_shadow(struct vcpu *v,
     }
     ASSERT(mfn_valid(smfn));
 
-    /* Pin the shadow and put it (back) on the list of pinned shadows */
-    if ( sh_pin(d, smfn) == 0 )
-    {
-        SHADOW_ERROR("can't pin %#lx as toplevel shadow\n", mfn_x(smfn));
-        domain_crash(d);
-    }
-
     /* Take a ref to this page: it will be released in sh_detach_old_tables()
      * or the next call to set_toplevel_shadow() */
     if ( sh_get_ref(d, smfn, 0) )
+    {
+        /* Pin the shadow and put it (back) on the list of pinned shadows */
+        sh_pin(d, smfn);
+
         new_entry = pagetable_from_mfn(smfn);
+    }
     else
     {
         SHADOW_ERROR("can't install %#lx as toplevel shadow\n", mfn_x(smfn));





* [PATCH 4/6] x86/shadow: widen reference count
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
                   ` (2 preceding siblings ...)
  2017-12-12 15:06 ` [PATCH 3/6] x86/shadow: ignore sh_pin() failure in one more case Jan Beulich
@ 2017-12-12 15:07 ` Jan Beulich
  2017-12-12 16:32   ` George Dunlap
  2017-12-20  8:08   ` Tim Deegan
  2017-12-12 15:08 ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{,_ENTRY} uses Jan Beulich
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-12 15:07 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Tim Deegan

Utilize as many of the bits available in the union as possible, without
(just to be on the safe side) colliding with any of the bits outside of
PGT_type_mask.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
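
For illustration, assuming BITS_PER_LONG == 64 (the only case on x86):

    PGT_count_width        = PG_shift(8) = 64 - 8 = 56
    PAGE_SH_REFCOUNT_WIDTH = PGT_count_width - 7  = 49

i.e. together with the 7 bits used by type/pinned/head the count now
fills the low 56 bits of the word, widening the shadow refcount from
25 to 49 bits while leaving the 8 bits above PGT_count_width alone.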

--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -510,7 +510,7 @@ void sh_destroy_shadow(struct domain *d,
  * Returns 0 for failure, 1 for success. */
 static inline int sh_get_ref(struct domain *d, mfn_t smfn, paddr_t entry_pa)
 {
-    u32 x, nx;
+    unsigned long x, nx;
     struct page_info *sp = mfn_to_page(smfn);
 
     ASSERT(mfn_valid(smfn));
@@ -519,7 +519,7 @@ static inline int sh_get_ref(struct doma
     x = sp->u.sh.count;
     nx = x + 1;
 
-    if ( unlikely(nx >= (1U << PAGE_SH_REFCOUNT_WIDTH)) )
+    if ( unlikely(nx >= (1UL << PAGE_SH_REFCOUNT_WIDTH)) )
     {
         SHADOW_PRINTK("shadow ref overflow, gmfn=%lx smfn=%lx\n",
                        __backpointer(sp), mfn_x(smfn));
@@ -543,7 +543,7 @@ static inline int sh_get_ref(struct doma
  * physical address of the shadow entry that held this reference. */
 static inline void sh_put_ref(struct domain *d, mfn_t smfn, paddr_t entry_pa)
 {
-    u32 x, nx;
+    unsigned long x, nx;
     struct page_info *sp = mfn_to_page(smfn);
 
     ASSERT(mfn_valid(smfn));
@@ -561,8 +561,8 @@ static inline void sh_put_ref(struct dom
 
     if ( unlikely(x == 0) )
     {
-        SHADOW_ERROR("shadow ref underflow, smfn=%lx oc=%08x t=%#x\n",
-                     mfn_x(smfn), sp->u.sh.count, sp->u.sh.type);
+        SHADOW_ERROR("shadow ref underflow, smfn=%lx oc=%#lx t=%#x\n",
+                     mfn_x(smfn), sp->u.sh.count + 0UL, sp->u.sh.type);
         BUG();
     }
 
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -18,6 +18,77 @@
  */
 #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
 
+#define PG_shift(idx)   (BITS_PER_LONG - (idx))
+#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
+
+ /* The following page types are MUTUALLY EXCLUSIVE. */
+#define PGT_none          PG_mask(0, 3)  /* no special uses of this page   */
+#define PGT_l1_page_table PG_mask(1, 3)  /* using as an L1 page table?     */
+#define PGT_l2_page_table PG_mask(2, 3)  /* using as an L2 page table?     */
+#define PGT_l3_page_table PG_mask(3, 3)  /* using as an L3 page table?     */
+#define PGT_l4_page_table PG_mask(4, 3)  /* using as an L4 page table?     */
+#define PGT_seg_desc_page PG_mask(5, 3)  /* using this page in a GDT/LDT?  */
+#define PGT_shared_page   PG_mask(6, 3)  /* CoW sharable page              */
+#define PGT_writable_page PG_mask(7, 3)  /* has writable mappings?         */
+#define PGT_type_mask     PG_mask(7, 3)  /* Bits 61-63.                    */
+
+ /* Page is locked? */
+#define _PGT_locked       PG_shift(4)
+#define PGT_locked        PG_mask(1, 4)
+ /* Owning guest has pinned this page to its current type? */
+#define _PGT_pinned       PG_shift(5)
+#define PGT_pinned        PG_mask(1, 5)
+ /* Has this page been validated for use as its current type? */
+#define _PGT_validated    PG_shift(6)
+#define PGT_validated     PG_mask(1, 6)
+ /* PAE only: is this an L2 page directory containing Xen-private mappings? */
+#define _PGT_pae_xen_l2   PG_shift(7)
+#define PGT_pae_xen_l2    PG_mask(1, 7)
+/* Has this page been *partially* validated for use as its current type? */
+#define _PGT_partial      PG_shift(8)
+#define PGT_partial       PG_mask(1, 8)
+
+ /* Count of uses of this frame as its current type. */
+#define PGT_count_width   PG_shift(8)
+#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
+
+/* Are the 'type mask' bits identical? */
+#define PGT_type_equal(x, y) (!(((x) ^ (y)) & PGT_type_mask))
+
+ /* Cleared when the owning guest 'frees' this page. */
+#define _PGC_allocated    PG_shift(1)
+#define PGC_allocated     PG_mask(1, 1)
+ /* Page is Xen heap? */
+#define _PGC_xen_heap     PG_shift(2)
+#define PGC_xen_heap      PG_mask(1, 2)
+ /* Set when is using a page as a page table */
+#define _PGC_page_table   PG_shift(3)
+#define PGC_page_table    PG_mask(1, 3)
+ /* 3-bit PAT/PCD/PWT cache-attribute hint. */
+#define PGC_cacheattr_base PG_shift(6)
+#define PGC_cacheattr_mask PG_mask(7, 6)
+ /* Page is broken? */
+#define _PGC_broken       PG_shift(7)
+#define PGC_broken        PG_mask(1, 7)
+ /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
+#define PGC_state         PG_mask(3, 9)
+#define PGC_state_inuse   PG_mask(0, 9)
+#define PGC_state_offlining PG_mask(1, 9)
+#define PGC_state_offlined PG_mask(2, 9)
+#define PGC_state_free    PG_mask(3, 9)
+#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+
+ /* Count of references to this frame. */
+#define PGC_count_width   PG_shift(9)
+#define PGC_count_mask    ((1UL<<PGC_count_width)-1)
+
+/*
+ * Page needs to be scrubbed. Since this bit can only be set on a page that is
+ * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
+ */
+#define _PGC_need_scrub   _PGC_allocated
+#define PGC_need_scrub    PGC_allocated
+
 #ifndef CONFIG_BIGMEM
 /*
  * This definition is solely for the use in struct page_info (and
@@ -82,7 +153,7 @@ struct page_info
             unsigned long type:5;   /* What kind of shadow is this? */
             unsigned long pinned:1; /* Is the shadow pinned? */
             unsigned long head:1;   /* Is this the first page of the shadow? */
-#define PAGE_SH_REFCOUNT_WIDTH 25
+#define PAGE_SH_REFCOUNT_WIDTH (PGT_count_width - 7)
             unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
         } sh;
 
@@ -198,77 +269,6 @@ struct page_info
 
 #undef __pdx_t
 
-#define PG_shift(idx)   (BITS_PER_LONG - (idx))
-#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
-
- /* The following page types are MUTUALLY EXCLUSIVE. */
-#define PGT_none          PG_mask(0, 3)  /* no special uses of this page   */
-#define PGT_l1_page_table PG_mask(1, 3)  /* using as an L1 page table?     */
-#define PGT_l2_page_table PG_mask(2, 3)  /* using as an L2 page table?     */
-#define PGT_l3_page_table PG_mask(3, 3)  /* using as an L3 page table?     */
-#define PGT_l4_page_table PG_mask(4, 3)  /* using as an L4 page table?     */
-#define PGT_seg_desc_page PG_mask(5, 3)  /* using this page in a GDT/LDT?  */
-#define PGT_shared_page   PG_mask(6, 3)  /* CoW sharable page              */
-#define PGT_writable_page PG_mask(7, 3)  /* has writable mappings?         */
-#define PGT_type_mask     PG_mask(7, 3)  /* Bits 61-63.                    */
-
- /* Page is locked? */
-#define _PGT_locked       PG_shift(4)
-#define PGT_locked        PG_mask(1, 4)
- /* Owning guest has pinned this page to its current type? */
-#define _PGT_pinned       PG_shift(5)
-#define PGT_pinned        PG_mask(1, 5)
- /* Has this page been validated for use as its current type? */
-#define _PGT_validated    PG_shift(6)
-#define PGT_validated     PG_mask(1, 6)
- /* PAE only: is this an L2 page directory containing Xen-private mappings? */
-#define _PGT_pae_xen_l2   PG_shift(7)
-#define PGT_pae_xen_l2    PG_mask(1, 7)
-/* Has this page been *partially* validated for use as its current type? */
-#define _PGT_partial      PG_shift(8)
-#define PGT_partial       PG_mask(1, 8)
-
- /* Count of uses of this frame as its current type. */
-#define PGT_count_width   PG_shift(8)
-#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
-
-/* Are the 'type mask' bits identical? */
-#define PGT_type_equal(x, y) (!(((x) ^ (y)) & PGT_type_mask))
-
- /* Cleared when the owning guest 'frees' this page. */
-#define _PGC_allocated    PG_shift(1)
-#define PGC_allocated     PG_mask(1, 1)
- /* Page is Xen heap? */
-#define _PGC_xen_heap     PG_shift(2)
-#define PGC_xen_heap      PG_mask(1, 2)
- /* Set when is using a page as a page table */
-#define _PGC_page_table   PG_shift(3)
-#define PGC_page_table    PG_mask(1, 3)
- /* 3-bit PAT/PCD/PWT cache-attribute hint. */
-#define PGC_cacheattr_base PG_shift(6)
-#define PGC_cacheattr_mask PG_mask(7, 6)
- /* Page is broken? */
-#define _PGC_broken       PG_shift(7)
-#define PGC_broken        PG_mask(1, 7)
- /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
-#define PGC_state         PG_mask(3, 9)
-#define PGC_state_inuse   PG_mask(0, 9)
-#define PGC_state_offlining PG_mask(1, 9)
-#define PGC_state_offlined PG_mask(2, 9)
-#define PGC_state_free    PG_mask(3, 9)
-#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
-
- /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
-#define PGC_count_mask    ((1UL<<PGC_count_width)-1)
-
-/*
- * Page needs to be scrubbed. Since this bit can only be set on a page that is
- * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
- */
-#define _PGC_need_scrub   _PGC_allocated
-#define PGC_need_scrub    PGC_allocated
-
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
 #define is_xen_heap_mfn(mfn) \
     (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))




* [PATCH 5/6] x86/mm: clean up SHARED_M2P{,_ENTRY} uses
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
                   ` (3 preceding siblings ...)
  2017-12-12 15:07 ` [PATCH 4/6] x86/shadow: widen reference count Jan Beulich
@ 2017-12-12 15:08 ` Jan Beulich
  2017-12-12 17:50   ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses George Dunlap
  2017-12-20  8:09   ` Tim Deegan
  2017-12-12 15:09 ` [PATCH 6/6] x86: use paging_mark_pfn_dirty() Jan Beulich
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-12 15:08 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, tamas, Tim Deegan

Stop open-coding SHARED_M2P() and drop a pointless use of it from
paging_mfn_is_dirty() (!VALID_M2P() is a superset of SHARED_M2P()) and
another one from free_page_type() (prior assertions render this
redundant).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
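
For reference, the macros involved are defined along these lines
(paraphrased from xen/include/asm-x86/mm.h):

    #define INVALID_M2P_ENTRY  (~0UL)
    #define SHARED_M2P_ENTRY   (~0UL - 1UL)
    #define VALID_M2P(e)       (!((e) & (1UL << (BITS_PER_LONG - 1))))
    #define SHARED_M2P(e)      ((e) == SHARED_M2P_ENTRY)

SHARED_M2P_ENTRY has the top bit set, which is why !VALID_M2P() is a
superset of SHARED_M2P().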

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2371,9 +2371,7 @@ int free_page_type(struct page_info *pag
 
         gmfn = mfn_to_gmfn(owner, mfn_x(page_to_mfn(page)));
         ASSERT(VALID_M2P(gmfn));
-        /* Page sharing not supported for shadowed domains */
-        if(!SHARED_M2P(gmfn))
-            shadow_remove_all_shadows(owner, _mfn(gmfn));
+        shadow_remove_all_shadows(owner, _mfn(gmfn));
     }
 
     if ( !(type & PGT_partial) )
@@ -4154,7 +4152,7 @@ int xenmem_add_to_physmap_one(
 
     /* Unmap from old location, if any. */
     old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
-    ASSERT( old_gpfn != SHARED_M2P_ENTRY );
+    ASSERT(!SHARED_M2P(old_gpfn));
     if ( (space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range) &&
          old_gpfn != gfn )
     {
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -409,7 +409,7 @@ static struct page_info* mem_sharing_loo
             unsigned long t = read_atomic(&page->u.inuse.type_info);
             ASSERT((t & PGT_type_mask) == PGT_shared_page);
             ASSERT((t & PGT_count_mask) >= 2);
-            ASSERT(get_gpfn_from_mfn(mfn) == SHARED_M2P_ENTRY); 
+            ASSERT(SHARED_M2P(get_gpfn_from_mfn(mfn)));
             return page;
         }
     }
@@ -469,7 +469,7 @@ static int audit(void)
         }
 
         /* Check the m2p entry */
-        if ( get_gpfn_from_mfn(mfn_x(mfn)) != SHARED_M2P_ENTRY )
+        if ( !SHARED_M2P(get_gpfn_from_mfn(mfn_x(mfn))) )
         {
            MEM_SHARING_DEBUG("mfn %lx shared, but wrong m2p entry (%lx)!\n",
                              mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2596,7 +2596,7 @@ void audit_p2m(struct domain *d,
             continue;
         }
 
-        if ( gfn == SHARED_M2P_ENTRY )
+        if ( SHARED_M2P(gfn) )
         {
             P2M_PRINTK("shared mfn (%lx) on domain page list!\n",
                     mfn);
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -1059,8 +1059,7 @@ long p2m_pt_audit_p2m(struct p2m_domain
                         {
                             m2pfn = get_gpfn_from_mfn(mfn+i1);
                             /* Allow shared M2Ps */
-                            if ( (m2pfn != (gfn + i1)) &&
-                                 (m2pfn != SHARED_M2P_ENTRY) )
+                            if ( (m2pfn != (gfn + i1)) && !SHARED_M2P(m2pfn) )
                             {
                                 pmbad++;
                                 P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx"
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -369,8 +369,8 @@ int paging_mfn_is_dirty(struct domain *d
 
     /* We /really/ mean PFN here, even for non-translated guests. */
     pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
-    /* Shared pages are always read-only; invalid pages can't be dirty. */
-    if ( unlikely(SHARED_M2P(pfn_x(pfn)) || !VALID_M2P(pfn_x(pfn))) )
+    /* Invalid pages can't be dirty. */
+    if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
         return 0;
 
     mfn = d->arch.paging.log_dirty.top;





* [PATCH 6/6] x86: use paging_mark_pfn_dirty()
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
                   ` (4 preceding siblings ...)
  2017-12-12 15:08 ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{,_ENTRY} uses Jan Beulich
@ 2017-12-12 15:09 ` Jan Beulich
  2017-12-20  8:10   ` Tim Deegan
  2017-12-20  9:37 ` [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
  2018-02-07 15:27 ` Ping: [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
  7 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2017-12-12 15:09 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Paul Durrant, Tim Deegan

... in preference to paging_mark_dirty(), when the PFN is known
anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This has a contextual dependency on
https://lists.xenproject.org/archives/html/xen-devel/2017-12/msg00151.html 
which is ready to go in, just waiting for the tree to fully re-open.
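
For context, a simplified sketch of how the two functions relate once
that prerequisite is in (paraphrased; the point is merely that
paging_mark_dirty() has to do an M2P lookup which callers that already
hold the PFN can skip):

    void paging_mark_dirty(struct domain *d, mfn_t gmfn)
    {
        pfn_t pfn;

        if ( !paging_mode_log_dirty(d) || !mfn_valid(gmfn) ||
             page_get_owner(mfn_to_page(gmfn)) != d )
            return;

        /* We /really/ mean PFN here, even for non-translated guests. */
        pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
        paging_mark_pfn_dirty(d, pfn);
    }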

--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -219,14 +219,12 @@ static int modified_memory(struct domain
             page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
             if ( page )
             {
-                mfn_t gmfn = _mfn(page_to_mfn(page));
-
-                paging_mark_dirty(d, gmfn);
+                paging_mark_pfn_dirty(d, _pfn(pfn));
                 /*
                  * These are most probably not page tables any more
                  * don't take a long time and don't die either.
                  */
-                sh_remove_shadows(d, gmfn, 1, 0);
+                sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
                 put_page(page);
             }
         }
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1848,7 +1848,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
          */
         if ( npfec.write_access )
         {
-            paging_mark_dirty(currd, mfn);
+            paging_mark_pfn_dirty(currd, _pfn(gfn));
             /*
              * If p2m is really an altp2m, unlock here to avoid lock ordering
              * violation when the change below is propagated from host p2m.
@@ -2553,7 +2553,7 @@ static void *_hvm_map_guest_frame(unsign
         if ( unlikely(p2m_is_discard_write(p2mt)) )
             *writable = 0;
         else if ( !permanent )
-            paging_mark_dirty(d, _mfn(page_to_mfn(page)));
+            paging_mark_pfn_dirty(d, _pfn(gfn));
     }
 
     if ( !permanent )
@@ -3216,7 +3216,7 @@ static enum hvm_translation_result __hvm
                     memcpy(p, buf, count);
                 else
                     memset(p, 0, count);
-                paging_mark_dirty(v->domain, _mfn(page_to_mfn(page)));
+                paging_mark_pfn_dirty(v->domain, _pfn(gfn_x(gfn)));
             }
         }
         else
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -283,7 +283,7 @@ static int hvm_add_ioreq_gfn(
     rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
                                 _mfn(page_to_mfn(iorp->page)), 0);
     if ( rc == 0 )
-        paging_mark_dirty(d, _mfn(page_to_mfn(iorp->page)));
+        paging_mark_pfn_dirty(d, _pfn(iorp->gfn));
 
     return rc;
 }
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1215,7 +1215,7 @@ p2m_pod_demand_populate(struct p2m_domai
     for( i = 0; i < (1UL << order); i++ )
     {
         set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i);
-        paging_mark_dirty(d, mfn_add(mfn, i));
+        paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn_aligned) + i));
     }
 
     p2m->pod.entry_count -= (1UL << order);
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3728,8 +3728,7 @@ long do_mmu_update(
             }
 
             set_gpfn_from_mfn(mfn, gpfn);
-
-            paging_mark_dirty(pg_owner, _mfn(mfn));
+            paging_mark_pfn_dirty(pg_owner, _pfn(gpfn));
 
             put_page(page);
             break;





* Re: [PATCH 4/6] x86/shadow: widen reference count
  2017-12-12 15:07 ` [PATCH 4/6] x86/shadow: widen reference count Jan Beulich
@ 2017-12-12 16:32   ` George Dunlap
  2017-12-13  9:17     ` Jan Beulich
  2017-12-20  8:08   ` Tim Deegan
  1 sibling, 1 reply; 30+ messages in thread
From: George Dunlap @ 2017-12-12 16:32 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Andrew Cooper, Tim Deegan

On 12/12/2017 03:07 PM, Jan Beulich wrote:
> Utilize as many of the bits available in the union as possible, without
> (just to be on the safe side) colliding with any of the bits outside of
> PGT_type_mask.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -510,7 +510,7 @@ void sh_destroy_shadow(struct domain *d,
>   * Returns 0 for failure, 1 for success. */
>  static inline int sh_get_ref(struct domain *d, mfn_t smfn, paddr_t entry_pa)
>  {
> -    u32 x, nx;
> +    unsigned long x, nx;
>      struct page_info *sp = mfn_to_page(smfn);
>  
>      ASSERT(mfn_valid(smfn));
> @@ -519,7 +519,7 @@ static inline int sh_get_ref(struct doma
>      x = sp->u.sh.count;
>      nx = x + 1;
>  
> -    if ( unlikely(nx >= (1U << PAGE_SH_REFCOUNT_WIDTH)) )
> +    if ( unlikely(nx >= (1UL << PAGE_SH_REFCOUNT_WIDTH)) )
>      {
>          SHADOW_PRINTK("shadow ref overflow, gmfn=%lx smfn=%lx\n",
>                         __backpointer(sp), mfn_x(smfn));
> @@ -543,7 +543,7 @@ static inline int sh_get_ref(struct doma
>   * physical address of the shadow entry that held this reference. */
>  static inline void sh_put_ref(struct domain *d, mfn_t smfn, paddr_t entry_pa)
>  {
> -    u32 x, nx;
> +    unsigned long x, nx;
>      struct page_info *sp = mfn_to_page(smfn);
>  
>      ASSERT(mfn_valid(smfn));
> @@ -561,8 +561,8 @@ static inline void sh_put_ref(struct dom
>  
>      if ( unlikely(x == 0) )
>      {
> -        SHADOW_ERROR("shadow ref underflow, smfn=%lx oc=%08x t=%#x\n",
> -                     mfn_x(smfn), sp->u.sh.count, sp->u.sh.type);
> +        SHADOW_ERROR("shadow ref underflow, smfn=%lx oc=%#lx t=%#x\n",
> +                     mfn_x(smfn), sp->u.sh.count + 0UL, sp->u.sh.type);
>          BUG();
>      }
>  
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -18,6 +18,77 @@
>   */
>  #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
>  
> +#define PG_shift(idx)   (BITS_PER_LONG - (idx))
> +#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
> +
> + /* The following page types are MUTUALLY EXCLUSIVE. */
> +#define PGT_none          PG_mask(0, 3)  /* no special uses of this page   */
> +#define PGT_l1_page_table PG_mask(1, 3)  /* using as an L1 page table?     */
> +#define PGT_l2_page_table PG_mask(2, 3)  /* using as an L2 page table?     */
> +#define PGT_l3_page_table PG_mask(3, 3)  /* using as an L3 page table?     */
> +#define PGT_l4_page_table PG_mask(4, 3)  /* using as an L4 page table?     */
> +#define PGT_seg_desc_page PG_mask(5, 3)  /* using this page in a GDT/LDT?  */
> +#define PGT_shared_page   PG_mask(6, 3)  /* CoW sharable page              */
> +#define PGT_writable_page PG_mask(7, 3)  /* has writable mappings?         */
> +#define PGT_type_mask     PG_mask(7, 3)  /* Bits 61-63.                    */
> +
> + /* Page is locked? */
> +#define _PGT_locked       PG_shift(4)
> +#define PGT_locked        PG_mask(1, 4)
> + /* Owning guest has pinned this page to its current type? */
> +#define _PGT_pinned       PG_shift(5)
> +#define PGT_pinned        PG_mask(1, 5)
> + /* Has this page been validated for use as its current type? */
> +#define _PGT_validated    PG_shift(6)
> +#define PGT_validated     PG_mask(1, 6)
> + /* PAE only: is this an L2 page directory containing Xen-private mappings? */
> +#define _PGT_pae_xen_l2   PG_shift(7)
> +#define PGT_pae_xen_l2    PG_mask(1, 7)
> +/* Has this page been *partially* validated for use as its current type? */
> +#define _PGT_partial      PG_shift(8)
> +#define PGT_partial       PG_mask(1, 8)
> +
> + /* Count of uses of this frame as its current type. */
> +#define PGT_count_width   PG_shift(8)
> +#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
> +
> +/* Are the 'type mask' bits identical? */
> +#define PGT_type_equal(x, y) (!(((x) ^ (y)) & PGT_type_mask))
> +
> + /* Cleared when the owning guest 'frees' this page. */
> +#define _PGC_allocated    PG_shift(1)
> +#define PGC_allocated     PG_mask(1, 1)
> + /* Page is Xen heap? */
> +#define _PGC_xen_heap     PG_shift(2)
> +#define PGC_xen_heap      PG_mask(1, 2)
> + /* Set when is using a page as a page table */
> +#define _PGC_page_table   PG_shift(3)
> +#define PGC_page_table    PG_mask(1, 3)
> + /* 3-bit PAT/PCD/PWT cache-attribute hint. */
> +#define PGC_cacheattr_base PG_shift(6)
> +#define PGC_cacheattr_mask PG_mask(7, 6)
> + /* Page is broken? */
> +#define _PGC_broken       PG_shift(7)
> +#define PGC_broken        PG_mask(1, 7)
> + /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
> +#define PGC_state         PG_mask(3, 9)
> +#define PGC_state_inuse   PG_mask(0, 9)
> +#define PGC_state_offlining PG_mask(1, 9)
> +#define PGC_state_offlined PG_mask(2, 9)
> +#define PGC_state_free    PG_mask(3, 9)
> +#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
> +
> + /* Count of references to this frame. */
> +#define PGC_count_width   PG_shift(9)
> +#define PGC_count_mask    ((1UL<<PGC_count_width)-1)
> +
> +/*
> + * Page needs to be scrubbed. Since this bit can only be set on a page that is
> + * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
> + */
> +#define _PGC_need_scrub   _PGC_allocated
> +#define PGC_need_scrub    PGC_allocated
> +
>  #ifndef CONFIG_BIGMEM
>  /*
>   * This definition is solely for the use in struct page_info (and
> @@ -82,7 +153,7 @@ struct page_info
>              unsigned long type:5;   /* What kind of shadow is this? */
>              unsigned long pinned:1; /* Is the shadow pinned? */
>              unsigned long head:1;   /* Is this the first page of the shadow? */
> -#define PAGE_SH_REFCOUNT_WIDTH 25
> +#define PAGE_SH_REFCOUNT_WIDTH (PGT_count_width - 7)
>              unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
>          } sh;
>  
> @@ -198,77 +269,6 @@ struct page_info
>  
>  #undef __pdx_t
>  
> -#define PG_shift(idx)   (BITS_PER_LONG - (idx))
> -#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
> -
> - /* The following page types are MUTUALLY EXCLUSIVE. */
> -#define PGT_none          PG_mask(0, 3)  /* no special uses of this page   */
> -#define PGT_l1_page_table PG_mask(1, 3)  /* using as an L1 page table?     */
> -#define PGT_l2_page_table PG_mask(2, 3)  /* using as an L2 page table?     */
> -#define PGT_l3_page_table PG_mask(3, 3)  /* using as an L3 page table?     */
> -#define PGT_l4_page_table PG_mask(4, 3)  /* using as an L4 page table?     */
> -#define PGT_seg_desc_page PG_mask(5, 3)  /* using this page in a GDT/LDT?  */
> -#define PGT_shared_page   PG_mask(6, 3)  /* CoW sharable page              */
> -#define PGT_writable_page PG_mask(7, 3)  /* has writable mappings?         */
> -#define PGT_type_mask     PG_mask(7, 3)  /* Bits 61-63.                    */
> -
> - /* Page is locked? */
> -#define _PGT_locked       PG_shift(4)
> -#define PGT_locked        PG_mask(1, 4)
> - /* Owning guest has pinned this page to its current type? */
> -#define _PGT_pinned       PG_shift(5)
> -#define PGT_pinned        PG_mask(1, 5)
> - /* Has this page been validated for use as its current type? */
> -#define _PGT_validated    PG_shift(6)
> -#define PGT_validated     PG_mask(1, 6)
> - /* PAE only: is this an L2 page directory containing Xen-private mappings? */
> -#define _PGT_pae_xen_l2   PG_shift(7)
> -#define PGT_pae_xen_l2    PG_mask(1, 7)
> -/* Has this page been *partially* validated for use as its current type? */
> -#define _PGT_partial      PG_shift(8)
> -#define PGT_partial       PG_mask(1, 8)
> -
> - /* Count of uses of this frame as its current type. */
> -#define PGT_count_width   PG_shift(8)
> -#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
> -
> -/* Are the 'type mask' bits identical? */
> -#define PGT_type_equal(x, y) (!(((x) ^ (y)) & PGT_type_mask))
> -
> - /* Cleared when the owning guest 'frees' this page. */
> -#define _PGC_allocated    PG_shift(1)
> -#define PGC_allocated     PG_mask(1, 1)
> - /* Page is Xen heap? */
> -#define _PGC_xen_heap     PG_shift(2)
> -#define PGC_xen_heap      PG_mask(1, 2)
> - /* Set when is using a page as a page table */
> -#define _PGC_page_table   PG_shift(3)
> -#define PGC_page_table    PG_mask(1, 3)
> - /* 3-bit PAT/PCD/PWT cache-attribute hint. */
> -#define PGC_cacheattr_base PG_shift(6)
> -#define PGC_cacheattr_mask PG_mask(7, 6)
> - /* Page is broken? */
> -#define _PGC_broken       PG_shift(7)
> -#define PGC_broken        PG_mask(1, 7)
> - /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
> -#define PGC_state         PG_mask(3, 9)
> -#define PGC_state_inuse   PG_mask(0, 9)
> -#define PGC_state_offlining PG_mask(1, 9)
> -#define PGC_state_offlined PG_mask(2, 9)
> -#define PGC_state_free    PG_mask(3, 9)
> -#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)

What's the point of moving this code?  And are there any important changes?

It would be a lot easier to review if you separated code motion from
code changes; but if you don't want to do that, you need to make it
clear what you're doing in your changelog.

 -George


* Re: [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses
  2017-12-12 15:08 ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{,_ENTRY} uses Jan Beulich
@ 2017-12-12 17:50   ` George Dunlap
  2017-12-13  9:30     ` Jan Beulich
  2017-12-18 16:56     ` Jan Beulich
  2017-12-20  8:09   ` Tim Deegan
  1 sibling, 2 replies; 30+ messages in thread
From: George Dunlap @ 2017-12-12 17:50 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Andrew Cooper, tamas, Tim Deegan

On 12/12/2017 03:08 PM, Jan Beulich wrote:
> Stop open-coding SHARED_M2P() and drop a pointless use of it from
> paging_mfn_is_dirty() (!VALID_M2P() is a superset of SHARED_M2P()) and
> another one from free_page_type() (prior assertions render this
> redundant).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2371,9 +2371,7 @@ int free_page_type(struct page_info *pag
>  
>          gmfn = mfn_to_gmfn(owner, mfn_x(page_to_mfn(page)));
>          ASSERT(VALID_M2P(gmfn));
> -        /* Page sharing not supported for shadowed domains */
> -        if(!SHARED_M2P(gmfn))
> -            shadow_remove_all_shadows(owner, _mfn(gmfn));
> +        shadow_remove_all_shadows(owner, _mfn(gmfn));

But that's an ASSERT(), not a BUG_ON().  Code after an ASSERT() needs to
make sure that if it turns out to be false in a non-debug run, nothing
worse than a BUG() will happen -- for instance, an information leak or a
privilege escalation.

xen/arch/x86/mm/shadow/common.c:sh_remove_shadows() looks up the page
struct for the mfn without checking if it's valid; so it will *probably*
end up accessing a wild pointer; at which point it would be better to
change the ASSERT(VALID_M2P()) into a BUG_ON(!VALID_M2P()).

Or, if we don't want to crash on a production box in that case, we
should leave the if() statement there.
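
For reference, ASSERT() is compiled out of non-debug builds while
BUG_ON() stays active in all builds; paraphrased from
xen/include/xen/lib.h:

    #ifndef NDEBUG
    #define ASSERT(p) do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
    #else
    #define ASSERT(p) do { if ( 0 && (p) ) {} } while (0)
    #endif
    #define BUG_ON(p) do { if ( unlikely(p) ) BUG(); } while (0)

Hence the concern about what code following a (possibly compiled-out)
ASSERT() does in release builds.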

> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -369,8 +369,8 @@ int paging_mfn_is_dirty(struct domain *d
>  
>      /* We /really/ mean PFN here, even for non-translated guests. */
>      pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
> -    /* Shared pages are always read-only; invalid pages can't be dirty. */
> -    if ( unlikely(SHARED_M2P(pfn_x(pfn)) || !VALID_M2P(pfn_x(pfn))) )
> +    /* Invalid pages can't be dirty. */
> +    if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
>          return 0;

Are you sure that it will always be the case in the future that
SHARED_M2P(x) implies !VALID_M2P(x)?  (This is also relevant for my
previous comment.)

 -George


* Re: [PATCH 4/6] x86/shadow: widen reference count
  2017-12-12 16:32   ` George Dunlap
@ 2017-12-13  9:17     ` Jan Beulich
  2017-12-13 10:32       ` George Dunlap
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2017-12-13  9:17 UTC (permalink / raw)
  To: George Dunlap; +Cc: George Dunlap, Andrew Cooper, Tim Deegan, xen-devel

>>> On 12.12.17 at 17:32, <george.dunlap@citrix.com> wrote:
>> @@ -82,7 +153,7 @@ struct page_info
>>              unsigned long type:5;   /* What kind of shadow is this? */
>>              unsigned long pinned:1; /* Is the shadow pinned? */
>>              unsigned long head:1;   /* Is this the first page of the shadow? */
>> -#define PAGE_SH_REFCOUNT_WIDTH 25
>> +#define PAGE_SH_REFCOUNT_WIDTH (PGT_count_width - 7)
>>              unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
>>          } sh;

It is this use of PGT_count_width ...

> What's the point of moving this code?  And are there any important changes?

... which requires the move. I thought that would be clear enough.

> It would be a lot easier to review if you separated code motion from
> code changes; but if you don't want to do that, you need to make it
> clear what you're doing in your changelog.

I've added

"Note that the first and last hunks of the xen/include/asm-x86/mm.h
 change are merely code motion."

to the description; I don't think separating out that change would
make review any easier (you either trust the code-motion-only
statement, or you want to compare both blocks, regardless of
whether this was a separate patch).

Jan



* Re: [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses
  2017-12-12 17:50   ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses George Dunlap
@ 2017-12-13  9:30     ` Jan Beulich
  2017-12-18 16:56     ` Jan Beulich
  1 sibling, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-13  9:30 UTC (permalink / raw)
  To: George Dunlap; +Cc: George Dunlap, Andrew Cooper, tamas, Tim Deegan, xen-devel

>>> On 12.12.17 at 18:50, <george.dunlap@citrix.com> wrote:
> On 12/12/2017 03:08 PM, Jan Beulich wrote:
>> Stop open-coding SHARED_M2P() and drop a pointless use of it from
>> paging_mfn_is_dirty() (!VALID_M2P() is a superset of SHARED_M2P()) and
>> another one from free_page_type() (prior assertions render this
>> redundant).
>> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -2371,9 +2371,7 @@ int free_page_type(struct page_info *pag
>>  
>>          gmfn = mfn_to_gmfn(owner, mfn_x(page_to_mfn(page)));
>>          ASSERT(VALID_M2P(gmfn));
>> -        /* Page sharing not supported for shadowed domains */
>> -        if(!SHARED_M2P(gmfn))
>> -            shadow_remove_all_shadows(owner, _mfn(gmfn));
>> +        shadow_remove_all_shadows(owner, _mfn(gmfn));
> 
> But that's an ASSERT(), not a BUG_ON().  Code after an ASSERT() needs to
> make sure that if it turns out to be false in a non-debug run, nothing
> worse than a BUG() will happen -- for instance, an information leak or a
> privilege escalation.

Okay, I think I finally will need to introduce the assert-or-crash-
domain construct as a prereq here. I agree with the general
comment you make; however, we have lots and lots of examples
to the contrary, not least ...

> xen/arch/x86/mm/shadow/common.c:sh_remove_shadows() looks up the page
> struct for the mfn without checking if it's valid; so it will *probably*
> end up accessing a wild pointer; at which point it would be better to
> change the ASSERT(VALID_M2P()) into a BUG_ON(!VALID_M2P()).

... sh_remove_shadows() with its "ASSERT(mfn_valid(gmfn))",
which is the very sanity check allowing the looked up page
pointer to be de-referenced.

>> --- a/xen/arch/x86/mm/paging.c
>> +++ b/xen/arch/x86/mm/paging.c
>> @@ -369,8 +369,8 @@ int paging_mfn_is_dirty(struct domain *d
>>  
>>      /* We /really/ mean PFN here, even for non-translated guests. */
>>      pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
>> -    /* Shared pages are always read-only; invalid pages can't be dirty. */
>> -    if ( unlikely(SHARED_M2P(pfn_x(pfn)) || !VALID_M2P(pfn_x(pfn))) )
>> +    /* Invalid pages can't be dirty. */
>> +    if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
>>          return 0;
> 
> Are you sure that it will always be the case in the future that
> SHARED_MP2(x) implies !VALID_M2P(x)?  (This is also relevant for my
> previous comment.)

Well, at least with the current concept it always will be afaict. As
for its relevance to the previous comment: In the description I'm
specifically mentioning paging_mfn_is_dirty() but not
free_page_type() - in the latter the SHARED_M2P() check isn't
being dropped for being redundant with VALID_M2P(), but for
being dead code altogether (due to the earlier
ASSERT(!shadow_mode_refcounts(owner))).

Of course both ASSERT()s there suffer the same problem as
mentioned above. I doubt it should be the subject of this patch
to convert all of them to the to-be-introduced construct, despite
them sitting in code next to one being modified.

Jan



* Re: [PATCH 4/6] x86/shadow: widen reference count
  2017-12-13  9:17     ` Jan Beulich
@ 2017-12-13 10:32       ` George Dunlap
  2017-12-13 14:20         ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: George Dunlap @ 2017-12-13 10:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, Tim Deegan, xen-devel

On 12/13/2017 09:17 AM, Jan Beulich wrote:
>>>> On 12.12.17 at 17:32, <george.dunlap@citrix.com> wrote:
>>> @@ -82,7 +153,7 @@ struct page_info
>>>              unsigned long type:5;   /* What kind of shadow is this? */
>>>              unsigned long pinned:1; /* Is the shadow pinned? */
>>>              unsigned long head:1;   /* Is this the first page of the shadow? */
>>> -#define PAGE_SH_REFCOUNT_WIDTH 25
>>> +#define PAGE_SH_REFCOUNT_WIDTH (PGT_count_width - 7)
>>>              unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
>>>          } sh;
> 
> It is this use of PGT_count_width ...
> 
>> What's the point of moving this code?  And are there any important changes?
> 
> ... which requires the move. I thought that would be clear enough.

It's a lot less work for the patch author to point out everything that's
going on than for a reviewer to infer it.

The point I was making about reviewing your emulator patches was that,
as much as possible, the patch author should make the reviewer's job
simply one of *verification*.  Consider the following changelog:

"Move the PG[CT]* definitions before the declaration of the page_info
struct, so that we can use PGT_count_width to define the width of
inuse.sh.count."

Now I don't need to figure out that it's simply code motion, or why it
might be necessary.  I can see that sh.count uses PGT_count_width
(verifying that the move is necessary), and I know that the movement is
simply meant to be code motion, so I can verify that there are no other
changes (if I feel so inclined).

It's a lot less work for you to write such a paragraph than it is for
each of the reviewers to independently infer what's going on.  It's
"clear enough" if you already have filtered out what's merely code
motion and what's a substantial change -- but sorting out those two
requires a certain amount of work, which each reviewer will have to do
independently.

Note that there is obviously more to review than verification -- once a
reviewer determined that the patch is doing what it claims, they also
need to look for other side effects.  But verification is the first
step, and the less mental effort a reviewer uses to accomplish the first
step, the more will be available for the next step, and for other patches.

>> It would be a lot easier to review if you separated code motion from
>> code changes; but if you don't want to do that, you need to make it
>> clear what you're doing in your changelog.
> 
> I've added
> 
> "Note that the first and last hunks of the xen/include/asm-x86/mm.h
>  change are merely code motion."
> 
> to the description; I don't think separating out that change would
> make review any easier (you either trust the code-motion-only
> statement, or you want to compare both blocks, regardless of
> whether this was a separate patch).

Fair enough regarding separating code motion and changes.  What do you
think of the paragraph I suggest above?

 -George


* Re: [PATCH 4/6] x86/shadow: widen reference count
  2017-12-13 10:32       ` George Dunlap
@ 2017-12-13 14:20         ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-13 14:20 UTC (permalink / raw)
  To: George Dunlap; +Cc: George Dunlap, Andrew Cooper, Tim Deegan, xen-devel

>>> On 13.12.17 at 11:32, <george.dunlap@citrix.com> wrote:
> On 12/13/2017 09:17 AM, Jan Beulich wrote:
>>>>> On 12.12.17 at 17:32, <george.dunlap@citrix.com> wrote:
>>>> @@ -82,7 +153,7 @@ struct page_info
>>>>              unsigned long type:5;   /* What kind of shadow is this? */
>>>>              unsigned long pinned:1; /* Is the shadow pinned? */
>>>>              unsigned long head:1;   /* Is this the first page of the shadow? */
>>>> -#define PAGE_SH_REFCOUNT_WIDTH 25
>>>> +#define PAGE_SH_REFCOUNT_WIDTH (PGT_count_width - 7)
>>>>              unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
>>>>          } sh;
>> 
>> It is this use of PGT_count_width ...
>> 
>>> What's the point of moving this code?  And are there any important changes?
>> 
>> ... which requires the move. I thought that would be clear enough.
> 
> It's a lot less work for the patch author to point out everything that's
> going on than for a reviewer to infer it.
> 
> The point I was reaching about reviewing your emulator patches was that
> as much as possible, the patch author should make the reviewer's job
> simply one of *verification*.  Consider the following changelog:
> 
> "Move the PG[CT]* definitions before the declaration of the page_info
> struct, so that we can use PGT_count_width to define the width of
> inuse.sh.count."
> 
> Now I don't need to figure out that it's simply code motion, or why it
> might be necessary.  I can see that sh.count uses PGT_count_width
> (verifying that the move is necessary), and I know that the movement is
> simply meant to be code motion, so I can verify that there are no other
> changes (if I feel so inclined).
> 
> It's a lot less work for you to write such a paragraph than it is for
> each of the reviewers to independently infer what's going on.  It's
> "clear enough" if you already have filtered out what's merely code
> motion and what's a substantial change -- but sorting out those two
> requires a certain amount of work, which each reviewer will have to do
> independently.
> 
> Note that there is obviously more to review than verification -- once a
> reviewer determined that the patch is doing what it claims, they also
> need to look for other side effects.  But verification is the first
> step, and the less mental effort a reviewer uses to accomplish the first
> step, the more will be available for the next step, and for other patches.
> 
>>> It would be a lot easier to review if you separated code motion from
>>> code changes; but if you don't want to do that, you need to make it
>>> clear what you're doing in your changelog.
>> 
>> I've added
>> 
>> "Note that the first and last hunks of the xen/include/asm-x86/mm.h
>>  change are merely code motion."
>> 
>> to the description; I don't think separating out that change would
>> make review any easier (you either trust the code-motion-only
>> statement, or you want to compare both blocks, regardless of
>> whether this was a separate patch).
> 
> Fair enough regarding separating code motion and changes.  What do you
> think of the paragraph I suggest above?

I can use that one if you think it's better than the one I had
added already. I'm not sure I see much of a difference in
conveyed information.

Jan



* Re: [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses
  2017-12-12 17:50   ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses George Dunlap
  2017-12-13  9:30     ` Jan Beulich
@ 2017-12-18 16:56     ` Jan Beulich
  1 sibling, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-18 16:56 UTC (permalink / raw)
  To: George Dunlap; +Cc: George Dunlap, Andrew Cooper, tamas, Tim Deegan, xen-devel

>>> On 12.12.17 at 18:50, <george.dunlap@citrix.com> wrote:
> On 12/12/2017 03:08 PM, Jan Beulich wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -2371,9 +2371,7 @@ int free_page_type(struct page_info *pag
>>  
>>          gmfn = mfn_to_gmfn(owner, mfn_x(page_to_mfn(page)));
>>          ASSERT(VALID_M2P(gmfn));
>> -        /* Page sharing not supported for shadowed domains */
>> -        if(!SHARED_M2P(gmfn))
>> -            shadow_remove_all_shadows(owner, _mfn(gmfn));
>> +        shadow_remove_all_shadows(owner, _mfn(gmfn));
> 
> But that's an ASSERT(), not a BUG_ON().  Code after an ASSERT() needs to
> make sure that if it turns out to be false in a non-debug run, nothing
> worse than a BUG() will happen -- for instance, an information leak or a
> privilege escalation.
> 
> xen/arch/x86/mm/shadow/common.c:sh_remove_shadows() looks up the page
> struct for the mfn without checking if it's valid; so it will *probably*
> end up accessing a wild pointer; at which point it would be better to
> change the ASSERT(VALID_M2P()) into a BUG_ON(!VALID_M2P()).
> 
> Or, if we don't want to crash on a production box in that case, we
> should leave the if() statement there.

Considering

    ASSERT(mfn_valid(gmfn));

in sh_remove_shadows(), rather than leaving the conditional,
wouldn't it then be better to drop the ASSERT() and use

        if ( VALID_M2P(gmfn) )
            shadow_remove_all_shadows(owner, _mfn(gmfn));

?

Jan



* Re: [PATCH 1/6] x86/shadow: drop further 32-bit relics
  2017-12-12 15:04 ` [PATCH 1/6] x86/shadow: drop further 32-bit relics Jan Beulich
@ 2017-12-20  8:03   ` Tim Deegan
  0 siblings, 0 replies; 30+ messages in thread
From: Tim Deegan @ 2017-12-20  8:03 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, xen-devel, Andrew Cooper

At 08:04 -0700 on 12 Dec (1513065884), Jan Beulich wrote:
> PV guests don't ever get shadowed in other than 4-level mode anymore;
> commit 5a3ce8f85e ("x86/shadow: drop stray name tags from
> sh_{guest_get,map}_eff_l1e()") didn't go quite far enough (and there's
> a good chance that further cleanup opportunity exists, which I simply
> didn't notice).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

> I'm not sure if all the ASSERT()s are really useful to have.

They seem good to me.

Tim.


* Re: [PATCH 2/6] x86/shadow: remove pointless loops over all vCPU-s
  2017-12-12 15:05 ` [PATCH 2/6] x86/shadow: remove pointless loops over all vCPU-s Jan Beulich
@ 2017-12-20  8:06   ` Tim Deegan
  0 siblings, 0 replies; 30+ messages in thread
From: Tim Deegan @ 2017-12-20  8:06 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, xen-devel, Andrew Cooper

At 08:05 -0700 on 12 Dec (1513065939), Jan Beulich wrote:
> The vCPU count can be had more directly.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

> In the sh_make_shadow() case the question is whether it really was
> intended to count all vCPU-s, rather than e.g. only all initialized
> ones. I guess the problem would be the phase before the guest
> actually starts secondary processors, but that could perhaps be
> covered by using ->max_vcpus if otherwise 1 would result.

Yes, the intention was to count 'active' vcpus in both cases.  I'm OK
with the change though, and I don't think it's worth making things any
more complex here.

Cheers,

Tim.


* Re: [PATCH 3/6] x86/shadow: ignore sh_pin() failure in one more case
  2017-12-12 15:06 ` [PATCH 3/6] x86/shadow: ignore sh_pin() failure in one more case Jan Beulich
@ 2017-12-20  8:08   ` Tim Deegan
  0 siblings, 0 replies; 30+ messages in thread
From: Tim Deegan @ 2017-12-20  8:08 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, xen-devel, Andrew Cooper

At 08:06 -0700 on 12 Dec (1513065979), Jan Beulich wrote:
> Following what we've already done in the XSA-250 fix, convert another
> sh_pin() caller to no longer fail the higher level operation if pinning
> fails, as pinning is a performance optimization only in those places.
> 
> Suggested-by: Tim Deegan <tim@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Tim Deegan <tim@xen.org>

> Would it be worth making sh_pin() return void, by adding a bool
> parameter which the other call site in sh_set_toplevel_shadow() could
> use to indicate a ref is avalable to be used (instead of aquiring a
> new one)? This would allow to drop another domain_crash().

No, let's keep the calling convention as it is, please.

Cheers

Tim.


* Re: [PATCH 4/6] x86/shadow: widen reference count
  2017-12-12 15:07 ` [PATCH 4/6] x86/shadow: widen reference count Jan Beulich
  2017-12-12 16:32   ` George Dunlap
@ 2017-12-20  8:08   ` Tim Deegan
  1 sibling, 0 replies; 30+ messages in thread
From: Tim Deegan @ 2017-12-20  8:08 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, xen-devel, Andrew Cooper

At 08:07 -0700 on 12 Dec (1513066056), Jan Beulich wrote:
> Utilize as many of the bits available in the union as possible, without
> (just to be on the safe side) colliding with any of the bits outside of
> PGT_type_mask.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


* Re: [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses
  2017-12-12 15:08 ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{,_ENTRY} uses Jan Beulich
  2017-12-12 17:50   ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses George Dunlap
@ 2017-12-20  8:09   ` Tim Deegan
  1 sibling, 0 replies; 30+ messages in thread
From: Tim Deegan @ 2017-12-20  8:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, xen-devel, tamas, Andrew Cooper

At 08:08 -0700 on 12 Dec (1513066100), Jan Beulich wrote:
> Stop open-coding SHARED_M2P() and drop a pointless use of it from
> paging_mfn_is_dirty() (!VALID_M2P() is a superset of SHARED_M2P()) and
> another one from free_page_type() (prior assertions render this
> redundant).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


* Re: [PATCH 6/6] x86: use paging_mark_pfn_dirty()
  2017-12-12 15:09 ` [PATCH 6/6] x86: use paging_mark_pfn_dirty() Jan Beulich
@ 2017-12-20  8:10   ` Tim Deegan
  0 siblings, 0 replies; 30+ messages in thread
From: Tim Deegan @ 2017-12-20  8:10 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, xen-devel, Paul Durrant, Andrew Cooper

At 08:09 -0700 on 12 Dec (1513066153), Jan Beulich wrote:
> ... in preference over paging_mark_dirty(), when the PFN is known
> anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


* [PATCH v2 0/3] XSA-248...251 follow-up
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
                   ` (5 preceding siblings ...)
  2017-12-12 15:09 ` [PATCH 6/6] x86: use paging_mark_pfn_dirty() Jan Beulich
@ 2017-12-20  9:37 ` Jan Beulich
  2017-12-20  9:40   ` [PATCH v2 1/3] x86/shadow: widen reference count Jan Beulich
                     ` (2 more replies)
  2018-02-07 15:27 ` Ping: [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
  7 siblings, 3 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-20  9:37 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper

The parts of this series aren't really dependent upon one another,
they belong together solely because of their origin.

1: x86/shadow: widen reference count
2: x86/mm: clean up SHARED_M2P{,_ENTRY} uses
3: x86: use paging_mark_pfn_dirty()

Signed-off-by: Jan Beulich <jbeulich@suse.com>



* [PATCH v2 1/3] x86/shadow: widen reference count
  2017-12-20  9:37 ` [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
@ 2017-12-20  9:40   ` Jan Beulich
  2017-12-20  9:41   ` [PATCH v2 2/3] x86/mm: clean up SHARED_M2P{, _ENTRY} uses Jan Beulich
  2017-12-20  9:42   ` [PATCH v2 3/3] x86: use paging_mark_pfn_dirty() Jan Beulich
  2 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-20  9:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper

Utilize as many of the bits available in the union as possible, without
(just to be on the safe side) colliding with any of the bits outside of
PGT_type_mask.

Note that the first and last hunks of the xen/include/asm-x86/mm.h
change are merely code motion.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: Slightly extend description.
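
Purely for illustration (not part of the patch), on x86-64, i.e. assuming
BITS_PER_LONG == 64, the new width works out as:

    PG_shift(8)            = 64 - 8                               = 56
    PGT_count_width        = PG_shift(8)                          = 56
    bits ahead of the count in u.sh (type:5 + pinned:1 + head:1)  =  7
    PAGE_SH_REFCOUNT_WIDTH = PGT_count_width - 7                  = 49   (was 25)

i.e. the count still stays clear of PGT_type_mask and the other flag bits in
the top byte, and the 1U -> 1UL adjustment in sh_get_ref() is needed because
a 49-bit shift no longer fits an unsigned int.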

--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -520,7 +520,7 @@ void sh_destroy_shadow(struct domain *d,
  * Returns 0 for failure, 1 for success. */
 static inline int sh_get_ref(struct domain *d, mfn_t smfn, paddr_t entry_pa)
 {
-    u32 x, nx;
+    unsigned long x, nx;
     struct page_info *sp = mfn_to_page(smfn);
 
     ASSERT(mfn_valid(smfn));
@@ -529,7 +529,7 @@ static inline int sh_get_ref(struct doma
     x = sp->u.sh.count;
     nx = x + 1;
 
-    if ( unlikely(nx >= (1U << PAGE_SH_REFCOUNT_WIDTH)) )
+    if ( unlikely(nx >= (1UL << PAGE_SH_REFCOUNT_WIDTH)) )
     {
         SHADOW_PRINTK("shadow ref overflow, gmfn=%lx smfn=%lx\n",
                        __backpointer(sp), mfn_x(smfn));
@@ -553,7 +553,7 @@ static inline int sh_get_ref(struct doma
  * physical address of the shadow entry that held this reference. */
 static inline void sh_put_ref(struct domain *d, mfn_t smfn, paddr_t entry_pa)
 {
-    u32 x, nx;
+    unsigned long x, nx;
     struct page_info *sp = mfn_to_page(smfn);
 
     ASSERT(mfn_valid(smfn));
@@ -571,8 +571,8 @@ static inline void sh_put_ref(struct dom
 
     if ( unlikely(x == 0) )
     {
-        SHADOW_ERROR("shadow ref underflow, smfn=%lx oc=%08x t=%#x\n",
-                     mfn_x(smfn), sp->u.sh.count, sp->u.sh.type);
+        SHADOW_ERROR("shadow ref underflow, smfn=%lx oc=%#lx t=%#x\n",
+                     mfn_x(smfn), sp->u.sh.count + 0UL, sp->u.sh.type);
         BUG();
     }
 
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -18,6 +18,77 @@
  */
 #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
 
+#define PG_shift(idx)   (BITS_PER_LONG - (idx))
+#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
+
+ /* The following page types are MUTUALLY EXCLUSIVE. */
+#define PGT_none          PG_mask(0, 3)  /* no special uses of this page   */
+#define PGT_l1_page_table PG_mask(1, 3)  /* using as an L1 page table?     */
+#define PGT_l2_page_table PG_mask(2, 3)  /* using as an L2 page table?     */
+#define PGT_l3_page_table PG_mask(3, 3)  /* using as an L3 page table?     */
+#define PGT_l4_page_table PG_mask(4, 3)  /* using as an L4 page table?     */
+#define PGT_seg_desc_page PG_mask(5, 3)  /* using this page in a GDT/LDT?  */
+#define PGT_shared_page   PG_mask(6, 3)  /* CoW sharable page              */
+#define PGT_writable_page PG_mask(7, 3)  /* has writable mappings?         */
+#define PGT_type_mask     PG_mask(7, 3)  /* Bits 61-63.                    */
+
+ /* Page is locked? */
+#define _PGT_locked       PG_shift(4)
+#define PGT_locked        PG_mask(1, 4)
+ /* Owning guest has pinned this page to its current type? */
+#define _PGT_pinned       PG_shift(5)
+#define PGT_pinned        PG_mask(1, 5)
+ /* Has this page been validated for use as its current type? */
+#define _PGT_validated    PG_shift(6)
+#define PGT_validated     PG_mask(1, 6)
+ /* PAE only: is this an L2 page directory containing Xen-private mappings? */
+#define _PGT_pae_xen_l2   PG_shift(7)
+#define PGT_pae_xen_l2    PG_mask(1, 7)
+/* Has this page been *partially* validated for use as its current type? */
+#define _PGT_partial      PG_shift(8)
+#define PGT_partial       PG_mask(1, 8)
+
+ /* Count of uses of this frame as its current type. */
+#define PGT_count_width   PG_shift(8)
+#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
+
+/* Are the 'type mask' bits identical? */
+#define PGT_type_equal(x, y) (!(((x) ^ (y)) & PGT_type_mask))
+
+ /* Cleared when the owning guest 'frees' this page. */
+#define _PGC_allocated    PG_shift(1)
+#define PGC_allocated     PG_mask(1, 1)
+ /* Page is Xen heap? */
+#define _PGC_xen_heap     PG_shift(2)
+#define PGC_xen_heap      PG_mask(1, 2)
+ /* Set when is using a page as a page table */
+#define _PGC_page_table   PG_shift(3)
+#define PGC_page_table    PG_mask(1, 3)
+ /* 3-bit PAT/PCD/PWT cache-attribute hint. */
+#define PGC_cacheattr_base PG_shift(6)
+#define PGC_cacheattr_mask PG_mask(7, 6)
+ /* Page is broken? */
+#define _PGC_broken       PG_shift(7)
+#define PGC_broken        PG_mask(1, 7)
+ /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
+#define PGC_state         PG_mask(3, 9)
+#define PGC_state_inuse   PG_mask(0, 9)
+#define PGC_state_offlining PG_mask(1, 9)
+#define PGC_state_offlined PG_mask(2, 9)
+#define PGC_state_free    PG_mask(3, 9)
+#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+
+ /* Count of references to this frame. */
+#define PGC_count_width   PG_shift(9)
+#define PGC_count_mask    ((1UL<<PGC_count_width)-1)
+
+/*
+ * Page needs to be scrubbed. Since this bit can only be set on a page that is
+ * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
+ */
+#define _PGC_need_scrub   _PGC_allocated
+#define PGC_need_scrub    PGC_allocated
+
 #ifndef CONFIG_BIGMEM
 /*
  * This definition is solely for the use in struct page_info (and
@@ -82,7 +153,7 @@ struct page_info
             unsigned long type:5;   /* What kind of shadow is this? */
             unsigned long pinned:1; /* Is the shadow pinned? */
             unsigned long head:1;   /* Is this the first page of the shadow? */
-#define PAGE_SH_REFCOUNT_WIDTH 25
+#define PAGE_SH_REFCOUNT_WIDTH (PGT_count_width - 7)
             unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
         } sh;
 
@@ -198,77 +269,6 @@ struct page_info
 
 #undef __pdx_t
 
-#define PG_shift(idx)   (BITS_PER_LONG - (idx))
-#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
-
- /* The following page types are MUTUALLY EXCLUSIVE. */
-#define PGT_none          PG_mask(0, 3)  /* no special uses of this page   */
-#define PGT_l1_page_table PG_mask(1, 3)  /* using as an L1 page table?     */
-#define PGT_l2_page_table PG_mask(2, 3)  /* using as an L2 page table?     */
-#define PGT_l3_page_table PG_mask(3, 3)  /* using as an L3 page table?     */
-#define PGT_l4_page_table PG_mask(4, 3)  /* using as an L4 page table?     */
-#define PGT_seg_desc_page PG_mask(5, 3)  /* using this page in a GDT/LDT?  */
-#define PGT_shared_page   PG_mask(6, 3)  /* CoW sharable page              */
-#define PGT_writable_page PG_mask(7, 3)  /* has writable mappings?         */
-#define PGT_type_mask     PG_mask(7, 3)  /* Bits 61-63.                    */
-
- /* Page is locked? */
-#define _PGT_locked       PG_shift(4)
-#define PGT_locked        PG_mask(1, 4)
- /* Owning guest has pinned this page to its current type? */
-#define _PGT_pinned       PG_shift(5)
-#define PGT_pinned        PG_mask(1, 5)
- /* Has this page been validated for use as its current type? */
-#define _PGT_validated    PG_shift(6)
-#define PGT_validated     PG_mask(1, 6)
- /* PAE only: is this an L2 page directory containing Xen-private mappings? */
-#define _PGT_pae_xen_l2   PG_shift(7)
-#define PGT_pae_xen_l2    PG_mask(1, 7)
-/* Has this page been *partially* validated for use as its current type? */
-#define _PGT_partial      PG_shift(8)
-#define PGT_partial       PG_mask(1, 8)
-
- /* Count of uses of this frame as its current type. */
-#define PGT_count_width   PG_shift(8)
-#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
-
-/* Are the 'type mask' bits identical? */
-#define PGT_type_equal(x, y) (!(((x) ^ (y)) & PGT_type_mask))
-
- /* Cleared when the owning guest 'frees' this page. */
-#define _PGC_allocated    PG_shift(1)
-#define PGC_allocated     PG_mask(1, 1)
- /* Page is Xen heap? */
-#define _PGC_xen_heap     PG_shift(2)
-#define PGC_xen_heap      PG_mask(1, 2)
- /* Set when is using a page as a page table */
-#define _PGC_page_table   PG_shift(3)
-#define PGC_page_table    PG_mask(1, 3)
- /* 3-bit PAT/PCD/PWT cache-attribute hint. */
-#define PGC_cacheattr_base PG_shift(6)
-#define PGC_cacheattr_mask PG_mask(7, 6)
- /* Page is broken? */
-#define _PGC_broken       PG_shift(7)
-#define PGC_broken        PG_mask(1, 7)
- /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
-#define PGC_state         PG_mask(3, 9)
-#define PGC_state_inuse   PG_mask(0, 9)
-#define PGC_state_offlining PG_mask(1, 9)
-#define PGC_state_offlined PG_mask(2, 9)
-#define PGC_state_free    PG_mask(3, 9)
-#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
-
- /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
-#define PGC_count_mask    ((1UL<<PGC_count_width)-1)
-
-/*
- * Page needs to be scrubbed. Since this bit can only be set on a page that is
- * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
- */
-#define _PGC_need_scrub   _PGC_allocated
-#define PGC_need_scrub    PGC_allocated
-
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
 #define is_xen_heap_mfn(mfn) \
     (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))




* [PATCH v2 2/3] x86/mm: clean up SHARED_M2P{, _ENTRY} uses
  2017-12-20  9:37 ` [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
  2017-12-20  9:40   ` [PATCH v2 1/3] x86/shadow: widen reference count Jan Beulich
@ 2017-12-20  9:41   ` Jan Beulich
  2018-02-08 12:31     ` George Dunlap
  2017-12-20  9:42   ` [PATCH v2 3/3] x86: use paging_mark_pfn_dirty() Jan Beulich
  2 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2017-12-20  9:41 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, tamas

Stop open-coding SHARED_M2P() and drop a pointless use of it from
paging_mfn_is_dirty() (!VALID_M2P() is a superset of SHARED_M2P()) and
another one from free_page_type() (prior assertions render this
redundant).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: Re-do free_page_type() change.
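
For reference (not part of the patch), the macros involved are along the
following lines (approximate sketch; the authoritative definitions are in the
headers):

    #define INVALID_M2P_ENTRY  (~0UL)
    #define SHARED_M2P_ENTRY   (~0UL - 1UL)
    #define VALID_M2P(e)       (!((e) & (1UL << (BITS_PER_LONG - 1))))
    #define SHARED_M2P(e)      ((e) == SHARED_M2P_ENTRY)

Since SHARED_M2P_ENTRY has the top bit set, SHARED_M2P(x) implies
!VALID_M2P(x), which is why the separate SHARED_M2P() check in
paging_mfn_is_dirty() can simply be dropped.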

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2369,9 +2369,7 @@ int free_page_type(struct page_info *pag
         ASSERT(!shadow_mode_refcounts(owner));
 
         gmfn = mfn_to_gmfn(owner, mfn_x(page_to_mfn(page)));
-        ASSERT(VALID_M2P(gmfn));
-        /* Page sharing not supported for shadowed domains */
-        if(!SHARED_M2P(gmfn))
+        if ( VALID_M2P(gmfn) )
             shadow_remove_all_shadows(owner, _mfn(gmfn));
     }
 
@@ -4166,7 +4164,7 @@ int xenmem_add_to_physmap_one(
 
     /* Unmap from old location, if any. */
     old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
-    ASSERT( old_gpfn != SHARED_M2P_ENTRY );
+    ASSERT(!SHARED_M2P(old_gpfn));
     if ( (space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range) &&
          old_gpfn != gfn )
     {
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -409,7 +409,7 @@ static struct page_info* mem_sharing_loo
             unsigned long t = read_atomic(&page->u.inuse.type_info);
             ASSERT((t & PGT_type_mask) == PGT_shared_page);
             ASSERT((t & PGT_count_mask) >= 2);
-            ASSERT(get_gpfn_from_mfn(mfn) == SHARED_M2P_ENTRY); 
+            ASSERT(SHARED_M2P(get_gpfn_from_mfn(mfn)));
             return page;
         }
     }
@@ -469,7 +469,7 @@ static int audit(void)
         }
 
         /* Check the m2p entry */
-        if ( get_gpfn_from_mfn(mfn_x(mfn)) != SHARED_M2P_ENTRY )
+        if ( !SHARED_M2P(get_gpfn_from_mfn(mfn_x(mfn))) )
         {
            MEM_SHARING_DEBUG("mfn %lx shared, but wrong m2p entry (%lx)!\n",
                              mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2583,7 +2583,7 @@ void audit_p2m(struct domain *d,
             continue;
         }
 
-        if ( gfn == SHARED_M2P_ENTRY )
+        if ( SHARED_M2P(gfn) )
         {
             P2M_PRINTK("shared mfn (%lx) on domain page list!\n",
                     mfn);
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -1059,8 +1059,7 @@ long p2m_pt_audit_p2m(struct p2m_domain
                         {
                             m2pfn = get_gpfn_from_mfn(mfn+i1);
                             /* Allow shared M2Ps */
-                            if ( (m2pfn != (gfn + i1)) &&
-                                 (m2pfn != SHARED_M2P_ENTRY) )
+                            if ( (m2pfn != (gfn + i1)) && !SHARED_M2P(m2pfn) )
                             {
                                 pmbad++;
                                 P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx"
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -369,8 +369,8 @@ int paging_mfn_is_dirty(struct domain *d
 
     /* We /really/ mean PFN here, even for non-translated guests. */
     pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
-    /* Shared pages are always read-only; invalid pages can't be dirty. */
-    if ( unlikely(SHARED_M2P(pfn_x(pfn)) || !VALID_M2P(pfn_x(pfn))) )
+    /* Invalid pages can't be dirty. */
+    if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
         return 0;
 
     mfn = d->arch.paging.log_dirty.top;





* [PATCH v2 3/3] x86: use paging_mark_pfn_dirty()
  2017-12-20  9:37 ` [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
  2017-12-20  9:40   ` [PATCH v2 1/3] x86/shadow: widen reference count Jan Beulich
  2017-12-20  9:41   ` [PATCH v2 2/3] x86/mm: clean up SHARED_M2P{, _ENTRY} uses Jan Beulich
@ 2017-12-20  9:42   ` Jan Beulich
  2017-12-20  9:44     ` Paul Durrant
  2018-02-08 12:32     ` George Dunlap
  2 siblings, 2 replies; 30+ messages in thread
From: Jan Beulich @ 2017-12-20  9:42 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Paul Durrant

... in preference over paging_mark_dirty(), when the PFN is known
anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
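---
For context (not part of the patch): paging_mark_dirty() merely translates
the MFN back to a PFN via the M2P and then does the PFN-based marking, so
callers which already know the PFN can avoid a redundant lookup. A simplified
sketch of the relationship (not verbatim from paging.c):

    void paging_mark_dirty(struct domain *d, mfn_t gmfn)
    {
        pfn_t pfn;

        if ( !paging_mode_log_dirty(d) || !mfn_valid(gmfn) ||
             page_get_owner(mfn_to_page(gmfn)) != d )
            return;

        /* We /really/ mean PFN here, even for non-translated guests. */
        pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));

        paging_mark_pfn_dirty(d, pfn);
    }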

--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -219,14 +219,12 @@ static int modified_memory(struct domain
             page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
             if ( page )
             {
-                mfn_t gmfn = _mfn(page_to_mfn(page));
-
-                paging_mark_dirty(d, gmfn);
+                paging_mark_pfn_dirty(d, _pfn(pfn));
                 /*
                  * These are most probably not page tables any more
                  * don't take a long time and don't die either.
                  */
-                sh_remove_shadows(d, gmfn, 1, 0);
+                sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
                 put_page(page);
             }
         }
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1893,7 +1893,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
          */
         if ( npfec.write_access )
         {
-            paging_mark_dirty(currd, mfn);
+            paging_mark_pfn_dirty(currd, _pfn(gfn));
             /*
              * If p2m is really an altp2m, unlock here to avoid lock ordering
              * violation when the change below is propagated from host p2m.
@@ -2591,7 +2591,7 @@ static void *_hvm_map_guest_frame(unsign
         if ( unlikely(p2m_is_discard_write(p2mt)) )
             *writable = 0;
         else if ( !permanent )
-            paging_mark_dirty(d, _mfn(page_to_mfn(page)));
+            paging_mark_pfn_dirty(d, _pfn(gfn));
     }
 
     if ( !permanent )
@@ -3254,7 +3254,7 @@ static enum hvm_translation_result __hvm
                     memcpy(p, buf, count);
                 else
                     memset(p, 0, count);
-                paging_mark_dirty(v->domain, _mfn(page_to_mfn(page)));
+                paging_mark_pfn_dirty(v->domain, _pfn(gfn_x(gfn)));
             }
         }
         else
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -283,7 +283,7 @@ static int hvm_add_ioreq_gfn(
     rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
                                 _mfn(page_to_mfn(iorp->page)), 0);
     if ( rc == 0 )
-        paging_mark_dirty(d, _mfn(page_to_mfn(iorp->page)));
+        paging_mark_pfn_dirty(d, _pfn(iorp->gfn));
 
     return rc;
 }
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1221,7 +1221,7 @@ p2m_pod_demand_populate(struct p2m_domai
     for( i = 0; i < (1UL << order); i++ )
     {
         set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i);
-        paging_mark_dirty(d, mfn_add(mfn, i));
+        paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn_aligned) + i));
     }
 
     p2m->pod.entry_count -= (1UL << order);
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3742,8 +3742,7 @@ long do_mmu_update(
             }
 
             set_gpfn_from_mfn(mfn, gpfn);
-
-            paging_mark_dirty(pg_owner, _mfn(mfn));
+            paging_mark_pfn_dirty(pg_owner, _pfn(gpfn));
 
             put_page(page);
             break;





* Re: [PATCH v2 3/3] x86: use paging_mark_pfn_dirty()
  2017-12-20  9:42   ` [PATCH v2 3/3] x86: use paging_mark_pfn_dirty() Jan Beulich
@ 2017-12-20  9:44     ` Paul Durrant
  2018-02-08 12:32     ` George Dunlap
  1 sibling, 0 replies; 30+ messages in thread
From: Paul Durrant @ 2017-12-20  9:44 UTC (permalink / raw)
  To: 'Jan Beulich', xen-devel; +Cc: Andrew Cooper, George Dunlap

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 20 December 2017 09:42
> To: xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul Durrant
> <Paul.Durrant@citrix.com>; George Dunlap <George.Dunlap@citrix.com>
> Subject: [PATCH v2 3/3] x86: use paging_mark_pfn_dirty()
> 
> ... in preference over paging_mark_dirty(), when the PFN is known
> anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Tim Deegan <tim@xen.org>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> 
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -219,14 +219,12 @@ static int modified_memory(struct domain
>              page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
>              if ( page )
>              {
> -                mfn_t gmfn = _mfn(page_to_mfn(page));
> -
> -                paging_mark_dirty(d, gmfn);
> +                paging_mark_pfn_dirty(d, _pfn(pfn));
>                  /*
>                   * These are most probably not page tables any more
>                   * don't take a long time and don't die either.
>                   */
> -                sh_remove_shadows(d, gmfn, 1, 0);
> +                sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
>                  put_page(page);
>              }
>          }
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1893,7 +1893,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
>           */
>          if ( npfec.write_access )
>          {
> -            paging_mark_dirty(currd, mfn);
> +            paging_mark_pfn_dirty(currd, _pfn(gfn));
>              /*
>               * If p2m is really an altp2m, unlock here to avoid lock ordering
>               * violation when the change below is propagated from host p2m.
> @@ -2591,7 +2591,7 @@ static void *_hvm_map_guest_frame(unsign
>          if ( unlikely(p2m_is_discard_write(p2mt)) )
>              *writable = 0;
>          else if ( !permanent )
> -            paging_mark_dirty(d, _mfn(page_to_mfn(page)));
> +            paging_mark_pfn_dirty(d, _pfn(gfn));
>      }
> 
>      if ( !permanent )
> @@ -3254,7 +3254,7 @@ static enum hvm_translation_result __hvm
>                      memcpy(p, buf, count);
>                  else
>                      memset(p, 0, count);
> -                paging_mark_dirty(v->domain, _mfn(page_to_mfn(page)));
> +                paging_mark_pfn_dirty(v->domain, _pfn(gfn_x(gfn)));
>              }
>          }
>          else
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -283,7 +283,7 @@ static int hvm_add_ioreq_gfn(
>      rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
>                                  _mfn(page_to_mfn(iorp->page)), 0);
>      if ( rc == 0 )
> -        paging_mark_dirty(d, _mfn(page_to_mfn(iorp->page)));
> +        paging_mark_pfn_dirty(d, _pfn(iorp->gfn));
> 
>      return rc;
>  }
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1221,7 +1221,7 @@ p2m_pod_demand_populate(struct p2m_domai
>      for( i = 0; i < (1UL << order); i++ )
>      {
>          set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i);
> -        paging_mark_dirty(d, mfn_add(mfn, i));
> +        paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn_aligned) + i));
>      }
> 
>      p2m->pod.entry_count -= (1UL << order);
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3742,8 +3742,7 @@ long do_mmu_update(
>              }
> 
>              set_gpfn_from_mfn(mfn, gpfn);
> -
> -            paging_mark_dirty(pg_owner, _mfn(mfn));
> +            paging_mark_pfn_dirty(pg_owner, _pfn(gpfn));
> 
>              put_page(page);
>              break;
> 
> 



* Ping: [PATCH v2 0/3] XSA-248...251 follow-up
  2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
                   ` (6 preceding siblings ...)
  2017-12-20  9:37 ` [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
@ 2018-02-07 15:27 ` Jan Beulich
  2018-02-08 12:34   ` George Dunlap
  7 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2018-02-07 15:27 UTC (permalink / raw)
  To: George Dunlap; +Cc: Andrew Cooper, xen-devel

>>> On 20.12.17 at 10:37,  wrote:
> The parts of this series aren't really dependent upon one another,
> they belong together solely because of their origin.
> 
> 1: x86/shadow: widen reference count
> 2: x86/mm: clean up SHARED_M2P{,_ENTRY} uses
> 3: x86: use paging_mark_pfn_dirty()

George,

any chance to get an ack or otherwise (or an indication that they
can go in with just Andrew's ack, which was provided via IRC) for
the latter two?

Thanks, Jan



* Re: [PATCH v2 2/3] x86/mm: clean up SHARED_M2P{, _ENTRY} uses
  2017-12-20  9:41   ` [PATCH v2 2/3] x86/mm: clean up SHARED_M2P{, _ENTRY} uses Jan Beulich
@ 2018-02-08 12:31     ` George Dunlap
  0 siblings, 0 replies; 30+ messages in thread
From: George Dunlap @ 2018-02-08 12:31 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Tamas K Lengyel, Andrew Cooper

On Wed, Dec 20, 2017 at 9:41 AM, Jan Beulich <JBeulich@suse.com> wrote:
> Stop open-coding SHARED_M2P() and drop a pointless use of it from
> paging_mfn_is_dirty() (!VALID_M2P() is a superset of SHARED_M2P()) and
> another one from free_page_type() (prior assertions render this
> redundant).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Tim Deegan <tim@xen.org>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH v2 3/3] x86: use paging_mark_pfn_dirty()
  2017-12-20  9:42   ` [PATCH v2 3/3] x86: use paging_mark_pfn_dirty() Jan Beulich
  2017-12-20  9:44     ` Paul Durrant
@ 2018-02-08 12:32     ` George Dunlap
  1 sibling, 0 replies; 30+ messages in thread
From: George Dunlap @ 2018-02-08 12:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Paul Durrant, Andrew Cooper

On Wed, Dec 20, 2017 at 9:42 AM, Jan Beulich <JBeulich@suse.com> wrote:
> ... in preference over paging_mark_dirty(), when the PFN is known
> anyway.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Tim Deegan <tim@xen.org>

Acked-by: George Dunlap <george.dunlap@citrix.com>


* Re: Ping: [PATCH v2 0/3] XSA-248...251 follow-up
  2018-02-07 15:27 ` Ping: [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
@ 2018-02-08 12:34   ` George Dunlap
  2018-02-13  7:44     ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: George Dunlap @ 2018-02-08 12:34 UTC (permalink / raw)
  To: Jan Beulich, George Dunlap; +Cc: Andrew Cooper, xen-devel

On 02/07/2018 03:27 PM, Jan Beulich wrote:
>>>> On 20.12.17 at 10:37,  wrote:
>> The parts of this series aren't really dependent upon one another,
>> they belong together solely because of their origin.
>>
>> 1: x86/shadow: widen reference count
>> 2: x86/mm: clean up SHARED_M2P{,_ENTRY} uses
>> 3: x86: use paging_mark_pfn_dirty()
> 
> George,
> 
> any chance to get an ack or otherwise (or an indication that they
> can go in with just Andrew's ack, which was provided via IRC) for
> the latter two?

Due to some quirks in my mail setup right now I can't respond directly
(or rather, I have just done so but I know they'll never get to you), so:

#2: Reviewed-by: George Dunlap <george.dunlap@citrix.com>
#3: Acked-by: George Dunlap <george.dunlap@citrix.com>

Sorry for the delay.

 -George


* Re: Ping: [PATCH v2 0/3] XSA-248...251 follow-up
  2018-02-08 12:34   ` George Dunlap
@ 2018-02-13  7:44     ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2018-02-13  7:44 UTC (permalink / raw)
  To: George Dunlap, George Dunlap; +Cc: Andrew Cooper, xen-devel

>>> On 08.02.18 at 13:34, <george.dunlap@citrix.com> wrote:
> On 02/07/2018 03:27 PM, Jan Beulich wrote:
>>>>> On 20.12.17 at 10:37,  wrote:
>>> The parts of this series aren't really dependent upon one another,
>>> they belong together solely because of their origin.
>>>
>>> 1: x86/shadow: widen reference count
>>> 2: x86/mm: clean up SHARED_M2P{,_ENTRY} uses
>>> 3: x86: use paging_mark_pfn_dirty()
>> 
>> George,
>> 
>> any chance to get an ack or otherwise (or an indication that they
>> can go in with just Andrew's ack, which was provided via IRC) for
>> the latter two?
> 
> Due to some quirks in my mail setup right now I can't respond directly
> (or rather, I have just done so but I know they'll never get to you), so:
> 
> #2: Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> #3: Acked-by: George Dunlap <george.dunlap@citrix.com>

Thanks. The other two mails (apparently sent a few minutes before
this one) arrived fine though.

Jan


