xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH v6 0/5] x86/mem_sharing: assorted fixes
@ 2019-07-17 19:33 Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released Tamas K Lengyel
                   ` (4 more replies)
  0 siblings, 5 replies; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-17 19:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Tamas K Lengyel

This is an assorted series of fixes to the memory sharing subsystem.

Patches 1-2: fix general brokenness of the system
Patches 3-5: nice-to-haves & cleanups; these could be applied independently of
             the rest of the patches in the series

Tamas K Lengyel (5):
  x86/mem_sharing: reorder when pages are unlocked and released
  x86/mem_sharing: copy a page_lock version to be internal to memshr
  x86/mem_sharing: enable mem_share audit mode only in debug builds
  x86/mem_sharing: compile mem_sharing subsystem only when kconfig is
    enabled
  x86/mem_sharing: style cleanup

 xen/arch/x86/Kconfig              |   6 +-
 xen/arch/x86/domain.c             |   2 +
 xen/arch/x86/domctl.c             |   2 +
 xen/arch/x86/mm/Makefile          |   2 +-
 xen/arch/x86/mm/mem_sharing.c     | 355 +++++++++++++++++-------------
 xen/arch/x86/x86_64/compat/mm.c   |   2 +
 xen/arch/x86/x86_64/mm.c          |   2 +
 xen/common/Kconfig                |   3 -
 xen/common/domain.c               |   6 +-
 xen/common/grant_table.c          |   2 +-
 xen/common/memory.c               |   2 +-
 xen/common/vm_event.c             |   6 +-
 xen/include/asm-x86/mem_sharing.h |  32 +++
 xen/include/asm-x86/mm.h          |  18 +-
 xen/include/xen/mm.h              |   2 +-
 xen/include/xen/sched.h           |   2 +-
 xen/include/xsm/dummy.h           |   2 +-
 xen/include/xsm/xsm.h             |   4 +-
 xen/xsm/dummy.c                   |   2 +-
 xen/xsm/flask/hooks.c             |   4 +-
 20 files changed, 266 insertions(+), 190 deletions(-)

-- 
2.20.1


* [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-17 19:33 [Xen-devel] [PATCH v6 0/5] x86/mem_sharing: assorted fixes Tamas K Lengyel
@ 2019-07-17 19:33 ` Tamas K Lengyel
  2019-07-18 10:46   ` Jan Beulich
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 2/5] x86/mem_sharing: copy a page_lock version to be internal to memshr Tamas K Lengyel
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-17 19:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Jan Beulich, Roger Pau Monne

Calling _put_page_type while also holding the page_lock
for that page can cause a deadlock.

The comment being dropped is incorrect since it's now out-of-date.
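
A minimal sketch of the resulting order in page_make_private(), simplified for
illustration only:

    /* type checks are done while the page lock is still held */
    mem_sharing_page_unlock(page);   /* drop PGT_locked first ... */
    put_page_and_type(page);         /* ... then drop the final type count */

i.e. the page lock is always released before the type reference is dropped,
never the other way around.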

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
---
v6: rebase on staging
v5: BUG_ON early before releasing references
---
 xen/arch/x86/mm/mem_sharing.c | 40 +++++++++++------------------------
 1 file changed, 12 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 58d9157fa8..6c033d1488 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -648,10 +648,6 @@ static int page_make_private(struct domain *d, struct page_info *page)
         return -EBUSY;
     }
 
-    /* We can only change the type if count is one */
-    /* Because we are locking pages individually, we need to drop
-     * the lock here, while the page is typed. We cannot risk the 
-     * race of page_unlock and then put_page_type. */
     expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
     if ( page->u.inuse.type_info != expected_type )
     {
@@ -660,12 +656,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
         return -EEXIST;
     }
 
+    mem_sharing_page_unlock(page);
+
     /* Drop the final typecount */
     put_page_and_type(page);
 
-    /* Now that we've dropped the type, we can unlock */
-    mem_sharing_page_unlock(page);
-
     /* Change the owner */
     ASSERT(page_get_owner(page) == dom_cow);
     page_set_owner(page, d);
@@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     p2m_type_t smfn_type, cmfn_type;
     struct two_gfns tg;
     struct rmap_iterator ri;
+    unsigned long put_count = 0;
 
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
                  cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
@@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
         goto err_out;
     }
 
-    /* Acquire an extra reference, for the freeing below to be safe. */
-    if ( !get_page(cpage, dom_cow) )
-    {
-        ret = -EOVERFLOW;
-        mem_sharing_page_unlock(secondpg);
-        mem_sharing_page_unlock(firstpg);
-        goto err_out;
-    }
-
     /* Merge the lists together */
     rmap_seed_iterator(cpage, &ri);
     while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
@@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
          * Don't change the type of rmap for the client page. */
         rmap_del(gfn, cpage, 0);
         rmap_add(gfn, spage);
-        put_page_and_type(cpage);
+        put_count++;
         d = get_domain_by_id(gfn->domain);
         BUG_ON(!d);
         BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
         put_domain(d);
     }
     ASSERT(list_empty(&cpage->sharing->gfns));
+    BUG_ON(!put_count);
 
     /* Clear the rest of the shared state */
     page_sharing_dispose(cpage);
@@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
 
     /* Free the client page */
     put_page_alloc_ref(cpage);
-    put_page(cpage);
+
+    while ( put_count-- )
+        put_page_and_type(cpage);
 
     /* We managed to free a domain page. */
     atomic_dec(&nr_shared_mfns);
@@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
     {
         if ( !last_gfn )
             mem_sharing_gfn_destroy(page, d, gfn_info);
-        put_page_and_type(page);
+
         mem_sharing_page_unlock(page);
+
         if ( last_gfn )
-        {
-            if ( !get_page(page, dom_cow) )
-            {
-                put_gfn(d, gfn);
-                domain_crash(d);
-                return -EOVERFLOW;
-            }
             put_page_alloc_ref(page);
-            put_page(page);
-        }
+
+        put_page_and_type(page);
         put_gfn(d, gfn);
 
         return 0;
-- 
2.20.1


* [Xen-devel] [PATCH v6 2/5] x86/mem_sharing: copy a page_lock version to be internal to memshr
  2019-07-17 19:33 [Xen-devel] [PATCH v6 0/5] x86/mem_sharing: assorted fixes Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released Tamas K Lengyel
@ 2019-07-17 19:33 ` Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 3/5] x86/mem_sharing: enable mem_share audit mode only in debug builds Tamas K Lengyel
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-17 19:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Jan Beulich, Roger Pau Monne

Patch cf4b30dca0a "Add debug code to detect illegal page_lock and put_page_type
ordering" added extra sanity checking to page_lock/page_unlock for debug builds
with the assumption that no hypervisor path ever locks two pages at once.

This assumption doesn't hold during memory sharing, so we copy a version of
page_lock/unlock to be used exclusively in the memory sharing subsystem,
without the sanity checks.
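
For context, the nesting that the debug check objects to looks roughly like
this (heavily simplified from share_pages(); the real code orders the two
locks to avoid deadlock):

    firstpg  = __grab_shared_page(...);  /* takes the first page lock */
    secondpg = __grab_shared_page(...);  /* second lock, first still held */
    ...
    mem_sharing_page_unlock(secondpg);
    mem_sharing_page_unlock(firstpg);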

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
---
v6: adjust comment according to Jan's suggestion
---
 xen/arch/x86/mm/mem_sharing.c | 54 ++++++++++++++++++++++++++++++++---
 xen/include/asm-x86/mm.h      | 15 ++--------
 2 files changed, 53 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 6c033d1488..a5fe89e339 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -112,13 +112,59 @@ static inline void page_sharing_dispose(struct page_info *page)
 
 #endif /* MEM_SHARING_AUDIT */
 
-static inline int mem_sharing_page_lock(struct page_info *pg)
+/*
+ * Private implementations of page_lock/unlock to bypass PV-only
+ * sanity checks not applicable to mem-sharing.
+ *
+ * _page_lock is used in memory sharing to protect addition (share) and removal
+ * (unshare) of (gfn,domain) tuples to a list of gfn's that the shared page is
+ * currently backing.
+ * Nesting may happen when sharing (and locking) two pages.
+ * Deadlock is avoided by locking pages in increasing order.
+ * All memory sharing code paths take the p2m lock of the affected gfn before
+ * taking the lock for the underlying page. We enforce ordering between page_lock
+ * and p2m_lock using an mm-locks.h construct.
+ *
+ * TODO: Investigate if PGT_validated is necessary.
+ */
+static inline bool _page_lock(struct page_info *page)
 {
-    int rc;
+    unsigned long x, nx;
+
+    do {
+        while ( (x = page->u.inuse.type_info) & PGT_locked )
+            cpu_relax();
+        nx = x + (1 | PGT_locked);
+        if ( !(x & PGT_validated) ||
+             !(x & PGT_count_mask) ||
+             !(nx & PGT_count_mask) )
+            return false;
+    } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
+
+    return true;
+}
+
+static inline void _page_unlock(struct page_info *page)
+{
+    unsigned long x, nx, y = page->u.inuse.type_info;
+
+    do {
+        x = y;
+        ASSERT((x & PGT_count_mask) && (x & PGT_locked));
+
+        nx = x - (1 | PGT_locked);
+        /* We must not drop the last reference here. */
+        ASSERT(nx & PGT_count_mask);
+    } while ( (y = cmpxchg(&page->u.inuse.type_info, x, nx)) != x );
+}
+
+static inline bool mem_sharing_page_lock(struct page_info *pg)
+{
+    bool rc;
     pg_lock_data_t *pld = &(this_cpu(__pld));
 
     page_sharing_mm_pre_lock();
-    rc = page_lock(pg);
+    rc = _page_lock(pg);
     if ( rc )
     {
         preempt_disable();
@@ -135,7 +181,7 @@ static inline void mem_sharing_page_unlock(struct page_info *pg)
     page_sharing_mm_unlock(pld->mm_unlock_level, 
                            &pld->recurse_count);
     preempt_enable();
-    page_unlock(pg);
+    _page_unlock(pg);
 }
 
 static inline shr_handle_t get_next_handle(void)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 6c14635270..087ad97351 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -356,24 +356,15 @@ struct platform_bad_page {
 const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
 
 /* Per page locks:
- * page_lock() is used for two purposes: pte serialization, and memory sharing.
+ * page_lock() is used for pte serialization.
  *
  * All users of page lock for pte serialization live in mm.c, use it
  * to lock a page table page during pte updates, do not take other locks within
  * the critical section delimited by page_lock/unlock, and perform no
  * nesting.
  *
- * All users of page lock for memory sharing live in mm/mem_sharing.c. Page_lock
- * is used in memory sharing to protect addition (share) and removal (unshare)
- * of (gfn,domain) tupples to a list of gfn's that the shared page is currently
- * backing. Nesting may happen when sharing (and locking) two pages -- deadlock
- * is avoided by locking pages in increasing order.
- * All memory sharing code paths take the p2m lock of the affected gfn before
- * taking the lock for the underlying page. We enforce ordering between page_lock
- * and p2m_lock using an mm-locks.h construct.
- *
- * These two users (pte serialization and memory sharing) do not collide, since
- * sharing is only supported for hvm guests, which do not perform pv pte updates.
+ * The use of PGT_locked in mem_sharing does not collide, since mem_sharing is
+ * only supported for hvm guests, which do not have PV PTEs updated.
  */
 int page_lock(struct page_info *page);
 void page_unlock(struct page_info *page);
-- 
2.20.1


* [Xen-devel] [PATCH v6 3/5] x86/mem_sharing: enable mem_share audit mode only in debug builds
  2019-07-17 19:33 [Xen-devel] [PATCH v6 0/5] x86/mem_sharing: assorted fixes Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 2/5] x86/mem_sharing: copy a page_lock version to be internal to memshr Tamas K Lengyel
@ 2019-07-17 19:33 ` Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 4/5] x86/mem_sharing: compile mem_sharing subsystem only when kconfig is enabled Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup Tamas K Lengyel
  4 siblings, 0 replies; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-17 19:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Tamas K Lengyel, Wei Liu, Jan Beulich, Roger Pau Monne

Improves performance for release builds.
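
For reference, the audit machinery in mem_sharing.c is already guarded along
the lines of:

    #if MEM_SHARING_AUDIT
    /* audit list handling, audit(), etc. */
    #else
    /* lighter, non-audit variants */
    #endif

so defining MEM_SHARING_AUDIT as 0 on release (NDEBUG) builds compiles that
code out.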

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
---
v6: patch ping
---
 xen/include/asm-x86/mem_sharing.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 9f9f7e93e3..afd0c17292 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -25,7 +25,11 @@
 #include <public/memory.h>
 
 /* Auditing of memory sharing code? */
+#ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
+#else
+#define MEM_SHARING_AUDIT 0
+#endif
 
 typedef uint64_t shr_handle_t; 
 
-- 
2.20.1


* [Xen-devel] [PATCH v6 4/5] x86/mem_sharing: compile mem_sharing subsystem only when kconfig is enabled
  2019-07-17 19:33 [Xen-devel] [PATCH v6 0/5] x86/mem_sharing: assorted fixes Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 3/5] x86/mem_sharing: enable mem_share audit mode only in debug builds Tamas K Lengyel
@ 2019-07-17 19:33 ` Tamas K Lengyel
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup Tamas K Lengyel
  4 siblings, 0 replies; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-17 19:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Wei Liu, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Stefano Stabellini, Jan Beulich, Daniel De Graaf,
	Roger Pau Monne

Disable it by default as it is only an experimental subsystem.
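
A minimal sketch of a xen/.config fragment with the subsystem turned back on
(the prompt is only visible with expert mode enabled, per the Kconfig hunk
below):

    CONFIG_HVM=y
    CONFIG_MEM_SHARING=y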

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
v6: rebase on staging
---
 xen/arch/x86/Kconfig              |  6 +++++-
 xen/arch/x86/domain.c             |  2 ++
 xen/arch/x86/domctl.c             |  2 ++
 xen/arch/x86/mm/Makefile          |  2 +-
 xen/arch/x86/x86_64/compat/mm.c   |  2 ++
 xen/arch/x86/x86_64/mm.c          |  2 ++
 xen/common/Kconfig                |  3 ---
 xen/common/domain.c               |  6 +++---
 xen/common/grant_table.c          |  2 +-
 xen/common/memory.c               |  2 +-
 xen/common/vm_event.c             |  6 +++---
 xen/include/asm-x86/mem_sharing.h | 28 ++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h          |  3 +++
 xen/include/xen/mm.h              |  2 +-
 xen/include/xen/sched.h           |  2 +-
 xen/include/xsm/dummy.h           |  2 +-
 xen/include/xsm/xsm.h             |  4 ++--
 xen/xsm/dummy.c                   |  2 +-
 xen/xsm/flask/hooks.c             |  4 ++--
 19 files changed, 61 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index f502d765ba..288dc6c042 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -18,7 +18,6 @@ config X86
 	select HAS_KEXEC
 	select MEM_ACCESS_ALWAYS_ON
 	select HAS_MEM_PAGING
-	select HAS_MEM_SHARING
 	select HAS_NS16550
 	select HAS_PASSTHROUGH
 	select HAS_PCI
@@ -199,6 +198,11 @@ config PV_SHIM_EXCLUSIVE
 	  firmware, and will not function correctly in other scenarios.
 
 	  If unsure, say N.
+
+config MEM_SHARING
+	bool "Xen memory sharing support" if EXPERT = "y"
+	depends on HVM
+
 endmenu
 
 source "common/Kconfig"
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index e791d86892..ea55160887 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2095,6 +2095,7 @@ int domain_relinquish_resources(struct domain *d)
             d->arch.auto_unmask = 0;
         }
 
+#ifdef CONFIG_MEM_SHARING
     PROGRESS(shared):
 
         if ( is_hvm_domain(d) )
@@ -2105,6 +2106,7 @@ int domain_relinquish_resources(struct domain *d)
             if ( ret )
                 return ret;
         }
+#endif
 
         spin_lock(&d->page_alloc_lock);
         page_list_splice(&d->arch.relmem_list, &d->page_list);
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index c827790202..2d45e5b8a8 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1236,9 +1236,11 @@ long arch_do_domctl(
         break;
     }
 
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_mem_sharing_op:
         ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
         break;
+#endif
 
 #if P2M_AUDIT && defined(CONFIG_HVM)
     case XEN_DOMCTL_audit_p2m:
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 5a17646f98..5010a29d6c 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -6,7 +6,7 @@ obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_SHADOW_PAGING) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-y += mem_paging.o
-obj-y += mem_sharing.o
+obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-y += p2m.o p2m-pt.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o
 obj-y += paging.o
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 32410ed273..d4c6be3032 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -152,8 +152,10 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
+#ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
         return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 899b883b2d..1919cae18b 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -993,8 +993,10 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
+#ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
         return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 912630a4fb..16829f6274 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -48,9 +48,6 @@ config MEM_ACCESS
 config HAS_MEM_PAGING
 	bool
 
-config HAS_MEM_SHARING
-	bool
-
 config HAS_PDX
 	bool
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8a5fa8662b..55aa759b75 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -74,7 +74,7 @@ integer_param("hardware_dom", hardware_domid);
 /* Private domain structs for DOMID_XEN, DOMID_IO, etc. */
 struct domain *__read_mostly dom_xen;
 struct domain *__read_mostly dom_io;
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 struct domain *__read_mostly dom_cow;
 #endif
 
@@ -549,7 +549,7 @@ void __init setup_system_domains(void)
     if ( IS_ERR(dom_io) )
         panic("Failed to create d[IO]: %ld\n", PTR_ERR(dom_io));
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     /*
      * Initialise our COW domain.
      * This domain owns sharable pages.
@@ -966,7 +966,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     xfree(d->vm_event_paging);
 #endif
     xfree(d->vm_event_monitor);
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     xfree(d->vm_event_share);
 #endif
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index b6b45c21e1..97695a221a 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3798,7 +3798,7 @@ void grant_table_init_vcpu(struct vcpu *v)
     v->maptrack_tail = MAPTRACK_TAIL;
 }
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status)
 {
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 30d210fc08..d9b35a608c 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1676,7 +1676,7 @@ int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
         return -EAGAIN;
     }
 #endif
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
     {
         if ( page )
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index e8726805e7..2a1c87e44b 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -531,7 +531,7 @@ static void monitor_notification(struct vcpu *v, unsigned int port)
     vm_event_resume(v->domain, v->domain->vm_event_monitor);
 }
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
@@ -561,7 +561,7 @@ void vm_event_cleanup(struct domain *d)
         destroy_waitqueue_head(&d->vm_event_monitor->wq);
         (void)vm_event_disable(d, &d->vm_event_monitor);
     }
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     if ( vm_event_check_ring(d->vm_event_share) )
     {
         destroy_waitqueue_head(&d->vm_event_share->wq);
@@ -703,7 +703,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
     }
     break;
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_VM_EVENT_OP_SHARING:
     {
         rc = -EINVAL;
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index afd0c17292..db22468744 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -24,6 +24,8 @@
 #include <public/domctl.h>
 #include <public/memory.h>
 
+#ifdef CONFIG_MEM_SHARING
+
 /* Auditing of memory sharing code? */
 #ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
@@ -99,4 +101,30 @@ int mem_sharing_domctl(struct domain *d,
  */
 int relinquish_shared_pages(struct domain *d);
 
+#else
+
+static inline unsigned int mem_sharing_get_nr_saved_mfns(void)
+{
+    return 0;
+}
+static inline unsigned int mem_sharing_get_nr_shared_mfns(void)
+{
+    return 0;
+}
+static inline int mem_sharing_unshare_page(struct domain *d,
+                                           unsigned long gfn,
+                                           uint16_t flags)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
+static inline int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
+                              bool allow_sleep)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
+
+#endif
+
 #endif /* __MEM_SHARING_H__ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 087ad97351..35c2527265 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -127,6 +127,8 @@ struct page_info
         /* For non-pinnable single-page shadows, a higher entry that points
          * at us. */
         paddr_t up;
+
+#ifdef CONFIG_MEM_SHARING
         /* For shared/sharable pages, we use a doubly-linked list
          * of all the {pfn,domain} pairs that map this page. We also include
          * an opaque handle, which is effectively a version, so that clients
@@ -134,6 +136,7 @@ struct page_info
          * This list is allocated and freed when a page is shared/unshared.
          */
         struct page_sharing_info *sharing;
+#endif
     };
 
     /* Reference count and various PGC_xxx flags and fields. */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 47c8412b3a..977e45aae7 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -275,7 +275,7 @@ struct npfec {
 
 /* Private domain structs for DOMID_XEN, DOMID_IO, etc. */
 extern struct domain *dom_xen, *dom_io;
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 extern struct domain *dom_cow;
 #else
 # define dom_cow NULL
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 97a3ab55aa..b40c8fd138 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -459,7 +459,7 @@ struct domain
     /* Various vm_events */
 
     /* Memory sharing support */
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     struct vm_event_domain *vm_event_share;
 #endif
     /* Memory paging support */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 01d2814fed..ef52bb1764 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -597,7 +597,7 @@ static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b6141f6ab1..e22d6160b5 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -150,7 +150,7 @@ struct xsm_operations {
     int (*mem_paging) (struct domain *d);
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     int (*mem_sharing) (struct domain *d);
 #endif
 
@@ -597,7 +597,7 @@ static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
 {
     return xsm_ops->mem_sharing(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index c9a566f2b5..5705e52791 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -128,7 +128,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, mem_paging);
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     set_to_dummy_if_null(ops, mem_sharing);
 #endif
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a7d690ac3c..791c1f66af 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1262,7 +1262,7 @@ static int flask_mem_paging(struct domain *d)
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
 static int flask_mem_sharing(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_SHARING);
@@ -1829,7 +1829,7 @@ static struct xsm_operations flask_ops = {
     .mem_paging = flask_mem_paging,
 #endif
 
-#ifdef CONFIG_HAS_MEM_SHARING
+#ifdef CONFIG_MEM_SHARING
     .mem_sharing = flask_mem_sharing,
 #endif
 
-- 
2.20.1


* [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-17 19:33 [Xen-devel] [PATCH v6 0/5] x86/mem_sharing: assorted fixes Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 4/5] x86/mem_sharing: compile mem_sharing subsystem only when kconfig is enabled Tamas K Lengyel
@ 2019-07-17 19:33 ` Tamas K Lengyel
  2019-07-18 10:55   ` Jan Beulich
  4 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-17 19:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Tamas K Lengyel

No functional change

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/mm/mem_sharing.c | 265 ++++++++++++++++++----------------
 1 file changed, 138 insertions(+), 127 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a5fe89e339..aaf8c2f2c8 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -42,7 +42,8 @@
 
 static shr_handle_t next_handle = 1;
 
-typedef struct pg_lock_data {
+typedef struct pg_lock_data
+{
     int mm_unlock_level;
     unsigned short recurse_count;
 } pg_lock_data_t;
@@ -88,8 +89,8 @@ static inline void page_sharing_dispose(struct page_info *page)
 {
     /* Unlikely given our thresholds, but we should be careful. */
     if ( unlikely(RMAP_USES_HASHTAB(page)) )
-        free_xenheap_pages(page->sharing->hash_table.bucket, 
-                            RMAP_HASHTAB_ORDER);
+        free_xenheap_pages(page->sharing->hash_table.bucket,
+                           RMAP_HASHTAB_ORDER);
 
     spin_lock(&shr_audit_lock);
     list_del_rcu(&page->sharing->entry);
@@ -105,8 +106,8 @@ static inline void page_sharing_dispose(struct page_info *page)
 {
     /* Unlikely given our thresholds, but we should be careful. */
     if ( unlikely(RMAP_USES_HASHTAB(page)) )
-        free_xenheap_pages(page->sharing->hash_table.bucket, 
-                            RMAP_HASHTAB_ORDER);
+        free_xenheap_pages(page->sharing->hash_table.bucket,
+                           RMAP_HASHTAB_ORDER);
     xfree(page->sharing);
 }
 
@@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
             cpu_relax();
         nx = x + (1 | PGT_locked);
         if ( !(x & PGT_validated) ||
-             !(x & PGT_count_mask) ||
-             !(nx & PGT_count_mask) )
+                !(x & PGT_count_mask) ||
+                !(nx & PGT_count_mask) )
             return false;
     } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
 
@@ -168,7 +169,7 @@ static inline bool mem_sharing_page_lock(struct page_info *pg)
     if ( rc )
     {
         preempt_disable();
-        page_sharing_mm_post_lock(&pld->mm_unlock_level, 
+        page_sharing_mm_post_lock(&pld->mm_unlock_level,
                                   &pld->recurse_count);
     }
     return rc;
@@ -178,7 +179,7 @@ static inline void mem_sharing_page_unlock(struct page_info *pg)
 {
     pg_lock_data_t *pld = &(this_cpu(__pld));
 
-    page_sharing_mm_unlock(pld->mm_unlock_level, 
+    page_sharing_mm_unlock(pld->mm_unlock_level,
                            &pld->recurse_count);
     preempt_enable();
     _page_unlock(pg);
@@ -186,19 +187,18 @@ static inline void mem_sharing_page_unlock(struct page_info *pg)
 
 static inline shr_handle_t get_next_handle(void)
 {
-    /* Get the next handle get_page style */ 
+    /* Get the next handle get_page style */
     uint64_t x, y = next_handle;
     do {
         x = y;
-    }
-    while ( (y = cmpxchg(&next_handle, x, x + 1)) != x );
+    } while ( (y = cmpxchg(&next_handle, x, x + 1)) != x );
     return x + 1;
 }
 
 #define mem_sharing_enabled(d) \
     (is_hvm_domain(d) && (d)->arch.hvm.mem_sharing_enabled)
 
-static atomic_t nr_saved_mfns   = ATOMIC_INIT(0); 
+static atomic_t nr_saved_mfns   = ATOMIC_INIT(0);
 static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
 
 /** Reverse map **/
@@ -210,7 +210,7 @@ static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
 typedef struct gfn_info
 {
     unsigned long gfn;
-    domid_t domain; 
+    domid_t domain;
     struct list_head list;
 } gfn_info_t;
 
@@ -225,7 +225,7 @@ rmap_init(struct page_info *page)
 #define HASH(domain, gfn)       \
     (((gfn) + (domain)) % RMAP_HASHTAB_SIZE)
 
-/* Conversions. Tuned by the thresholds. Should only happen twice 
+/* Conversions. Tuned by the thresholds. Should only happen twice
  * (once each) during the lifetime of a shared page */
 static inline int
 rmap_list_to_hash_table(struct page_info *page)
@@ -288,13 +288,13 @@ rmap_count(struct page_info *pg)
 }
 
 /* The page type count is always decreased after removing from the rmap.
- * Use a convert flag to avoid mutating the rmap if in the middle of an 
+ * Use a convert flag to avoid mutating the rmap if in the middle of an
  * iterator, or if the page will be soon destroyed anyways. */
 static inline void
 rmap_del(gfn_info_t *gfn_info, struct page_info *page, int convert)
 {
     if ( RMAP_USES_HASHTAB(page) && convert &&
-         (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )
+            (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )
         rmap_hash_table_to_list(page);
 
     /* Regardless of rmap type, same removal operation */
@@ -308,30 +308,30 @@ rmap_add(gfn_info_t *gfn_info, struct page_info *page)
     struct list_head *head;
 
     if ( !RMAP_USES_HASHTAB(page) &&
-         (rmap_count(page) >= RMAP_HEAVY_SHARED_PAGE) )
+            (rmap_count(page) >= RMAP_HEAVY_SHARED_PAGE) )
         /* The conversion may fail with ENOMEM. We'll be less efficient,
          * but no reason to panic. */
         (void)rmap_list_to_hash_table(page);
 
     head = (RMAP_USES_HASHTAB(page)) ?
-        page->sharing->hash_table.bucket + 
-                            HASH(gfn_info->domain, gfn_info->gfn) :
-        &page->sharing->gfns;
+           page->sharing->hash_table.bucket +
+           HASH(gfn_info->domain, gfn_info->gfn) :
+           &page->sharing->gfns;
 
     INIT_LIST_HEAD(&gfn_info->list);
     list_add(&gfn_info->list, head);
 }
 
 static inline gfn_info_t *
-rmap_retrieve(uint16_t domain_id, unsigned long gfn, 
-                            struct page_info *page)
+rmap_retrieve(uint16_t domain_id, unsigned long gfn,
+              struct page_info *page)
 {
     gfn_info_t *gfn_info;
     struct list_head *le, *head;
 
     head = (RMAP_USES_HASHTAB(page)) ?
-        page->sharing->hash_table.bucket + HASH(domain_id, gfn) :
-        &page->sharing->gfns;
+           page->sharing->hash_table.bucket + HASH(domain_id, gfn) :
+           &page->sharing->gfns;
 
     list_for_each(le, head)
     {
@@ -358,7 +358,8 @@ static inline int rmap_has_entries(struct page_info *page)
 
 /* The iterator hides the details of how the rmap is implemented. This
  * involves splitting the list_for_each_safe macro into two steps. */
-struct rmap_iterator {
+struct rmap_iterator
+{
     struct list_head *curr;
     struct list_head *next;
     unsigned int bucket;
@@ -368,9 +369,9 @@ static inline void
 rmap_seed_iterator(struct page_info *page, struct rmap_iterator *ri)
 {
     ri->curr = (RMAP_USES_HASHTAB(page)) ?
-                page->sharing->hash_table.bucket :
-                &page->sharing->gfns;
-    ri->next = ri->curr->next; 
+               page->sharing->hash_table.bucket :
+               &page->sharing->gfns;
+    ri->next = ri->curr->next;
     ri->bucket = 0;
 }
 
@@ -378,8 +379,8 @@ static inline gfn_info_t *
 rmap_iterate(struct page_info *page, struct rmap_iterator *ri)
 {
     struct list_head *head = (RMAP_USES_HASHTAB(page)) ?
-                page->sharing->hash_table.bucket + ri->bucket :
-                &page->sharing->gfns;
+                             page->sharing->hash_table.bucket + ri->bucket :
+                             &page->sharing->gfns;
 
 retry:
     if ( ri->next == head)
@@ -394,7 +395,8 @@ retry:
             ri->curr = head;
             ri->next = ri->curr->next;
             goto retry;
-        } else
+        }
+        else
             /* List exhausted */
             return NULL;
     }
@@ -406,13 +408,13 @@ retry:
 }
 
 static inline gfn_info_t *mem_sharing_gfn_alloc(struct page_info *page,
-                                                struct domain *d,
-                                                unsigned long gfn)
+        struct domain *d,
+        unsigned long gfn)
 {
     gfn_info_t *gfn_info = xmalloc(gfn_info_t);
 
     if ( gfn_info == NULL )
-        return NULL; 
+        return NULL;
 
     gfn_info->gfn = gfn;
     gfn_info->domain = d->domain_id;
@@ -426,8 +428,8 @@ static inline gfn_info_t *mem_sharing_gfn_alloc(struct page_info *page,
 }
 
 static inline void mem_sharing_gfn_destroy(struct page_info *page,
-                                           struct domain *d,
-                                           gfn_info_t *gfn_info)
+        struct domain *d,
+        gfn_info_t *gfn_info)
 {
     /* Decrement the number of pages. */
     atomic_dec(&d->shr_pages);
@@ -437,15 +439,15 @@ static inline void mem_sharing_gfn_destroy(struct page_info *page,
     xfree(gfn_info);
 }
 
-static struct page_info* mem_sharing_lookup(unsigned long mfn)
+static struct page_info *mem_sharing_lookup(unsigned long mfn)
 {
     if ( mfn_valid(_mfn(mfn)) )
     {
-        struct page_info* page = mfn_to_page(_mfn(mfn));
+        struct page_info *page = mfn_to_page(_mfn(mfn));
         if ( page_get_owner(page) == dom_cow )
         {
             /* Count has to be at least two, because we're called
-             * with the mfn locked (1) and this is supposed to be 
+             * with the mfn locked (1) and this is supposed to be
              * a shared page (1). */
             unsigned long t = read_atomic(&page->u.inuse.type_info);
             ASSERT((t & PGT_type_mask) == PGT_shared_page);
@@ -486,44 +488,44 @@ static int audit(void)
         /* If we can't lock it, it's definitely not a shared page */
         if ( !mem_sharing_page_lock(pg) )
         {
-           MEM_SHARING_DEBUG("mfn %lx in audit list, but cannot be locked (%lx)!\n",
+            MEM_SHARING_DEBUG("mfn %lx in audit list, but cannot be locked (%lx)!\n",
                               mfn_x(mfn), pg->u.inuse.type_info);
-           errors++;
-           continue;
+            errors++;
+            continue;
         }
 
-        /* Check if the MFN has correct type, owner and handle. */ 
+        /* Check if the MFN has correct type, owner and handle. */
         if ( (pg->u.inuse.type_info & PGT_type_mask) != PGT_shared_page )
         {
-           MEM_SHARING_DEBUG("mfn %lx in audit list, but not PGT_shared_page (%lx)!\n",
+            MEM_SHARING_DEBUG("mfn %lx in audit list, but not PGT_shared_page (%lx)!\n",
                               mfn_x(mfn), pg->u.inuse.type_info & PGT_type_mask);
-           errors++;
-           continue;
+            errors++;
+            continue;
         }
 
         /* Check the page owner. */
         if ( page_get_owner(pg) != dom_cow )
         {
-           MEM_SHARING_DEBUG("mfn %lx shared, but wrong owner (%hu)!\n",
-                             mfn_x(mfn), page_get_owner(pg)->domain_id);
-           errors++;
+            MEM_SHARING_DEBUG("mfn %lx shared, but wrong owner (%hu)!\n",
+                              mfn_x(mfn), page_get_owner(pg)->domain_id);
+            errors++;
         }
 
         /* Check the m2p entry */
         if ( !SHARED_M2P(get_gpfn_from_mfn(mfn_x(mfn))) )
         {
-           MEM_SHARING_DEBUG("mfn %lx shared, but wrong m2p entry (%lx)!\n",
-                             mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
-           errors++;
+            MEM_SHARING_DEBUG("mfn %lx shared, but wrong m2p entry (%lx)!\n",
+                              mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
+            errors++;
         }
 
         /* Check we have a list */
         if ( (!pg->sharing) || !rmap_has_entries(pg) )
         {
-           MEM_SHARING_DEBUG("mfn %lx shared, but empty gfn list!\n",
-                             mfn_x(mfn));
-           errors++;
-           continue;
+            MEM_SHARING_DEBUG("mfn %lx shared, but empty gfn list!\n",
+                              mfn_x(mfn));
+            errors++;
+            continue;
         }
 
         /* We've found a page that is shared */
@@ -545,7 +547,7 @@ static int audit(void)
                 errors++;
                 continue;
             }
-            o_mfn = get_gfn_query_unlocked(d, g->gfn, &t); 
+            o_mfn = get_gfn_query_unlocked(d, g->gfn, &t);
             if ( !mfn_eq(o_mfn, mfn) )
             {
                 MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
@@ -568,7 +570,7 @@ static int audit(void)
         {
             MEM_SHARING_DEBUG("Mismatched counts for MFN=%lx."
                               "nr_gfns in list %lu, in type_info %lx\n",
-                              mfn_x(mfn), nr_gfns, 
+                              mfn_x(mfn), nr_gfns,
                               (pg->u.inuse.type_info & PGT_count_mask));
             errors++;
         }
@@ -596,15 +598,16 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
 {
     struct vcpu *v = current;
     int rc;
-    vm_event_request_t req = {
+    vm_event_request_t req =
+    {
         .reason = VM_EVENT_REASON_MEM_SHARING,
         .vcpu_id = v->vcpu_id,
         .u.mem_sharing.gfn = gfn,
         .u.mem_sharing.p2mt = p2m_ram_shared
     };
 
-    if ( (rc = __vm_event_claim_slot(d, 
-                        d->vm_event_share, allow_sleep)) < 0 )
+    if ( (rc = __vm_event_claim_slot(d,
+                                     d->vm_event_share, allow_sleep)) < 0 )
         return rc;
 
     if ( v->domain == d )
@@ -629,9 +632,9 @@ unsigned int mem_sharing_get_nr_shared_mfns(void)
 }
 
 /* Functions that change a page's type and ownership */
-static int page_make_sharable(struct domain *d, 
-                       struct page_info *page, 
-                       int expected_refcnt)
+static int page_make_sharable(struct domain *d,
+                              struct page_info *page,
+                              int expected_refcnt)
 {
     bool_t drop_dom_ref;
 
@@ -684,7 +687,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
 
     if ( !get_page(page, dom_cow) )
         return -EINVAL;
-    
+
     spin_lock(&d->page_alloc_lock);
 
     if ( d->is_dying )
@@ -729,7 +732,7 @@ static inline struct page_info *__grab_shared_page(mfn_t mfn)
         return NULL;
     pg = mfn_to_page(mfn);
 
-    /* If the page is not validated we can't lock it, and if it's  
+    /* If the page is not validated we can't lock it, and if it's
      * not validated it's obviously not shared. */
     if ( !mem_sharing_page_lock(pg) )
         return NULL;
@@ -754,12 +757,12 @@ static int debug_mfn(mfn_t mfn)
         return -EINVAL;
     }
 
-    MEM_SHARING_DEBUG( 
-            "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner_id=%d\n",
-            mfn_x(page_to_mfn(page)), 
-            page->count_info, 
-            page->u.inuse.type_info,
-            page_get_owner(page)->domain_id);
+    MEM_SHARING_DEBUG(
+        "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner_id=%d\n",
+        mfn_x(page_to_mfn(page)),
+        page->count_info,
+        page->u.inuse.type_info,
+        page_get_owner(page)->domain_id);
 
     /* -1 because the page is locked and that's an additional type ref */
     num_refs = ((int) (page->u.inuse.type_info & PGT_count_mask)) - 1;
@@ -775,7 +778,7 @@ static int debug_gfn(struct domain *d, gfn_t gfn)
 
     mfn = get_gfn_query(d, gfn_x(gfn), &p2mt);
 
-    MEM_SHARING_DEBUG("Debug for dom%d, gfn=%" PRI_gfn "\n", 
+    MEM_SHARING_DEBUG("Debug for dom%d, gfn=%" PRI_gfn "\n",
                       d->domain_id, gfn_x(gfn));
     num_refs = debug_mfn(mfn);
     put_gfn(d, gfn_x(gfn));
@@ -796,10 +799,10 @@ static int debug_gref(struct domain *d, grant_ref_t ref)
                           d->domain_id, ref, rc);
         return rc;
     }
-    
+
     MEM_SHARING_DEBUG(
-            "==> Grant [dom=%d,ref=%d], status=%x. ", 
-            d->domain_id, ref, status);
+        "==> Grant [dom=%d,ref=%d], status=%x. ",
+        d->domain_id, ref, status);
 
     return debug_gfn(d, gfn);
 }
@@ -824,12 +827,14 @@ static int nominate_page(struct domain *d, gfn_t gfn,
         goto out;
 
     /* Return the handle if the page is already shared */
-    if ( p2m_is_shared(p2mt) ) {
+    if ( p2m_is_shared(p2mt) )
+    {
         struct page_info *pg = __grab_shared_page(mfn);
         if ( !pg )
         {
             gprintk(XENLOG_ERR,
-                    "Shared p2m entry gfn %" PRI_gfn ", but could not grab mfn %" PRI_mfn " dom%d\n",
+                    "Shared p2m entry gfn %" PRI_gfn ", but could not grab mfn %" PRI_mfn
+                    " dom%d\n",
                     gfn_x(gfn), mfn_x(mfn), d->domain_id);
             BUG();
         }
@@ -876,12 +881,12 @@ static int nominate_page(struct domain *d, gfn_t gfn,
 
     /* Try to convert the mfn to the sharable type */
     page = mfn_to_page(mfn);
-    ret = page_make_sharable(d, page, expected_refcnt); 
-    if ( ret ) 
+    ret = page_make_sharable(d, page, expected_refcnt);
+    if ( ret )
         goto out;
 
-    /* Now that the page is validated, we can lock it. There is no 
-     * race because we're holding the p2m entry, so no one else 
+    /* Now that the page is validated, we can lock it. There is no
+     * race because we're holding the p2m entry, so no one else
      * could be nominating this gfn */
     ret = -ENOENT;
     if ( !mem_sharing_page_lock(page) )
@@ -889,8 +894,8 @@ static int nominate_page(struct domain *d, gfn_t gfn,
 
     /* Initialize the shared state */
     ret = -ENOMEM;
-    if ( (page->sharing = 
-            xmalloc(struct page_sharing_info)) == NULL )
+    if ( (page->sharing =
+                xmalloc(struct page_sharing_info)) == NULL )
     {
         /* Making a page private atomically unlocks it */
         BUG_ON(page_make_private(d, page) != 0);
@@ -900,7 +905,7 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     rmap_init(page);
 
     /* Create the handle */
-    page->sharing->handle = get_next_handle();  
+    page->sharing->handle = get_next_handle();
 
     /* Create the local gfn info */
     if ( mem_sharing_gfn_alloc(page, d, gfn_x(gfn)) == NULL )
@@ -946,7 +951,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
                  cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
 
-    /* This tricky business is to avoid two callers deadlocking if 
+    /* This tricky business is to avoid two callers deadlocking if
      * grabbing pages in opposite client/source order */
     if ( mfn_eq(smfn, cmfn) )
     {
@@ -972,7 +977,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
             mem_sharing_page_unlock(spage);
             goto err_out;
         }
-    } else {
+    }
+    else
+    {
         ret = XENMEM_SHARING_OP_C_HANDLE_INVALID;
         cpage = firstpg = __grab_shared_page(cmfn);
         if ( cpage == NULL )
@@ -1010,7 +1017,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     rmap_seed_iterator(cpage, &ri);
     while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
     {
-        /* Get the source page and type, this should never fail: 
+        /* Get the source page and type, this should never fail:
          * we are under shr lock, and got a successful lookup */
         BUG_ON(!get_page_and_type(spage, dom_cow, PGT_shared_page));
         /* Move the gfn_info from client list to source list.
@@ -1043,14 +1050,15 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     atomic_dec(&nr_shared_mfns);
     atomic_inc(&nr_saved_mfns);
     ret = 0;
-    
+
 err_out:
     put_two_gfns(&tg);
     return ret;
 }
 
-int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
-                            struct domain *cd, unsigned long cgfn) 
+int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn,
+                               shr_handle_t sh,
+                               struct domain *cd, unsigned long cgfn)
 {
     struct page_info *spage;
     int ret = -EINVAL;
@@ -1101,7 +1109,9 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle
     {
         mem_sharing_gfn_destroy(spage, cd, gfn_info);
         put_page_and_type(spage);
-    } else {
+    }
+    else
+    {
         /* There is a chance we're plugging a hole where a paged out page was */
         if ( p2m_is_paging(cmfn_type) && (cmfn_type != p2m_ram_paging_out) )
         {
@@ -1136,10 +1146,10 @@ err_out:
 /* A note on the rationale for unshare error handling:
  *  1. Unshare can only fail with ENOMEM. Any other error conditions BUG_ON()'s
  *  2. We notify a potential dom0 helper through a vm_event ring. But we
- *     allow the notification to not go to sleep. If the event ring is full 
+ *     allow the notification to not go to sleep. If the event ring is full
  *     of ENOMEM warnings, then it's on the ball.
  *  3. We cannot go to sleep until the unshare is resolved, because we might
- *     be buried deep into locks (e.g. something -> copy_to_user -> __hvm_copy) 
+ *     be buried deep into locks (e.g. something -> copy_to_user -> __hvm_copy)
  *  4. So, we make sure we:
  *     4.1. return an error
  *     4.2. do not corrupt shared memory
@@ -1147,19 +1157,20 @@ err_out:
  *     4.4. let the guest deal with it if the error propagation will reach it
  */
 int __mem_sharing_unshare_page(struct domain *d,
-                             unsigned long gfn, 
-                             uint16_t flags)
+                               unsigned long gfn,
+                               uint16_t flags)
 {
     p2m_type_t p2mt;
     mfn_t mfn;
     struct page_info *page, *old_page;
     int last_gfn;
     gfn_info_t *gfn_info = NULL;
-   
+
     mfn = get_gfn(d, gfn, &p2mt);
-    
+
     /* Has someone already unshared it? */
-    if ( !p2m_is_shared(p2mt) ) {
+    if ( !p2m_is_shared(p2mt) )
+    {
         put_gfn(d, gfn);
         return 0;
     }
@@ -1168,7 +1179,7 @@ int __mem_sharing_unshare_page(struct domain *d,
     if ( page == NULL )
     {
         gdprintk(XENLOG_ERR, "Domain p2m is shared, but page is not: "
-                                "%lx\n", gfn);
+                 "%lx\n", gfn);
         BUG();
     }
 
@@ -1176,12 +1187,12 @@ int __mem_sharing_unshare_page(struct domain *d,
     if ( unlikely(gfn_info == NULL) )
     {
         gdprintk(XENLOG_ERR, "Could not find gfn_info for shared gfn: "
-                                "%lx\n", gfn);
+                 "%lx\n", gfn);
         BUG();
     }
 
     /* Do the accounting first. If anything fails below, we have bigger
-     * bigger fish to fry. First, remove the gfn from the list. */ 
+     * fish to fry. First, remove the gfn from the list. */
     last_gfn = rmap_has_one_entry(page);
     if ( last_gfn )
     {
@@ -1195,7 +1206,7 @@ int __mem_sharing_unshare_page(struct domain *d,
     else
         atomic_dec(&nr_saved_mfns);
 
-    /* If the GFN is getting destroyed drop the references to MFN 
+    /* If the GFN is getting destroyed drop the references to MFN
      * (possibly freeing the page), and exit early */
     if ( flags & MEM_SHARING_DESTROY_GFN )
     {
@@ -1212,7 +1223,7 @@ int __mem_sharing_unshare_page(struct domain *d,
 
         return 0;
     }
- 
+
     if ( last_gfn )
     {
         /* Making a page private atomically unlocks it */
@@ -1222,7 +1233,7 @@ int __mem_sharing_unshare_page(struct domain *d,
 
     old_page = page;
     page = alloc_domheap_page(d, 0);
-    if ( !page ) 
+    if ( !page )
     {
         /* Undo dec of nr_saved_mfns, as the retry will decrease again. */
         atomic_inc(&nr_saved_mfns);
@@ -1240,11 +1251,11 @@ int __mem_sharing_unshare_page(struct domain *d,
     mem_sharing_page_unlock(old_page);
     put_page_and_type(old_page);
 
-private_page_found:    
+private_page_found:
     if ( p2m_change_type_one(d, gfn, p2m_ram_shared, p2m_ram_rw) )
     {
-        gdprintk(XENLOG_ERR, "Could not change p2m type d %hu gfn %lx.\n", 
-                                d->domain_id, gfn);
+        gdprintk(XENLOG_ERR, "Could not change p2m type d %hu gfn %lx.\n",
+                 d->domain_id, gfn);
         BUG();
     }
 
@@ -1270,7 +1281,7 @@ int relinquish_shared_pages(struct domain *d)
 
     p2m_lock(p2m);
     for ( gfn = p2m->next_shared_gfn_to_relinquish;
-          gfn <= p2m->max_mapped_pfn; gfn++ )
+            gfn <= p2m->max_mapped_pfn; gfn++ )
     {
         p2m_access_t a;
         p2m_type_t t;
@@ -1283,8 +1294,8 @@ int relinquish_shared_pages(struct domain *d)
         if ( mfn_valid(mfn) && (t == p2m_ram_shared) )
         {
             /* Does not fail with ENOMEM given the DESTROY flag */
-            BUG_ON(__mem_sharing_unshare_page(d, gfn, 
-                    MEM_SHARING_DESTROY_GFN));
+            BUG_ON(__mem_sharing_unshare_page(d, gfn,
+                                              MEM_SHARING_DESTROY_GFN));
             /* Clear out the p2m entry so no one else may try to
              * unshare.  Must succeed: we just read the old entry and
              * we hold the p2m lock. */
@@ -1454,8 +1465,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 
             if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.source_gfn) )
             {
-                grant_ref_t gref = (grant_ref_t) 
-                                    (XENMEM_SHARING_OP_FIELD_GET_GREF(
+                grant_ref_t gref = (grant_ref_t)
+                                   (XENMEM_SHARING_OP_FIELD_GET_GREF(
                                         mso.u.share.source_gfn));
                 rc = mem_sharing_gref_to_gfn(d->grant_table, gref, &sgfn,
                                              NULL);
@@ -1470,8 +1481,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 
             if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.client_gfn) )
             {
-                grant_ref_t gref = (grant_ref_t) 
-                                    (XENMEM_SHARING_OP_FIELD_GET_GREF(
+                grant_ref_t gref = (grant_ref_t)
+                                   (XENMEM_SHARING_OP_FIELD_GET_GREF(
                                         mso.u.share.client_gfn));
                 rc = mem_sharing_gref_to_gfn(cd->grant_table, gref, &cgfn,
                                              NULL);
@@ -1534,7 +1545,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
             sh      = mso.u.share.source_handle;
             cgfn    = mso.u.share.client_gfn;
 
-            rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn); 
+            rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn);
 
             rcu_unlock_domain(cd);
         }
@@ -1547,8 +1558,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 
             rc = -EINVAL;
             if ( mso.u.range._pad[0] || mso.u.range._pad[1] ||
-                 mso.u.range._pad[2] )
-                 goto out;
+                    mso.u.range._pad[2] )
+                goto out;
 
             /*
              * We use opaque for the hypercall continuation value.
@@ -1557,8 +1568,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
              * that it's at least in range.
              */
             if ( mso.u.range.opaque &&
-                 (mso.u.range.opaque < mso.u.range.first_gfn ||
-                  mso.u.range.opaque > mso.u.range.last_gfn) )
+                    (mso.u.range.opaque < mso.u.range.first_gfn ||
+                     mso.u.range.opaque > mso.u.range.last_gfn) )
                 goto out;
 
             if ( !mem_sharing_enabled(d) )
@@ -1593,7 +1604,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
              * the duration of this op.
              */
             if ( !atomic_read(&d->pause_count) ||
-                 !atomic_read(&cd->pause_count) )
+                    !atomic_read(&cd->pause_count) )
             {
                 rcu_unlock_domain(cd);
                 rc = -EINVAL;
@@ -1604,9 +1615,9 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
             max_cgfn = domain_get_maximum_gpfn(cd);
 
             if ( max_sgfn < mso.u.range.first_gfn ||
-                 max_sgfn < mso.u.range.last_gfn ||
-                 max_cgfn < mso.u.range.first_gfn ||
-                 max_cgfn < mso.u.range.last_gfn )
+                    max_sgfn < mso.u.range.last_gfn ||
+                    max_cgfn < mso.u.range.first_gfn ||
+                    max_cgfn < mso.u.range.last_gfn )
             {
                 rcu_unlock_domain(cd);
                 rc = -EINVAL;
@@ -1657,9 +1668,9 @@ int mem_sharing_domctl(struct domain *d, struct xen_domctl_mem_sharing_op *mec)
 
     /* Only HAP is supported */
     if ( !hap_enabled(d) )
-         return -ENODEV;
+        return -ENODEV;
 
-    switch(mec->op)
+    switch (mec->op)
     {
         case XEN_DOMCTL_MEM_SHARING_CONTROL:
         {
-- 
2.20.1



* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released Tamas K Lengyel
@ 2019-07-18 10:46   ` Jan Beulich
  2019-07-18 12:55     ` Tamas K Lengyel
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 10:46 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, xen-devel, Roger Pau Monne, Wei Liu, Andrew Cooper

On 17.07.2019 21:33, Tamas K Lengyel wrote:
> Calling _put_page_type while also holding the page_lock
> for that page can cause a deadlock.
> 
> The comment being dropped is incorrect since it's now out-of-date.
> 
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>

The description covers ...

> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -648,10 +648,6 @@ static int page_make_private(struct domain *d, struct page_info *page)
>           return -EBUSY;
>       }
>   
> -    /* We can only change the type if count is one */
> -    /* Because we are locking pages individually, we need to drop
> -     * the lock here, while the page is typed. We cannot risk the
> -     * race of page_unlock and then put_page_type. */
>       expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
>       if ( page->u.inuse.type_info != expected_type )
>       {
> @@ -660,12 +656,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
>           return -EEXIST;
>       }
>   
> +    mem_sharing_page_unlock(page);
> +
>       /* Drop the final typecount */
>       put_page_and_type(page);
>   
> -    /* Now that we've dropped the type, we can unlock */
> -    mem_sharing_page_unlock(page);
> -
>       /* Change the owner */
>       ASSERT(page_get_owner(page) == dom_cow);
>       page_set_owner(page, d);

all of the above. But what about ...

> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>       p2m_type_t smfn_type, cmfn_type;
>       struct two_gfns tg;
>       struct rmap_iterator ri;
> +    unsigned long put_count = 0;
>   
>       get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
>                    cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>           goto err_out;
>       }
>   
> -    /* Acquire an extra reference, for the freeing below to be safe. */
> -    if ( !get_page(cpage, dom_cow) )
> -    {
> -        ret = -EOVERFLOW;
> -        mem_sharing_page_unlock(secondpg);
> -        mem_sharing_page_unlock(firstpg);
> -        goto err_out;
> -    }
> -
>       /* Merge the lists together */
>       rmap_seed_iterator(cpage, &ri);
>       while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>            * Don't change the type of rmap for the client page. */
>           rmap_del(gfn, cpage, 0);
>           rmap_add(gfn, spage);
> -        put_page_and_type(cpage);
> +        put_count++;
>           d = get_domain_by_id(gfn->domain);
>           BUG_ON(!d);
>           BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
>           put_domain(d);
>       }
>       ASSERT(list_empty(&cpage->sharing->gfns));
> +    BUG_ON(!put_count);
>   
>       /* Clear the rest of the shared state */
>       page_sharing_dispose(cpage);
> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>   
>       /* Free the client page */
>       put_page_alloc_ref(cpage);
> -    put_page(cpage);
> +
> +    while ( put_count-- )
> +        put_page_and_type(cpage);
>   
>       /* We managed to free a domain page. */
>       atomic_dec(&nr_shared_mfns);
> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
>       {
>           if ( !last_gfn )
>               mem_sharing_gfn_destroy(page, d, gfn_info);
> -        put_page_and_type(page);
> +
>           mem_sharing_page_unlock(page);
> +
>           if ( last_gfn )
> -        {
> -            if ( !get_page(page, dom_cow) )
> -            {
> -                put_gfn(d, gfn);
> -                domain_crash(d);
> -                return -EOVERFLOW;
> -            }
>               put_page_alloc_ref(page);
> -            put_page(page);
> -        }
> +
> +        put_page_and_type(page);
>           put_gfn(d, gfn);
>   
>           return 0;

... this (main, as I guess by the title) part of the change? I think
you want to explain what was wrong here and/or why the new arrangement
is better. (I'm sorry, I guess it was this way on prior versions
already, but apparently I didn't notice.)

Jan

* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-17 19:33 ` [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup Tamas K Lengyel
@ 2019-07-18 10:55   ` Jan Beulich
  2019-07-18 12:59     ` Tamas K Lengyel
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 10:55 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel

On 17.07.2019 21:33, Tamas K Lengyel wrote:
> @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
>               cpu_relax();
>           nx = x + (1 | PGT_locked);
>           if ( !(x & PGT_validated) ||
> -             !(x & PGT_count_mask) ||
> -             !(nx & PGT_count_mask) )
> +                !(x & PGT_count_mask) ||
> +                !(nx & PGT_count_mask) )
>               return false;
>       } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );

Aren't you screwing up indentation here? It looks wrong both in my
mail client's view and on the list archives. Furthermore, this is
code you've introduced earlier in the series, so it should be got
right there, not here.
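
For clarity, the intended layout is the one in the lines being removed,
i.e. continuation lines aligned with the first operand inside the
parentheses - a trimmed sketch of the hunk above:

        if ( !(x & PGT_validated) ||
             !(x & PGT_count_mask) ||
             !(nx & PGT_count_mask) )
            return false;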

> @@ -225,7 +225,7 @@ rmap_init(struct page_info *page)
>   #define HASH(domain, gfn)       \
>       (((gfn) + (domain)) % RMAP_HASHTAB_SIZE)
>   
> -/* Conversions. Tuned by the thresholds. Should only happen twice
> +/* Conversions. Tuned by the thresholds. Should only happen twice
>    * (once each) during the lifetime of a shared page */

Please fix the comment style as a whole, not just the stray trailing
blank.

> @@ -288,13 +288,13 @@ rmap_count(struct page_info *pg)
>   }
>   
>   /* The page type count is always decreased after removing from the rmap.
> - * Use a convert flag to avoid mutating the rmap if in the middle of an
> + * Use a convert flag to avoid mutating the rmap if in the middle of an
>    * iterator, or if the page will be soon destroyed anyways. */

Same here.

>   static inline void
>   rmap_del(gfn_info_t *gfn_info, struct page_info *page, int convert)
>   {
>       if ( RMAP_USES_HASHTAB(page) && convert &&
> -         (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )
> +            (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )

Here you again seem to be screwing up correct indentation. There are
more such instances, so I guess I'll leave it to you to go over the
whole patch once more.

Jan

* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 10:46   ` Jan Beulich
@ 2019-07-18 12:55     ` Tamas K Lengyel
  2019-07-18 13:12       ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 12:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, xen-devel, Roger Pau Monne, Wei Liu, Andrew Cooper

On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> > Calling _put_page_type while also holding the page_lock
> > for that page can cause a deadlock.
> >
> > The comment being dropped is incorrect since it's now out-of-date.
> >
> > Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
>
> The description covers ...
>
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -648,10 +648,6 @@ static int page_make_private(struct domain *d, struct page_info *page)
> >           return -EBUSY;
> >       }
> >
> > -    /* We can only change the type if count is one */
> > -    /* Because we are locking pages individually, we need to drop
> > -     * the lock here, while the page is typed. We cannot risk the
> > -     * race of page_unlock and then put_page_type. */
> >       expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
> >       if ( page->u.inuse.type_info != expected_type )
> >       {
> > @@ -660,12 +656,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
> >           return -EEXIST;
> >       }
> >
> > +    mem_sharing_page_unlock(page);
> > +
> >       /* Drop the final typecount */
> >       put_page_and_type(page);
> >
> > -    /* Now that we've dropped the type, we can unlock */
> > -    mem_sharing_page_unlock(page);
> > -
> >       /* Change the owner */
> >       ASSERT(page_get_owner(page) == dom_cow);
> >       page_set_owner(page, d);
>
> all of the above. But what about ...
>
> > @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >       p2m_type_t smfn_type, cmfn_type;
> >       struct two_gfns tg;
> >       struct rmap_iterator ri;
> > +    unsigned long put_count = 0;
> >
> >       get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
> >                    cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
> > @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >           goto err_out;
> >       }
> >
> > -    /* Acquire an extra reference, for the freeing below to be safe. */
> > -    if ( !get_page(cpage, dom_cow) )
> > -    {
> > -        ret = -EOVERFLOW;
> > -        mem_sharing_page_unlock(secondpg);
> > -        mem_sharing_page_unlock(firstpg);
> > -        goto err_out;
> > -    }
> > -
> >       /* Merge the lists together */
> >       rmap_seed_iterator(cpage, &ri);
> >       while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
> > @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >            * Don't change the type of rmap for the client page. */
> >           rmap_del(gfn, cpage, 0);
> >           rmap_add(gfn, spage);
> > -        put_page_and_type(cpage);
> > +        put_count++;
> >           d = get_domain_by_id(gfn->domain);
> >           BUG_ON(!d);
> >           BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
> >           put_domain(d);
> >       }
> >       ASSERT(list_empty(&cpage->sharing->gfns));
> > +    BUG_ON(!put_count);
> >
> >       /* Clear the rest of the shared state */
> >       page_sharing_dispose(cpage);
> > @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >
> >       /* Free the client page */
> >       put_page_alloc_ref(cpage);
> > -    put_page(cpage);
> > +
> > +    while ( put_count-- )
> > +        put_page_and_type(cpage);
> >
> >       /* We managed to free a domain page. */
> >       atomic_dec(&nr_shared_mfns);
> > @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
> >       {
> >           if ( !last_gfn )
> >               mem_sharing_gfn_destroy(page, d, gfn_info);
> > -        put_page_and_type(page);
> > +
> >           mem_sharing_page_unlock(page);
> > +
> >           if ( last_gfn )
> > -        {
> > -            if ( !get_page(page, dom_cow) )
> > -            {
> > -                put_gfn(d, gfn);
> > -                domain_crash(d);
> > -                return -EOVERFLOW;
> > -            }
> >               put_page_alloc_ref(page);
> > -            put_page(page);
> > -        }
> > +
> > +        put_page_and_type(page);
> >           put_gfn(d, gfn);
> >
> >           return 0;
>
> ... this (main, as I guess by the title) part of the change? I think
> you want to explain what was wrong here and/or why the new arrangement
> is better. (I'm sorry, I guess it was this way on prior versions
> already, but apparently I didn't notice.)

It's what the patch message says - calling put_page_and_type before
mem_sharing_page_unlock can cause a deadlock. Since we are now
holding a reference to the page till the end there is no need for the
extra get_page/put_page logic when we are dealing with the last_gfn.
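
To make the ordering concrete, here is a condensed sketch of the
unshare path (only the calls that matter, not the full code):

    /* before: the type reference was dropped while the page was
     * still locked */
    put_page_and_type(page);
    mem_sharing_page_unlock(page);

    /* after: unlock first, drop our references only afterwards */
    mem_sharing_page_unlock(page);
    if ( last_gfn )
        put_page_alloc_ref(page);
    put_page_and_type(page);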

Tamas


* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-18 10:55   ` Jan Beulich
@ 2019-07-18 12:59     ` Tamas K Lengyel
  2019-07-18 13:14       ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 12:59 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel

On Thu, Jul 18, 2019 at 4:56 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> > @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
> >               cpu_relax();
> >           nx = x + (1 | PGT_locked);
> >           if ( !(x & PGT_validated) ||
> > -             !(x & PGT_count_mask) ||
> > -             !(nx & PGT_count_mask) )
> > +                !(x & PGT_count_mask) ||
> > +                !(nx & PGT_count_mask) )
> >               return false;
> >       } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
>
> Aren't you screwing up indentation here? It looks wrong both in my
> mail client's view and on the list archives, whereas. Furthermore
> this is code you've introduced earlier in the series, so it should
> be got right there, not here.

The style was auto-applied with astyle using the bsd format. In the
previous patch there were no style-changes applied because it was a
copy-paste job from the other code location. I'd rather keep
code-copying and style fixes separate.

>
> > @@ -225,7 +225,7 @@ rmap_init(struct page_info *page)
> >   #define HASH(domain, gfn)       \
> >       (((gfn) + (domain)) % RMAP_HASHTAB_SIZE)
> >
> > -/* Conversions. Tuned by the thresholds. Should only happen twice
> > +/* Conversions. Tuned by the thresholds. Should only happen twice
> >    * (once each) during the lifetime of a shared page */
>
> Please fix the comment style as a whole, not just the stray trailing
> blank.
>
> > @@ -288,13 +288,13 @@ rmap_count(struct page_info *pg)
> >   }
> >
> >   /* The page type count is always decreased after removing from the rmap.
> > - * Use a convert flag to avoid mutating the rmap if in the middle of an
> > + * Use a convert flag to avoid mutating the rmap if in the middle of an
> >    * iterator, or if the page will be soon destroyed anyways. */
>
> Same here.
>
> >   static inline void
> >   rmap_del(gfn_info_t *gfn_info, struct page_info *page, int convert)
> >   {
> >       if ( RMAP_USES_HASHTAB(page) && convert &&
> > -         (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )
> > +            (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )
>
> Here you again seem to be screwing up correct indentation. There are
> more such instances, so I guess I'll leave it to you to go over the
> whole patch once more.

Again, this is the astyle bsd format auto-applied. I'd rather have this
style if it means I don't ever have to check for this kind of stuff
manually in the future.

Tamas


* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 12:55     ` Tamas K Lengyel
@ 2019-07-18 13:12       ` Jan Beulich
  2019-07-18 13:13         ` Tamas K Lengyel
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 13:12 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On 18.07.2019 14:55, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@suse.com> wrote:
>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
>>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>        p2m_type_t smfn_type, cmfn_type;
>>>        struct two_gfns tg;
>>>        struct rmap_iterator ri;
>>> +    unsigned long put_count = 0;
>>>
>>>        get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
>>>                     cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
>>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>            goto err_out;
>>>        }
>>>
>>> -    /* Acquire an extra reference, for the freeing below to be safe. */
>>> -    if ( !get_page(cpage, dom_cow) )
>>> -    {
>>> -        ret = -EOVERFLOW;
>>> -        mem_sharing_page_unlock(secondpg);
>>> -        mem_sharing_page_unlock(firstpg);
>>> -        goto err_out;
>>> -    }
>>> -
>>>        /* Merge the lists together */
>>>        rmap_seed_iterator(cpage, &ri);
>>>        while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
>>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>             * Don't change the type of rmap for the client page. */
>>>            rmap_del(gfn, cpage, 0);
>>>            rmap_add(gfn, spage);
>>> -        put_page_and_type(cpage);
>>> +        put_count++;
>>>            d = get_domain_by_id(gfn->domain);
>>>            BUG_ON(!d);
>>>            BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
>>>            put_domain(d);
>>>        }
>>>        ASSERT(list_empty(&cpage->sharing->gfns));
>>> +    BUG_ON(!put_count);
>>>
>>>        /* Clear the rest of the shared state */
>>>        page_sharing_dispose(cpage);
>>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>
>>>        /* Free the client page */
>>>        put_page_alloc_ref(cpage);
>>> -    put_page(cpage);
>>> +
>>> +    while ( put_count-- )
>>> +        put_page_and_type(cpage);
>>>
>>>        /* We managed to free a domain page. */
>>>        atomic_dec(&nr_shared_mfns);
>>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
>>>        {
>>>            if ( !last_gfn )
>>>                mem_sharing_gfn_destroy(page, d, gfn_info);
>>> -        put_page_and_type(page);
>>> +
>>>            mem_sharing_page_unlock(page);
>>> +
>>>            if ( last_gfn )
>>> -        {
>>> -            if ( !get_page(page, dom_cow) )
>>> -            {
>>> -                put_gfn(d, gfn);
>>> -                domain_crash(d);
>>> -                return -EOVERFLOW;
>>> -            }
>>>                put_page_alloc_ref(page);
>>> -            put_page(page);
>>> -        }
>>> +
>>> +        put_page_and_type(page);
>>>            put_gfn(d, gfn);
>>>
>>>            return 0;
>>
>> ... this (main, as I guess by the title) part of the change? I think
>> you want to explain what was wrong here and/or why the new arrangement
>> is better. (I'm sorry, I guess it was this way on prior versions
>> already, but apparently I didn't notice.)
> 
> It's what the patch message says - calling put_page_and_type before
> mem_sharing_page_unlock can cause a deadlock. Since now we are now
> holding a reference to the page till the end there is no need for the
> extra get_page/put_page logic when we are dealing with the last_gfn.

The title says "reorder" without any "why".

Jan

* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 13:12       ` Jan Beulich
@ 2019-07-18 13:13         ` Tamas K Lengyel
  2019-07-18 13:33           ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 13:13 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Thu, Jul 18, 2019 at 7:12 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 18.07.2019 14:55, Tamas K Lengyel wrote:
> > On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@suse.com> wrote:
> >> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> >>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>        p2m_type_t smfn_type, cmfn_type;
> >>>        struct two_gfns tg;
> >>>        struct rmap_iterator ri;
> >>> +    unsigned long put_count = 0;
> >>>
> >>>        get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
> >>>                     cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
> >>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>            goto err_out;
> >>>        }
> >>>
> >>> -    /* Acquire an extra reference, for the freeing below to be safe. */
> >>> -    if ( !get_page(cpage, dom_cow) )
> >>> -    {
> >>> -        ret = -EOVERFLOW;
> >>> -        mem_sharing_page_unlock(secondpg);
> >>> -        mem_sharing_page_unlock(firstpg);
> >>> -        goto err_out;
> >>> -    }
> >>> -
> >>>        /* Merge the lists together */
> >>>        rmap_seed_iterator(cpage, &ri);
> >>>        while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
> >>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>             * Don't change the type of rmap for the client page. */
> >>>            rmap_del(gfn, cpage, 0);
> >>>            rmap_add(gfn, spage);
> >>> -        put_page_and_type(cpage);
> >>> +        put_count++;
> >>>            d = get_domain_by_id(gfn->domain);
> >>>            BUG_ON(!d);
> >>>            BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
> >>>            put_domain(d);
> >>>        }
> >>>        ASSERT(list_empty(&cpage->sharing->gfns));
> >>> +    BUG_ON(!put_count);
> >>>
> >>>        /* Clear the rest of the shared state */
> >>>        page_sharing_dispose(cpage);
> >>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>
> >>>        /* Free the client page */
> >>>        put_page_alloc_ref(cpage);
> >>> -    put_page(cpage);
> >>> +
> >>> +    while ( put_count-- )
> >>> +        put_page_and_type(cpage);
> >>>
> >>>        /* We managed to free a domain page. */
> >>>        atomic_dec(&nr_shared_mfns);
> >>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
> >>>        {
> >>>            if ( !last_gfn )
> >>>                mem_sharing_gfn_destroy(page, d, gfn_info);
> >>> -        put_page_and_type(page);
> >>> +
> >>>            mem_sharing_page_unlock(page);
> >>> +
> >>>            if ( last_gfn )
> >>> -        {
> >>> -            if ( !get_page(page, dom_cow) )
> >>> -            {
> >>> -                put_gfn(d, gfn);
> >>> -                domain_crash(d);
> >>> -                return -EOVERFLOW;
> >>> -            }
> >>>                put_page_alloc_ref(page);
> >>> -            put_page(page);
> >>> -        }
> >>> +
> >>> +        put_page_and_type(page);
> >>>            put_gfn(d, gfn);
> >>>
> >>>            return 0;
> >>
> >> ... this (main, as I guess by the title) part of the change? I think
> >> you want to explain what was wrong here and/or why the new arrangement
> >> is better. (I'm sorry, I guess it was this way on prior versions
> >> already, but apparently I didn't notice.)
> >
> > It's what the patch message says - calling put_page_and_type before
> > mem_sharing_page_unlock can cause a deadlock. Since now we are now
> > holding a reference to the page till the end there is no need for the
> > extra get_page/put_page logic when we are dealing with the last_gfn.
>
> The title says "reorder" without any "why".

Yes, I can't reasonably fit "Calling _put_page_type while also holding
the page_lock for that page can cause a deadlock." into the title. So
it's spelled out in the patch message.

Tamas


* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-18 12:59     ` Tamas K Lengyel
@ 2019-07-18 13:14       ` Jan Beulich
  2019-07-18 13:16         ` Tamas K Lengyel
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 13:14 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel

On 18.07.2019 14:59, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 4:56 AM Jan Beulich <JBeulich@suse.com> wrote:
>>
>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
>>> @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
>>>                cpu_relax();
>>>            nx = x + (1 | PGT_locked);
>>>            if ( !(x & PGT_validated) ||
>>> -             !(x & PGT_count_mask) ||
>>> -             !(nx & PGT_count_mask) )
>>> +                !(x & PGT_count_mask) ||
>>> +                !(nx & PGT_count_mask) )
>>>                return false;
>>>        } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
>>
>> Aren't you screwing up indentation here? It looks wrong both in my
>> mail client's view and on the list archives, whereas. Furthermore
>> this is code you've introduced earlier in the series, so it should
>> be got right there, not here.
> 
> The style was auto-applied with astyle using the bsd format. In the
> previous patch there were no style-changes applied because it was a
> copy-paste job from the other code location. I rather keep
> code-copying and style fixes separate.

But you're actively breaking Xen style here (and below).

Jan

* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-18 13:14       ` Jan Beulich
@ 2019-07-18 13:16         ` Tamas K Lengyel
  2019-07-18 13:37           ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 13:16 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel

On Thu, Jul 18, 2019 at 7:14 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 18.07.2019 14:59, Tamas K Lengyel wrote:
> > On Thu, Jul 18, 2019 at 4:56 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>
> >> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> >>> @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
> >>>                cpu_relax();
> >>>            nx = x + (1 | PGT_locked);
> >>>            if ( !(x & PGT_validated) ||
> >>> -             !(x & PGT_count_mask) ||
> >>> -             !(nx & PGT_count_mask) )
> >>> +                !(x & PGT_count_mask) ||
> >>> +                !(nx & PGT_count_mask) )
> >>>                return false;
> >>>        } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
> >>
> >> Aren't you screwing up indentation here? It looks wrong both in my
> >> mail client's view and on the list archives, whereas. Furthermore
> >> this is code you've introduced earlier in the series, so it should
> >> be got right there, not here.
> >
> > The style was auto-applied with astyle using the bsd format. In the
> > previous patch there were no style-changes applied because it was a
> > copy-paste job from the other code location. I rather keep
> > code-copying and style fixes separate.
>
> But you're actively breaking Xen style here (and below).

I don't see any mention of style restrictions regarding this in
CODING_STYLE. If there is, I would prefer changing that so we can
automate style checks which IMHO are the biggest waste of everyone's
time to do manually.

Tamas


* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 13:13         ` Tamas K Lengyel
@ 2019-07-18 13:33           ` Jan Beulich
  2019-07-18 13:47             ` Tamas K Lengyel
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 13:33 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On 18.07.2019 15:13, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 7:12 AM Jan Beulich <JBeulich@suse.com> wrote:
>>
>> On 18.07.2019 14:55, Tamas K Lengyel wrote:
>>> On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
>>>>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>         p2m_type_t smfn_type, cmfn_type;
>>>>>         struct two_gfns tg;
>>>>>         struct rmap_iterator ri;
>>>>> +    unsigned long put_count = 0;
>>>>>
>>>>>         get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
>>>>>                      cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
>>>>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>             goto err_out;
>>>>>         }
>>>>>
>>>>> -    /* Acquire an extra reference, for the freeing below to be safe. */
>>>>> -    if ( !get_page(cpage, dom_cow) )
>>>>> -    {
>>>>> -        ret = -EOVERFLOW;
>>>>> -        mem_sharing_page_unlock(secondpg);
>>>>> -        mem_sharing_page_unlock(firstpg);
>>>>> -        goto err_out;
>>>>> -    }
>>>>> -
>>>>>         /* Merge the lists together */
>>>>>         rmap_seed_iterator(cpage, &ri);
>>>>>         while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
>>>>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>              * Don't change the type of rmap for the client page. */
>>>>>             rmap_del(gfn, cpage, 0);
>>>>>             rmap_add(gfn, spage);
>>>>> -        put_page_and_type(cpage);
>>>>> +        put_count++;
>>>>>             d = get_domain_by_id(gfn->domain);
>>>>>             BUG_ON(!d);
>>>>>             BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
>>>>>             put_domain(d);
>>>>>         }
>>>>>         ASSERT(list_empty(&cpage->sharing->gfns));
>>>>> +    BUG_ON(!put_count);
>>>>>
>>>>>         /* Clear the rest of the shared state */
>>>>>         page_sharing_dispose(cpage);
>>>>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>
>>>>>         /* Free the client page */
>>>>>         put_page_alloc_ref(cpage);
>>>>> -    put_page(cpage);
>>>>> +
>>>>> +    while ( put_count-- )
>>>>> +        put_page_and_type(cpage);
>>>>>
>>>>>         /* We managed to free a domain page. */
>>>>>         atomic_dec(&nr_shared_mfns);
>>>>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
>>>>>         {
>>>>>             if ( !last_gfn )
>>>>>                 mem_sharing_gfn_destroy(page, d, gfn_info);
>>>>> -        put_page_and_type(page);
>>>>> +
>>>>>             mem_sharing_page_unlock(page);
>>>>> +
>>>>>             if ( last_gfn )
>>>>> -        {
>>>>> -            if ( !get_page(page, dom_cow) )
>>>>> -            {
>>>>> -                put_gfn(d, gfn);
>>>>> -                domain_crash(d);
>>>>> -                return -EOVERFLOW;
>>>>> -            }
>>>>>                 put_page_alloc_ref(page);
>>>>> -            put_page(page);
>>>>> -        }
>>>>> +
>>>>> +        put_page_and_type(page);
>>>>>             put_gfn(d, gfn);
>>>>>
>>>>>             return 0;
>>>>
>>>> ... this (main, as I guess by the title) part of the change? I think
>>>> you want to explain what was wrong here and/or why the new arrangement
>>>> is better. (I'm sorry, I guess it was this way on prior versions
>>>> already, but apparently I didn't notice.)
>>>
>>> It's what the patch message says - calling put_page_and_type before
>>> mem_sharing_page_unlock can cause a deadlock. Since now we are now
>>> holding a reference to the page till the end there is no need for the
>>> extra get_page/put_page logic when we are dealing with the last_gfn.
>>
>> The title says "reorder" without any "why".
> 
> Yes, I can't reasonably fit "Calling _put_page_type while also holding
> the page_lock for that page can cause a deadlock." into the title. So
> it's spelled out in the patch message.

Of course not. And I realize _part_ of the changes is indeed what the
title says (although for share_pages() that's not obvious from the
patch alone). But you do more: You also avoid acquiring an extra
reference in share_pages().

And since you made me look at the code again: If put_page() is unsafe
with a lock held, how come the get_page_and_type() in share_pages()
is safe with two such locks held? If it really is, it surely would be
worthwhile to state in the description. There's another such instance
in mem_sharing_add_to_physmap() (plus a get_page()), and also one
where put_page_and_type() gets called with such a lock held (afaics).

Jan

* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-18 13:16         ` Tamas K Lengyel
@ 2019-07-18 13:37           ` Jan Beulich
  2019-07-18 13:52             ` Tamas K Lengyel
  2019-07-26 14:00             ` Viktor Mitin
  0 siblings, 2 replies; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 13:37 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel

On 18.07.2019 15:16, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 7:14 AM Jan Beulich <JBeulich@suse.com> wrote:
>>
>> On 18.07.2019 14:59, Tamas K Lengyel wrote:
>>> On Thu, Jul 18, 2019 at 4:56 AM Jan Beulich <JBeulich@suse.com> wrote:
>>>>
>>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
>>>>> @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
>>>>>                 cpu_relax();
>>>>>             nx = x + (1 | PGT_locked);
>>>>>             if ( !(x & PGT_validated) ||
>>>>> -             !(x & PGT_count_mask) ||
>>>>> -             !(nx & PGT_count_mask) )
>>>>> +                !(x & PGT_count_mask) ||
>>>>> +                !(nx & PGT_count_mask) )
>>>>>                 return false;
>>>>>         } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
>>>>
>>>> Aren't you screwing up indentation here? It looks wrong both in my
>>>> mail client's view and on the list archives, whereas. Furthermore
>>>> this is code you've introduced earlier in the series, so it should
>>>> be got right there, not here.
>>>
>>> The style was auto-applied with astyle using the bsd format. In the
>>> previous patch there were no style-changes applied because it was a
>>> copy-paste job from the other code location. I rather keep
>>> code-copying and style fixes separate.
>>
>> But you're actively breaking Xen style here (and below).
> 
> I don't see any mention of style restrictions regarding this in
> CODING_STYLE. If there is, I would prefer changing that so we can
> automate style checks which IMHO are the biggest waste of everyone's
> time to do manually.

./CODING_STYLE fails to mention many aspects of what we do everywhere.
Almost every attempt at updating it has failed for me in the past, often
due to an entire lack of responses on patches (in other cases also
because of people disagreeing). Despite you being the maintainer of the
file, I strongly think you shouldn't actively break style that's in line
with large swathes of code elsewhere.

Jan

* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 13:33           ` Jan Beulich
@ 2019-07-18 13:47             ` Tamas K Lengyel
  2019-07-18 14:28               ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 13:47 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Thu, Jul 18, 2019 at 7:33 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 18.07.2019 15:13, Tamas K Lengyel wrote:
> > On Thu, Jul 18, 2019 at 7:12 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>
> >> On 18.07.2019 14:55, Tamas K Lengyel wrote:
> >>> On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> >>>>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>         p2m_type_t smfn_type, cmfn_type;
> >>>>>         struct two_gfns tg;
> >>>>>         struct rmap_iterator ri;
> >>>>> +    unsigned long put_count = 0;
> >>>>>
> >>>>>         get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
> >>>>>                      cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
> >>>>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>             goto err_out;
> >>>>>         }
> >>>>>
> >>>>> -    /* Acquire an extra reference, for the freeing below to be safe. */
> >>>>> -    if ( !get_page(cpage, dom_cow) )
> >>>>> -    {
> >>>>> -        ret = -EOVERFLOW;
> >>>>> -        mem_sharing_page_unlock(secondpg);
> >>>>> -        mem_sharing_page_unlock(firstpg);
> >>>>> -        goto err_out;
> >>>>> -    }
> >>>>> -
> >>>>>         /* Merge the lists together */
> >>>>>         rmap_seed_iterator(cpage, &ri);
> >>>>>         while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
> >>>>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>              * Don't change the type of rmap for the client page. */
> >>>>>             rmap_del(gfn, cpage, 0);
> >>>>>             rmap_add(gfn, spage);
> >>>>> -        put_page_and_type(cpage);
> >>>>> +        put_count++;
> >>>>>             d = get_domain_by_id(gfn->domain);
> >>>>>             BUG_ON(!d);
> >>>>>             BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
> >>>>>             put_domain(d);
> >>>>>         }
> >>>>>         ASSERT(list_empty(&cpage->sharing->gfns));
> >>>>> +    BUG_ON(!put_count);
> >>>>>
> >>>>>         /* Clear the rest of the shared state */
> >>>>>         page_sharing_dispose(cpage);
> >>>>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>
> >>>>>         /* Free the client page */
> >>>>>         put_page_alloc_ref(cpage);
> >>>>> -    put_page(cpage);
> >>>>> +
> >>>>> +    while ( put_count-- )
> >>>>> +        put_page_and_type(cpage);
> >>>>>
> >>>>>         /* We managed to free a domain page. */
> >>>>>         atomic_dec(&nr_shared_mfns);
> >>>>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
> >>>>>         {
> >>>>>             if ( !last_gfn )
> >>>>>                 mem_sharing_gfn_destroy(page, d, gfn_info);
> >>>>> -        put_page_and_type(page);
> >>>>> +
> >>>>>             mem_sharing_page_unlock(page);
> >>>>> +
> >>>>>             if ( last_gfn )
> >>>>> -        {
> >>>>> -            if ( !get_page(page, dom_cow) )
> >>>>> -            {
> >>>>> -                put_gfn(d, gfn);
> >>>>> -                domain_crash(d);
> >>>>> -                return -EOVERFLOW;
> >>>>> -            }
> >>>>>                 put_page_alloc_ref(page);
> >>>>> -            put_page(page);
> >>>>> -        }
> >>>>> +
> >>>>> +        put_page_and_type(page);
> >>>>>             put_gfn(d, gfn);
> >>>>>
> >>>>>             return 0;
> >>>>
> >>>> ... this (main, as I guess by the title) part of the change? I think
> >>>> you want to explain what was wrong here and/or why the new arrangement
> >>>> is better. (I'm sorry, I guess it was this way on prior versions
> >>>> already, but apparently I didn't notice.)
> >>>
> >>> It's what the patch message says - calling put_page_and_type before
> >>> mem_sharing_page_unlock can cause a deadlock. Since now we are now
> >>> holding a reference to the page till the end there is no need for the
> >>> extra get_page/put_page logic when we are dealing with the last_gfn.
> >>
> >> The title says "reorder" without any "why".
> >
> > Yes, I can't reasonably fit "Calling _put_page_type while also holding
> > the page_lock for that page can cause a deadlock." into the title. So
> > it's spelled out in the patch message.
>
> Of course not. And I realize _part_ of the changes is indeed what the
> title says (although for share_pages() that's not obvious from the
> patch alone). But you do more: You also avoid acquiring an extra
> reference in share_pages().

I feel like we are going in circles and having the same conversations
over and over without really making any headway. You introduced
grabbing the broken extra reference in 0502e0adae2. It is and was
actually unneeded to start with if the proper solution was put in
place, which is what this patch does, reordering things.
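
Roughly, for share_pages() (a condensed sketch from the diff, not the
literal code): the old flow dropped a type reference for each rmap
entry while still inside the locked section - hence the extra
get_page(cpage) to keep the freeing below safe - while the new flow
only counts those references and drops them all at the end:

    /* old: inside the rmap merge loop */
    put_page_and_type(cpage);
    ...
    put_page(cpage);              /* drop the extra reference */

    /* new: inside the rmap merge loop */
    put_count++;
    ...
    put_page_alloc_ref(cpage);
    while ( put_count-- )
        put_page_and_type(cpage);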

> And since you made me look at the code again: If put_page() is unsafe
> with a lock held, how come the get_page_and_type() in share_pages()
> is safe with two such locks held? If it really is, it surely would be
> worthwhile to state in the description. There's another such instance
> in mem_sharing_add_to_physmap() (plus a get_page()), and also one
> where put_page_and_type() gets called with such a lock held (afaics).
>

It's possible there are other instances where this may still be
broken. Right now I only have bandwidth to test and fix the paths I
use. If that's unacceptable I'm happy to continue development in my
private fork and leave things as-is upstream.

Tamas


* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-18 13:37           ` Jan Beulich
@ 2019-07-18 13:52             ` Tamas K Lengyel
  2019-07-18 14:09               ` Tamas K Lengyel
  2019-07-26 14:00             ` Viktor Mitin
  1 sibling, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 13:52 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel

On Thu, Jul 18, 2019 at 7:37 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 18.07.2019 15:16, Tamas K Lengyel wrote:
> > On Thu, Jul 18, 2019 at 7:14 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>
> >> On 18.07.2019 14:59, Tamas K Lengyel wrote:
> >>> On Thu, Jul 18, 2019 at 4:56 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>>>
> >>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> >>>>> @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
> >>>>>                 cpu_relax();
> >>>>>             nx = x + (1 | PGT_locked);
> >>>>>             if ( !(x & PGT_validated) ||
> >>>>> -             !(x & PGT_count_mask) ||
> >>>>> -             !(nx & PGT_count_mask) )
> >>>>> +                !(x & PGT_count_mask) ||
> >>>>> +                !(nx & PGT_count_mask) )
> >>>>>                 return false;
> >>>>>         } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
> >>>>
> >>>> Aren't you screwing up indentation here? It looks wrong both in my
> >>>> mail client's view and on the list archives, whereas. Furthermore
> >>>> this is code you've introduced earlier in the series, so it should
> >>>> be got right there, not here.
> >>>
> >>> The style was auto-applied with astyle using the bsd format. In the
> >>> previous patch there were no style-changes applied because it was a
> >>> copy-paste job from the other code location. I rather keep
> >>> code-copying and style fixes separate.
> >>
> >> But you're actively breaking Xen style here (and below).
> >
> > I don't see any mention of style restrictions regarding this in
> > CODING_STYLE. If there is, I would prefer changing that so we can
> > automate style checks which IMHO are the biggest waste of everyone's
> > time to do manually.
>
> ./CODING_STYLE fails to mention many aspects of what we do everywhere.
> Almost any attempt of updating it has failed for me in the past, often
> due to entire lack of responses on patches (in other cases also because
> of people disagreeing). Despite you being the maintainer of the file I
> strongly think you shouldn't actively break style that's in line with
> large swathes of code elsewhere.
>

I wholly disagree. I don't have time to check for style issues
manually. Patches look like crap to begin with via email, and I most
certainly won't bother carving patches out of emails when people fail
to bother to push things as git branches. This should be something
that's done automatically. I don't even think we should be having
discussions about style issues on the mailing list. Style fixes could
be made automatically when the patches are applied by the committers.
Anything else is just a waste of time.

Tamas


* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-18 13:52             ` Tamas K Lengyel
@ 2019-07-18 14:09               ` Tamas K Lengyel
  0 siblings, 0 replies; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 14:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel

On Thu, Jul 18, 2019 at 7:52 AM Tamas K Lengyel <tamas@tklengyel.com> wrote:
>
> On Thu, Jul 18, 2019 at 7:37 AM Jan Beulich <JBeulich@suse.com> wrote:
> >
> > On 18.07.2019 15:16, Tamas K Lengyel wrote:
> > > On Thu, Jul 18, 2019 at 7:14 AM Jan Beulich <JBeulich@suse.com> wrote:
> > >>
> > >> On 18.07.2019 14:59, Tamas K Lengyel wrote:
> > >>> On Thu, Jul 18, 2019 at 4:56 AM Jan Beulich <JBeulich@suse.com> wrote:
> > >>>>
> > >>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> > >>>>> @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
> > >>>>>                 cpu_relax();
> > >>>>>             nx = x + (1 | PGT_locked);
> > >>>>>             if ( !(x & PGT_validated) ||
> > >>>>> -             !(x & PGT_count_mask) ||
> > >>>>> -             !(nx & PGT_count_mask) )
> > >>>>> +                !(x & PGT_count_mask) ||
> > >>>>> +                !(nx & PGT_count_mask) )
> > >>>>>                 return false;
> > >>>>>         } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
> > >>>>
> > >>>> Aren't you screwing up indentation here? It looks wrong both in my
> > >>>> mail client's view and on the list archives, whereas. Furthermore
> > >>>> this is code you've introduced earlier in the series, so it should
> > >>>> be got right there, not here.
> > >>>
> > >>> The style was auto-applied with astyle using the bsd format. In the
> > >>> previous patch there were no style-changes applied because it was a
> > >>> copy-paste job from the other code location. I rather keep
> > >>> code-copying and style fixes separate.
> > >>
> > >> But you're actively breaking Xen style here (and below).
> > >
> > > I don't see any mention of style restrictions regarding this in
> > > CODING_STYLE. If there is, I would prefer changing that so we can
> > > automate style checks which IMHO are the biggest waste of everyone's
> > > time to do manually.
> >
> > ./CODING_STYLE fails to mention many aspects of what we do everywhere.
> > Almost any attempt of updating it has failed for me in the past, often
> > due to entire lack of responses on patches (in other cases also because
> > of people disagreeing). Despite you being the maintainer of the file I
> > strongly think you shouldn't actively break style that's in line with
> > large swathes of code elsewhere.
> >
>
> I wholly disagree. I don't have have time to check for style issues
> manually. Patches look like crap to begin with via email and I most
> certainly won't bother carving patches out of emails when people fail
> to bother to push things as git branches. This should be something
> that's done automatically. I don't even think we should be having a
> discussions about style issues on the mailinglist. Style fixes could
> be made automatically when the patches are applied by the committers.
> Anything else is just a waste of time.
>

Fortunately I found an astyle option that lets me override the bsd
style in this regard and keep the indentation for these blocks, so it
can still be automated. The only part I can't find an option for is
incorporating the Xen exception in the do-while style of writing
"do {": I can keep the while-part on the same line as the closing
brace, but not the opening brace on the same line as the do (see the
short sketch after the options below). If we could make an exception
for that in the CODING_STYLE, then the whole thing could be automated
with the following .astylerc I'm using at the moment:

style=bsd
suffix=none
align-pointer=name
align-reference=name
indent=spaces=4
max-code-length=80
min-conditional-indent=0
attach-closing-while
remove-braces
indent-switches
break-blocks
break-one-line-headers
keep-one-line-blocks
pad-comma
pad-header
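
To illustrate the do/while point above (a sketch, not actual tree
code), the Xen exception is

    do {
        ...
    } while ( condition );

whereas the options above can only produce

    do
    {
        ...
    } while ( condition );

For completeness, I currently run it roughly as below, assuming the
options are saved as .astylerc next to where astyle is invoked:

    astyle --options=.astylerc xen/arch/x86/mm/mem_sharing.c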

Tamas


* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 13:47             ` Tamas K Lengyel
@ 2019-07-18 14:28               ` Jan Beulich
  2019-07-18 14:35                 ` Tamas K Lengyel
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 14:28 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On 18.07.2019 15:47, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 7:33 AM Jan Beulich <JBeulich@suse.com> wrote:
>>
>> On 18.07.2019 15:13, Tamas K Lengyel wrote:
>>> On Thu, Jul 18, 2019 at 7:12 AM Jan Beulich <JBeulich@suse.com> wrote:
>>>>
>>>> On 18.07.2019 14:55, Tamas K Lengyel wrote:
>>>>> On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
>>>>>>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>>>          p2m_type_t smfn_type, cmfn_type;
>>>>>>>          struct two_gfns tg;
>>>>>>>          struct rmap_iterator ri;
>>>>>>> +    unsigned long put_count = 0;
>>>>>>>
>>>>>>>          get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
>>>>>>>                       cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
>>>>>>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>>>              goto err_out;
>>>>>>>          }
>>>>>>>
>>>>>>> -    /* Acquire an extra reference, for the freeing below to be safe. */
>>>>>>> -    if ( !get_page(cpage, dom_cow) )
>>>>>>> -    {
>>>>>>> -        ret = -EOVERFLOW;
>>>>>>> -        mem_sharing_page_unlock(secondpg);
>>>>>>> -        mem_sharing_page_unlock(firstpg);
>>>>>>> -        goto err_out;
>>>>>>> -    }
>>>>>>> -
>>>>>>>          /* Merge the lists together */
>>>>>>>          rmap_seed_iterator(cpage, &ri);
>>>>>>>          while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
>>>>>>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>>>               * Don't change the type of rmap for the client page. */
>>>>>>>              rmap_del(gfn, cpage, 0);
>>>>>>>              rmap_add(gfn, spage);
>>>>>>> -        put_page_and_type(cpage);
>>>>>>> +        put_count++;
>>>>>>>              d = get_domain_by_id(gfn->domain);
>>>>>>>              BUG_ON(!d);
>>>>>>>              BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
>>>>>>>              put_domain(d);
>>>>>>>          }
>>>>>>>          ASSERT(list_empty(&cpage->sharing->gfns));
>>>>>>> +    BUG_ON(!put_count);
>>>>>>>
>>>>>>>          /* Clear the rest of the shared state */
>>>>>>>          page_sharing_dispose(cpage);
>>>>>>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>>>
>>>>>>>          /* Free the client page */
>>>>>>>          put_page_alloc_ref(cpage);
>>>>>>> -    put_page(cpage);
>>>>>>> +
>>>>>>> +    while ( put_count-- )
>>>>>>> +        put_page_and_type(cpage);
>>>>>>>
>>>>>>>          /* We managed to free a domain page. */
>>>>>>>          atomic_dec(&nr_shared_mfns);
>>>>>>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
>>>>>>>          {
>>>>>>>              if ( !last_gfn )
>>>>>>>                  mem_sharing_gfn_destroy(page, d, gfn_info);
>>>>>>> -        put_page_and_type(page);
>>>>>>> +
>>>>>>>              mem_sharing_page_unlock(page);
>>>>>>> +
>>>>>>>              if ( last_gfn )
>>>>>>> -        {
>>>>>>> -            if ( !get_page(page, dom_cow) )
>>>>>>> -            {
>>>>>>> -                put_gfn(d, gfn);
>>>>>>> -                domain_crash(d);
>>>>>>> -                return -EOVERFLOW;
>>>>>>> -            }
>>>>>>>                  put_page_alloc_ref(page);
>>>>>>> -            put_page(page);
>>>>>>> -        }
>>>>>>> +
>>>>>>> +        put_page_and_type(page);
>>>>>>>              put_gfn(d, gfn);
>>>>>>>
>>>>>>>              return 0;
>>>>>>
>>>>>> ... this (main, as I guess by the title) part of the change? I think
>>>>>> you want to explain what was wrong here and/or why the new arrangement
>>>>>> is better. (I'm sorry, I guess it was this way on prior versions
>>>>>> already, but apparently I didn't notice.)
>>>>>
>>>>> It's what the patch message says - calling put_page_and_type before
>>>>> mem_sharing_page_unlock can cause a deadlock. Since now we are now
>>>>> holding a reference to the page till the end there is no need for the
>>>>> extra get_page/put_page logic when we are dealing with the last_gfn.
>>>>
>>>> The title says "reorder" without any "why".
>>>
>>> Yes, I can't reasonably fit "Calling _put_page_type while also holding
>>> the page_lock for that page can cause a deadlock." into the title. So
>>> it's spelled out in the patch message.
>>
>> Of course not. And I realize _part_ of the changes is indeed what the
>> title says (although for share_pages() that's not obvious from the
>> patch alone). But you do more: You also avoid acquiring an extra
>> reference in share_pages().
> 
> I feel like we are going in circles and having the same conversations
> over and over without really making any headway. You introduced
> grabbing the broken extra reference in 0502e0adae2. It is and was
> actually unneeded to start with if the proper solution was put in
> place, which is what this patch does, reordering things.

I'm not complaining about the changes; I'd merely like the description
to state why they're needed.

>> And since you made me look at the code again: If put_page() is unsafe
>> with a lock held, how come the get_page_and_type() in share_pages()
>> is safe with two such locks held? If it really is, it surely would be
>> worthwhile to state in the description. There's another such instance
>> in mem_sharing_add_to_physmap() (plus a get_page()), and also one
>> where put_page_and_type() gets called with such a lock held (afaics).
> 
> It's possible there are other instances where this may still be
> broken. Right now I only have bandwidth to test and fix the paths I
> use. If that's unacceptable I'm happy to continue development in my
> private fork and leave things as-is upstream.

Similarly, if there are limitations - fine. But please say so in the
description, to avoid giving the impression that the issues have been
taken care of altogether.

Jan

* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 14:28               ` Jan Beulich
@ 2019-07-18 14:35                 ` Tamas K Lengyel
  2019-07-18 14:47                   ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 14:35 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Thu, Jul 18, 2019 at 8:28 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 18.07.2019 15:47, Tamas K Lengyel wrote:
> > On Thu, Jul 18, 2019 at 7:33 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>
> >> On 18.07.2019 15:13, Tamas K Lengyel wrote:
> >>> On Thu, Jul 18, 2019 at 7:12 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>>>
> >>>> On 18.07.2019 14:55, Tamas K Lengyel wrote:
> >>>>> On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>>>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
> >>>>>>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>>>          p2m_type_t smfn_type, cmfn_type;
> >>>>>>>          struct two_gfns tg;
> >>>>>>>          struct rmap_iterator ri;
> >>>>>>> +    unsigned long put_count = 0;
> >>>>>>>
> >>>>>>>          get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
> >>>>>>>                       cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
> >>>>>>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>>>              goto err_out;
> >>>>>>>          }
> >>>>>>>
> >>>>>>> -    /* Acquire an extra reference, for the freeing below to be safe. */
> >>>>>>> -    if ( !get_page(cpage, dom_cow) )
> >>>>>>> -    {
> >>>>>>> -        ret = -EOVERFLOW;
> >>>>>>> -        mem_sharing_page_unlock(secondpg);
> >>>>>>> -        mem_sharing_page_unlock(firstpg);
> >>>>>>> -        goto err_out;
> >>>>>>> -    }
> >>>>>>> -
> >>>>>>>          /* Merge the lists together */
> >>>>>>>          rmap_seed_iterator(cpage, &ri);
> >>>>>>>          while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
> >>>>>>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>>>               * Don't change the type of rmap for the client page. */
> >>>>>>>              rmap_del(gfn, cpage, 0);
> >>>>>>>              rmap_add(gfn, spage);
> >>>>>>> -        put_page_and_type(cpage);
> >>>>>>> +        put_count++;
> >>>>>>>              d = get_domain_by_id(gfn->domain);
> >>>>>>>              BUG_ON(!d);
> >>>>>>>              BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
> >>>>>>>              put_domain(d);
> >>>>>>>          }
> >>>>>>>          ASSERT(list_empty(&cpage->sharing->gfns));
> >>>>>>> +    BUG_ON(!put_count);
> >>>>>>>
> >>>>>>>          /* Clear the rest of the shared state */
> >>>>>>>          page_sharing_dispose(cpage);
> >>>>>>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
> >>>>>>>
> >>>>>>>          /* Free the client page */
> >>>>>>>          put_page_alloc_ref(cpage);
> >>>>>>> -    put_page(cpage);
> >>>>>>> +
> >>>>>>> +    while ( put_count-- )
> >>>>>>> +        put_page_and_type(cpage);
> >>>>>>>
> >>>>>>>          /* We managed to free a domain page. */
> >>>>>>>          atomic_dec(&nr_shared_mfns);
> >>>>>>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
> >>>>>>>          {
> >>>>>>>              if ( !last_gfn )
> >>>>>>>                  mem_sharing_gfn_destroy(page, d, gfn_info);
> >>>>>>> -        put_page_and_type(page);
> >>>>>>> +
> >>>>>>>              mem_sharing_page_unlock(page);
> >>>>>>> +
> >>>>>>>              if ( last_gfn )
> >>>>>>> -        {
> >>>>>>> -            if ( !get_page(page, dom_cow) )
> >>>>>>> -            {
> >>>>>>> -                put_gfn(d, gfn);
> >>>>>>> -                domain_crash(d);
> >>>>>>> -                return -EOVERFLOW;
> >>>>>>> -            }
> >>>>>>>                  put_page_alloc_ref(page);
> >>>>>>> -            put_page(page);
> >>>>>>> -        }
> >>>>>>> +
> >>>>>>> +        put_page_and_type(page);
> >>>>>>>              put_gfn(d, gfn);
> >>>>>>>
> >>>>>>>              return 0;
> >>>>>>
> >>>>>> ... this (main, as I guess by the title) part of the change? I think
> >>>>>> you want to explain what was wrong here and/or why the new arrangement
> >>>>>> is better. (I'm sorry, I guess it was this way on prior versions
> >>>>>> already, but apparently I didn't notice.)
> >>>>>
> >>>>> It's what the patch message says: calling put_page_and_type before
> >>>>> mem_sharing_page_unlock can cause a deadlock. Since we now hold a
> >>>>> reference to the page until the end, there is no need for the extra
> >>>>> get_page/put_page logic when we are dealing with the last_gfn.
> >>>>
> >>>> The title says "reorder" without any "why".
> >>>
> >>> Yes, I can't reasonably fit "Calling _put_page_type while also holding
> >>> the page_lock for that page can cause a deadlock." into the title. So
> >>> it's spelled out in the patch message.
> >>
> >> Of course not. And I realize _part_ of the changes is indeed what the
> >> title says (although for share_pages() that's not obvious from the
> >> patch alone). But you do more: You also avoid acquiring an extra
> >> reference in share_pages().
> >
> > I feel like we are going in circles and having the same conversations
> > over and over without really making any headway. You introduced
> > grabbing the broken extra reference in 0502e0adae2. It is and was
> > actually unneeded to start with if the proper solution was put in
> > place, which is what this patch does, reordering things.
>
> I'm not complaining about the changes; I'd merely like the description
> to state why they're needed.

OK.

>
> >> And since you made me look at the code again: If put_page() is unsafe
> >> with a lock held, how come the get_page_and_type() in share_pages()
> >> is safe with two such locks held? If it really is, it surely would be
> >> worthwhile to state in the description. There's another such instance
> >> in mem_sharing_add_to_physmap() (plus a get_page()), and also one
> >> where put_page_and_type() gets called with such a lock held (afaics).
> >
> > It's possible there are other instances where this may still be
> > broken. Right now I only have bandwidth to test and fix the paths I
> > use. If that's unacceptable I'm happy to continue development in my
> > private fork and leave things as-is upstream.
>
> Similarly, if there are limitations - fine. But please say so in the
> description, to avoid giving the impression that the issues have been
> taken care of altogether.
>

Fair enough.

Tamas


* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 14:35                 ` Tamas K Lengyel
@ 2019-07-18 14:47                   ` Jan Beulich
  2019-07-18 14:56                     ` Tamas K Lengyel
  2019-07-29 15:46                     ` George Dunlap
  0 siblings, 2 replies; 28+ messages in thread
From: Jan Beulich @ 2019-07-18 14:47 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On 18.07.2019 16:35, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 8:28 AM Jan Beulich <JBeulich@suse.com> wrote:
>> On 18.07.2019 15:47, Tamas K Lengyel wrote:
>>> I feel like we are going in circles and having the same conversations
>>> over and over without really making any headway. You introduced
>>> grabbing the broken extra reference in 0502e0adae2. It is and was
>>> actually unneeded to start with if the proper solution was put in
>>> place, which is what this patch does, reordering things.
>>
>> I'm not complaining about the changes; I'd merely like the description
>> to state why they're needed.
> 
> OK.
> 
>>> It's possible there are other instances where this may still be
>>> broken. Right now I only have bandwidth to test and fix the paths I
>>> use. If that's unacceptable I'm happy to continue development in my
>>> private fork and leave things as-is upstream.
>>
>> Similarly, if there are limitations - fine. But please say so in the
>> description, to avoid giving the impression that the issues have been
>> taken care of altogether.
> 
> Fair enough.

And btw - if you just sent an updated description, I think I'd commit
this without further waiting for George to find time to eventually ack
it.

Jan

* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 14:47                   ` Jan Beulich
@ 2019-07-18 14:56                     ` Tamas K Lengyel
  2019-07-29 15:46                     ` George Dunlap
  1 sibling, 0 replies; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-18 14:56 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Thu, Jul 18, 2019 at 8:47 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> On 18.07.2019 16:35, Tamas K Lengyel wrote:
> > On Thu, Jul 18, 2019 at 8:28 AM Jan Beulich <JBeulich@suse.com> wrote:
> >> On 18.07.2019 15:47, Tamas K Lengyel wrote:
> >>> I feel like we are going in circles and having the same conversations
> >>> over and over without really making any headway. You introduced
> >>> grabbing the broken extra reference in 0502e0adae2. It is and was
> >>> actually unneeded to start with if the proper solution was put in
> >>> place, which is what this patch does, reordering things.
> >>
> >> I'm not complaining about the changes; I'd merely like the description
> >> to state why they're needed.
> >
> > OK.
> >
> >>> It's possible there are other instances where this may still be
> >>> broken. Right now I only have bandwidth to test and fix the paths I
> >>> use. If that's unacceptable I'm happy to continue development in my
> >>> private fork and leave things as-is upstream.
> >>
> >> Similarly, if there are limitations - fine. But please say so in the
> >> description, to avoid giving the impression that the issues have been
> >> taken care of altogether.
> >
> > Fair enough.
>
> And btw - if you just sent an updated description, I think I'd commit
> this without further waiting for George to find time to eventually ack
> it.

That would be great. This is the message I typed out for v7:

    x86/mem_sharing: reorder when pages are unlocked and released

    Calling _put_page_type while also holding the page_lock for that page
    can cause a deadlock. There may be code-paths still in place where this
    is an issue, but for normal sharing purposes this has been tested and
    works.

    The extra page references grabbed at certain points are removed because
    they are no longer needed: with this reordering a reference is held for
    as long as necessary, so the extra references are redundant.

    The comment being dropped is incorrect since it's now out-of-date.
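
To spell out what the reordering amounts to, here is a condensed C view of
the two paths as they end up after the hunks above (intermediate lines are
elided, so treat this as a sketch of the patched code rather than a
verbatim copy of the tree):

    /* share_pages(): defer dropping the per-gfn type refs until the locks are gone. */
    while ( (gfn = rmap_iterate(cpage, &ri)) != NULL )
    {
        /* ... move the rmap entry over to spage ... */
        put_count++;               /* was: put_page_and_type(cpage) here */
    }

    mem_sharing_page_unlock(secondpg);
    mem_sharing_page_unlock(firstpg);

    put_page_alloc_ref(cpage);
    while ( put_count-- )
        put_page_and_type(cpage);  /* page_lock no longer held */

    /* __mem_sharing_unshare_page(): unlock before releasing references. */
    mem_sharing_page_unlock(page);

    if ( last_gfn )
        put_page_alloc_ref(page);

    put_page_and_type(page);       /* the reference that was held until here */
    put_gfn(d, gfn);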

Tamas


* Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup
  2019-07-18 13:37           ` Jan Beulich
  2019-07-18 13:52             ` Tamas K Lengyel
@ 2019-07-26 14:00             ` Viktor Mitin
  1 sibling, 0 replies; 28+ messages in thread
From: Viktor Mitin @ 2019-07-26 14:00 UTC (permalink / raw)
  To: Jan Beulich, Lars Kurth, Artem Mygaiev, Julien Grall
  Cc: xen-devel, Tamas K Lengyel

Hi Jan, All,

On Thu, Jul 18, 2019 at 4:38 PM Jan Beulich <JBeulich@suse.com> wrote:

> >> But you're actively breaking Xen style here (and below).
> >
> > I don't see any mention of style restrictions regarding this in
> > CODING_STYLE. If there is, I would prefer changing that so we can
> > automate style checks which IMHO are the biggest waste of everyone's
> > time to do manually.
>
> ./CODING_STYLE fails to mention many aspects of what we do everywhere.
> Almost every attempt at updating it has failed for me in the past, often
> due to an entire lack of responses on patches (in other cases also because
> of people disagreeing). Despite you being the maintainer of the file I
> strongly think you shouldn't actively break style that's in line with
> large swathes of code elsewhere.

The example above demonstrates a common situation with the Xen coding style rules.
I agree with you that ./CODING_STYLE should be improved by adding explicit rules,
so that all formatting aspects are covered explicitly.
IMHO there should not be any implicit, unwritten code formatting rules;
otherwise it will be really hard to automate code formatting checks.
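
As a toy illustration of how little it takes once a rule is written down
explicitly, a check for a handful of hypothetical rules (no hard tabs, no
trailing blanks, at most 80 columns -- example rules only, not quoted from
./CODING_STYLE) could be as small as:

    /*
     * Minimal sketch of an automated style check.  The three rules it
     * enforces are examples for illustration, not the actual Xen rules.
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        FILE *f;
        char buf[1024];
        unsigned int line = 0, issues = 0;

        if ( argc != 2 || (f = fopen(argv[1], "r")) == NULL )
        {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 2;
        }

        while ( fgets(buf, sizeof(buf), f) != NULL )
        {
            size_t len = strcspn(buf, "\n");   /* length without the newline */

            line++;
            if ( memchr(buf, '\t', len) != NULL )
            {
                printf("%u: hard tab\n", line);
                issues++;
            }
            if ( len > 0 && buf[len - 1] == ' ' )
            {
                printf("%u: trailing whitespace\n", line);
                issues++;
            }
            if ( len > 80 )
            {
                printf("%u: line longer than 80 columns\n", line);
                issues++;
            }
        }

        fclose(f);
        return issues != 0;
    }

Anything beyond trivially mechanical rules would of course need a proper
tool, but the point stands: explicit rules are checkable, implicit ones
are not.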

Thanks


* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-18 14:47                   ` Jan Beulich
  2019-07-18 14:56                     ` Tamas K Lengyel
@ 2019-07-29 15:46                     ` George Dunlap
  2019-07-29 15:59                       ` Jan Beulich
  2019-07-29 16:01                       ` Tamas K Lengyel
  1 sibling, 2 replies; 28+ messages in thread
From: George Dunlap @ 2019-07-29 15:46 UTC (permalink / raw)
  To: Jan Beulich, Tamas K Lengyel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On 7/18/19 3:47 PM, Jan Beulich wrote:
> On 18.07.2019 16:35, Tamas K Lengyel wrote:
>> On Thu, Jul 18, 2019 at 8:28 AM Jan Beulich <JBeulich@suse.com> wrote:
>>> On 18.07.2019 15:47, Tamas K Lengyel wrote:
>>>> I feel like we are going in circles and having the same conversations
>>>> over and over without really making any headway. You introduced
>>>> grabbing the broken extra reference in 0502e0adae2. It is and was
>>>> actually unneeded to start with if the proper solution was put in
>>>> place, which is what this patch does, reordering things.
>>>
>>> I'm not complaining about the changes; I'd merely like the description
>>> to state why they're needed.
>>
>> OK.
>>
>>>> It's possible there are other instances where this may still be
>>>> broken. Right now I only have bandwidth to test and fix the paths I
>>>> use. If that's unacceptable I'm happy to continue development in my
>>>> private fork and leave things as-is upstream.
>>>
>>> Similarly, if there are limitations - fine. But please say so in the
>>> description, to avoid giving the impression that the issues have been
>>> taken care of altogether.
>>
>> Fair enough.
> 
> And btw - if you just sent an updated description, I think I'd commit
> this without further waiting for George to find time to eventually ack
> it.

Thanks -- but it looks like maybe you didn't commit the final patch of
the series ("x86/mem_sharing: style cleanup")?

 -George


* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-29 15:46                     ` George Dunlap
@ 2019-07-29 15:59                       ` Jan Beulich
  2019-07-30  9:31                         ` George Dunlap
  2019-07-29 16:01                       ` Tamas K Lengyel
  1 sibling, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2019-07-29 15:59 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	xen-devel, Roger Pau Monne

On 29.07.2019 17:46, George Dunlap wrote:
> On 7/18/19 3:47 PM, Jan Beulich wrote:
>> And btw - if you just sent an updated description, I think I'd commit
>> this without further waiting for George to find time to eventually ack
>> it.
> 
> Thanks -- but it looks like maybe you didn't commit the final patch of
> the series ("x86/mem_sharing: style cleanup")?

Yes, because of objections to the damage the patch actually does.

Jan

* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-29 15:46                     ` George Dunlap
  2019-07-29 15:59                       ` Jan Beulich
@ 2019-07-29 16:01                       ` Tamas K Lengyel
  1 sibling, 0 replies; 28+ messages in thread
From: Tamas K Lengyel @ 2019-07-29 16:01 UTC (permalink / raw)
  To: George Dunlap
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich, xen-devel,
	Roger Pau Monne

On Mon, Jul 29, 2019 at 9:47 AM George Dunlap <george.dunlap@citrix.com> wrote:
>
> On 7/18/19 3:47 PM, Jan Beulich wrote:
> > On 18.07.2019 16:35, Tamas K Lengyel wrote:
> >> On Thu, Jul 18, 2019 at 8:28 AM Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.07.2019 15:47, Tamas K Lengyel wrote:
> >>>> I feel like we are going in circles and having the same conversations
> >>>> over and over without really making any headway. You introduced
> >>>> grabbing the broken extra reference in 0502e0adae2. It is and was
> >>>> actually unneeded to start with if the proper solution was put in
> >>>> place, which is what this patch does, reordering things.
> >>>
> >>> I'm not complaining about the changes; I'd merely like the description
> >>> to state why they're needed.
> >>
> >> OK.
> >>
> >>>> It's possible there are other instances where this may still be
> >>>> broken. Right now I only have bandwidth to test and fix the paths I
> >>>> use. If that's unacceptable I'm happy to continue development in my
> >>>> private fork and leave things as-is upstream.
> >>>
> >>> Similarly, if there are limitations - fine. But please say so in the
> >>> description, to avoid giving the impression that the issues have been
> >>> taken care of altogether.
> >>
> >> Fair enough.
> >
> > And btw - if you just sent an updated description, I think I'd commit
> > this without further waiting for George to find time to eventually ack
> > it.
>
> Thanks -- but it looks like maybe you didn't commit the final patch of
> the series ("x86/mem_sharing: style cleanup")?

Jan requested additional style cleanups to be applied. I'll try to
send those out this week.

Tamas


* Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released
  2019-07-29 15:59                       ` Jan Beulich
@ 2019-07-30  9:31                         ` George Dunlap
  0 siblings, 0 replies; 28+ messages in thread
From: George Dunlap @ 2019-07-30  9:31 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	xen-devel, Roger Pau Monne

On 7/29/19 4:59 PM, Jan Beulich wrote:
> On 29.07.2019 17:46, George Dunlap wrote:
>> On 7/18/19 3:47 PM, Jan Beulich wrote:
>>> And btw - if you just sent an updated description, I think I'd commit
>>> this without further waiting for George to find time to eventually ack
>>> it.
>>
>> Thanks -- but it looks like maybe you didn't commit the final patch of
>> the series ("x86/mem_sharing: style cleanup")?
> 
> Yes, because of objections to the damage the patch actually does.

Right -- for some reason in that thread I was only getting Tamas'
replies, not yours.  But my main question was whether I could delete the
series from my review queue; and since it's to be re-sent, the answer
turns out to be "yes".

Thanks,
 -George


end of thread

Thread overview: 28+ messages
2019-07-17 19:33 [Xen-devel] [PATCH v6 0/5] x86/mem_sharing: assorted fixes Tamas K Lengyel
2019-07-17 19:33 ` [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released Tamas K Lengyel
2019-07-18 10:46   ` Jan Beulich
2019-07-18 12:55     ` Tamas K Lengyel
2019-07-18 13:12       ` Jan Beulich
2019-07-18 13:13         ` Tamas K Lengyel
2019-07-18 13:33           ` Jan Beulich
2019-07-18 13:47             ` Tamas K Lengyel
2019-07-18 14:28               ` Jan Beulich
2019-07-18 14:35                 ` Tamas K Lengyel
2019-07-18 14:47                   ` Jan Beulich
2019-07-18 14:56                     ` Tamas K Lengyel
2019-07-29 15:46                     ` George Dunlap
2019-07-29 15:59                       ` Jan Beulich
2019-07-30  9:31                         ` George Dunlap
2019-07-29 16:01                       ` Tamas K Lengyel
2019-07-17 19:33 ` [Xen-devel] [PATCH v6 2/5] x86/mem_sharing: copy a page_lock version to be internal to memshr Tamas K Lengyel
2019-07-17 19:33 ` [Xen-devel] [PATCH v6 3/5] x86/mem_sharing: enable mem_share audit mode only in debug builds Tamas K Lengyel
2019-07-17 19:33 ` [Xen-devel] [PATCH v6 4/5] x86/mem_sharing: compile mem_sharing subsystem only when kconfig is enabled Tamas K Lengyel
2019-07-17 19:33 ` [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup Tamas K Lengyel
2019-07-18 10:55   ` Jan Beulich
2019-07-18 12:59     ` Tamas K Lengyel
2019-07-18 13:14       ` Jan Beulich
2019-07-18 13:16         ` Tamas K Lengyel
2019-07-18 13:37           ` Jan Beulich
2019-07-18 13:52             ` Tamas K Lengyel
2019-07-18 14:09               ` Tamas K Lengyel
2019-07-26 14:00             ` Viktor Mitin
