From: Wei Liu <wei.liu2@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [PATCH v4 31/31] x86/mm: move {get, put}_page_from_l{2, 3, 4}e
Date: Thu, 17 Aug 2017 15:44:56 +0100
Message-ID: <20170817144456.18989-32-wei.liu2@citrix.com>
In-Reply-To: <20170817144456.18989-1-wei.liu2@citrix.com>

These functions are only used by PV code.

Fix coding style issues while moving them. Move their declarations to the
PV-specific header file.
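
For !CONFIG_PV builds the new header also keeps common code compiling via
inline stubs. A condensed sketch of the pattern for one helper (the guard
symbol is assumed to be CONFIG_PV, matching the rest of this series; see
the full pv/mm.h hunk below):

    #ifdef CONFIG_PV
    int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn,
                          struct domain *d);
    #else
    /* Stub: without PV support, common callers just see an error. */
    static inline int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn,
                                        struct domain *d)
    { return -EINVAL; }
    #endif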

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c           | 253 --------------------------------------------
 xen/arch/x86/pv/mm.c        | 246 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h    |  10 --
 xen/include/asm-x86/pv/mm.h |  29 +++++
 4 files changed, 275 insertions(+), 263 deletions(-)
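
Not part of the patch, for reviewers: a minimal sketch of the tri-state
return convention the moved get_page_from_l*e() helpers keep (1 => entry
not present, 0 => success with references taken, <0 => error code). The
caller validate_l2_slot() is a hypothetical name, not code from this
series:

    static int validate_l2_slot(l2_pgentry_t l2e, unsigned long pfn,
                                struct domain *d)
    {
        int rc = get_page_from_l2e(l2e, pfn, d);

        if ( rc < 0 )
            return rc;          /* e.g. -EINVAL for disallowed flags */

        if ( rc == 1 )
            return 0;           /* not present: no reference taken */

        /*
         * rc == 0: reference(s) now held; the eventual teardown path
         * must drop them via put_page_from_l2e(l2e, pfn).
         */
        return 0;
    }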

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 70559c687c..9750f657ca 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -507,72 +507,6 @@ int get_page_and_type_from_mfn(mfn_t mfn, unsigned long type, struct domain *d,
     return rc;
 }
 
-static void put_data_page(
-    struct page_info *page, int writeable)
-{
-    if ( writeable )
-        put_page_and_type(page);
-    else
-        put_page(page);
-}
-
-/*
- * We allow root tables to map each other (a.k.a. linear page tables). It
- * needs some special care with reference counts and access permissions:
- *  1. The mapping entry must be read-only, or the guest may get write access
- *     to its own PTEs.
- *  2. We must only bump the reference counts for an *already validated*
- *     L2 table, or we can end up in a deadlock in get_page_type() by waiting
- *     on a validation that is required to complete that validation.
- *  3. We only need to increment the reference counts for the mapped page
- *     frame if it is mapped by a different root table. This is sufficient and
- *     also necessary to allow validation of a root table mapping itself.
- */
-#define define_get_linear_pagetable(level)                                  \
-static int                                                                  \
-get_##level##_linear_pagetable(                                             \
-    level##_pgentry_t pde, unsigned long pde_pfn, struct domain *d)         \
-{                                                                           \
-    unsigned long x, y;                                                     \
-    struct page_info *page;                                                 \
-    unsigned long pfn;                                                      \
-                                                                            \
-    if ( (level##e_get_flags(pde) & _PAGE_RW) )                             \
-    {                                                                       \
-        gdprintk(XENLOG_WARNING,                                            \
-                 "Attempt to create linear p.t. with write perms\n");       \
-        return 0;                                                           \
-    }                                                                       \
-                                                                            \
-    if ( (pfn = level##e_get_pfn(pde)) != pde_pfn )                         \
-    {                                                                       \
-        /* Make sure the mapped frame belongs to the correct domain. */     \
-        if ( unlikely(!get_page_from_mfn(_mfn(pfn), d)) )                   \
-            return 0;                                                       \
-                                                                            \
-        /*                                                                  \
-         * Ensure that the mapped frame is an already-validated page table. \
-         * If so, atomically increment the count (checking for overflow).   \
-         */                                                                 \
-        page = mfn_to_page(pfn);                                            \
-        y = page->u.inuse.type_info;                                        \
-        do {                                                                \
-            x = y;                                                          \
-            if ( unlikely((x & PGT_count_mask) == PGT_count_mask) ||        \
-                 unlikely((x & (PGT_type_mask|PGT_validated)) !=            \
-                          (PGT_##level##_page_table|PGT_validated)) )       \
-            {                                                               \
-                put_page(page);                                             \
-                return 0;                                                   \
-            }                                                               \
-        }                                                                   \
-        while ( (y = cmpxchg(&page->u.inuse.type_info, x, x + 1)) != x );   \
-    }                                                                       \
-                                                                            \
-    return 1;                                                               \
-}
-
-
 bool is_iomem_page(mfn_t mfn)
 {
     struct page_info *page;
@@ -862,108 +796,6 @@ get_page_from_l1e(
 }
 
 
-/* NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'. */
-/*
- * get_page_from_l2e returns:
- *   1 => page not present
- *   0 => success
- *  <0 => error code
- */
-define_get_linear_pagetable(l2);
-int
-get_page_from_l2e(
-    l2_pgentry_t l2e, unsigned long pfn, struct domain *d)
-{
-    unsigned long mfn = l2e_get_pfn(l2e);
-    int rc;
-
-    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
-        return 1;
-
-    if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
-    {
-        gdprintk(XENLOG_WARNING, "Bad L2 flags %x\n",
-                 l2e_get_flags(l2e) & L2_DISALLOW_MASK);
-        return -EINVAL;
-    }
-
-    if ( !(l2e_get_flags(l2e) & _PAGE_PSE) )
-    {
-        rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, 0,
-                                        false);
-        if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, pfn, d) )
-            rc = 0;
-        return rc;
-    }
-
-    return -EINVAL;
-}
-
-
-/*
- * get_page_from_l3e returns:
- *   1 => page not present
- *   0 => success
- *  <0 => error code
- */
-define_get_linear_pagetable(l3);
-int
-get_page_from_l3e(
-    l3_pgentry_t l3e, unsigned long pfn, struct domain *d, int partial)
-{
-    int rc;
-
-    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
-        return 1;
-
-    if ( unlikely((l3e_get_flags(l3e) & l3_disallow_mask(d))) )
-    {
-        gdprintk(XENLOG_WARNING, "Bad L3 flags %x\n",
-                 l3e_get_flags(l3e) & l3_disallow_mask(d));
-        return -EINVAL;
-    }
-
-    rc = get_page_and_type_from_mfn(_mfn(l3e_get_pfn(l3e)), PGT_l2_page_table,
-                                    d, partial, true);
-    if ( unlikely(rc == -EINVAL) &&
-         !is_pv_32bit_domain(d) &&
-         get_l3_linear_pagetable(l3e, pfn, d) )
-        rc = 0;
-
-    return rc;
-}
-
-/*
- * get_page_from_l4e returns:
- *   1 => page not present
- *   0 => success
- *  <0 => error code
- */
-define_get_linear_pagetable(l4);
-int
-get_page_from_l4e(
-    l4_pgentry_t l4e, unsigned long pfn, struct domain *d, int partial)
-{
-    int rc;
-
-    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
-        return 1;
-
-    if ( unlikely((l4e_get_flags(l4e) & L4_DISALLOW_MASK)) )
-    {
-        gdprintk(XENLOG_WARNING, "Bad L4 flags %x\n",
-                 l4e_get_flags(l4e) & L4_DISALLOW_MASK);
-        return -EINVAL;
-    }
-
-    rc = get_page_and_type_from_mfn(_mfn(l4e_get_pfn(l4e)), PGT_l3_page_table,
-                                    d, partial, true);
-    if ( unlikely(rc == -EINVAL) && get_l4_linear_pagetable(l4e, pfn, d) )
-        rc = 0;
-
-    return rc;
-}
-
 void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 {
     unsigned long     pfn = l1e_get_pfn(l1e);
@@ -1024,91 +856,6 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 }
 
 
-/*
- * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
- * Note also that this automatically deals correctly with linear p.t.'s.
- */
-int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
-{
-    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_pfn(l2e) == pfn) )
-        return 1;
-
-    if ( l2e_get_flags(l2e) & _PAGE_PSE )
-    {
-        struct page_info *page = mfn_to_page(l2e_get_pfn(l2e));
-        unsigned int i;
-
-        for ( i = 0; i < (1u << PAGETABLE_ORDER); i++, page++ )
-            put_page_and_type(page);
-    } else
-        put_page_and_type(l2e_get_page(l2e));
-
-    return 0;
-}
-
-int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
-                      bool defer)
-{
-    struct page_info *pg;
-
-    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || (l3e_get_pfn(l3e) == pfn) )
-        return 1;
-
-    if ( unlikely(l3e_get_flags(l3e) & _PAGE_PSE) )
-    {
-        unsigned long mfn = l3e_get_pfn(l3e);
-        int writeable = l3e_get_flags(l3e) & _PAGE_RW;
-
-        ASSERT(!(mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)));
-        do {
-            put_data_page(mfn_to_page(mfn), writeable);
-        } while ( ++mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1) );
-
-        return 0;
-    }
-
-    pg = l3e_get_page(l3e);
-
-    if ( unlikely(partial > 0) )
-    {
-        ASSERT(!defer);
-        return put_page_type_preemptible(pg);
-    }
-
-    if ( defer )
-    {
-        current->arch.old_guest_table = pg;
-        return 0;
-    }
-
-    return put_page_and_type_preemptible(pg);
-}
-
-int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
-                      bool defer)
-{
-    if ( (l4e_get_flags(l4e) & _PAGE_PRESENT) &&
-         (l4e_get_pfn(l4e) != pfn) )
-    {
-        struct page_info *pg = l4e_get_page(l4e);
-
-        if ( unlikely(partial > 0) )
-        {
-            ASSERT(!defer);
-            return put_page_type_preemptible(pg);
-        }
-
-        if ( defer )
-        {
-            current->arch.old_guest_table = pg;
-            return 0;
-        }
-
-        return put_page_and_type_preemptible(pg);
-    }
-    return 1;
-}
-
 bool fill_ro_mpt(unsigned long mfn)
 {
     l4_pgentry_t *l4tab = map_domain_page(_mfn(mfn));
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 19b2ae588e..ad35808c51 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -777,6 +777,252 @@ void pv_invalidate_shadow_ldt(struct vcpu *v, bool flush)
     spin_unlock(&v->arch.pv_vcpu.shadow_ldt_lock);
 }
 
+/*
+ * We allow root tables to map each other (a.k.a. linear page tables). It
+ * needs some special care with reference counts and access permissions:
+ *  1. The mapping entry must be read-only, or the guest may get write access
+ *     to its own PTEs.
+ *  2. We must only bump the reference counts for an *already validated*
+ *     L2 table, or we can end up in a deadlock in get_page_type() by waiting
+ *     on a validation that is required to complete that validation.
+ *  3. We only need to increment the reference counts for the mapped page
+ *     frame if it is mapped by a different root table. This is sufficient and
+ *     also necessary to allow validation of a root table mapping itself.
+ */
+#define define_get_linear_pagetable(level)                                  \
+static int                                                                  \
+get_##level##_linear_pagetable(                                             \
+    level##_pgentry_t pde, unsigned long pde_pfn, struct domain *d)         \
+{                                                                           \
+    unsigned long x, y;                                                     \
+    struct page_info *page;                                                 \
+    unsigned long pfn;                                                      \
+                                                                            \
+    if ( (level##e_get_flags(pde) & _PAGE_RW) )                             \
+    {                                                                       \
+        gdprintk(XENLOG_WARNING,                                            \
+                 "Attempt to create linear p.t. with write perms\n");       \
+        return 0;                                                           \
+    }                                                                       \
+                                                                            \
+    if ( (pfn = level##e_get_pfn(pde)) != pde_pfn )                         \
+    {                                                                       \
+        /* Make sure the mapped frame belongs to the correct domain. */     \
+        if ( unlikely(!get_page_from_mfn(_mfn(pfn), d)) )                   \
+            return 0;                                                       \
+                                                                            \
+        /*                                                                  \
+         * Ensure that the mapped frame is an already-validated page table. \
+         * If so, atomically increment the count (checking for overflow).   \
+         */                                                                 \
+        page = mfn_to_page(pfn);                                            \
+        y = page->u.inuse.type_info;                                        \
+        do {                                                                \
+            x = y;                                                          \
+            if ( unlikely((x & PGT_count_mask) == PGT_count_mask) ||        \
+                 unlikely((x & (PGT_type_mask|PGT_validated)) !=            \
+                          (PGT_##level##_page_table|PGT_validated)) )       \
+            {                                                               \
+                put_page(page);                                             \
+                return 0;                                                   \
+            }                                                               \
+        }                                                                   \
+        while ( (y = cmpxchg(&page->u.inuse.type_info, x, x + 1)) != x );   \
+    }                                                                       \
+                                                                            \
+    return 1;                                                               \
+}
+
+/* NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'. */
+/*
+ * get_page_from_l2e returns:
+ *   1 => page not present
+ *   0 => success
+ *  <0 => error code
+ */
+define_get_linear_pagetable(l2);
+int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn, struct domain *d)
+{
+    unsigned long mfn = l2e_get_pfn(l2e);
+    int rc;
+
+    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
+        return 1;
+
+    if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
+    {
+        gdprintk(XENLOG_WARNING, "Bad L2 flags %x\n",
+                 l2e_get_flags(l2e) & L2_DISALLOW_MASK);
+        return -EINVAL;
+    }
+
+    if ( !(l2e_get_flags(l2e) & _PAGE_PSE) )
+    {
+        rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, 0,
+                                        false);
+        if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, pfn, d) )
+            rc = 0;
+        return rc;
+    }
+
+    return -EINVAL;
+}
+
+/*
+ * get_page_from_l3e returns:
+ *   1 => page not present
+ *   0 => success
+ *  <0 => error code
+ */
+define_get_linear_pagetable(l3);
+int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, struct domain *d,
+                      int partial)
+{
+    int rc;
+
+    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
+        return 1;
+
+    if ( unlikely((l3e_get_flags(l3e) & l3_disallow_mask(d))) )
+    {
+        gdprintk(XENLOG_WARNING, "Bad L3 flags %x\n",
+                 l3e_get_flags(l3e) & l3_disallow_mask(d));
+        return -EINVAL;
+    }
+
+    rc = get_page_and_type_from_mfn(_mfn(l3e_get_pfn(l3e)), PGT_l2_page_table,
+                                    d, partial, true);
+    if ( unlikely(rc == -EINVAL) &&
+         !is_pv_32bit_domain(d) &&
+         get_l3_linear_pagetable(l3e, pfn, d) )
+        rc = 0;
+
+    return rc;
+}
+
+/*
+ * get_page_from_l4e returns:
+ *   1 => page not present
+ *   0 => success
+ *  <0 => error code
+ */
+define_get_linear_pagetable(l4);
+int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, struct domain *d,
+                      int partial)
+{
+    int rc;
+
+    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+        return 1;
+
+    if ( unlikely((l4e_get_flags(l4e) & L4_DISALLOW_MASK)) )
+    {
+        gdprintk(XENLOG_WARNING, "Bad L4 flags %x\n",
+                 l4e_get_flags(l4e) & L4_DISALLOW_MASK);
+        return -EINVAL;
+    }
+
+    rc = get_page_and_type_from_mfn(_mfn(l4e_get_pfn(l4e)), PGT_l3_page_table,
+                                    d, partial, true);
+    if ( unlikely(rc == -EINVAL) && get_l4_linear_pagetable(l4e, pfn, d) )
+        rc = 0;
+
+    return rc;
+}
+
+/*
+ * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
+ * Note also that this automatically deals correctly with linear p.t.'s.
+ */
+int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
+{
+    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_pfn(l2e) == pfn) )
+        return 1;
+
+    if ( l2e_get_flags(l2e) & _PAGE_PSE )
+    {
+        struct page_info *page = mfn_to_page(l2e_get_pfn(l2e));
+        unsigned int i;
+
+        for ( i = 0; i < (1u << PAGETABLE_ORDER); i++, page++ )
+            put_page_and_type(page);
+    } else
+        put_page_and_type(l2e_get_page(l2e));
+
+    return 0;
+}
+
+static void put_data_page(struct page_info *page, bool writeable)
+{
+    if ( writeable )
+        put_page_and_type(page);
+    else
+        put_page(page);
+}
+
+int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
+                      bool defer)
+{
+    struct page_info *pg;
+
+    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || (l3e_get_pfn(l3e) == pfn) )
+        return 1;
+
+    if ( unlikely(l3e_get_flags(l3e) & _PAGE_PSE) )
+    {
+        unsigned long mfn = l3e_get_pfn(l3e);
+        int writeable = l3e_get_flags(l3e) & _PAGE_RW;
+
+        ASSERT(!(mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)));
+        do {
+            put_data_page(mfn_to_page(mfn), writeable);
+        } while ( ++mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1) );
+
+        return 0;
+    }
+
+    pg = l3e_get_page(l3e);
+
+    if ( unlikely(partial > 0) )
+    {
+        ASSERT(!defer);
+        return put_page_type_preemptible(pg);
+    }
+
+    if ( defer )
+    {
+        current->arch.old_guest_table = pg;
+        return 0;
+    }
+
+    return put_page_and_type_preemptible(pg);
+}
+
+int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
+                      bool defer)
+{
+    if ( (l4e_get_flags(l4e) & _PAGE_PRESENT) &&
+         (l4e_get_pfn(l4e) != pfn) )
+    {
+        struct page_info *pg = l4e_get_page(l4e);
+
+        if ( unlikely(partial > 0) )
+        {
+            ASSERT(!defer);
+            return put_page_type_preemptible(pg);
+        }
+
+        if ( defer )
+        {
+            current->arch.old_guest_table = pg;
+            return 0;
+        }
+
+        return put_page_and_type_preemptible(pg);
+    }
+    return 1;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7480341240..4eeaf709c1 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -358,16 +358,6 @@ int  put_old_guest_table(struct vcpu *);
 int  get_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner,
                        struct domain *pg_owner);
 void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner);
-int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn, struct domain *d);
-int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn);
-int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, struct domain *d,
-                      int partial);
-int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
-                      bool defer);
-int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, struct domain *d,
-                      int partial);
-int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
-                      bool defer);
 void get_page_light(struct page_info *page);
 bool get_page_from_mfn(mfn_t mfn, struct domain *d);
 int get_page_and_type_from_mfn(mfn_t mfn, unsigned long type, struct domain *d,
diff --git a/xen/include/asm-x86/pv/mm.h b/xen/include/asm-x86/pv/mm.h
index 1e2871fb58..acab7b7c42 100644
--- a/xen/include/asm-x86/pv/mm.h
+++ b/xen/include/asm-x86/pv/mm.h
@@ -102,6 +102,17 @@ int pv_free_page_type(struct page_info *page, unsigned long type,
 
 void pv_invalidate_shadow_ldt(struct vcpu *v, bool flush);
 
+int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn, struct domain *d);
+int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn);
+int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, struct domain *d,
+                      int partial);
+int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
+                      bool defer);
+int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, struct domain *d,
+                      int partial);
+int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
+                      bool defer);
+
 #else
 
 #include <xen/errno.h>
@@ -143,6 +154,24 @@ static inline int pv_free_page_type(struct page_info *page, unsigned long type,
 
 static inline void pv_invalidate_shadow_ldt(struct vcpu *v, bool flush) {}
 
+static inline int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn,
+                                    struct domain *d)
+{ return -EINVAL; }
+static inline int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
+{ return -EINVAL; }
+static inline int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn,
+                                    struct domain *d, int partial)
+{ return -EINVAL; }
+static inline int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn,
+                                    int partial, bool defer)
+{ return -EINVAL; }
+static inline int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn,
+                                    struct domain *d, int partial)
+{ return -EINVAL; }
+static inline int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn,
+                                    int partial, bool defer)
+{ return -EINVAL; }
+
 #endif
 
 #endif /* __X86_PV_MM_H__ */
-- 
2.11.0


