* [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm
@ 2022-05-11 18:47 Oleksandr Tyshchenko
  2022-05-11 18:47 ` [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping() Oleksandr Tyshchenko
  2022-06-23 17:50 ` [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Julien Grall
  0 siblings, 2 replies; 12+ messages in thread
From: Oleksandr Tyshchenko @ 2022-05-11 18:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Rework the Arm implementation to store the grant table frame GFN
in struct page_info directly instead of keeping it in
standalone status/shared arrays. This patch is based on
the assumption that a grant table page is a xenheap page.

To cover a 64-bit/40-bit IPA on Arm64/Arm32 we need space to hold
a 52-bit/28-bit value plus an extra bit, respectively. In order
not to grow the size of struct page_info, borrow the required
number of bits from type_info's count portion, which the current
code can easily spare (only 1 bit is used on Arm today).
Please note, to minimize code changes and avoid introducing
extra #ifdef-s to the header, we keep the same number of
bits on both subarches, although the count portion on Arm64
could be wider, so we waste some bits there.
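
For illustration, here is a hedged sketch of the resulting type_info
layout, derived from the PG_shift()/PG_mask() helpers already used in
xen/arch/arm/include/asm/mm.h (see the hunk below; the bit positions
are spelled out here only for clarity):

    /*
     * type_info layout (sketch, not authoritative):
     *
     *   arm64 (BITS_PER_LONG = 64)        arm32 (BITS_PER_LONG = 32)
     *   bit  63     : PGT_type_mask       bit  31     : PGT_type_mask
     *   bits 62..61 : PGT_count_mask      bits 30..29 : PGT_count_mask
     *   bits 60..0  : GFN portion         bits 28..0  : GFN portion
     */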

Introduce the corresponding PGT_* constructs and the accessors
page_get(set)_xenheap_gfn(). Please note, all accesses to
the GFN portion of the type_info field should always be protected
by the P2M lock. Where it is not feasible to satisfy
that requirement (risk of deadlock, lock inversion, etc)
it is important to make sure that all non-protected updates
to this field are atomic.
As several non-protected read accesses still exist in the
current code (most calls to page_get_xenheap_gfn() are not
protected by the P2M lock), the subsequent patch introduces
hardening code for p2m_remove_mapping(), executed with the P2M
lock held, to check for any difference between what is
already mapped and what is requested to be unmapped.

Update the existing gnttab macros to retrieve the GFN value from
its new location. Also update the use of the count portion of the
type_info field on Arm in share_xen_page_with_guest().

While at it, extend this simplified M2P-like approach to any
xenheap pages processed in xenmem_add_to_physmap_one()
except foreign ones. Update the code to set the GFN portion after
establishing the new mapping for the xenheap page in said function,
and to clear the GFN portion when putting a reference on that page
in p2m_put_l3_page().

For everything to work correctly, introduce the arch-specific
initialization pattern PGT_TYPE_INFO_INITIALIZER, applied
to the type_info field during initialization in alloc_heap_pages()
and acquire_staticmem_pages(). The pattern's purpose on Arm
is to clear the GFN portion before use; on x86 it is just
a stub.

This patch is intended to fix a potential issue on Arm
which might happen when remapping a grant-table frame.
A guest (or the toolstack) will unmap the grant-table frame
using XENMEM_remove_from_physmap. This is a generic hypercall,
so on x86 we rely on the fact that the M2P entry will
be cleared on removal. For an architecture without an M2P,
the GFN would still be present in the grant frame/status
array. So on the next call to map the page, we would end up
requesting the P2M to remove whatever mapping is at the given
GFN, which could well be another mapping.
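
As a hedged sketch of the failing sequence on an M2P-less architecture
(the GFN values below are purely illustrative):

  1. Grant frame F is mapped at GFN 0x100000; the arch records 0x100000
     in the standalone shared_gfn[] array.
  2. The guest unmaps F via XENMEM_remove_from_physmap; the P2M entry at
     0x100000 is cleared, but shared_gfn[] still holds 0x100000.
  3. The guest maps F again at GFN 0x200000; gnttab_set_frame_gfn() sees
     the stale 0x100000 and asks the P2M to remove whatever is now mapped
     there, which may well be an unrelated mapping.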

Besides that, this patch simplifies the arch code on Arm by
removing the arrays and the corresponding management code;
as a result the gnttab_init_arch/gnttab_destroy_arch helpers
and struct grant_table_arch become useless and can be
dropped globally.

Suggested-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Dear @RISC-V maintainers, please note that in the current patch I drop
the arch-specific gnttab_init(destroy)_arch helpers as unneeded for
both Arm and x86. Please let me know if you are going to reuse them
in the near future and I will retain them.

You can find the related discussions at:
https://lore.kernel.org/xen-devel/93d0df14-2c8a-c2e3-8c51-54412190171c@xen.org/
https://lore.kernel.org/xen-devel/1628890077-12545-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1631652245-30746-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1632425551-18910-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1641424268-12968-1-git-send-email-olekstysh@gmail.com/

Changes RFC1 -> RFC2:
 - update patch description
 - add/update comments in code
 - clarify check in p2m_put_l3_page()
 - introduce arch_alloc_xenheap_page() and arch_free_xenheap_page()
   and drop page_arch_init()
 - add ASSERT to gnttab_shared_page() and gnttab_status_page()
 - rework changes to Arm's struct page_info: do not split type_info,
   allocate GFN portion by reducing count portion, create corresponding
   PGT_* construct, etc
 - update page_get_frame_gfn() and page_set_frame_gfn()
 - update the use of count portion on Arm
 - drop the leading underscore in the macro parameter names

Changes RFC2 -> RFC3:
 - update patch description
 - drop PGT_count_base and MASK_INSR() in share_xen_page_with_guest()
 - update alloc_xenheap_page() and free_xenheap_page() for SEPARATE_XENHEAP
   case (Arm32)
 - provide an extra bit for GFN portion, to get PGT_INVALID_FRAME_GFN
   one bit more than the maximum number of physical address bits on Arm32

Changes RFC3 -> V4:
 - rebase on Jan's "gnttab: remove guest_physmap_remove_page() call
   from gnttab_map_frame()"
 - finally resolve the locking question per Julien's recent suggestion,
   so drop the RFC tag
 - update comments in Arm's mm.h/p2m.c to not mention grant table
 - convert page_set(get)_frame_gfn to static inline func and
   rename them to page_set(get)_xenheap_gfn()
 - rename PGT_INVALID_FRAME_GFN to PGT_INVALID_XENHEAP_GFN
 - add ASSERT(is_xen_heap_page(...)) in page_set(get)_frame_gfn
 - remove BUG_ON() in arch_free_xenheap_page
 - remove local type_info in share_xen_page_with_guest()
 - remove an extra argument p2m in p2m_put_l3_page()
 - remove #ifdef CONFIG_GRANT_TABLE in p2m_put_l3_page()
 - also cover read-only pages by using p2m_is_ram instead of a check
   against p2m_ram_rw in p2m_put_l3_page() and use "else if" construct
 - call arch_free_xenheap_page() before clearing the PGC_xen_heap in
   free_xenheap_pages()
 - remove ASSERT() in gnttab_shared(status)_page and use simpler
   virt_to_page
 - remove local pg_ in gnttab_shared(status)_gfn
 - update patch description to reflect recent changes

Changes V4 -> V5:
 - rebase on latest staging
 - update patch description
 - drop arch_alloc(free)_xenheap_page macro and use arch-specific
   initialization pattern to clear GFN portion before use
 - add const to struct page_info *p in page_get_xenheap_gfn
 - fix a breakage on Arm32

Changes V5 -> V6:
 - update patch description
 - add/update comments in code
 - s/PGT_TYPE_INFO_INIT_PATTERN/PGT_TYPE_INFO_INITIALIZER
 - define PGT_TYPE_INFO_INITIALIZER in page_alloc.c if arch doesn't define it
 - modify page_get_xenheap_gfn() to use ACCESS_ONCE() when reading type_info field
 - modify page_set_xenheap_gfn() to use cmpxchg() when changing type_info field
 - apply PGT_TYPE_INFO_INITIALIZER in alloc_heap_pages() and acquire_staticmem_pages()
   rather than altering both flavors of alloc_xenheap_pages() to make an extra
   assignment
 - simplify the gnttab_shared(status)_page and gnttab_shared(status)_gfn macros
 - update a check in Arm's gnttab_set_frame_gfn()
---
 xen/arch/arm/include/asm/grant_table.h | 53 +++++++---------------------------
 xen/arch/arm/include/asm/mm.h          | 47 ++++++++++++++++++++++++++++--
 xen/arch/arm/mm.c                      | 24 +++++++++++++--
 xen/arch/arm/p2m.c                     |  7 +++--
 xen/arch/x86/include/asm/grant_table.h |  5 ----
 xen/common/grant_table.c               |  9 ------
 xen/common/page_alloc.c                |  8 +++--
 7 files changed, 87 insertions(+), 66 deletions(-)

diff --git a/xen/arch/arm/include/asm/grant_table.h b/xen/arch/arm/include/asm/grant_table.h
index d31a4d6..16f817b 100644
--- a/xen/arch/arm/include/asm/grant_table.h
+++ b/xen/arch/arm/include/asm/grant_table.h
@@ -11,11 +11,6 @@
 #define INITIAL_NR_GRANT_FRAMES 1U
 #define GNTTAB_MAX_VERSION 1
 
-struct grant_table_arch {
-    gfn_t *shared_gfn;
-    gfn_t *status_gfn;
-};
-
 static inline void gnttab_clear_flags(struct domain *d,
                                       unsigned int mask, uint16_t *addr)
 {
@@ -46,53 +41,27 @@ int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn,
 #define gnttab_dom0_frames()                                             \
     min_t(unsigned int, opt_max_grant_frames, PFN_DOWN(_etext - _stext))
 
-#define gnttab_init_arch(gt)                                             \
-({                                                                       \
-    unsigned int ngf_ = (gt)->max_grant_frames;                          \
-    unsigned int nsf_ = grant_to_status_frames(ngf_);                    \
-                                                                         \
-    (gt)->arch.shared_gfn = xmalloc_array(gfn_t, ngf_);                  \
-    (gt)->arch.status_gfn = xmalloc_array(gfn_t, nsf_);                  \
-    if ( (gt)->arch.shared_gfn && (gt)->arch.status_gfn )                \
-    {                                                                    \
-        while ( ngf_-- )                                                 \
-            (gt)->arch.shared_gfn[ngf_] = INVALID_GFN;                   \
-        while ( nsf_-- )                                                 \
-            (gt)->arch.status_gfn[nsf_] = INVALID_GFN;                   \
-    }                                                                    \
-    else                                                                 \
-        gnttab_destroy_arch(gt);                                         \
-    (gt)->arch.shared_gfn ? 0 : -ENOMEM;                                 \
-})
-
-#define gnttab_destroy_arch(gt)                                          \
-    do {                                                                 \
-        XFREE((gt)->arch.shared_gfn);                                    \
-        XFREE((gt)->arch.status_gfn);                                    \
-    } while ( 0 )
-
 #define gnttab_set_frame_gfn(gt, st, idx, gfn, mfn)                      \
-    ({                                                                   \
-        int rc_ = 0;                                                     \
-        gfn_t ogfn = gnttab_get_frame_gfn(gt, st, idx);                  \
-        if ( gfn_eq(ogfn, INVALID_GFN) || gfn_eq(ogfn, gfn) ||           \
-             (rc_ = guest_physmap_remove_page((gt)->domain, ogfn, mfn,   \
-                                              0)) == 0 )                 \
-            ((st) ? (gt)->arch.status_gfn                                \
-                  : (gt)->arch.shared_gfn)[idx] = (gfn);                 \
-        rc_;                                                             \
-    })
+    (gfn_eq(gfn, INVALID_GFN)                                            \
+     ? guest_physmap_remove_page((gt)->domain,                           \
+                                 gnttab_get_frame_gfn(gt, st, idx),      \
+                                 mfn, 0)                                 \
+     : 0)
 
 #define gnttab_get_frame_gfn(gt, st, idx) ({                             \
    (st) ? gnttab_status_gfn(NULL, gt, idx)                               \
         : gnttab_shared_gfn(NULL, gt, idx);                              \
 })
 
+#define gnttab_shared_page(t, i)   virt_to_page((t)->shared_raw[i])
+
+#define gnttab_status_page(t, i)   virt_to_page((t)->status[i])
+
 #define gnttab_shared_gfn(d, t, i)                                       \
-    (((i) >= nr_grant_frames(t)) ? INVALID_GFN : (t)->arch.shared_gfn[i])
+    page_get_xenheap_gfn(gnttab_shared_page(t, i))
 
 #define gnttab_status_gfn(d, t, i)                                       \
-    (((i) >= nr_status_frames(t)) ? INVALID_GFN : (t)->arch.status_gfn[i])
+    page_get_xenheap_gfn(gnttab_status_page(t, i))
 
 #define gnttab_need_iommu_mapping(d)                    \
     (is_domain_direct_mapped(d) && is_iommu_enabled(d))
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 424aaf2..412200a 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -98,9 +98,22 @@ struct page_info
 #define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
 #define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
 
- /* Count of uses of this frame as its current type. */
-#define PGT_count_width   PG_shift(2)
-#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
+ /* 2-bit count of uses of this frame as its current type. */
+#define PGT_count_mask    PG_mask(3, 3)
+
+/*
+ * The GFN is stored in bits [28:0] (arm32) or [60:0] (arm64) for xenheap pages.
+ */
+#define PGT_gfn_width     PG_shift(3)
+#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
+
+#define PGT_INVALID_XENHEAP_GFN   _gfn(PGT_gfn_mask)
+
+/*
+ * An arch-specific initialization pattern is needed for the type_info field
+ * as its GFN portion can contain a valid GFN if the page is a xenheap page.
+ */
+#define PGT_TYPE_INFO_INITIALIZER   gfn_x(PGT_INVALID_XENHEAP_GFN)
 
  /* Cleared when the owning guest 'frees' this page. */
 #define _PGC_allocated    PG_shift(1)
@@ -358,6 +371,34 @@ void clear_and_clean_page(struct page_info *page);
 
 unsigned int arch_get_dma_bitsize(void);
 
+/*
+ * All accesses to the GFN portion of type_info field should always be
+ * protected by the P2M lock. In case when it is not feasible to satisfy
+ * that requirement (risk of deadlock, lock inversion, etc) it is important
+ * to make sure that all non-protected updates to this field are atomic.
+ */
+static inline gfn_t page_get_xenheap_gfn(const struct page_info *p)
+{
+    gfn_t gfn_ = _gfn(ACCESS_ONCE(p->u.inuse.type_info) & PGT_gfn_mask);
+
+    ASSERT(is_xen_heap_page(p));
+
+    return gfn_eq(gfn_, PGT_INVALID_XENHEAP_GFN) ? INVALID_GFN : gfn_;
+}
+
+static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
+{
+    gfn_t gfn_ = gfn_eq(gfn, INVALID_GFN) ? PGT_INVALID_XENHEAP_GFN : gfn;
+    unsigned long x, nx, y = p->u.inuse.type_info;
+
+    ASSERT(is_xen_heap_page(p));
+
+    do {
+        x = y;
+        nx = (x & ~PGT_gfn_mask) | gfn_x(gfn_);
+    } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
+}
+
 #endif /*  __ARCH_ARM_MM__ */
 /*
  * Local variables:
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7b1f2f4..c94bdaf 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_lock(&d->page_alloc_lock);
 
     /* The incremented type count pins as writable or read-only. */
-    page->u.inuse.type_info =
-        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
+    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
+    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
+                                                  : PGT_writable_page) |
+                                MASK_INSR(1, PGT_count_mask);
 
     page_set_owner(page, d);
     smp_wmb(); /* install valid domain ptr before updating refcnt. */
@@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
     }
 
     /* Map at new location. */
-    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
+    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
+        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
+    else
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        p2m_write_lock(p2m);
+        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
+        {
+            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
+            if ( !rc )
+                page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
+        }
+        else
+            rc = -EBUSY;
+        p2m_write_unlock(p2m);
+    }
 
     /*
      * For XENMAPSPACE_gmfn_foreign if we failed to add the mapping, we need
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d00c2e4..f87b48e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -716,6 +716,8 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
  */
 static void p2m_put_l3_page(const lpae_t pte)
 {
+    mfn_t mfn = lpae_get_mfn(pte);
+
     ASSERT(p2m_is_valid(pte));
 
     /*
@@ -727,11 +729,12 @@ static void p2m_put_l3_page(const lpae_t pte)
      */
     if ( p2m_is_foreign(pte.p2m.type) )
     {
-        mfn_t mfn = lpae_get_mfn(pte);
-
         ASSERT(mfn_valid(mfn));
         put_page(mfn_to_page(mfn));
     }
+    /* Detect the xenheap page and mark the stored GFN as invalid. */
+    else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) )
+        page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN);
 }
 
 /* Free lpae sub-tree behind an entry */
diff --git a/xen/arch/x86/include/asm/grant_table.h b/xen/arch/x86/include/asm/grant_table.h
index a8a2143..5c23cec 100644
--- a/xen/arch/x86/include/asm/grant_table.h
+++ b/xen/arch/x86/include/asm/grant_table.h
@@ -14,9 +14,6 @@
 
 #define INITIAL_NR_GRANT_FRAMES 1U
 
-struct grant_table_arch {
-};
-
 static inline int create_grant_host_mapping(uint64_t addr, mfn_t frame,
                                             unsigned int flags,
                                             unsigned int cache_flags)
@@ -35,8 +32,6 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame,
     return replace_grant_pv_mapping(addr, frame, new_addr, flags);
 }
 
-#define gnttab_init_arch(gt) 0
-#define gnttab_destroy_arch(gt) do {} while ( 0 )
 #define gnttab_set_frame_gfn(gt, st, idx, gfn, mfn)                      \
     (gfn_eq(gfn, INVALID_GFN)                                            \
      ? guest_physmap_remove_page((gt)->domain,                           \
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index febbe12..4115da0 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -99,8 +99,6 @@ struct grant_table {
 
     /* Domain to which this struct grant_table belongs. */
     struct domain *domain;
-
-    struct grant_table_arch arch;
 };
 
 unsigned int __read_mostly opt_max_grant_frames = 64;
@@ -2018,14 +2016,9 @@ int grant_table_init(struct domain *d, int max_grant_frames,
 
     grant_write_lock(gt);
 
-    ret = gnttab_init_arch(gt);
-    if ( ret )
-        goto unlock;
-
     /* gnttab_grow_table() allocates a min number of frames, so 0 is okay. */
     ret = gnttab_grow_table(d, 0);
 
- unlock:
     grant_write_unlock(gt);
 
  out:
@@ -3940,8 +3933,6 @@ grant_table_destroy(
     if ( t == NULL )
         return;
 
-    gnttab_destroy_arch(t);
-
     for ( i = 0; i < nr_grant_frames(t); i++ )
         free_xenheap_page(t->shared_raw[i]);
     xfree(t->shared_raw);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 3190291..dbacee2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -155,6 +155,10 @@
 #define PGC_reserved 0
 #endif
 
+#ifndef PGT_TYPE_INFO_INITIALIZER
+#define PGT_TYPE_INFO_INITIALIZER 0
+#endif
+
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
  * e.g. 'badpage=0x3f45,0x8a321'.
@@ -1024,7 +1028,7 @@ static struct page_info *alloc_heap_pages(
                                 &tlbflush_timestamp);
 
         /* Initialise fields which have other uses for free pages. */
-        pg[i].u.inuse.type_info = 0;
+        pg[i].u.inuse.type_info = PGT_TYPE_INFO_INITIALIZER;
         page_set_owner(&pg[i], NULL);
 
     }
@@ -2702,7 +2706,7 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
          */
         pg[i].count_info = PGC_reserved | PGC_state_inuse;
         /* Initialise fields which have other uses for free pages. */
-        pg[i].u.inuse.type_info = 0;
+        pg[i].u.inuse.type_info = PGT_TYPE_INFO_INITIALIZER;
         page_set_owner(&pg[i], NULL);
     }
 
-- 
2.7.4




* [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping()
  2022-05-11 18:47 [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Oleksandr Tyshchenko
@ 2022-05-11 18:47 ` Oleksandr Tyshchenko
  2022-06-23 18:08   ` Julien Grall
  2022-06-23 17:50 ` [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Julien Grall
  1 sibling, 1 reply; 12+ messages in thread
From: Oleksandr Tyshchenko @ 2022-05-11 18:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Borrow the x86's check from p2m_remove_page() which was added
by the following commit: c65ea16dbcafbe4fe21693b18f8c2a3c5d14600e
"x86/p2m: don't assert that the passed in MFN matches for a remove"
and adjust it to the Arm code base.

Basically, this check is strictly needed for the xenheap pages only
since there are several non-protected read accesses to our simplified
xenheap based M2P approach on Arm (most calls to page_get_xenheap_gfn()
are not protected by the P2M lock).

But, it will be a good opportunity to harden the P2M code for *every*
RAM pages since it is possible to remove any GFN - MFN mapping
currently on Arm (even with the wrong helpers). This can result in
a few issues when mapping is overridden silently (in particular when
building dom0).
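
As a hedged worked example of the range walk added by the hunk below
(the numbers are illustrative): with start_gfn = 0x201 and a 2MB
superpage (cur_order = 9) covering it, the first iteration checks the
entry at GFN 0x201 and then advances by

    i += (1UL << 9) - ((0x201 + 0) & ((1UL << 9) - 1)) = 512 - 1 = 511

i.e. up to the next 512-GFN boundary, so each P2M entry in the range is
inspected exactly once regardless of its order.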

Suggested-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
You can find the corresponding discussion at:
https://lore.kernel.org/xen-devel/82d8bfe0-cb46-d303-6a60-2324dd76a1f7@xen.org/

Changes V5 -> V6:
 - new patch
---
 xen/arch/arm/p2m.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f87b48e..635e474 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1311,11 +1311,32 @@ static inline int p2m_remove_mapping(struct domain *d,
                                      mfn_t mfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long i;
     int rc;
 
     p2m_write_lock(p2m);
+    for ( i = 0; i < nr; )
+    {
+        unsigned int cur_order;
+        p2m_type_t t;
+        mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), &t, NULL,
+                                         &cur_order, NULL);
+
+        if ( p2m_is_any_ram(t) &&
+             (!mfn_valid(mfn) || !mfn_eq(mfn_add(mfn, i), mfn_return)) )
+        {
+            rc = -EILSEQ;
+            goto out;
+        }
+
+        i += (1UL << cur_order) -
+             ((gfn_x(start_gfn) + i) & ((1UL << cur_order) - 1));
+    }
+
     rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
                        p2m_invalid, p2m_access_rwx);
+
+out:
     p2m_write_unlock(p2m);
 
     return rc;
-- 
2.7.4




* Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm
  2022-05-11 18:47 [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Oleksandr Tyshchenko
  2022-05-11 18:47 ` [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping() Oleksandr Tyshchenko
@ 2022-06-23 17:50 ` Julien Grall
  2022-06-24  6:45   ` Jan Beulich
  2022-06-24 11:47   ` Oleksandr
  1 sibling, 2 replies; 12+ messages in thread
From: Julien Grall @ 2022-06-23 17:50 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné

Hi Oleksandr,

Sorry for the late reply.

On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> +/*
> + * All accesses to the GFN portion of type_info field should always be
> + * protected by the P2M lock. In case when it is not feasible to satisfy
> + * that requirement (risk of deadlock, lock inversion, etc) it is important
> + * to make sure that all non-protected updates to this field are atomic.

Here you say the non-protected updates should be atomic but...

[...]

> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 7b1f2f4..c94bdaf 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
>       spin_lock(&d->page_alloc_lock);
>   
>       /* The incremented type count pins as writable or read-only. */
> -    page->u.inuse.type_info =
> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
> +                                                  : PGT_writable_page) |
> +                                MASK_INSR(1, PGT_count_mask);

... this is not going to be atomic. So I would suggest adding a comment 
explaining why this is fine.

>   
>       page_set_owner(page, d);
>       smp_wmb(); /* install valid domain ptr before updating refcnt. */
> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>       }
>   
>       /* Map at new location. */
> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);

> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);

I would expand the comment above to explain why you need a different 
path for xenheap mapped as RAM. AFAICT, this is because we need to call 
page_set_xenheap_gfn().

> +    else
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        p2m_write_lock(p2m);
> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )

Sorry to only notice it now. This check will also change the behavior 
for XENMAPSPACE_shared_info. Now, we are only allowed to map the shared 
info once.

I believe this is fine because AFAICT x86 already prevents it. But this 
is probably something that ought to be explained in the already long 
commit message.

My comments are mainly seeking clarification. The code itself looks 
correct to me. I can handle the comments on commit to save you a round 
trip once we agree on them.

Cheers,

-- 
Julien Grall



* Re: [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping()
  2022-05-11 18:47 ` [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping() Oleksandr Tyshchenko
@ 2022-06-23 18:08   ` Julien Grall
  2022-06-24 15:31     ` Oleksandr
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2022-06-23 18:08 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk

Hi Oleksandr,

On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Borrow the x86's check from p2m_remove_page() which was added
> by the following commit: c65ea16dbcafbe4fe21693b18f8c2a3c5d14600e
> "x86/p2m: don't assert that the passed in MFN matches for a remove"
> and adjust it to the Arm code base.
> 
> Basically, this check is strictly needed for the xenheap pages only
> since there are several non-protected read accesses to our simplified
> xenheap based M2P approach on Arm (most calls to page_get_xenheap_gfn()
> are not protected by the P2M lock).

To me, this reads as if you introduced a bug in patch #1 and now you are 
fixing it. So this patch should have been first.

> 
> But, it will be a good opportunity to harden the P2M code for *every*
> RAM pages since it is possible to remove any GFN - MFN mapping
> currently on Arm (even with the wrong helpers).

> This can result in
> a few issues when mapping is overridden silently (in particular when
> building dom0).

Hmmm... AFAIU, in such a situation p2m_remove_mapping() wouldn't be 
called. Instead, we would call the mapping helper twice and the override 
would still happen.

> 
> Suggested-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> You can find the corresponding discussion at:
> https://lore.kernel.org/xen-devel/82d8bfe0-cb46-d303-6a60-2324dd76a1f7@xen.org/
> 
> Changes V5 -> V6:
>   - new patch
> ---
>   xen/arch/arm/p2m.c | 21 +++++++++++++++++++++
>   1 file changed, 21 insertions(+)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index f87b48e..635e474 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1311,11 +1311,32 @@ static inline int p2m_remove_mapping(struct domain *d,
>                                        mfn_t mfn)
>   {
>       struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    unsigned long i;
>       int rc;
>   
>       p2m_write_lock(p2m);
> +    for ( i = 0; i < nr; )
One bit I really hate in the x86 code is the lack of in-code 
documentation. It makes it really difficult to understand the logic.

I know this code was taken from x86, but I would like to avoid making 
the same mistake (this code is definitely not trivial). So can we 
document the logic?

The code itself looks good to me.

> +    {
> +        unsigned int cur_order;
> +        p2m_type_t t;
> +        mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), &t, NULL,
> +                                         &cur_order, NULL);
> +
> +        if ( p2m_is_any_ram(t) &&
> +             (!mfn_valid(mfn) || !mfn_eq(mfn_add(mfn, i), mfn_return)) )
> +        {
> +            rc = -EILSEQ;
> +            goto out;
> +        }
> +
> +        i += (1UL << cur_order) -
> +             ((gfn_x(start_gfn) + i) & ((1UL << cur_order) - 1));
> +    }
> +
>       rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
>                          p2m_invalid, p2m_access_rwx);
> +
> +out:
>       p2m_write_unlock(p2m);
>   
>       return rc;

Cheers,

-- 
Julien Grall



* Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm
  2022-06-23 17:50 ` [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Julien Grall
@ 2022-06-24  6:45   ` Jan Beulich
  2022-06-30 21:49     ` Julien Grall
  2022-06-24 11:47   ` Oleksandr
  1 sibling, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2022-06-24  6:45 UTC (permalink / raw)
  To: Julien Grall, Oleksandr Tyshchenko
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	xen-devel

On 23.06.2022 19:50, Julien Grall wrote:
> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>       }
>>   
>>       /* Map at new location. */
>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> 
>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> 
> I would expand the comment above to explain why you need a different 
> path for xenheap mapped as RAM. AFAICT, this is because we need to call 
> page_set_xenheap_gfn().
> 
>> +    else
>> +    {
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +        p2m_write_lock(p2m);
>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
> 
> Sorry to only notice it now. This check will also change the behavior 
> for XENMAPSPACE_shared_info. Now, we are only allowed to map the shared 
> info once.
> 
> I believe this is fine because AFAICT x86 already prevents it. But this 
> is probably something that ought to be explained in the already long 
> commit message.

If by "prevent" you mean x86 unmaps the page from its earlier GFN, then
yes. But this means that Arm would better follow that model instead of
returning -EBUSY in this case. Just think of kexec-ing or a boot loader
wanting to map shared info or grant table: There wouldn't necessarily
be an explicit unmap.

Jan



* Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm
  2022-06-23 17:50 ` [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Julien Grall
  2022-06-24  6:45   ` Jan Beulich
@ 2022-06-24 11:47   ` Oleksandr
  2022-07-15 17:49     ` Julien Grall
  1 sibling, 1 reply; 12+ messages in thread
From: Oleksandr @ 2022-06-24 11:47 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné


On 23.06.22 20:50, Julien Grall wrote:
> Hi Oleksandr,


Hello Julien


>
> Sorry for the late reply.


no problem)


>
> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>> diff --git a/xen/arch/arm/include/asm/mm.h 
>> b/xen/arch/arm/include/asm/mm.h
>> +/*
>> + * All accesses to the GFN portion of type_info field should always be
>> + * protected by the P2M lock. In case when it is not feasible to 
>> satisfy
>> + * that requirement (risk of deadlock, lock inversion, etc) it is 
>> important
>> + * to make sure that all non-protected updates to this field are 
>> atomic.
>
> Here you say the non-protected updates should be atomic but...
>
> [...]
>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 7b1f2f4..c94bdaf 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct 
>> page_info *page, struct domain *d,
>>       spin_lock(&d->page_alloc_lock);
>>         /* The incremented type count pins as writable or read-only. */
>> -    page->u.inuse.type_info =
>> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
>> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
>> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
>> +                                                  : 
>> PGT_writable_page) |
>> +                                MASK_INSR(1, PGT_count_mask);
>
> ... this is not going to be atomic. So I would suggest to add a 
> comment explaining why this is fine.


Yes, I should have added your explanation from V5 of why this is fine.

So I propose the following text, do you agree with that being added?

/*
  * Please note, the update of type_info field here is not atomic as we use
  * Read-Modify-Write operation on it. But currently it is fine because
  * the caller of page_set_xenheap_gfn() (which is another place where
  * type_info is updated) would need to acquire a reference on the page.
  * This is only possible after the count_info is updated *and* there a
  * barrier between the type_info and count_info. So there is no immediate
  * need to use cmpxchg() here.
  */


>
>
>>         page_set_owner(page, d);
>>       smp_wmb(); /* install valid domain ptr before updating refcnt. */
>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>       }
>>         /* Map at new location. */
>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>
>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>
> I would expand the comment above to explain why you need a different 
> path for xenheap mapped as RAM. AFAICT, this is because we need to 
> call page_set_xenheap_gfn().


agree, I propose the following text, do you agree with that?

/*
  * Map at new location. Here we need to map xenheap RAM page differently
  * because we need to store the valid GFN and make sure that nothing was
  * mapped before (the stored GFN is invalid).
  */


>
>
>> +    else
>> +    {
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +        p2m_write_lock(p2m);
>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), 
>> INVALID_GFN) )
>
> Sorry to only notice it now. This check will also change the behavior 
> for XENMAPSPACE_shared_info. Now, we are only allowed to map the 
> shared info once.
>
> I believe this is fine because AFAICT x86 already prevents it. But 
> this is probably something that ought to be explained in the already 
> long commit message.


agree, I propose the following text, do you agree with that?

Please note, this patch changes the behavior of how the shared_info page
(which is a xenheap RAM page) is mapped in xenmem_add_to_physmap_one().
Now, we only allow the shared_info to be mapped once. Subsequent attempts
to map it will result in -EBUSY; if there is a legitimate use case
we will be able to relax that behavior.


>
>
> My comments are mainly seeking for clarifications. The code itself 
> looks correct to me. I can handle the comments on commit to save you a 
> round trip once we agree on them.


Thank you, that would be much appreciated.


>
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko




* Re: [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping()
  2022-06-23 18:08   ` Julien Grall
@ 2022-06-24 15:31     ` Oleksandr
  2022-07-15 17:15       ` Julien Grall
  0 siblings, 1 reply; 12+ messages in thread
From: Oleksandr @ 2022-06-24 15:31 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk


On 23.06.22 21:08, Julien Grall wrote:
> Hi Oleksandr,


Hello Julien


>
> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Borrow the x86's check from p2m_remove_page() which was added
>> by the following commit: c65ea16dbcafbe4fe21693b18f8c2a3c5d14600e
>> "x86/p2m: don't assert that the passed in MFN matches for a remove"
>> and adjust it to the Arm code base.
>>
>> Basically, this check is strictly needed for the xenheap pages only
>> since there are several non-protected read accesses to our simplified
>> xenheap based M2P approach on Arm (most calls to page_get_xenheap_gfn()
>> are not protected by the P2M lock).
>
> To me, this read as you introduced a bug in patch #1 and now you are 
> fixing it. So this patch should have been first.


Sounds like yes, I agree. But, in that case I propose to rewrite this 
text as follows:


Basically, this check will be strictly needed for the xenheap pages only
*and* only after applying subsequent commit which will introduce xenheap
based M2P approach on Arm. But, it will be a good opportunity to harden
the P2M code for *every* RAM pages since it is possible to remove any
GFN - MFN mapping currently on Arm (even with the wrong helpers).


And ...


>
>>
>> But, it will be a good opportunity to harden the P2M code for *every*
>> RAM pages since it is possible to remove any GFN - MFN mapping
>> currently on Arm (even with the wrong helpers).
>
>> This can result in
>> a few issues when mapping is overridden silently (in particular when
>> building dom0).
>
> Hmmm... AFAIU, in such situation p2m_remove_mapping() wouldn't be 
> called. Instead, we would call the mapping helper twice and the 
> override would still happen.


    ... drop this one.


>
>
>>
>> Suggested-by: Julien Grall <jgrall@amazon.com>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> ---
>> You can find the corresponding discussion at:
>> https://lore.kernel.org/xen-devel/82d8bfe0-cb46-d303-6a60-2324dd76a1f7@xen.org/ 
>>
>>
>> Changes V5 -> V6:
>>   - new patch
>> ---
>>   xen/arch/arm/p2m.c | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index f87b48e..635e474 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1311,11 +1311,32 @@ static inline int p2m_remove_mapping(struct 
>> domain *d,
>>                                        mfn_t mfn)
>>   {
>>       struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    unsigned long i;
>>       int rc;
>>         p2m_write_lock(p2m);
>> +    for ( i = 0; i < nr; )
> One bit I really hate in the x86 code is the lack of in-code 
> documentation. It makes really difficult to understand the logic.
>
> I know this code was taken from x86, but I would like to avoid making 
> same mistake (this code is definitely not trivial). So can we document 
> the logic?


ok, I propose the following text right after acquiring the p2m lock:


  /*
   * Before removing the GFN - MFN mapping for any RAM pages make sure
   * that there is no difference between what is already mapped and what
   * is requested to be unmapped. If passed mapping doesn't match
   * the existing one bail out early.
   */


Could you please clarify, do you agree with both?


>
> The code itself looks good to me.

Thanks!


>
>> +    {
>> +        unsigned int cur_order;
>> +        p2m_type_t t;
>> +        mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), 
>> &t, NULL,
>> +                                         &cur_order, NULL);
>> +
>> +        if ( p2m_is_any_ram(t) &&
>> +             (!mfn_valid(mfn) || !mfn_eq(mfn_add(mfn, i), 
>> mfn_return)) )
>> +        {
>> +            rc = -EILSEQ;
>> +            goto out;
>> +        }
>> +
>> +        i += (1UL << cur_order) -
>> +             ((gfn_x(start_gfn) + i) & ((1UL << cur_order) - 1));
>> +    }
>> +
>>       rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
>>                          p2m_invalid, p2m_access_rwx);
>> +
>> +out:
>>       p2m_write_unlock(p2m);
>>         return rc;
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko




* Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm
  2022-06-24  6:45   ` Jan Beulich
@ 2022-06-30 21:49     ` Julien Grall
  0 siblings, 0 replies; 12+ messages in thread
From: Julien Grall @ 2022-06-30 21:49 UTC (permalink / raw)
  To: Jan Beulich, Oleksandr Tyshchenko
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	xen-devel

Hi Jan,

On 24/06/2022 07:45, Jan Beulich wrote:
> On 23.06.2022 19:50, Julien Grall wrote:
>> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>>        }
>>>    
>>>        /* Map at new location. */
>>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>
>>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>
>> I would expand the comment above to explain why you need a different
>> path for xenheap mapped as RAM. AFAICT, this is because we need to call
>> page_set_xenheap_gfn().
>>
>>> +    else
>>> +    {
>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +
>>> +        p2m_write_lock(p2m);
>>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
>>
>> Sorry to only notice it now. This check will also change the behavior
>> for XENMAPSPACE_shared_info. Now, we are only allowed to map the shared
>> info once.
>>
>> I believe this is fine because AFAICT x86 already prevents it. But this
>> is probably something that ought to be explained in the already long
>> commit message.
> 
> If by "prevent" you mean x86 unmaps the page from its earlier GFN, then
> yes. But this means that Arm would better follow that model instead of
> returning -EBUSY in this case. Just think of kexec-ing or a boot loader
> wanting to map shared info or grant table: There wouldn't necessarily
> be an explicit unmap.

I spent some time thinking about this. There is a potentially big issue 
with implicit unmapping from the earlier GFN. Imagine the boot loader 
decided to map the page in place of RAM.

If the boot loader didn't unmap the page, then when the OS maps it again 
we would have a hole in the RAM. The OS may not know that and may end up 
using the page as RAM (and crash).

The problem is the same for kexec and AFAIU that's why we need to use 
soft-reset when kexec-ing.

So overall, I think we should prevent the implicit unmap. It would help 
to enforce that the bootloader (or any other component) cleans up behind 
itself (i.e. unmaps the page and populates the area if necessary).

Cheers,

-- 
Julien Grall



* Re: [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping()
  2022-06-24 15:31     ` Oleksandr
@ 2022-07-15 17:15       ` Julien Grall
  2022-07-15 17:24         ` Oleksandr
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2022-07-15 17:15 UTC (permalink / raw)
  To: Oleksandr, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk



On 24/06/2022 16:31, Oleksandr wrote:
> 
> On 23.06.22 21:08, Julien Grall wrote:
>> Hi Oleksandr,
> 
> 
> Hello Julien

Hi Oleksandr,


> 
>>
>> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> Borrow the x86's check from p2m_remove_page() which was added
>>> by the following commit: c65ea16dbcafbe4fe21693b18f8c2a3c5d14600e
>>> "x86/p2m: don't assert that the passed in MFN matches for a remove"
>>> and adjust it to the Arm code base.
>>>
>>> Basically, this check is strictly needed for the xenheap pages only
>>> since there are several non-protected read accesses to our simplified
>>> xenheap based M2P approach on Arm (most calls to page_get_xenheap_gfn()
>>> are not protected by the P2M lock).
>>
>> To me, this read as you introduced a bug in patch #1 and now you are 
>> fixing it. So this patch should have been first.
> 
> 
> Sounds like yes, I agree. But, in that case I propose to rewrite this 
> text like the following:
> 
> 
> Basically, this check will be strictly needed for the xenheap pages only 
> *and* only after applying subsequent

NIT: s/only and only/, this is pretty clear that this patch is necessary 
for a follow-up patch.

Also please add "a" in front of subsequent because the two patches may 
not be committed together.

> commit which will introduce xenheap based M2P approach on Arm. But, it 
> will be a good opportunity
> to harden the P2M code for *every* RAM pages since it is possible to 
> remove any GFN - MFN mapping
> currently on Arm (even with the wrong helpers).

[...]

>>>
>>> Suggested-by: Julien Grall <jgrall@amazon.com>
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> ---
>>> You can find the corresponding discussion at:
>>> https://lore.kernel.org/xen-devel/82d8bfe0-cb46-d303-6a60-2324dd76a1f7@xen.org/ 
>>>
>>>
>>> Changes V5 -> V6:
>>>   - new patch
>>> ---
>>>   xen/arch/arm/p2m.c | 21 +++++++++++++++++++++
>>>   1 file changed, 21 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index f87b48e..635e474 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -1311,11 +1311,32 @@ static inline int p2m_remove_mapping(struct 
>>> domain *d,
>>>                                        mfn_t mfn)
>>>   {
>>>       struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +    unsigned long i;
>>>       int rc;
>>>         p2m_write_lock(p2m);
>>> +    for ( i = 0; i < nr; )
>> One bit I really hate in the x86 code is the lack of in-code 
>> documentation. It makes really difficult to understand the logic.
>>
>> I know this code was taken from x86, but I would like to avoid making 
>> same mistake (this code is definitely not trivial). So can we document 
>> the logic?
> 
> 
> ok, I propose the following text right after acquiring the p2m lock:
> 
> 
>   /*
>    * Before removing the GFN - MFN mapping for any RAM pages make sure
>    * that there is no difference between what is already mapped and what
>    * is requested to be unmapped. If passed mapping doesn't match
>    * the existing one bail out early.

NIT: I would simply write "If they don't match bail out early".

Also, it would be good to explain how this could happen. Something like:

"For instance, this could happen if two CPUs are requesting to unmap the 
same P2M concurrently."


>    */
> 
> 
> Could you please clarify, do you agree with both?

I have proposed some changes in both cases. I originally thought I would 
do the update in the commit. However, this is more than a simple tweak, so 
would you mind sending a new version?

Cheers,

-- 
Julien Grall



* Re: [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping()
  2022-07-15 17:15       ` Julien Grall
@ 2022-07-15 17:24         ` Oleksandr
  0 siblings, 0 replies; 12+ messages in thread
From: Oleksandr @ 2022-07-15 17:24 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk


On 15.07.22 20:15, Julien Grall wrote:
>
>
> On 24/06/2022 16:31, Oleksandr wrote:
>>
>> On 23.06.22 21:08, Julien Grall wrote:
>>> Hi Oleksandr,
>>
>>
>> Hello Julien
>
> Hi Oleksandr,


Hello Julien



>
>
>>
>>>
>>> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> Borrow the x86's check from p2m_remove_page() which was added
>>>> by the following commit: c65ea16dbcafbe4fe21693b18f8c2a3c5d14600e
>>>> "x86/p2m: don't assert that the passed in MFN matches for a remove"
>>>> and adjust it to the Arm code base.
>>>>
>>>> Basically, this check is strictly needed for the xenheap pages only
>>>> since there are several non-protected read accesses to our simplified
>>>> xenheap based M2P approach on Arm (most calls to 
>>>> page_get_xenheap_gfn()
>>>> are not protected by the P2M lock).
>>>
>>> To me, this read as you introduced a bug in patch #1 and now you are 
>>> fixing it. So this patch should have been first.
>>
>>
>> Sounds like yes, I agree. But, in that case I propose to rewrite this 
>> text like the following:
>>
>>
>> Basically, this check will be strictly needed for the xenheap pages 
>> only *and* only after applying subsequent
>
> NIT: s/only and only/, this is pretty clear that this patch is 
> necessary for a follow-up patch.

ok


>
>
> Also please add "a" in from of subsequent because the two patches may 
> not be committed together.

ok


>
>> commit which will introduce xenheap based M2P approach on Arm. But, 
>> it will be a good opportunity
>> to harden the P2M code for *every* RAM pages since it is possible to 
>> remove any GFN - MFN mapping
>> currently on Arm (even with the wrong helpers).
>
> [...]
>
>>>>
>>>> Suggested-by: Julien Grall <jgrall@amazon.com>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> ---
>>>> You can find the corresponding discussion at:
>>>> https://lore.kernel.org/xen-devel/82d8bfe0-cb46-d303-6a60-2324dd76a1f7@xen.org/ 
>>>>
>>>>
>>>> Changes V5 -> V6:
>>>>   - new patch
>>>> ---
>>>>   xen/arch/arm/p2m.c | 21 +++++++++++++++++++++
>>>>   1 file changed, 21 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index f87b48e..635e474 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -1311,11 +1311,32 @@ static inline int p2m_remove_mapping(struct 
>>>> domain *d,
>>>>                                        mfn_t mfn)
>>>>   {
>>>>       struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>> +    unsigned long i;
>>>>       int rc;
>>>>         p2m_write_lock(p2m);
>>>> +    for ( i = 0; i < nr; )
>>> One bit I really hate in the x86 code is the lack of in-code 
>>> documentation. It makes really difficult to understand the logic.
>>>
>>> I know this code was taken from x86, but I would like to avoid 
>>> making same mistake (this code is definitely not trivial). So can we 
>>> document the logic?
>>
>>
>> ok, I propose the following text right after acquiring the p2m lock:
>>
>>
>>   /*
>>    * Before removing the GFN - MFN mapping for any RAM pages make sure
>>    * that there is no difference between what is already mapped and what
>>    * is requested to be unmapped. If passed mapping doesn't match
>>    * the existing one bail out early.
>
> NIT: I would simply write "If they don't match bail out early".

ok, I guess this is related to the last sentence only.


>
> Also, it would be good to explanation how this could happen. Something 
> like:
>
> "For instance, this could happen if two CPUs are requesting to unmap 
> the same P2M concurrently."

agree



>
>
>>    */
>>
>>
>> Could you please clarify, do you agree with both?
>
> I have proposed some changes in both cases. I originally thought I 
> would do the update in the commit. However, this is more than simple 
> tweak, so would you mind to send a new version?


yes, will do


>
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko




* Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm
  2022-06-24 11:47   ` Oleksandr
@ 2022-07-15 17:49     ` Julien Grall
  2022-07-15 18:10       ` Oleksandr
  0 siblings, 1 reply; 12+ messages in thread
From: Julien Grall @ 2022-07-15 17:49 UTC (permalink / raw)
  To: Oleksandr, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné

Hi Oleksandr,

On 24/06/2022 12:47, Oleksandr wrote:
> 
> On 23.06.22 20:50, Julien Grall wrote:
>> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>>> diff --git a/xen/arch/arm/include/asm/mm.h 
>>> b/xen/arch/arm/include/asm/mm.h
>>> +/*
>>> + * All accesses to the GFN portion of type_info field should always be
>>> + * protected by the P2M lock. In case when it is not feasible to 
>>> satisfy
>>> + * that requirement (risk of deadlock, lock inversion, etc) it is 
>>> important
>>> + * to make sure that all non-protected updates to this field are 
>>> atomic.
>>
>> Here you say the non-protected updates should be atomic but...
>>
>> [...]
>>
>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>> index 7b1f2f4..c94bdaf 100644
>>> --- a/xen/arch/arm/mm.c
>>> +++ b/xen/arch/arm/mm.c
>>> @@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct 
>>> page_info *page, struct domain *d,
>>>       spin_lock(&d->page_alloc_lock);
>>>         /* The incremented type count pins as writable or read-only. */
>>> -    page->u.inuse.type_info =
>>> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
>>> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
>>> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
>>> +                                                  : 
>>> PGT_writable_page) |
>>> +                                MASK_INSR(1, PGT_count_mask);
>>
>> ... this is not going to be atomic. So I would suggest to add a 
>> comment explaining why this is fine.
> 
> 
> Yes, I should have added your explanation given in V5 why this is fine.
> 
> So I propose the following text, do you agree with that being added?
> 
> /*
>   * Please note, the update of type_info field here is not atomic as we use
>   * Read-Modify-Write operation on it. But currently it is fine because
>   * the caller of page_set_xenheap_gfn() (which is another place where
>   * type_info is updated) would need to acquire a reference on the page.
>   * This is only possible after the count_info is updated *and* there a 

Missing word: there *is* a.

> barrier
>   * between the type_info and count_info. So there is no immediate need 
> to use
>   * cmpxchg() here.
>   */
> 
> 
>>
>>
>>>         page_set_owner(page, d);
>>>       smp_wmb(); /* install valid domain ptr before updating refcnt. */
>>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>>       }
>>>         /* Map at new location. */
>>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>
>>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>
>> I would expand the comment above to explain why you need a different 
>> path for xenheap mapped as RAM. AFAICT, this is because we need to 
>> call page_set_xenheap_gfn().
> 
> 
> agree, I propose the following text, do you agree with that?
> 
> /*
>   * Map at new location. Here we need to map xenheap RAM page differently
>   * because we need to store the valid GFN and make sure that nothing was
>   * mapped before (the stored GFN is invalid).
>   */

So I think the key point here is that page_set_xenheap_gfn() needs to 
happen with the P2M lock held.

That said, looking at the code again, it is a bit confusing to use 
guest_physmap_add_entry() in one place and p2m_set_entry() in the other.

The only ways I can think of to avoid the confusion are to open-code 
guest_physmap_add_entry() (i.e. p2m_write_lock(); p2m_set_entry(); 
p2m_write_unlock()) or to merge the two paths.
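
For instance, the first path could become something like the below 
(just an untested sketch to show the shape; I am writing 
p2m_set_entry()'s arguments from memory, so double-check them):

    struct p2m_domain *p2m = p2m_get_hostp2m(d);

    /* Open-coded equivalent of guest_physmap_add_entry(d, gfn, mfn, 0, t) */
    p2m_write_lock(p2m);
    rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
    p2m_write_unlock(p2m);

This way both paths would visibly take the P2M lock and go through 
p2m_set_entry().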

However, I am aware this is already at version 6 and your code should 
work. So I would be OK with a comment explaining that 
guest_physmap_add_entry() is just a wrapper on top of p2m_set_entry().

>>> +    else
>>> +    {
>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +
>>> +        p2m_write_lock(p2m);
>>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), 
>>> INVALID_GFN) )
>>
>> Sorry to only notice it now. This check will also change the behavior 
>> for XENMAPSPACE_shared_info. Now, we are only allowed to map the 
>> shared info once.
>>
>> I believe this is fine because AFAICT x86 already prevents it. But 
>> this is probably something that ought to be explained in the already 
>> long commit message.

So I was wrong thinking that x86 would prevent it (see Jan's answer). 
However, I think it is a mistake to allow it because it can result in 
weird issues.

In fact, you mentioned on IRC that there is already an example of how 
this hypercall can be misused in U-boot [1]. At the moment, U-boot 
steals a RAM page to map the shared info page but never unmaps it.

The OS will not be aware of where the shared info page is mapped. As 
the page is part of the RAM region, the OS may end up allocating it 
for another purpose and corrupting the page.

If Xen were to unmap it instead, we would create a hole and the OS 
would crash when accessing the page. In some ways that is better than 
corruption, but it is still not a good user experience (the RAM page 
may only be accessed a long time after boot).

So I think it is much better to return -EBUSY here; at the very least 
we can catch misuse in the firmware code earlier.
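
In your hunk quoted above, that would be roughly (still under the 
p2m_write_lock(); untested sketch, p2m_set_entry() arguments from 
memory):

    if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
        rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
    else
        rc = -EBUSY;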

In the case of U-boot, as discussed on IRC, the code should (a rough 
sketch follows below):
   1) Unmap the page
   2) Populate the area with memory using XENMEM_populate_physmap

An optimization would be to use the extended regions. But those were 
only recently introduced, so U-boot cannot always rely on them.
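
Something along these lines on the U-boot side (an untested sketch 
with error handling omitted; shared_info_gpfn is a stand-in for 
wherever U-boot keeps the GFN it stole, and I am writing the hypercall 
structures from memory, so check them against the public headers):

    /* 1) Unmap the shared info page from the GFN U-boot stole. */
    struct xen_remove_from_physmap xrfp = {
        .domid = DOMID_SELF,
        .gpfn  = shared_info_gpfn,
    };
    rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);

    /* 2) Hand the GFN back to the guest as plain RAM. */
    xen_pfn_t pfn = shared_info_gpfn;
    struct xen_memory_reservation res = {
        .nr_extents   = 1,
        .extent_order = 0,
        .domid        = DOMID_SELF,
    };
    set_xen_guest_handle(res.extent_start, &pfn);
    rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &res);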

> 
> 
> agree, I propose the following text, do you agree with that?
> 
> Please note, this patch changes how the shared_info page (which is a
> xenheap RAM page) is mapped in xenmem_add_to_physmap_one().
> Now, we only allow the shared_info page to be mapped once. Subsequent
> attempts to map it will result in -EBUSY; if there is a legitimate use
> case, we will be able to relax that behavior.

I would suggest summarizing what I wrote above in the commit message. 
I think this is a strong reason to return -EBUSY and to push other 
projects (e.g. U-boot) to fix their code.

Cheers,

[1] 
https://source.denx.de/u-boot/u-boot/-/blob/master/drivers/xen/hypervisor.c

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm
  2022-07-15 17:49     ` Julien Grall
@ 2022-07-15 18:10       ` Oleksandr
  0 siblings, 0 replies; 12+ messages in thread
From: Oleksandr @ 2022-07-15 18:10 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Oleksandr Tyshchenko, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné


On 15.07.22 20:49, Julien Grall wrote:
> Hi Oleksandr,


Hello Julien



>
> On 24/06/2022 12:47, Oleksandr wrote:
>>
>> On 23.06.22 20:50, Julien Grall wrote:
>>> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>>>> diff --git a/xen/arch/arm/include/asm/mm.h 
>>>> b/xen/arch/arm/include/asm/mm.h
>>>> +/*
>>>> + * All accesses to the GFN portion of type_info field should 
>>>> always be
>>>> + * protected by the P2M lock. In case when it is not feasible to 
>>>> satisfy
>>>> + * that requirement (risk of deadlock, lock inversion, etc) it is 
>>>> important
>>>> + * to make sure that all non-protected updates to this field are 
>>>> atomic.
>>>
>>> Here you say the non-protected updates should be atomic but...
>>>
>>> [...]
>>>
>>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>>> index 7b1f2f4..c94bdaf 100644
>>>> --- a/xen/arch/arm/mm.c
>>>> +++ b/xen/arch/arm/mm.c
>>>> @@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct 
>>>> page_info *page, struct domain *d,
>>>>       spin_lock(&d->page_alloc_lock);
>>>>         /* The incremented type count pins as writable or 
>>>> read-only. */
>>>> -    page->u.inuse.type_info =
>>>> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
>>>> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
>>>> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
>>>> +                                                  : 
>>>> PGT_writable_page) |
>>>> +                                MASK_INSR(1, PGT_count_mask);
>>>
>>> ... this is not going to be atomic. So I would suggest to add a 
>>> comment explaining why this is fine.
>>
>>
>> Yes, I should have added your explanation given in V5 why this is fine.
>>
>> So I propose the following text, do you agree with that being added?
>>
>> /*
>>   * Please note, the update of type_info field here is not atomic as 
>> we use
>>   * Read-Modify-Write operation on it. But currently it is fine because
>>   * the caller of page_set_xenheap_gfn() (which is another place where
>>   * type_info is updated) would need to acquire a reference on the page.
>>   * This is only possible after the count_info is updated *and* there a 
>
> Missing word: there *is* a.


ok


>
>> barrier
>>   * between the type_info and count_info. So there is no immediate 
>> need to use
>>   * cmpxchg() here.
>>   */
>>
>>
>>>
>>>
>>>>         page_set_owner(page, d);
>>>>       smp_wmb(); /* install valid domain ptr before updating 
>>>> refcnt. */
>>>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>>>       }
>>>>         /* Map at new location. */
>>>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>>
>>>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>>>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>>
>>> I would expand the comment above to explain why you need a different 
>>> path for xenheap mapped as RAM. AFAICT, this is because we need to 
>>> call page_set_xenheap_gfn().
>>
>>
>> agree, I propose the following text, do you agree with that?
>>
>> /*
>>   * Map at new location. Here we need to map a xenheap RAM page
>>   * differently because we need to store the valid GFN and make sure
>>   * that nothing was mapped there before (the stored GFN is invalid).
>>   */
>
> So I think the key point here is that page_set_xenheap_gfn() needs to
> happen with the P2M lock held.
>
> That said, looking at the code again, it is a bit confusing to use
> guest_physmap_add_entry() in one place and p2m_set_entry() in the other.
>
> The only ways I can think of to avoid the confusion are to open-code
> guest_physmap_add_entry() (i.e. p2m_write_lock(); p2m_set_entry();
> p2m_write_unlock()) or to merge the two paths.
>
> However, I am aware this is already at version 6 and your code should
> work. So I would be OK with a comment explaining that
> guest_physmap_add_entry() is just a wrapper on top of p2m_set_entry().


ok, thanks


>
>>>> +    else
>>>> +    {
>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>> +
>>>> +        p2m_write_lock(p2m);
>>>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), 
>>>> INVALID_GFN) )
>>>
>>> Sorry to only notice it now. This check will also change the 
>>> behavior for XENMAPSPACE_shared_info. Now, we are only allowed to 
>>> map the shared info once.
>>>
>>> I believe this is fine because AFAICT x86 already prevents it. But 
>>> this is probably something that ought to be explained in the already 
>>> long commit message.
>
> So I was wrong thinking that x86 would prevent it (see Jan's answer).
> However, I think it is a mistake to allow it because it can result in
> weird issues.
>
> In fact, you mentioned on IRC that there is already an example of how
> this hypercall can be misused in U-boot [1]. At the moment, U-boot
> steals a RAM page to map the shared info page but never unmaps it.
>
> The OS will not be aware of where the shared info page is mapped. As
> the page is part of the RAM region, the OS may end up allocating it
> for another purpose and corrupting the page.
>
> If Xen were to unmap it instead, we would create a hole and the OS
> would crash when accessing the page. In some ways that is better than
> corruption, but it is still not a good user experience (the RAM page
> may only be accessed a long time after boot).
>
> So I think it is much better to return -EBUSY here; at the very least
> we can catch misuse in the firmware code earlier.
>
> In the case of U-boot, as discussed on IRC, the code should:
>   1) Unmap the page
>   2) Populate the area with memory using XENMEM_populate_physmap
>
> An optimization would be to use the extended regions. But those were
> only recently introduced, so U-boot cannot always rely on them.


you are right, I have nothing to add


>
>>
>>
>> agree, I propose the following text, do you agree with that?
>>
>> Please note, this patch changes how the shared_info page (which is a
>> xenheap RAM page) is mapped in xenmem_add_to_physmap_one().
>> Now, we only allow the shared_info page to be mapped once. Subsequent
>> attempts to map it will result in -EBUSY; if there is a legitimate use
>> case, we will be able to relax that behavior.
>
> I would suggest summarizing what I wrote above in the commit message.
> I think this is a strong reason to return -EBUSY and to push other
> projects (e.g. U-boot) to fix their code.


agree, will do


>
>
> Cheers,
>
> [1] 
> https://source.denx.de/u-boot/u-boot/-/blob/master/drivers/xen/hypervisor.c
>
-- 
Regards,

Oleksandr Tyshchenko



^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2022-07-15 18:10 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-11 18:47 [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Oleksandr Tyshchenko
2022-05-11 18:47 ` [PATCH V6 2/2] xen/arm: Harden the P2M code in p2m_remove_mapping() Oleksandr Tyshchenko
2022-06-23 18:08   ` Julien Grall
2022-06-24 15:31     ` Oleksandr
2022-07-15 17:15       ` Julien Grall
2022-07-15 17:24         ` Oleksandr
2022-06-23 17:50 ` [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on Arm Julien Grall
2022-06-24  6:45   ` Jan Beulich
2022-06-30 21:49     ` Julien Grall
2022-06-24 11:47   ` Oleksandr
2022-07-15 17:49     ` Julien Grall
2022-07-15 18:10       ` Oleksandr
