xen-devel.lists.xenproject.org archive mirror
* [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
@ 2021-07-02 14:29 ` Juergen Gross
  2021-07-16  6:47   ` Juergen Gross
  0 siblings, 1 reply; 28+ messages in thread
From: Juergen Gross @ 2021-07-02 14:29 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, Ian Jackson, Wei Liu

The core of a PV Linux guest produced via "xl dump-core" is not usable,
because since kernel 4.14 only the linear p2m table is kept if Xen
indicates support for it. Unfortunately xc_core_arch_map_p2m() still
supports only the 3-level p2m tree.

Fix that by copying the functionality of map_p2m() from libxenguest to
libxenctrl.

Additionally, the mapped p2m no longer has a fixed length, so the
interface to the mapping functions needs to be adapted. In order not to
add even more parameters, expand struct domain_info_context and use a
pointer to that as a parameter.

This is a backport of upstream commit bd7a29c3d0b937ab542a.

As the original patch modifies a data structure passed via pointer to a
library function, the related library function is renamed in order to be
able to spot any external users of that function. Note that it is
extremely unlikely that any such users exist outside the Xen git tree,
so the risk of breaking existing programs is very low. Should such a
user exist, changing the name of xc_map_domain_meminfo() will at least
avoid silent breakage.
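
To illustrate the effect of the rename on such a hypothetical
out-of-tree user (a sketch only; dump_p2m_info() and the header names
are assumptions for the example, the xc_* calls are the ones touched by
the hunks below):

    /* Hypothetical out-of-tree consumer of xc_map_domain_meminfo().
     * With this backport the header maps xc_map_domain_meminfo to
     * xc_map_domain_meminfo_mod, so a rebuild of this code links
     * against the new symbol (and the grown struct xc_domain_meminfo,
     * now including p2m_frames), while an old binary keeps referencing
     * the old symbol and fails to link/load instead of silently
     * passing a too small structure. */
    #include <xenctrl.h>
    #include <xenguest.h>

    static int dump_p2m_info(xc_interface *xch, uint32_t domid)
    {
        struct xc_domain_meminfo minfo = { 0 };
        int rc;

        rc = xc_map_domain_meminfo(xch, domid, &minfo); /* -> ..._mod */
        if ( rc )
            return rc;

        /* minfo.p2m_table now spans minfo.p2m_frames mapped pages. */
        xc_unmap_domain_meminfo(xch, &minfo);

        return 0;
    }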

Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list in domain builder")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/include/xenguest.h      |   2 +
 tools/libs/ctrl/xc_core.c     |   5 +-
 tools/libs/ctrl/xc_core.h     |   8 +-
 tools/libs/ctrl/xc_core_arm.c |  23 +--
 tools/libs/ctrl/xc_core_x86.c | 256 ++++++++++++++++++++++++++++------
 tools/libs/ctrl/xc_private.h  |   1 +
 tools/libs/guest/xg_domain.c  |  17 +--
 7 files changed, 234 insertions(+), 78 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 217022b6e7..36a26deba4 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -700,8 +700,10 @@ struct xc_domain_meminfo {
     xen_pfn_t *pfn_type;
     xen_pfn_t *p2m_table;
     unsigned long p2m_size;
+    unsigned int p2m_frames;
 };
 
+#define xc_map_domain_meminfo xc_map_domain_meminfo_mod
 int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
                           struct xc_domain_meminfo *minfo);
 
diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/ctrl/xc_core.c
index b47ab2f6d8..9576bec5a3 100644
--- a/tools/libs/ctrl/xc_core.c
+++ b/tools/libs/ctrl/xc_core.c
@@ -574,8 +574,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
             goto out;
         }
 
-        sts = xc_core_arch_map_p2m(xch, dinfo->guest_width, &info, live_shinfo,
-                                   &p2m, &dinfo->p2m_size);
+        sts = xc_core_arch_map_p2m(xch, dinfo, &info, live_shinfo, &p2m);
         if ( sts != 0 )
             goto out;
 
@@ -945,7 +944,7 @@ out:
     if ( memory_map != NULL )
         free(memory_map);
     if ( p2m != NULL )
-        munmap(p2m, PAGE_SIZE * P2M_FL_ENTRIES);
+        munmap(p2m, PAGE_SIZE * dinfo->p2m_frames);
     if ( p2m_array != NULL )
         free(p2m_array);
     if ( pfn_array != NULL )
diff --git a/tools/libs/ctrl/xc_core.h b/tools/libs/ctrl/xc_core.h
index 36fb755da2..8ea1f93a10 100644
--- a/tools/libs/ctrl/xc_core.h
+++ b/tools/libs/ctrl/xc_core.h
@@ -138,14 +138,14 @@ int xc_core_arch_memory_map_get(xc_interface *xch,
                                 xc_dominfo_t *info, shared_info_any_t *live_shinfo,
                                 xc_core_memory_map_t **mapp,
                                 unsigned int *nr_entries);
-int xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width,
+int xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo,
                          xc_dominfo_t *info, shared_info_any_t *live_shinfo,
-                         xen_pfn_t **live_p2m, unsigned long *pfnp);
+                         xen_pfn_t **live_p2m);
 
-int xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width,
+int xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo,
                                   xc_dominfo_t *info,
                                   shared_info_any_t *live_shinfo,
-                                  xen_pfn_t **live_p2m, unsigned long *pfnp);
+                                  xen_pfn_t **live_p2m);
 
 int xc_core_arch_get_scratch_gpfn(xc_interface *xch, uint32_t domid,
                                   xen_pfn_t *gpfn);
diff --git a/tools/libs/ctrl/xc_core_arm.c b/tools/libs/ctrl/xc_core_arm.c
index 7b587b4cc5..93765a565f 100644
--- a/tools/libs/ctrl/xc_core_arm.c
+++ b/tools/libs/ctrl/xc_core_arm.c
@@ -66,33 +66,24 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
 
 static int
 xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp, int rw)
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     errno = ENOSYS;
     return -1;
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp)
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 0);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                              unsigned long *pfnp)
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 1);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
 }
 
 int
diff --git a/tools/libs/ctrl/xc_core_x86.c b/tools/libs/ctrl/xc_core_x86.c
index cb76e6207b..c8f71d4b75 100644
--- a/tools/libs/ctrl/xc_core_x86.c
+++ b/tools/libs/ctrl/xc_core_x86.c
@@ -17,6 +17,7 @@
  *
  */
 
+#include <inttypes.h>
 #include "xc_private.h"
 #include "xc_core.h"
 #include <xen/hvm/e820.h>
@@ -65,34 +66,169 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
     return 0;
 }
 
-static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp, int rw)
+static inline bool is_canonical_address(uint64_t vaddr)
 {
-    /* Double and single indirect references to the live P2M table */
-    xen_pfn_t *live_p2m_frame_list_list = NULL;
-    xen_pfn_t *live_p2m_frame_list = NULL;
-    /* Copies of the above. */
-    xen_pfn_t *p2m_frame_list_list = NULL;
-    xen_pfn_t *p2m_frame_list = NULL;
+    return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
+}
 
-    uint32_t dom = info->domid;
-    int ret = -1;
-    int err;
-    int i;
+/* Virtual address ranges reserved for hypervisor. */
+#define HYPERVISOR_VIRT_START_X86_64 0xFFFF800000000000ULL
+#define HYPERVISOR_VIRT_END_X86_64   0xFFFF87FFFFFFFFFFULL
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+#define HYPERVISOR_VIRT_START_X86_32 0x00000000F5800000ULL
+#define HYPERVISOR_VIRT_END_X86_32   0x00000000FFFFFFFFULL
+
+static xen_pfn_t *
+xc_core_arch_map_p2m_list_rw(xc_interface *xch, struct domain_info_context *dinfo,
+                             uint32_t dom, shared_info_any_t *live_shinfo,
+                             uint64_t p2m_cr3)
+{
+    uint64_t p2m_vaddr, p2m_end, mask, off;
+    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
+    uint64_t *ptes = NULL;
+    xen_pfn_t *mfns = NULL;
+    unsigned int fpp, n_pages, level, n_levels, shift,
+                 idx_start, idx_end, idx, saved_idx;
+
+    p2m_vaddr = GET_FIELD(live_shinfo, arch.p2m_vaddr, dinfo->guest_width);
+    fpp = PAGE_SIZE / dinfo->guest_width;
+    dinfo->p2m_frames = (dinfo->p2m_size - 1) / fpp + 1;
+    p2m_end = p2m_vaddr + dinfo->p2m_frames * PAGE_SIZE - 1;
+
+    if ( dinfo->guest_width == 8 )
     {
-        ERROR("Could not get maximum GPFN!");
-        goto out;
+        mask = 0x0000ffffffffffffULL;
+        n_levels = 4;
+        p2m_mfn = p2m_cr3 >> 12;
+        if ( !is_canonical_address(p2m_vaddr) ||
+             !is_canonical_address(p2m_end) ||
+             p2m_end < p2m_vaddr ||
+             (p2m_vaddr <= HYPERVISOR_VIRT_END_X86_64 &&
+              p2m_end > HYPERVISOR_VIRT_START_X86_64) )
+        {
+            ERROR("Bad virtual p2m address range %#" PRIx64 "-%#" PRIx64,
+                  p2m_vaddr, p2m_end);
+            errno = ERANGE;
+            goto out;
+        }
+    }
+    else
+    {
+        mask = 0x00000000ffffffffULL;
+        n_levels = 3;
+        if ( p2m_cr3 & ~mask )
+            p2m_mfn = ~0UL;
+        else
+            p2m_mfn = (uint32_t)((p2m_cr3 >> 12) | (p2m_cr3 << 20));
+        if ( p2m_vaddr > mask || p2m_end > mask || p2m_end < p2m_vaddr ||
+             (p2m_vaddr <= HYPERVISOR_VIRT_END_X86_32 &&
+              p2m_end > HYPERVISOR_VIRT_START_X86_32) )
+        {
+            ERROR("Bad virtual p2m address range %#" PRIx64 "-%#" PRIx64,
+                  p2m_vaddr, p2m_end);
+            errno = ERANGE;
+            goto out;
+        }
     }
 
-    if ( dinfo->p2m_size < info->nr_pages  )
+    mfns = malloc(sizeof(*mfns));
+    if ( !mfns )
     {
-        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        ERROR("Cannot allocate memory for array of %u mfns", 1);
         goto out;
     }
+    mfns[0] = p2m_mfn;
+    off = 0;
+    saved_mfn = 0;
+    idx_start = idx_end = saved_idx = 0;
+
+    for ( level = n_levels; level > 0; level-- )
+    {
+        n_pages = idx_end - idx_start + 1;
+        ptes = xc_map_foreign_pages(xch, dom, PROT_READ, mfns, n_pages);
+        if ( !ptes )
+        {
+            PERROR("Failed to map %u page table pages for p2m list", n_pages);
+            goto out;
+        }
+        free(mfns);
+
+        shift = level * 9 + 3;
+        idx_start = ((p2m_vaddr - off) & mask) >> shift;
+        idx_end = ((p2m_end - off) & mask) >> shift;
+        idx = idx_end - idx_start + 1;
+        mfns = malloc(sizeof(*mfns) * idx);
+        if ( !mfns )
+        {
+            ERROR("Cannot allocate memory for array of %u mfns", idx);
+            goto out;
+        }
+
+        for ( idx = idx_start; idx <= idx_end; idx++ )
+        {
+            mfn = (ptes[idx] & 0x000ffffffffff000ULL) >> PAGE_SHIFT;
+            if ( mfn == 0 )
+            {
+                ERROR("Bad mfn %#lx during page table walk for vaddr %#" PRIx64 " at level %d of p2m list",
+                      mfn, off + ((uint64_t)idx << shift), level);
+                errno = ERANGE;
+                goto out;
+            }
+            mfns[idx - idx_start] = mfn;
+
+            /* Maximum pfn check at level 2. Same reasoning as for p2m tree. */
+            if ( level == 2 )
+            {
+                if ( mfn != saved_mfn )
+                {
+                    saved_mfn = mfn;
+                    saved_idx = idx - idx_start;
+                }
+            }
+        }
+
+        if ( level == 2 )
+        {
+            if ( saved_idx == idx_end )
+                saved_idx++;
+            max_pfn = ((xen_pfn_t)saved_idx << 9) * fpp;
+            if ( max_pfn < dinfo->p2m_size )
+            {
+                dinfo->p2m_size = max_pfn;
+                dinfo->p2m_frames = (dinfo->p2m_size + fpp - 1) / fpp;
+                p2m_end = p2m_vaddr + dinfo->p2m_frames * PAGE_SIZE - 1;
+                idx_end = idx_start + saved_idx;
+            }
+        }
+
+        munmap(ptes, n_pages * PAGE_SIZE);
+        ptes = NULL;
+        off = p2m_vaddr & ((mask >> shift) << shift);
+    }
+
+    return mfns;
+
+ out:
+    free(mfns);
+    if ( ptes )
+        munmap(ptes, n_pages * PAGE_SIZE);
+
+    return NULL;
+}
+
+static xen_pfn_t *
+xc_core_arch_map_p2m_tree_rw(xc_interface *xch, struct domain_info_context *dinfo,
+                             uint32_t dom, shared_info_any_t *live_shinfo)
+{
+    /* Double and single indirect references to the live P2M table */
+    xen_pfn_t *live_p2m_frame_list_list;
+    xen_pfn_t *live_p2m_frame_list = NULL;
+    /* Copies of the above. */
+    xen_pfn_t *p2m_frame_list_list = NULL;
+    xen_pfn_t *p2m_frame_list;
+
+    int err;
+    int i;
 
     live_p2m_frame_list_list =
         xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_READ,
@@ -151,10 +287,60 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
         for ( i = P2M_FL_ENTRIES - 1; i >= 0; i-- )
             p2m_frame_list[i] = ((uint32_t *)p2m_frame_list)[i];
 
+    dinfo->p2m_frames = P2M_FL_ENTRIES;
+
+    return p2m_frame_list;
+
+ out:
+    err = errno;
+
+    if ( live_p2m_frame_list_list )
+        munmap(live_p2m_frame_list_list, PAGE_SIZE);
+
+    if ( live_p2m_frame_list )
+        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
+
+    free(p2m_frame_list_list);
+
+    errno = err;
+
+    return NULL;
+}
+
+static int
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
+{
+    xen_pfn_t *p2m_frame_list = NULL;
+    uint64_t p2m_cr3;
+    uint32_t dom = info->domid;
+    int ret = -1;
+    int err;
+
+    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+    {
+        ERROR("Could not get maximum GPFN!");
+        goto out;
+    }
+
+    if ( dinfo->p2m_size < info->nr_pages  )
+    {
+        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        goto out;
+    }
+
+    p2m_cr3 = GET_FIELD(live_shinfo, arch.p2m_cr3, dinfo->guest_width);
+
+    p2m_frame_list = p2m_cr3 ? xc_core_arch_map_p2m_list_rw(xch, dinfo, dom, live_shinfo, p2m_cr3)
+                             : xc_core_arch_map_p2m_tree_rw(xch, dinfo, dom, live_shinfo);
+
+    if ( !p2m_frame_list )
+        goto out;
+
     *live_p2m = xc_map_foreign_pages(xch, dom,
                                     rw ? (PROT_READ | PROT_WRITE) : PROT_READ,
                                     p2m_frame_list,
-                                    P2M_FL_ENTRIES);
+                                    dinfo->p2m_frames);
 
     if ( !*live_p2m )
     {
@@ -162,21 +348,11 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
         goto out;
     }
 
-    *pfnp = dinfo->p2m_size;
-
     ret = 0;
 
 out:
     err = errno;
 
-    if ( live_p2m_frame_list_list )
-        munmap(live_p2m_frame_list_list, PAGE_SIZE);
-
-    if ( live_p2m_frame_list )
-        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
-
-    free(p2m_frame_list_list);
-
     free(p2m_frame_list);
 
     errno = err;
@@ -184,25 +360,17 @@ out:
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                        unsigned long *pfnp)
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                        shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 0);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width, xc_dominfo_t *info,
-                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m,
-                              unsigned long *pfnp)
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+                              shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
-    struct domain_info_context _dinfo = { .guest_width = guest_width };
-    struct domain_info_context *dinfo = &_dinfo;
-    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
-                                   live_shinfo, live_p2m, pfnp, 1);
+    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
 }
 
 int
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index f0b5f83ac8..8ebc0b59da 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -79,6 +79,7 @@ struct iovec {
 
 struct domain_info_context {
     unsigned int guest_width;
+    unsigned int p2m_frames;
     unsigned long p2m_size;
 };
 
diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
index 5019c84e0e..dd7db2cbd8 100644
--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -24,13 +24,9 @@
 
 int xc_unmap_domain_meminfo(xc_interface *xch, struct xc_domain_meminfo *minfo)
 {
-    struct domain_info_context _di = { .guest_width = minfo->guest_width,
-                                       .p2m_size = minfo->p2m_size};
-    struct domain_info_context *dinfo = &_di;
-
     free(minfo->pfn_type);
     if ( minfo->p2m_table )
-        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
+        munmap(minfo->p2m_table, minfo->p2m_frames * PAGE_SIZE);
     minfo->p2m_table = NULL;
 
     return 0;
@@ -40,7 +36,6 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
                           struct xc_domain_meminfo *minfo)
 {
     struct domain_info_context _di;
-    struct domain_info_context *dinfo = &_di;
 
     xc_dominfo_t info;
     shared_info_any_t *live_shinfo;
@@ -96,16 +91,16 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
         return -1;
     }
 
-    if ( xc_core_arch_map_p2m_writable(xch, minfo->guest_width, &info,
-                                       live_shinfo, &minfo->p2m_table,
-                                       &minfo->p2m_size) )
+    if ( xc_core_arch_map_p2m_writable(xch, &_di, &info,
+                                       live_shinfo, &minfo->p2m_table) )
     {
         PERROR("Could not map the P2M table");
         munmap(live_shinfo, PAGE_SIZE);
         return -1;
     }
     munmap(live_shinfo, PAGE_SIZE);
-    _di.p2m_size = minfo->p2m_size;
+    minfo->p2m_size = _di.p2m_size;
+    minfo->p2m_frames = _di.p2m_frames;
 
     /* Make space and prepare for getting the PFN types */
     minfo->pfn_type = calloc(sizeof(*minfo->pfn_type), minfo->p2m_size);
@@ -141,7 +136,7 @@ failed:
     }
     if ( minfo->p2m_table )
     {
-        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
+        munmap(minfo->p2m_table, minfo->p2m_frames * PAGE_SIZE);
         minfo->p2m_table = NULL;
     }
 
-- 
2.26.2




* preparations for 4.15.1 and 4.13.4
@ 2021-07-15  7:58 Jan Beulich
  2021-07-02 14:29 ` [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table Juergen Gross
                   ` (7 more replies)
  0 siblings, 8 replies; 28+ messages in thread
From: Jan Beulich @ 2021-07-15  7:58 UTC (permalink / raw)
  To: xen-devel, Ian Jackson
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard, Julien Grall

All,

the releases are due in a couple of weeks time (and 4.14.3 is
supposed to follow another few weeks later). Please point out backports
you find missing from the respective staging branches, but which you
consider relevant.

Please note that 4.13.4 is going to be the last Xen Project
coordinated release from the 4.13 branch; the branch will go into
security-only maintenance mode after this release.

Ian: I took the liberty of backporting Anthony's

5d3e4ebb5c71 libs/foreignmemory: Fix osdep_xenforeignmemory_map prototype

Beyond this I'd like the following to be considered:

6409210a5f51 libxencall: osdep_hypercall() should return long
bef64f2c0019 libxencall: introduce variant of xencall2() returning long
01a2d001dea2 libxencall: Bump SONAME following new functionality
6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)

If those are to be taken (which means in particular if the question of
the .so versioning can be properly sorted),

198a2bc6f149 x86/HVM: wire up multicalls

is going to be required as a prereq. I have backports of all of the
above ready (so I could put them in if you tell me to), but for
01a2d001dea2 only in its straightforward but simplistic form, which I'm
not sure is the right thing to do.

Jan




* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15  7:58 preparations for 4.15.1 and 4.13.4 Jan Beulich
  2021-07-02 14:29 ` [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table Juergen Gross
@ 2021-07-15  8:02 ` Jan Beulich
  2021-07-15  9:02 ` Anthony PERARD
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2021-07-15  8:02 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard,
	Julien Grall, Ian Jackson, xen-devel

Andrew,

On 15.07.2021 09:58, Jan Beulich wrote:
> the releases are due in a couple of weeks time (and 4.14.3 is
> supposed to follow another few weeks later). Please point out backports
> you find missing from the respective staging branches, but which you
> consider relevant.
> 
> Please note that 4.13.4 is going to be the last Xen Project
> coordinated release from the 4.13 branch; the branch will go into
> security-only maintenance mode after this release.

as I don't suppose "x86/cpuid: Fix HLE and RTM handling (again)" is
what you were meaning to be all that's needed to fix my backport of
"x86/cpuid: Rework HLE and RTM handling" on the 4.13 branch, would
you please either submit whatever further fix you deem necessary,
or share enough information for someone else (perhaps me) to create
such a fix?

Thanks, Jan




* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15  7:58 preparations for 4.15.1 and 4.13.4 Jan Beulich
  2021-07-02 14:29 ` [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table Juergen Gross
  2021-07-15  8:02 ` preparations for 4.15.1 and 4.13.4 Jan Beulich
@ 2021-07-15  9:02 ` Anthony PERARD
  2021-08-19 16:23   ` QEMU 6.0+ in 4.15 and 4.14 (Re: preparations for 4.15.1 and 4.13.4) Ian Jackson
  2021-07-15 10:58 ` preparations for 4.15.1 and 4.13.4 Andrew Cooper
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Anthony PERARD @ 2021-07-15  9:02 UTC (permalink / raw)
  To: Jan Beulich, Ian Jackson
  Cc: xen-devel, Ian Jackson, George Dunlap, Stefano Stabellini,
	Wei Liu, Julien Grall

Can we backport support of QEMU 6.0 to Xen 4.15? I'm pretty sure
distributions are going to want to use the latest QEMU and latest Xen,
without needing to build two different QEMU binaries.

[XEN PATCH v2 0/8] Fix libxl with QEMU 6.0 + remove some more deprecated usages.
<20210511092810.13759-1-anthony.perard@citrix.com>
Commits: d5f54009db^..fe6630ddc4

Some more QEMU 6.0 fixes
<20210628100157.5010-1-anthony.perard@citrix.com>
Commits: 217eef30f7  3bc3be978f


Also, Olaf want them to be backported to 4.14, see
    <20210629095952.7b0b94c1.olaf@aepfle.de>

Thanks,

-- 
Anthony PERARD



* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15  7:58 preparations for 4.15.1 and 4.13.4 Jan Beulich
                   ` (2 preceding siblings ...)
  2021-07-15  9:02 ` Anthony PERARD
@ 2021-07-15 10:58 ` Andrew Cooper
  2021-07-15 11:12   ` Jan Beulich
  2021-07-15 14:11 ` Olaf Hering
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Andrew Cooper @ 2021-07-15 10:58 UTC (permalink / raw)
  To: Jan Beulich, xen-devel, Ian Jackson
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard, Julien Grall

On 15/07/2021 08:58, Jan Beulich wrote:
> All,
>
> the releases are due in a couple of weeks time (and 4.14.3 is
> supposed to follow another few weeks later). Please point out backports
> you find missing from the respective staging branches, but which you
> consider relevant.

I've got a queue of:

* 429b0a5c62b9 - (HEAD -> staging-4.15) tools/libxenstat: fix populating
vbd.rd_sect <Richard Kojedzinszky>
* 41f0903e1632 - tools/python: fix Python3.4 TypeError in format string
<Olaf Hering>
* 67f798942caf - tools/python: handle libxl__physmap_info.name properly
in convert-legacy-stream (74 seconds ago) <Olaf Hering>
* e9709a83490f - tools: use integer division in
convert-legacy-stream<Olaf Hering>
* 1a6824957d05 - (upstream/staging-4.15, origin/staging-4.15) build:
clean "lib.a" <Anthony PERARD>

which I'd already OK'd with Ian for backport.  I'll sort those out
on all applicable branches right now.

~Andrew



* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15 10:58 ` preparations for 4.15.1 and 4.13.4 Andrew Cooper
@ 2021-07-15 11:12   ` Jan Beulich
  0 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2021-07-15 11:12 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard,
	Julien Grall, xen-devel, Ian Jackson

On 15.07.2021 12:58, Andrew Cooper wrote:
> On 15/07/2021 08:58, Jan Beulich wrote:
>> All,
>>
>> the releases are due in a couple of weeks time (and 4.14.3 is
>> supposed to follow another few weeks later). Please point out backports
>> you find missing from the respective staging branches, but which you
>> consider relevant.
> 
> I've got a queue of:
> 
> * 429b0a5c62b9 - (HEAD -> staging-4.15) tools/libxenstat: fix populating
> vbd.rd_sect <Richard Kojedzinszky>
> * 41f0903e1632 - tools/python: fix Python3.4 TypeError in format string
> <Olaf Hering>
> * 67f798942caf - tools/python: handle libxl__physmap_info.name properly
> in convert-legacy-stream (74 seconds ago) <Olaf Hering>
> * e9709a83490f - tools: use integer division in
> convert-legacy-stream<Olaf Hering>
> * 1a6824957d05 - (upstream/staging-4.15, origin/staging-4.15) build:
> clean "lib.a" <Anthony PERARD>

You'll notice that this last one is already there.

Jan




* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15  7:58 preparations for 4.15.1 and 4.13.4 Jan Beulich
                   ` (3 preceding siblings ...)
  2021-07-15 10:58 ` preparations for 4.15.1 and 4.13.4 Andrew Cooper
@ 2021-07-15 14:11 ` Olaf Hering
  2021-07-15 14:21   ` Jan Beulich
  2021-08-19 16:46   ` Ian Jackson
  2021-07-15 17:16 ` Andrew Cooper
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 28+ messages in thread
From: Olaf Hering @ 2021-07-15 14:11 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Ian Jackson, George Dunlap, Stefano Stabellini,
	Wei Liu, Anthony Perard, Julien Grall

Am Thu, 15 Jul 2021 09:58:24 +0200
schrieb Jan Beulich <jbeulich@suse.com>:

> Please point out backports you find missing from the respective staging branches, but which you consider relevant.

Depending on how green the CI is supposed to be:

76416c459c libfsimage: fix parentheses in macro parameters
e54c433adf libfsimage: fix clang 10 build

This will likely turn the Leap clang builds at https://gitlab.com/xen-project/xen/-/pipelines/337629824 green.

Olaf


* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15 14:11 ` Olaf Hering
@ 2021-07-15 14:21   ` Jan Beulich
  2021-08-19 16:46   ` Ian Jackson
  1 sibling, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2021-07-15 14:21 UTC (permalink / raw)
  To: Olaf Hering, Ian Jackson
  Cc: xen-devel, George Dunlap, Stefano Stabellini, Wei Liu,
	Anthony Perard, Julien Grall

On 15.07.2021 16:11, Olaf Hering wrote:
> Am Thu, 15 Jul 2021 09:58:24 +0200
> schrieb Jan Beulich <jbeulich@suse.com>:
> 
>> Please point out backports you find missing from the respective staging branches, but which you consider relevant.
> 
> Depending on how green the CI is supposed to be:
> 
> 76416c459c libfsimage: fix parentheses in macro parameters
> e54c433adf libfsimage: fix clang 10 build

Ian, that's again something for you to consider.

Thanks, Jan

> This will likely turn the Leap clang builds at https://gitlab.com/xen-project/xen/-/pipelines/337629824 green.
> 
> Olaf
> 




* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15  7:58 preparations for 4.15.1 and 4.13.4 Jan Beulich
                   ` (4 preceding siblings ...)
  2021-07-15 14:11 ` Olaf Hering
@ 2021-07-15 17:16 ` Andrew Cooper
  2021-07-16  6:16   ` Jan Beulich
  2021-08-19 16:43   ` preparations for 4.15.1 and 4.13.4 [and 1 more messages] Ian Jackson
  2021-07-16  7:41 ` preparations for 4.15.1 and 4.13.4 Julien Grall
  2021-07-19 10:32 ` Jan Beulich
  7 siblings, 2 replies; 28+ messages in thread
From: Andrew Cooper @ 2021-07-15 17:16 UTC (permalink / raw)
  To: Jan Beulich, xen-devel, Ian Jackson
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard, Julien Grall

On 15/07/2021 08:58, Jan Beulich wrote:
> Beyond this I'd like the following to be considered:
>
> 6409210a5f51 libxencall: osdep_hypercall() should return long
> bef64f2c0019 libxencall: introduce variant of xencall2() returning long
> 01a2d001dea2 libxencall: Bump SONAME following new functionality
> 6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)
>
> If those are to be taken (which means in particular if the question of
> the .so versioning can be properly sorted),
>
> 198a2bc6f149 x86/HVM: wire up multicalls

We can backport changes in SONAME safely so long as:

1) We declare VERS_1.2 to be fixed and released.  This means that we
bump to 1.3 for the next change, even if it is ahead of Xen 4.16 being
released, and

2) *All* ABI changes up to VERS_1.2 are backported.


The ABI called VERS_1.2 must be identical on all older branches to avoid
binary problems when rebuilding a package against old-xen+updates, and
then updating to a newer Xen.
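
To spell out the mechanism (an illustrative scenario, no claim about any
particular distro package): the dynamic linker binds versioned symbols
by (name, version node) pairs, so a binary built against a 4.15.x
libxencall that references some symbol at VERS_1.2 will only resolve
against a library providing that symbol under the very same node.  If
one branch's VERS_1.2 were missing symbols present in another branch's
VERS_1.2, rebuilding against old-xen+updates and then moving to a newer
Xen could leave such references unresolved at load time.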

~Andrew




* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15 17:16 ` Andrew Cooper
@ 2021-07-16  6:16   ` Jan Beulich
  2021-08-19 16:43   ` preparations for 4.15.1 and 4.13.4 [and 1 more messages] Ian Jackson
  1 sibling, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2021-07-16  6:16 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard,
	Julien Grall, xen-devel, Ian Jackson

On 15.07.2021 19:16, Andrew Cooper wrote:
> On 15/07/2021 08:58, Jan Beulich wrote:
>> Beyond this I'd like the following to be considered:
>>
>> 6409210a5f51 libxencall: osdep_hypercall() should return long
>> bef64f2c0019 libxencall: introduce variant of xencall2() returning long
>> 01a2d001dea2 libxencall: Bump SONAME following new functionality
>> 6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)
>>
>> If those are to be taken (which means in particular if the question of
>> the .so versioning can be properly sorted),
>>
>> 198a2bc6f149 x86/HVM: wire up multicalls
> 
> We can backport changes in SONAME safely so long as:
> 
> 1) We declare VERS_1.2 to be fixed and released.  This means that we
> bump to 1.3 for the next change, even if it is ahead of Xen 4.16 being
> released, and

Right. A matter of remembering at the right point (if need be). That's
where I think the risk is. (And of course I understand you to mean
VERS_1.3 and VERS_1.4 respectively for "fixed and released" and "bump
to".)

If we did so, what I can't tell offhand is whether any ABI-checker
data would need updating then.

> 2) *All* ABI changes up to VERS_1.2 are backported.
> 
> 
> The ABI called VERS_1.2 must be identical on all older branches to avoid
> binary problems when rebuilding a package against old-xen+updates, and
> then updating to a newer Xen.

I'm afraid I'm less clear about this part: There shouldn't be any ABI
differences in VERS_1.2 in the first place, should there? Or, if the
number is again off by one, the sole new function would be identical
(ABI-wise) everywhere.

Jan




* Re: [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
  2021-07-02 14:29 ` [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table Juergen Gross
@ 2021-07-16  6:47   ` Juergen Gross
  2021-08-19  7:45     ` Juergen Gross
  2021-08-19 15:44     ` preparations for 4.15.1 and 4.13.4 support linear p2m table [and 1 more messages] Ian Jackson
  0 siblings, 2 replies; 28+ messages in thread
From: Juergen Gross @ 2021-07-16  6:47 UTC (permalink / raw)
  To: xen-devel, Ian Jackson; +Cc: Wei Liu, Jan Beulich


Ping?

On 02.07.21 16:29, Juergen Gross wrote:
> [...]



* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15  7:58 preparations for 4.15.1 and 4.13.4 Jan Beulich
                   ` (5 preceding siblings ...)
  2021-07-15 17:16 ` Andrew Cooper
@ 2021-07-16  7:41 ` Julien Grall
  2021-07-16 20:16   ` Stefano Stabellini
  2021-07-19 10:32 ` Jan Beulich
  7 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2021-07-16  7:41 UTC (permalink / raw)
  To: Jan Beulich, xen-devel, Stefano Stabellini
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard, Ian Jackson

On 15/07/2021 08:58, Jan Beulich wrote:
> All,

Hi Jan & Stefano,


> the releases are due in a couple of weeks time (and 4.14.3 is
> supposed to follow another few weeks later). Please point out backports
> you find missing from the respective staging branches, but which you
> consider relevant.
> 
> Please note that 4.13.4 is going to be the last Xen Project
> coordinated release from the 4.13 branch; the branch will go into
> security-only maintenance mode after this release.

I would like to request the backports of the following commits:

4473f3601098 xen/arm: bootfdt: Always sort memory banks
b80470c84553 arm: Modify type of actlr to register_t
dfcffb128be4 xen/arm32: SPSR_hyp/SPSR
93031fbe9f4c Arm32: MSR to SPSR needs qualification

Cheers,

-- 
Julien Grall



* Re: preparations for 4.15.1 and 4.13.4
  2021-07-16  7:41 ` preparations for 4.15.1 and 4.13.4 Julien Grall
@ 2021-07-16 20:16   ` Stefano Stabellini
  0 siblings, 0 replies; 28+ messages in thread
From: Stefano Stabellini @ 2021-07-16 20:16 UTC (permalink / raw)
  To: Julien Grall
  Cc: Jan Beulich, xen-devel, Stefano Stabellini, George Dunlap,
	Wei Liu, Anthony Perard, Ian Jackson

On Fri, 16 Jul 2021, Julien Grall wrote:
> On 15/07/2021 08:58, Jan Beulich wrote:
> > All,
> 
> Hi Jan & Stefano,
> 
> 
> > the releases are due in a couple of weeks time (and 4.14.3 is
> > supposed to follow another few weeks later). Please point out backports
> > you find missing from the respective staging branches, but which you
> > consider relevant.
> > 
> > Please note that 4.13.4 is going to be the last Xen Project
> > coordinated release from the 4.13 branch; the branch will go into
> > security-only maintenance mode after this release.
> 
> I would like to request the backports of the following commits:
> 
> 4473f3601098 xen/arm: bootfdt: Always sort memory banks
> b80470c84553 arm: Modify type of actlr to register_t
> dfcffb128be4 xen/arm32: SPSR_hyp/SPSR
> 93031fbe9f4c Arm32: MSR to SPSR needs qualification

Done



* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15  7:58 preparations for 4.15.1 and 4.13.4 Jan Beulich
                   ` (6 preceding siblings ...)
  2021-07-16  7:41 ` preparations for 4.15.1 and 4.13.4 Julien Grall
@ 2021-07-19 10:32 ` Jan Beulich
  2021-08-19 16:49   ` Ian Jackson
  7 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2021-07-19 10:32 UTC (permalink / raw)
  To: Ian Jackson
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard,
	Julien Grall, xen-devel

Ian,

On 15.07.2021 09:58, Jan Beulich wrote:
> Beyond this I'd like the following to be considered:
> 
> 6409210a5f51 libxencall: osdep_hypercall() should return long
> bef64f2c0019 libxencall: introduce variant of xencall2() returning long
> 01a2d001dea2 libxencall: Bump SONAME following new functionality
> 6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)

in addition I'd like to ask you to consider

0be5a00af590 libxl/x86: check return value of SHADOW_OP_SET_ALLOCATION domctl

as well, now that it has gone in.

Thanks, Jan




* Re: [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
  2021-07-16  6:47   ` Juergen Gross
@ 2021-08-19  7:45     ` Juergen Gross
  2021-08-19 16:11       ` Ian Jackson
  2021-08-19 15:44     ` preparations for 4.15.1 and 4.13.4 support linear p2m table [and 1 more messages] Ian Jackson
  1 sibling, 1 reply; 28+ messages in thread
From: Juergen Gross @ 2021-08-19  7:45 UTC (permalink / raw)
  To: xen-devel, Ian Jackson, Wei Liu; +Cc: Jan Beulich



PING!!!

On 16.07.21 08:47, Juergen Gross wrote:
> Ping?
> 
> On 02.07.21 16:29, Juergen Gross wrote:
>> The core of a pv linux guest produced via "xl dump-core" is not usable
>> as since kernel 4.14 only the linear p2m table is kept if Xen indicates
>> it is supporting that. Unfortunately xc_core_arch_map_p2m() is still
>> supporting the 3-level p2m tree only.
>>
>> Fix that by copying the functionality of map_p2m() from libxenguest to
>> libxenctrl.
>>
>> Additionally the mapped p2m isn't of a fixed length now, so the
>> interface to the mapping functions needs to be adapted. In order not to
>> add even more parameters, expand struct domain_info_context and use a
>> pointer to that as a parameter.
>>
>> This is a backport of upstream commit bd7a29c3d0b937ab542a.
>>
>> As the original patch includes a modification of a data structure
>> passed via pointer to a library function, the related function in the
>> library is renamed in order to be able to spot any external users of
>> that function. Note that it is extremely unlikely any such users
>> outside the Xen git tree are existing, so the risk to break any
>> existing programs is very unlikely. In case such a user is existing,
>> changing the name of xc_map_domain_meminfo() will at least avoid
>> silent breakage.
>>
>> Fixes: dc6d60937121 ("libxc: set flag for support of linear p2m list 
>> in domain builder")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/include/xenguest.h      |   2 +
>>   tools/libs/ctrl/xc_core.c     |   5 +-
>>   tools/libs/ctrl/xc_core.h     |   8 +-
>>   tools/libs/ctrl/xc_core_arm.c |  23 +--
>>   tools/libs/ctrl/xc_core_x86.c | 256 ++++++++++++++++++++++++++++------
>>   tools/libs/ctrl/xc_private.h  |   1 +
>>   tools/libs/guest/xg_domain.c  |  17 +--
>>   7 files changed, 234 insertions(+), 78 deletions(-)
>>
>> diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
>> index 217022b6e7..36a26deba4 100644
>> --- a/tools/include/xenguest.h
>> +++ b/tools/include/xenguest.h
>> @@ -700,8 +700,10 @@ struct xc_domain_meminfo {
>>       xen_pfn_t *pfn_type;
>>       xen_pfn_t *p2m_table;
>>       unsigned long p2m_size;
>> +    unsigned int p2m_frames;
>>   };
>> +#define xc_map_domain_meminfo xc_map_domain_meminfo_mod
>>   int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
>>                             struct xc_domain_meminfo *minfo);
>> diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/ctrl/xc_core.c
>> index b47ab2f6d8..9576bec5a3 100644
>> --- a/tools/libs/ctrl/xc_core.c
>> +++ b/tools/libs/ctrl/xc_core.c
>> @@ -574,8 +574,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>>               goto out;
>>           }
>> -        sts = xc_core_arch_map_p2m(xch, dinfo->guest_width, &info, 
>> live_shinfo,
>> -                                   &p2m, &dinfo->p2m_size);
>> +        sts = xc_core_arch_map_p2m(xch, dinfo, &info, live_shinfo, 
>> &p2m);
>>           if ( sts != 0 )
>>               goto out;
>> @@ -945,7 +944,7 @@ out:
>>       if ( memory_map != NULL )
>>           free(memory_map);
>>       if ( p2m != NULL )
>> -        munmap(p2m, PAGE_SIZE * P2M_FL_ENTRIES);
>> +        munmap(p2m, PAGE_SIZE * dinfo->p2m_frames);
>>       if ( p2m_array != NULL )
>>           free(p2m_array);
>>       if ( pfn_array != NULL )
>> diff --git a/tools/libs/ctrl/xc_core.h b/tools/libs/ctrl/xc_core.h
>> index 36fb755da2..8ea1f93a10 100644
>> --- a/tools/libs/ctrl/xc_core.h
>> +++ b/tools/libs/ctrl/xc_core.h
>> @@ -138,14 +138,14 @@ int xc_core_arch_memory_map_get(xc_interface *xch,
>>                                   xc_dominfo_t *info, 
>> shared_info_any_t *live_shinfo,
>>                                   xc_core_memory_map_t **mapp,
>>                                   unsigned int *nr_entries);
>> -int xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width,
>> +int xc_core_arch_map_p2m(xc_interface *xch, struct 
>> domain_info_context *dinfo,
>>                            xc_dominfo_t *info, shared_info_any_t 
>> *live_shinfo,
>> -                         xen_pfn_t **live_p2m, unsigned long *pfnp);
>> +                         xen_pfn_t **live_p2m);
>> -int xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int 
>> guest_width,
>> +int xc_core_arch_map_p2m_writable(xc_interface *xch, struct 
>> domain_info_context *dinfo,
>>                                     xc_dominfo_t *info,
>>                                     shared_info_any_t *live_shinfo,
>> -                                  xen_pfn_t **live_p2m, unsigned long 
>> *pfnp);
>> +                                  xen_pfn_t **live_p2m);
>>   int xc_core_arch_get_scratch_gpfn(xc_interface *xch, uint32_t domid,
>>                                     xen_pfn_t *gpfn);
>> diff --git a/tools/libs/ctrl/xc_core_arm.c 
>> b/tools/libs/ctrl/xc_core_arm.c
>> index 7b587b4cc5..93765a565f 100644
>> --- a/tools/libs/ctrl/xc_core_arm.c
>> +++ b/tools/libs/ctrl/xc_core_arm.c
>> @@ -66,33 +66,24 @@ xc_core_arch_memory_map_get(xc_interface *xch, 
>> struct xc_core_arch_context *unus
>>   static int
>>   xc_core_arch_map_p2m_rw(xc_interface *xch, struct 
>> domain_info_context *dinfo, xc_dominfo_t *info,
>> -                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m,
>> -                        unsigned long *pfnp, int rw)
>> +                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m, int rw)
>>   {
>>       errno = ENOSYS;
>>       return -1;
>>   }
>>   int
>> -xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width, 
>> xc_dominfo_t *info,
>> -                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m,
>> -                        unsigned long *pfnp)
>> +xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context 
>> *dinfo, xc_dominfo_t *info,
>> +                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m)
>>   {
>> -    struct domain_info_context _dinfo = { .guest_width = guest_width };
>> -    struct domain_info_context *dinfo = &_dinfo;
>> -    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
>> -                                   live_shinfo, live_p2m, pfnp, 0);
>> +    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, 
>> live_p2m, 0);
>>   }
>>   int
>> -xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int 
>> guest_width, xc_dominfo_t *info,
>> -                              shared_info_any_t *live_shinfo, 
>> xen_pfn_t **live_p2m,
>> -                              unsigned long *pfnp)
>> +xc_core_arch_map_p2m_writable(xc_interface *xch, struct 
>> domain_info_context *dinfo, xc_dominfo_t *info,
>> +                              shared_info_any_t *live_shinfo, 
>> xen_pfn_t **live_p2m)
>>   {
>> -    struct domain_info_context _dinfo = { .guest_width = guest_width };
>> -    struct domain_info_context *dinfo = &_dinfo;
>> -    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
>> -                                   live_shinfo, live_p2m, pfnp, 1);
>> +    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, 
>> live_p2m, 1);
>>   }
>>   int
>> diff --git a/tools/libs/ctrl/xc_core_x86.c 
>> b/tools/libs/ctrl/xc_core_x86.c
>> index cb76e6207b..c8f71d4b75 100644
>> --- a/tools/libs/ctrl/xc_core_x86.c
>> +++ b/tools/libs/ctrl/xc_core_x86.c
>> @@ -17,6 +17,7 @@
>>    *
>>    */
>> +#include <inttypes.h>
>>   #include "xc_private.h"
>>   #include "xc_core.h"
>>   #include <xen/hvm/e820.h>
>> @@ -65,34 +66,169 @@ xc_core_arch_memory_map_get(xc_interface *xch, 
>> struct xc_core_arch_context *unus
>>       return 0;
>>   }
>> -static int
>> -xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context 
>> *dinfo, xc_dominfo_t *info,
>> -                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m,
>> -                        unsigned long *pfnp, int rw)
>> +static inline bool is_canonical_address(uint64_t vaddr)
>>   {
>> -    /* Double and single indirect references to the live P2M table */
>> -    xen_pfn_t *live_p2m_frame_list_list = NULL;
>> -    xen_pfn_t *live_p2m_frame_list = NULL;
>> -    /* Copies of the above. */
>> -    xen_pfn_t *p2m_frame_list_list = NULL;
>> -    xen_pfn_t *p2m_frame_list = NULL;
>> +    return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
>> +}
>> -    uint32_t dom = info->domid;
>> -    int ret = -1;
>> -    int err;
>> -    int i;
>> +/* Virtual address ranges reserved for hypervisor. */
>> +#define HYPERVISOR_VIRT_START_X86_64 0xFFFF800000000000ULL
>> +#define HYPERVISOR_VIRT_END_X86_64   0xFFFF87FFFFFFFFFFULL
>> -    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
>> +#define HYPERVISOR_VIRT_START_X86_32 0x00000000F5800000ULL
>> +#define HYPERVISOR_VIRT_END_X86_32   0x00000000FFFFFFFFULL
>> +
>> +static xen_pfn_t *
>> +xc_core_arch_map_p2m_list_rw(xc_interface *xch, struct 
>> domain_info_context *dinfo,
>> +                             uint32_t dom, shared_info_any_t 
>> *live_shinfo,
>> +                             uint64_t p2m_cr3)
>> +{
>> +    uint64_t p2m_vaddr, p2m_end, mask, off;
>> +    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
>> +    uint64_t *ptes = NULL;
>> +    xen_pfn_t *mfns = NULL;
>> +    unsigned int fpp, n_pages, level, n_levels, shift,
>> +                 idx_start, idx_end, idx, saved_idx;
>> +
>> +    p2m_vaddr = GET_FIELD(live_shinfo, arch.p2m_vaddr, 
>> dinfo->guest_width);
>> +    fpp = PAGE_SIZE / dinfo->guest_width;
>> +    dinfo->p2m_frames = (dinfo->p2m_size - 1) / fpp + 1;
>> +    p2m_end = p2m_vaddr + dinfo->p2m_frames * PAGE_SIZE - 1;
>> +
>> +    if ( dinfo->guest_width == 8 )
>>       {
>> -        ERROR("Could not get maximum GPFN!");
>> -        goto out;
>> +        mask = 0x0000ffffffffffffULL;
>> +        n_levels = 4;
>> +        p2m_mfn = p2m_cr3 >> 12;
>> +        if ( !is_canonical_address(p2m_vaddr) ||
>> +             !is_canonical_address(p2m_end) ||
>> +             p2m_end < p2m_vaddr ||
>> +             (p2m_vaddr <= HYPERVISOR_VIRT_END_X86_64 &&
>> +              p2m_end > HYPERVISOR_VIRT_START_X86_64) )
>> +        {
>> +            ERROR("Bad virtual p2m address range %#" PRIx64 "-%#" 
>> PRIx64,
>> +                  p2m_vaddr, p2m_end);
>> +            errno = ERANGE;
>> +            goto out;
>> +        }
>> +    }
>> +    else
>> +    {
>> +        mask = 0x00000000ffffffffULL;
>> +        n_levels = 3;
>> +        if ( p2m_cr3 & ~mask )
>> +            p2m_mfn = ~0UL;
>> +        else
>> +            p2m_mfn = (uint32_t)((p2m_cr3 >> 12) | (p2m_cr3 << 20));
>> +        if ( p2m_vaddr > mask || p2m_end > mask || p2m_end < 
>> p2m_vaddr ||
>> +             (p2m_vaddr <= HYPERVISOR_VIRT_END_X86_32 &&
>> +              p2m_end > HYPERVISOR_VIRT_START_X86_32) )
>> +        {
>> +            ERROR("Bad virtual p2m address range %#" PRIx64 "-%#" 
>> PRIx64,
>> +                  p2m_vaddr, p2m_end);
>> +            errno = ERANGE;
>> +            goto out;
>> +        }
>>       }
>> -    if ( dinfo->p2m_size < info->nr_pages  )
>> +    mfns = malloc(sizeof(*mfns));
>> +    if ( !mfns )
>>       {
>> -        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, 
>> info->nr_pages - 1);
>> +        ERROR("Cannot allocate memory for array of %u mfns", 1);
>>           goto out;
>>       }
>> +    mfns[0] = p2m_mfn;
>> +    off = 0;
>> +    saved_mfn = 0;
>> +    idx_start = idx_end = saved_idx = 0;
>> +
>> +    for ( level = n_levels; level > 0; level-- )
>> +    {
>> +        n_pages = idx_end - idx_start + 1;
>> +        ptes = xc_map_foreign_pages(xch, dom, PROT_READ, mfns, n_pages);
>> +        if ( !ptes )
>> +        {
>> +            PERROR("Failed to map %u page table pages for p2m list", 
>> n_pages);
>> +            goto out;
>> +        }
>> +        free(mfns);
>> +
>> +        shift = level * 9 + 3;
>> +        idx_start = ((p2m_vaddr - off) & mask) >> shift;
>> +        idx_end = ((p2m_end - off) & mask) >> shift;
>> +        idx = idx_end - idx_start + 1;
>> +        mfns = malloc(sizeof(*mfns) * idx);
>> +        if ( !mfns )
>> +        {
>> +            ERROR("Cannot allocate memory for array of %u mfns", idx);
>> +            goto out;
>> +        }
>> +
>> +        for ( idx = idx_start; idx <= idx_end; idx++ )
>> +        {
>> +            mfn = (ptes[idx] & 0x000ffffffffff000ULL) >> PAGE_SHIFT;
>> +            if ( mfn == 0 )
>> +            {
>> +                ERROR("Bad mfn %#lx during page table walk for vaddr 
>> %#" PRIx64 " at level %d of p2m list",
>> +                      mfn, off + ((uint64_t)idx << shift), level);
>> +                errno = ERANGE;
>> +                goto out;
>> +            }
>> +            mfns[idx - idx_start] = mfn;
>> +
>> +            /* Maximum pfn check at level 2. Same reasoning as for 
>> p2m tree. */
>> +            if ( level == 2 )
>> +            {
>> +                if ( mfn != saved_mfn )
>> +                {
>> +                    saved_mfn = mfn;
>> +                    saved_idx = idx - idx_start;
>> +                }
>> +            }
>> +        }
>> +
>> +        if ( level == 2 )
>> +        {
>> +            if ( saved_idx == idx_end )
>> +                saved_idx++;
>> +            max_pfn = ((xen_pfn_t)saved_idx << 9) * fpp;
>> +            if ( max_pfn < dinfo->p2m_size )
>> +            {
>> +                dinfo->p2m_size = max_pfn;
>> +                dinfo->p2m_frames = (dinfo->p2m_size + fpp - 1) / fpp;
>> +                p2m_end = p2m_vaddr + dinfo->p2m_frames * PAGE_SIZE - 1;
>> +                idx_end = idx_start + saved_idx;
>> +            }
>> +        }
>> +
>> +        munmap(ptes, n_pages * PAGE_SIZE);
>> +        ptes = NULL;
>> +        off = p2m_vaddr & ((mask >> shift) << shift);
>> +    }
>> +
>> +    return mfns;
>> +
>> + out:
>> +    free(mfns);
>> +    if ( ptes )
>> +        munmap(ptes, n_pages * PAGE_SIZE);
>> +
>> +    return NULL;
>> +}
>> +
>> +static xen_pfn_t *
>> +xc_core_arch_map_p2m_tree_rw(xc_interface *xch, struct 
>> domain_info_context *dinfo,
>> +                             uint32_t dom, shared_info_any_t 
>> *live_shinfo)
>> +{
>> +    /* Double and single indirect references to the live P2M table */
>> +    xen_pfn_t *live_p2m_frame_list_list;
>> +    xen_pfn_t *live_p2m_frame_list = NULL;
>> +    /* Copies of the above. */
>> +    xen_pfn_t *p2m_frame_list_list = NULL;
>> +    xen_pfn_t *p2m_frame_list;
>> +
>> +    int err;
>> +    int i;
>>       live_p2m_frame_list_list =
>>           xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_READ,
>> @@ -151,10 +287,60 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, 
>> struct domain_info_context *dinfo, xc
>>           for ( i = P2M_FL_ENTRIES - 1; i >= 0; i-- )
>>               p2m_frame_list[i] = ((uint32_t *)p2m_frame_list)[i];
>> +    dinfo->p2m_frames = P2M_FL_ENTRIES;
>> +
>> +    return p2m_frame_list;
>> +
>> + out:
>> +    err = errno;
>> +
>> +    if ( live_p2m_frame_list_list )
>> +        munmap(live_p2m_frame_list_list, PAGE_SIZE);
>> +
>> +    if ( live_p2m_frame_list )
>> +        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
>> +
>> +    free(p2m_frame_list_list);
>> +
>> +    errno = err;
>> +
>> +    return NULL;
>> +}
>> +
>> +static int
>> +xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context 
>> *dinfo, xc_dominfo_t *info,
>> +                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m, int rw)
>> +{
>> +    xen_pfn_t *p2m_frame_list = NULL;
>> +    uint64_t p2m_cr3;
>> +    uint32_t dom = info->domid;
>> +    int ret = -1;
>> +    int err;
>> +
>> +    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
>> +    {
>> +        ERROR("Could not get maximum GPFN!");
>> +        goto out;
>> +    }
>> +
>> +    if ( dinfo->p2m_size < info->nr_pages  )
>> +    {
>> +        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, 
>> info->nr_pages - 1);
>> +        goto out;
>> +    }
>> +
>> +    p2m_cr3 = GET_FIELD(live_shinfo, arch.p2m_cr3, dinfo->guest_width);
>> +
>> +    p2m_frame_list = p2m_cr3 ? xc_core_arch_map_p2m_list_rw(xch, 
>> dinfo, dom, live_shinfo, p2m_cr3)
>> +                             : xc_core_arch_map_p2m_tree_rw(xch, 
>> dinfo, dom, live_shinfo);
>> +
>> +    if ( !p2m_frame_list )
>> +        goto out;
>> +
>>       *live_p2m = xc_map_foreign_pages(xch, dom,
>>                                       rw ? (PROT_READ | PROT_WRITE) : 
>> PROT_READ,
>>                                       p2m_frame_list,
>> -                                    P2M_FL_ENTRIES);
>> +                                    dinfo->p2m_frames);
>>       if ( !*live_p2m )
>>       {
>> @@ -162,21 +348,11 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, 
>> struct domain_info_context *dinfo, xc
>>           goto out;
>>       }
>> -    *pfnp = dinfo->p2m_size;
>> -
>>       ret = 0;
>>   out:
>>       err = errno;
>> -    if ( live_p2m_frame_list_list )
>> -        munmap(live_p2m_frame_list_list, PAGE_SIZE);
>> -
>> -    if ( live_p2m_frame_list )
>> -        munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE);
>> -
>> -    free(p2m_frame_list_list);
>> -
>>       free(p2m_frame_list);
>>       errno = err;
>> @@ -184,25 +360,17 @@ out:
>>   }
>>   int
>> -xc_core_arch_map_p2m(xc_interface *xch, unsigned int guest_width, 
>> xc_dominfo_t *info,
>> -                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m,
>> -                        unsigned long *pfnp)
>> +xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context 
>> *dinfo, xc_dominfo_t *info,
>> +                        shared_info_any_t *live_shinfo, xen_pfn_t 
>> **live_p2m)
>>   {
>> -    struct domain_info_context _dinfo = { .guest_width = guest_width };
>> -    struct domain_info_context *dinfo = &_dinfo;
>> -    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
>> -                                   live_shinfo, live_p2m, pfnp, 0);
>> +    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, 
>> live_p2m, 0);
>>   }
>>   int
>> -xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int 
>> guest_width, xc_dominfo_t *info,
>> -                              shared_info_any_t *live_shinfo, 
>> xen_pfn_t **live_p2m,
>> -                              unsigned long *pfnp)
>> +xc_core_arch_map_p2m_writable(xc_interface *xch, struct 
>> domain_info_context *dinfo, xc_dominfo_t *info,
>> +                              shared_info_any_t *live_shinfo, 
>> xen_pfn_t **live_p2m)
>>   {
>> -    struct domain_info_context _dinfo = { .guest_width = guest_width };
>> -    struct domain_info_context *dinfo = &_dinfo;
>> -    return xc_core_arch_map_p2m_rw(xch, dinfo, info,
>> -                                   live_shinfo, live_p2m, pfnp, 1);
>> +    return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, 
>> live_p2m, 1);
>>   }
>>   int
>> diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
>> index f0b5f83ac8..8ebc0b59da 100644
>> --- a/tools/libs/ctrl/xc_private.h
>> +++ b/tools/libs/ctrl/xc_private.h
>> @@ -79,6 +79,7 @@ struct iovec {
>>   struct domain_info_context {
>>       unsigned int guest_width;
>> +    unsigned int p2m_frames;
>>       unsigned long p2m_size;
>>   };
>> diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
>> index 5019c84e0e..dd7db2cbd8 100644
>> --- a/tools/libs/guest/xg_domain.c
>> +++ b/tools/libs/guest/xg_domain.c
>> @@ -24,13 +24,9 @@
>>   int xc_unmap_domain_meminfo(xc_interface *xch, struct 
>> xc_domain_meminfo *minfo)
>>   {
>> -    struct domain_info_context _di = { .guest_width = 
>> minfo->guest_width,
>> -                                       .p2m_size = minfo->p2m_size};
>> -    struct domain_info_context *dinfo = &_di;
>> -
>>       free(minfo->pfn_type);
>>       if ( minfo->p2m_table )
>> -        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
>> +        munmap(minfo->p2m_table, minfo->p2m_frames * PAGE_SIZE);
>>       minfo->p2m_table = NULL;
>>       return 0;
>> @@ -40,7 +36,6 @@ int xc_map_domain_meminfo(xc_interface *xch, 
>> uint32_t domid,
>>                             struct xc_domain_meminfo *minfo)
>>   {
>>       struct domain_info_context _di;
>> -    struct domain_info_context *dinfo = &_di;
>>       xc_dominfo_t info;
>>       shared_info_any_t *live_shinfo;
>> @@ -96,16 +91,16 @@ int xc_map_domain_meminfo(xc_interface *xch, 
>> uint32_t domid,
>>           return -1;
>>       }
>> -    if ( xc_core_arch_map_p2m_writable(xch, minfo->guest_width, &info,
>> -                                       live_shinfo, &minfo->p2m_table,
>> -                                       &minfo->p2m_size) )
>> +    if ( xc_core_arch_map_p2m_writable(xch, &_di, &info,
>> +                                       live_shinfo, &minfo->p2m_table) )
>>       {
>>           PERROR("Could not map the P2M table");
>>           munmap(live_shinfo, PAGE_SIZE);
>>           return -1;
>>       }
>>       munmap(live_shinfo, PAGE_SIZE);
>> -    _di.p2m_size = minfo->p2m_size;
>> +    minfo->p2m_size = _di.p2m_size;
>> +    minfo->p2m_frames = _di.p2m_frames;
>>       /* Make space and prepare for getting the PFN types */
>>       minfo->pfn_type = calloc(sizeof(*minfo->pfn_type), 
>> minfo->p2m_size);
>> @@ -141,7 +136,7 @@ failed:
>>       }
>>       if ( minfo->p2m_table )
>>       {
>> -        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
>> +        munmap(minfo->p2m_table, minfo->p2m_frames * PAGE_SIZE);
>>           minfo->p2m_table = NULL;
>>       }
>>
> 




* preparations for 4.15.1 and 4.13.4 support linear p2m table [and 1 more messages]
  2021-07-16  6:47   ` Juergen Gross
  2021-08-19  7:45     ` Juergen Gross
@ 2021-08-19 15:44     ` Ian Jackson
  1 sibling, 0 replies; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 15:44 UTC (permalink / raw)
  To: Jan Beulich, Juergen Gross
  Cc: xen-devel, Wei Liu, Ian Jackson, George Dunlap,
	Stefano Stabellini, Anthony Perard, Julien Grall

Juergen Gross writes ("Re: [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table"):
> Ping?

Jan Beulich writes ("preparations for 4.15.1 and 4.13.4"):
> the releases are due in a couple of weeks time (and 4.14.3 is
> supposed to follow another few weeks later). Please point out backports
> you find missing from the respective staging branches, but which you
> consider relevant.

Hi.  I'm sorry I am behind on this.

I am going to look at some of these backports now but I may not get
through all of them...

Ian.



* Re: [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table
  2021-08-19  7:45     ` Juergen Gross
@ 2021-08-19 16:11       ` Ian Jackson
  0 siblings, 0 replies; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 16:11 UTC (permalink / raw)
  To: Juergen Gross; +Cc: xen-devel, Ian Jackson, Wei Liu, Jan Beulich

Juergen Gross writes ("Re: [PATCH-4.15] tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m table"):
> PING!!!

Sorry.

I have reviewed this and I think it is good to backport, so

Reviewed-by: Ian Jackson <iwj@xenproject.org>

and queued.  I'll push it to staging-4.15 later today.

Ian.



* QEMU 6.0+ in 4.15 and 4.14 (Re: preparations for 4.15.1 and 4.13.4)
  2021-07-15  9:02 ` Anthony PERARD
@ 2021-08-19 16:23   ` Ian Jackson
  2021-08-23  9:48     ` Olaf Hering
  2021-08-27 12:39     ` Anthony PERARD
  0 siblings, 2 replies; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 16:23 UTC (permalink / raw)
  To: Anthony PERARD
  Cc: Jan Beulich, Olaf Hering, xen-devel, George Dunlap,
	Stefano Stabellini, Wei Liu, Julien  Grall

Anthony PERARD writes ("Re: preparations for 4.15.1 and 4.13.4"):
> Can we backport support of QEMU 6.0 to Xen 4.15? I'm pretty sure
> distributions are going to want to use the latest QEMU and latest Xen,
> without needing to build two different QEMU binaries.

I think this is appropriate.  Xen 4.15 is still new, and there was an
unfortunate interaction between release dates.  Your argument makes
sense.

> [XEN PATCH v2 0/8] Fix libxl with QEMU 6.0 + remove some more deprecated usages.
> <20210511092810.13759-1-anthony.perard@citrix.com>
> Commits: d5f54009db^..fe6630ddc4
> 
> Some more QEMU 6.0 fixes
> <20210628100157.5010-1-anthony.perard@citrix.com>
> Commits: 217eef30f7  3bc3be978f

So I have queued all these.

> Also, Olaf want them to be backported to 4.14, see
>     <20210629095952.7b0b94c1.olaf@aepfle.de>

I'm unsure about this.  The diff seems moderately large.  Also, are we
sure that it wouldn't break anything other than very old qemu ?  OTOH
compat problems with newer qemu are indeed a problem especially for
distros.

I'm currently leaning towards "no" but I am very open to being
convinced this is a good idea.

Ian.



* Re: preparations for 4.15.1 and 4.13.4 [and 1 more messages]
  2021-07-15 17:16 ` Andrew Cooper
  2021-07-16  6:16   ` Jan Beulich
@ 2021-08-19 16:43   ` Ian Jackson
  2021-08-19 16:55     ` Ian Jackson
                       ` (2 more replies)
  1 sibling, 3 replies; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 16:43 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich
  Cc: xen-devel, Ian Jackson, George Dunlap, Stefano Stabellini,
	Wei Liu, Anthony Perard, Julien Grall

Jan Beulich writes ("preparations for 4.15.1 and 4.13.4"):
> Ian: I did take the liberty to backport Anthony's
> 
> 5d3e4ebb5c71 libs/foreignmemory: Fix osdep_xenforeignmemory_map prototype

Thanks.

> Beyond this I'd like the following to be considered:
> 
> 6409210a5f51 libxencall: osdep_hypercall() should return long
> bef64f2c0019 libxencall: introduce variant of xencall2() returning long
> 01a2d001dea2 libxencall: Bump SONAME following new functionality
> 6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)

I agree these are needed.

Don't we need these, or something like them in 4.14 and earlier too ?

> If those are to be taken (which means in particular if the question of
> the .so versioning can be properly sorted),
> 
> 198a2bc6f149 x86/HVM: wire up multicalls
> 
> is going to be required as a prereq. I have backports of all of the
> above ready (so I could put them in if you tell me to), but for
> 01a2d001dea2 only in its straightforward but simplistic form, which I'm
> not sure is the right thing to do.

So, I have queued 198a2bc6f149 too.

As for the ABI: 01a2d001dea2 introduces VERS_1.3 with xencall2L.
I think backporting it to 4.15 means declaring that that is precisely
what VERS_1.3 is, and that any future changes must be in VERS_1.4.
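
For reference, a minimal sketch of the kind of caller the new symbol
serves; the prototype of xencall2L() is assumed here to mirror
xencall2() apart from the long return type, and the particular
memory-op is only an example of a result that can overflow an int:

#include <stdio.h>
#include <xencall.h>        /* libxencall */
#include <xen/xen.h>        /* __HYPERVISOR_memory_op */
#include <xen/memory.h>     /* XENMEM_maximum_ram_page */

int main(void)
{
    xencall_handle *xcall = xencall_open(NULL, 0);
    long ret;

    if ( !xcall )
        return 1;

    /* On large hosts this value does not fit in an int, which is why
     * the long-returning variant (and the multicall plumbing) exists. */
    ret = xencall2L(xcall, __HYPERVISOR_memory_op,
                    XENMEM_maximum_ram_page, 0);
    printf("maximum ram page: %ld\n", ret);

    xencall_close(xcall);

    return 0;
}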

I checked that after the backport of 198a2bc6f149, the two files
defining VERS_1.3 are the same.  Well, they are different because of
  7ffbed8681a0
  libxencall: drop bogus mentioning of xencall6()
which is fine, since that symbol didn't exist in any version.

So I propose to bump xencall to 1.4 in staging, to make sure we don't
break the ABI for 1.3 by mistake.

Andrew Cooper writes ("Re: preparations for 4.15.1 and 4.13.4"):
> We can backport changes in SONAME safely so long as:
> 
> 1) We declare VERS_1.2 to be fixed and released.  This means that we
> bump to 1.3 for the next change, even if it is ahead of Xen 4.16 being
> release, and
> 
> 2) *All* ABI changes up to VERS_1.2 are backported.

I think this is what I am doing, except that I think Andy wrote "1.2"
instead of "1.3".  "1.2" is currently in staging-4.15, without my
queued series.

> The ABI called VERS_1.2 must be identical on all older branches to avoid
> binary problems when rebuilding a package against old-xen+updates, and
> then updating to a newer Xen.

Indeed.  But that is less relevant than the fact that this must also
be true for VERS_1.3 which is what we are introducing to 4.15 here :-).

Andy, I usually agree with you on ABI matters.  I think I am doing
what you mean.  Please correct me if I have misunderstood you.  If
what I have done is wrong, we should revert and/or fix it quickly on
staging-4.15.

(I'll ping you in IRC when I have pushed my queue to staging-4.15.)

Ian.



* Re: preparations for 4.15.1 and 4.13.4
  2021-07-15 14:11 ` Olaf Hering
  2021-07-15 14:21   ` Jan Beulich
@ 2021-08-19 16:46   ` Ian Jackson
  2021-08-23  8:42     ` Olaf Hering
  1 sibling, 1 reply; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 16:46 UTC (permalink / raw)
  To: Olaf Hering
  Cc: Jan Beulich, xen-devel, George Dunlap, Stefano Stabellini,
	Wei Liu, Anthony Perard, Julien Grall

Olaf Hering writes ("Re: preparations for 4.15.1 and 4.13.4"):
> Am Thu, 15 Jul 2021 09:58:24 +0200
> schrieb Jan Beulich <jbeulich@suse.com>:
> 
> > Please point out backports you find missing from the respective staging branches, but which you consider relevant.
> 
> Depending on how green the CI is supposed to be:
> 
> 76416c459c libfsimage: fix parentheses in macro parameters
> e54c433adf libfsimage: fix clang 10 build
> 
> This will likely turn the Leap clang builds at https://gitlab.com/xen-project/xen/-/pipelines/337629824 green.

I would be happy to take fixes like these for stable branches.  I
tried a git cherry-pick but it didn't apply.  Would you like to supply
backports ?

Thanks,
Ian.



* Re: preparations for 4.15.1 and 4.13.4
  2021-07-19 10:32 ` Jan Beulich
@ 2021-08-19 16:49   ` Ian Jackson
  0 siblings, 0 replies; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 16:49 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard,
	Julien Grall, xen-devel

Jan Beulich writes ("Re: preparations for 4.15.1 and 4.13.4"):
> On 15.07.2021 09:58, Jan Beulich wrote:
> > Beyond this I'd like the following to be considered:
> > 
> > 6409210a5f51 libxencall: osdep_hypercall() should return long
> > bef64f2c0019 libxencall: introduce variant of xencall2() returning long
> > 01a2d001dea2 libxencall: Bump SONAME following new functionality
> > 6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)
> 
> in addition I'd like to ask you to consider
> 
> 0be5a00af590 libxl/x86: check return value of SHADOW_OP_SET_ALLOCATION domctl
> 
> as well, now that it has gone in.

I have queued that all the way back to 4.12, since it seems
security-adjacent at the very least.

Thanks,
Ian.



* Re: preparations for 4.15.1 and 4.13.4 [and 1 more messages]
  2021-08-19 16:43   ` preparations for 4.15.1 and 4.13.4 [and 1 more messages] Ian Jackson
@ 2021-08-19 16:55     ` Ian Jackson
  2021-08-19 17:04       ` Ian Jackson
  2021-08-19 16:56     ` Andrew Cooper
  2021-08-20  6:10     ` Jan Beulich
  2 siblings, 1 reply; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 16:55 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Jan Beulich, xen-devel, Ian Jackson, George Dunlap,
	Stefano Stabellini, Wei Liu, Anthony Perard, Julien Grall

Ian Jackson writes ("Re: preparations for 4.15.1 and 4.13.4 [and 1 more messages]"):
> Jan Beulich writes ("preparations for 4.15.1 and 4.13.4"):
> > The ABI called VERS_1.2 must be identical on all older branches to avoid
> > binary problems when rebuilding a package against old-xen+updates, and
> > then updating to a newer Xen.
> 
> Indeed.  But that is less relevant than the fact that this must also
> be true for VERS_1.3 which is what we are introducing to 4.15 here :-).
> 
> Andy, I usually agree with you on ABI matters.  I think I am doing
> what you mean.  Please correct me if I have misunderstood you.  If
> what I hnve done is wrong, we should revert and/or fix it quickly on
> staging-4.15.
> 
> (I'll ping you in IRC when I have pushed my queue to staging-4.15.)

I thought of a better way to do this.  See below for proposed patch to
xen.git#staging.

Ian.

From 239847451fbef4194e757ce090b6f96c4852af46 Mon Sep 17 00:00:00 2001
From: Ian Jackson <iwj@xenproject.org>
Date: Thu, 19 Aug 2021 17:53:19 +0100
Subject: [PATCH] libxencall: Put a reminder that ABI VERS_1.3 is now frozen

CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/call/libxencall.map | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/libs/call/libxencall.map b/tools/libs/call/libxencall.map
index d18a3174e9..c99630a44b 100644
--- a/tools/libs/call/libxencall.map
+++ b/tools/libs/call/libxencall.map
@@ -30,4 +30,5 @@ VERS_1.2 {
 VERS_1.3 {
 	global:
 		xencall2L;
+	/* NB VERS_1.3 is frozen since it has been exposed in Xen 4.15 */
 } VERS_1.2;
-- 
2.20.1




* Re: preparations for 4.15.1 and 4.13.4 [and 1 more messages]
  2021-08-19 16:43   ` preparations for 4.15.1 and 4.13.4 [and 1 more messages] Ian Jackson
  2021-08-19 16:55     ` Ian Jackson
@ 2021-08-19 16:56     ` Andrew Cooper
  2021-08-20  6:10     ` Jan Beulich
  2 siblings, 0 replies; 28+ messages in thread
From: Andrew Cooper @ 2021-08-19 16:56 UTC (permalink / raw)
  To: Ian Jackson, Jan Beulich
  Cc: xen-devel, Ian Jackson, George Dunlap, Stefano Stabellini,
	Wei Liu, Anthony Perard, Julien Grall

On 19/08/2021 17:43, Ian Jackson wrote:
> Jan Beulich writes ("preparations for 4.15.1 and 4.13.4"):
>> Ian: I did take the liberty to backport Anthony's
>>
>> 5d3e4ebb5c71 libs/foreignmemory: Fix osdep_xenforeignmemory_map prototype
> Thanks.
>
>> Beyond this I'd like the following to be considered:
>>
>> 6409210a5f51 libxencall: osdep_hypercall() should return long
>> bef64f2c0019 libxencall: introduce variant of xencall2() returning long
>> 01a2d001dea2 libxencall: Bump SONAME following new functionality
>> 6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)
> I agree these are needed.
>
> Don't we need these, or something like them in 4.14 and earlier too ?
>
>> If those are to be taken (which means in particular if the question of
>> the .so versioning can be properly sorted),
>>
>> 198a2bc6f149 x86/HVM: wire up multicalls
>>
>> is going to be required as a prereq. I have backports of all of the
>> above ready (so I could put them in if you tell me to), but for
>> 01a2d001dea2 only in its straightforward but simplistic form, which I'm
>> not sure is the right thing to do.
> So, I have queued 198a2bc6f149 too.
>
> As for the ABI: 01a2d001dea2 introduces VERS_1.3 with xencall2L.
> I think backporting it to 4.15 means declaring that that is precisely
> what VERS_1.3 is, and that any future changes must be in VERS_1.4.

Yes

>
> I checked that after the backport of 198a2bc6f149, the two files
> defining VERS_1.3 are the same.  Well, they are different because of
>   7ffbed8681a0
>   libxencall: drop bogus mentioning of xencall6()
> which is fine, since that symbol didn't exist in any version.

That's probably ok, but I'd be tempted to backport that fix too.

> So I propose to bump xencall to 1.4 in staging, to make sure we don't
> break the ABI for 1.3 by mistake.

We don't proactively bump the stable libs sonames - they get bumped on
first new addition.

Otherwise, if there is no addition between now and the 4.16 release,
then the 4.16 build will produce a libfoo.so.1.4 with 1.3's effective ABI.

The same would be true in general for every stable library we didn't
modify in a specific release cycle.

>
> Andrew Cooper writes ("Re: preparations for 4.15.1 and 4.13.4"):
>> We can backport changes in SONAME safely so long as:
>>
>> 1) We declare VERS_1.2 to be fixed and released.  This means that we
>> bump to 1.3 for the next change, even if it is ahead of Xen 4.16 being
>> release, and
>>
>> 2) *All* ABI changes up to VERS_1.2 are backported.
> I think this is what I am doing, except that I think Andy wrote "1.2"
> instead of "1.3".  "1.2" is currently in staging-4.15, without my
> queued series.

Oops - my mistake.

>
>> The ABI called VERS_1.2 must be identical on all older branches to avoid
>> binary problems when rebuilding a package against old-xen+updates, and
>> then updating to a newer Xen.
> Indeed.  But that is less relevant than the fact that this must also
> be true for VERS_1.3 which is what we are introducing to 4.15 here :-).
>
> Andy, I usually agree with you on ABI matters.  I think I am doing
> what you mean.  Please correct me if I have misunderstood you.  If
> what I have done is wrong, we should revert and/or fix it quickly on
> staging-4.15.

Looks good to me.

~Andrew




* Re: preparations for 4.15.1 and 4.13.4 [and 1 more messages]
  2021-08-19 16:55     ` Ian Jackson
@ 2021-08-19 17:04       ` Ian Jackson
  0 siblings, 0 replies; 28+ messages in thread
From: Ian Jackson @ 2021-08-19 17:04 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich, xen-devel, Ian Jackson,
	George Dunlap, Stefano Stabellini, Wei Liu, Anthony Perard,
	Julien Grall

Ian Jackson writes ("Re: preparations for 4.15.1 and 4.13.4 [and 1 more messages]"):
> I thought of a better way to do this.  See below for proposed patch to
> xen.git#staging.

We discussed this on IRC, and I'm going to drop this patch.

Ian.

18:00 <@andyhhp> I'm debating what to say there
18:01 <@Diziet> I thought of this since otherwise we're just setting ourselves 
                up to make a mistake.
18:01 <@andyhhp> I'm tempted to just leave it all alone.  That comment will go 
                 stale very quickly, and the general rule is applicable to 
                 other libs
18:01 <@Diziet> True
18:01 <@andyhhp> evidence suggests that almost all edits to the soname require 
                 a fixup...
18:01 <@Diziet> haha
18:01 <@Diziet> FE
18:01 <@andyhhp> it would be nice to do something better
18:02 <@andyhhp> and the "bump the soname proactively" is almost very nice for 
                 that.  My first thought was to suggest that we change to that 
                 as a default way of working
18:02 <@andyhhp> but it gets you into a position where you're off-by-one when 
                 it comes to the release
18:02 <@Diziet> Mmmm
18:03 <@Diziet> OK, well, I'll drop my patch for now then if you don't feel 
                it's worth it.
18:03 <@Diziet> Thanks for the reviews, anyway.



* Re: preparations for 4.15.1 and 4.13.4 [and 1 more messages]
  2021-08-19 16:43   ` preparations for 4.15.1 and 4.13.4 [and 1 more messages] Ian Jackson
  2021-08-19 16:55     ` Ian Jackson
  2021-08-19 16:56     ` Andrew Cooper
@ 2021-08-20  6:10     ` Jan Beulich
  2 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2021-08-20  6:10 UTC (permalink / raw)
  To: Ian Jackson, Andrew Cooper
  Cc: xen-devel, Ian Jackson, George Dunlap, Stefano Stabellini,
	Wei Liu, Anthony Perard, Julien Grall

On 19.08.2021 18:43, Ian Jackson wrote:
> Jan Beulich writes ("preparations for 4.15.1 and 4.13.4"):
>> Ian: I did take the liberty to backport Anthony's
>>
>> 5d3e4ebb5c71 libs/foreignmemory: Fix osdep_xenforeignmemory_map prototype
> 
> Thanks.
> 
>> Beyond this I'd like the following to be considered:
>>
>> 6409210a5f51 libxencall: osdep_hypercall() should return long
>> bef64f2c0019 libxencall: introduce variant of xencall2() returning long
>> 01a2d001dea2 libxencall: Bump SONAME following new functionality
>> 6f02d1ea4a10 libxc: use multicall for memory-op on Linux (and Solaris)
> 
> I agree these are needed.
> 
> Don't we need these, or something like them in 4.14 and earlier too ?

Well, yes, I think so. Did I give the wrong impression of asking
for 4.15 only?

Jan




* Re: preparations for 4.15.1 and 4.13.4
  2021-08-19 16:46   ` Ian Jackson
@ 2021-08-23  8:42     ` Olaf Hering
  0 siblings, 0 replies; 28+ messages in thread
From: Olaf Hering @ 2021-08-23  8:42 UTC (permalink / raw)
  To: Ian Jackson
  Cc: Jan Beulich, xen-devel, George Dunlap, Stefano Stabellini,
	Wei Liu, Anthony Perard, Julien Grall


Am Thu, 19 Aug 2021 17:46:28 +0100
schrieb Ian Jackson <iwj@xenproject.org>:

> I would be happy to take fixes like these for stable branches.  I
> tried a git cherry-pick but it didn't apply.  Would you like to supply
> backports ?

This script worked for me in staging-4.13:

set -ex
git cherry-pick -ex e54c433adf01a242bf6e9fe9378a2c83d3f8b419
git cherry-pick -ex 76416c459c


Olaf



* Re: QEMU 6.0+ in 4.15 and 4.14 (Re: preparations for 4.15.1 and 4.13.4)
  2021-08-19 16:23   ` QEMU 6.0+ in 4.15 and 4.14 (Re: preparations for 4.15.1 and 4.13.4) Ian Jackson
@ 2021-08-23  9:48     ` Olaf Hering
  2021-08-27 12:39     ` Anthony PERARD
  1 sibling, 0 replies; 28+ messages in thread
From: Olaf Hering @ 2021-08-23  9:48 UTC (permalink / raw)
  To: Ian Jackson
  Cc: Anthony PERARD, Jan Beulich, xen-devel, George Dunlap,
	Stefano Stabellini, Wei Liu, Julien Grall


Am Thu, 19 Aug 2021 17:23:26 +0100
schrieb Ian Jackson <iwj@xenproject.org>:

> Anthony PERARD writes ("Re: preparations for 4.15.1 and 4.13.4"):
> > Also, Olaf want them to be backported to 4.14, see
> >     <20210629095952.7b0b94c1.olaf@aepfle.de>  
> I'm unsure about this.  The diff seems moderately large.  Also, are we
> sure that it wouldn't break anything other than very old qemu ?  OTOH
> compat problems with newer qemu are indeed a problem especially for
> distros.

libxl in 4.14 and 4.15 is nearly identical.
It would reduce the number of changes for Xen in Leap 15.3.
SUSE has been using a separate qemu binary for a few years; in the case
of 15.3 it happens to be qemu-5.2.
This variant of qemu removed "cpu-add"; as a result, cpu hotplug with
HVM fails.

Olaf



* Re: QEMU 6.0+ in 4.15 and 4.14 (Re: preparations for 4.15.1 and 4.13.4)
  2021-08-19 16:23   ` QEMU 6.0+ in 4.15 and 4.14 (Re: preparations for 4.15.1 and 4.13.4) Ian Jackson
  2021-08-23  9:48     ` Olaf Hering
@ 2021-08-27 12:39     ` Anthony PERARD
  1 sibling, 0 replies; 28+ messages in thread
From: Anthony PERARD @ 2021-08-27 12:39 UTC (permalink / raw)
  To: Ian Jackson
  Cc: Jan Beulich, Olaf Hering, xen-devel, George Dunlap,
	Stefano Stabellini, Wei Liu, Julien Grall

On Thu, Aug 19, 2021 at 05:23:26PM +0100, Ian Jackson wrote:
> Anthony PERARD writes ("Re: preparations for 4.15.1 and 4.13.4"):
> > Can we backport support of QEMU 6.0 to Xen 4.15? I'm pretty sure
> > distributions are going to want to use the latest QEMU and latest Xen,
> > without needing to build two different QEMU binaries.
> 
> I think this is appropriate.  Xen 4.15 is still new, and there was an
> unfortunate interaction between release dates.  Your argument makes
> sense.
> 
> > [XEN PATCH v2 0/8] Fix libxl with QEMU 6.0 + remove some more deprecated usages.
> > <20210511092810.13759-1-anthony.perard@citrix.com>
> > Commits: d5f54009db^..fe6630ddc4
> > 
> > Some more QEMU 6.0 fixes
> > <20210628100157.5010-1-anthony.perard@citrix.com>
> > Commits: 217eef30f7  3bc3be978f
> 
> So I have queued all these.
> 
> > Also, Olaf want them to be backported to 4.14, see
> >     <20210629095952.7b0b94c1.olaf@aepfle.de>
> 
> I'm unsure about this.  The diff seems moderately large.  Also, are we
> sure that it wouldn't break anything other than very old qemu ?  OTOH
> compat problems with newer qemu are indeed a problem especially for
> distros.

I've checked all the commits; apart from two, they all have a fallback
mechanism, so we are still compatible with old qemus.
There are these two commits
    libxl: Fix QEMU cmdline for scsi device
    libxl: Use -device for cd-rom drives
which replace command line arguments, but they switch to something that
has been available since QEMU 0.15, i.e. before the first version of
QEMU that libxl has ever supported.

So overall, I don't think we break compatibility with very old qemus. It
would take a couple of extra QMP commands to run some features, as libxl
would try the new command first before falling back to the previous
ones.
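
A rough sketch of that fallback pattern, with made-up helper and
command names rather than the real libxl/QMP interfaces:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for sending a QMP command to QEMU; purely illustrative. */
static int qmp_run(const char *cmd)
{
    /* Pretend QEMU is new enough to only know the new spelling. */
    return strcmp(cmd, "new-command") ? -ENOTSUP : 0;
}

static int do_operation(void)
{
    /* Try the command current QEMU understands first... */
    int rc = qmp_run("new-command");

    /* ...and only fall back to the deprecated spelling if rejected. */
    if ( rc == -ENOTSUP )
        rc = qmp_run("old-command");

    return rc;
}

int main(void)
{
    printf("rc=%d\n", do_operation());

    return 0;
}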

> I'm currently leaning towards "no" but I am very open to being
> convinced this is a good idea.

I don't know if it is a good idea, but at least it doesn't seem to be a
bad one.

-- 
Anthony PERARD


