* [PATCH v5 0/8] xen: better grant v2 support
@ 2017-09-08  6:56 Juergen Gross
  2017-09-08  6:56 ` [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c Juergen Gross
                   ` (7 more replies)
  0 siblings, 8 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

Currently Linux has no support for grant v2, as it would halve the
maximum number of active grants compared to v1: the number of possible
grants is limited by the allowed number of grant frames, and a v2
grant entry needs twice as many bytes as a v1 entry.

Unfortunately grant v2 is the only way to support either guests with
more than 16TB of memory or PV guests with memory above the 16TB
boundary, as grant v1 limits frame numbers to 32 bits.

In order to remove this disadvantage of grant v2, this patch series
adds support for setting per-domain values for the grant limits.
Additionally the default limit on grant frames is doubled for hosts
with memory above the 16TB boundary.

Changes in V5:
- patch 6: add set_gnttab_limits to create_domain_common in xen.if
  (Daniel De Graaf)

Changes in V4:
- patch 3: make ret more local (Wei Liu)
- patch 7: use domid_t (Wei Liu)
- patch 8: rename configuration items to use max_ prefixes (Wei Liu)

Changes in V3:
- patch 1: update commit message
- patch 3: move call of grant_table_init() from gnttab_setup_table() to
  gnttab_grow_table() (Paul Durrant)
- patch 4: correct error message (Paul Durrant)
- patch 6: rename *gnttbl* to *gnttab* (Paul Durrant)

Changes in V2:
- add per-domain grant limits instead of different v1 and v2 limits
- double default limit for huge hosts

Juergen Gross (8):
  xen: move XENMAPSPACE_grant_table code into grant_table.c
  xen: clean up grant_table.h
  xen: delay allocation of grant table sub structures
  xen: make grant resource limits per domain
  xen: double default grant frame limit for huge hosts
  xen: add new domctl hypercall to set grant table resource limits
  libxc: add libxc support for setting grant table resource limits
  libxl: add libxl support for setting grant table resource limits

 docs/man/xl.cfg.pod.5.in            |  15 ++
 tools/flask/policy/modules/dom0.te  |   2 +-
 tools/flask/policy/modules/xen.if   |   2 +-
 tools/libxc/include/xenctrl.h       |  14 ++
 tools/libxc/xc_domain.c             |  13 ++
 tools/libxl/libxl.h                 |   6 +
 tools/libxl/libxl_dom.c             |   8 +
 tools/libxl/libxl_types.idl         |   3 +
 tools/xl/xl_parse.c                 |   5 +
 tools/xl/xl_sxp.c                   |   2 +
 xen/arch/arm/mm.c                   |  36 +---
 xen/arch/x86/mm.c                   |  41 +----
 xen/common/domain.c                 |  17 +-
 xen/common/domctl.c                 |   6 +
 xen/common/grant_table.c            | 354 +++++++++++++++++++++++++++---------
 xen/include/asm-arm/grant_table.h   |   7 +
 xen/include/asm-x86/grant_table.h   |   5 +
 xen/include/public/domctl.h         |   9 +
 xen/include/xen/grant_table.h       |  91 +--------
 xen/xsm/flask/hooks.c               |   3 +
 xen/xsm/flask/policy/access_vectors |   2 +
 21 files changed, 402 insertions(+), 239 deletions(-)

-- 
2.12.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  2017-09-08 14:49   ` Jan Beulich
                     ` (2 more replies)
  2017-09-08  6:56 ` [PATCH v5 2/8] xen: clean up grant_table.h Juergen Gross
                   ` (6 subsequent siblings)
  7 siblings, 3 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

The x86 and arm versions of the XENMAPSPACE_grant_table handling are
nearly identical. Move the code into a common function in grant_table.c
and add an architecture-dependent hook to handle the differences.

Switch to mfn_t in order to be more type-safe.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
V3:
- update commit message

V2:
- rebased to staging
---
 xen/arch/arm/mm.c                 | 36 ++++------------------------------
 xen/arch/x86/mm.c                 | 41 ++++++++++-----------------------------
 xen/common/grant_table.c          | 38 ++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/grant_table.h |  7 +++++++
 xen/include/asm-x86/grant_table.h |  5 +++++
 xen/include/xen/grant_table.h     |  3 +++
 6 files changed, 67 insertions(+), 63 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index b39677eac9..3db0e3bdea 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1229,39 +1229,11 @@ int xenmem_add_to_physmap_one(
     switch ( space )
     {
     case XENMAPSPACE_grant_table:
-        grant_write_lock(d->grant_table);
-
-        if ( d->grant_table->gt_version == 0 )
-            d->grant_table->gt_version = 1;
-
-        if ( d->grant_table->gt_version == 2 &&
-                (idx & XENMAPIDX_grant_table_status) )
-        {
-            idx &= ~XENMAPIDX_grant_table_status;
-            if ( idx < nr_status_frames(d->grant_table) )
-                mfn = virt_to_mfn(d->grant_table->status[idx]);
-        }
-        else
-        {
-            if ( (idx >= nr_grant_frames(d->grant_table)) &&
-                 (idx < max_grant_frames) )
-                gnttab_grow_table(d, idx + 1);
-
-            if ( idx < nr_grant_frames(d->grant_table) )
-                mfn = virt_to_mfn(d->grant_table->shared_raw[idx]);
-        }
-
-        if ( !mfn_eq(mfn, INVALID_MFN) )
-        {
-            d->arch.grant_table_gfn[idx] = gfn;
-
-            t = p2m_ram_rw;
-        }
-
-        grant_write_unlock(d->grant_table);
+        rc = gnttab_map_frame(d, idx, gfn, &mfn);
+        if ( rc )
+            return rc;
 
-        if ( mfn_eq(mfn, INVALID_MFN) )
-            return -EINVAL;
+        t = p2m_ram_rw;
 
         break;
     case XENMAPSPACE_shared_info:
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e5a029c9be..3d899e4a8e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4631,40 +4631,19 @@ int xenmem_add_to_physmap_one(
 {
     struct page_info *page = NULL;
     unsigned long gfn = 0; /* gcc ... */
-    unsigned long prev_mfn, mfn = 0, old_gpfn;
+    unsigned long prev_mfn, old_gpfn;
     int rc = 0;
+    mfn_t mfn = INVALID_MFN;
     p2m_type_t p2mt;
 
     switch ( space )
     {
         case XENMAPSPACE_shared_info:
             if ( idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
+                mfn = _mfn(virt_to_mfn(d->shared_info));
             break;
         case XENMAPSPACE_grant_table:
-            grant_write_lock(d->grant_table);
-
-            if ( d->grant_table->gt_version == 0 )
-                d->grant_table->gt_version = 1;
-
-            if ( d->grant_table->gt_version == 2 &&
-                 (idx & XENMAPIDX_grant_table_status) )
-            {
-                idx &= ~XENMAPIDX_grant_table_status;
-                if ( idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[idx]);
-            }
-            else
-            {
-                if ( (idx >= nr_grant_frames(d->grant_table)) &&
-                     (idx < max_grant_frames) )
-                    gnttab_grow_table(d, idx + 1);
-
-                if ( idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[idx]);
-            }
-
-            grant_write_unlock(d->grant_table);
+            gnttab_map_frame(d, idx, gpfn, &mfn);
             break;
         case XENMAPSPACE_gmfn_range:
         case XENMAPSPACE_gmfn:
@@ -4681,8 +4660,8 @@ int xenmem_add_to_physmap_one(
             }
             if ( !get_page_from_mfn(_mfn(idx), d) )
                 break;
-            mfn = idx;
-            page = mfn_to_page(_mfn(mfn));
+            mfn = _mfn(idx);
+            page = mfn_to_page(mfn);
             break;
         }
         case XENMAPSPACE_gmfn_foreign:
@@ -4691,7 +4670,7 @@ int xenmem_add_to_physmap_one(
             break;
     }
 
-    if ( !paging_mode_translate(d) || (mfn == 0) )
+    if ( !paging_mode_translate(d) || mfn_eq(mfn, INVALID_MFN) )
     {
         rc = -EINVAL;
         goto put_both;
@@ -4715,16 +4694,16 @@ int xenmem_add_to_physmap_one(
         goto put_both;
 
     /* Unmap from old location, if any. */
-    old_gpfn = get_gpfn_from_mfn(mfn);
+    old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
     ASSERT( old_gpfn != SHARED_M2P_ENTRY );
     if ( space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range )
         ASSERT( old_gpfn == gfn );
     if ( old_gpfn != INVALID_M2P_ENTRY )
-        rc = guest_physmap_remove_page(d, _gfn(old_gpfn), _mfn(mfn), PAGE_ORDER_4K);
+        rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
-        rc = guest_physmap_add_page(d, gpfn, _mfn(mfn), PAGE_ORDER_4K);
+        rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
 
  put_both:
     /* In the XENMAPSPACE_gmfn, we took a ref of the gfn at the top */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index fb3859ce8e..4c2e9e40a5 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3607,6 +3607,44 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
 }
 #endif
 
+int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
+                     mfn_t *mfn)
+{
+    int rc = 0;
+    struct grant_table *gt = d->grant_table;
+
+    grant_write_lock(gt);
+
+    if ( gt->gt_version == 0 )
+        gt->gt_version = 1;
+
+    if ( gt->gt_version == 2 &&
+         (idx & XENMAPIDX_grant_table_status) )
+    {
+        idx &= ~XENMAPIDX_grant_table_status;
+        if ( idx < nr_status_frames(gt) )
+            *mfn = _mfn(virt_to_mfn(gt->status[idx]));
+        else
+            rc = -EINVAL;
+    }
+    else
+    {
+        if ( (idx >= nr_grant_frames(gt)) && (idx < max_grant_frames) )
+            gnttab_grow_table(d, idx + 1);
+
+        if ( idx < nr_grant_frames(gt) )
+            *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
+        else
+            rc = -EINVAL;
+    }
+
+    gnttab_set_frame_gfn(d, idx, gfn);
+
+    grant_write_unlock(gt);
+
+    return rc;
+}
+
 static void gnttab_usage_print(struct domain *rd)
 {
     int first = 1;
diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index bc4d61a940..0a248a765a 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -2,6 +2,7 @@
 #define __ASM_GRANT_TABLE_H__
 
 #include <xen/grant_table.h>
+#include <xen/sched.h>
 
 #define INITIAL_NR_GRANT_FRAMES 4
 
@@ -21,6 +22,12 @@ static inline int replace_grant_supported(void)
     return 1;
 }
 
+static inline void gnttab_set_frame_gfn(struct domain *d, unsigned long idx,
+                                        gfn_t gfn)
+{
+    d->arch.grant_table_gfn[idx] = gfn;
+}
+
 #define gnttab_create_shared_page(d, t, i)                               \
     do {                                                                 \
         share_xen_page_with_guest(                                       \
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 33b2f88b96..c865999a33 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -75,6 +75,11 @@ static inline void gnttab_clear_flag(unsigned int nr, uint16_t *st)
     asm volatile ("lock btrw %w1,%0" : "=m" (*st) : "Ir" (nr), "m" (*st));
 }
 
+static inline void gnttab_set_frame_gfn(struct domain *d, unsigned long idx,
+                                        gfn_t gfn)
+{
+}
+
 /* Foreign mappings of HVM-guest pages do not modify the type count. */
 #define gnttab_host_mapping_get_page_type(ro, ld, rd)   \
     (!(ro) && (((ld) == (rd)) || !paging_mode_external(rd)))
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index af269a108d..43ec6c4d80 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -136,4 +136,7 @@ static inline unsigned int grant_to_status_frames(int grant_frames)
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status);
 
+int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
+                     mfn_t *mfn);
+
 #endif /* __XEN_GRANT_TABLE_H__ */
-- 
2.12.3




* [PATCH v5 2/8] xen: clean up grant_table.h
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
  2017-09-08  6:56 ` [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  2017-09-08 15:00   ` Jan Beulich
       [not found]   ` <59B2CC990200007800178E3C@suse.com>
  2017-09-08  6:56 ` [PATCH v5 3/8] xen: delay allocation of grant table sub structures Juergen Gross
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

Many definitions can now be moved from xen/grant_table.h to
common/grant_table.c, as they are no longer used in other source files.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/grant_table.c      | 83 ++++++++++++++++++++++++++++++++++++++++--
 xen/include/xen/grant_table.h | 84 -------------------------------------------
 2 files changed, 81 insertions(+), 86 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 4c2e9e40a5..4520e36d90 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -40,6 +40,44 @@
 #include <xsm/xsm.h>
 #include <asm/flushtlb.h>
 
+/* Per-domain grant information. */
+struct grant_table {
+    /*
+     * Lock protecting updates to grant table state (version, active
+     * entry list, etc.)
+     */
+    percpu_rwlock_t       lock;
+    /* Table size. Number of frames shared with guest */
+    unsigned int          nr_grant_frames;
+    /* Shared grant table (see include/public/grant_table.h). */
+    union {
+        void **shared_raw;
+        struct grant_entry_v1 **shared_v1;
+        union grant_entry_v2 **shared_v2;
+    };
+    /* Number of grant status frames shared with guest (for version 2) */
+    unsigned int          nr_status_frames;
+    /* State grant table (see include/public/grant_table.h). */
+    grant_status_t       **status;
+    /* Active grant table. */
+    struct active_grant_entry **active;
+    /* Mapping tracking table per vcpu. */
+    struct grant_mapping **maptrack;
+    unsigned int          maptrack_limit;
+    /* Lock protecting the maptrack limit */
+    spinlock_t            maptrack_lock;
+    /*
+     * The defined versions are 1 and 2.  Set to 0 if we don't know
+     * what version to use yet.
+     */
+    unsigned              gt_version;
+};
+
+#ifndef DEFAULT_MAX_NR_GRANT_FRAMES /* to allow arch to override */
+/* Default maximum size of a grant table. [POLICY] */
+#define DEFAULT_MAX_NR_GRANT_FRAMES   32
+#endif
+
 unsigned int __read_mostly max_grant_frames;
 integer_param("gnttab_max_frames", max_grant_frames);
 
@@ -118,6 +156,18 @@ struct grant_mapping {
     uint32_t pad;           /* round size to a power of 2 */
 };
 
+/* Number of grant table frames. Caller must hold d's grant table lock. */
+static inline unsigned int nr_grant_frames(const struct grant_table *gt)
+{
+    return gt->nr_grant_frames;
+}
+
+/* Number of status grant table frames. Caller must hold d's gr. table lock.*/
+static inline unsigned int nr_status_frames(const struct grant_table *gt)
+{
+    return gt->nr_status_frames;
+}
+
 #define MAPTRACK_PER_PAGE (PAGE_SIZE / sizeof(struct grant_mapping))
 #define maptrack_entry(t, e) \
     ((t)->maptrack[(e)/MAPTRACK_PER_PAGE][(e)%MAPTRACK_PER_PAGE])
@@ -197,7 +247,27 @@ static inline void act_set_gfn(struct active_grant_entry *act, gfn_t gfn)
 #endif
 }
 
-DEFINE_PERCPU_RWLOCK_GLOBAL(grant_rwlock);
+static DEFINE_PERCPU_RWLOCK_GLOBAL(grant_rwlock);
+
+static inline void grant_read_lock(struct grant_table *gt)
+{
+    percpu_read_lock(grant_rwlock, &gt->lock);
+}
+
+static inline void grant_read_unlock(struct grant_table *gt)
+{
+    percpu_read_unlock(grant_rwlock, &gt->lock);
+}
+
+static inline void grant_write_lock(struct grant_table *gt)
+{
+    percpu_write_lock(grant_rwlock, &gt->lock);
+}
+
+static inline void grant_write_unlock(struct grant_table *gt)
+{
+    percpu_write_unlock(grant_rwlock, &gt->lock);
+}
 
 static inline void gnttab_flush_tlb(const struct domain *d)
 {
@@ -250,6 +320,15 @@ static inline void active_entry_release(struct active_grant_entry *act)
     spin_unlock(&act->lock);
 }
 
+#define GRANT_STATUS_PER_PAGE (PAGE_SIZE / sizeof(grant_status_t))
+#define GRANT_PER_PAGE (PAGE_SIZE / sizeof(grant_entry_v2_t))
+/* Number of grant table status entries. Caller must hold d's gr. table lock.*/
+static inline unsigned int grant_to_status_frames(unsigned int grant_frames)
+{
+    return (grant_frames * GRANT_PER_PAGE + GRANT_STATUS_PER_PAGE - 1) /
+        GRANT_STATUS_PER_PAGE;
+}
+
 /* Check if the page has been paged out, or needs unsharing.
    If rc == GNTST_okay, *page contains the page struct with a ref taken.
    Caller must do put_page(*page).
@@ -1580,7 +1659,7 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
  * Grow the grant table. The caller must hold the grant table's
  * write lock before calling this function.
  */
-int
+static int
 gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
 {
     struct grant_table *gt = d->grant_table;
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 43ec6c4d80..43b07e60c5 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -29,66 +29,9 @@
 #include <asm/page.h>
 #include <asm/grant_table.h>
 
-#ifndef DEFAULT_MAX_NR_GRANT_FRAMES /* to allow arch to override */
-/* Default maximum size of a grant table. [POLICY] */
-#define DEFAULT_MAX_NR_GRANT_FRAMES   32
-#endif
 /* The maximum size of a grant table. */
 extern unsigned int max_grant_frames;
 
-DECLARE_PERCPU_RWLOCK_GLOBAL(grant_rwlock);
-
-/* Per-domain grant information. */
-struct grant_table {
-    /*
-     * Lock protecting updates to grant table state (version, active
-     * entry list, etc.)
-     */
-    percpu_rwlock_t       lock;
-    /* Table size. Number of frames shared with guest */
-    unsigned int          nr_grant_frames;
-    /* Shared grant table (see include/public/grant_table.h). */
-    union {
-        void **shared_raw;
-        struct grant_entry_v1 **shared_v1;
-        union grant_entry_v2 **shared_v2;
-    };
-    /* Number of grant status frames shared with guest (for version 2) */
-    unsigned int          nr_status_frames;
-    /* State grant table (see include/public/grant_table.h). */
-    grant_status_t       **status;
-    /* Active grant table. */
-    struct active_grant_entry **active;
-    /* Mapping tracking table per vcpu. */
-    struct grant_mapping **maptrack;
-    unsigned int          maptrack_limit;
-    /* Lock protecting the maptrack limit */
-    spinlock_t            maptrack_lock;
-    /* The defined versions are 1 and 2.  Set to 0 if we don't know
-       what version to use yet. */
-    unsigned              gt_version;
-};
-
-static inline void grant_read_lock(struct grant_table *gt)
-{
-    percpu_read_lock(grant_rwlock, &gt->lock);
-}
-
-static inline void grant_read_unlock(struct grant_table *gt)
-{
-    percpu_read_unlock(grant_rwlock, &gt->lock);
-}
-
-static inline void grant_write_lock(struct grant_table *gt)
-{
-    percpu_write_lock(grant_rwlock, &gt->lock);
-}
-
-static inline void grant_write_unlock(struct grant_table *gt)
-{
-    percpu_write_unlock(grant_rwlock, &gt->lock);
-}
-
 /* Create/destroy per-domain grant table context. */
 int grant_table_create(
     struct domain *d);
@@ -106,33 +49,6 @@ void
 gnttab_release_mappings(
     struct domain *d);
 
-/* Increase the size of a domain's grant table.
- * Caller must hold d's grant table write lock.
- */
-int
-gnttab_grow_table(struct domain *d, unsigned int req_nr_frames);
-
-/* Number of grant table frames. Caller must hold d's grant table lock. */
-static inline unsigned int nr_grant_frames(struct grant_table *gt)
-{
-    return gt->nr_grant_frames;
-}
-
-/* Number of status grant table frames. Caller must hold d's gr. table lock.*/
-static inline unsigned int nr_status_frames(struct grant_table *gt)
-{
-    return gt->nr_status_frames;
-}
-
-#define GRANT_STATUS_PER_PAGE (PAGE_SIZE / sizeof(grant_status_t))
-#define GRANT_PER_PAGE (PAGE_SIZE / sizeof(grant_entry_v2_t))
-/* Number of grant table status entries. Caller must hold d's gr. table lock.*/
-static inline unsigned int grant_to_status_frames(int grant_frames)
-{
-    return (grant_frames * GRANT_PER_PAGE + GRANT_STATUS_PER_PAGE - 1) /
-        GRANT_STATUS_PER_PAGE;
-}
-
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status);
 
-- 
2.12.3




* [PATCH v5 3/8] xen: delay allocation of grant table sub structures
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
  2017-09-08  6:56 ` [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c Juergen Gross
  2017-09-08  6:56 ` [PATCH v5 2/8] xen: clean up grant_table.h Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  2017-09-08 15:28   ` Jan Beulich
       [not found]   ` <59B2D34F0200007800178EBC@suse.com>
  2017-09-08  6:56 ` [PATCH v5 4/8] xen: make grant resource limits per domain Juergen Gross
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

Delay the allocation of the grant table sub structures in order to
allow modifying the parameters needed for sizing these structures on a
per-domain basis. The allocation is done either from gnttab_grow_table()
or just before the domain is started for the first time.
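The deferred-allocation pattern this patch introduces can be reduced to a
small sketch (the names and sizes below are illustrative stand-ins, not the
Xen implementation): an idempotent init function, called lazily from the
grow path and explicitly before the domain first runs, so whichever path
executes first performs the allocation and the other becomes a no-op.

```c
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for struct grant_table; names are illustrative. */
struct table {
    size_t nr_frames;   /* 0 until first initialisation */
    void **frames;
};

/* Idempotent: a second call is a no-op, so both call sites are safe. */
static int table_init(struct table *t, size_t initial)
{
    if (t->nr_frames)
        return 0;                      /* already initialised */
    t->frames = calloc(initial, sizeof(*t->frames));
    if (!t->frames)
        return -1;                     /* -ENOMEM in the real code */
    t->nr_frames = initial;
    return 0;
}

/* Grow path: initialise on first use, as gnttab_grow_table() now does. */
static int table_grow(struct table *t, size_t req)
{
    if (!t->nr_frames && table_init(t, 4))
        return 0;                      /* allocation failure */
    return req <= t->nr_frames;        /* real code would reallocate here */
}
```

Making the init function idempotent is what allows it to be called from
both domain_unpause_by_systemcontroller() and the grow path without
tracking which one ran first.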

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
V4:
- make ret more local (Wei Liu)

V3:
- move call of grant_table_init() from gnttab_setup_table() to
  gnttab_grow_table() (Paul Durrant)
---
 xen/common/domain.c           |  17 +++++-
 xen/common/grant_table.c      | 138 ++++++++++++++++++++++++------------------
 xen/include/xen/grant_table.h |   2 +
 3 files changed, 96 insertions(+), 61 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 5aebcf265f..983f3336a9 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -363,6 +363,9 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
             goto fail;
         init_status |= INIT_gnttab;
 
+        if ( domid == 0 && grant_table_init(d) )
+            goto fail;
+
         poolid = 0;
 
         err = -ENOMEM;
@@ -998,7 +1001,8 @@ int __domain_pause_by_systemcontroller(struct domain *d,
         prev = cmpxchg(&d->controller_pause_count, old, new);
     } while ( prev != old );
 
-    pause_fn(d);
+    if ( pause_fn )
+        pause_fn(d);
 
     return 0;
 }
@@ -1029,8 +1033,17 @@ int domain_unpause_by_systemcontroller(struct domain *d)
      * Creation is considered finished when the controller reference count
      * first drops to 0.
      */
-    if ( new == 0 )
+    if ( new == 0 && !d->creation_finished )
+    {
+        int ret = grant_table_init(d);
+
+        if ( ret )
+        {
+            __domain_pause_by_systemcontroller(d, NULL);
+            return ret;
+        }
         d->creation_finished = true;
+    }
 
     domain_unpause(d);
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 4520e36d90..29e7fa539b 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1655,6 +1655,78 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
     gt->nr_status_frames = 0;
 }
 
+int
+grant_table_init(struct domain *d)
+{
+    struct grant_table *gt = d->grant_table;
+    unsigned int i, j;
+
+    if ( gt->nr_grant_frames )
+        return 0;
+
+    gt->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
+
+    /* Active grant table. */
+    if ( (gt->active = xzalloc_array(struct active_grant_entry *,
+                                     max_nr_active_grant_frames)) == NULL )
+        goto no_mem_1;
+    for ( i = 0;
+          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
+    {
+        if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
+            goto no_mem_2;
+        clear_page(gt->active[i]);
+        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
+            spin_lock_init(&gt->active[i][j].lock);
+    }
+
+    /* Tracking of mapped foreign frames table */
+    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
+    if ( gt->maptrack == NULL )
+        goto no_mem_2;
+
+    /* Shared grant table. */
+    if ( (gt->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
+        goto no_mem_3;
+    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
+    {
+        if ( (gt->shared_raw[i] = alloc_xenheap_page()) == NULL )
+            goto no_mem_4;
+        clear_page(gt->shared_raw[i]);
+    }
+
+    /* Status pages for grant table - for version 2 */
+    gt->status = xzalloc_array(grant_status_t *,
+                               grant_to_status_frames(max_grant_frames));
+    if ( gt->status == NULL )
+        goto no_mem_4;
+
+    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
+        gnttab_create_shared_page(d, gt, i);
+
+    gt->nr_status_frames = 0;
+
+    return 0;
+
+ no_mem_4:
+    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
+        free_xenheap_page(gt->shared_raw[i]);
+    xfree(gt->shared_raw);
+    gt->shared_raw = NULL;
+ no_mem_3:
+    vfree(gt->maptrack);
+    gt->maptrack = NULL;
+ no_mem_2:
+    for ( i = 0;
+          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
+        free_xenheap_page(gt->active[i]);
+    xfree(gt->active);
+    gt->active = NULL;
+ no_mem_1:
+    gt->nr_grant_frames = 0;
+    return -ENOMEM;
+}
+
 /*
  * Grow the grant table. The caller must hold the grant table's
  * write lock before calling this function.
@@ -1665,6 +1737,12 @@ gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
     struct grant_table *gt = d->grant_table;
     unsigned int i, j;
 
+    if ( !gt->nr_grant_frames && grant_table_init(d) )
+    {
+        gdprintk(XENLOG_INFO, "Allocation failure in grant table init.\n");
+        return 0;
+    }
+
     ASSERT(req_nr_frames <= max_grant_frames);
 
     gdprintk(XENLOG_INFO,
@@ -3380,75 +3458,17 @@ grant_table_create(
     struct domain *d)
 {
     struct grant_table *t;
-    unsigned int i, j;
 
     if ( (t = xzalloc(struct grant_table)) == NULL )
-        goto no_mem_0;
+        return -ENOMEM;
 
     /* Simple stuff. */
     percpu_rwlock_resource_init(&t->lock, grant_rwlock);
     spin_lock_init(&t->maptrack_lock);
-    t->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
-
-    /* Active grant table. */
-    if ( (t->active = xzalloc_array(struct active_grant_entry *,
-                                    max_nr_active_grant_frames)) == NULL )
-        goto no_mem_1;
-    for ( i = 0;
-          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
-    {
-        if ( (t->active[i] = alloc_xenheap_page()) == NULL )
-            goto no_mem_2;
-        clear_page(t->active[i]);
-        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
-            spin_lock_init(&t->active[i][j].lock);
-    }
-
-    /* Tracking of mapped foreign frames table */
-    t->maptrack = vzalloc(max_maptrack_frames * sizeof(*t->maptrack));
-    if ( t->maptrack == NULL )
-        goto no_mem_2;
-
-    /* Shared grant table. */
-    if ( (t->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
-        goto no_mem_3;
-    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
-    {
-        if ( (t->shared_raw[i] = alloc_xenheap_page()) == NULL )
-            goto no_mem_4;
-        clear_page(t->shared_raw[i]);
-    }
-
-    /* Status pages for grant table - for version 2 */
-    t->status = xzalloc_array(grant_status_t *,
-                              grant_to_status_frames(max_grant_frames));
-    if ( t->status == NULL )
-        goto no_mem_4;
-
-    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
-        gnttab_create_shared_page(d, t, i);
-
-    t->nr_status_frames = 0;
 
     /* Okay, install the structure. */
     d->grant_table = t;
     return 0;
-
- no_mem_4:
-    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
-        free_xenheap_page(t->shared_raw[i]);
-    xfree(t->shared_raw);
- no_mem_3:
-    vfree(t->maptrack);
- no_mem_2:
-    for ( i = 0;
-          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
-        free_xenheap_page(t->active[i]);
-    xfree(t->active);
- no_mem_1:
-    xfree(t);
- no_mem_0:
-    return -ENOMEM;
 }
 
 void
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 43b07e60c5..84a8d61616 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -35,6 +35,8 @@ extern unsigned int max_grant_frames;
 /* Create/destroy per-domain grant table context. */
 int grant_table_create(
     struct domain *d);
+int grant_table_init(
+    struct domain *d);
 void grant_table_destroy(
     struct domain *d);
 void grant_table_init_vcpu(struct vcpu *v);
-- 
2.12.3




* [PATCH v5 4/8] xen: make grant resource limits per domain
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
                   ` (2 preceding siblings ...)
  2017-09-08  6:56 ` [PATCH v5 3/8] xen: delay allocation of grant table sub structures Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  2017-09-08 15:44   ` Jan Beulich
       [not found]   ` <59B2D7180200007800178ED3@suse.com>
  2017-09-08  6:56 ` [PATCH v5 5/8] xen: double default grant frame limit for huge hosts Juergen Gross
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

Instead of using the same global grant table resource limits (max.
number of grant frames, max. number of maptrack frames) for all domains,
make these limits per-domain. This will allow setting individual limits
in the future. For now, initialize the per-domain limits with the global
values.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
V3:
- correct error message (Paul Durrant)
---
 xen/common/grant_table.c | 82 ++++++++++++++++++++++++++----------------------
 1 file changed, 45 insertions(+), 37 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 29e7fa539b..ff735a4b47 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -71,6 +71,9 @@ struct grant_table {
      * what version to use yet.
      */
     unsigned              gt_version;
+    /* Resource limits of the domain. */
+    unsigned int          max_grant_frames;
+    unsigned int          max_maptrack_frames;
 };
 
 #ifndef DEFAULT_MAX_NR_GRANT_FRAMES /* to allow arch to override */
@@ -287,8 +290,8 @@ num_act_frames_from_sha_frames(const unsigned int num)
     return DIV_ROUND_UP(num * sha_per_page, ACGNT_PER_PAGE);
 }
 
-#define max_nr_active_grant_frames \
-    num_act_frames_from_sha_frames(max_grant_frames)
+#define max_nr_active_grant_frames(gt) \
+    num_act_frames_from_sha_frames(gt->max_grant_frames)
 
 static inline unsigned int
 nr_active_grant_frames(struct grant_table *gt)
@@ -526,7 +529,7 @@ get_maptrack_handle(
      * out of memory, try stealing an entry from another VCPU (in case the
      * guest isn't mapping across its VCPUs evenly).
      */
-    if ( nr_maptrack_frames(lgt) < max_maptrack_frames )
+    if ( nr_maptrack_frames(lgt) < lgt->max_maptrack_frames )
         new_mt = alloc_xenheap_page();
 
     if ( !new_mt )
@@ -1664,14 +1667,15 @@ grant_table_init(struct domain *d)
     if ( gt->nr_grant_frames )
         return 0;
 
-    gt->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
+    gt->nr_grant_frames = min_t(unsigned int, INITIAL_NR_GRANT_FRAMES,
+                                              gt->max_grant_frames);
 
     /* Active grant table. */
     if ( (gt->active = xzalloc_array(struct active_grant_entry *,
-                                     max_nr_active_grant_frames)) == NULL )
+                                     max_nr_active_grant_frames(gt))) == NULL )
         goto no_mem_1;
     for ( i = 0;
-          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
+          i < num_act_frames_from_sha_frames(gt->nr_grant_frames); i++ )
     {
         if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
             goto no_mem_2;
@@ -1681,14 +1685,14 @@ grant_table_init(struct domain *d)
     }
 
     /* Tracking of mapped foreign frames table */
-    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
+    gt->maptrack = vzalloc(gt->max_maptrack_frames * sizeof(*gt->maptrack));
     if ( gt->maptrack == NULL )
         goto no_mem_2;
 
     /* Shared grant table. */
-    if ( (gt->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
+    if ( (gt->shared_raw = xzalloc_array(void *, gt->max_grant_frames)) == NULL )
         goto no_mem_3;
-    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
+    for ( i = 0; i < gt->nr_grant_frames; i++ )
     {
         if ( (gt->shared_raw[i] = alloc_xenheap_page()) == NULL )
             goto no_mem_4;
@@ -1697,11 +1701,11 @@ grant_table_init(struct domain *d)
 
     /* Status pages for grant table - for version 2 */
     gt->status = xzalloc_array(grant_status_t *,
-                               grant_to_status_frames(max_grant_frames));
+                               grant_to_status_frames(gt->max_grant_frames));
     if ( gt->status == NULL )
         goto no_mem_4;
 
-    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
+    for ( i = 0; i < gt->nr_grant_frames; i++ )
         gnttab_create_shared_page(d, gt, i);
 
     gt->nr_status_frames = 0;
@@ -1709,7 +1713,7 @@ grant_table_init(struct domain *d)
     return 0;
 
  no_mem_4:
-    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
+    for ( i = 0; i < gt->nr_grant_frames; i++ )
         free_xenheap_page(gt->shared_raw[i]);
     xfree(gt->shared_raw);
     gt->shared_raw = NULL;
@@ -1718,7 +1722,7 @@ grant_table_init(struct domain *d)
     gt->maptrack = NULL;
  no_mem_2:
     for ( i = 0;
-          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
+          i < num_act_frames_from_sha_frames(gt->nr_grant_frames); i++ )
         free_xenheap_page(gt->active[i]);
     xfree(gt->active);
     gt->active = NULL;
@@ -1743,7 +1747,7 @@ gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
         return 0;
     }
 
-    ASSERT(req_nr_frames <= max_grant_frames);
+    ASSERT(req_nr_frames <= gt->max_grant_frames);
 
     gdprintk(XENLOG_INFO,
             "Expanding dom (%d) grant table from (%d) to (%d) frames.\n",
@@ -1815,15 +1819,6 @@ gnttab_setup_table(
     if ( unlikely(copy_from_guest(&op, uop, 1)) )
         return -EFAULT;
 
-    if ( unlikely(op.nr_frames > max_grant_frames) )
-    {
-        gdprintk(XENLOG_INFO, "Xen only supports up to %d grant-table frames"
-                " per domain.\n",
-                max_grant_frames);
-        op.status = GNTST_general_error;
-        goto out;
-    }
-
     if ( !guest_handle_okay(op.frame_list, op.nr_frames) )
         return -EFAULT;
 
@@ -1843,6 +1838,14 @@ gnttab_setup_table(
     gt = d->grant_table;
     grant_write_lock(gt);
 
+    if ( unlikely(op.nr_frames > gt->max_grant_frames) )
+    {
+        gdprintk(XENLOG_INFO, "Domain is limited to %d grant-table frames.\n",
+                gt->max_grant_frames);
+        op.status = GNTST_general_error;
+        goto unlock;
+    }
+
     if ( gt->gt_version == 0 )
         gt->gt_version = 1;
 
@@ -1853,7 +1856,7 @@ gnttab_setup_table(
     {
         gdprintk(XENLOG_INFO,
                  "Expand grant table to %u failed. Current: %u Max: %u\n",
-                 op.nr_frames, nr_grant_frames(gt), max_grant_frames);
+                 op.nr_frames, nr_grant_frames(gt), gt->max_grant_frames);
         op.status = GNTST_general_error;
         goto unlock;
     }
@@ -1888,6 +1891,7 @@ gnttab_query_size(
 {
     struct gnttab_query_size op;
     struct domain *d;
+    struct grant_table *gt;
 
     if ( count != 1 )
         return -EINVAL;
@@ -1908,13 +1912,15 @@ gnttab_query_size(
         goto out;
     }
 
-    grant_read_lock(d->grant_table);
+    gt = d->grant_table;
+
+    grant_read_lock(gt);
 
-    op.nr_frames     = nr_grant_frames(d->grant_table);
-    op.max_nr_frames = max_grant_frames;
+    op.nr_frames     = nr_grant_frames(gt);
+    op.max_nr_frames = gt->max_grant_frames;
     op.status        = GNTST_okay;
 
-    grant_read_unlock(d->grant_table);
+    grant_read_unlock(gt);
 
  out:
     if ( d )
@@ -3465,6 +3471,8 @@ grant_table_create(
     /* Simple stuff. */
     percpu_rwlock_resource_init(&t->lock, grant_rwlock);
     spin_lock_init(&t->maptrack_lock);
+    t->max_grant_frames = max_grant_frames;
+    t->max_maptrack_frames = max_maptrack_frames;
 
     /* Okay, install the structure. */
     d->grant_table = t;
@@ -3728,7 +3736,7 @@ int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
     }
     else
     {
-        if ( (idx >= nr_grant_frames(gt)) && (idx < max_grant_frames) )
+        if ( (idx >= nr_grant_frames(gt)) && (idx < gt->max_grant_frames) )
             gnttab_grow_table(d, idx + 1);
 
         if ( idx < nr_grant_frames(gt) )
@@ -3755,6 +3763,12 @@ static void gnttab_usage_print(struct domain *rd)
 
     grant_read_lock(gt);
 
+    printk("grant-table for remote domain:%5d (v%d)\n"
+           "  %d frames (%d max), %d maptrack frames (%d max)\n",
+           rd->domain_id, gt->gt_version,
+           nr_grant_frames(gt), gt->max_grant_frames,
+           nr_maptrack_frames(gt), gt->max_maptrack_frames);
+
     for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
     {
         struct active_grant_entry *act;
@@ -3782,12 +3796,7 @@ static void gnttab_usage_print(struct domain *rd)
             status = status_entry(gt, ref);
         }
 
-        if ( first )
-        {
-            printk("grant-table for remote domain:%5d (v%d)\n",
-                   rd->domain_id, gt->gt_version);
-            first = 0;
-        }
+        first = 0;
 
         /*      [0xXXX]  ddddd 0xXXXXXX 0xXXXXXXXX      ddddd 0xXXXXXX 0xXX */
         printk("[0x%03x]  %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n",
@@ -3799,8 +3808,7 @@ static void gnttab_usage_print(struct domain *rd)
     grant_read_unlock(gt);
 
     if ( first )
-        printk("grant-table for remote domain:%5d ... "
-               "no active grant table entries\n", rd->domain_id);
+        printk("no active grant table entries\n");
 }
 
 static void gnttab_usage_print_all(unsigned char key)
-- 
2.12.3



* [PATCH v5 5/8] xen: double default grant frame limit for huge hosts
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
                   ` (3 preceding siblings ...)
  2017-09-08  6:56 ` [PATCH v5 4/8] xen: make grant resource limits per domain Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  2017-09-08 15:48   ` Jan Beulich
       [not found]   ` <59B2D8050200007800178ED6@suse.com>
  2017-09-08  6:56 ` [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits Juergen Gross
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

In case a system has memory above the 16TB boundary, double the default
grant frame limit per domain. This ensures a PV domain can still
establish the same number of grants even if it is required to use
version 2 grants, which need twice the space of v1 grants.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/grant_table.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ff735a4b47..c00119f2fe 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3824,8 +3824,15 @@ static int __init gnttab_usage_init(void)
 {
     BUILD_BUG_ON(DEFAULT_MAX_MAPTRACK_FRAMES < DEFAULT_MAX_NR_GRANT_FRAMES);
 
+    /*
+     * In case grant v2 is required for pv domains to reference any possible
+     * memory page (i.e. memory is installed above 16TB boundary) double the
+     * grant frame limit. This will allow a guest using v2 grants without
+     * having to lower the number of usable grants.
+     */
     if ( !max_grant_frames )
-        max_grant_frames = DEFAULT_MAX_NR_GRANT_FRAMES;
+        max_grant_frames = ((max_page >> 32) ? 2 : 1) *
+                           DEFAULT_MAX_NR_GRANT_FRAMES;
 
     if ( !max_maptrack_frames )
         max_maptrack_frames = DEFAULT_MAX_MAPTRACK_FRAMES;
-- 
2.12.3



* [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
                   ` (4 preceding siblings ...)
  2017-09-08  6:56 ` [PATCH v5 5/8] xen: double default grant frame limit for huge hosts Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  2017-09-08 15:55   ` Jan Beulich
       [not found]   ` <59B2D98F0200007800178F19@suse.com>
  2017-09-08  6:56 ` [PATCH v5 7/8] libxc: add libxc support for setting " Juergen Gross
  2017-09-08  6:56 ` [PATCH v5 8/8] libxl: add libxl " Juergen Gross
  7 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

Add a domctl hypercall to set a domain's grant table resource limits.
It is accepted only as long as gnttab_setup_table() has not yet been
called for the domain and the domain has not started to run.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
V5:
- add set_gnttab_limits to create_domain_common in xen.if
  (Daniel De Graaf)

V3:
- rename *gnttbl* to *gnttab* (Paul Durrant)
---
 tools/flask/policy/modules/dom0.te  |  2 +-
 tools/flask/policy/modules/xen.if   |  2 +-
 xen/common/domctl.c                 |  6 ++++++
 xen/common/grant_table.c            | 26 ++++++++++++++++++++++++++
 xen/include/public/domctl.h         |  9 +++++++++
 xen/include/xen/grant_table.h       |  2 ++
 xen/xsm/flask/hooks.c               |  3 +++
 xen/xsm/flask/policy/access_vectors |  2 ++
 8 files changed, 50 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 338caaf41e..1643b400f0 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -39,7 +39,7 @@ allow dom0_t dom0_t:domain {
 };
 allow dom0_t dom0_t:domain2 {
 	set_cpuid gettsc settsc setscheduler set_max_evtchn set_vnumainfo
-	get_vnumainfo psr_cmt_op psr_cat_op
+	get_vnumainfo psr_cmt_op psr_cat_op set_gnttab_limits
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 912640002e..55437496f6 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -52,7 +52,7 @@ define(`create_domain_common', `
 			settime setdomainhandle getvcpucontext set_misc_info };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
 			set_max_evtchn set_vnumainfo get_vnumainfo cacheflush
-			psr_cmt_op psr_cat_op soft_reset };
+			psr_cmt_op psr_cat_op soft_reset set_gnttab_limits };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 42658e5744..58381f8fe9 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -14,6 +14,7 @@
 #include <xen/sched-if.h>
 #include <xen/domain.h>
 #include <xen/event.h>
+#include <xen/grant_table.h>
 #include <xen/domain_page.h>
 #include <xen/trace.h>
 #include <xen/console.h>
@@ -1149,6 +1150,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
+    case XEN_DOMCTL_set_gnttab_limits:
+        ret = grant_table_set_limits(d, op->u.set_gnttab_limits.grant_frames,
+                                     op->u.set_gnttab_limits.maptrack_frames);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index c00119f2fe..83f1a9dd34 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3667,6 +3667,32 @@ void grant_table_init_vcpu(struct vcpu *v)
     v->maptrack_tail = MAPTRACK_TAIL;
 }
 
+int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
+                           unsigned int maptrack_frames)
+{
+    struct grant_table *gt = d->grant_table;
+    int ret = -EBUSY;
+
+    if ( !gt )
+        return -EEXIST;
+
+    grant_write_lock(gt);
+
+    if ( gt->nr_grant_frames )
+        goto unlock;
+
+    ret = 0;
+    if ( grant_frames )
+        gt->max_grant_frames = grant_frames;
+    if ( maptrack_frames )
+        gt->max_maptrack_frames = maptrack_frames;
+
+ unlock:
+    grant_write_unlock(gt);
+
+    return ret;
+}
+
 #ifdef CONFIG_HAS_MEM_SHARING
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 50ff58f5b9..f7e3509c27 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1163,6 +1163,13 @@ struct xen_domctl_psr_cat_op {
 typedef struct xen_domctl_psr_cat_op xen_domctl_psr_cat_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cat_op_t);
 
+struct xen_domctl_set_gnttab_limits {
+    uint32_t grant_frames;     /* IN: if 0, dont change */
+    uint32_t maptrack_frames;  /* IN: if 0, dont change */
+};
+typedef struct xen_domctl_set_gnttab_limits xen_domctl_set_gnttab_limits_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_gnttab_limits_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1240,6 +1247,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_psr_cat_op                    78
 #define XEN_DOMCTL_soft_reset                    79
+#define XEN_DOMCTL_set_gnttab_limits             80
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1302,6 +1310,7 @@ struct xen_domctl {
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_cat_op        psr_cat_op;
+        struct xen_domctl_set_gnttab_limits set_gnttab_limits;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 84a8d61616..dd9aa3b9ee 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -40,6 +40,8 @@ int grant_table_init(
 void grant_table_destroy(
     struct domain *d);
 void grant_table_init_vcpu(struct vcpu *v);
+int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
+                           unsigned int maptrack_frames);
 
 /*
  * Check if domain has active grants and log first 10 of them.
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 56dc5b0ab9..7b005af834 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -749,6 +749,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_soft_reset:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SOFT_RESET);
 
+    case XEN_DOMCTL_set_gnttab_limits:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_GNTTAB_LIMITS);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index da9f3dfb2e..3a2d863b8f 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -248,6 +248,8 @@ class domain2
     mem_sharing
 # XEN_DOMCTL_psr_cat_op
     psr_cat_op
+# XEN_DOMCTL_set_gnttab_limits
+    set_gnttab_limits
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.12.3



* [PATCH v5 7/8] libxc: add libxc support for setting grant table resource limits
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
                   ` (5 preceding siblings ...)
  2017-09-08  6:56 ` [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  2017-09-08  6:56 ` [PATCH v5 8/8] libxl: add libxl " Juergen Gross
  7 siblings, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

Add a new libxc function xc_domain_set_gnttab_limits() setting the
limits for the maximum number of grant table frames and maptrack
frames of a domain.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
V4:
- use domid_t (Wei Liu)
---
 tools/libxc/include/xenctrl.h | 14 ++++++++++++++
 tools/libxc/xc_domain.c       | 13 +++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 43151cb415..ab34fb4f70 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1064,6 +1064,20 @@ int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq);
 int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
                              uint32_t max_port);
 
+/**
+ * Set the maximum number of grant frames and/or maptrack frames a domain
+ * can have. Can only be used at domain setup time. A zero value means
+ * no change.
+ *
+ * @param xch a handle to an open hypervisor interface
+ * @param domid the domain id
+ * @param grant_frames max. number of grant frames
+ * @param maptrack_frames max. number of maptrack frames
+ */
+int xc_domain_set_gnttab_limits(xc_interface *xch, domid_t domid,
+                                uint32_t grant_frames,
+                                uint32_t maptrack_frames);
+
 /*
  * CPUPOOL MANAGEMENT FUNCTIONS
  */
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 3bab4e8bab..41b42d6637 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2268,6 +2268,19 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_set_gnttab_limits(xc_interface *xch, domid_t domid,
+                                uint32_t grant_frames,
+                                uint32_t maptrack_frames)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_set_gnttab_limits;
+    domctl.domain = domid;
+    domctl.u.set_gnttab_limits.grant_frames = grant_frames;
+    domctl.u.set_gnttab_limits.maptrack_frames = maptrack_frames;
+    return do_domctl(xch, &domctl);
+}
+
 /* Plumbing Xen with vNUMA topology */
 int xc_domain_setvnuma(xc_interface *xch,
                        uint32_t domid,
-- 
2.12.3



* [PATCH v5 8/8] libxl: add libxl support for setting grant table resource limits
  2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
                   ` (6 preceding siblings ...)
  2017-09-08  6:56 ` [PATCH v5 7/8] libxc: add libxc support for setting " Juergen Gross
@ 2017-09-08  6:56 ` Juergen Gross
  7 siblings, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	dgdegra

Add new domain config items for setting the limits on the maximum
number of grant table frames and maptrack frames of a domain.
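In an xl domain configuration the new items would be used like this
(the values are illustrative; a value of 0 or an absent item keeps the
hypervisor default):

```
# guest0.cfg -- hypothetical example
name = "guest0"
memory = 2048
# Allow up to 64 grant frames and 2048 maptrack frames for this domain.
max_grant_frames = 64
max_maptrack_frames = 2048
```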

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
V4:
- rename configuration items to use max_ prefixes (Wei Liu)
---
 docs/man/xl.cfg.pod.5.in    | 15 +++++++++++++++
 tools/libxl/libxl.h         |  6 ++++++
 tools/libxl/libxl_dom.c     |  8 ++++++++
 tools/libxl/libxl_types.idl |  3 +++
 tools/xl/xl_parse.c         |  5 +++++
 tools/xl/xl_sxp.c           |  2 ++
 6 files changed, 39 insertions(+)

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index 79cb2eaea7..caa674d59d 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -444,6 +444,21 @@ unpausing the domain. With a properly constructed security policy (such
 as nomigrate_t in the example policy), this can be used to build a
 domain whose memory is not accessible to the toolstack domain.
 
+=item B<max_grant_frames=NUMBER>
+
+Specify the maximum number of grant frames the domain is allowed to have.
+This value controls how many pages the domain is able to grant access to for
+other domains, needed e.g. for the operation of paravirtualized devices.
+The default is 32, if not set to another value via a Xen boot parameter.
+
+=item B<max_maptrack_frames=NUMBER>
+
+Specify the maximum number of grant maptrack frames the domain is allowed
+to have. This value controls how many pages of foreign domains can be accessed
+via the grant mechanism by this domain. A value higher than the normal default
+of 1024 is normally needed only for very large configurations for driver
+domains.
+
 =item B<nomigrate=BOOLEAN>
 
 Disable migration of this domain.  This enables certain other features
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 812b7ea95d..912f636b1a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -311,6 +311,12 @@
 #define LIBXL_HAVE_P9S 1
 
 /*
+ * LIBXL_HAVE_BUILDINFO_GRANT_LIMITS indicates that libxl_domain_build_info
+ * has the max_grant_frames and max_maptrack_frames fields.
+ */
+#define LIBXL_HAVE_BUILDINFO_GRANT_LIMITS 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index f54fd49a73..83ba4757d2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -322,6 +322,14 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         return ERROR_FAIL;
     }
 
+    if (info->max_grant_frames || info->max_maptrack_frames) {
+        if (xc_domain_set_gnttab_limits(ctx->xch, domid, info->max_grant_frames,
+                                        info->max_maptrack_frames) != 0) {
+            LOG(ERROR, "Couldn't set grant table limits");
+            return ERROR_FAIL;
+        }
+    }
+
     /*
      * Check if the domain has any CPU or node affinity already. If not, try
      * to build up the latter via automatic NUMA placement. In fact, in case
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 173d70acec..47487d69c5 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -472,6 +472,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("blkdev_start",    string),
 
     ("vnuma_nodes", Array(libxl_vnode_info, "num_vnuma_nodes")),
+
+    ("max_grant_frames",    uint32),
+    ("max_maptrack_frames", uint32),
     
     ("device_model_version", libxl_device_model_version),
     ("device_model_stubdomain", libxl_defbool),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 02ddd2e90d..dc9df3ce2e 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -943,6 +943,11 @@ void parse_config_data(const char *config_source,
         !xlu_cfg_get_string (config, "cpus_soft", &buf, 0))
         parse_vcpu_affinity(b_info, cpus, buf, num_cpus, false);
 
+    if (!xlu_cfg_get_long (config, "max_grant_frames", &l, 0))
+        b_info->max_grant_frames = l;
+    if (!xlu_cfg_get_long (config, "max_maptrack_frames", &l, 0))
+        b_info->max_maptrack_frames = l;
+
     libxl_defbool_set(&b_info->claim_mode, claim_mode);
 
     if (xlu_cfg_get_string (config, "on_poweroff", &buf, 0))
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index e738bf2465..e264cf2023 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -64,6 +64,8 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
 
     fprintf(fh, "\t(build_info)\n");
     fprintf(fh, "\t(max_vcpus %d)\n", b_info->max_vcpus);
+    fprintf(fh, "\t(max_grant_frames %d)\n", b_info->max_grant_frames);
+    fprintf(fh, "\t(max_maptrack_frames %d)\n", b_info->max_maptrack_frames);
     fprintf(fh, "\t(tsc_mode %s)\n", libxl_tsc_mode_to_string(b_info->tsc_mode));
     fprintf(fh, "\t(max_memkb %"PRId64")\n", b_info->max_memkb);
     fprintf(fh, "\t(target_memkb %"PRId64")\n", b_info->target_memkb);
-- 
2.12.3



* Re: [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c
  2017-09-08  6:56 ` [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c Juergen Gross
@ 2017-09-08 14:49   ` Jan Beulich
       [not found]   ` <59B2C9FF0200007800178E24@suse.com>
  2017-09-11 11:41   ` George Dunlap
  2 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-08 14:49 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4631,40 +4631,19 @@ int xenmem_add_to_physmap_one(
>  {
>      struct page_info *page = NULL;
>      unsigned long gfn = 0; /* gcc ... */
> -    unsigned long prev_mfn, mfn = 0, old_gpfn;
> +    unsigned long prev_mfn, old_gpfn;
>      int rc = 0;
> +    mfn_t mfn = INVALID_MFN;
>      p2m_type_t p2mt;
>  
>      switch ( space )
>      {
>          case XENMAPSPACE_shared_info:
>              if ( idx == 0 )
> -                mfn = virt_to_mfn(d->shared_info);
> +                mfn = _mfn(virt_to_mfn(d->shared_info));
>              break;
>          case XENMAPSPACE_grant_table:
> -            grant_write_lock(d->grant_table);
> -
> -            if ( d->grant_table->gt_version == 0 )
> -                d->grant_table->gt_version = 1;
> -
> -            if ( d->grant_table->gt_version == 2 &&
> -                 (idx & XENMAPIDX_grant_table_status) )
> -            {
> -                idx &= ~XENMAPIDX_grant_table_status;
> -                if ( idx < nr_status_frames(d->grant_table) )
> -                    mfn = virt_to_mfn(d->grant_table->status[idx]);
> -            }
> -            else
> -            {
> -                if ( (idx >= nr_grant_frames(d->grant_table)) &&
> -                     (idx < max_grant_frames) )
> -                    gnttab_grow_table(d, idx + 1);
> -
> -                if ( idx < nr_grant_frames(d->grant_table) )
> -                    mfn = virt_to_mfn(d->grant_table->shared_raw[idx]);
> -            }
> -
> -            grant_write_unlock(d->grant_table);
> +            gnttab_map_frame(d, idx, gpfn, &mfn);

You're ignoring a possible error here. Of course right now this can
only come together with mfn staying INVALID_MFN, but if the function is
changed further, this may change without the author noticing that the
error check here is missing.

Jan



* Re: [PATCH v5 2/8] xen: clean up grant_table.h
  2017-09-08  6:56 ` [PATCH v5 2/8] xen: clean up grant_table.h Juergen Gross
@ 2017-09-08 15:00   ` Jan Beulich
       [not found]   ` <59B2CC990200007800178E3C@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-08 15:00 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -40,6 +40,44 @@
>  #include <xsm/xsm.h>
>  #include <asm/flushtlb.h>
>  
> +/* Per-domain grant information. */
> +struct grant_table {
> +    /*
> +     * Lock protecting updates to grant table state (version, active
> +     * entry list, etc.)
> +     */
> +    percpu_rwlock_t       lock;
> +    /* Table size. Number of frames shared with guest */
> +    unsigned int          nr_grant_frames;
> +    /* Shared grant table (see include/public/grant_table.h). */
> +    union {
> +        void **shared_raw;
> +        struct grant_entry_v1 **shared_v1;
> +        union grant_entry_v2 **shared_v2;
> +    };
> +    /* Number of grant status frames shared with guest (for version 2) */
> +    unsigned int          nr_status_frames;
> +    /* State grant table (see include/public/grant_table.h). */
> +    grant_status_t       **status;
> +    /* Active grant table. */
> +    struct active_grant_entry **active;
> +    /* Mapping tracking table per vcpu. */
> +    struct grant_mapping **maptrack;
> +    unsigned int          maptrack_limit;
> +    /* Lock protecting the maptrack limit */
> +    spinlock_t            maptrack_lock;
> +    /*
> +     * The defined versions are 1 and 2.  Set to 0 if we don't know
> +     * what version to use yet.
> +     */
> +    unsigned              gt_version;

unsigned int please, for consistency with other code (elsewhere and
right above).

> +};

While you're moving it here I think a little bit of re-arrangement
wouldn't hurt. At the very least nr_status_frames could be put
next to nr_grant_frames, so there won't be two 4-byte holes. Or
if there isn't one after nr_grant_frames (i.e. if my counting of
percpu_rwlock_t members was wrong), gt_version could also be
moved next to those other two.

> +static inline unsigned int grant_to_status_frames(unsigned int grant_frames)
> +{
> +    return (grant_frames * GRANT_PER_PAGE + GRANT_STATUS_PER_PAGE - 1) /
> +        GRANT_STATUS_PER_PAGE;

Would be nice if indentation was corrected here at the same time.

Jan



* Re: [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c
       [not found]   ` <59B2C9FF0200007800178E24@suse.com>
@ 2017-09-08 15:14     ` Juergen Gross
  0 siblings, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-08 15:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 08/09/17 16:49, Jan Beulich wrote:
>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -4631,40 +4631,19 @@ int xenmem_add_to_physmap_one(
>>  {
>>      struct page_info *page = NULL;
>>      unsigned long gfn = 0; /* gcc ... */
>> -    unsigned long prev_mfn, mfn = 0, old_gpfn;
>> +    unsigned long prev_mfn, old_gpfn;
>>      int rc = 0;
>> +    mfn_t mfn = INVALID_MFN;
>>      p2m_type_t p2mt;
>>  
>>      switch ( space )
>>      {
>>          case XENMAPSPACE_shared_info:
>>              if ( idx == 0 )
>> -                mfn = virt_to_mfn(d->shared_info);
>> +                mfn = _mfn(virt_to_mfn(d->shared_info));
>>              break;
>>          case XENMAPSPACE_grant_table:
>> -            grant_write_lock(d->grant_table);
>> -
>> -            if ( d->grant_table->gt_version == 0 )
>> -                d->grant_table->gt_version = 1;
>> -
>> -            if ( d->grant_table->gt_version == 2 &&
>> -                 (idx & XENMAPIDX_grant_table_status) )
>> -            {
>> -                idx &= ~XENMAPIDX_grant_table_status;
>> -                if ( idx < nr_status_frames(d->grant_table) )
>> -                    mfn = virt_to_mfn(d->grant_table->status[idx]);
>> -            }
>> -            else
>> -            {
>> -                if ( (idx >= nr_grant_frames(d->grant_table)) &&
>> -                     (idx < max_grant_frames) )
>> -                    gnttab_grow_table(d, idx + 1);
>> -
>> -                if ( idx < nr_grant_frames(d->grant_table) )
>> -                    mfn = virt_to_mfn(d->grant_table->shared_raw[idx]);
>> -            }
>> -
>> -            grant_write_unlock(d->grant_table);
>> +            gnttab_map_frame(d, idx, gpfn, &mfn);
> 
> You're ignoring a possible error here. Right now an error can only
> occur together with mfn staying INVALID_MFN, but if the function is
> changed further this may no longer hold, without the author noticing
> that the error check here is missing.

Okay, I'll test for error here.
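
For reference, a minimal shape for that check might be the following
(a sketch only -- the exact control flow depends on the surrounding
switch in xenmem_add_to_physmap_one(), and rc is the local already
declared there):

```c
        case XENMAPSPACE_grant_table:
            rc = gnttab_map_frame(d, idx, gpfn, &mfn);
            if ( rc )
                return rc;
            break;
```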


Juergen


* Re: [PATCH v5 3/8] xen: delay allocation of grant table sub structures
  2017-09-08  6:56 ` [PATCH v5 3/8] xen: delay allocation of grant table sub structures Juergen Gross
@ 2017-09-08 15:28   ` Jan Beulich
       [not found]   ` <59B2D34F0200007800178EBC@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-08 15:28 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
> Delay the allocation of the grant table sub structures in order to
> allow modifying parameters needed for sizing of these structures at a
> per domain basis. Either do it from gnttab_setup_table() or just
> before the domain is started the first time.

The reference to gnttab_setup_table() is now sort of stale.

> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -363,6 +363,9 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
>              goto fail;
>          init_status |= INIT_gnttab;
>  
> +        if ( domid == 0 && grant_table_init(d) )
> +            goto fail;

Besides not really liking the special case, why can't
grant_table_create() make this call, keeping more grant table
logic within grant_table.c? And if you special case Dom0,
wouldn't it be better to (also) special case the hardware domain
(in case that's not Dom0)?

> @@ -1029,8 +1033,17 @@ int domain_unpause_by_systemcontroller(struct domain *d)
>       * Creation is considered finished when the controller reference count
>       * first drops to 0.
>       */
> -    if ( new == 0 )
> +    if ( new == 0 && !d->creation_finished )
> +    {
> +        int ret = grant_table_init(d);
> +
> +        if ( ret )
> +        {
> +            __domain_pause_by_systemcontroller(d, NULL);
> +            return ret;
> +        }
>          d->creation_finished = true;
> +    }

Adding a grant table call here looks rather arbitrary, if not hackish.
Why can't you call it from the domctl you're going to add in a later
patch, requiring the tool stack to issue that domctl in all cases, just
like e.g. a max_vcpus one is always necessary? That would also
avoid a possibly confusing error (from the unpause, i.e. not
obviously related to grant table setup failure). Of course that will
require merging this patch with the other one to avoid an
intermediate state in which the call wouldn't be made at all.

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1655,6 +1655,78 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
>      gt->nr_status_frames = 0;
>  }
>  
> +int
> +grant_table_init(struct domain *d)
> +{
> +    struct grant_table *gt = d->grant_table;
> +    unsigned int i, j;
> +
> +    if ( gt->nr_grant_frames )
> +        return 0;
> +
> +    gt->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
> +
> +    /* Active grant table. */
> +    if ( (gt->active = xzalloc_array(struct active_grant_entry *,
> +                                     max_nr_active_grant_frames)) == NULL )
> +        goto no_mem_1;
> +    for ( i = 0;
> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
> +    {
> +        if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
> +            goto no_mem_2;
> +        clear_page(gt->active[i]);
> +        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
> +            spin_lock_init(&gt->active[i][j].lock);
> +    }
> +
> +    /* Tracking of mapped foreign frames table */
> +    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
> +    if ( gt->maptrack == NULL )
> +        goto no_mem_2;
> +
> +    /* Shared grant table. */
> +    if ( (gt->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
> +        goto no_mem_3;
> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
> +    {
> +        if ( (gt->shared_raw[i] = alloc_xenheap_page()) == NULL )
> +            goto no_mem_4;
> +        clear_page(gt->shared_raw[i]);
> +    }
> +
> +    /* Status pages for grant table - for version 2 */
> +    gt->status = xzalloc_array(grant_status_t *,
> +                               grant_to_status_frames(max_grant_frames));
> +    if ( gt->status == NULL )
> +        goto no_mem_4;
> +
> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
> +        gnttab_create_shared_page(d, gt, i);
> +
> +    gt->nr_status_frames = 0;
> +
> +    return 0;
> +
> + no_mem_4:
> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
> +        free_xenheap_page(gt->shared_raw[i]);
> +    xfree(gt->shared_raw);
> +    gt->shared_raw = NULL;
> + no_mem_3:
> +    vfree(gt->maptrack);
> +    gt->maptrack = NULL;
> + no_mem_2:
> +    for ( i = 0;
> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
> +        free_xenheap_page(gt->active[i]);
> +    xfree(gt->active);
> +    gt->active = NULL;
> + no_mem_1:
> +    gt->nr_grant_frames = 0;
> +    return -ENOMEM;
> +}

The redundancy between this code and gnttab_grow_table() has
always bothered me, and now would seem to be a good occasion
to do away with it. Why don't you defer to gnttab_grow_table()
anything that function already does (keeping the respective limits
at zero in here)?

Jan


* Re: [PATCH v5 4/8] xen: make grant resource limits per domain
  2017-09-08  6:56 ` [PATCH v5 4/8] xen: make grant resource limits per domain Juergen Gross
@ 2017-09-08 15:44   ` Jan Beulich
       [not found]   ` <59B2D7180200007800178ED3@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-08 15:44 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
> @@ -1843,6 +1838,14 @@ gnttab_setup_table(
>      gt = d->grant_table;
>      grant_write_lock(gt);
>  
> +    if ( unlikely(op.nr_frames > gt->max_grant_frames) )
> +    {
> +        gdprintk(XENLOG_INFO, "Domain is limited to %d grant-table frames.\n",
> +                gt->max_grant_frames);

%u please

> @@ -3465,6 +3471,8 @@ grant_table_create(
>      /* Simple stuff. */
>      percpu_rwlock_resource_init(&t->lock, grant_rwlock);
>      spin_lock_init(&t->maptrack_lock);
> +    t->max_grant_frames = max_grant_frames;
> +    t->max_maptrack_frames = max_maptrack_frames;

Am I mistaken or are these the only uses of the two static variables
now? If so (also to prove that's the case) their definitions would
probably better be moved into this function, together with their
integer_param() invocations. The adjustments done by
gnttab_usage_init() could also go here afaict.

> @@ -3755,6 +3763,12 @@ static void gnttab_usage_print(struct domain *rd)
>  
>      grant_read_lock(gt);
>  
> +    printk("grant-table for remote domain:%5d (v%d)\n"
> +           "  %d frames (%d max), %d maptrack frames (%d max)\n",
> +           rd->domain_id, gt->gt_version,
> +           nr_grant_frames(gt), gt->max_grant_frames,
> +           nr_maptrack_frames(gt), gt->max_maptrack_frames);

Various %u instances again, and Dom%d please. Also you put this
after the table header, corrupting intended output.

> @@ -3782,12 +3796,7 @@ static void gnttab_usage_print(struct domain *rd)
>              status = status_entry(gt, ref);
>          }
>  
> -        if ( first )
> -        {
> -            printk("grant-table for remote domain:%5d (v%d)\n",
> -                   rd->domain_id, gt->gt_version);
> -            first = 0;
> -        }
> +        first = 0;

Is it useful to print the per-table information when there are no
entries at all for a domain? I think it would be better to move
what you add as well as the table header into the if() that you
delete.

Jan



* Re: [PATCH v5 5/8] xen: double default grant frame limit for huge hosts
  2017-09-08  6:56 ` [PATCH v5 5/8] xen: double default grant frame limit for huge hosts Juergen Gross
@ 2017-09-08 15:48   ` Jan Beulich
       [not found]   ` <59B2D8050200007800178ED6@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-08 15:48 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -3824,8 +3824,15 @@ static int __init gnttab_usage_init(void)
>  {
>      BUILD_BUG_ON(DEFAULT_MAX_MAPTRACK_FRAMES < DEFAULT_MAX_NR_GRANT_FRAMES);
>  
> +    /*
> +     * In case grant v2 is required for pv domains to reference any possible
> +     * memory page (i.e. memory is installed above 16TB boundary) double the
> +     * grant frame limit. This will allow a guest using v2 grants without
> +     * having to lower the number of usable grants.
> +     */
>      if ( !max_grant_frames )
> -        max_grant_frames = DEFAULT_MAX_NR_GRANT_FRAMES;
> +        max_grant_frames = ((max_page >> 32) ? 2 : 1) *
> +                           DEFAULT_MAX_NR_GRANT_FRAMES;

Didn't we agree that HVM domains exceeding 16TB (or even only
placing any GFNs above the 16TB boundary) would also need to
make use of v2? If so, please drop the reference to PV or add
half a sentence for HVM. Dropping may be better as this is
common code, and ARM doesn't make that distinction.

Jan



* Re: [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits
  2017-09-08  6:56 ` [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits Juergen Gross
@ 2017-09-08 15:55   ` Jan Beulich
       [not found]   ` <59B2D98F0200007800178F19@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-08 15:55 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -3667,6 +3667,32 @@ void grant_table_init_vcpu(struct vcpu *v)
>      v->maptrack_tail = MAPTRACK_TAIL;
>  }
>  
> +int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
> +                           unsigned int maptrack_frames)
> +{
> +    struct grant_table *gt = d->grant_table;
> +    int ret = -EBUSY;
> +
> +    if ( !gt )
> +        return -EEXIST;

How does EEXIST represent the error condition?

> +    grant_write_lock(gt);
> +
> +    if ( gt->nr_grant_frames )
> +        goto unlock;

I think you can do without goto here with no risk of lowered
readability.

> +    ret = 0;
> +    if ( grant_frames )
> +        gt->max_grant_frames = grant_frames;
> +    if ( maptrack_frames )
> +        gt->max_maptrack_frames = maptrack_frames;

Together with what I have said regarding making the invocation
of this domctl mandatory, I think these two shouldn't be conditional.
In particular for maptrack I also don't see why a domain couldn't
do with a limit of zero, as long as it's not serving as a backend for
another guest.

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1163,6 +1163,13 @@ struct xen_domctl_psr_cat_op {
>  typedef struct xen_domctl_psr_cat_op xen_domctl_psr_cat_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cat_op_t);
>  
> +struct xen_domctl_set_gnttab_limits {
> +    uint32_t grant_frames;     /* IN: if 0, don't change */
> +    uint32_t maptrack_frames;  /* IN: if 0, don't change */
> +};
> +typedef struct xen_domctl_set_gnttab_limits xen_domctl_set_gnttab_limits_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_gnttab_limits_t);

In another context I had already recently requested stopping the
bad habit of adding a typedef and handle for every domctl. I don't
see what they're needed for; they just clutter the namespace.

Jan



* Re: [PATCH v5 2/8] xen: clean up grant_table.h
       [not found]   ` <59B2CC990200007800178E3C@suse.com>
@ 2017-09-11  8:22     ` Juergen Gross
  0 siblings, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-11  8:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 08/09/17 17:00, Jan Beulich wrote:
>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -40,6 +40,44 @@
>>  #include <xsm/xsm.h>
>>  #include <asm/flushtlb.h>
>>  
>> +/* Per-domain grant information. */
>> +struct grant_table {
>> +    /*
>> +     * Lock protecting updates to grant table state (version, active
>> +     * entry list, etc.)
>> +     */
>> +    percpu_rwlock_t       lock;
>> +    /* Table size. Number of frames shared with guest */
>> +    unsigned int          nr_grant_frames;
>> +    /* Shared grant table (see include/public/grant_table.h). */
>> +    union {
>> +        void **shared_raw;
>> +        struct grant_entry_v1 **shared_v1;
>> +        union grant_entry_v2 **shared_v2;
>> +    };
>> +    /* Number of grant status frames shared with guest (for version 2) */
>> +    unsigned int          nr_status_frames;
>> +    /* State grant table (see include/public/grant_table.h). */
>> +    grant_status_t       **status;
>> +    /* Active grant table. */
>> +    struct active_grant_entry **active;
>> +    /* Mapping tracking table per vcpu. */
>> +    struct grant_mapping **maptrack;
>> +    unsigned int          maptrack_limit;
>> +    /* Lock protecting the maptrack limit */
>> +    spinlock_t            maptrack_lock;
>> +    /*
>> +     * The defined versions are 1 and 2.  Set to 0 if we don't know
>> +     * what version to use yet.
>> +     */
>> +    unsigned              gt_version;
> 
> unsigned int please, for consistency with other code (elsewhere and
> right above).

Yes.

> 
>> +};
> 
> While you're moving it here I think a little bit of re-arrangement
> wouldn't hurt. At the very least nr_status_frames could be put
> next to nr_grant_frames, so there won't be two 4-byte holes. Or
> if there isn't one after nr_grant_frames (i.e. if my counting of
> percpu_rwlock_t members was wrong), gt_version could also be
> moved next to those other two.

I think I'll reorder the structure completely by putting all pointers
at the end and locks and ints at the beginning. That should minimize
the holes.

>> +static inline unsigned int grant_to_status_frames(unsigned int grant_frames)
>> +{
>> +    return (grant_frames * GRANT_PER_PAGE + GRANT_STATUS_PER_PAGE - 1) /
>> +        GRANT_STATUS_PER_PAGE;
> 
> Would be nice if indentation was corrected here at the same time.

DIV_ROUND_UP() seems to be the better choice.


Juergen


* Re: [PATCH v5 3/8] xen: delay allocation of grant table sub structures
       [not found]   ` <59B2D34F0200007800178EBC@suse.com>
@ 2017-09-11  9:03     ` Juergen Gross
  2017-09-11  9:23       ` Jan Beulich
       [not found]       ` <59B6722B0200007800179767@suse.com>
  0 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-11  9:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 08/09/17 17:28, Jan Beulich wrote:
>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>> Delay the allocation of the grant table sub structures in order to
>> allow modifying parameters needed for sizing of these structures at a
>> per domain basis. Either do it from gnttab_setup_table() or just
>> before the domain is started the first time.
> 
> The reference to gnttab_setup_table() is now sort of stale.

Hmm, yes.

> 
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -363,6 +363,9 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
>>              goto fail;
>>          init_status |= INIT_gnttab;
>>  
>> +        if ( domid == 0 && grant_table_init(d) )
>> +            goto fail;
> 
> Besides not really liking the special case, why can't
> grant_table_create() make this call, keeping more grant table
> logic within grant_table.c?

Right, this is better.

> And if you special case Dom0,
> wouldn't it be better to (also) special case the hardware domain
> (in case that's not Dom0)?

As a hardware domain which isn't dom0 would need to be created via
the appropriate domctl hypercalls, I don't see why the measures
applying to all other domains wouldn't be enough for that case.

> 
>> @@ -1029,8 +1033,17 @@ int domain_unpause_by_systemcontroller(struct domain *d)
>>       * Creation is considered finished when the controller reference count
>>       * first drops to 0.
>>       */
>> -    if ( new == 0 )
>> +    if ( new == 0 && !d->creation_finished )
>> +    {
>> +        int ret = grant_table_init(d);
>> +
>> +        if ( ret )
>> +        {
>> +            __domain_pause_by_systemcontroller(d, NULL);
>> +            return ret;
>> +        }
>>          d->creation_finished = true;
>> +    }
> 
> Adding a grant table call here looks rather arbitrary, if not hackish.

Would it still be hackish if I added a generic function doing final
init calls just before the domain starts to run for the first time?
The call to grant_table_init() would just be the first such late init
call for the domain.

> Why can't you call it from the domctl you're going to add in a later
> patch, requiring the tool stack to issue that domctl in all cases, just
> like e.g. a max_vcpus one is always necessary? That would also
> avoid a possibly confusing error (from the unpause, i.e. not
> obviously related to grant table setup failure). Of course that will
> require merging this patch with the other one to avoid an
> intermediate state in which the call wouldn't be made at all.

This would be another possibility, yes.

Instead of merging the patches I'd just move patches 6-8 before this one
to have everything in place, including the needed tools side.

> 
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -1655,6 +1655,78 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
>>      gt->nr_status_frames = 0;
>>  }
>>  
>> +int
>> +grant_table_init(struct domain *d)
>> +{
>> +    struct grant_table *gt = d->grant_table;
>> +    unsigned int i, j;
>> +
>> +    if ( gt->nr_grant_frames )
>> +        return 0;
>> +
>> +    gt->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
>> +
>> +    /* Active grant table. */
>> +    if ( (gt->active = xzalloc_array(struct active_grant_entry *,
>> +                                     max_nr_active_grant_frames)) == NULL )
>> +        goto no_mem_1;
>> +    for ( i = 0;
>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>> +    {
>> +        if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
>> +            goto no_mem_2;
>> +        clear_page(gt->active[i]);
>> +        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
>> +            spin_lock_init(&gt->active[i][j].lock);
>> +    }
>> +
>> +    /* Tracking of mapped foreign frames table */
>> +    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
>> +    if ( gt->maptrack == NULL )
>> +        goto no_mem_2;
>> +
>> +    /* Shared grant table. */
>> +    if ( (gt->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
>> +        goto no_mem_3;
>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>> +    {
>> +        if ( (gt->shared_raw[i] = alloc_xenheap_page()) == NULL )
>> +            goto no_mem_4;
>> +        clear_page(gt->shared_raw[i]);
>> +    }
>> +
>> +    /* Status pages for grant table - for version 2 */
>> +    gt->status = xzalloc_array(grant_status_t *,
>> +                               grant_to_status_frames(max_grant_frames));
>> +    if ( gt->status == NULL )
>> +        goto no_mem_4;
>> +
>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>> +        gnttab_create_shared_page(d, gt, i);
>> +
>> +    gt->nr_status_frames = 0;
>> +
>> +    return 0;
>> +
>> + no_mem_4:
>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>> +        free_xenheap_page(gt->shared_raw[i]);
>> +    xfree(gt->shared_raw);
>> +    gt->shared_raw = NULL;
>> + no_mem_3:
>> +    vfree(gt->maptrack);
>> +    gt->maptrack = NULL;
>> + no_mem_2:
>> +    for ( i = 0;
>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>> +        free_xenheap_page(gt->active[i]);
>> +    xfree(gt->active);
>> +    gt->active = NULL;
>> + no_mem_1:
>> +    gt->nr_grant_frames = 0;
>> +    return -ENOMEM;
>> +}
> 
> The redundancy between this code and gnttab_grow_table() has
> always bothered me, and now would seem to be a good occasion
> to do away with it. Why don't you defer to gnttab_grow_table()
> anything that function already does (keeping the respective limits
> at zero in here)?

Just to be sure I understand you correctly: you want to get rid of
INITIAL_NR_GRANT_FRAMES and just grow from 0 instead of starting at
the current value of 4, right?

So I would just allocate the arrays (gt->active, gt->maptrack,
gt->shared_raw, gt->status) in grant_table_init().


Juergen


* Re: [PATCH v5 3/8] xen: delay allocation of grant table sub structures
  2017-09-11  9:03     ` Juergen Gross
@ 2017-09-11  9:23       ` Jan Beulich
       [not found]       ` <59B6722B0200007800179767@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-11  9:23 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 11.09.17 at 11:03, <jgross@suse.com> wrote:
> On 08/09/17 17:28, Jan Beulich wrote:
>>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>> And if you special case Dom0,
>> wouldn't it be better to (also) special case the hardware domain
>> (in case that's not Dom0)?
> 
> As a hardware domain not being dom0 would need to be created via the
> appropriate domctl hypercalls I don't see why the measures for all
> other domains wouldn't be enough for that case.

Yes, that's true, especially when making use of the domctl mandatory.
Whether suitable default values for that case wouldn't better live in
a single place (the hypervisor) is still a point to consider here,
though. (By default values I mean ones to be used when the config
file doesn't specify any, not ones to be used by the domctl handler
if the passed-in values are zero or some other "use defaults"
indicator, as you had elsewhere.)

>>> @@ -1029,8 +1033,17 @@ int domain_unpause_by_systemcontroller(struct domain *d)
>>>       * Creation is considered finished when the controller reference count
>>>       * first drops to 0.
>>>       */
>>> -    if ( new == 0 )
>>> +    if ( new == 0 && !d->creation_finished )
>>> +    {
>>> +        int ret = grant_table_init(d);
>>> +
>>> +        if ( ret )
>>> +        {
>>> +            __domain_pause_by_systemcontroller(d, NULL);
>>> +            return ret;
>>> +        }
>>>          d->creation_finished = true;
>>> +    }
>> 
>> Adding a grant table call here looks rather arbitrary, if not hackish.
> 
> Would it still be hackish if I added a generic function doing final
> init calls just before the domain starts to run for the first time?
> The call to grant_table_init() would just be the first such late init
> call for the domain.

Generalizing this would make things look better, yes, but that
would then still not deal with the bad error reporting which
results.

>> Why can't you call it from the domctl you're going to add in a later
>> patch, requiring the tool stack to issue that domctl in all cases, just
>> like e.g. a max_vcpus one is always necessary? That would also
>> avoid a possibly confusing error (from the unpause, i.e. not
>> obviously related to grant table setup failure). Of course that will
>> require merging this patch with the other one to avoid an
>> intermediate state in which the call wouldn't be made at all.
> 
> This would be another possibility, yes.
> 
> Instead of merging the patches I'd just move patches 6-8 before this one
> to have everything in place, including the needed tools side.

Right, I later realized as well that simple re-ordering would be
sufficient.

>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -1655,6 +1655,78 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
>>>      gt->nr_status_frames = 0;
>>>  }
>>>  
>>> +int
>>> +grant_table_init(struct domain *d)
>>> +{
>>> +    struct grant_table *gt = d->grant_table;
>>> +    unsigned int i, j;
>>> +
>>> +    if ( gt->nr_grant_frames )
>>> +        return 0;
>>> +
>>> +    gt->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
>>> +
>>> +    /* Active grant table. */
>>> +    if ( (gt->active = xzalloc_array(struct active_grant_entry *,
>>> +                                     max_nr_active_grant_frames)) == NULL )
>>> +        goto no_mem_1;
>>> +    for ( i = 0;
>>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>>> +    {
>>> +        if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
>>> +            goto no_mem_2;
>>> +        clear_page(gt->active[i]);
>>> +        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
>>> +            spin_lock_init(&gt->active[i][j].lock);
>>> +    }
>>> +
>>> +    /* Tracking of mapped foreign frames table */
>>> +    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
>>> +    if ( gt->maptrack == NULL )
>>> +        goto no_mem_2;
>>> +
>>> +    /* Shared grant table. */
>>> +    if ( (gt->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
>>> +        goto no_mem_3;
>>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>>> +    {
>>> +        if ( (gt->shared_raw[i] = alloc_xenheap_page()) == NULL )
>>> +            goto no_mem_4;
>>> +        clear_page(gt->shared_raw[i]);
>>> +    }
>>> +
>>> +    /* Status pages for grant table - for version 2 */
>>> +    gt->status = xzalloc_array(grant_status_t *,
>>> +                               grant_to_status_frames(max_grant_frames));
>>> +    if ( gt->status == NULL )
>>> +        goto no_mem_4;
>>> +
>>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>>> +        gnttab_create_shared_page(d, gt, i);
>>> +
>>> +    gt->nr_status_frames = 0;
>>> +
>>> +    return 0;
>>> +
>>> + no_mem_4:
>>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>>> +        free_xenheap_page(gt->shared_raw[i]);
>>> +    xfree(gt->shared_raw);
>>> +    gt->shared_raw = NULL;
>>> + no_mem_3:
>>> +    vfree(gt->maptrack);
>>> +    gt->maptrack = NULL;
>>> + no_mem_2:
>>> +    for ( i = 0;
>>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>>> +        free_xenheap_page(gt->active[i]);
>>> +    xfree(gt->active);
>>> +    gt->active = NULL;
>>> + no_mem_1:
>>> +    gt->nr_grant_frames = 0;
>>> +    return -ENOMEM;
>>> +}
>> 
>> The redundancy between this code and gnttab_grow_table() has
>> always bothered me, and now would seem to be a good occasion
>> to do away with it. Why don't you defer to gnttab_grow_table()
>> anything that function already does (keeping the respective limits
>> at zero in here)?
> 
> Just to be sure I understand you correctly: you want to get rid of
> INITIAL_NR_GRANT_FRAMES and just grow from 0 on instead of starting at
> the current value 4, right?

Yes, the use of INITIAL_NR_GRANT_FRAMES would move to that
first "grow" invocation.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 3/8] xen: delay allocation of grant table sub structures
       [not found]       ` <59B6722B0200007800179767@suse.com>
@ 2017-09-11  9:39         ` Juergen Gross
  2017-09-11 11:02           ` Jan Beulich
       [not found]           ` <59B6896302000078001798B7@suse.com>
  0 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-11  9:39 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 11/09/17 11:23, Jan Beulich wrote:
>>>> On 11.09.17 at 11:03, <jgross@suse.com> wrote:
>> On 08/09/17 17:28, Jan Beulich wrote:
>>>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>>> And if you special case Dom0,
>>> wouldn't it be better to (also) special case the hardware domain
>>> (in case that's not Dom0)?
>>
>> As a hardware domain not being dom0 would need to be created via the
>> appropriate domctl hypercalls I don't see why the measures for all
>> other domains wouldn't be enough for that case.
> 
> Yes, that's true especially when making the domctl mandatory to be
> used. Whether suitable default values for that case wouldn't better
> live in a single place (the hypervisor) is a point to be considered
> here, though (by default values I mean ones to be used when the
> config file doesn't specify any, not ones to be used by the domctl
> handler if the passed in values are zero or some other "use
> defaults" indicator, as you had elsewhere).

But this is exactly what happens: the hypervisor defaults are used
when nothing is specified in the domain's config file. A value of 0
for either limit (grant table frames or maptrack frames) passed in
the domctl will just select the default values.

Or did I misunderstand you here?

> 
>>>> @@ -1029,8 +1033,17 @@ int domain_unpause_by_systemcontroller(struct domain *d)
>>>>       * Creation is considered finished when the controller reference count
>>>>       * first drops to 0.
>>>>       */
>>>> -    if ( new == 0 )
>>>> +    if ( new == 0 && !d->creation_finished )
>>>> +    {
>>>> +        int ret = grant_table_init(d);
>>>> +
>>>> +        if ( ret )
>>>> +        {
>>>> +            __domain_pause_by_systemcontroller(d, NULL);
>>>> +            return ret;
>>>> +        }
>>>>          d->creation_finished = true;
>>>> +    }
>>>
>>> Adding a grant table call here looks rather arbitrary, if not hackish.
>>
>> Would it still be hackish if I'd add a generic function doing last
>> init calls just before the domain starts to run for the first time?
>> The call to grant_table_init() would be just the first such late init
>> calls for the domain.
> 
> Generalizing this would make things look better, yes, but that
> would then still not deal with the bad error reporting which
> results.

In case nobody objects I'll go with making the new domctl mandatory
then.

> 
>>> Why can't you call it from the domctl you're going to add in a later
>>> patch, requiring the tool stack to issue that domctl in all cases, just
>>> like e.g. a max_vcpus one is always necessary? That would also
>>> avoid a possibly confusing error (from the unpause, i.e. not
>>> obviously related to grant table setup failure). Of course that will
>>> require merging this patch with the other one to avoid an
>>> intermediate state in which the call wouldn't be made at all.
>>
>> This would be another possibility, yes.
>>
>> Instead of merging the patches I'd just move patches 6-8 before this one
>> to have everything in place, including the needed tools side.
> 
> Right, I had later realized too that simple re-ordering would be
> sufficient.
> 
>>>> --- a/xen/common/grant_table.c
>>>> +++ b/xen/common/grant_table.c
>>>> @@ -1655,6 +1655,78 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
>>>>      gt->nr_status_frames = 0;
>>>>  }
>>>>  
>>>> +int
>>>> +grant_table_init(struct domain *d)
>>>> +{
>>>> +    struct grant_table *gt = d->grant_table;
>>>> +    unsigned int i, j;
>>>> +
>>>> +    if ( gt->nr_grant_frames )
>>>> +        return 0;
>>>> +
>>>> +    gt->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
>>>> +
>>>> +    /* Active grant table. */
>>>> +    if ( (gt->active = xzalloc_array(struct active_grant_entry *,
>>>> +                                     max_nr_active_grant_frames)) == NULL )
>>>> +        goto no_mem_1;
>>>> +    for ( i = 0;
>>>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>>>> +    {
>>>> +        if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
>>>> +            goto no_mem_2;
>>>> +        clear_page(gt->active[i]);
>>>> +        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
>>>> +            spin_lock_init(&gt->active[i][j].lock);
>>>> +    }
>>>> +
>>>> +    /* Tracking of mapped foreign frames table */
>>>> +    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
>>>> +    if ( gt->maptrack == NULL )
>>>> +        goto no_mem_2;
>>>> +
>>>> +    /* Shared grant table. */
>>>> +    if ( (gt->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
>>>> +        goto no_mem_3;
>>>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>>>> +    {
>>>> +        if ( (gt->shared_raw[i] = alloc_xenheap_page()) == NULL )
>>>> +            goto no_mem_4;
>>>> +        clear_page(gt->shared_raw[i]);
>>>> +    }
>>>> +
>>>> +    /* Status pages for grant table - for version 2 */
>>>> +    gt->status = xzalloc_array(grant_status_t *,
>>>> +                               grant_to_status_frames(max_grant_frames));
>>>> +    if ( gt->status == NULL )
>>>> +        goto no_mem_4;
>>>> +
>>>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>>>> +        gnttab_create_shared_page(d, gt, i);
>>>> +
>>>> +    gt->nr_status_frames = 0;
>>>> +
>>>> +    return 0;
>>>> +
>>>> + no_mem_4:
>>>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>>>> +        free_xenheap_page(gt->shared_raw[i]);
>>>> +    xfree(gt->shared_raw);
>>>> +    gt->shared_raw = NULL;
>>>> + no_mem_3:
>>>> +    vfree(gt->maptrack);
>>>> +    gt->maptrack = NULL;
>>>> + no_mem_2:
>>>> +    for ( i = 0;
>>>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>>>> +        free_xenheap_page(gt->active[i]);
>>>> +    xfree(gt->active);
>>>> +    gt->active = NULL;
>>>> + no_mem_1:
>>>> +    gt->nr_grant_frames = 0;
>>>> +    return -ENOMEM;
>>>> +}
>>>
>>> The redundancy between this code and gnttab_grow_table() has
>>> always bothered me, and now would seem to be a good occasion
>>> to do away with it. Why don't you defer to gnttab_grow_table()
>>> anything that function already does (keeping the respective limits
>>> at zero in here)?
>>
>> Just to be sure I understand you correctly: you want to get rid of
>> INITIAL_NR_GRANT_FRAMES and just grow from 0 on instead of starting at
>> the current value 4, right?
> 
> Yes, the use of INITIAL_NR_GRANT_FRAMES would move to that
> first "grow" invocation.

Hmm, shouldn't we just grow one frame after the other? Is it really true
that most domains will need more than 1500 grants? A simple test domain
with one disk and one NIC seems to be okay with a little bit more than
300 grants, so 1 grant table frame would be enough for that case.


Juergen



* Re: [PATCH v5 4/8] xen: make grant resource limits per domain
       [not found]   ` <59B2D7180200007800178ED3@suse.com>
@ 2017-09-11 10:40     ` Juergen Gross
  2017-09-11 11:07       ` Jan Beulich
  0 siblings, 1 reply; 29+ messages in thread
From: Juergen Gross @ 2017-09-11 10:40 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 08/09/17 17:44, Jan Beulich wrote:
>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>> @@ -1843,6 +1838,14 @@ gnttab_setup_table(
>>      gt = d->grant_table;
>>      grant_write_lock(gt);
>>  
>> +    if ( unlikely(op.nr_frames > gt->max_grant_frames) )
>> +    {
>> +        gdprintk(XENLOG_INFO, "Domain is limited to %d grant-table frames.\n",
>> +                gt->max_grant_frames);
> 
> %u please

Okay.

> 
>> @@ -3465,6 +3471,8 @@ grant_table_create(
>>      /* Simple stuff. */
>>      percpu_rwlock_resource_init(&t->lock, grant_rwlock);
>>      spin_lock_init(&t->maptrack_lock);
>> +    t->max_grant_frames = max_grant_frames;
>> +    t->max_maptrack_frames = max_maptrack_frames;
> 
> Am I mistaken or are these the only uses of the two static variables
> now? If so (also to prove that's the case) their definitions would
> probably better be moved into this function, together with their
> integer_param() invocations. The adjustments done by
> gnttab_usage_init() could also go here afaict.

Okay.

> 
>> @@ -3755,6 +3763,12 @@ static void gnttab_usage_print(struct domain *rd)
>>  
>>      grant_read_lock(gt);
>>  
>> +    printk("grant-table for remote domain:%5d (v%d)\n"
>> +           "  %d frames (%d max), %d maptrack frames (%d max)\n",
>> +           rd->domain_id, gt->gt_version,
>> +           nr_grant_frames(gt), gt->max_grant_frames,
>> +           nr_maptrack_frames(gt), gt->max_maptrack_frames);
> 
> Various %u instances again, and Dom%d please. Also you put this
> after the table header, corrupting intended output.

The position where the domain header is printed didn't change. It is
just unconditional now and contains some more information.

> 
>> @@ -3782,12 +3796,7 @@ static void gnttab_usage_print(struct domain *rd)
>>              status = status_entry(gt, ref);
>>          }
>>  
>> -        if ( first )
>> -        {
>> -            printk("grant-table for remote domain:%5d (v%d)\n",
>> -                   rd->domain_id, gt->gt_version);
>> -            first = 0;
>> -        }
>> +        first = 0;
> 
> Is it useful to print the per-table information when there are no
> entries at all for a domain? I think it would be better to move
> what you add as well as the table header into the if() that you
> delete.

Hmm, I think the per-domain limits are valuable even without any
grant entries being present. In the end I don't expect lots of domains
without any grant entry other than dom0, as the default entries for
xenstore and console are being created by the tools already. And having
dom0 information especially for the maptrack entries will be
interesting.


Juergen


* Re: [PATCH v5 5/8] xen: double default grant frame limit for huge hosts
       [not found]   ` <59B2D8050200007800178ED6@suse.com>
@ 2017-09-11 10:41     ` Juergen Gross
  0 siblings, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-11 10:41 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 08/09/17 17:48, Jan Beulich wrote:
>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -3824,8 +3824,15 @@ static int __init gnttab_usage_init(void)
>>  {
>>      BUILD_BUG_ON(DEFAULT_MAX_MAPTRACK_FRAMES < DEFAULT_MAX_NR_GRANT_FRAMES);
>>  
>> +    /*
>> +     * In case grant v2 is required for pv domains to reference any possible
>> +     * memory page (i.e. memory is installed above 16TB boundary) double the
>> +     * grant frame limit. This will allow a guest using v2 grants without
>> +     * having to lower the number of usable grants.
>> +     */
>>      if ( !max_grant_frames )
>> -        max_grant_frames = DEFAULT_MAX_NR_GRANT_FRAMES;
>> +        max_grant_frames = ((max_page >> 32) ? 2 : 1) *
>> +                           DEFAULT_MAX_NR_GRANT_FRAMES;
> 
> Didn't we agree that HVM domains exceeding 16Tb (or even only
> placing any GFNs above the 16Tb boundary) would also need to
> make use of v2? If so, please drop the reference to PV or add
> half a sentence for HVM. Dropping may be better as this is
> common code, and ARM doesn't make that distinction.

Oh, sorry. Will drop the PV reference.


Juergen



* Re: [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits
       [not found]   ` <59B2D98F0200007800178F19@suse.com>
@ 2017-09-11 10:48     ` Juergen Gross
  2017-09-11 11:14       ` Jan Beulich
       [not found]       ` <59B68C4902000078001798F9@suse.com>
  0 siblings, 2 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-11 10:48 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 08/09/17 17:55, Jan Beulich wrote:
>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -3667,6 +3667,32 @@ void grant_table_init_vcpu(struct vcpu *v)
>>      v->maptrack_tail = MAPTRACK_TAIL;
>>  }
>>  
>> +int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
>> +                           unsigned int maptrack_frames)
>> +{
>> +    struct grant_table *gt = d->grant_table;
>> +    int ret = -EBUSY;
>> +
>> +    if ( !gt )
>> +        return -EEXIST;
> 
> How does EEXIST represent the error condition?

Yes, this was a bad choice. What about ENOENT?

> 
>> +    grant_write_lock(gt);
>> +
>> +    if ( gt->nr_grant_frames )
>> +        goto unlock;
> 
> I think you can do without goto here with no risk of lowered
> readability.

Okay.

> 
>> +    ret = 0;
>> +    if ( grant_frames )
>> +        gt->max_grant_frames = grant_frames;
>> +    if ( maptrack_frames )
>> +        gt->max_maptrack_frames = maptrack_frames;
> 
> Together with what I have said regarding making the invocation
> of this domctl mandatory, I think these two shouldn't be conditional.
> In particular for maptrack I also don't see why a domain couldn't
> do with a limit of zero, as long as it's not serving as a backend for
> another guest.

Okay, then I'd need to specify a "take hypervisor default" value (e.g.
~0) in order to handle the case where no value was specified in the
domain's config file.

The question would be then: do we want to set maptrack to 0 as default
and require it to be specified for backend domains (driver domains,
stubdoms)?

>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -1163,6 +1163,13 @@ struct xen_domctl_psr_cat_op {
>>  typedef struct xen_domctl_psr_cat_op xen_domctl_psr_cat_op_t;
>>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cat_op_t);
>>  
>> +struct xen_domctl_set_gnttab_limits {
>> +    uint32_t grant_frames;     /* IN: if 0, dont change */
>> +    uint32_t maptrack_frames;  /* IN: if 0, dont change */
>> +};
>> +typedef struct xen_domctl_set_gnttab_limits xen_domctl_set_gnttab_limits_t;
>> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_gnttab_limits_t);
> 
> In another context I had already recently requested to stop the
> bad habit of adding typedef and handle for all domctl-s. I don't
> see what they're needed for, they just clutter the name space.

I'm happy to remove it from the patch.


Juergen


* Re: [PATCH v5 3/8] xen: delay allocation of grant table sub structures
  2017-09-11  9:39         ` Juergen Gross
@ 2017-09-11 11:02           ` Jan Beulich
       [not found]           ` <59B6896302000078001798B7@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-11 11:02 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 11.09.17 at 11:39, <jgross@suse.com> wrote:
> On 11/09/17 11:23, Jan Beulich wrote:
>>>>> On 11.09.17 at 11:03, <jgross@suse.com> wrote:
>>> On 08/09/17 17:28, Jan Beulich wrote:
>>>>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>>>> And if you special case Dom0,
>>>> wouldn't it be better to (also) special case the hardware domain
>>>> (in case that's not Dom0)?
>>>
>>> As a hardware domain not being dom0 would need to be created via the
>>> appropriate domctl hypercalls I don't see why the measures for all
>>> other domains wouldn't be enough for that case.
>> 
>> Yes, that's true especially when making the domctl mandatory to be
>> used. Whether suitable default values for that case wouldn't better
>> live in a single place (the hypervisor) is a point to be considered
>> here, though (by default values I mean ones to be used when the
>> config file doesn't specify any, not ones to be used by the domctl
>> handler if the passed in values are zero or some other "use
>> defaults" indicator, as you had elsewhere).
> 
> But this is exactly what happens: the hypervisor defaults are used
> when nothing is specified in the domain's config file. A value of 0
> for either limit (grant table frames or maptrack frames) passed in
> the domctl will just select the default values.
> 
> Or did I misunderstand you here?

Hypervisor defaults are in general meaningless when the domctl
becomes mandatory (as indicated elsewhere, at least for the
maptrack table size I view zero as a valid option to use). The only
time hypervisor defaults would continue to be needed would be
for Dom0 and, maybe (for consistency as explained) for the
hardware and/or control domains.

>>> Just to be sure I understand you correctly: you want to get rid of
>>> INITIAL_NR_GRANT_FRAMES and just grow from 0 on instead of starting at
>>> the current value 4, right?
>> 
>> Yes, the use of INITIAL_NR_GRANT_FRAMES would move to that
>> first "grow" invocation.
> 
> Hmm, shouldn't we just grow one frame after the other? Is it really true
> that most domains will need more than 1500 grants? A simple test domain
> with one disk and one NIC seems to be okay with a little bit more than
> 300 grants, so 1 grant table frame would be enough for that case.

Yes and no. Yes because indeed many domains will not need more.
No, however, because of the risk of memory shortage: By giving all
domains a certain minimum, they can prepare themselves for how
much (or really how little) they can do without depending on there
being memory available to grow the tables later on. IOW I don't
generally object to the limit being reduced, but not as a side effect
of the re-work. In case of problems this needs to be easy to revert
(or adjust). I.e. the change can be in the same series, but ought to
be a separate patch.

Jan



* Re: [PATCH v5 4/8] xen: make grant resource limits per domain
  2017-09-11 10:40     ` Juergen Gross
@ 2017-09-11 11:07       ` Jan Beulich
  0 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-11 11:07 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 11.09.17 at 12:40, <jgross@suse.com> wrote:
> On 08/09/17 17:44, Jan Beulich wrote:
>>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>>> @@ -3755,6 +3763,12 @@ static void gnttab_usage_print(struct domain *rd)
>>>  
>>>      grant_read_lock(gt);
>>>  
>>> +    printk("grant-table for remote domain:%5d (v%d)\n"
>>> +           "  %d frames (%d max), %d maptrack frames (%d max)\n",
>>> +           rd->domain_id, gt->gt_version,
>>> +           nr_grant_frames(gt), gt->max_grant_frames,
>>> +           nr_maptrack_frames(gt), gt->max_maptrack_frames);
>> 
>> Various %u instances again, and Dom%d please. Also you put this
>> after the table header, corrupting intended output.
> 
> The position where the domain header is printed didn't change. It is
> just unconditional now and contains some more information.

Oh, indeed, it's sort of malformed even before your change.

>>> @@ -3782,12 +3796,7 @@ static void gnttab_usage_print(struct domain *rd)
>>>              status = status_entry(gt, ref);
>>>          }
>>>  
>>> -        if ( first )
>>> -        {
>>> -            printk("grant-table for remote domain:%5d (v%d)\n",
>>> -                   rd->domain_id, gt->gt_version);
>>> -            first = 0;
>>> -        }
>>> +        first = 0;
>> 
>> Is it useful to print the per-table information when there are no
>> entries at all for a domain? I think it would be better to move
>> what you add as well as the table header into the if() that you
>> delete.
> 
> Hmm, I think the per-domain limits are valuable even without any
> grant entries being present. In the end I don't expect lots of domains
> without any grant entry other than dom0, as the default entries for
> xenstore and console are being created by the tools already. And having
> dom0 information especially for the maptrack entries will be
> interesting.

Okay, valid point.

Jan



* Re: [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits
  2017-09-11 10:48     ` Juergen Gross
@ 2017-09-11 11:14       ` Jan Beulich
       [not found]       ` <59B68C4902000078001798F9@suse.com>
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2017-09-11 11:14 UTC (permalink / raw)
  To: Juergen Gross
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

>>> On 11.09.17 at 12:48, <jgross@suse.com> wrote:
> On 08/09/17 17:55, Jan Beulich wrote:
>>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -3667,6 +3667,32 @@ void grant_table_init_vcpu(struct vcpu *v)
>>>      v->maptrack_tail = MAPTRACK_TAIL;
>>>  }
>>>  
>>> +int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
>>> +                           unsigned int maptrack_frames)
>>> +{
>>> +    struct grant_table *gt = d->grant_table;
>>> +    int ret = -EBUSY;
>>> +
>>> +    if ( !gt )
>>> +        return -EEXIST;
>> 
>> How does EEXIST represent the error condition?
> 
> Yes, this was a bad choice. What about ENOENT?

Fine with me. Or ENODEV.

>>> +    ret = 0;
>>> +    if ( grant_frames )
>>> +        gt->max_grant_frames = grant_frames;
>>> +    if ( maptrack_frames )
>>> +        gt->max_maptrack_frames = maptrack_frames;
>> 
>> Together with what I have said regarding making the invocation
>> of this domctl mandatory, I think these two shouldn't be conditional.
>> In particular for maptrack I also don't see why a domain couldn't
>> do with a limit of zero, as long as it's not serving as a backend for
>> another guest.
> 
> Okay, then I'd need to specify a "take hypervisor default" value (e.g.
> ~0) in order to handle the case where no value was specified in the
> domain's config file.

Well, part of the point I was trying to make in earlier replies on
other patches of this series is that I think the lack of a guest
config file setting should not invoke a _hypervisor_ default.
Instead, the tool stack should establish a sensible one.

> The question would be then: do we want to set maptrack to 0 as default
> and require it to be specified for backend domains (driver domains,
> stubdoms)?

I think so, yes. Question is whether there's a way for the tool stack
to easily recognize a driver domain when it's being created.

Jan



* Re: [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits
       [not found]       ` <59B68C4902000078001798F9@suse.com>
@ 2017-09-11 11:31         ` Juergen Gross
  0 siblings, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-11 11:31 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 11/09/17 13:14, Jan Beulich wrote:
>>>> On 11.09.17 at 12:48, <jgross@suse.com> wrote:
>> On 08/09/17 17:55, Jan Beulich wrote:
>>>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>>>> --- a/xen/common/grant_table.c
>>>> +++ b/xen/common/grant_table.c
>>>> @@ -3667,6 +3667,32 @@ void grant_table_init_vcpu(struct vcpu *v)
>>>>      v->maptrack_tail = MAPTRACK_TAIL;
>>>>  }
>>>>  
>>>> +int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
>>>> +                           unsigned int maptrack_frames)
>>>> +{
>>>> +    struct grant_table *gt = d->grant_table;
>>>> +    int ret = -EBUSY;
>>>> +
>>>> +    if ( !gt )
>>>> +        return -EEXIST;
>>>
>>> How does EEXIST represent the error condition?
>>
>> Yes, this was a bad choice. What about ENOENT?
> 
> Fine with me. Or ENODEV.
> 
>>>> +    ret = 0;
>>>> +    if ( grant_frames )
>>>> +        gt->max_grant_frames = grant_frames;
>>>> +    if ( maptrack_frames )
>>>> +        gt->max_maptrack_frames = maptrack_frames;
>>>
>>> Together with what I have said regarding making the invocation
>>> of this domctl mandatory, I think these two shouldn't be conditional.
>>> In particular for maptrack I also don't see why a domain couldn't
>>> do with a limit of zero, as long as it's not serving as a backend for
>>> another guest.
>>
>> Okay, then I'd need to specify a "take hypervisor default" value (e.g.
>> ~0) in order to handle the case where no value was specified in the
>> domain's config file.
> 
> Well, part of the point I was trying to make in earlier replies on
> other patches of this series is that I think the lack of a guest
> config file setting should not invoke a _hypervisor_ default.
> Instead, the tool stack should establish a sensible one.

Okay, I can add this to the series.

> 
>> The question would be then: do we want to set maptrack to 0 as default
>> and require it to be specified for backend domains (driver domains,
>> stubdoms)?
> 
> I think so, yes. Question is whether there's a way for the tool stack
> to easily recognize a driver domain when it's being created.

I could think of various mechanisms for driver domains:

1. Add an explicit config item (I guess this could be utilized for other
   cases like XSM, too).

2. Do some guess work, e.g. any domain with PCI-passthrough configured
   could be regarded as a potential driver domain.

3. Just require the max_maptrack_frames= config item.

Starting with 3. is the easiest solution for now; it can be switched to
1. or 2. later.


Juergen


* Re: [PATCH v5 3/8] xen: delay allocation of grant table sub structures
       [not found]           ` <59B6896302000078001798B7@suse.com>
@ 2017-09-11 11:36             ` Juergen Gross
  0 siblings, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2017-09-11 11:36 UTC (permalink / raw)
  To: Jan Beulich
  Cc: tim, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, dgdegra

On 11/09/17 13:02, Jan Beulich wrote:
>>>> On 11.09.17 at 11:39, <jgross@suse.com> wrote:
>> On 11/09/17 11:23, Jan Beulich wrote:
>>>>>> On 11.09.17 at 11:03, <jgross@suse.com> wrote:
>>>> On 08/09/17 17:28, Jan Beulich wrote:
>>>>>>>> On 08.09.17 at 08:56, <jgross@suse.com> wrote:
>>>>> And if you special case Dom0,
>>>>> wouldn't it be better to (also) special case the hardware domain
>>>>> (in case that's not Dom0)?
>>>>
>>>> As a hardware domain not being dom0 would need to be created via the
>>>> appropriate domctl hypercalls I don't see why the measures for all
>>>> other domains wouldn't be enough for that case.
>>>
>>> Yes, that's true especially when making the domctl mandatory to be
>>> used. Whether suitable default values for that case wouldn't better
>>> live in a single place (the hypervisor) is a point to be considered
>>> here, though (by default values I mean ones to be used when the
>>> config file doesn't specify any, not ones to be used by the domctl
>>> handler if the passed in values are zero or some other "use
>>> defaults" indicator, as you had elsewhere).
>>
>> But this is exactly what happens: the hypervisor defaults are being
>> used in case nothing is specified in the domain's config file: a value
>> of 0 for a value (grant table frame limit or maptrack frame limit)
>> specified in the domctl will just use the default values.
>>
>> Or did I misunderstand you here?
> 
> Hypervisor defaults are in general meaningless when the domctl
> becomes mandatory (as indicated elsewhere, at least for the
> maptrack table size I view zero as a valid to use option). The only
> time hypervisor defaults would continue to be needed would be
> for Dom0 and, maybe (for consistency as explained) for the
> hardware and/or control domains.
> 
>>>> Just to be sure I understand you correctly: you want to get rid of
>>>> INITIAL_NR_GRANT_FRAMES and just grow from 0 on instead of starting at
>>>> the current value 4, right?
>>>
>>> Yes, the use of INITIAL_NR_GRANT_FRAMES would move to that
>>> first "grow" invocation.
>>
>> Hmm, shouldn't we just grow one frame after the other? Is it really true
>> that most domains will need more than 1500 grants? A simple test domain
>> with one disk and one NIC seems to be okay with a little bit more than
>> 300 grants, so 1 grant table frame would be enough for that case.
> 
> Yes and no. Yes because indeed many domains will not need more.
> No, however, because of the risk of memory shortage: By giving all
> domains a certain minimum, they can prepare themselves for how
> much (or really how little) they can do without depending on there
> being memory available to grow the tables later on. IOW I don't
> generally object to the limit being reduced, but not as a side effect
> of the re-work. In case of problems this needs to be easy to revert
> (or adjust). I.e. the change can be in the same series, but ought to
> be a separate patch.

Okay, I'll add another patch for that purpose.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c
  2017-09-08  6:56 ` [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c Juergen Gross
  2017-09-08 14:49   ` Jan Beulich
       [not found]   ` <59B2C9FF0200007800178E24@suse.com>
@ 2017-09-11 11:41   ` George Dunlap
  2 siblings, 0 replies; 29+ messages in thread
From: George Dunlap @ 2017-09-11 11:41 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, tim, julien.grall, jbeulich, dgdegra

On 09/08/2017 07:56 AM, Juergen Gross wrote:
> The x86 and arm versions of XENMAPSPACE_grant_table handling are nearly
> identical. Move the code into a function in grant_table.c and add an
> architecture-dependent hook to handle the differences.
> 
> Switch to mfn_t in order to be more type safe.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Wei Liu <wei.liu2@citrix.com>

FWIW I agree with Jan's comment about ignoring the error return value.

Other than that it looks good to me.

 -George


Thread overview: 29+ messages
2017-09-08  6:56 [PATCH v5 0/8] xen: better grant v2 support Juergen Gross
2017-09-08  6:56 ` [PATCH v5 1/8] xen: move XENMAPSPACE_grant_table code into grant_table.c Juergen Gross
2017-09-08 14:49   ` Jan Beulich
     [not found]   ` <59B2C9FF0200007800178E24@suse.com>
2017-09-08 15:14     ` Juergen Gross
2017-09-11 11:41   ` George Dunlap
2017-09-08  6:56 ` [PATCH v5 2/8] xen: clean up grant_table.h Juergen Gross
2017-09-08 15:00   ` Jan Beulich
     [not found]   ` <59B2CC990200007800178E3C@suse.com>
2017-09-11  8:22     ` Juergen Gross
2017-09-08  6:56 ` [PATCH v5 3/8] xen: delay allocation of grant table sub structures Juergen Gross
2017-09-08 15:28   ` Jan Beulich
     [not found]   ` <59B2D34F0200007800178EBC@suse.com>
2017-09-11  9:03     ` Juergen Gross
2017-09-11  9:23       ` Jan Beulich
     [not found]       ` <59B6722B0200007800179767@suse.com>
2017-09-11  9:39         ` Juergen Gross
2017-09-11 11:02           ` Jan Beulich
     [not found]           ` <59B6896302000078001798B7@suse.com>
2017-09-11 11:36             ` Juergen Gross
2017-09-08  6:56 ` [PATCH v5 4/8] xen: make grant resource limits per domain Juergen Gross
2017-09-08 15:44   ` Jan Beulich
     [not found]   ` <59B2D7180200007800178ED3@suse.com>
2017-09-11 10:40     ` Juergen Gross
2017-09-11 11:07       ` Jan Beulich
2017-09-08  6:56 ` [PATCH v5 5/8] xen: double default grant frame limit for huge hosts Juergen Gross
2017-09-08 15:48   ` Jan Beulich
     [not found]   ` <59B2D8050200007800178ED6@suse.com>
2017-09-11 10:41     ` Juergen Gross
2017-09-08  6:56 ` [PATCH v5 6/8] xen: add new domctl hypercall to set grant table resource limits Juergen Gross
2017-09-08 15:55   ` Jan Beulich
     [not found]   ` <59B2D98F0200007800178F19@suse.com>
2017-09-11 10:48     ` Juergen Gross
2017-09-11 11:14       ` Jan Beulich
     [not found]       ` <59B68C4902000078001798F9@suse.com>
2017-09-11 11:31         ` Juergen Gross
2017-09-08  6:56 ` [PATCH v5 7/8] libxc: add libxc support for setting " Juergen Gross
2017-09-08  6:56 ` [PATCH v5 8/8] libxl: add libxl " Juergen Gross
