* [PATCH For-4.11 00/20] Improvements to domain creation
@ 2018-03-19 19:13 Andrew Cooper
  2018-03-19 19:13 ` [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state Andrew Cooper
                   ` (19 more replies)
  0 siblings, 20 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Juergen Gross, Marek Marczykowski-Górecki, Rob Hoes,
	Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Jon Ludlam, Tim Deegan, Julien Grall,
	Christian Lindig, Jan Beulich, David Scott, Daniel De Graaf

The purpose of this patch series is the final patch; see its commit
message for details.

Patches 1 and 2 have been posted before, but are included here as they are
required for the eventual functionality.

This series can be found in git form here:
  http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/xen-create-v1

It is semi-RFC because it has only had light testing so far.  It has been
build-tested for ARM, but not functionally tested.

Andrew Cooper (20):
  tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state
  tools/libxl: Don't prepare or save xc_config when soft resetting a domain
  xen/public: Rename xen_domctl_createdomain.config to arch
  xen/domctl: Drop vcpu_alloc_lock
  arm/boot: Mark construct_dom0() as __init
  tools/ocaml: Drop domain_create_flag_table[]
  tools/ocaml: Drop int_array_of_uuid_string()
  tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create()
  tools: Rework xc_domain_create() to take a full xen_domctl_createdomain
  xen/domctl: Merge set_max_evtchn into createdomain
  xen/domctl: Merge set_gnttab_limits into createdomain
  xen/domctl: Merge max_vcpus into createdomain
  xen/evtchn: Pass max_evtchn_port into evtchn_init()
  xen/gnttab: Remove replace_grant_supported()
  xen/gnttab: Export opt_max_{grant,maptrack}_frames
  xen/gnttab: Pass max_{grant,maptrack}_frames into grant_table_create()
  xen/gnttab: Fold grant_table_{create,set_limits}() into grant_table_init()
  xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  xen/domain: Call arch_domain_create() as early as possible in domain_create()
  xen/domain: Allocate d->vcpu[] in arch_domain_create()

 tools/flask/policy/modules/dom0.te   |   6 +-
 tools/flask/policy/modules/xen.if    |   6 +-
 tools/helpers/init-xenstore-domain.c |  43 +++++-------
 tools/libxc/include/xenctrl.h        |  43 +-----------
 tools/libxc/xc_domain.c              |  64 ++----------------
 tools/libxl/libxl_arch.h             |   4 +-
 tools/libxl/libxl_arm.c              |  16 ++---
 tools/libxl/libxl_create.c           |  61 +++++++++--------
 tools/libxl/libxl_dm.c               |   3 +-
 tools/libxl/libxl_dom.c              |  18 -----
 tools/libxl/libxl_internal.h         |   5 +-
 tools/libxl/libxl_x86.c              |  10 +--
 tools/ocaml/libs/xc/xenctrl.ml       |  40 +++++------
 tools/ocaml/libs/xc/xenctrl.mli      |  22 ++++--
 tools/ocaml/libs/xc/xenctrl_stubs.c  | 106 +++++++++++++++--------------
 tools/python/xen/lowlevel/xc/xc.c    |  64 +++++++++---------
 xen/arch/arm/domain.c                |  23 +++++--
 xen/arch/arm/domain_build.c          |  15 +++--
 xen/arch/arm/setup.c                 |  21 +++++-
 xen/arch/arm/vgic.c                  |  14 ----
 xen/arch/x86/dom0_build.c            |   7 --
 xen/arch/x86/domain.c                |  13 +++-
 xen/arch/x86/platform_hypercall.c    |  15 -----
 xen/arch/x86/setup.c                 |   7 +-
 xen/common/domain.c                  |  39 +++++------
 xen/common/domctl.c                  | 125 +++++++++--------------------------
 xen/common/event_channel.c           |   8 +--
 xen/common/grant_table.c             | 100 +++++++---------------------
 xen/include/asm-arm/domain.h         |   6 --
 xen/include/asm-arm/grant_table.h    |  16 -----
 xen/include/asm-arm/vgic.h           |   2 -
 xen/include/asm-x86/domain.h         |   2 -
 xen/include/asm-x86/grant_table.h    |  10 ---
 xen/include/asm-x86/setup.h          |   2 -
 xen/include/public/domctl.h          |  41 ++++--------
 xen/include/xen/domain.h             |   4 +-
 xen/include/xen/grant_table.h        |  11 ++-
 xen/include/xen/sched.h              |   2 +-
 xen/xsm/flask/hooks.c                |   9 ---
 xen/xsm/flask/policy/access_vectors  |   6 --
 40 files changed, 366 insertions(+), 643 deletions(-)

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-21 16:09   ` Roger Pau Monné
  2018-03-21 17:17   ` Wei Liu
  2018-03-19 19:13 ` [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain Andrew Cooper
                   ` (18 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Ian Jackson, Wei Liu

The data it stores is initialised and exclusively used within
libxl__domain_make(), with the important details written back elsewhere by
libxl__arch_domain_save_config().  Prepare xc_config on libxl__domain_make()'s
stack, and drop the parameter.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
---
 tools/libxl/libxl_create.c   | 12 ++++++------
 tools/libxl/libxl_dm.c       |  3 +--
 tools/libxl/libxl_internal.h |  5 +----
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index c4981352..f92c383 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -538,7 +538,7 @@ int libxl__domain_build(libxl__gc *gc,
 }
 
 int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
-                       uint32_t *domid, xc_domain_configuration_t *xc_config)
+                       uint32_t *domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int flags, ret, rc, nb_vm;
@@ -551,6 +551,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     xs_transaction_t t = 0;
     xen_domain_handle_t handle;
     libxl_vminfo *vm_list;
+    xc_domain_configuration_t xc_config = {};
 
     /* convenience aliases */
     libxl_domain_create_info *info = &d_config->c_info;
@@ -571,7 +572,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
     libxl_uuid_copy(ctx, (libxl_uuid *)handle, &info->uuid);
 
-    ret = libxl__arch_domain_prepare_config(gc, d_config, xc_config);
+    ret = libxl__arch_domain_prepare_config(gc, d_config, &xc_config);
     if (ret < 0) {
         LOGED(ERROR, *domid, "fail to get domain config");
         rc = ERROR_FAIL;
@@ -581,7 +582,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     /* Valid domid here means we're soft resetting. */
     if (!libxl_domid_valid_guest(*domid)) {
         ret = xc_domain_create(ctx->xch, info->ssidref, handle, flags, domid,
-                               xc_config);
+                               &xc_config);
         if (ret < 0) {
             LOGED(ERROR, *domid, "domain creation fail");
             rc = ERROR_FAIL;
@@ -589,7 +590,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         }
     }
 
-    rc = libxl__arch_domain_save_config(gc, d_config, xc_config);
+    rc = libxl__arch_domain_save_config(gc, d_config, &xc_config);
     if (rc < 0)
         goto out;
 
@@ -822,7 +823,6 @@ static void initiate_domain_create(libxl__egc *egc,
 
     /* convenience aliases */
     libxl_domain_config *const d_config = dcs->guest_config;
-    libxl__domain_build_state *const state = &dcs->build_state;
     const int restore_fd = dcs->restore_fd;
 
     domid = dcs->domid_soft_reset;
@@ -957,7 +957,7 @@ static void initiate_domain_create(libxl__egc *egc,
         goto error_out;
     }
 
-    ret = libxl__domain_make(gc, d_config, &domid, &state->config);
+    ret = libxl__domain_make(gc, d_config, &domid);
     if (ret) {
         LOGD(ERROR, domid, "cannot make domain: %d", ret);
         dcs->guest_domid = domid;
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index a3cddce..49678bd 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1953,8 +1953,7 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
     stubdom_state->pv_ramdisk.path = "";
 
     /* fixme: this function can leak the stubdom if it fails */
-    ret = libxl__domain_make(gc, dm_config, &sdss->pvqemu.guest_domid,
-                             &stubdom_state->config);
+    ret = libxl__domain_make(gc, dm_config, &sdss->pvqemu.guest_domid);
     if (ret)
         goto out;
     uint32_t dm_domid = sdss->pvqemu.guest_domid;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 8dd6331..663fcb2 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1145,8 +1145,6 @@ typedef struct {
     xen_vmemrange_t *vmemranges;
     uint32_t num_vmemranges;
 
-    xc_domain_configuration_t config;
-
     xen_pfn_t vuart_gfn;
     evtchn_port_t vuart_port;
 } libxl__domain_build_state;
@@ -1657,8 +1655,7 @@ _hidden  void libxl__exec(libxl__gc *gc, int stdinfd, int stdoutfd,
   * on exit (even error exit), domid may be valid and refer to a domain */
 _hidden int libxl__domain_make(libxl__gc *gc,
                                libxl_domain_config *d_config,
-                               uint32_t *domid,
-                               xc_domain_configuration_t *xc_config);
+                               uint32_t *domid);
 
 _hidden int libxl__domain_build(libxl__gc *gc,
                                 libxl_domain_config *d_config,
-- 
2.1.4



* [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
  2018-03-19 19:13 ` [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-21 16:18   ` Roger Pau Monné
  2018-03-21 17:27   ` Wei Liu
  2018-03-19 19:13 ` [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch Andrew Cooper
                   ` (17 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Ian Jackson, Wei Liu

xc_config is only used by xc_domain_create(), but by calling
libxl__arch_domain_{prepare,save}_config() we clobber the real settings with
the default settings.

Move all data and calls relating to xc_domain_create() into the path which
calls it.

As far as I can tell, soft_reset has always been broken for ARM domains using
LIBXL_GIC_VERSION_DEFAULT, which elicits a hard error out of
libxl__arch_domain_save_config().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>

This patch is far more easily reviewed with `git diff --ignore-all-space`.
---
 tools/libxl/libxl_create.c | 47 +++++++++++++++++++++++-----------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index f92c383..60a5542 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -541,7 +541,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
                        uint32_t *domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    int flags, ret, rc, nb_vm;
+    int ret, rc, nb_vm;
     const char *dom_type;
     char *uuid_string;
     char *dom_path, *vm_path, *libxl_path;
@@ -549,9 +549,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     struct xs_permissions rwperm[1];
     struct xs_permissions noperm[1];
     xs_transaction_t t = 0;
-    xen_domain_handle_t handle;
     libxl_vminfo *vm_list;
-    xc_domain_configuration_t xc_config = {};
 
     /* convenience aliases */
     libxl_domain_create_info *info = &d_config->c_info;
@@ -562,25 +560,28 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         goto out;
     }
 
-    flags = 0;
-    if (info->type != LIBXL_DOMAIN_TYPE_PV) {
-        flags |= XEN_DOMCTL_CDF_hvm_guest;
-        flags |= libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
-        flags |= libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
-    }
+    /* Valid domid here means we're soft resetting. */
+    if (!libxl_domid_valid_guest(*domid)) {
+        int flags = 0;
+        xen_domain_handle_t handle;
+        xc_domain_configuration_t xc_config = {};
+
+        if (info->type != LIBXL_DOMAIN_TYPE_PV) {
+            flags |= XEN_DOMCTL_CDF_hvm_guest;
+            flags |= libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
+            flags |= libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
+        }
 
-    /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
-    libxl_uuid_copy(ctx, (libxl_uuid *)handle, &info->uuid);
+        /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
+        libxl_uuid_copy(ctx, (libxl_uuid *)handle, &info->uuid);
 
-    ret = libxl__arch_domain_prepare_config(gc, d_config, &xc_config);
-    if (ret < 0) {
-        LOGED(ERROR, *domid, "fail to get domain config");
-        rc = ERROR_FAIL;
-        goto out;
-    }
+        ret = libxl__arch_domain_prepare_config(gc, d_config, &xc_config);
+        if (ret < 0) {
+            LOGED(ERROR, *domid, "fail to get domain config");
+            rc = ERROR_FAIL;
+            goto out;
+        }
 
-    /* Valid domid here means we're soft resetting. */
-    if (!libxl_domid_valid_guest(*domid)) {
         ret = xc_domain_create(ctx->xch, info->ssidref, handle, flags, domid,
                                &xc_config);
         if (ret < 0) {
@@ -588,11 +589,11 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             rc = ERROR_FAIL;
             goto out;
         }
-    }
 
-    rc = libxl__arch_domain_save_config(gc, d_config, &xc_config);
-    if (rc < 0)
-        goto out;
+        rc = libxl__arch_domain_save_config(gc, d_config, &xc_config);
+        if (rc < 0)
+            goto out;
+    }
 
     ret = xc_cpupool_movedomain(ctx->xch, info->poolid, *domid);
     if (ret < 0) {
-- 
2.1.4



* [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
  2018-03-19 19:13 ` [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state Andrew Cooper
  2018-03-19 19:13 ` [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20 16:53   ` Jan Beulich
                     ` (2 more replies)
  2018-03-19 19:13 ` [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock Andrew Cooper
                   ` (16 subsequent siblings)
  19 siblings, 3 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	Julien Grall, Jan Beulich

This is a tools-only hypercall, so it is fine to change.  Altering the name
avoids having confusing code such as config->config all over the hypervisor
and toolstack.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
---
 tools/libxc/xc_domain.c     |  4 ++--
 xen/arch/arm/domain.c       | 10 +++++-----
 xen/arch/arm/setup.c        |  4 ++--
 xen/arch/x86/domain.c       |  2 +-
 xen/arch/x86/setup.c        |  2 +-
 xen/include/public/domctl.h |  2 +-
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index ea3df1e..f122ea6 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -57,12 +57,12 @@ int xc_domain_create(xc_interface *xch, uint32_t ssidref,
     domctl.u.createdomain.flags   = flags;
     memcpy(domctl.u.createdomain.handle, handle, sizeof(xen_domain_handle_t));
     /* xc_domain_configure_t is an alias of arch_domainconfig_t */
-    memcpy(&domctl.u.createdomain.config, config, sizeof(*config));
+    memcpy(&domctl.u.createdomain.arch, config, sizeof(*config));
     if ( (err = do_domctl(xch, &domctl)) != 0 )
         return err;
 
     *pdomid = (uint16_t)domctl.domain;
-    memcpy(config, &domctl.u.createdomain.config, sizeof(*config));
+    memcpy(config, &domctl.u.createdomain.arch, sizeof(*config));
 
     return 0;
 }
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 7193531..0931ce6 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -601,18 +601,18 @@ int arch_domain_create(struct domain *d,
     clear_page(d->shared_info);
     share_xen_page_with_guest(virt_to_page(d->shared_info), d, SHARE_rw);
 
-    switch ( config->config.gic_version )
+    switch ( config->arch.gic_version )
     {
     case XEN_DOMCTL_CONFIG_GIC_NATIVE:
         switch ( gic_hw_version () )
         {
         case GIC_V2:
-            config->config.gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
+            config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
             d->arch.vgic.version = GIC_V2;
             break;
 
         case GIC_V3:
-            config->config.gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
+            config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
             d->arch.vgic.version = GIC_V3;
             break;
 
@@ -640,10 +640,10 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
         goto fail;
 
-    if ( (rc = domain_vgic_init(d, config->config.nr_spis)) != 0 )
+    if ( (rc = domain_vgic_init(d, config->arch.nr_spis)) != 0 )
         goto fail;
 
-    if ( (rc = domain_vtimer_init(d, &config->config)) != 0 )
+    if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
         goto fail;
 
     update_domain_wallclock_time(d);
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index b17797d..e6f8e23 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -840,8 +840,8 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     /* Create initial domain 0. */
     /* The vGIC for DOM0 is exactly emulating the hardware GIC */
-    dom0_cfg.config.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
-    dom0_cfg.config.nr_spis = gic_number_lines() - 32;
+    dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
+    dom0_cfg.arch.nr_spis = gic_number_lines() - 32;
 
     dom0 = domain_create(0, &dom0_cfg);
     if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 4cac890..c4c34b4 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -480,7 +480,7 @@ int arch_domain_create(struct domain *d,
 
     d->arch.s3_integrity = config->flags & XEN_DOMCTL_CDF_s3_integrity;
 
-    emflags = config->config.emulation_flags;
+    emflags = config->arch.emulation_flags;
 
     if ( is_hardware_domain(d) && is_pv_domain(d) )
         emflags |= XEN_X86_EMU_PIT;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 3f6ecf4..fa77bae 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1638,7 +1638,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                            ((hvm_funcs.hap_supported && !opt_dom0_shadow) ?
                             XEN_DOMCTL_CDF_hap : 0));
 
-        dom0_cfg.config.emulation_flags |=
+        dom0_cfg.arch.emulation_flags |=
             XEN_X86_EMU_LAPIC | XEN_X86_EMU_IOAPIC;
     }
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index ec7a860..0535da8 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -65,7 +65,7 @@ struct xen_domctl_createdomain {
 #define _XEN_DOMCTL_CDF_xs_domain     4
 #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)
     uint32_t flags;
-    struct xen_arch_domainconfig config;
+    struct xen_arch_domainconfig arch;
 };
 
 /* XEN_DOMCTL_getdomaininfo */
-- 
2.1.4



* [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20 16:58   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 05/20] arm/boot: Mark construct_dom0() as __init Andrew Cooper
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich

It is not entirely clear why this interlock was introduced in c/s 8cbb5278e
"x86/AMD: Add support for AMD's OSVW feature in guests".

At the time, svm_handle_osvw() could have seen an unexpected change in OSVW
(not the case now, due to the new CPUID Policy infrastructure), but even then,
it would have caused spurious changes in behaviour when handling
OSVW_{ID_LENGTH,STATUS} read requests on behalf of an already-running guest.

There are plenty of other aspects of domain creation which depend on hardware
details that may change across a microcode load, but which are not protected
by this interlock.

A host administrator choosing to perform late microcode loading has plenty of
other problems to worry about, and it is not unreasonable to expect them to
temporarily cease domain construction activities while the microcode load
is in progress.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/platform_hypercall.c | 15 ---------------
 xen/common/domctl.c               | 18 ------------------
 xen/include/xen/domain.h          |  1 -
 3 files changed, 34 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index ea18c32..b19f6ec 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -280,24 +280,9 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 
         guest_from_compat_handle(data, op->u.microcode.data);
 
-        /*
-         * alloc_vcpu() will access data which is modified during
-         * microcode update
-         */
-        while ( !spin_trylock(&vcpu_alloc_lock) )
-        {
-            if ( hypercall_preempt_check() )
-            {
-                ret = hypercall_create_continuation(
-                    __HYPERVISOR_platform_op, "h", u_xenpf_op);
-                goto out;
-            }
-        }
-
         ret = microcode_update(
                 guest_handle_to_param(data, const_void),
                 op->u.microcode.length);
-        spin_unlock(&vcpu_alloc_lock);
     }
     break;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 9b7bc08..4d275ff 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -34,7 +34,6 @@
 #include <xsm/xsm.h>
 
 static DEFINE_SPINLOCK(domctl_lock);
-DEFINE_SPINLOCK(vcpu_alloc_lock);
 
 static int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
                                    const unsigned long *bitmap,
@@ -567,20 +566,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         /* Needed, for example, to ensure writable p.t. state is synced. */
         domain_pause(d);
 
-        /*
-         * Certain operations (e.g. CPU microcode updates) modify data which is
-         * used during VCPU allocation/initialization
-         */
-        while ( !spin_trylock(&vcpu_alloc_lock) )
-        {
-            if ( hypercall_preempt_check() )
-            {
-                ret =  hypercall_create_continuation(
-                    __HYPERVISOR_domctl, "h", u_domctl);
-                goto maxvcpu_out_novcpulock;
-            }
-        }
-
         /* We cannot reduce maximum VCPUs. */
         ret = -EINVAL;
         if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
@@ -630,9 +615,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = 0;
 
     maxvcpu_out:
-        spin_unlock(&vcpu_alloc_lock);
-
-    maxvcpu_out_novcpulock:
         domain_unpause(d);
         break;
     }
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 177cb35..1929fa0 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -82,7 +82,6 @@ void arch_dump_domain_info(struct domain *d);
 
 int arch_vcpu_reset(struct vcpu *);
 
-extern spinlock_t vcpu_alloc_lock;
 bool_t domctl_lock_acquire(void);
 void domctl_lock_release(void);
 
-- 
2.1.4



* [PATCH 05/20] arm/boot: Mark construct_dom0() as __init
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20  3:40   ` Julien Grall
  2018-03-19 19:13 ` [PATCH 06/20] tools/ocaml: Drop domain_create_flag_table[] Andrew Cooper
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini

Its sole caller, start_xen(), is __init.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain_build.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 28ee876..9ef9030 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2094,7 +2094,7 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
-int construct_dom0(struct domain *d)
+int __init construct_dom0(struct domain *d)
 {
     struct kernel_info kinfo = {};
     struct vcpu *saved_current;
-- 
2.1.4



* [PATCH 06/20] tools/ocaml: Drop domain_create_flag_table[]
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (4 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 05/20] arm/boot: Mark construct_dom0() as __init Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20 19:42   ` Christian Lindig
  2018-03-19 19:13 ` [PATCH 07/20] tools/ocaml: Drop int_array_of_uuid_string() Andrew Cooper
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Rob Hoes, Andrew Cooper, Jon Ludlam, Christian Lindig, David Scott

This is a logarithm in disguise.  Update the logic to match how
x86_arch_emulation_flags works in c/s 9d683b5e37 and b38d96f596.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f97070c..45d4e20 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -97,11 +97,6 @@ CAMLprim value stub_xc_interface_close(value xch)
 	CAMLreturn(Val_unit);
 }
 
-static int domain_create_flag_table[] = {
-	XEN_DOMCTL_CDF_hvm_guest,
-	XEN_DOMCTL_CDF_hap,
-};
-
 CAMLprim value stub_xc_domain_create(value xch, value ssidref,
                                      value flags, value handle,
                                      value domconfig)
@@ -124,10 +119,8 @@ CAMLprim value stub_xc_domain_create(value xch, value ssidref,
 		h[i] = Int_val(Field(handle, i)) & 0xff;
 	}
 
-	for (l = flags; l != Val_none; l = Field(l, 1)) {
-		int v = Int_val(Field(l, 0));
-		c_flags |= domain_create_flag_table[v];
-	}
+	for (l = flags; l != Val_none; l = Field(l, 1))
+		c_flags |= 1u << Int_val(Field(l, 0));
 
 	switch(Tag_val(domconfig)) {
 	case 0: /* ARM - nothing to do */
-- 
2.1.4



* [PATCH 07/20] tools/ocaml: Drop int_array_of_uuid_string()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (5 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 06/20] tools/ocaml: Drop domain_create_flag_table[] Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-19 19:13 ` [PATCH 08/20] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Rob Hoes, Andrew Cooper, Jon Ludlam, Christian Lindig, David Scott

This function is entirely internal to the xenctrl stubs, and serves only to
convert the uuid string to an integer array (making 16 memory allocations as
it goes), while the C stub turns the integer array back into a binary array.

Instead, pass the string all the way down into C, and have sscanf() unpack it
directly into a xen_domain_handle_t object.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      | 21 +++----------------
 tools/ocaml/libs/xc/xenctrl.mli     |  5 +++--
 tools/ocaml/libs/xc/xenctrl_stubs.c | 41 +++++++++++++++++++++++--------------
 3 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 1a01faa..b3b33bb 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -135,26 +135,11 @@ let with_intf f =
 	interface_close xc;
 	r
 
-external _domain_create: handle -> int32 -> domain_create_flag list -> int array -> arch_domainconfig -> domid
+external domain_create: handle -> int32 -> domain_create_flag list -> string -> arch_domainconfig -> domid
        = "stub_xc_domain_create"
 
-let int_array_of_uuid_string s =
-	try
-		Scanf.sscanf s
-			"%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x"
-			(fun a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 a15 ->
-				[| a0; a1; a2; a3; a4; a5; a6; a7;
-				   a8; a9; a10; a11; a12; a13; a14; a15 |])
-	with _ -> invalid_arg ("Xc.int_array_of_uuid_string: " ^ s)
-
-let domain_create handle n flags uuid =
-	_domain_create handle n flags (int_array_of_uuid_string uuid)
-
-external _domain_sethandle: handle -> domid -> int array -> unit
-                          = "stub_xc_domain_sethandle"
-
-let domain_sethandle handle n uuid =
-	_domain_sethandle handle n (int_array_of_uuid_string uuid)
+external domain_sethandle: handle -> domid -> string -> unit
+       = "stub_xc_domain_sethandle"
 
 external domain_max_vcpus: handle -> domid -> int -> unit
        = "stub_xc_domain_max_vcpus"
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 7d2e6f0..35303ab 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -98,8 +98,9 @@ type handle
 external interface_open : unit -> handle = "stub_xc_interface_open"
 external interface_close : handle -> unit = "stub_xc_interface_close"
 val with_intf : (handle -> 'a) -> 'a
-val domain_create : handle -> int32 -> domain_create_flag list -> string -> arch_domainconfig -> domid
-val domain_sethandle : handle -> domid -> string -> unit
+external domain_create : handle -> int32 -> domain_create_flag list -> string -> arch_domainconfig -> domid
+  = "stub_xc_domain_create"
+external domain_sethandle : handle -> domid -> string -> unit = "stub_xc_domain_sethandle"
 external domain_max_vcpus : handle -> domid -> int -> unit
   = "stub_xc_domain_max_vcpus"
 external domain_pause : handle -> domid -> unit = "stub_xc_domain_pause"
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 45d4e20..af92bfa 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -28,6 +28,7 @@
 #include <sys/mman.h>
 #include <stdint.h>
 #include <string.h>
+#include <inttypes.h>
 
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
@@ -97,6 +98,27 @@ CAMLprim value stub_xc_interface_close(value xch)
 	CAMLreturn(Val_unit);
 }
 
+static void domain_handle_of_uuid_string(xen_domain_handle_t h,
+					 const char *uuid)
+{
+#define X "%02"SCNx8
+#define UUID_FMT (X X X X "-" X X "-" X X "-" X X "-" X X X X X X)
+
+	if ( sscanf(uuid, UUID_FMT, &h[0], &h[1], &h[2], &h[3], &h[4],
+		    &h[5], &h[6], &h[7], &h[8], &h[9], &h[10], &h[11],
+		    &h[12], &h[13], &h[14], &h[15]) != 16 )
+	{
+		char buf[128];
+
+		snprintf(buf, sizeof(buf),
+			 "Xc.int_array_of_uuid_string: %s", uuid);
+
+		caml_invalid_argument(buf);
+	}
+
+#undef X
+}
+
 CAMLprim value stub_xc_domain_create(value xch, value ssidref,
                                      value flags, value handle,
                                      value domconfig)
@@ -104,20 +126,14 @@ CAMLprim value stub_xc_domain_create(value xch, value ssidref,
 	CAMLparam4(xch, ssidref, flags, handle);
 
 	uint32_t domid = 0;
-	xen_domain_handle_t h = { 0 };
+	xen_domain_handle_t h;
 	int result;
-	int i;
 	uint32_t c_ssidref = Int32_val(ssidref);
 	unsigned int c_flags = 0;
 	value l;
 	xc_domain_configuration_t config = {};
 
-        if (Wosize_val(handle) != 16)
-		caml_invalid_argument("Handle not a 16-integer array");
-
-	for (i = 0; i < sizeof(h); i++) {
-		h[i] = Int_val(Field(handle, i)) & 0xff;
-	}
+	domain_handle_of_uuid_string(h, String_val(handle));
 
 	for (l = flags; l != Val_none; l = Field(l, 1))
 		c_flags |= 1u << Int_val(Field(l, 0));
@@ -169,15 +185,10 @@ CAMLprim value stub_xc_domain_max_vcpus(value xch, value domid,
 value stub_xc_domain_sethandle(value xch, value domid, value handle)
 {
 	CAMLparam3(xch, domid, handle);
-	xen_domain_handle_t h = { 0 };
+	xen_domain_handle_t h;
 	int i;
 
-        if (Wosize_val(handle) != 16)
-		caml_invalid_argument("Handle not a 16-integer array");
-
-	for (i = 0; i < sizeof(h); i++) {
-		h[i] = Int_val(Field(handle, i)) & 0xff;
-	}
+	domain_handle_of_uuid_string(h, String_val(handle));
 
 	i = xc_domain_sethandle(_H(xch), _D(domid), h);
 	if (i)
-- 
2.1.4



* [PATCH 08/20] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (6 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 07/20] tools/ocaml: Drop int_array_of_uuid_string() Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20 19:43   ` Christian Lindig
  2018-03-19 19:13 ` [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Rob Hoes, Andrew Cooper, Jon Ludlam, Christian Lindig, David Scott

The underlying C function is about to make the same change, and the structure
is going to gain extra fields.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      | 14 ++++++++++---
 tools/ocaml/libs/xc/xenctrl.mli     | 13 ++++++++++---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 39 +++++++++++++++++++++++--------------
 3 files changed, 45 insertions(+), 21 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index b3b33bb..3b7526e 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -56,6 +56,16 @@ type arch_domainconfig =
 	| ARM of xen_arm_arch_domainconfig
 	| X86 of xen_x86_arch_domainconfig
 
+type domain_create_flag = CDF_HVM | CDF_HAP
+
+type domctl_create_config =
+{
+	ssidref: int32;
+	handle: string;
+	flags: domain_create_flag list;
+	arch: arch_domainconfig;
+}
+
 type domaininfo =
 {
 	domid             : domid;
@@ -120,8 +130,6 @@ type compile_info =
 
 type shutdown_reason = Poweroff | Reboot | Suspend | Crash | Watchdog | Soft_reset
 
-type domain_create_flag = CDF_HVM | CDF_HAP
-
 exception Error of string
 
 type handle
@@ -135,7 +143,7 @@ let with_intf f =
 	interface_close xc;
 	r
 
-external domain_create: handle -> int32 -> domain_create_flag list -> string -> arch_domainconfig -> domid
+external domain_create: handle -> domctl_create_config -> domid
        = "stub_xc_domain_create"
 
 external domain_sethandle: handle -> domid -> string -> unit
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 35303ab..d103a33 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -49,6 +49,15 @@ type arch_domainconfig =
   | ARM of xen_arm_arch_domainconfig
   | X86 of xen_x86_arch_domainconfig
 
+type domain_create_flag = CDF_HVM | CDF_HAP
+
+type domctl_create_config = {
+  ssidref: int32;
+  handle: string;
+  flags: domain_create_flag list;
+  arch: arch_domainconfig;
+}
+
 type domaininfo = {
   domid : domid;
   dying : bool;
@@ -91,14 +100,12 @@ type compile_info = {
 }
 type shutdown_reason = Poweroff | Reboot | Suspend | Crash | Watchdog | Soft_reset
 
-type domain_create_flag = CDF_HVM | CDF_HAP
-
 exception Error of string
 type handle
 external interface_open : unit -> handle = "stub_xc_interface_open"
 external interface_close : handle -> unit = "stub_xc_interface_close"
 val with_intf : (handle -> 'a) -> 'a
-external domain_create : handle -> int32 -> domain_create_flag list -> string -> arch_domainconfig -> domid
+external domain_create : handle -> domctl_create_config -> domid
   = "stub_xc_domain_create"
 external domain_sethandle : handle -> domid -> string -> unit = "stub_xc_domain_sethandle"
 external domain_max_vcpus : handle -> domid -> int -> unit
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index af92bfa..fd8341e 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -119,36 +119,39 @@ static void domain_handle_of_uuid_string(xen_domain_handle_t h,
 #undef X
 }
 
-CAMLprim value stub_xc_domain_create(value xch, value ssidref,
-                                     value flags, value handle,
-                                     value domconfig)
+CAMLprim value stub_xc_domain_create(value xch, value config)
 {
-	CAMLparam4(xch, ssidref, flags, handle);
+	CAMLparam2(xch, config);
+
+	/* Mnemonics for the named fields inside domctl_create_config */
+#define VAL_SSIDREF             Field(config, 0)
+#define VAL_HANDLE              Field(config, 1)
+#define VAL_FLAGS               Field(config, 2)
+#define VAL_ARCH                Field(config, 3)
 
 	uint32_t domid = 0;
-	xen_domain_handle_t h;
 	int result;
-	uint32_t c_ssidref = Int32_val(ssidref);
-	unsigned int c_flags = 0;
 	value l;
-	xc_domain_configuration_t config = {};
+	struct xen_domctl_createdomain cfg = {
+		.ssidref = Int32_val(VAL_SSIDREF),
+	};
 
-	domain_handle_of_uuid_string(h, String_val(handle));
+	domain_handle_of_uuid_string(cfg.handle, String_val(VAL_HANDLE));
 
-	for (l = flags; l != Val_none; l = Field(l, 1))
-		c_flags |= 1u << Int_val(Field(l, 0));
+	for (l = VAL_FLAGS; l != Val_none; l = Field(l, 1))
+		cfg.flags |= 1u << Int_val(Field(l, 0));
 
-	switch(Tag_val(domconfig)) {
+	switch(Tag_val(VAL_ARCH)) {
 	case 0: /* ARM - nothing to do */
 		caml_failwith("Unhandled: ARM");
 		break;
 
 	case 1: /* X86 - emulation flags in the block */
 #if defined(__i386__) || defined(__x86_64__)
-		for (l = Field(Field(domconfig, 0), 0);
+		for (l = Field(VAL_ARCH, 0);
 		     l != Val_none;
 		     l = Field(l, 1))
-			config.emulation_flags |= 1u << Int_val(Field(l, 0));
+			cfg.arch.emulation_flags |= 1u << Int_val(Field(l, 0));
 #else
 		caml_failwith("Unhandled: x86");
 #endif
@@ -158,8 +161,14 @@ CAMLprim value stub_xc_domain_create(value xch, value ssidref,
 		caml_failwith("Unhandled domconfig type");
 	}
 
+#undef VAL_ARCH
+#undef VAL_FLAGS
+#undef VAL_HANDLE
+#undef VAL_SSIDREF
+
 	caml_enter_blocking_section();
-	result = xc_domain_create(_H(xch), c_ssidref, h, c_flags, &domid, &config);
+	result = xc_domain_create(_H(xch), cfg.ssidref, cfg.handle, cfg.flags,
+				  &domid, &cfg.arch);
 	caml_leave_blocking_section();
 
 	if (result < 0)
-- 
2.1.4



* [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (7 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 08/20] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20 19:42   ` Christian Lindig
                     ` (2 more replies)
  2018-03-19 19:13 ` [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
                   ` (10 subsequent siblings)
  19 siblings, 3 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Juergen Gross, Marek Marczykowski-Górecki, Wei Liu,
	Andrew Cooper, Ian Jackson, Jon Ludlam, Rob Hoes,
	Christian Lindig, David Scott

In future patches, the structure will be extended with further information,
and this is far cleaner than adding extra parameters.

The python stubs are the only user that passes NULL for the existing config
parameter (which is actually the arch substructure).  Therefore, the #ifdefary
moves there to compensate.

For libxl, pass the full config object down into
libxl__arch_domain_{prepare,save}_config(), as there are in practice
arch-specific settings in the common part of the structure (specifically the
s3_integrity and oos_off flags).

No practical change in behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Juergen Gross <jgross@suse.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 tools/helpers/init-xenstore-domain.c | 16 +++++++---------
 tools/libxc/include/xenctrl.h        |  6 ++----
 tools/libxc/xc_domain.c              | 31 ++++---------------------------
 tools/libxl/libxl_arch.h             |  4 ++--
 tools/libxl/libxl_arm.c              | 16 ++++++++--------
 tools/libxl/libxl_create.c           | 23 ++++++++++++-----------
 tools/libxl/libxl_x86.c              | 10 +++++-----
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  3 +--
 tools/python/xen/lowlevel/xc/xc.c    | 28 ++++++++++++++++++++--------
 9 files changed, 61 insertions(+), 76 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 8453be2..785e570 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -60,11 +60,13 @@ static void usage(void)
 static int build(xc_interface *xch)
 {
     char cmdline[512];
-    uint32_t ssid;
-    xen_domain_handle_t handle = { 0 };
     int rv, xs_fd;
     struct xc_dom_image *dom = NULL;
     int limit_kb = (maxmem ? : (memory + 1)) * 1024;
+    struct xen_domctl_createdomain config = {
+        .ssidref = SECINITSID_DOMU,
+        .flags = XEN_DOMCTL_CDF_xs_domain,
+    };
 
     xs_fd = open("/dev/xen/xenbus_backend", O_RDWR);
     if ( xs_fd == -1 )
@@ -75,19 +77,15 @@ static int build(xc_interface *xch)
 
     if ( flask )
     {
-        rv = xc_flask_context_to_sid(xch, flask, strlen(flask), &ssid);
+        rv = xc_flask_context_to_sid(xch, flask, strlen(flask), &config.ssidref);
         if ( rv )
         {
             fprintf(stderr, "xc_flask_context_to_sid failed\n");
             goto err;
         }
     }
-    else
-    {
-        ssid = SECINITSID_DOMU;
-    }
-    rv = xc_domain_create(xch, ssid, handle, XEN_DOMCTL_CDF_xs_domain,
-                          &domid, NULL);
+
+    rv = xc_domain_create(xch, &domid, &config);
     if ( rv )
     {
         fprintf(stderr, "xc_domain_create failed\n");
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 543abfc..6ecc850 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -494,10 +494,8 @@ typedef struct xc_vcpu_extstate {
     void *buffer;
 } xc_vcpu_extstate_t;
 
-typedef struct xen_arch_domainconfig xc_domain_configuration_t;
-int xc_domain_create(xc_interface *xch, uint32_t ssidref,
-                     xen_domain_handle_t handle, uint32_t flags,
-                     uint32_t *pdomid, xc_domain_configuration_t *config);
+int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
+                     struct xen_domctl_createdomain *config);
 
 
 /* Functions to produce a dump of a given domain
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index f122ea6..0124cea 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -26,43 +26,20 @@
 #include <xen/memory.h>
 #include <xen/hvm/hvm_op.h>
 
-int xc_domain_create(xc_interface *xch, uint32_t ssidref,
-                     xen_domain_handle_t handle, uint32_t flags,
-                     uint32_t *pdomid, xc_domain_configuration_t *config)
+int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
+                     struct xen_domctl_createdomain *config)
 {
-    xc_domain_configuration_t lconfig;
     int err;
     DECLARE_DOMCTL;
 
-    if ( config == NULL )
-    {
-        memset(&lconfig, 0, sizeof(lconfig));
-
-#if defined (__i386) || defined(__x86_64__)
-        if ( flags & XEN_DOMCTL_CDF_hvm_guest )
-            lconfig.emulation_flags = XEN_X86_EMU_ALL;
-#elif defined (__arm__) || defined(__aarch64__)
-        lconfig.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
-        lconfig.nr_spis = 0;
-#else
-#error Architecture not supported
-#endif
-
-        config = &lconfig;
-    }
-
     domctl.cmd = XEN_DOMCTL_createdomain;
     domctl.domain = *pdomid;
-    domctl.u.createdomain.ssidref = ssidref;
-    domctl.u.createdomain.flags   = flags;
-    memcpy(domctl.u.createdomain.handle, handle, sizeof(xen_domain_handle_t));
-    /* xc_domain_configure_t is an alias of arch_domainconfig_t */
-    memcpy(&domctl.u.createdomain.arch, config, sizeof(*config));
+    domctl.u.createdomain = *config;
+
     if ( (err = do_domctl(xch, &domctl)) != 0 )
         return err;
 
     *pdomid = (uint16_t)domctl.domain;
-    memcpy(config, &domctl.u.createdomain.arch, sizeof(*config));
 
     return 0;
 }
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 784ec7f..5c3c9fb 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -19,13 +19,13 @@
 _hidden
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
-                                      xc_domain_configuration_t *xc_config);
+                                      struct xen_domctl_createdomain *config);
 
 /* save the arch specific configuration for the domain */
 _hidden
 int libxl__arch_domain_save_config(libxl__gc *gc,
                                    libxl_domain_config *d_config,
-                                   const xc_domain_configuration_t *xc_config);
+                                   const struct xen_domctl_createdomain *config);
 
 /* arch specific internal domain creation function */
 _hidden
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 906fd0d..827a9d9 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -39,7 +39,7 @@ static const char *gicv_to_string(uint8_t gic_version)
 
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
-                                      xc_domain_configuration_t *xc_config)
+                                      struct xen_domctl_createdomain *config)
 {
     uint32_t nr_spis = 0;
     unsigned int i;
@@ -86,18 +86,18 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 
     LOG(DEBUG, "Configure the domain");
 
-    xc_config->nr_spis = nr_spis;
+    config->arch.nr_spis = nr_spis;
     LOG(DEBUG, " - Allocate %u SPIs", nr_spis);
 
     switch (d_config->b_info.arch_arm.gic_version) {
     case LIBXL_GIC_VERSION_DEFAULT:
-        xc_config->gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
+        config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
         break;
     case LIBXL_GIC_VERSION_V2:
-        xc_config->gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
+        config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
         break;
     case LIBXL_GIC_VERSION_V3:
-        xc_config->gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
+        config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
         break;
     default:
         LOG(ERROR, "Unknown GIC version %d",
@@ -110,9 +110,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 
 int libxl__arch_domain_save_config(libxl__gc *gc,
                                    libxl_domain_config *d_config,
-                                   const xc_domain_configuration_t *xc_config)
+                                   const struct xen_domctl_createdomain *config)
 {
-    switch (xc_config->gic_version) {
+    switch (config->arch.gic_version) {
     case XEN_DOMCTL_CONFIG_GIC_V2:
         d_config->b_info.arch_arm.gic_version = LIBXL_GIC_VERSION_V2;
         break;
@@ -120,7 +120,7 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
         d_config->b_info.arch_arm.gic_version = LIBXL_GIC_VERSION_V3;
         break;
     default:
-        LOG(ERROR, "Unexpected gic version %u", xc_config->gic_version);
+        LOG(ERROR, "Unexpected gic version %u", config->arch.gic_version);
         return ERROR_FAIL;
     }
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 60a5542..668bd3e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -562,35 +562,36 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     /* Valid domid here means we're soft resetting. */
     if (!libxl_domid_valid_guest(*domid)) {
-        int flags = 0;
-        xen_domain_handle_t handle;
-        xc_domain_configuration_t xc_config = {};
+        struct xen_domctl_createdomain create = {
+            .ssidref = info->ssidref,
+        };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
-            flags |= XEN_DOMCTL_CDF_hvm_guest;
-            flags |= libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
-            flags |= libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
+            create.flags |= XEN_DOMCTL_CDF_hvm_guest;
+            create.flags |=
+                libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
+            create.flags |=
+                libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
         }
 
         /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
-        libxl_uuid_copy(ctx, (libxl_uuid *)handle, &info->uuid);
+        libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid);
 
-        ret = libxl__arch_domain_prepare_config(gc, d_config, &xc_config);
+        ret = libxl__arch_domain_prepare_config(gc, d_config, &create);
         if (ret < 0) {
             LOGED(ERROR, *domid, "fail to get domain config");
             rc = ERROR_FAIL;
             goto out;
         }
 
-        ret = xc_domain_create(ctx->xch, info->ssidref, handle, flags, domid,
-                               &xc_config);
+        ret = xc_domain_create(ctx->xch, domid, &create);
         if (ret < 0) {
             LOGED(ERROR, *domid, "domain creation fail");
             rc = ERROR_FAIL;
             goto out;
         }
 
-        rc = libxl__arch_domain_save_config(gc, d_config, &xc_config);
+        rc = libxl__arch_domain_save_config(gc, d_config, &create);
         if (rc < 0)
             goto out;
     }
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 4ea1249..3f0c33a 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -5,17 +5,17 @@
 
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
-                                      xc_domain_configuration_t *xc_config)
+                                      struct xen_domctl_createdomain *config)
 {
     switch(d_config->c_info.type) {
     case LIBXL_DOMAIN_TYPE_HVM:
-        xc_config->emulation_flags = XEN_X86_EMU_ALL;
+        config->arch.emulation_flags = XEN_X86_EMU_ALL;
         break;
     case LIBXL_DOMAIN_TYPE_PVH:
-        xc_config->emulation_flags = XEN_X86_EMU_LAPIC;
+        config->arch.emulation_flags = XEN_X86_EMU_LAPIC;
         break;
     case LIBXL_DOMAIN_TYPE_PV:
-        xc_config->emulation_flags = 0;
+        config->arch.emulation_flags = 0;
         break;
     default:
         abort();
@@ -26,7 +26,7 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 
 int libxl__arch_domain_save_config(libxl__gc *gc,
                                    libxl_domain_config *d_config,
-                                   const xc_domain_configuration_t *xc_config)
+                                   const struct xen_domctl_createdomain *config)
 {
     return 0;
 }
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index fd8341e..360e054 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -167,8 +167,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #undef VAL_SSIDREF
 
 	caml_enter_blocking_section();
-	result = xc_domain_create(_H(xch), cfg.ssidref, cfg.handle, cfg.flags,
-				  &domid, &cfg.arch);
+	result = xc_domain_create(_H(xch), &domid, &cfg);
 	caml_leave_blocking_section();
 
 	if (result < 0)
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index f501764..d0bd173 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -117,17 +117,21 @@ static PyObject *pyxc_domain_create(XcObject *self,
                                     PyObject *args,
                                     PyObject *kwds)
 {
-    uint32_t dom = 0, ssidref = 0, flags = 0, target = 0;
+    uint32_t dom = 0, target = 0;
     int      ret, i;
     PyObject *pyhandle = NULL;
-    xen_domain_handle_t handle = { 
-        0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
-        0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef };
+    struct xen_domctl_createdomain config = {
+        .handle = {
+            0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
+            0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
+        },
+    };
 
     static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
 
     if ( !PyArg_ParseTupleAndKeywords(args, kwds, "|iiOii", kwd_list,
-                                      &dom, &ssidref, &pyhandle, &flags, &target))
+                                      &dom, &config.ssidref, &pyhandle,
+                                      &config.flags, &target))
         return NULL;
     if ( pyhandle != NULL )
     {
@@ -140,12 +144,20 @@ static PyObject *pyxc_domain_create(XcObject *self,
             PyObject *p = PyList_GetItem(pyhandle, i);
             if ( !PyLongOrInt_Check(p) )
                 goto out_exception;
-            handle[i] = (uint8_t)PyLongOrInt_AsLong(p);
+            config.handle[i] = (uint8_t)PyLongOrInt_AsLong(p);
         }
     }
 
-    if ( (ret = xc_domain_create(self->xc_handle, ssidref,
-                                 handle, flags, &dom, NULL)) < 0 )
+#if defined (__i386) || defined(__x86_64__)
+    if ( config.flags & XEN_DOMCTL_CDF_hvm_guest )
+        config.arch.emulation_flags = XEN_X86_EMU_ALL;
+#elif defined (__arm__) || defined(__aarch64__)
+    config.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
+#else
+#error Architecture not supported
+#endif
+
+    if ( (ret = xc_domain_create(self->xc_handle, &dom, &config)) < 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
     if ( target )
-- 
2.1.4



* [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (8 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20 19:27   ` Daniel De Graaf
                     ` (2 more replies)
  2018-03-19 19:13 ` [PATCH 11/20] xen/domctl: Merge set_gnttab_limits " Andrew Cooper
                   ` (9 subsequent siblings)
  19 siblings, 3 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Marek Marczykowski-Górecki, Wei Liu, Andrew Cooper,
	Ian Jackson, Jon Ludlam, Rob Hoes, Christian Lindig, Jan Beulich,
	David Scott, Daniel De Graaf

set_max_evtchn is somewhat weird.  It was introduced with the event_fifo work,
but has never been used.  Still, it is a bound on the resources consumed by the
event channel infrastructure, and should be part of createdomain rather than
editable after the fact.

Drop XEN_DOMCTL_set_max_evtchn completely (including XSM hooks and libxc
wrappers), and retain the functionality in XEN_DOMCTL_createdomain.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Hypervisor side cleanup is present in a later patch
---
 tools/flask/policy/modules/dom0.te   |  2 +-
 tools/flask/policy/modules/xen.if    |  2 +-
 tools/helpers/init-xenstore-domain.c |  1 +
 tools/libxc/include/xenctrl.h        | 12 ------------
 tools/libxc/xc_domain.c              | 11 -----------
 tools/libxl/libxl_create.c           |  2 ++
 tools/libxl/libxl_dom.c              |  7 -------
 tools/ocaml/libs/xc/xenctrl.ml       |  1 +
 tools/ocaml/libs/xc/xenctrl.mli      |  1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  5 ++++-
 tools/python/xen/lowlevel/xc/xc.c    |  1 +
 xen/common/domctl.c                  |  9 +++------
 xen/include/public/domctl.h          | 19 ++++++++-----------
 xen/xsm/flask/hooks.c                |  3 ---
 xen/xsm/flask/policy/access_vectors  |  2 --
 15 files changed, 23 insertions(+), 55 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index bf794d9..4eb3843 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -38,7 +38,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn set_vnumainfo
+	set_cpuid gettsc settsc setscheduler set_vnumainfo
 	get_vnumainfo psr_cmt_op psr_alloc set_gnttab_limits
 };
 allow dom0_t dom0_t:resource { add remove };
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 459880b..7dc25be 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -51,7 +51,7 @@ define(`create_domain_common', `
 			getvcpuinfo getaddrsize getaffinity setaffinity
 			settime setdomainhandle getvcpucontext set_misc_info };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
-			set_max_evtchn set_vnumainfo get_vnumainfo cacheflush
+			set_vnumainfo get_vnumainfo cacheflush
 			psr_cmt_op psr_alloc soft_reset set_gnttab_limits };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 785e570..89c329c 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -66,6 +66,7 @@ static int build(xc_interface *xch)
     struct xen_domctl_createdomain config = {
         .ssidref = SECINITSID_DOMU,
         .flags = XEN_DOMCTL_CDF_xs_domain,
+        .max_evtchn_port = -1, /* No limit. */
     };
 
     xs_fd = open("/dev/xen/xenbus_backend", O_RDWR);
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 6ecc850..88a175f 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1072,18 +1072,6 @@ int xc_domain_set_access_required(xc_interface *xch,
 int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq);
 
 /**
- * Set the maximum event channel port a domain may bind.
- *
- * This does not affect ports that are already bound.
- *
- * @param xch a handle to an open hypervisor interface
- * @param domid the domain id
- * @param max_port maximum port number
- */
-int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
-                             uint32_t max_port);
-
-/**
  * Set the maximum number of grant frames and maptrack frames a domain
  * can have. Must be used at domain setup time and only then.
  *
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 0124cea..2bc695c 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2256,17 +2256,6 @@ int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq)
     return do_domctl(xch, &domctl);
 }
 
-int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
-                             uint32_t max_port)
-{
-    DECLARE_DOMCTL;
-
-    domctl.cmd = XEN_DOMCTL_set_max_evtchn;
-    domctl.domain = domid;
-    domctl.u.set_max_evtchn.max_port = max_port;
-    return do_domctl(xch, &domctl);
-}
-
 int xc_domain_set_gnttab_limits(xc_interface *xch, uint32_t domid,
                                 uint32_t grant_frames,
                                 uint32_t maptrack_frames)
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 668bd3e..2cb3460 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -553,6 +553,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     /* convenience aliases */
     libxl_domain_create_info *info = &d_config->c_info;
+    libxl_domain_build_info *b_info = &d_config->b_info;
 
     uuid_string = libxl__uuid2string(gc, info->uuid);
     if (!uuid_string) {
@@ -564,6 +565,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     if (!libxl_domid_valid_guest(*domid)) {
         struct xen_domctl_createdomain create = {
             .ssidref = info->ssidref,
+            .max_evtchn_port = b_info->event_channels,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2e29b52..227e6cf 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -590,13 +590,6 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid,
     if (rc)
         return rc;
 
-    rc = xc_domain_set_max_evtchn(ctx->xch, domid, info->event_channels);
-    if (rc) {
-        LOG(ERROR, "Failed to set event channel limit to %d (%d)",
-            info->event_channels, rc);
-        return ERROR_FAIL;
-    }
-
     libxl_cpuid_apply_policy(ctx, domid);
     if (info->cpuid != NULL)
         libxl_cpuid_set(ctx, domid, info->cpuid);
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 3b7526e..6dc0dd7 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -63,6 +63,7 @@ type domctl_create_config =
 	ssidref: int32;
 	handle: string;
 	flags: domain_create_flag list;
+	max_evtchn_port: int32;
 	arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index d103a33..b0fec24 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -55,6 +55,7 @@ type domctl_create_config = {
   ssidref: int32;
   handle: string;
   flags: domain_create_flag list;
+  max_evtchn_port: int32;
   arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 360e054..c022de9 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -127,13 +127,15 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #define VAL_SSIDREF             Field(config, 0)
 #define VAL_HANDLE              Field(config, 1)
 #define VAL_FLAGS               Field(config, 2)
-#define VAL_ARCH                Field(config, 3)
+#define VAL_MAX_EVTCHN_PORT     Field(config, 3)
+#define VAL_ARCH                Field(config, 4)
 
 	uint32_t domid = 0;
 	int result;
 	value l;
 	struct xen_domctl_createdomain cfg = {
 		.ssidref = Int32_val(VAL_SSIDREF),
+		.max_evtchn_port = Int32_val(VAL_MAX_EVTCHN_PORT),
 	};
 
 	domain_handle_of_uuid_string(cfg.handle, String_val(VAL_HANDLE));
@@ -162,6 +164,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 	}
 
 #undef VAL_ARCH
+#undef VAL_MAX_EVTCHN_PORT
 #undef VAL_FLAGS
 #undef VAL_HANDLE
 #undef VAL_SSIDREF
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index d0bd173..a307de7 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -125,6 +125,7 @@ static PyObject *pyxc_domain_create(XcObject *self,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
         },
+        .max_evtchn_port = -1, /* No limit. */
     };
 
     static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 4d275ff..14dab56 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -539,6 +539,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             break;
         }
 
+        d->max_evtchn_port =
+            min_t(unsigned int, op->u.createdomain.max_evtchn_port, INT_MAX);
+
         ret = 0;
         op->domain = d->domain_id;
         copyback = 1;
@@ -1085,12 +1088,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = set_global_virq_handler(d, op->u.set_virq_handler.virq);
         break;
 
-    case XEN_DOMCTL_set_max_evtchn:
-        d->max_evtchn_port = min_t(unsigned int,
-                                   op->u.set_max_evtchn.max_port,
-                                   INT_MAX);
-        break;
-
     case XEN_DOMCTL_setvnumainfo:
     {
         struct vnuma_info *vnuma;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 0535da8..515671e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -65,6 +65,13 @@ struct xen_domctl_createdomain {
 #define _XEN_DOMCTL_CDF_xs_domain     4
 #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)
     uint32_t flags;
+
+    /*
+     * Various domain limits, which impact the quantity of resources (global
+     * mapping space, xenheap, etc) a guest may consume.
+     */
+    uint32_t max_evtchn_port;
+
     struct xen_arch_domainconfig arch;
 };
 
@@ -875,15 +882,6 @@ struct xen_domctl_set_broken_page_p2m {
 };
 
 /*
- * XEN_DOMCTL_set_max_evtchn: sets the maximum event channel port
- * number the guest may use.  Use this limit the amount of resources
- * (global mapping space, xenheap) a guest may use for event channels.
- */
-struct xen_domctl_set_max_evtchn {
-    uint32_t max_port;
-};
-
-/*
  * ARM: Clean and invalidate caches associated with given region of
  * guest memory.
  */
@@ -1161,7 +1159,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_set_broken_page_p2m           67
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
-#define XEN_DOMCTL_set_max_evtchn                70
+/* #define XEN_DOMCTL_set_max_evtchn             70 - Moved into XEN_DOMCTL_createdomain */
 #define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_get_vcpu_msrs                 72
 #define XEN_DOMCTL_set_vcpu_msrs                 73
@@ -1222,7 +1220,6 @@ struct xen_domctl {
         struct xen_domctl_set_access_required access_required;
         struct xen_domctl_audit_p2m         audit_p2m;
         struct xen_domctl_set_virq_handler  set_virq_handler;
-        struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_cacheflush        cacheflush;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 1802d8d..1978703 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -726,9 +726,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_audit_p2m:
         return current_has_perm(d, SECCLASS_HVM, HVM__AUDIT_P2M);
 
-    case XEN_DOMCTL_set_max_evtchn:
-        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
-
     case XEN_DOMCTL_cacheflush:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
 
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 89b9996..deecfa2 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -222,8 +222,6 @@ class domain2
     setscheduler
 # XENMEM_claim_pages
     setclaim
-# XEN_DOMCTL_set_max_evtchn
-    set_max_evtchn
 # XEN_DOMCTL_cacheflush
     cacheflush
 # Creation of the hardware domain when it is not dom0
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH 11/20] xen/domctl: Merge set_gnttab_limits into createdomain
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (9 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-19 21:43   ` Christian Lindig
                     ` (3 more replies)
  2018-03-19 19:13 ` [PATCH 12/20] xen/domctl: Merge max_vcpus " Andrew Cooper
                   ` (8 subsequent siblings)
  19 siblings, 4 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Marek Marczykowski-Górecki, Wei Liu, Andrew Cooper,
	Ian Jackson, Jon Ludlam, Rob Hoes, Christian Lindig, Jan Beulich,
	David Scott, Daniel De Graaf

XEN_DOMCTL_set_gnttab_limits is a fairly new hypercall, and is strictly
mandatory.  Adding support for it introduced a state where a domain has a
mostly un-constructed grant table, and there were cases where mis-ordering of
toolstack hypercalls could cause a NULL pointer dereference in the hypervisor.
In fixing this, the grant table initialisation code became very tangled.

As the settings are mandatory, delete XEN_DOMCTL_set_gnttab_limits (including
XSM hooks and libxc wrappers) and retain the functionality in
XEN_DOMCTL_createdomain.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Hypervisor-side cleanup is present in later patches
---
 tools/flask/policy/modules/dom0.te   |  2 +-
 tools/flask/policy/modules/xen.if    |  2 +-
 tools/helpers/init-xenstore-domain.c | 19 ++++++++-----------
 tools/libxc/include/xenctrl.h        | 13 -------------
 tools/libxc/xc_domain.c              | 13 -------------
 tools/libxl/libxl_create.c           |  2 ++
 tools/libxl/libxl_dom.c              |  6 ------
 tools/ocaml/libs/xc/xenctrl.ml       |  2 ++
 tools/ocaml/libs/xc/xenctrl.mli      |  2 ++
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  8 +++++++-
 tools/python/xen/lowlevel/xc/xc.c    |  2 ++
 xen/common/domctl.c                  | 34 ++++++++++++++++++++++++++--------
 xen/include/public/domctl.h          | 10 +++-------
 xen/xsm/flask/hooks.c                |  3 ---
 xen/xsm/flask/policy/access_vectors  |  2 --
 15 files changed, 54 insertions(+), 66 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 4eb3843..dfdcdcd 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -39,7 +39,7 @@ allow dom0_t dom0_t:domain {
 };
 allow dom0_t dom0_t:domain2 {
 	set_cpuid gettsc settsc setscheduler set_vnumainfo
-	get_vnumainfo psr_cmt_op psr_alloc set_gnttab_limits
+	get_vnumainfo psr_cmt_op psr_alloc
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 7dc25be..5af984c 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -52,7 +52,7 @@ define(`create_domain_common', `
 			settime setdomainhandle getvcpucontext set_misc_info };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
 			set_vnumainfo get_vnumainfo cacheflush
-			psr_cmt_op psr_alloc soft_reset set_gnttab_limits };
+			psr_cmt_op psr_alloc soft_reset };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 89c329c..4771750 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -67,6 +67,14 @@ static int build(xc_interface *xch)
         .ssidref = SECINITSID_DOMU,
         .flags = XEN_DOMCTL_CDF_xs_domain,
         .max_evtchn_port = -1, /* No limit. */
+
+        /*
+         * 1 grant frame is enough: we don't need many grants.
+         * Mini-OS doesn't like less than 4, though, so use 4.
+         * 128 maptrack frames: 256 entries per frame, enough for 32768 domains.
+         */
+        .max_grant_frames = 4,
+        .max_maptrack_frames = 128,
     };
 
     xs_fd = open("/dev/xen/xenbus_backend", O_RDWR);
@@ -104,17 +112,6 @@ static int build(xc_interface *xch)
         fprintf(stderr, "xc_domain_setmaxmem failed\n");
         goto err;
     }
-    /*
-     * 1 grant frame is enough: we don't need many grants.
-     * Mini-OS doesn't like less than 4, though, so use 4.
-     * 128 maptrack frames: 256 entries per frame, enough for 32768 domains.
-     */
-    rv = xc_domain_set_gnttab_limits(xch, domid, 4, 128);
-    if ( rv )
-    {
-        fprintf(stderr, "xc_domain_set_gnttab_limits failed\n");
-        goto err;
-    }
     rv = xc_domain_set_memmap_limit(xch, domid, limit_kb);
     if ( rv )
     {
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 88a175f..0c7c07c 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1071,19 +1071,6 @@ int xc_domain_set_access_required(xc_interface *xch,
  */
 int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq);
 
-/**
- * Set the maximum number of grant frames and maptrack frames a domain
- * can have. Must be used at domain setup time and only then.
- *
- * @param xch a handle to an open hypervisor interface
- * @param domid the domain id
- * @param grant_frames max. number of grant frames
- * @param maptrack_frames max. number of maptrack frames
- */
-int xc_domain_set_gnttab_limits(xc_interface *xch, uint32_t domid,
-                                uint32_t grant_frames,
-                                uint32_t maptrack_frames);
-
 /*
  * CPUPOOL MANAGEMENT FUNCTIONS
  */
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 2bc695c..e8d0734 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2256,19 +2256,6 @@ int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq)
     return do_domctl(xch, &domctl);
 }
 
-int xc_domain_set_gnttab_limits(xc_interface *xch, uint32_t domid,
-                                uint32_t grant_frames,
-                                uint32_t maptrack_frames)
-{
-    DECLARE_DOMCTL;
-
-    domctl.cmd = XEN_DOMCTL_set_gnttab_limits;
-    domctl.domain = domid;
-    domctl.u.set_gnttab_limits.grant_frames = grant_frames;
-    domctl.u.set_gnttab_limits.maptrack_frames = maptrack_frames;
-    return do_domctl(xch, &domctl);
-}
-
 /* Plumbing Xen with vNUMA topology */
 int xc_domain_setvnuma(xc_interface *xch,
                        uint32_t domid,
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 2cb3460..c8eb59c 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -566,6 +566,8 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         struct xen_domctl_createdomain create = {
             .ssidref = info->ssidref,
             .max_evtchn_port = b_info->event_channels,
+            .max_grant_frames = b_info->max_grant_frames,
+            .max_maptrack_frames = b_info->max_maptrack_frames,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 227e6cf..9194109 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -358,12 +358,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         return ERROR_FAIL;
     }
 
-    if (xc_domain_set_gnttab_limits(ctx->xch, domid, info->max_grant_frames,
-                                    info->max_maptrack_frames) != 0) {
-        LOG(ERROR, "Couldn't set grant table limits");
-        return ERROR_FAIL;
-    }
-
     /*
      * Check if the domain has any CPU or node affinity already. If not, try
      * to build up the latter via automatic NUMA placement. In fact, in case
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 6dc0dd7..7c8d6ab 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -64,6 +64,8 @@ type domctl_create_config =
 	handle: string;
 	flags: domain_create_flag list;
 	max_evtchn_port: int32;
+	max_grant_frames: int32;
+	max_maptrack_frames: int32;
 	arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index b0fec24..f150a5d 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -56,6 +56,8 @@ type domctl_create_config = {
   handle: string;
   flags: domain_create_flag list;
   max_evtchn_port: int32;
+  max_grant_frames: int32;
+  max_maptrack_frames: int32;
   arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index c022de9..882828f 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -128,7 +128,9 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #define VAL_HANDLE              Field(config, 1)
 #define VAL_FLAGS               Field(config, 2)
 #define VAL_MAX_EVTCHN_PORT     Field(config, 3)
-#define VAL_ARCH                Field(config, 4)
+#define VAL_MAX_GRANT_FRAMES    Field(config, 4)
+#define VAL_MAX_MAPTRACK_FRAMES Field(config, 5)
+#define VAL_ARCH                Field(config, 6)
 
 	uint32_t domid = 0;
 	int result;
@@ -136,6 +138,8 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 	struct xen_domctl_createdomain cfg = {
 		.ssidref = Int32_val(VAL_SSIDREF),
 		.max_evtchn_port = Int32_val(VAL_MAX_EVTCHN_PORT),
+		.max_grant_frames = Int32_val(VAL_MAX_GRANT_FRAMES),
+		.max_maptrack_frames = Int32_val(VAL_MAX_MAPTRACK_FRAMES),
 	};
 
 	domain_handle_of_uuid_string(cfg.handle, String_val(VAL_HANDLE));
@@ -164,6 +168,8 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 	}
 
 #undef VAL_ARCH
+#undef VAL_MAX_MAPTRACK_FRAMES
+#undef VAL_MAX_GRANT_FRAMES
 #undef VAL_MAX_EVTCHN_PORT
 #undef VAL_FLAGS
 #undef VAL_HANDLE
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index a307de7..db99a52 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -126,6 +126,8 @@ static PyObject *pyxc_domain_create(XcObject *self,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
         },
         .max_evtchn_port = -1, /* No limit. */
+        .max_grant_frames = 32,
+        .max_maptrack_frames = 1024,
     };
 
     static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 14dab56..d8ba461 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -539,14 +539,37 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             break;
         }
 
+        /* Stash the new domid for the toolstack. */
+        op->domain = d->domain_id;
+        copyback = true;
+
         d->max_evtchn_port =
             min_t(unsigned int, op->u.createdomain.max_evtchn_port, INT_MAX);
 
-        ret = 0;
-        op->domain = d->domain_id;
-        copyback = 1;
+        ret = grant_table_set_limits(d, op->u.createdomain.max_grant_frames,
+                                     op->u.createdomain.max_maptrack_frames);
+        if ( ret )
+            goto createdomain_fail_late;
+
         d = NULL;
         break;
+
+    createdomain_fail_late:
+        /*
+         * We've hit an error after putting the domain into the domain list,
+         * meaning that other entities in the system can refer to it.
+         *
+         * Unwinding is substantially more complicated, and without
+         * returning success, the toolstack won't know to clean up.
+         *
+         * Reuse the continuation logic to turn this hypercall into a
+         * destroydomain on behalf of the toolstack.
+         */
+        op->cmd = XEN_DOMCTL_destroydomain;
+        d = NULL;
+
+        ret = hypercall_create_continuation(__HYPERVISOR_domctl, "h", u_domctl);
+        break;
     }
 
     case XEN_DOMCTL_max_vcpus:
@@ -1114,11 +1137,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
-    case XEN_DOMCTL_set_gnttab_limits:
-        ret = grant_table_set_limits(d, op->u.set_gnttab_limits.grant_frames,
-                                     op->u.set_gnttab_limits.maptrack_frames);
-        break;
-
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 515671e..424f0a8 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -71,6 +71,8 @@ struct xen_domctl_createdomain {
      * mapping space, xenheap, etc) a guest may consume.
      */
     uint32_t max_evtchn_port;
+    uint32_t max_grant_frames;
+    uint32_t max_maptrack_frames;
 
     struct xen_arch_domainconfig arch;
 };
@@ -1065,11 +1067,6 @@ struct xen_domctl_psr_alloc {
     uint64_t data;      /* IN/OUT */
 };
 
-struct xen_domctl_set_gnttab_limits {
-    uint32_t grant_frames;     /* IN */
-    uint32_t maptrack_frames;  /* IN */
-};
-
 /* XEN_DOMCTL_vuart_op */
 struct xen_domctl_vuart_op {
 #define XEN_DOMCTL_VUART_OP_INIT  0
@@ -1168,7 +1165,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_psr_alloc                     78
 #define XEN_DOMCTL_soft_reset                    79
-#define XEN_DOMCTL_set_gnttab_limits             80
+/* #define XEN_DOMCTL_set_gnttab_limits          80 - Moved into XEN_DOMCTL_createdomain */
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
@@ -1229,7 +1226,6 @@ struct xen_domctl {
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
-        struct xen_domctl_set_gnttab_limits set_gnttab_limits;
         struct xen_domctl_vuart_op          vuart_op;
         uint8_t                             pad[128];
     } u;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 1978703..cccd1c7 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -740,9 +740,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_soft_reset:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SOFT_RESET);
 
-    case XEN_DOMCTL_set_gnttab_limits:
-        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_GNTTAB_LIMITS);
-
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index deecfa2..6f6e969 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -246,8 +246,6 @@ class domain2
     mem_sharing
 # XEN_DOMCTL_psr_alloc
     psr_alloc
-# XEN_DOMCTL_set_gnttab_limits
-    set_gnttab_limits
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.1.4



* [PATCH 12/20] xen/domctl: Merge max_vcpus into createdomain
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (10 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 11/20] xen/domctl: Merge set_gnttab_limits " Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20 19:27   ` Daniel De Graaf
                     ` (3 more replies)
  2018-03-19 19:13 ` [PATCH 13/20] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
                   ` (7 subsequent siblings)
  19 siblings, 4 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Marek Marczykowski-Górecki, Wei Liu, Andrew Cooper,
	Ian Jackson, Jon Ludlam, Rob Hoes, Christian Lindig, Jan Beulich,
	David Scott, Daniel De Graaf

XEN_DOMCTL_max_vcpus is a mandatory hypercall, but nothing actually prevents a
toolstack from unpausing a domain with no vcpus.

Originally, d->vcpu[] was an embedded array in struct domain, but c/s
fb442e217 "x86_64: allow more vCPU-s per guest" in Xen 4.0 altered it to be
dynamically allocated.  A side effect of this is that d->vcpu[] is NULL until
XEN_DOMCTL_max_vcpus has completed, but a lot of hypercalls blindly
dereference it.

Even today, XEN_DOMCTL_max_vcpus behaves as a mandatory singleton call which
can't change the number of vcpus once a value has been chosen.
Therefore, delete XEN_DOMCTL_max_vcpus (including XSM hooks and toolstack
wrappers) and retain the functionality in XEN_DOMCTL_createdomain.

This will allow future cleanup to ensure that d->vcpu[] is always valid for a
locatable domain, and allow simplification of some creation logic which needs
to size domain-wide objects based on max_vcpus, and which currently has to be
deferred until vcpu construction.

For the python stubs, extend the domain_create keyword list to take a
max_vcpus parameter, compensating for the deletion of the
pyxc_domain_max_vcpus function.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Hypervisor-side cleanup is present in later patches
---
 tools/flask/policy/modules/dom0.te   |   2 +-
 tools/flask/policy/modules/xen.if    |   2 +-
 tools/helpers/init-xenstore-domain.c |   7 +--
 tools/libxc/include/xenctrl.h        |  12 ----
 tools/libxc/xc_domain.c              |   9 ---
 tools/libxl/libxl_create.c           |   1 +
 tools/libxl/libxl_dom.c              |   5 --
 tools/ocaml/libs/xc/xenctrl.ml       |   4 +-
 tools/ocaml/libs/xc/xenctrl.mli      |   3 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  25 +++-----
 tools/python/xen/lowlevel/xc/xc.c    |  31 ++--------
 xen/common/domctl.c                  | 110 ++++++++++++-----------------------
 xen/include/public/domctl.h          |  10 +---
 xen/xsm/flask/hooks.c                |   3 -
 xen/xsm/flask/policy/access_vectors  |   2 -
 15 files changed, 57 insertions(+), 169 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index dfdcdcd..70a35fb 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -32,7 +32,7 @@ allow dom0_t xen_t:mmu memorymap;
 # Allow dom0 to use these domctls on itself. For domctls acting on other
 # domains, see the definitions of create_domain and manage_domain.
 allow dom0_t dom0_t:domain {
-	setvcpucontext max_vcpus setaffinity getaffinity getscheduler
+	setvcpucontext setaffinity getaffinity getscheduler
 	getdomaininfo getvcpuinfo getvcpucontext setdomainmaxmem setdomainhandle
 	setdebugging hypercall settime setaddrsize getaddrsize trigger
 	getpodtarget setpodtarget set_misc_info set_virq_handler
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 5af984c..350d5fb 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -46,7 +46,7 @@ define(`declare_build_label', `
 ')
 
 define(`create_domain_common', `
-	allow $1 $2:domain { create max_vcpus setdomainmaxmem setaddrsize
+	allow $1 $2:domain { create setdomainmaxmem setaddrsize
 			getdomaininfo hypercall setvcpucontext getscheduler
 			getvcpuinfo getaddrsize getaffinity setaffinity
 			settime setdomainhandle getvcpucontext set_misc_info };
diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 4771750..0a09261 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -66,6 +66,7 @@ static int build(xc_interface *xch)
     struct xen_domctl_createdomain config = {
         .ssidref = SECINITSID_DOMU,
         .flags = XEN_DOMCTL_CDF_xs_domain,
+        .max_vcpus = 1,
         .max_evtchn_port = -1, /* No limit. */
 
         /*
@@ -100,12 +101,6 @@ static int build(xc_interface *xch)
         fprintf(stderr, "xc_domain_create failed\n");
         goto err;
     }
-    rv = xc_domain_max_vcpus(xch, domid, 1);
-    if ( rv )
-    {
-        fprintf(stderr, "xc_domain_max_vcpus failed\n");
-        goto err;
-    }
     rv = xc_domain_setmaxmem(xch, domid, limit_kb);
     if ( rv )
     {
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 0c7c07c..b7ea16c 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -521,18 +521,6 @@ int xc_domain_dumpcore_via_callback(xc_interface *xch,
                                     void *arg,
                                     dumpcore_rtn_t dump_rtn);
 
-/*
- * This function sets the maximum number of vcpus that a domain may create.
- *
- * @parm xch a handle to an open hypervisor interface.
- * @parm domid the domain id in which vcpus are to be created.
- * @parm max the maximum number of vcpus that the domain may create.
- * @return 0 on success, -1 on failure.
- */
-int xc_domain_max_vcpus(xc_interface *xch,
-                        uint32_t domid,
-                        unsigned int max);
-
 /**
  * This function pauses a domain. A paused domain still exists in memory
  * however it does not receive any timeslices from the hypervisor.
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index e8d0734..3835d2e 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1292,15 +1292,6 @@ int xc_domain_get_pod_target(xc_interface *xch,
 }
 #endif
 
-int xc_domain_max_vcpus(xc_interface *xch, uint32_t domid, unsigned int max)
-{
-    DECLARE_DOMCTL;
-    domctl.cmd = XEN_DOMCTL_max_vcpus;
-    domctl.domain = domid;
-    domctl.u.max_vcpus.max    = max;
-    return do_domctl(xch, &domctl);
-}
-
 int xc_domain_sethandle(xc_interface *xch, uint32_t domid,
                         xen_domain_handle_t handle)
 {
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index c8eb59c..de59d2a 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -565,6 +565,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     if (!libxl_domid_valid_guest(*domid)) {
         struct xen_domctl_createdomain create = {
             .ssidref = info->ssidref,
+            .max_vcpus = b_info->max_vcpus,
             .max_evtchn_port = b_info->event_channels,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 9194109..762ee9a 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -353,11 +353,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     int rc;
     uint64_t size;
 
-    if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
-        LOG(ERROR, "Couldn't set max vcpu count");
-        return ERROR_FAIL;
-    }
-
     /*
      * Check if the domain has any CPU or node affinity already. If not, try
      * to build up the latter via automatic NUMA placement. In fact, in case
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 7c8d6ab..c465a5a 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -63,6 +63,7 @@ type domctl_create_config =
 	ssidref: int32;
 	handle: string;
 	flags: domain_create_flag list;
+	max_vcpus: int32;
 	max_evtchn_port: int32;
 	max_grant_frames: int32;
 	max_maptrack_frames: int32;
@@ -152,9 +153,6 @@ external domain_create: handle -> domctl_create_config -> domid
 external domain_sethandle: handle -> domid -> string -> unit
        = "stub_xc_domain_sethandle"
 
-external domain_max_vcpus: handle -> domid -> int -> unit
-       = "stub_xc_domain_max_vcpus"
-
 external domain_pause: handle -> domid -> unit = "stub_xc_domain_pause"
 external domain_unpause: handle -> domid -> unit = "stub_xc_domain_unpause"
 external domain_resume_fast: handle -> domid -> unit = "stub_xc_domain_resume_fast"
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index f150a5d..c06dd5a 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -55,6 +55,7 @@ type domctl_create_config = {
   ssidref: int32;
   handle: string;
   flags: domain_create_flag list;
+  max_vcpus: int32;
   max_evtchn_port: int32;
   max_grant_frames: int32;
   max_maptrack_frames: int32;
@@ -111,8 +112,6 @@ val with_intf : (handle -> 'a) -> 'a
 external domain_create : handle -> domctl_create_config -> domid
   = "stub_xc_domain_create"
 external domain_sethandle : handle -> domid -> string -> unit = "stub_xc_domain_sethandle"
-external domain_max_vcpus : handle -> domid -> int -> unit
-  = "stub_xc_domain_max_vcpus"
 external domain_pause : handle -> domid -> unit = "stub_xc_domain_pause"
 external domain_unpause : handle -> domid -> unit = "stub_xc_domain_unpause"
 external domain_resume_fast : handle -> domid -> unit
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 882828f..03b5956 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -127,16 +127,18 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #define VAL_SSIDREF             Field(config, 0)
 #define VAL_HANDLE              Field(config, 1)
 #define VAL_FLAGS               Field(config, 2)
-#define VAL_MAX_EVTCHN_PORT     Field(config, 3)
-#define VAL_MAX_GRANT_FRAMES    Field(config, 4)
-#define VAL_MAX_MAPTRACK_FRAMES Field(config, 5)
-#define VAL_ARCH                Field(config, 6)
+#define VAL_MAX_VCPUS           Field(config, 3)
+#define VAL_MAX_EVTCHN_PORT     Field(config, 4)
+#define VAL_MAX_GRANT_FRAMES    Field(config, 5)
+#define VAL_MAX_MAPTRACK_FRAMES Field(config, 6)
+#define VAL_ARCH                Field(config, 7)
 
 	uint32_t domid = 0;
 	int result;
 	value l;
 	struct xen_domctl_createdomain cfg = {
 		.ssidref = Int32_val(VAL_SSIDREF),
+		.max_vcpus = Int32_val(VAL_MAX_VCPUS),
 		.max_evtchn_port = Int32_val(VAL_MAX_EVTCHN_PORT),
 		.max_grant_frames = Int32_val(VAL_MAX_GRANT_FRAMES),
 		.max_maptrack_frames = Int32_val(VAL_MAX_MAPTRACK_FRAMES),
@@ -171,6 +173,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #undef VAL_MAX_MAPTRACK_FRAMES
 #undef VAL_MAX_GRANT_FRAMES
 #undef VAL_MAX_EVTCHN_PORT
+#undef VAL_MAX_VCPUS
 #undef VAL_FLAGS
 #undef VAL_HANDLE
 #undef VAL_SSIDREF
@@ -185,20 +188,6 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 	CAMLreturn(Val_int(domid));
 }
 
-CAMLprim value stub_xc_domain_max_vcpus(value xch, value domid,
-                                        value max_vcpus)
-{
-	CAMLparam3(xch, domid, max_vcpus);
-	int r;
-
-	r = xc_domain_max_vcpus(_H(xch), _D(domid), Int_val(max_vcpus));
-	if (r)
-		failwith_xc(_H(xch));
-
-	CAMLreturn(Val_unit);
-}
-
-
 value stub_xc_domain_sethandle(value xch, value domid, value handle)
 {
 	CAMLparam3(xch, domid, handle);
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index db99a52..d647b68 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -125,16 +125,19 @@ static PyObject *pyxc_domain_create(XcObject *self,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
         },
+        .max_vcpus = 1,
         .max_evtchn_port = -1, /* No limit. */
         .max_grant_frames = 32,
         .max_maptrack_frames = 1024,
     };
 
-    static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
+    static char *kwd_list[] = { "domid", "ssidref", "handle", "flags",
+                                "target", "max_vcpus", NULL };
 
-    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "|iiOii", kwd_list,
+    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "|iiOiii", kwd_list,
                                       &dom, &config.ssidref, &pyhandle,
-                                      &config.flags, &target))
+                                      &config.flags, &target,
+                                      &config.max_vcpus))
         return NULL;
     if ( pyhandle != NULL )
     {
@@ -176,20 +179,6 @@ static PyObject *pyxc_domain_create(XcObject *self,
     return NULL;
 }
 
-static PyObject *pyxc_domain_max_vcpus(XcObject *self, PyObject *args)
-{
-    uint32_t dom, max;
-
-    if (!PyArg_ParseTuple(args, "ii", &dom, &max))
-      return NULL;
-
-    if (xc_domain_max_vcpus(self->xc_handle, dom, max) != 0)
-        return pyxc_error_to_exception(self->xc_handle);
-    
-    Py_INCREF(zero);
-    return zero;
-}
-
 static PyObject *pyxc_domain_pause(XcObject *self, PyObject *args)
 {
     return dom_op(self, args, xc_domain_pause);
@@ -2049,14 +2038,6 @@ static PyMethodDef pyxc_methods[] = {
       " dom    [int, 0]:        Domain identifier to use (allocated if zero).\n"
       "Returns: [int] new domain identifier; -1 on error.\n" },
 
-    { "domain_max_vcpus", 
-      (PyCFunction)pyxc_domain_max_vcpus,
-      METH_VARARGS, "\n"
-      "Set the maximum number of VCPUs a domain may create.\n"
-      " dom       [int, 0]:      Domain identifier to use.\n"
-      " max     [int, 0]:      New maximum number of VCPUs in domain.\n"
-      "Returns: [int] 0 on success; -1 on error.\n" },
-
     { "domain_dumpcore", 
       (PyCFunction)pyxc_domain_dumpcore, 
       METH_VARARGS, "\n"
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index d8ba461..0326e43 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -496,6 +496,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     case XEN_DOMCTL_createdomain:
     {
         domid_t        dom;
+        unsigned int   vcpus = op->u.createdomain.max_vcpus;
+        unsigned int   i, cpu;
+        cpumask_t     *online;
         static domid_t rover = 0;
 
         ret = -EINVAL;
@@ -504,7 +507,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                | XEN_DOMCTL_CDF_hap
                | XEN_DOMCTL_CDF_s3_integrity
                | XEN_DOMCTL_CDF_oos_off
-               | XEN_DOMCTL_CDF_xs_domain)) )
+               | XEN_DOMCTL_CDF_xs_domain)) ||
+             vcpus < 1 )
             break;
 
         dom = op->domain;
@@ -551,6 +555,37 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( !ret )
             goto createdomain_fail_late;
 
+        ret = -EINVAL;
+        if ( vcpus > domain_max_vcpus(d) )
+            goto createdomain_fail_late;
+
+        ret = -ENOMEM;
+        online = cpupool_domain_cpumask(d);
+
+        BUG_ON(d->vcpu);
+        BUG_ON(d->max_vcpus);
+
+        d->vcpu = xzalloc_array(struct vcpu *, vcpus);
+        /* Install vcpu array /then/ update max_vcpus. */
+        smp_wmb();
+        if ( !d->vcpu )
+            goto createdomain_fail_late;
+        d->max_vcpus = vcpus;
+
+        cpu = cpumask_any(online);
+        for ( i = 0; i < vcpus; ++i )
+        {
+            BUG_ON(d->vcpu[i]);
+
+            if ( alloc_vcpu(d, i, cpu) == NULL )
+                goto createdomain_fail_late;
+
+            BUG_ON(!d->vcpu[i]);
+
+            cpu = cpumask_cycle(d->vcpu[i]->processor, online);
+        }
+
+        ret = 0;
         d = NULL;
         break;
 
@@ -572,79 +607,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         break;
     }
 
-    case XEN_DOMCTL_max_vcpus:
-    {
-        unsigned int i, max = op->u.max_vcpus.max, cpu;
-        cpumask_t *online;
-
-        ret = -EINVAL;
-        if ( (d == current->domain) || /* no domain_pause() */
-             (max > domain_max_vcpus(d)) )
-            break;
-
-        /* Until Xenoprof can dynamically grow its vcpu-s array... */
-        if ( d->xenoprof )
-        {
-            ret = -EAGAIN;
-            break;
-        }
-
-        /* Needed, for example, to ensure writable p.t. state is synced. */
-        domain_pause(d);
-
-        /* We cannot reduce maximum VCPUs. */
-        ret = -EINVAL;
-        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
-            goto maxvcpu_out;
-
-        /*
-         * For now don't allow increasing the vcpu count from a non-zero
-         * value: This code and all readers of d->vcpu would otherwise need
-         * to be converted to use RCU, but at present there's no tools side
-         * code path that would issue such a request.
-         */
-        ret = -EBUSY;
-        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
-            goto maxvcpu_out;
-
-        ret = -ENOMEM;
-        online = cpupool_domain_cpumask(d);
-        if ( max > d->max_vcpus )
-        {
-            struct vcpu **vcpus;
-
-            BUG_ON(d->vcpu != NULL);
-            BUG_ON(d->max_vcpus != 0);
-
-            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
-                goto maxvcpu_out;
-
-            /* Install vcpu array /then/ update max_vcpus. */
-            d->vcpu = vcpus;
-            smp_wmb();
-            d->max_vcpus = max;
-        }
-
-        for ( i = 0; i < max; i++ )
-        {
-            if ( d->vcpu[i] != NULL )
-                continue;
-
-            cpu = (i == 0) ?
-                cpumask_any(online) :
-                cpumask_cycle(d->vcpu[i-1]->processor, online);
-
-            if ( alloc_vcpu(d, i, cpu) == NULL )
-                goto maxvcpu_out;
-        }
-
-        ret = 0;
-
-    maxvcpu_out:
-        domain_unpause(d);
-        break;
-    }
-
     case XEN_DOMCTL_soft_reset:
         if ( d == current->domain ) /* no domain_pause() */
         {
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 424f0a8..09ad277 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -70,6 +70,7 @@ struct xen_domctl_createdomain {
      * Various domain limits, which impact the quantity of resources (global
      * mapping space, xenheap, etc) a guest may consume.
      */
+    uint32_t max_vcpus;
     uint32_t max_evtchn_port;
     uint32_t max_grant_frames;
     uint32_t max_maptrack_frames;
@@ -311,12 +312,6 @@ struct xen_domctl_vcpuaffinity {
 };
 
 
-/* XEN_DOMCTL_max_vcpus */
-struct xen_domctl_max_vcpus {
-    uint32_t max;           /* maximum number of vcpus */
-};
-
-
 /* XEN_DOMCTL_scheduler_op */
 /* Scheduler types. */
 /* #define XEN_SCHEDULER_SEDF  4 (Removed) */
@@ -1104,7 +1099,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setvcpucontext                12
 #define XEN_DOMCTL_getvcpucontext                13
 #define XEN_DOMCTL_getvcpuinfo                   14
-#define XEN_DOMCTL_max_vcpus                     15
+/* #define XEN_DOMCTL_max_vcpus                  15 - Moved into XEN_DOMCTL_createdomain */
 #define XEN_DOMCTL_scheduler_op                  16
 #define XEN_DOMCTL_setdomainhandle               17
 #define XEN_DOMCTL_setdebugging                  18
@@ -1183,7 +1178,6 @@ struct xen_domctl {
         struct xen_domctl_max_mem           max_mem;
         struct xen_domctl_vcpucontext       vcpucontext;
         struct xen_domctl_getvcpuinfo       getvcpuinfo;
-        struct xen_domctl_max_vcpus         max_vcpus;
         struct xen_domctl_scheduler_op      scheduler_op;
         struct xen_domctl_setdomainhandle   setdomainhandle;
         struct xen_domctl_setdebugging      setdebugging;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index cccd1c7..806faa6 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -636,9 +636,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_resumedomain:
         return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__RESUME);
 
-    case XEN_DOMCTL_max_vcpus:
-        return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__MAX_VCPUS);
-
     case XEN_DOMCTL_max_mem:
         return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDOMAINMAXMEM);
 
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 6f6e969..c76a180 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -132,8 +132,6 @@ class domain
 #  target = the new label of the domain
 # see also the domain2 relabel{from,to,self} permissions
     transition
-# XEN_DOMCTL_max_vcpus
-    max_vcpus
 # XEN_DOMCTL_destroydomain
     destroy
 # XEN_DOMCTL_setvcpuaffinity
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH 13/20] xen/evtchn: Pass max_evtchn_port into evtchn_init()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (11 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 12/20] xen/domctl: Merge max_vcpus " Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-26 14:01   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 14/20] xen/gnttab: Remove replace_grant_supported() Andrew Cooper
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

... rather than setting it up after domain_create() has completed.  This
involves constructing a default value for dom0.

No practical change in functionality.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/setup.c       | 4 +++-
 xen/arch/x86/setup.c       | 1 +
 xen/common/domain.c        | 2 +-
 xen/common/domctl.c        | 3 ---
 xen/common/event_channel.c | 4 ++--
 xen/include/xen/sched.h    | 2 +-
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index e6f8e23..f07c826 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -693,7 +693,9 @@ void __init start_xen(unsigned long boot_phys_offset,
     const char *cmdline;
     struct bootmodule *xen_bootmodule;
     struct domain *dom0;
-    struct xen_domctl_createdomain dom0_cfg = {};
+    struct xen_domctl_createdomain dom0_cfg = {
+        .max_evtchn_port = -1,
+    };
 
     dcache_line_bytes = read_dcache_line_bytes();
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index fa77bae..a82e3a1 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -673,6 +673,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     };
     struct xen_domctl_createdomain dom0_cfg = {
         .flags = XEN_DOMCTL_CDF_s3_integrity,
+        .max_evtchn_port = -1,
     };
 
     /* Critical region without IDT or TSS.  Any fault is deadly! */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index b00cc1f..946d0e1 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -360,7 +360,7 @@ struct domain *domain_create(domid_t domid,
 
         radix_tree_init(&d->pirq_tree);
 
-        if ( (err = evtchn_init(d)) != 0 )
+        if ( (err = evtchn_init(d, config->max_evtchn_port)) != 0 )
             goto fail;
         init_status |= INIT_evtchn;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 0326e43..d59fa9e 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -547,9 +547,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         op->domain = d->domain_id;
         copyback = true;
 
-        d->max_evtchn_port =
-            min_t(unsigned int, op->u.createdomain.max_evtchn_port, INT_MAX);
-
         ret = grant_table_set_limits(d, op->u.createdomain.max_grant_frames,
                                      op->u.createdomain.max_maptrack_frames);
         if ( !ret )
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index c620465..41cbbae 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1284,10 +1284,10 @@ void evtchn_check_pollers(struct domain *d, unsigned int port)
     }
 }
 
-int evtchn_init(struct domain *d)
+int evtchn_init(struct domain *d, unsigned int max_port)
 {
     evtchn_2l_init(d);
-    d->max_evtchn_port = INT_MAX;
+    d->max_evtchn_port = min_t(unsigned int, max_port, INT_MAX);
 
     d->evtchn = alloc_evtchn_bucket(d, 0);
     if ( !d->evtchn )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index f89896e..7e8a79c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -133,7 +133,7 @@ struct evtchn
 #endif
 } __attribute__((aligned(64)));
 
-int  evtchn_init(struct domain *d); /* from domain_create */
+int  evtchn_init(struct domain *d, unsigned int max_port); /* from domain_create */
 void evtchn_destroy(struct domain *d); /* from domain_kill */
 void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
 
-- 
2.1.4



* [PATCH 14/20] xen/gnttab: Remove replace_grant_supported()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (12 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 13/20] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20  3:44   ` Julien Grall
  2018-03-26 14:03   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 15/20] xen/gnttab: Export opt_max_{grant, maptrack}_frames Andrew Cooper
                   ` (5 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

It is identical on all architectures, and removing it is better overall than
fixing it up to have a proper boolean return value.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/common/grant_table.c          | 3 ---
 xen/include/asm-arm/grant_table.h | 4 ----
 xen/include/asm-x86/grant_table.h | 5 -----
 3 files changed, 12 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 1820191..93443be 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3459,9 +3459,6 @@ do_grant_table_op(
 
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
-        rc = -ENOSYS;
-        if ( unlikely(!replace_grant_supported()) )
-            goto out;
         rc = gnttab_unmap_and_replace(unmap, count);
         if ( rc > 0 )
         {
diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index d2027d2..63a2fdd 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -23,10 +23,6 @@ int replace_grant_host_mapping(unsigned long gpaddr, unsigned long mfn,
 void gnttab_mark_dirty(struct domain *d, unsigned long l);
 #define gnttab_create_status_page(d, t, i) do {} while (0)
 #define gnttab_release_host_mappings(domain) 1
-static inline int replace_grant_supported(void)
-{
-    return 1;
-}
 
 /*
  * The region used by Xen on the memory will never be mapped in DOM0
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 4ac0b9b..514eaf3 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -101,9 +101,4 @@ static inline void gnttab_clear_flag(unsigned int nr, uint16_t *st)
 #define gnttab_need_iommu_mapping(d)                \
     (!paging_mode_translate(d) && need_iommu(d))
 
-static inline int replace_grant_supported(void)
-{
-    return 1;
-}
-
 #endif /* __ASM_GRANT_TABLE_H__ */
-- 
2.1.4



* [PATCH 15/20] xen/gnttab: Export opt_max_{grant, maptrack}_frames
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (13 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 14/20] xen/gnttab: Remove replace_grant_supported() Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-26 14:17   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 16/20] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

This is to facilitate the values being passed in via domain_create(), at which
point the dom0 construction code needs to know them.

While cleaning up, drop the DEFAULT_* defines, which are only used immediately
adjacent in a context which makes it obvious that they are the defaults, and
drop the (unused) logic to allow a per-arch override of
DEFAULT_MAX_NR_GRANT_FRAMES.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/common/grant_table.c      | 26 +++++++++-----------------
 xen/include/xen/grant_table.h |  3 +++
 2 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 93443be..5596659 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -82,20 +82,11 @@ struct grant_table {
     struct grant_table_arch arch;
 };
 
-#ifndef DEFAULT_MAX_NR_GRANT_FRAMES /* to allow arch to override */
-/* Default maximum size of a grant table. [POLICY] */
-#define DEFAULT_MAX_NR_GRANT_FRAMES   64
-#endif
-
-static unsigned int __read_mostly max_grant_frames =
-                                               DEFAULT_MAX_NR_GRANT_FRAMES;
-integer_runtime_param("gnttab_max_frames", max_grant_frames);
-
-#define DEFAULT_MAX_MAPTRACK_FRAMES 1024
+unsigned int __read_mostly opt_max_grant_frames = 64;
+integer_runtime_param("gnttab_max_frames", opt_max_grant_frames);
 
-static unsigned int __read_mostly max_maptrack_frames =
-                                               DEFAULT_MAX_MAPTRACK_FRAMES;
-integer_runtime_param("gnttab_max_maptrack_frames", max_maptrack_frames);
+unsigned int __read_mostly opt_max_maptrack_frames = 1024;
+integer_runtime_param("gnttab_max_maptrack_frames", opt_max_maptrack_frames);
 
 static unsigned int __read_mostly opt_gnttab_max_version = 2;
 static bool __read_mostly opt_transitive_grants = true;
@@ -3601,7 +3592,8 @@ grant_table_create(
 
     if ( d->domain_id == 0 )
     {
-        ret = grant_table_init(d, t, gnttab_dom0_frames(), max_maptrack_frames);
+        ret = grant_table_init(d, t, gnttab_dom0_frames(),
+                               opt_max_maptrack_frames);
     }
 
     return ret;
@@ -3803,8 +3795,8 @@ int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
     struct grant_table *gt = d->grant_table;
 
     if ( grant_frames < INITIAL_NR_GRANT_FRAMES ||
-         grant_frames > max_grant_frames ||
-         maptrack_frames > max_maptrack_frames )
+         grant_frames > opt_max_grant_frames ||
+         maptrack_frames > opt_max_maptrack_frames )
         return -EINVAL;
     if ( !gt )
         return -ENOENT;
@@ -3984,7 +3976,7 @@ __initcall(gnttab_usage_init);
 
 unsigned int __init gnttab_dom0_frames(void)
 {
-    return min(max_grant_frames, gnttab_dom0_max());
+    return min(opt_max_grant_frames, gnttab_dom0_max());
 }
 
 /*
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index b3a95fd..0286ba3 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -31,6 +31,9 @@
 
 struct grant_table;
 
+extern unsigned int opt_max_grant_frames;
+extern unsigned int opt_max_maptrack_frames;
+
 /* Create/destroy per-domain grant table context. */
 int grant_table_create(
     struct domain *d);
-- 
2.1.4



* [PATCH 16/20] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (14 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 15/20] xen/gnttab: Export opt_max_{grant, maptrack}_frames Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-26 14:18   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 17/20] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

... rather than setting the limits up after domain_create() has completed.

This removes all the gnttab infrastructure for calculating the number of dom0
grant frames, opting instead to require the dom0 construction code to pass a
sane value in via the configuration.

In practice, this now means that there is never a partially constructed grant
table for a reference-able domain.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain_build.c       |  3 ++-
 xen/arch/arm/setup.c              | 12 ++++++++++++
 xen/arch/x86/setup.c              |  3 +++
 xen/common/domain.c               |  3 ++-
 xen/common/domctl.c               |  5 -----
 xen/common/grant_table.c          | 16 +++-------------
 xen/include/asm-arm/grant_table.h | 12 ------------
 xen/include/asm-x86/grant_table.h |  5 -----
 xen/include/xen/grant_table.h     |  6 ++----
 9 files changed, 24 insertions(+), 41 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9ef9030..a375de0 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2078,7 +2078,8 @@ static void __init find_gnttab_region(struct domain *d,
      * enough space for a large grant table
      */
     kinfo->gnttab_start = __pa(_stext);
-    kinfo->gnttab_size = gnttab_dom0_frames() << PAGE_SHIFT;
+    kinfo->gnttab_size = min_t(unsigned int, opt_max_grant_frames,
+                               PFN_DOWN(_etext - _stext)) << PAGE_SHIFT;
 
 #ifdef CONFIG_ARM_32
     /*
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index f07c826..c181389 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -20,6 +20,7 @@
 #include <xen/compile.h>
 #include <xen/device_tree.h>
 #include <xen/domain_page.h>
+#include <xen/grant_table.h>
 #include <xen/types.h>
 #include <xen/string.h>
 #include <xen/serial.h>
@@ -695,6 +696,17 @@ void __init start_xen(unsigned long boot_phys_offset,
     struct domain *dom0;
     struct xen_domctl_createdomain dom0_cfg = {
         .max_evtchn_port = -1,
+
+        /*
+         * The region used by Xen on the memory will never be mapped in DOM0
+         * memory layout. Therefore it can be used for the grant table.
+         *
+         * Only use the text section as it's always present and will contain
+         * enough space for a large grant table
+         */
+        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
+                                  PFN_DOWN(_etext - _stext)),
+        .max_maptrack_frames = opt_max_maptrack_frames,
     };
 
     dcache_line_bytes = read_dcache_line_bytes();
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index a82e3a1..a79a8b6 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1,6 +1,7 @@
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/err.h>
+#include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/sched-if.h>
 #include <xen/domain.h>
@@ -674,6 +675,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     struct xen_domctl_createdomain dom0_cfg = {
         .flags = XEN_DOMCTL_CDF_s3_integrity,
         .max_evtchn_port = -1,
+        .max_grant_frames = opt_max_grant_frames,
+        .max_maptrack_frames = opt_max_maptrack_frames,
     };
 
     /* Critical region without IDT or TSS.  Any fault is deadly! */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 946d0e1..4539c39 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -364,7 +364,8 @@ struct domain *domain_create(domid_t domid,
             goto fail;
         init_status |= INIT_evtchn;
 
-        if ( (err = grant_table_create(d)) != 0 )
+        if ( (err = grant_table_create(d, config->max_grant_frames,
+                                       config->max_maptrack_frames)) != 0 )
             goto fail;
         init_status |= INIT_gnttab;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index d59fa9e..2f9d993 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -547,11 +547,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         op->domain = d->domain_id;
         copyback = true;
 
-        ret = grant_table_set_limits(d, op->u.createdomain.max_grant_frames,
-                                     op->u.createdomain.max_maptrack_frames);
-        if ( !ret )
-            goto createdomain_fail_late;
-
         ret = -EINVAL;
         if ( vcpus > domain_max_vcpus(d) )
             goto createdomain_fail_late;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5596659..cf47c78 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3572,9 +3572,8 @@ do_grant_table_op(
 #include "compat/grant_table.c"
 #endif
 
-int
-grant_table_create(
-    struct domain *d)
+int grant_table_create(struct domain *d, unsigned int max_grant_frames,
+                       unsigned int max_maptrack_frames)
 {
     struct grant_table *t;
     int ret = 0;
@@ -3590,11 +3589,7 @@ grant_table_create(
     t->domain = d;
     d->grant_table = t;
 
-    if ( d->domain_id == 0 )
-    {
-        ret = grant_table_init(d, t, gnttab_dom0_frames(),
-                               opt_max_maptrack_frames);
-    }
+    ret = grant_table_set_limits(d, max_grant_frames, max_maptrack_frames);
 
     return ret;
 }
@@ -3974,11 +3969,6 @@ static int __init gnttab_usage_init(void)
 }
 __initcall(gnttab_usage_init);
 
-unsigned int __init gnttab_dom0_frames(void)
-{
-    return min(opt_max_grant_frames, gnttab_dom0_max());
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index 63a2fdd..053680a 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -24,18 +24,6 @@ void gnttab_mark_dirty(struct domain *d, unsigned long l);
 #define gnttab_create_status_page(d, t, i) do {} while (0)
 #define gnttab_release_host_mappings(domain) 1
 
-/*
- * The region used by Xen on the memory will never be mapped in DOM0
- * memory layout. Therefore it can be used for the grant table.
- *
- * Only use the text section as it's always present and will contain
- * enough space for a large grant table
- */
-static inline unsigned int gnttab_dom0_max(void)
-{
-    return PFN_DOWN(_etext - _stext);
-}
-
 #define gnttab_init_arch(gt)                                             \
 ({                                                                       \
     unsigned int ngf_ = (gt)->max_grant_frames;                          \
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 514eaf3..dc4476c 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -39,11 +39,6 @@ static inline int replace_grant_host_mapping(uint64_t addr, unsigned long frame,
     return replace_grant_pv_mapping(addr, frame, new_addr, flags);
 }
 
-static inline unsigned int gnttab_dom0_max(void)
-{
-    return UINT_MAX;
-}
-
 #define gnttab_init_arch(gt) 0
 #define gnttab_destroy_arch(gt) do {} while ( 0 )
 #define gnttab_set_frame_gfn(gt, st, idx, gfn) do {} while ( 0 )
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 0286ba3..ebd5024 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -35,8 +35,8 @@ extern unsigned int opt_max_grant_frames;
 extern unsigned int opt_max_maptrack_frames;
 
 /* Create/destroy per-domain grant table context. */
-int grant_table_create(
-    struct domain *d);
+int grant_table_create(struct domain *d, unsigned int max_grant_frames,
+                       unsigned int max_maptrack_frames);
 void grant_table_destroy(
     struct domain *d);
 void grant_table_init_vcpu(struct vcpu *v);
@@ -59,6 +59,4 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
 int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
                      mfn_t *mfn);
 
-unsigned int gnttab_dom0_frames(void);
-
 #endif /* __XEN_GRANT_TABLE_H__ */
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH 17/20] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (15 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 16/20] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-26 14:29   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

Now that the max_{grant,maptrack}_frames are specified from the very beginning
of grant table construction, the various initialisation functions can be
folded together and simplified as a result.

Leave grant_table_init() as the public interface, which is more consistent
with other subsystems.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/common/domain.c           |  4 +--
 xen/common/grant_table.c      | 71 +++++++++++++------------------------------
 xen/include/xen/grant_table.h |  6 ++--
 3 files changed, 25 insertions(+), 56 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4539c39..c86cf47 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -364,8 +364,8 @@ struct domain *domain_create(domid_t domid,
             goto fail;
         init_status |= INIT_evtchn;
 
-        if ( (err = grant_table_create(d, config->max_grant_frames,
-                                       config->max_maptrack_frames)) != 0 )
+        if ( (err = grant_table_init(d, config->max_grant_frames,
+                                     config->max_maptrack_frames)) != 0 )
             goto fail;
         init_status |= INIT_gnttab;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index cf47c78..cb6e875 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1808,22 +1808,28 @@ gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
     return -ENOMEM;
 }
 
-static int
-grant_table_init(struct domain *d, struct grant_table *gt,
-                 unsigned int grant_frames, unsigned int maptrack_frames)
+int grant_table_init(struct domain *d, unsigned int max_grant_frames,
+                     unsigned int max_maptrack_frames)
 {
+    struct grant_table *gt;
     int ret = -ENOMEM;
 
-    grant_write_lock(gt);
+    if ( max_grant_frames < INITIAL_NR_GRANT_FRAMES ||
+         max_grant_frames > opt_max_grant_frames ||
+         max_maptrack_frames > opt_max_maptrack_frames )
+        return -EINVAL;
 
-    if ( gt->active )
-    {
-        ret = -EBUSY;
-        goto out_no_cleanup;
-    }
+    if ( (gt = xzalloc(struct grant_table)) == NULL )
+        return -ENOMEM;
+
+    /* Simple stuff. */
+    percpu_rwlock_resource_init(&gt->lock, grant_rwlock);
+    spin_lock_init(&gt->maptrack_lock);
+
+    grant_write_lock(gt);
 
-    gt->max_grant_frames = grant_frames;
-    gt->max_maptrack_frames = maptrack_frames;
+    gt->max_grant_frames = max_grant_frames;
+    gt->max_maptrack_frames = max_maptrack_frames;
 
     /* Active grant table. */
     gt->active = xzalloc_array(struct active_grant_entry *,
@@ -1854,6 +1860,10 @@ grant_table_init(struct domain *d, struct grant_table *gt,
     if ( ret )
         goto out;
 
+    /* Okay, install the structure. */
+    gt->domain = d;
+    d->grant_table = gt;
+
     /* gnttab_grow_table() allocates a min number of frames, so 0 is okay. */
     ret = gnttab_grow_table(d, 0);
 
@@ -1871,7 +1881,6 @@ grant_table_init(struct domain *d, struct grant_table *gt,
         gt->active = NULL;
     }
 
- out_no_cleanup:
     grant_write_unlock(gt);
 
     return ret;
@@ -3572,28 +3581,6 @@ do_grant_table_op(
 #include "compat/grant_table.c"
 #endif
 
-int grant_table_create(struct domain *d, unsigned int max_grant_frames,
-                       unsigned int max_maptrack_frames)
-{
-    struct grant_table *t;
-    int ret = 0;
-
-    if ( (t = xzalloc(struct grant_table)) == NULL )
-        return -ENOMEM;
-
-    /* Simple stuff. */
-    percpu_rwlock_resource_init(&t->lock, grant_rwlock);
-    spin_lock_init(&t->maptrack_lock);
-
-    /* Okay, install the structure. */
-    t->domain = d;
-    d->grant_table = t;
-
-    ret = grant_table_set_limits(d, max_grant_frames, max_maptrack_frames);
-
-    return ret;
-}
-
 void
 gnttab_release_mappings(
     struct domain *d)
@@ -3784,22 +3771,6 @@ void grant_table_init_vcpu(struct vcpu *v)
     v->maptrack_tail = MAPTRACK_TAIL;
 }
 
-int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
-                           unsigned int maptrack_frames)
-{
-    struct grant_table *gt = d->grant_table;
-
-    if ( grant_frames < INITIAL_NR_GRANT_FRAMES ||
-         grant_frames > opt_max_grant_frames ||
-         maptrack_frames > opt_max_maptrack_frames )
-        return -EINVAL;
-    if ( !gt )
-        return -ENOENT;
-
-    /* Set limits. */
-    return grant_table_init(d, gt, grant_frames, maptrack_frames);
-}
-
 #ifdef CONFIG_HAS_MEM_SHARING
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status)
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index ebd5024..424afe0 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -35,13 +35,11 @@ extern unsigned int opt_max_grant_frames;
 extern unsigned int opt_max_maptrack_frames;
 
 /* Create/destroy per-domain grant table context. */
-int grant_table_create(struct domain *d, unsigned int max_grant_frames,
-                       unsigned int max_maptrack_frames);
+int grant_table_init(struct domain *d, unsigned int max_grant_frames,
+                     unsigned int max_maptrack_frames);
 void grant_table_destroy(
     struct domain *d);
 void grant_table_init_vcpu(struct vcpu *v);
-int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
-                           unsigned int maptrack_frames);
 
 /*
  * Check if domain has active grants and log first 10 of them.
-- 
2.1.4




* [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (16 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 17/20] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20  3:54   ` Julien Grall
  2018-03-26 15:19   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 19/20] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
  2018-03-19 19:13 ` [PATCH 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create() Andrew Cooper
  19 siblings, 2 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

Make dom0_max_vcpus() a common interface, and implement it on ARM by splitting
the existing alloc_dom0_vcpu0() function in half.

As domain_create() doesn't yet set up the vcpu array, the max value is also
passed into alloc_dom0_vcpu0().  This is temporary for bisectability and
removed in the following patch.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain_build.c | 12 +++++++++---
 xen/arch/arm/setup.c        |  3 ++-
 xen/arch/x86/dom0_build.c   |  5 ++---
 xen/arch/x86/setup.c        |  3 ++-
 xen/include/asm-x86/setup.h |  2 --
 xen/include/xen/domain.h    |  5 ++++-
 6 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index a375de0..2e145d9 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -66,17 +66,23 @@ struct map_range_data
  */
 #define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry))
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
+unsigned int __init dom0_max_vcpus(void)
 {
     if ( opt_dom0_max_vcpus == 0 )
         opt_dom0_max_vcpus = num_online_cpus();
     if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
         opt_dom0_max_vcpus = MAX_VIRT_CPUS;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
+    return opt_dom0_max_vcpus;
+}
+
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
+                                     unsigned int max_vcpus)
+{
+    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
     if ( !dom0->vcpu )
         return NULL;
-    dom0->max_vcpus = opt_dom0_max_vcpus;
+    dom0->max_vcpus = max_vcpus;
 
     return alloc_vcpu(dom0, 0, 0);
 }
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index c181389..be24f20 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -856,9 +856,10 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* The vGIC for DOM0 is exactly emulating the hardware GIC */
     dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
     dom0_cfg.arch.nr_spis = gic_number_lines() - 32;
+    dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     dom0 = domain_create(0, &dom0_cfg);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
             panic("Error creating domain 0");
 
     dom0->is_privileged = 1;
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 555660b..e82bc48 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -200,10 +200,9 @@ unsigned int __init dom0_max_vcpus(void)
     return max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
+                                     unsigned int max_vcpus)
 {
-    unsigned int max_vcpus = dom0_max_vcpus();
-
     dom0->node_affinity = dom0_nodes;
     dom0->auto_node_affinity = !dom0_nr_pxms;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index a79a8b6..b0e85b0 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1645,10 +1645,11 @@ void __init noreturn __start_xen(unsigned long mbi_p)
         dom0_cfg.arch.emulation_flags |=
             XEN_X86_EMU_LAPIC | XEN_X86_EMU_IOAPIC;
     }
+    dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     /* Create initial domain 0. */
     dom0 = domain_create(get_initial_domain_id(), &dom0_cfg);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
         panic("Error creating domain 0");
 
     if ( !pv_shim )
diff --git a/xen/include/asm-x86/setup.h b/xen/include/asm-x86/setup.h
index 19232af..751c245 100644
--- a/xen/include/asm-x86/setup.h
+++ b/xen/include/asm-x86/setup.h
@@ -51,8 +51,6 @@ unsigned long initial_images_nrpages(nodeid_t node);
 void discard_initial_images(void);
 void *bootstrap_map(const module_t *mod);
 
-unsigned int dom0_max_vcpus(void);
-
 int xen_in_range(unsigned long mfn);
 
 void microcode_grab_module(
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1929fa0..dc022b4 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -15,7 +15,10 @@ typedef union {
 
 struct vcpu *alloc_vcpu(
     struct domain *d, unsigned int vcpu_id, unsigned int cpu_id);
-struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
+
+unsigned int dom0_max_vcpus(void);
+struct vcpu *alloc_dom0_vcpu0(struct domain *dom0, unsigned int max_vcpus);
+
 int vcpu_reset(struct vcpu *);
 int vcpu_up(struct vcpu *v);
 
-- 
2.1.4




* [PATCH 19/20] xen/domain: Call arch_domain_create() as early as possible in domain_create()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (17 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-26 15:58   ` Jan Beulich
  2018-03-19 19:13 ` [PATCH 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create() Andrew Cooper
  19 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

This is in preparation for setting up d->max_vcpus and d->vcpu[] in
arch_domain_create(), and allow later parts of domain construction to have
access to the values.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/common/domain.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index c86cf47..94a78d6 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -320,6 +320,23 @@ struct domain *domain_create(domid_t domid,
         else
             d->guest_type = guest_type_pv;
 
+        if ( !is_hardware_domain(d) )
+            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
+        else
+            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
+                                           : arch_hwdom_irqs(domid);
+        if ( d->nr_pirqs > nr_irqs )
+            d->nr_pirqs = nr_irqs;
+
+        radix_tree_init(&d->pirq_tree);
+    }
+
+    if ( (err = arch_domain_create(d, config)) != 0 )
+        goto fail;
+    init_status |= INIT_arch;
+
+    if ( !is_idle_domain(d) )
+    {
         watchdog_domain_init(d);
         init_status |= INIT_watchdog;
 
@@ -350,16 +367,6 @@ struct domain *domain_create(domid_t domid,
         d->controller_pause_count = 1;
         atomic_inc(&d->pause_count);
 
-        if ( !is_hardware_domain(d) )
-            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
-        else
-            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
-                                           : arch_hwdom_irqs(domid);
-        if ( d->nr_pirqs > nr_irqs )
-            d->nr_pirqs = nr_irqs;
-
-        radix_tree_init(&d->pirq_tree);
-
         if ( (err = evtchn_init(d, config->max_evtchn_port)) != 0 )
             goto fail;
         init_status |= INIT_evtchn;
@@ -374,14 +381,7 @@ struct domain *domain_create(domid_t domid,
         d->pbuf = xzalloc_array(char, DOMAIN_PBUF_SIZE);
         if ( !d->pbuf )
             goto fail;
-    }
-
-    if ( (err = arch_domain_create(d, config)) != 0 )
-        goto fail;
-    init_status |= INIT_arch;
 
-    if ( !is_idle_domain(d) )
-    {
         if ( (err = sched_init_domain(d, 0)) != 0 )
             goto fail;
 
-- 
2.1.4




* [PATCH 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create()
  2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
                   ` (18 preceding siblings ...)
  2018-03-19 19:13 ` [PATCH 19/20] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
@ 2018-03-19 19:13 ` Andrew Cooper
  2018-03-20  4:17   ` Julien Grall
  2018-03-20 15:28   ` [PATCH v1.5 " Andrew Cooper
  19 siblings, 2 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-19 19:13 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich

On the ARM side, audit config->max_vcpus after the vgic has been initialised,
at which point we have a real upper bound to test against.  This allows for
the removal of the vgic_max_vcpus() juggling to cope with the call from
evtchn_init() before the vgic settings are known.

For each arch's dom0, drop the temporary max_vcpus parameter, and the
allocation of dom0->vcpu.

With arch_domain_create() now in charge of auditing config->max_vcpus, the
per-arch domain_max_vcpus() can be dropped.  Finally, evtchn_init() can be
updated to allocate a poll mask suitable for the domain, rather than suitable
for the worst case setting.

From this point on, d->max_vcpus and d->vcpu[] are valid for any domain which
can be looked up by ID.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain.c        | 13 +++++++++++++
 xen/arch/arm/domain_build.c  |  8 +-------
 xen/arch/arm/setup.c         |  2 +-
 xen/arch/arm/vgic.c          | 14 --------------
 xen/arch/x86/dom0_build.c    |  8 +-------
 xen/arch/x86/domain.c        | 11 +++++++++++
 xen/arch/x86/setup.c         |  2 +-
 xen/common/domctl.c          | 14 --------------
 xen/common/event_channel.c   |  4 ++--
 xen/include/asm-arm/domain.h |  6 ------
 xen/include/asm-arm/vgic.h   |  2 --
 xen/include/asm-x86/domain.h |  2 --
 xen/include/xen/domain.h     |  2 +-
 13 files changed, 31 insertions(+), 57 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 0931ce6..ae40971 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -646,6 +646,19 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
         goto fail;
 
+    rc = -EINVAL;
+    /* On ARM, the number of VCPUs is limited by the type of GIC emulated. */
+    if ( (config->max_vcpus < 1) ||
+         (config->max_vcpus > min_t(unsigned int, MAX_VIRT_CPUS,
+                                    d->arch.vgic.handler->max_vcpus)) )
+        goto fail;
+
+    rc = -ENOMEM;
+    d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
+    if ( !d->vcpu )
+        goto fail;
+    d->max_vcpus = config->max_vcpus;
+
     update_domain_wallclock_time(d);
 
     /*
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2e145d9..b13c47e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -76,14 +76,8 @@ unsigned int __init dom0_max_vcpus(void)
     return opt_dom0_max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return alloc_vcpu(dom0, 0, 0);
 }
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index be24f20..0ada4d5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -859,7 +859,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     dom0 = domain_create(0, &dom0_cfg);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
             panic("Error creating domain 0");
 
     dom0->is_privileged = 1;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index eb09d9c..dc89b81 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -672,20 +672,6 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
     clear_bit(virq, d->arch.vgic.allocated_irqs);
 }
 
-unsigned int vgic_max_vcpus(const struct domain *d)
-{
-    /*
-     * Since evtchn_init would call domain_max_vcpus for poll_mask
-     * allocation when the vgic_ops haven't been initialised yet,
-     * we return MAX_VIRT_CPUS if d->arch.vgic.handler is null.
-     */
-    if ( !d->arch.vgic.handler )
-        return MAX_VIRT_CPUS;
-    else
-        return min_t(unsigned int, MAX_VIRT_CPUS,
-                     d->arch.vgic.handler->max_vcpus);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index e82bc48..4c25789 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -200,17 +200,11 @@ unsigned int __init dom0_max_vcpus(void)
     return max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
     dom0->node_affinity = dom0_nodes;
     dom0->auto_node_affinity = !dom0_nr_pxms;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return dom0_setup_vcpu(dom0, 0,
                            cpumask_last(&dom0_cpus) /* so it wraps around to first pcpu */);
 }
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index c4c34b4..90fe50a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -504,6 +504,17 @@ int arch_domain_create(struct domain *d,
     HYPERVISOR_COMPAT_VIRT_START(d) =
         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
 
+    if ( (config->max_vcpus < 1) ||
+         (config->max_vcpus >
+          (is_hvm_domain(d) ? HVM_MAX_VCPUS : MAX_VIRT_CPUS)) )
+        return -EINVAL;
+
+    rc = -ENOMEM;
+    d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
+    if ( !d->vcpu )
+        goto fail;
+    d->max_vcpus = config->max_vcpus;
+
     /* Need to determine if HAP is enabled before initialising paging */
     if ( is_hvm_domain(d) )
         d->arch.hvm_domain.hap_enabled =
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b0e85b0..07e9893 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1649,7 +1649,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     /* Create initial domain 0. */
     dom0 = domain_create(get_initial_domain_id(), &dom0_cfg);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
         panic("Error creating domain 0");
 
     if ( !pv_shim )
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 2f9d993..1dc19ea 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -547,23 +547,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         op->domain = d->domain_id;
         copyback = true;
 
-        ret = -EINVAL;
-        if ( vcpus > domain_max_vcpus(d) )
-            goto createdomain_fail_late;
-
         ret = -ENOMEM;
         online = cpupool_domain_cpumask(d);
 
-        BUG_ON(d->vcpu);
-        BUG_ON(d->max_vcpus);
-
-        d->vcpu = xzalloc_array(struct vcpu *, vcpus);
-        /* Install vcpu array /then/ update max_vcpus. */
-        smp_wmb();
-        if ( !d->vcpu )
-            goto createdomain_fail_late;
-        d->max_vcpus = vcpus;
-
         cpu = cpumask_any(online);
         for ( i = 0; i < vcpus; ++i )
         {
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 41cbbae..2e6f84b 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1303,8 +1303,8 @@ int evtchn_init(struct domain *d, unsigned int max_port)
     evtchn_from_port(d, 0)->state = ECS_RESERVED;
 
 #if MAX_VIRT_CPUS > BITS_PER_LONG
-    d->poll_mask = xzalloc_array(unsigned long,
-                                 BITS_TO_LONGS(domain_max_vcpus(d)));
+    BUG_ON(d->max_vcpus == 0);
+    d->poll_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(d->max_vcpus));
     if ( !d->poll_mask )
     {
         free_evtchn_bucket(d, d->evtchn);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7ba6528..2273b1b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -208,12 +208,6 @@ void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
 void vcpu_switch_to_aarch64_mode(struct vcpu *);
 
-/* On ARM, the number of VCPUs is limited by the type of GIC emulated. */
-static inline unsigned int domain_max_vcpus(const struct domain *d)
-{
-    return vgic_max_vcpus(d);
-}
-
 /*
  * Due to the restriction of GICv3, the number of vCPUs in AFF0 is
  * limited to 16, thus only the first 4 bits of AFF0 are legal. We will
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 0787ba9..6357916 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -348,8 +348,6 @@ extern void vgic_clear_pending_irqs(struct vcpu *v);
 
 extern bool vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
 
-unsigned int vgic_max_vcpus(const struct domain *d);
-
 void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr_t csize,
                       paddr_t vbase, uint32_t aliased_offset);
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 47aadc2..9a21e0f 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -624,8 +624,6 @@ unsigned long pv_guest_cr4_fixup(const struct vcpu *, unsigned long guest_cr4);
              X86_CR4_OSXSAVE | X86_CR4_SMEP |               \
              X86_CR4_FSGSBASE | X86_CR4_SMAP))
 
-#define domain_max_vcpus(d) (is_hvm_domain(d) ? HVM_MAX_VCPUS : MAX_VIRT_CPUS)
-
 static inline struct vcpu_guest_context *alloc_vcpu_guest_context(void)
 {
     return vmalloc(sizeof(struct vcpu_guest_context));
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index dc022b4..3dcec06 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -17,7 +17,7 @@ struct vcpu *alloc_vcpu(
     struct domain *d, unsigned int vcpu_id, unsigned int cpu_id);
 
 unsigned int dom0_max_vcpus(void);
-struct vcpu *alloc_dom0_vcpu0(struct domain *dom0, unsigned int max_vcpus);
+struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
 int vcpu_up(struct vcpu *v);
-- 
2.1.4




* Re: [PATCH 11/20] xen/domctl: Merge set_gnttab_limits into createdomain
  2018-03-19 19:13 ` [PATCH 11/20] xen/domctl: Merge set_gnttab_limits " Andrew Cooper
@ 2018-03-19 21:43   ` Christian Lindig
  2018-03-20 10:11     ` Andrew Cooper
  2018-03-20 19:27   ` Daniel De Graaf
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 65+ messages in thread
From: Christian Lindig @ 2018-03-19 21:43 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Marek Marczykowski-Górecki, Wei Liu, Jonathan Ludlam,
	Rob Hoes, Xen-devel, Jan Beulich, David Scott, Ian Jackson,
	Daniel De Graaf



> On 19. Mar 2018, at 19:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> +	max_grant_frames: int32;
> +	max_maptrack_frames: int32;

As part of:

> +type domctl_create_config =
> +{
> +       ssidref: int32;
> +       handle: string;
> +       flags: domain_create_flag list;
> +       max_vcpus: int32;
> +       max_evtchn_port: int32;
> +       max_grant_frames: int32;
> +       max_maptrack_frames: int32;
> +       arch: arch_domainconfig;
> +}

This is a minor point: in OCaml, values of type int32 and int64 are represented as pointers to a memory block containing the value. This is unlike an int, which is stored directly as part of the memory block that represents the record value. Because OCaml uses one bit as a tag, the ranges of int and int32 differ (on a 32-bit system). Most of the time the range is large enough, so people use int rather than int32 or int64 for that reason. I would expect that an int is large enough, especially on a 64-bit system. However, if int32 makes it easier to align this with the C code, this is fine, especially because not many values of domctl_create_config are expected.
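The representation point above can be sketched with a tiny illustration (hypothetical, not part of the series; the record and field names are invented):

```ocaml
(* Illustrates the representation difference: a record field of type
   [int] is stored immediately in the record (one bit reserved for the
   tag, so a 63-bit range on 64-bit builds), while an [int32] field is
   a pointer to a separately boxed value. *)
type with_int   = { vcpus_i  : int }    (* immediate, no extra allocation *)
type with_int32 = { vcpus_32 : int32 }  (* boxed: the record holds a pointer *)

let () =
  (* On a 64-bit build, [int] comfortably covers the full int32 range,
     which is why plain [int] is usually sufficient. *)
  assert (Int32.to_int Int32.max_int <= max_int)
```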

— Christian



* Re: [PATCH 05/20] arm/boot: Mark construct_dom0() as __init
  2018-03-19 19:13 ` [PATCH 05/20] arm/boot: Mark construct_dom0() as __init Andrew Cooper
@ 2018-03-20  3:40   ` Julien Grall
  2018-03-20  8:31     ` Julien Grall
  0 siblings, 1 reply; 65+ messages in thread
From: Julien Grall @ 2018-03-20  3:40 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini

Hi Andrew,

On 03/19/2018 07:13 PM, Andrew Cooper wrote:
> Its sole caller, start_xen(), is __init.

Actually, most of that file could benefit from __init, as it is only used
for domain building.

In any case, this patch is already a good start. So:

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> ---
>   xen/arch/arm/domain_build.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 28ee876..9ef9030 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2094,7 +2094,7 @@ static void __init find_gnttab_region(struct domain *d,
>              kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
>   }
>   
> -int construct_dom0(struct domain *d)
> +int __init construct_dom0(struct domain *d)
>   {
>       struct kernel_info kinfo = {};
>       struct vcpu *saved_current;
> 

-- 
Julien Grall


* Re: [PATCH 14/20] xen/gnttab: Remove replace_grant_supported()
  2018-03-19 19:13 ` [PATCH 14/20] xen/gnttab: Remove replace_grant_supported() Andrew Cooper
@ 2018-03-20  3:44   ` Julien Grall
  2018-03-26 14:03   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Julien Grall @ 2018-03-20  3:44 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini, Jan Beulich

Hi Andrew,

On 03/19/2018 07:13 PM, Andrew Cooper wrote:
> It is identical on all architectures, and removing it is better overall than
> fixing it up to have a proper boolean return value.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> ---
>   xen/common/grant_table.c          | 3 ---
>   xen/include/asm-arm/grant_table.h | 4 ----
>   xen/include/asm-x86/grant_table.h | 5 -----
>   3 files changed, 12 deletions(-)
> 
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 1820191..93443be 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -3459,9 +3459,6 @@ do_grant_table_op(
>   
>           if ( unlikely(!guest_handle_okay(unmap, count)) )
>               goto out;
> -        rc = -ENOSYS;
> -        if ( unlikely(!replace_grant_supported()) )
> -            goto out;
>           rc = gnttab_unmap_and_replace(unmap, count);
>           if ( rc > 0 )
>           {
> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
> index d2027d2..63a2fdd 100644
> --- a/xen/include/asm-arm/grant_table.h
> +++ b/xen/include/asm-arm/grant_table.h
> @@ -23,10 +23,6 @@ int replace_grant_host_mapping(unsigned long gpaddr, unsigned long mfn,
>   void gnttab_mark_dirty(struct domain *d, unsigned long l);
>   #define gnttab_create_status_page(d, t, i) do {} while (0)
>   #define gnttab_release_host_mappings(domain) 1
> -static inline int replace_grant_supported(void)
> -{
> -    return 1;
> -}
>   
>   /*
>    * The region used by Xen on the memory will never be mapped in DOM0
> diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
> index 4ac0b9b..514eaf3 100644
> --- a/xen/include/asm-x86/grant_table.h
> +++ b/xen/include/asm-x86/grant_table.h
> @@ -101,9 +101,4 @@ static inline void gnttab_clear_flag(unsigned int nr, uint16_t *st)
>   #define gnttab_need_iommu_mapping(d)                \
>       (!paging_mode_translate(d) && need_iommu(d))
>   
> -static inline int replace_grant_supported(void)
> -{
> -    return 1;
> -}
> -
>   #endif /* __ASM_GRANT_TABLE_H__ */
> 

-- 
Julien Grall


* Re: [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  2018-03-19 19:13 ` [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
@ 2018-03-20  3:54   ` Julien Grall
  2018-03-26 15:19   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Julien Grall @ 2018-03-20  3:54 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini, Jan Beulich

Hi Andrew,

On 03/19/2018 07:13 PM, Andrew Cooper wrote:
> Make dom0_max_vcpus() a common interface, and implement it on ARM by splitting
> the existing alloc_dom0_vcpu0() function in half.
> 
> As domain_create() doesn't yet set up the vcpu array, the max value is also
> passed into alloc_dom0_vcpu0().  This is temporary for bisectibility and
> removed in the following patch.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create()
  2018-03-19 19:13 ` [PATCH 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create() Andrew Cooper
@ 2018-03-20  4:17   ` Julien Grall
  2018-03-20 15:28   ` [PATCH v1.5 " Andrew Cooper
  1 sibling, 0 replies; 65+ messages in thread
From: Julien Grall @ 2018-03-20  4:17 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Andre Przywara, Stefano Stabellini, Jan Beulich

(+ Andre)

On 03/19/2018 07:13 PM, Andrew Cooper wrote:
> On the ARM side, audit config->max_vcpus after the vgic has been initialised,
> at which point we have a real upper bound to test against.  This allows for
> the removal of the vgic_max_vcpus() juggling to cope with the call from
> evtchn_init() before the vgic settings are known.

I would rather keep vgic_max_vcpus() around. We are introducing a new 
vGIC in Xen that will cohabit with the current vGIC for a few releases. 
See [1].

So a better solution is to rework vgic_max_vcpus() to drop the NULL check.

I would also be fine with calling vgic_max_vcpus() directly in
arch_domain_create() and therefore dropping domain_max_vcpus().

Cheers,

[1] https://lists.xen.org/archives/html/xen-devel/2018-03/msg01831.html

-- 
Julien Grall


* Re: [PATCH 05/20] arm/boot: Mark construct_dom0() as __init
  2018-03-20  3:40   ` Julien Grall
@ 2018-03-20  8:31     ` Julien Grall
  0 siblings, 0 replies; 65+ messages in thread
From: Julien Grall @ 2018-03-20  8:31 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini



On 03/20/2018 03:40 AM, Julien Grall wrote:
> Hi Andrew,
> 
> On 03/19/2018 07:13 PM, Andrew Cooper wrote:
>> Its sole caller, start_xen(), is __init.
> 
> Actually, most of that file could benefit from __init, as it is only used
> for domain building.
> 
> In any case, this patch is already a good start. So:
> 
> Acked-by: Julien Grall <julien.grall@arm.com>

Committed.

Cheers,

> 
> Cheers,
> 
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/domain_build.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 28ee876..9ef9030 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -2094,7 +2094,7 @@ static void __init find_gnttab_region(struct 
>> domain *d,
>>              kinfo->gnttab_start, kinfo->gnttab_start + 
>> kinfo->gnttab_size);
>>   }
>> -int construct_dom0(struct domain *d)
>> +int __init construct_dom0(struct domain *d)
>>   {
>>       struct kernel_info kinfo = {};
>>       struct vcpu *saved_current;
>>
> 

-- 
Julien Grall


* Re: [PATCH 11/20] xen/domctl: Merge set_gnttab_limits into createdomain
  2018-03-19 21:43   ` Christian Lindig
@ 2018-03-20 10:11     ` Andrew Cooper
  2018-03-20 19:42       ` Christian Lindig
  0 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-20 10:11 UTC (permalink / raw)
  To: Christian Lindig
  Cc: Marek Marczykowski-Górecki, Wei Liu, Jonathan Ludlam,
	Rob Hoes, Xen-devel, Jan Beulich, David Scott, Ian Jackson,
	Daniel De Graaf

On 19/03/2018 21:43, Christian Lindig wrote:
>
>> On 19. Mar 2018, at 19:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>
>> +	max_grant_frames: int32;
>> +	max_maptrack_frames: int32;
> As part of:
>
>> +type domctl_create_config =
>> +{
>> +       ssidref: int32;
>> +       handle: string;
>> +       flags: domain_create_flag list;
>> +       max_vcpus: int32;
>> +       max_evtchn_port: int32;
>> +       max_grant_frames: int32;
>> +       max_maptrack_frames: int32;
>> +       arch: arch_domainconfig;
>> +}
> This is a minor point: in OCaml, values of type int32 and int64 are represented as pointers to a memory block containing the value. This is unlike an int, which is represented simply as part of a memory block that represents the record value. Because OCaml uses one bit as a tag, the range of int and int32 (on a 32-bit system) is different. Most of the time the range is large enough and people use int rather than int32 or int64 for that reason. I would expect that an int is large enough, especially on a 64-bit system. However, if int32 makes it easier to align this with the C code, this is fine, especially because there are not many values of domctl_create_config to be expected.

I'm aware of int32/64 being boxed objects, and did consider whether they
were the sensible type to use here.  I sided with the ssidref type for
simplicity.

That said, while ssidref might plausibly need a full 32 bits of range
(and even then, I'm not entirely sure, but it is an opaque handle at the
end of the day), none of the max_* fields do.  vcpus is currently capped
at 128, grant frames at 64, maptrack frames at 1024 (both configurable
on the Xen command line).  evtchn_port is effectively capped at the ABI
limit, which is 1024/4096 or 2^27 depending on the guest's setup.

I'll swap all the max_* fields to being a plain int.

~Andrew


* [PATCH v1.5 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create()
  2018-03-19 19:13 ` [PATCH 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create() Andrew Cooper
  2018-03-20  4:17   ` Julien Grall
@ 2018-03-20 15:28   ` Andrew Cooper
  1 sibling, 0 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-20 15:28 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andre Przywara, Julien Grall, Stefano Stabellini, Jan Beulich,
	Andrew Cooper

On the ARM side, audit config->max_vcpus after the vgic has been initialised,
at which point we have a real upper bound to test against.  This allows for
the removal of the vgic_max_vcpus() juggling to cope with the call from
evtchn_init() before the vgic settings are known.

For each arch's dom0, drop the temporary max_vcpus parameter, and the allocation
of dom0->vcpu.

With arch_domain_create() now in charge of auditing config->max_vcpus, the
per-arch domain_max_vcpus() can be dropped.  Finally, evtchn_init() can be
updated to allocate a poll mask suitable for the domain, rather than suitable
for the worst case setting.

From this point on, d->max_vcpus and d->vcpu[] are valid for any domain which
can be looked up by ID.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>

v2:
 * Retain vgic_max_vcpus() on the ARM side, but remove the NULL special case.
---
 xen/arch/arm/domain.c        | 11 +++++++++++
 xen/arch/arm/domain_build.c  |  8 +-------
 xen/arch/arm/setup.c         |  2 +-
 xen/arch/arm/vgic.c          | 11 +----------
 xen/arch/x86/dom0_build.c    |  8 +-------
 xen/arch/x86/domain.c        | 11 +++++++++++
 xen/arch/x86/setup.c         |  2 +-
 xen/common/domctl.c          | 14 --------------
 xen/common/event_channel.c   |  4 ++--
 xen/include/asm-arm/domain.h |  6 ------
 xen/include/asm-x86/domain.h |  2 --
 xen/include/xen/domain.h     |  2 +-
 12 files changed, 30 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 57a2b8b..7abe766 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -646,6 +646,17 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
         goto fail;
 
+    rc = -EINVAL;
+    /* On ARM, the number of VCPUs is limited by the type of GIC emulated. */
+    if ( (config->max_vcpus < 1) || (config->max_vcpus > vgic_max_vcpus(d)) )
+        goto fail;
+
+    rc = -ENOMEM;
+    d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
+    if ( !d->vcpu )
+        goto fail;
+    d->max_vcpus = config->max_vcpus;
+
     update_domain_wallclock_time(d);
 
     /*
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2e145d9..b13c47e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -76,14 +76,8 @@ unsigned int __init dom0_max_vcpus(void)
     return opt_dom0_max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return alloc_vcpu(dom0, 0, 0);
 }
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index be24f20..0ada4d5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -859,7 +859,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     dom0 = domain_create(0, &dom0_cfg);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
             panic("Error creating domain 0");
 
     dom0->is_privileged = 1;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3fafdd0..4d2b00c 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -669,16 +669,7 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
 
 unsigned int vgic_max_vcpus(const struct domain *d)
 {
-    /*
-     * Since evtchn_init would call domain_max_vcpus for poll_mask
-     * allocation when the vgic_ops haven't been initialised yet,
-     * we return MAX_VIRT_CPUS if d->arch.vgic.handler is null.
-     */
-    if ( !d->arch.vgic.handler )
-        return MAX_VIRT_CPUS;
-    else
-        return min_t(unsigned int, MAX_VIRT_CPUS,
-                     d->arch.vgic.handler->max_vcpus);
+    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
 }
 
 /*
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index e82bc48..4c25789 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -200,17 +200,11 @@ unsigned int __init dom0_max_vcpus(void)
     return max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
     dom0->node_affinity = dom0_nodes;
     dom0->auto_node_affinity = !dom0_nr_pxms;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return dom0_setup_vcpu(dom0, 0,
                            cpumask_last(&dom0_cpus) /* so it wraps around to first pcpu */);
 }
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index c4c34b4..90fe50a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -504,6 +504,17 @@ int arch_domain_create(struct domain *d,
     HYPERVISOR_COMPAT_VIRT_START(d) =
         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
 
+    if ( (config->max_vcpus < 1) ||
+         (config->max_vcpus >
+          (is_hvm_domain(d) ? HVM_MAX_VCPUS : MAX_VIRT_CPUS)) )
+        return -EINVAL;
+
+    rc = -ENOMEM;
+    d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
+    if ( !d->vcpu )
+        goto fail;
+    d->max_vcpus = config->max_vcpus;
+
     /* Need to determine if HAP is enabled before initialising paging */
     if ( is_hvm_domain(d) )
         d->arch.hvm_domain.hap_enabled =
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b0e85b0..07e9893 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1649,7 +1649,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     /* Create initial domain 0. */
     dom0 = domain_create(get_initial_domain_id(), &dom0_cfg);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
         panic("Error creating domain 0");
 
     if ( !pv_shim )
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 2f9d993..1dc19ea 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -547,23 +547,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         op->domain = d->domain_id;
         copyback = true;
 
-        ret = -EINVAL;
-        if ( vcpus > domain_max_vcpus(d) )
-            goto createdomain_fail_late;
-
         ret = -ENOMEM;
         online = cpupool_domain_cpumask(d);
 
-        BUG_ON(d->vcpu);
-        BUG_ON(d->max_vcpus);
-
-        d->vcpu = xzalloc_array(struct vcpu *, vcpus);
-        /* Install vcpu array /then/ update max_vcpus. */
-        smp_wmb();
-        if ( !d->vcpu )
-            goto createdomain_fail_late;
-        d->max_vcpus = vcpus;
-
         cpu = cpumask_any(online);
         for ( i = 0; i < vcpus; ++i )
         {
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 41cbbae..2e6f84b 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1303,8 +1303,8 @@ int evtchn_init(struct domain *d, unsigned int max_port)
     evtchn_from_port(d, 0)->state = ECS_RESERVED;
 
 #if MAX_VIRT_CPUS > BITS_PER_LONG
-    d->poll_mask = xzalloc_array(unsigned long,
-                                 BITS_TO_LONGS(domain_max_vcpus(d)));
+    BUG_ON(d->max_vcpus == 0);
+    d->poll_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(d->max_vcpus));
     if ( !d->poll_mask )
     {
         free_evtchn_bucket(d, d->evtchn);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7ba6528..2273b1b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -208,12 +208,6 @@ void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
 void vcpu_switch_to_aarch64_mode(struct vcpu *);
 
-/* On ARM, the number of VCPUs is limited by the type of GIC emulated. */
-static inline unsigned int domain_max_vcpus(const struct domain *d)
-{
-    return vgic_max_vcpus(d);
-}
-
 /*
  * Due to the restriction of GICv3, the number of vCPUs in AFF0 is
  * limited to 16, thus only the first 4 bits of AFF0 are legal. We will
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 47aadc2..9a21e0f 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -624,8 +624,6 @@ unsigned long pv_guest_cr4_fixup(const struct vcpu *, unsigned long guest_cr4);
              X86_CR4_OSXSAVE | X86_CR4_SMEP |               \
              X86_CR4_FSGSBASE | X86_CR4_SMAP))
 
-#define domain_max_vcpus(d) (is_hvm_domain(d) ? HVM_MAX_VCPUS : MAX_VIRT_CPUS)
-
 static inline struct vcpu_guest_context *alloc_vcpu_guest_context(void)
 {
     return vmalloc(sizeof(struct vcpu_guest_context));
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index dc022b4..3dcec06 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -17,7 +17,7 @@ struct vcpu *alloc_vcpu(
     struct domain *d, unsigned int vcpu_id, unsigned int cpu_id);
 
 unsigned int dom0_max_vcpus(void);
-struct vcpu *alloc_dom0_vcpu0(struct domain *dom0, unsigned int max_vcpus);
+struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
 int vcpu_up(struct vcpu *v);
-- 
2.1.4



* Re: [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch
  2018-03-19 19:13 ` [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch Andrew Cooper
@ 2018-03-20 16:53   ` Jan Beulich
  2018-03-21  5:03   ` Julien Grall
  2018-03-21 17:27   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-20 16:53 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Ian Jackson, Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> This is a tools-only hypercall, so it is fine to change.  Altering the name
> avoids having confusing code such as config->config all over the hypervisor
> and toolstack.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock
  2018-03-19 19:13 ` [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock Andrew Cooper
@ 2018-03-20 16:58   ` Jan Beulich
  2018-03-20 17:22     ` Andrew Cooper
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2018-03-20 16:58 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> It is not entirely clear why this interlock was introduced in c/s 8cbb5278e
> "x86/AMD: Add support for AMD's OSVW feature in guests".
> 
> At the time, svm_handle_osvw() could have seen an unexpected change in OSVW
> (not the case now, due to the new CPUID Policy infrastructure), but even then,
> it would have caused spurious changes in behaviour when handling
> OSVW_{ID_LENGTH,STATUS} read requests on behalf of an already-running guest.
> 
> There are plenty of other aspects of domain creation which depend on hardware
> details which may change across a microcode load, but which are not protected
> by this interlock.

Are there? We don't re-read CPUID (yet), for example. But of
course it is also not really specified which aspects may change
across microcode updates.

> A host administrator choosing to perform late microcode loading has plenty of
> other problems to worry about, and it is not unreasonable to expect them to
> temporarily cease domain construction activities while the microcode loading
> is in progress.

But it is also not unreasonable to expect the hypervisor to guard
against inconsistencies here. On the whole I'm not really
convinced; I think I'd like to hear others' opinions.

Jan



* Re: [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock
  2018-03-20 16:58   ` Jan Beulich
@ 2018-03-20 17:22     ` Andrew Cooper
  2018-03-21  5:46       ` Juergen Gross
  0 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-20 17:22 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Xen-devel

On 20/03/18 16:58, Jan Beulich wrote:
>>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
>> It is not entirely clear why this interlock was introduced in c/s 8cbb5278e
>> "x86/AMD: Add support for AMD's OSVW feature in guests".
>>
>> At the time, svm_handle_osvw() could have seen an unexpected change in OSVW
>> (not the case now, due to the new CPUID Policy infrastructure), but even then,
>> it would have caused spurious changes in behaviour when handling
>> OSVW_{ID_LENGTH,STATUS} read requests on behalf of an already-running guest.
>>
>> There are plenty of other aspects of domain creation which depend on hardware
>> details which may change across a microcode load, but which are not protected
>> by this interlock.
> Are there? We don't re-read CPUID (yet), for example. But of
> course it is also not really specified which aspects may change
> across microcode updates.
>
>> A host administrator choosing to perform late microcode loading has plenty of
>> other problems to worry about, and it is not unreasonable to expect them to
>> temporarily cease domain construction activities while the microcode loading
>> is in progress.
> But it is also not unreasonable to expect the hypervisor to guard
> against inconsistencies here. On the whole I'm not really
> convinced; I think I'd like to hear others' opinions.

The underlying problem is that this lock cannot stay when merging
max_vcpus into createdomain, because we cannot continue the hypercall
midway through.

As it doesn't currently protect createdomain, which amongst other things
contains init_domain_cpuid_policy() and init_domain_msr_policy() (the
most likely structures to be affected by microcode updates), I don't see
any purpose in keeping it for the minute area it does cover.

i.e. the behaviour is currently broken, and removing the lock doesn't
make it meaningfully more broken, but does let us resolve a whole class
of issues relating to partially-constructed domain objects.

Making things work consistently/sensibly after a microcode update is far
more complicated than just this, and strictly does require administrator
input because we can't go blindly substituting guest visible state.

~Andrew


* Re: [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain
  2018-03-19 19:13 ` [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
@ 2018-03-20 19:27   ` Daniel De Graaf
  2018-03-20 19:42   ` Christian Lindig
  2018-03-21 17:40   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Daniel De Graaf @ 2018-03-20 19:27 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Marek Marczykowski-Górecki, Wei Liu, Ian Jackson,
	Jon Ludlam, Rob Hoes, Christian Lindig, Jan Beulich, David Scott

On 03/19/2018 03:13 PM, Andrew Cooper wrote:
> set_max_evtchn is somewhat weird.  It was introduced with the event_fifo work,
> but has never been used.  Still, it is a bound on the resources consumed by the
> event channel infrastructure, and should be part of createdomain, rather than
> editable after the fact.
> 
> Drop XEN_DOMCTL_set_max_evtchn completely (including XSM hooks and libxc
> wrappers), and retain the functionality in XEN_DOMCTL_createdomain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>


* Re: [PATCH 11/20] xen/domctl: Merge set_gnttab_limits into createdomain
  2018-03-19 19:13 ` [PATCH 11/20] xen/domctl: Merge set_gnttab_limits " Andrew Cooper
  2018-03-19 21:43   ` Christian Lindig
@ 2018-03-20 19:27   ` Daniel De Graaf
  2018-03-21 17:45   ` Wei Liu
  2018-03-23 16:08   ` Jan Beulich
  3 siblings, 0 replies; 65+ messages in thread
From: Daniel De Graaf @ 2018-03-20 19:27 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Marek Marczykowski-Górecki, Wei Liu, Ian Jackson,
	Jon Ludlam, Rob Hoes, Christian Lindig, Jan Beulich, David Scott

On 03/19/2018 03:13 PM, Andrew Cooper wrote:
> XEN_DOMCTL_set_gnttab_limits is a fairly new hypercall, and is strictly
> mandatory.  Adding support for it introduced a state where a domain has a
> mostly un-constructed grant table, and there were cases where mis-ordering of
> toolstack hypercalls could cause a NULL pointer dereference in the hypervisor.
> In fixing this, the grant table initialisation code became very tangled.
> 
> As the settings are mandatory, delete XEN_DOMCTL_set_gnttab_limits (including
> XSM hooks and libxc wrappers) and retain the functionality in
> XEN_DOMCTL_createdomain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>


* Re: [PATCH 12/20] xen/domctl: Merge max_vcpus into createdomain
  2018-03-19 19:13 ` [PATCH 12/20] xen/domctl: Merge max_vcpus " Andrew Cooper
@ 2018-03-20 19:27   ` Daniel De Graaf
  2018-03-20 19:42   ` Christian Lindig
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 65+ messages in thread
From: Daniel De Graaf @ 2018-03-20 19:27 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Marek Marczykowski-Górecki, Wei Liu, Ian Jackson,
	Jon Ludlam, Rob Hoes, Christian Lindig, Jan Beulich, David Scott

On 03/19/2018 03:13 PM, Andrew Cooper wrote:
> XEN_DOMCTL_max_vcpus is a mandatory hypercall, but nothing actually prevents a
> toolstack from unpausing a domain with no vcpus.
> 
> Originally, d->vcpus[] was an embedded array in struct domain, but c/s
> fb442e217 "x86_64: allow more vCPU-s per guest" in Xen 4.0 altered it to being
> dynamically allocated.  A side effect of this is that d->vcpu[] is NULL until
> XEN_DOMCTL_max_vcpus has completed, but a lot of hypercalls blindly
> dereference it.
> 
> Even today, XEN_DOMCTL_max_vcpus behaves as a mandatory singleton call which
> can't change the number of vcpus once a value has been chosen.
> Therefore, delete XEN_DOMCTL_max_vcpus (including XSM hooks and toolstack
> wrappers) and retain the functionality in XEN_DOMCTL_createdomain.
> 
> This will allow future cleanup to ensure that d->vcpu[] is always valid for a
> locatable domain, and allow simplification of some creation logic which needs
> to size domain-wide objects based on max_cpus, which currently have to be
> deferred until vcpu construction.
> 
> For the python stubs, extend the domain_create keyword list to take a
> max_vcpus parameter, in lieu of deleting the pyxc_domain_max_vcpus function.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
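The failure mode and the post-merge invariant described in the quoted commit
message can be sketched in miniature.  The structures below are simplified
stand-ins, not Xen's real struct domain or creation path:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative model only: before this change, d->vcpu stayed NULL until
 * XEN_DOMCTL_max_vcpus ran, so any hypercall dereferencing it blindly
 * would fault.  Supplying max_vcpus at creation time lets the array be
 * allocated before the domain is ever locatable. */
struct vcpu { int vcpu_id; };

struct domain {
    unsigned int max_vcpus;
    struct vcpu **vcpu;      /* never NULL once creation succeeds */
};

static int domain_create(struct domain *d, unsigned int max_vcpus)
{
    if ( max_vcpus == 0 )
        return -1;           /* reject the "no vcpus" case up front */

    d->max_vcpus = max_vcpus;
    d->vcpu = calloc(max_vcpus, sizeof(*d->vcpu));
    return d->vcpu ? 0 : -1;
}
```

With this shape, a toolstack can no longer unpause a domain that was never
given any vcpus, because such a domain cannot be created at all.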

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>


* Re: [PATCH 11/20] xen/domctl: Merge set_gnttab_limits into createdomain
  2018-03-20 10:11     ` Andrew Cooper
@ 2018-03-20 19:42       ` Christian Lindig
  0 siblings, 0 replies; 65+ messages in thread
From: Christian Lindig @ 2018-03-20 19:42 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Marek Marczykowski-Górecki, Wei Liu, Jonathan Ludlam,
	Rob Hoes, Xen-devel, Jan Beulich, David Scott, Ian Jackson,
	Daniel De Graaf



> On 20. Mar 2018, at 10:11, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> That said, while ssidref might plausibly need a full 32 bits of range
> (and even then, I'm not entirely sure, but it is an opaque handle at the
> end of the day), none of the max_* fields do.  vcpus is currently capped
> at 128, grant frames at 64, maptrack frames at 1024 (both configurable
> on the Xen command line).  evtchn_port is effectively capped at the ABI
> limit, which is 1024/4096 or 2^27 depending on the guest's setup.
> 
> I'll swap all the max_* fields to being a plain int.
> 
> ~Andrew
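As a rough illustration of the quoted point, here is a hedged sketch of what
the narrowed structure might look like.  Field names beyond ssidref are
assumptions drawn from the limits quoted above, not the real public header:

```c
#include <assert.h>
#include <stdint.h>

/* All of the quoted caps (128 vcpus, 64 grant frames, 1024 maptrack
 * frames, 2^27 event channel ports) fit easily in a plain 32-bit int,
 * so only ssidref keeps a deliberately opaque uint32_t. */
struct createdomain_limits {
    uint32_t ssidref;             /* opaque handle, full 32-bit range */
    int32_t max_vcpus;            /* currently capped at 128 */
    int32_t max_evtchn_port;      /* ABI limit: 1024/4096 or 2^27 */
    int32_t max_grant_frames;     /* default cap 64, cmdline tunable */
    int32_t max_maptrack_frames;  /* default cap 1024, cmdline tunable */
};

static int caps_fit(void)
{
    return 128 <= INT32_MAX && (1 << 27) <= INT32_MAX;
}
```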

Acked-by: Christian Lindig <christian.lindig@citrix.com>


* Re: [PATCH 06/20] tools/ocaml: Drop domain_create_flag_table[]
  2018-03-19 19:13 ` [PATCH 06/20] tools/ocaml: Drop domain_create_flag_table[] Andrew Cooper
@ 2018-03-20 19:42   ` Christian Lindig
  0 siblings, 0 replies; 65+ messages in thread
From: Christian Lindig @ 2018-03-20 19:42 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Jonathan Ludlam, David Scott, Rob Hoes, Xen-devel



> On 19. Mar 2018, at 19:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> This is a logarithm in disguise.  Update the logic to match how
> x86_arch_emulation_flags works in c/s 9d683b5e37 and b38d96f596.
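"A logarithm in disguise" can be illustrated as follows.  This is a
hypothetical table, not the dropped OCaml stub itself: when the i-th entry of
a flag table is always (1u << i), the table adds nothing over a shift.

```c
#include <assert.h>

/* A lookup table mapping the i-th variant of a flag list to its C
 * value ... */
static const unsigned int flag_table[] = {
    1u << 0, 1u << 1, 1u << 2, 1u << 3, 1u << 4,
};

/* ... only ever holds (1u << i) at index i, so the whole table can be
 * replaced by a shift, mirroring how x86_arch_emulation_flags is
 * handled. */
static unsigned int flag_of_index(unsigned int i)
{
    return 1u << i;
}
```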

Acked-by: Christian Lindig <christian.lindig@citrix.com>



* Re: [PATCH 12/20] xen/domctl: Merge max_vcpus into createdomain
  2018-03-19 19:13 ` [PATCH 12/20] xen/domctl: Merge max_vcpus " Andrew Cooper
  2018-03-20 19:27   ` Daniel De Graaf
@ 2018-03-20 19:42   ` Christian Lindig
  2018-03-21 17:46   ` Wei Liu
  2018-03-23 16:14   ` Jan Beulich
  3 siblings, 0 replies; 65+ messages in thread
From: Christian Lindig @ 2018-03-20 19:42 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Marek Marczykowski-Górecki, Wei Liu, Jonathan Ludlam,
	Rob Hoes, Xen-devel, Jan Beulich, David Scott, Ian Jackson,
	Daniel De Graaf



> On 19. Mar 2018, at 19:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> XEN_DOMCTL_max_vcpus is a mandatory hypercall, but nothing actually prevents a
> toolstack from unpausing a domain with no vcpus.
> 
> Originally, d->vcpu[] was an embedded array in struct domain, but c/s
> fb442e217 "x86_64: allow more vCPU-s per guest" in Xen 4.0 altered it to be
> dynamically allocated.  A side effect of this is that d->vcpu[] is NULL until
> XEN_DOMCTL_max_vcpus has completed, but a lot of hypercalls blindly
> dereference it.
> 
> Even today, XEN_DOMCTL_max_vcpus behaves as a mandatory singleton call which
> can't change the number of vcpus once a value has been chosen.
> Therefore, delete XEN_DOMCTL_max_vcpus (including XSM hooks and toolstack
> wrappers) and retain the functionality in XEN_DOMCTL_createdomain.
> 
> This will allow future cleanup to ensure that d->vcpu[] is always valid for a
> locatable domain, and allow simplification of some creation logic which needs
> to size domain-wide objects based on max_vcpus, sizing which currently has to
> be deferred until vcpu construction.
> 
> For the python stubs, extend the domain_create keyword list to take a
> max_vcpus parameter, in lieu of deleting the pyxc_domain_max_vcpus function.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Christian Lindig <christian.lindig@citrix.com>

* Re: [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain
  2018-03-19 19:13 ` [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
  2018-03-20 19:27   ` Daniel De Graaf
@ 2018-03-20 19:42   ` Christian Lindig
  2018-03-21 17:40   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Christian Lindig @ 2018-03-20 19:42 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Marek Marczykowski-Górecki, Wei Liu, Jonathan Ludlam,
	Rob Hoes, Xen-devel, Jan Beulich, David Scott, Ian Jackson,
	Daniel De Graaf



> On 19. Mar 2018, at 19:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> set_max_evtchn is somewhat weird.  It was introduced with the event_fifo work,
> but has never been used.  Still, it is a bound on resources consumed by the
> event channel infrastructure, and should be part of createdomain, rather than
> editable after the fact.
> 
> Drop XEN_DOMCTL_set_max_evtchn completely (including XSM hooks and libxc
> wrappers), and retain the functionality in XEN_DOMCTL_createdomain.

Acked-by: Christian Lindig <christian.lindig@citrix.com>

* Re: [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain
  2018-03-19 19:13 ` [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
@ 2018-03-20 19:42   ` Christian Lindig
  2018-03-21 16:44   ` Roger Pau Monné
  2018-03-21 17:39   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Christian Lindig @ 2018-03-20 19:42 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Juergen Gross, Marek Marczykowski-Górecki, Wei Liu,
	Jonathan Ludlam, Rob Hoes, Xen-devel, David Scott, Ian Jackson



> On 19. Mar 2018, at 19:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> In future patches, the structure will be extended with further information,
> and this is far cleaner than adding extra parameters.
> 
> The python stubs are the only user which passes NULL for the existing config
> option (which is actually the arch substructure).  Therefore, the #ifdefary
> moves to compensate.
> 
> For libxl, pass the full config object down into
> libxl__arch_domain_{prepare,save}_config(), as there are in practice arch
> specific settings in the common part of the structure (flags s3_integrity and
> oos_off specifically).
> 
> No practical change in behaviour.
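The calling-convention change being acked here can be sketched as follows.
These are illustrative names only; the real prototype lives in the libxc
headers and differs in detail:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The point of the rework: new settings become new struct fields rather
 * than new function parameters, so callers need no signature churn. */
struct createdomain_cfg {
    uint32_t ssidref;
    uint32_t flags;
    /* future fields are added here without touching every caller */
};

static int domain_create_sketch(const struct createdomain_cfg *cfg,
                                uint32_t *pdomid)
{
    if ( cfg == NULL || pdomid == NULL )
        return -1;               /* EINVAL-style failure */

    *pdomid = 1;                 /* stand-in for the id chosen by Xen */
    return 0;
}
```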

Acked-by: Christian Lindig <christian.lindig@citrix.com>

* Re: [PATCH 08/20] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create()
  2018-03-19 19:13 ` [PATCH 08/20] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
@ 2018-03-20 19:43   ` Christian Lindig
  0 siblings, 0 replies; 65+ messages in thread
From: Christian Lindig @ 2018-03-20 19:43 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Jonathan Ludlam, David Scott, Rob Hoes, Xen-devel



> On 19. Mar 2018, at 19:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> The underlying C function is about to make the same change, and the structure
> is going to gain extra fields.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Christian Lindig <christian.lindig@citrix.com>



* Re: [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch
  2018-03-19 19:13 ` [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch Andrew Cooper
  2018-03-20 16:53   ` Jan Beulich
@ 2018-03-21  5:03   ` Julien Grall
  2018-03-21 17:27   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Julien Grall @ 2018-03-21  5:03 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Stefano Stabellini, Ian Jackson, Jan Beulich

Hi,

On 03/19/2018 07:13 PM, Andrew Cooper wrote:
> This is a tools only hypercall so fine to change.  Altering the name avoids
> having confusing code such as config->config all over the hypervisor and
> toolstack.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> ---
>   tools/libxc/xc_domain.c     |  4 ++--
>   xen/arch/arm/domain.c       | 10 +++++-----
>   xen/arch/arm/setup.c        |  4 ++--
>   xen/arch/x86/domain.c       |  2 +-
>   xen/arch/x86/setup.c        |  2 +-
>   xen/include/public/domctl.h |  2 +-
>   6 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index ea3df1e..f122ea6 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -57,12 +57,12 @@ int xc_domain_create(xc_interface *xch, uint32_t ssidref,
>       domctl.u.createdomain.flags   = flags;
>       memcpy(domctl.u.createdomain.handle, handle, sizeof(xen_domain_handle_t));
>       /* xc_domain_configure_t is an alias of arch_domainconfig_t */
> -    memcpy(&domctl.u.createdomain.config, config, sizeof(*config));
> +    memcpy(&domctl.u.createdomain.arch, config, sizeof(*config));
>       if ( (err = do_domctl(xch, &domctl)) != 0 )
>           return err;
>   
>       *pdomid = (uint16_t)domctl.domain;
> -    memcpy(config, &domctl.u.createdomain.config, sizeof(*config));
> +    memcpy(config, &domctl.u.createdomain.arch, sizeof(*config));
>   
>       return 0;
>   }
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 7193531..0931ce6 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -601,18 +601,18 @@ int arch_domain_create(struct domain *d,
>       clear_page(d->shared_info);
>       share_xen_page_with_guest(virt_to_page(d->shared_info), d, SHARE_rw);
>   
> -    switch ( config->config.gic_version )
> +    switch ( config->arch.gic_version )
>       {
>       case XEN_DOMCTL_CONFIG_GIC_NATIVE:
>           switch ( gic_hw_version () )
>           {
>           case GIC_V2:
> -            config->config.gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
> +            config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
>               d->arch.vgic.version = GIC_V2;
>               break;
>   
>           case GIC_V3:
> -            config->config.gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
> +            config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
>               d->arch.vgic.version = GIC_V3;
>               break;
>   
> @@ -640,10 +640,10 @@ int arch_domain_create(struct domain *d,
>       if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
>           goto fail;
>   
> -    if ( (rc = domain_vgic_init(d, config->config.nr_spis)) != 0 )
> +    if ( (rc = domain_vgic_init(d, config->arch.nr_spis)) != 0 )
>           goto fail;
>   
> -    if ( (rc = domain_vtimer_init(d, &config->config)) != 0 )
> +    if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
>           goto fail;
>   
>       update_domain_wallclock_time(d);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index b17797d..e6f8e23 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -840,8 +840,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>   
>       /* Create initial domain 0. */
>       /* The vGIC for DOM0 is exactly emulating the hardware GIC */
> -    dom0_cfg.config.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
> -    dom0_cfg.config.nr_spis = gic_number_lines() - 32;
> +    dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
> +    dom0_cfg.arch.nr_spis = gic_number_lines() - 32;
>   
>       dom0 = domain_create(0, &dom0_cfg);
>       if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 4cac890..c4c34b4 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -480,7 +480,7 @@ int arch_domain_create(struct domain *d,
>   
>       d->arch.s3_integrity = config->flags & XEN_DOMCTL_CDF_s3_integrity;
>   
> -    emflags = config->config.emulation_flags;
> +    emflags = config->arch.emulation_flags;
>   
>       if ( is_hardware_domain(d) && is_pv_domain(d) )
>           emflags |= XEN_X86_EMU_PIT;
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 3f6ecf4..fa77bae 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1638,7 +1638,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>                              ((hvm_funcs.hap_supported && !opt_dom0_shadow) ?
>                               XEN_DOMCTL_CDF_hap : 0));
>   
> -        dom0_cfg.config.emulation_flags |=
> +        dom0_cfg.arch.emulation_flags |=
>               XEN_X86_EMU_LAPIC | XEN_X86_EMU_IOAPIC;
>       }
>   
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index ec7a860..0535da8 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -65,7 +65,7 @@ struct xen_domctl_createdomain {
>   #define _XEN_DOMCTL_CDF_xs_domain     4
>   #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)
>       uint32_t flags;
> -    struct xen_arch_domainconfig config;
> +    struct xen_arch_domainconfig arch;
>   };
>   
>   /* XEN_DOMCTL_getdomaininfo */
> 

-- 
Julien Grall


* Re: [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock
  2018-03-20 17:22     ` Andrew Cooper
@ 2018-03-21  5:46       ` Juergen Gross
  2018-03-21 17:57         ` Andrew Cooper
  0 siblings, 1 reply; 65+ messages in thread
From: Juergen Gross @ 2018-03-21  5:46 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich; +Cc: Xen-devel

On 20/03/18 18:22, Andrew Cooper wrote:
> On 20/03/18 16:58, Jan Beulich wrote:
>>>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
>>> It is not entirely clear why this interlock was introduced in c/s 8cbb5278e
>>> "x86/AMD: Add support for AMD's OSVW feature in guests".
>>>
>>> At the time, svm_handle_osvw() could have seen an unexpected change in OSVW
>>> (not the case now, due to the new CPUID Policy infrastructure), but even then,
>>> it would have caused spurious changes in behaviour when handling
>>> OSVW_{ID_LENGTH,STATUS} read requests on behalf of an already-running guest.
>>>
>>> There are plenty of other aspects of domain creation which depend on hardware
>>> details which may change across a microcode load, but which are not protected by
>>> this interlock.
>> Are there? We don't re-read CPUID (yet), for example. But of
>> course it is also not really specified which aspects may change
>> across microcode updates.
>>
>>> A host administrator choosing to perform late microcode loading has plenty of
>>> other problems to worry about, and it is not unreasonable to expect them to
>>> temporarily cease domain construction activities while the microcode loading
>>> is in progress.
>> But it is also not unreasonable to expect the hypervisor to guard
>> against inconsistencies here. On the whole I'm not really
>> convinced; I think I'd like to hear others' opinions.
> 
> The underlying problem is that this lock cannot stay when merging
> max_vcpus into createdomain, because we cannot continue the hypercall
> midway through.
> 
> As it doesn't currently protect createdomain, which amongst other things
> contains init_domain_cpuid_policy() and init_domain_msr_policy() (the
> most likely structures to be affected by microcode updates), I don't see
> any purpose in keeping it for the minute area it does cover.

What about failing domain creation, e.g. via -EAGAIN, in case a
microcode update happened in between? This would be easy to do by adding
a microcode generation count, which would have to be the same at the
start and end of the create-domain hypercall.


Juergen
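A minimal model of this suggestion (all names are assumed, not existing Xen
code): a generation count is bumped on every late microcode load, and domain
creation fails with -EAGAIN if the count changed while it ran, so the
toolstack can simply retry.

```c
#include <assert.h>

enum { EAGAIN_RC = -11 };        /* stand-in for -EAGAIN */

static unsigned long microcode_generation;

static void late_microcode_load(void)
{
    microcode_generation++;      /* called by the update path */
}

static int domain_create_checked(int simulate_racing_update)
{
    unsigned long gen = microcode_generation;

    /* ... CPUID/MSR policy initialisation would happen here ... */
    if ( simulate_racing_update )
        late_microcode_load();

    return (gen == microcode_generation) ? 0 : EAGAIN_RC;
}
```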


* Re: [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state
  2018-03-19 19:13 ` [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state Andrew Cooper
@ 2018-03-21 16:09   ` Roger Pau Monné
  2018-03-21 17:17   ` Wei Liu
  1 sibling, 0 replies; 65+ messages in thread
From: Roger Pau Monné @ 2018-03-21 16:09 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Ian Jackson, Xen-devel

On Mon, Mar 19, 2018 at 07:13:40PM +0000, Andrew Cooper wrote:
> The data it stores is initialised and exclusively used within
> libxl__domain_make(), with the important details written back elsewhere by
> libxl__arch_domain_save_config().  Prepare xc_config on libxl__domain_make()'s
> stack, and drop the parameter.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


* Re: [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain
  2018-03-19 19:13 ` [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain Andrew Cooper
@ 2018-03-21 16:18   ` Roger Pau Monné
  2018-03-21 17:35     ` Andrew Cooper
  2018-03-21 17:27   ` Wei Liu
  1 sibling, 1 reply; 65+ messages in thread
From: Roger Pau Monné @ 2018-03-21 16:18 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Ian Jackson, Xen-devel

On Mon, Mar 19, 2018 at 07:13:41PM +0000, Andrew Cooper wrote:
> xc_config is only used by xc_domain_create(), but by calling
> libxl__arch_domain_{prepare,save}_config() we clobber the real settings with
> the default settings.
> 
> Move all data and calls relating to xc_domain_create() into the path which
> calls it.
> 
> As far as I can tell, soft_reset has always been broken for ARM domains using
> LIBXL_GIC_VERSION_DEFAULT, which elicits a hard error out of
> libxl__arch_domain_save_config().
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

AFAICT this didn't affect x86 because libxl__arch_domain_save_config
is a noop?


* Re: [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain
  2018-03-19 19:13 ` [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
  2018-03-20 19:42   ` Christian Lindig
@ 2018-03-21 16:44   ` Roger Pau Monné
  2018-03-21 17:39   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Roger Pau Monné @ 2018-03-21 16:44 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Juergen Gross, Wei Liu, Rob Hoes, Ian Jackson, Jon Ludlam,
	Marek Marczykowski-Górecki, Christian Lindig, David Scott,
	Xen-devel

On Mon, Mar 19, 2018 at 07:13:48PM +0000, Andrew Cooper wrote:
> In future patches, the structure will be extended with further information,
> and this is far cleaner than adding extra parameters.
> 
> The python stubs are the only user which passes NULL for the existing config
> option (which is actually the arch substructure).  Therefore, the #ifdefary
> moves to compensate.
> 
> For libxl, pass the full config object down into
> libxl__arch_domain_{prepare,save}_config(), as there are in practice arch
> specific settings in the common part of the structure (flags s3_integrity and
> oos_off specifically).
> 
> No practical change in behaviour.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Roger.


* Re: [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state
  2018-03-19 19:13 ` [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state Andrew Cooper
  2018-03-21 16:09   ` Roger Pau Monné
@ 2018-03-21 17:17   ` Wei Liu
  1 sibling, 0 replies; 65+ messages in thread
From: Wei Liu @ 2018-03-21 17:17 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Ian Jackson, Xen-devel

On Mon, Mar 19, 2018 at 07:13:40PM +0000, Andrew Cooper wrote:
> The data it stores is initialised and exclusively used within
> libxl__domain_make(), with the important details written back elsewhere by
> libxl__arch_domain_save_config().  Prepare xc_config on libxl__domain_make()'s
> stack, and drop the parameter.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain
  2018-03-19 19:13 ` [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain Andrew Cooper
  2018-03-21 16:18   ` Roger Pau Monné
@ 2018-03-21 17:27   ` Wei Liu
  1 sibling, 0 replies; 65+ messages in thread
From: Wei Liu @ 2018-03-21 17:27 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Ian Jackson, Xen-devel

On Mon, Mar 19, 2018 at 07:13:41PM +0000, Andrew Cooper wrote:
> xc_config is only used by xc_domain_create(), but by calling
> libxl__arch_domain_{prepare,save}_config() we clobber the real settings with
> the default settings.
> 
> Move all data and calls relating to xc_domain_create() into the path which
> calls it.
> 
> As far as I can tell, soft_reset has always been broken for ARM domains using
> LIBXL_GIC_VERSION_DEFAULT, which elicits a hard error out of
> libxl__arch_domain_save_config().
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch
  2018-03-19 19:13 ` [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch Andrew Cooper
  2018-03-20 16:53   ` Jan Beulich
  2018-03-21  5:03   ` Julien Grall
@ 2018-03-21 17:27   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Wei Liu @ 2018-03-21 17:27 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, Ian Jackson, Xen-devel,
	Julien Grall, Jan Beulich

On Mon, Mar 19, 2018 at 07:13:42PM +0000, Andrew Cooper wrote:
> This is a tools only hypercall so fine to change.  Altering the name avoids
> having confusing code such as config->config all over the hypervisor and
> toolstack.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain
  2018-03-21 16:18   ` Roger Pau Monné
@ 2018-03-21 17:35     ` Andrew Cooper
  0 siblings, 0 replies; 65+ messages in thread
From: Andrew Cooper @ 2018-03-21 17:35 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Ian Jackson, Xen-devel

On 21/03/18 16:18, Roger Pau Monné wrote:
> On Mon, Mar 19, 2018 at 07:13:41PM +0000, Andrew Cooper wrote:
>> xc_config is only used by xc_domain_create(), but by calling
>> libxl__arch_domain_{prepare,save}_config() we clobber the real settings with
>> the default settings.
>>
>> Move all data and calls relating to xc_domain_create() into the path which
>> calls it.
>>
>> As far as I can tell, soft_reset has always been broken for ARM domains using
>> LIBXL_GIC_VERSION_DEFAULT, which elicits a hard error out of
>> libxl__arch_domain_save_config().
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> AFAICT this didn't affect x86 because libxl__arch_domain_save_config
> is a noop?

Correct.  I've tweaked the commit message to include this.

~Andrew


* Re: [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain
  2018-03-19 19:13 ` [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
  2018-03-20 19:42   ` Christian Lindig
  2018-03-21 16:44   ` Roger Pau Monné
@ 2018-03-21 17:39   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Wei Liu @ 2018-03-21 17:39 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Juergen Gross, Marek Marczykowski-Górecki, Wei Liu,
	Rob Hoes, Ian Jackson, Jon Ludlam, Xen-devel, Christian Lindig,
	David Scott

On Mon, Mar 19, 2018 at 07:13:48PM +0000, Andrew Cooper wrote:
> In future patches, the structure will be extended with further information,
> and this is far cleaner than adding extra parameters.
> 
> The python stubs are the only user which passes NULL for the existing config
> option (which is actually the arch substructure).  Therefore, the #ifdefary
> moves to compensate.
> 
> For libxl, pass the full config object down into
> libxl__arch_domain_{prepare,save}_config(), as there are in practice arch
> specific settings in the common part of the structure (flags s3_integrity and
> oos_off specifically).
> 
> No practical change in behaviour.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain
  2018-03-19 19:13 ` [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
  2018-03-20 19:27   ` Daniel De Graaf
  2018-03-20 19:42   ` Christian Lindig
@ 2018-03-21 17:40   ` Wei Liu
  2 siblings, 0 replies; 65+ messages in thread
From: Wei Liu @ 2018-03-21 17:40 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Marek Marczykowski-Górecki, Wei Liu, Rob Hoes, Ian Jackson,
	Jon Ludlam, Xen-devel, Christian Lindig, Jan Beulich,
	David Scott, Daniel De Graaf

On Mon, Mar 19, 2018 at 07:13:49PM +0000, Andrew Cooper wrote:
> set_max_evtchn is somewhat weird.  It was introduced with the event_fifo work,
> but has never been used.  Still, it is a bound on resources consumed by the
> event channel infrastructure, and should be part of createdomain, rather than
> editable after the fact.
> 
> Drop XEN_DOMCTL_set_max_evtchn completely (including XSM hooks and libxc
> wrappers), and retain the functionality in XEN_DOMCTL_createdomain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 11/20] xen/domctl: Merge set_gnttab_limits into createdomain
  2018-03-19 19:13 ` [PATCH 11/20] xen/domctl: Merge set_gnttab_limits " Andrew Cooper
  2018-03-19 21:43   ` Christian Lindig
  2018-03-20 19:27   ` Daniel De Graaf
@ 2018-03-21 17:45   ` Wei Liu
  2018-03-23 16:08   ` Jan Beulich
  3 siblings, 0 replies; 65+ messages in thread
From: Wei Liu @ 2018-03-21 17:45 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Marek Marczykowski-Górecki, Wei Liu, Rob Hoes, Ian Jackson,
	Jon Ludlam, Xen-devel, Christian Lindig, Jan Beulich,
	David Scott, Daniel De Graaf

On Mon, Mar 19, 2018 at 07:13:50PM +0000, Andrew Cooper wrote:
> XEN_DOMCTL_set_gnttab_limits is a fairly new hypercall, and is strictly
> mandatory.  Adding support for it introduced a state where a domain has a
> mostly un-constructed grant table, and there were cases where mis-ordering of
> toolstack hypercalls could cause a NULL pointer dereference in the hypervisor.
> In fixing this, the grant table initialisation code became very tangled.
> 
> As the settings are mandatory, delete XEN_DOMCTL_set_gnttab_limits (including
> XSM hooks and libxc wrappers) and retain the functionality in
> XEN_DOMCTL_createdomain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
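The ordering hazard and its fix can be modelled roughly as follows.  These are
simplified stand-ins, not the real Xen grant table code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* With the limits folded into createdomain, the grant table is fully
 * sized during creation, removing the window in which a mis-ordered
 * toolstack call could find a half-built table. */
struct grant_table {
    unsigned int max_grant_frames;
    unsigned int max_maptrack_frames;
    void **shared;               /* sized up front, valid once created */
};

static struct grant_table *grant_table_create(unsigned int grant_frames,
                                              unsigned int maptrack_frames)
{
    struct grant_table *gt;

    if ( grant_frames == 0 )
        return NULL;             /* the limits are now mandatory */

    gt = calloc(1, sizeof(*gt));
    if ( gt == NULL )
        return NULL;

    gt->max_grant_frames = grant_frames;
    gt->max_maptrack_frames = maptrack_frames;
    gt->shared = calloc(grant_frames, sizeof(*gt->shared));
    if ( gt->shared == NULL )
    {
        free(gt);
        return NULL;
    }
    return gt;
}
```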

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 12/20] xen/domctl: Merge max_vcpus into createdomain
  2018-03-19 19:13 ` [PATCH 12/20] xen/domctl: Merge max_vcpus " Andrew Cooper
  2018-03-20 19:27   ` Daniel De Graaf
  2018-03-20 19:42   ` Christian Lindig
@ 2018-03-21 17:46   ` Wei Liu
  2018-03-23 16:14   ` Jan Beulich
  3 siblings, 0 replies; 65+ messages in thread
From: Wei Liu @ 2018-03-21 17:46 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Marek Marczykowski-Górecki, Wei Liu, Rob Hoes, Ian Jackson,
	Jon Ludlam, Xen-devel, Christian Lindig, Jan Beulich,
	David Scott, Daniel De Graaf

On Mon, Mar 19, 2018 at 07:13:51PM +0000, Andrew Cooper wrote:
> XEN_DOMCTL_max_vcpus is a mandatory hypercall, but nothing actually prevents a
> toolstack from unpausing a domain with no vcpus.
> 
> Originally, d->vcpu[] was an embedded array in struct domain, but c/s
> fb442e217 "x86_64: allow more vCPU-s per guest" in Xen 4.0 altered it to be
> dynamically allocated.  A side effect of this is that d->vcpu[] is NULL until
> XEN_DOMCTL_max_vcpus has completed, but a lot of hypercalls blindly
> dereference it.
> 
> Even today, XEN_DOMCTL_max_vcpus behaves as a mandatory singleton
> call which can't change the number of vcpus once a value has been chosen.
> Therefore, delete XEN_DOMCTL_max_vcpus (including XSM hooks and toolstack
> wrappers) and retain the functionality in XEN_DOMCTL_createdomain.
> 
> This will allow future cleanup to ensure that d->vcpu[] is always valid for a
> locatable domain, and allow simplification of some creation logic which needs
> to size domain-wide objects based on max_vcpus, which currently have to be
> deferred until vcpu construction.
> 
> For the python stubs, extend the domain_create keyword list to take a
> max_vcpus parameter, in lieu of deleting the pyxc_domain_max_vcpus function.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock
  2018-03-21  5:46       ` Juergen Gross
@ 2018-03-21 17:57         ` Andrew Cooper
  2018-03-22  7:24           ` Jan Beulich
  0 siblings, 1 reply; 65+ messages in thread
From: Andrew Cooper @ 2018-03-21 17:57 UTC (permalink / raw)
  To: Juergen Gross, Jan Beulich; +Cc: Xen-devel

On 21/03/18 05:46, Juergen Gross wrote:
> On 20/03/18 18:22, Andrew Cooper wrote:
>> On 20/03/18 16:58, Jan Beulich wrote:
>>>>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
>>>> It is not entirely clear why this interlock was introduced in c/s 8cbb5278e
>>>> "x86/AMD: Add support for AMD's OSVW feature in guests".
>>>>
>>>> At the time, svm_handle_osvw() could have seen an unexpected change in OSVW
>>>> (not the case now, due to the new CPUID Policy infrastructure), but even then,
>>>> it would have caused spurious changes in behaviour when handling
>>>> OSVW_{ID_LENGTH,STATUS} read requests on behalf of an already-running guest.
>>>>
>>>> There are plenty of other aspects of domain creation which depend on hardware
>>>> details which may change across a microcode load, but which are not protected by
>>>> this interlock.
>>> Are there? We don't re-read CPUID (yet), for example. But of
>>> course it is also not really specified which aspects may change
>>> across microcode updates.
>>>
>>>> A host administrator choosing to perform late microcode loading has plenty of
>>>> other problems to worry about, and it is not unreasonable to expect them to
>>>> temporarily cease domain construction activities while the microcode loading
>>>> is in progress.
>>> But it is also not unreasonable to expect the hypervisor to guard
>>> against inconsistencies here. On the whole I'm not really
>>> convinced; I think I'd like to hear others' opinions.
>> The underlying problem is that this lock cannot stay when merging
>> max_vcpus into createdomain, because we cannot continue the hypercall
>> midway through.
>>
>> As it doesn't currently protect createdomain, which amongst other things
>> contains init_domain_cpuid_policy() and init_domain_msr_policy() (the
>> most likely structures to be affected by microcode updates), I don't see
>> any purpose in keeping it for the minute area it does cover.
> What about failing domain creation e.g. via -EAGAIN in case a
> microcode update happened in between? This would be easy by adding a
> microcode generation count which would have to be the same for start
> and end of the create domain hypercall.
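
Juergen's scheme, sketched as a standalone model (the counter, the hook, and
their names are hypothetical -- Xen has no such generation count today):

```c
#include <assert.h>
#include <errno.h>

/* A microcode generation count, bumped on every late microcode load,
 * sampled at the start of the createdomain hypercall and re-checked
 * at the end.  A mismatch means an update raced with construction. */

static unsigned long microcode_generation;

/* Would be called from the end of the microcode loading path. */
static void microcode_update_done(void)
{
    microcode_generation++;
}

/* Stand-in for XEN_DOMCTL_createdomain; 'construct' models the work
 * (CPUID/MSR policy initialisation etc.) during which an update
 * could race. */
static int domctl_createdomain(void (*construct)(void))
{
    unsigned long gen = microcode_generation;

    construct();

    if ( gen != microcode_generation )
        return -EAGAIN;     /* toolstack retries from scratch */

    return 0;
}
```

The catch, as discussed below, is that by the time the mismatch is detected
the domain may already be visible in the domain list, so "retry" is not free.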

Failing the hypercall is very complicated once domain_create() has
completed successfully.  See patch 11.

(Although in writing this reply, I see that patch 11 is buggy).

Amongst other things, failing that late burns a domid, because there is
a period during which the domain has to live in the domain list.

I don't see the point of trying to retain a broken mechanism, especially
at the expense of error path complexity.

~Andrew


* Re: [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock
  2018-03-21 17:57         ` Andrew Cooper
@ 2018-03-22  7:24           ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-22  7:24 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Juergen Gross, Xen-devel

>>> On 21.03.18 at 18:57, <andrew.cooper3@citrix.com> wrote:
> On 21/03/18 05:46, Juergen Gross wrote:
>> On 20/03/18 18:22, Andrew Cooper wrote:
>>> On 20/03/18 16:58, Jan Beulich wrote:
>>>>>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
>>>>> It is not entirely clear why this interlock was introduced in c/s 8cbb5278e
>>>>> "x86/AMD: Add support for AMD's OSVW feature in guests".
>>>>>
>>>>> At the time, svm_handle_osvw() could have seen an unexpected change in OSVW
>>>>> (not the case now, due to the new CPUID Policy infrastructure), but even then,
>>>>> it would have caused spurious changes in behaviour when handling
>>>>> OSVW_{ID_LENGTH,STATUS} read requests on behalf of an already-running guest.
>>>>>
>>>>> There are plenty of other aspects of domain creation which depend on hardware
>>>>> details which may change across a microcode load, but which are not protected by
>>>>> this interlock.
>>>> Are there? We don't re-read CPUID (yet), for example. But of
>>>> course it is also not really specified which aspects may change
>>>> across microcode updates.
>>>>
>>>>> A host administrator choosing to perform late microcode loading has plenty of
>>>>> other problems to worry about, and it is not unreasonable to expect them to
>>>>> temporarily cease domain construction activities while the microcode loading
>>>>> is in progress.
>>>> But it is also not unreasonable to expect the hypervisor to guard
>>>> against inconsistencies here. On the whole I'm not really
>>>> convinced; I think I'd like to hear others' opinions.
>>> The underlying problem is that this lock cannot stay when merging
>>> max_vcpus into createdomain, because we cannot continue the hypercall
>>> midway through.
>>>
>>> As it doesn't currently protect createdomain, which amongst other things
>>> contains init_domain_cpuid_policy() and init_domain_msr_policy() (the
>>> most likely structures to be affected by microcode updates), I don't see
>>> any purpose in keeping it for the minute area it does cover.
>> What about failing domain creation e.g. via -EAGAIN in case a
>> microcode update happened in between? This would be easy by adding a
>> microcode generation count which would have to be the same for start
>> and end of the create domain hypercall.
> 
> Failing the hypercall is very complicated once domain_create() has
> completed successfully.  See patch 11.
> 
> (Although in writing this reply, I see that patch 11 is buggy).
> 
> Amongst other things, failing that late burns a domid, because there is
> a period during which the domain has to live in the domain list.
> 
> I don't see the point of trying to retain a broken mechanism, especially
> at the expense of error path complexity.

Well, I can certainly understand your dislike of the current situation.
Nevertheless, the common route is to improve things, not make them
worse (even if perhaps just slightly). That's not to say that I intend
to outright NAK the patch, but I continue to be of two minds.

Jan



* Re: [PATCH 11/20] xen/domctl: Merge set_gnttab_limits into createdomain
  2018-03-19 19:13 ` [PATCH 11/20] xen/domctl: Merge set_gnttab_limits " Andrew Cooper
                     ` (2 preceding siblings ...)
  2018-03-21 17:45   ` Wei Liu
@ 2018-03-23 16:08   ` Jan Beulich
  3 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-23 16:08 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Liu, Rob Hoes, Ian Jackson, Jon Ludlam,
	Marek Marczykowski-Górecki, Christian Lindig, David Scott,
	Daniel De Graaf, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -539,14 +539,37 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              break;
>          }
>  
> +        /* Stash the new domid for the toolstack. */
> +        op->domain = d->domain_id;
> +        copyback = true;
> +
>          d->max_evtchn_port =
>              min_t(unsigned int, op->u.createdomain.max_evtchn_port, INT_MAX);
>  
> -        ret = 0;
> -        op->domain = d->domain_id;
> -        copyback = 1;
> +        ret = grant_table_set_limits(d, op->u.createdomain.max_grant_frames,
> +                                     op->u.createdomain.max_maptrack_frames);
> +        if ( !ret )
> +            goto createdomain_fail_late;
> +
>          d = NULL;
>          break;
> +
> +    createdomain_fail_late:
> +        /*
> +         * We've hit an error after putting the domain into the domain list,
> +         * meaning that other entities in the system can refer to it.
> +         *
> +         * Unwinding is substantially more complicated, and without
> +         * returning success, the toolstack won't know to clean up.
> +         *
> +         * Reuse the continuation logic to turn this hypercall into a
> +         * destroydomain on behalf of the toolstack.
> +         */
> +        op->cmd = XEN_DOMCTL_destroydomain;
> +        d = NULL;
> +
> +        ret = hypercall_create_continuation(__HYPERVISOR_domctl, "h", u_domctl);
> +        break;
>      }

Nice idea, but what exactly is the reason you can't do this before
inserting the domain into the list?

Jan



* Re: [PATCH 12/20] xen/domctl: Merge max_vcpus into createdomain
  2018-03-19 19:13 ` [PATCH 12/20] xen/domctl: Merge max_vcpus " Andrew Cooper
                     ` (2 preceding siblings ...)
  2018-03-21 17:46   ` Wei Liu
@ 2018-03-23 16:14   ` Jan Beulich
  3 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-23 16:14 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Liu, Rob Hoes, Ian Jackson, Jon Ludlam,
	Marek Marczykowski-Górecki, Christian Lindig, David Scott,
	Daniel De Graaf, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> @@ -551,6 +555,37 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          if ( !ret )
>              goto createdomain_fail_late;
>  
> +        ret = -EINVAL;
> +        if ( vcpus > domain_max_vcpus(d) )
> +            goto createdomain_fail_late;
> +
> +        ret = -ENOMEM;
> +        online = cpupool_domain_cpumask(d);
> +
> +        BUG_ON(d->vcpu);
> +        BUG_ON(d->max_vcpus);
> +
> +        d->vcpu = xzalloc_array(struct vcpu *, vcpus);
> +        /* Install vcpu array /then/ update max_vcpus. */
> +        smp_wmb();
> +        if ( !d->vcpu )
> +            goto createdomain_fail_late;
> +        d->max_vcpus = vcpus;
> +
> +        cpu = cpumask_any(online);
> +        for ( i = 0; i < vcpus; ++i )
> +        {
> +            BUG_ON(d->vcpu[i]);
> +
> +            if ( alloc_vcpu(d, i, cpu) == NULL )
> +                goto createdomain_fail_late;
> +
> +            BUG_ON(!d->vcpu[i]);
> +
> +            cpu = cpumask_cycle(d->vcpu[i]->processor, online);
> +        }
> +
> +        ret = 0;
>          d = NULL;
>          break;

Same question here regarding the late placement. Additionally, by
doing this after insertion onto the list, you still leave a window
where the domain can be found but has a NULL vcpus pointer.
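
For reference, the ordering contract that the smp_wmb() in the hunk above
establishes can be modelled standalone: the writer publishes d->vcpu before
d->max_vcpus, so a reader that finds the domain must bound-check against
max_vcpus (with a pairing smp_rmb()) before dereferencing the array.  The
structs and barrier stubs below are cut down so the sketch compiles alone;
this is a simplified model, not the real Xen code:

```c
#include <assert.h>
#include <stddef.h>

struct vcpu { unsigned int vcpu_id; };

struct domain {
    unsigned int max_vcpus;
    struct vcpu **vcpu;
};

/* Stubs for Xen's barriers, using the GCC/Clang builtin fences. */
#define smp_wmb() __atomic_thread_fence(__ATOMIC_RELEASE)
#define smp_rmb() __atomic_thread_fence(__ATOMIC_ACQUIRE)

/* Writer side: install the vcpu array /then/ advertise its size. */
static void publish_vcpu_array(struct domain *d, struct vcpu **arr,
                               unsigned int nr)
{
    d->vcpu = arr;
    smp_wmb();    /* array visible before max_vcpus becomes nonzero */
    d->max_vcpus = nr;
}

/* Reader side: never touch d->vcpu[] without checking max_vcpus. */
static struct vcpu *domain_vcpu(const struct domain *d, unsigned int id)
{
    if ( id >= d->max_vcpus )
        return NULL;
    smp_rmb();    /* pairs with the writer's smp_wmb() */
    return d->vcpu[id];
}
```

Until every reader follows this discipline, the window Jan describes (domain
findable, d->vcpu still NULL) remains exploitable.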

Jan



* Re: [PATCH 13/20] xen/evtchn: Pass max_evtchn_port into evtchn_init()
  2018-03-19 19:13 ` [PATCH 13/20] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
@ 2018-03-26 14:01   ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-26 14:01 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -693,7 +693,9 @@ void __init start_xen(unsigned long boot_phys_offset,
>      const char *cmdline;
>      struct bootmodule *xen_bootmodule;
>      struct domain *dom0;
> -    struct xen_domctl_createdomain dom0_cfg = {};
> +    struct xen_domctl_createdomain dom0_cfg = {
> +        .max_evtchn_port = -1,
> +    };
>  
>      dcache_line_bytes = read_dcache_line_bytes();
>  
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -673,6 +673,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      };
>      struct xen_domctl_createdomain dom0_cfg = {
>          .flags = XEN_DOMCTL_CDF_s3_integrity,
> +        .max_evtchn_port = -1,
>      };

Any chance I can talk you into using INT_MAX, UINT_MAX, or ~0u
in both cases above, seeing that max_evtchn_port's type is
unsigned int? Other than this
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH 14/20] xen/gnttab: Remove replace_grant_supported()
  2018-03-19 19:13 ` [PATCH 14/20] xen/gnttab: Remove replace_grant_supported() Andrew Cooper
  2018-03-20  3:44   ` Julien Grall
@ 2018-03-26 14:03   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-26 14:03 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> It is identical on all architectures, and removing it is better overall than fixing
> it up to have a proper boolean return value.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 15/20] xen/gnttab: Export opt_max_{grant, maptrack}_frames
  2018-03-19 19:13 ` [PATCH 15/20] xen/gnttab: Export opt_max_{grant, maptrack}_frames Andrew Cooper
@ 2018-03-26 14:17   ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-26 14:17 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> This is to facilitate the values being passed in via domain_create(), at which
> point the dom0 construction code needs to know them.
> 
> While cleaning up, drop the DEFAULT_* defines, which are only used immediately
> adjacent in a context which makes it obvious that they are the defaults, and
> drop the (unused) logic to allow a per-arch override of
> DEFAULT_MAX_NR_GRANT_FRAMES.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 16/20] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-03-19 19:13 ` [PATCH 16/20] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
@ 2018-03-26 14:18   ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-26 14:18 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2078,7 +2078,8 @@ static void __init find_gnttab_region(struct domain *d,
>       * enough space for a large grant table
>       */
>      kinfo->gnttab_start = __pa(_stext);
> -    kinfo->gnttab_size = gnttab_dom0_frames() << PAGE_SHIFT;
> +    kinfo->gnttab_size = min_t(unsigned int, opt_max_grant_frames,
> +                               PFN_DOWN(_etext - _stext)) << PAGE_SHIFT;

ARM folks will know whether having the same expression here
and ...

> @@ -695,6 +696,17 @@ void __init start_xen(unsigned long boot_phys_offset,
>      struct domain *dom0;
>      struct xen_domctl_createdomain dom0_cfg = {
>          .max_evtchn_port = -1,
> +
> +        /*
> +         * The region used by Xen on the memory will never be mapped in DOM0
> +         * memory layout. Therefore it can be used for the grant table.
> +         *
> +         * Only use the text section as it's always present and will contain
> +         * enough space for a large grant table
> +         */
> +        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
> +                                  PFN_DOWN(_etext - _stext)),

... here isn't risking someone updating one but not the other.
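
The duplication could be avoided with one shared helper along these lines
(roughly what the dropped gnttab_dom0_frames() used to provide; the helper
name and the stubbed symbol values here are hypothetical -- in Xen,
_stext/_etext are linker symbols and PFN_DOWN comes from xen/pfn.h):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PFN_DOWN(x) ((unsigned long)(x) >> PAGE_SHIFT)

static unsigned int opt_max_grant_frames = 64;

/* Stub addresses standing in for the linker-provided text section. */
static const unsigned long stext = 0x00200000, etext = 0x00220000;

/* Single point of truth for dom0's grant-frame bound: the configured
 * maximum, clamped to the number of frames the text section covers. */
static unsigned int gnttab_dom0_max_frames(void)
{
    unsigned long text_frames = PFN_DOWN(etext - stext);

    return opt_max_grant_frames < text_frames ? opt_max_grant_frames
                                              : (unsigned int)text_frames;
}
```

Both find_gnttab_region() and the dom0_cfg initialiser would then call the
one helper, so the clamp can't silently diverge between the two sites.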

> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -547,11 +547,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          op->domain = d->domain_id;
>          copyback = true;
>  
> -        ret = grant_table_set_limits(d, op->u.createdomain.max_grant_frames,
> -                                     op->u.createdomain.max_maptrack_frames);
> -        if ( !ret )
> -            goto createdomain_fail_late;

With this grant_table_set_limits() can become static, at which point
it becomes questionable whether it wouldn't better be expanded
into its only caller. Oh, looks like that's the subject of patch 17. In
which case non-ARM bits
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH 17/20] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init()
  2018-03-19 19:13 ` [PATCH 17/20] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
@ 2018-03-26 14:29   ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-26 14:29 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1808,22 +1808,28 @@ gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
>      return -ENOMEM;
>  }
>  
> -static int
> -grant_table_init(struct domain *d, struct grant_table *gt,
> -                 unsigned int grant_frames, unsigned int maptrack_frames)
> +int grant_table_init(struct domain *d, unsigned int max_grant_frames,
> +                     unsigned int max_maptrack_frames)
>  {
> +    struct grant_table *gt;
>      int ret = -ENOMEM;
>  
> -    grant_write_lock(gt);
> +    if ( max_grant_frames < INITIAL_NR_GRANT_FRAMES ||
> +         max_grant_frames > opt_max_grant_frames ||
> +         max_maptrack_frames > opt_max_maptrack_frames )
> +        return -EINVAL;
>  
> -    if ( gt->active )
> -    {
> -        ret = -EBUSY;
> -        goto out_no_cleanup;
> -    }
> +    if ( (gt = xzalloc(struct grant_table)) == NULL )
> +        return -ENOMEM;
> +
> +    /* Simple stuff. */
> +    percpu_rwlock_resource_init(&gt->lock, grant_rwlock);
> +    spin_lock_init(&gt->maptrack_lock);
> +
> +    grant_write_lock(gt);

This was sort of sensible with the old (split) code structure, but
without having stored gt anywhere yet I think you want to
acquire this later. Of course otoh there's not going to be any
contention on the lock here, and it doesn't look like you can
avoid acquiring the lock altogether.

> @@ -1871,7 +1881,6 @@ grant_table_init(struct domain *d, struct grant_table *gt,
>          gt->active = NULL;
>      }
>  
> - out_no_cleanup:

You now want to free gt upwards from here (where the closing
brace is visible in context) - the caller won't call grant_table_destroy()
when an error comes back from here. Or wait - that's a bug in patch
16 already. So I'm afraid I have to withdraw my R-b given there.

Jan



* Re: [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  2018-03-19 19:13 ` [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
  2018-03-20  3:54   ` Julien Grall
@ 2018-03-26 15:19   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-26 15:19 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> Make dom0_max_vcpus() a common interface, and implement it on ARM by splitting
> the existing alloc_dom0_vcpu0() function in half.
> 
> As domain_create() doesn't yet set up the vcpu array, the max value is also
> passed into alloc_dom0_vcpu0().  This is temporary for bisectibility and
> removed in the following patch.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 19/20] xen/domain: Call arch_domain_create() as early as possible in domain_create()
  2018-03-19 19:13 ` [PATCH 19/20] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
@ 2018-03-26 15:58   ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2018-03-26 15:58 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Xen-devel

>>> On 19.03.18 at 20:13, <andrew.cooper3@citrix.com> wrote:
> This is in preparation to set up d->max_cpus and d->vcpu[] in
> arch_domain_create(), and allow later parts of domain construction to have
> access to the values.

I'm not convinced of the allocation to be done in arch_domain_create()
(in the next patch), and hence I'm not sure we need what you do here.
I can see that you want arch code to audit the vCPU count, but why
does arch code need to set up the array and store ->max_vcpus? You
could simply call a per-arch auditing function early (even the lower
bound check could be done in common code), set up both fields early in
domain_create(), and leave the call to arch_domain_create() where it is
now, couldn't you? As the ARM auditing may need to do more than just
auditing (to select the vGIC variant), perhaps this could be
arch_domain_configure(), doing the auditing and all necessary setup
needed for the auditing to be possible.
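
A rough standalone model of the flow sketched here (every name and limit is
hypothetical -- this is the suggested shape, not Xen's actual code): a
per-arch audit/configure hook runs early, common code then owns the vcpu
array and ->max_vcpus, and arch_domain_create() would stay where it is.

```c
#include <assert.h>
#include <stdlib.h>

struct vcpu { unsigned int id; };

struct domain {
    unsigned int max_vcpus;
    struct vcpu **vcpu;
};

/* Per-arch hook: audit the count early (on ARM it could also select
 * the vGIC variant here, before any allocations). */
static int arch_domain_configure(struct domain *d, unsigned int max_vcpus)
{
    (void)d;
    if ( max_vcpus == 0 || max_vcpus > 128 )   /* domain_max_vcpus() stub */
        return -1;                             /* -EINVAL in Xen */
    return 0;
}

static struct domain *domain_create(unsigned int max_vcpus)
{
    struct domain *d = calloc(1, sizeof(*d));

    if ( !d )
        return NULL;

    /* Audit early, in common code, via the per-arch hook... */
    if ( arch_domain_configure(d, max_vcpus) )
        goto fail;

    /* ...then common code sets up the array and ->max_vcpus... */
    d->vcpu = calloc(max_vcpus, sizeof(*d->vcpu));
    if ( !d->vcpu )
        goto fail;
    d->max_vcpus = max_vcpus;

    /* ...and arch_domain_create() would run here, unmoved. */
    return d;

 fail:
    free(d);
    return NULL;
}
```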

Jan



end of thread, other threads:[~2018-03-26 15:58 UTC | newest]

Thread overview: 65+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-19 19:13 [PATCH For-4.11 00/20] Improvements to domain creation Andrew Cooper
2018-03-19 19:13 ` [PATCH 01/20] tools/libxl: Drop xc_domain_configuration_t from libxl__domain_build_state Andrew Cooper
2018-03-21 16:09   ` Roger Pau Monné
2018-03-21 17:17   ` Wei Liu
2018-03-19 19:13 ` [PATCH 02/20] tools/libxl: Don't prepare or save xc_config when soft resetting a domain Andrew Cooper
2018-03-21 16:18   ` Roger Pau Monné
2018-03-21 17:35     ` Andrew Cooper
2018-03-21 17:27   ` Wei Liu
2018-03-19 19:13 ` [PATCH 03/20] xen/public: Rename xen_domctl_createdomain.config to arch Andrew Cooper
2018-03-20 16:53   ` Jan Beulich
2018-03-21  5:03   ` Julien Grall
2018-03-21 17:27   ` Wei Liu
2018-03-19 19:13 ` [PATCH 04/20] xen/domctl: Drop vcpu_alloc_lock Andrew Cooper
2018-03-20 16:58   ` Jan Beulich
2018-03-20 17:22     ` Andrew Cooper
2018-03-21  5:46       ` Juergen Gross
2018-03-21 17:57         ` Andrew Cooper
2018-03-22  7:24           ` Jan Beulich
2018-03-19 19:13 ` [PATCH 05/20] arm/boot: Mark construct_dom0() as __init Andrew Cooper
2018-03-20  3:40   ` Julien Grall
2018-03-20  8:31     ` Julien Grall
2018-03-19 19:13 ` [PATCH 06/20] tools/ocaml: Drop domain_create_flag_table[] Andrew Cooper
2018-03-20 19:42   ` Christian Lindig
2018-03-19 19:13 ` [PATCH 07/20] tools/ocaml: Drop int_array_of_uuid_string() Andrew Cooper
2018-03-19 19:13 ` [PATCH 08/20] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
2018-03-20 19:43   ` Christian Lindig
2018-03-19 19:13 ` [PATCH 09/20] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
2018-03-20 19:42   ` Christian Lindig
2018-03-21 16:44   ` Roger Pau Monné
2018-03-21 17:39   ` Wei Liu
2018-03-19 19:13 ` [PATCH 10/20] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
2018-03-20 19:27   ` Daniel De Graaf
2018-03-20 19:42   ` Christian Lindig
2018-03-21 17:40   ` Wei Liu
2018-03-19 19:13 ` [PATCH 11/20] xen/domctl: Merge set_gnttab_limits " Andrew Cooper
2018-03-19 21:43   ` Christian Lindig
2018-03-20 10:11     ` Andrew Cooper
2018-03-20 19:42       ` Christian Lindig
2018-03-20 19:27   ` Daniel De Graaf
2018-03-21 17:45   ` Wei Liu
2018-03-23 16:08   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 12/20] xen/domctl: Merge max_vcpus " Andrew Cooper
2018-03-20 19:27   ` Daniel De Graaf
2018-03-20 19:42   ` Christian Lindig
2018-03-21 17:46   ` Wei Liu
2018-03-23 16:14   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 13/20] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
2018-03-26 14:01   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 14/20] xen/gnttab: Remove replace_grant_supported() Andrew Cooper
2018-03-20  3:44   ` Julien Grall
2018-03-26 14:03   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 15/20] xen/gnttab: Export opt_max_{grant, maptrack}_frames Andrew Cooper
2018-03-26 14:17   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 16/20] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
2018-03-26 14:18   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 17/20] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
2018-03-26 14:29   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 18/20] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
2018-03-20  3:54   ` Julien Grall
2018-03-26 15:19   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 19/20] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
2018-03-26 15:58   ` Jan Beulich
2018-03-19 19:13 ` [PATCH 20/20] xen/domain: Allocate d->vcpu[] in arch_domain_create() Andrew Cooper
2018-03-20  4:17   ` Julien Grall
2018-03-20 15:28   ` [PATCH v1.5 " Andrew Cooper
