* [PATCH v2 00/12] Improvements to domain creation
@ 2018-08-13 10:00 Andrew Cooper
  2018-08-13 10:00 ` [PATCH v2 01/12] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
                   ` (12 more replies)
  0 siblings, 13 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:00 UTC (permalink / raw)
  To: Xen-devel
  Cc: Juergen Gross, Marek Marczykowski-Górecki, Rob Hoes,
	Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Jon Ludlam, Tim Deegan, Julien Grall,
	Christian Lindig, Jan Beulich, David Scott, Daniel De Graaf

The main purpose of this series is to move the allocation of d->vcpu[] into
XEN_DOMCTL_createdomain, which resolves a longstanding issue since Xen 4.0
whereby the toolstack can cause NULL pointer dereferences in Xen by issuing
hypercalls in an unexpected order.

Due to the way cleanup is currently performed, XEN_DOMCTL_max_vcpus is still
required at this point.  Further hypervisor cleanup and rearrangement will be
required before the hypercall can be dropped.

This series can be found in git form here:
  http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/xen-create-v1

v2 has had comprehensive testing in XenServer, but is only build-tested for ARM.

Andrew Cooper (12):
  tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create()
  tools: Rework xc_domain_create() to take a full xen_domctl_createdomain
  xen/domctl: Merge set_max_evtchn into createdomain
  xen/evtchn: Pass max_evtchn_port into evtchn_init()
  tools: Pass grant table limits to XEN_DOMCTL_set_gnttab_limits
  xen/gnttab: Pass max_{grant,maptrack}_frames into grant_table_create()
  xen/domctl: Remove XEN_DOMCTL_set_gnttab_limits
  xen/gnttab: Fold grant_table_{create,set_limits}() into grant_table_init()
  xen/domain: Call arch_domain_create() as early as possible in domain_create()
  tools: Pass max_vcpus to XEN_DOMCTL_createdomain
  xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  xen/domain: Allocate d->vcpu[] in domain_create()

 tools/flask/policy/modules/dom0.te   |   4 +-
 tools/flask/policy/modules/xen.if    |   4 +-
 tools/helpers/init-xenstore-domain.c |  39 ++++++-------
 tools/libxc/include/xenctrl.h        |  31 +----------
 tools/libxc/xc_domain.c              |  55 ++-----------------
 tools/libxl/libxl_arch.h             |   4 +-
 tools/libxl/libxl_arm.c              |  16 +++---
 tools/libxl/libxl_create.c           |  28 ++++++----
 tools/libxl/libxl_dom.c              |  13 -----
 tools/libxl/libxl_x86.c              |  10 ++--
 tools/ocaml/libs/xc/xenctrl.ml       |  18 +++++-
 tools/ocaml/libs/xc/xenctrl.mli      |  17 +++++-
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  63 +++++++++++++++------
 tools/python/xen/lowlevel/xc/xc.c    |  42 ++++++++++----
 xen/arch/arm/domain_build.c          |  13 +++--
 xen/arch/arm/setup.c                 |  17 +++++-
 xen/arch/arm/vgic.c                  |  11 +---
 xen/arch/arm/vgic/vgic.c             |  22 +-------
 xen/arch/x86/dom0_build.c            |   7 ---
 xen/arch/x86/setup.c                 |   5 ++
 xen/common/domain.c                  |  57 ++++++++++++-------
 xen/common/domctl.c                  |  50 +----------------
 xen/common/event_channel.c           |   7 +--
 xen/common/grant_table.c             | 103 +++++++++--------------------------
 xen/include/asm-arm/grant_table.h    |  12 ----
 xen/include/asm-x86/grant_table.h    |   5 --
 xen/include/asm-x86/setup.h          |   2 -
 xen/include/public/domctl.h          |  30 ++++------
 xen/include/xen/domain.h             |   3 +
 xen/include/xen/grant_table.h        |   8 +--
 xen/include/xen/sched.h              |   2 +-
 xen/xsm/flask/hooks.c                |   6 --
 xen/xsm/flask/policy/access_vectors  |   4 --
 33 files changed, 281 insertions(+), 427 deletions(-)

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v2 01/12] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create()
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
@ 2018-08-13 10:00 ` Andrew Cooper
  2018-08-13 10:00 ` [PATCH v2 02/12] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:00 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper

The underlying C function is about to undergo the same change, and the
structure is going to gain extra fields.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
---
v2:
 * Fix indirection bug with xen_x86_arch_domctlconfig
---
 tools/ocaml/libs/xc/xenctrl.ml      | 14 +++++++---
 tools/ocaml/libs/xc/xenctrl.mli     | 13 +++++++---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 52 ++++++++++++++++++++++++-------------
 3 files changed, 55 insertions(+), 24 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index b3b33bb..3b7526e 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -56,6 +56,16 @@ type arch_domainconfig =
 	| ARM of xen_arm_arch_domainconfig
 	| X86 of xen_x86_arch_domainconfig
 
+type domain_create_flag = CDF_HVM | CDF_HAP
+
+type domctl_create_config =
+{
+	ssidref: int32;
+	handle: string;
+	flags: domain_create_flag list;
+	arch: arch_domainconfig;
+}
+
 type domaininfo =
 {
 	domid             : domid;
@@ -120,8 +130,6 @@ type compile_info =
 
 type shutdown_reason = Poweroff | Reboot | Suspend | Crash | Watchdog | Soft_reset
 
-type domain_create_flag = CDF_HVM | CDF_HAP
-
 exception Error of string
 
 type handle
@@ -135,7 +143,7 @@ let with_intf f =
 	interface_close xc;
 	r
 
-external domain_create: handle -> int32 -> domain_create_flag list -> string -> arch_domainconfig -> domid
+external domain_create: handle -> domctl_create_config -> domid
        = "stub_xc_domain_create"
 
 external domain_sethandle: handle -> domid -> string -> unit
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 35303ab..d103a33 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -49,6 +49,15 @@ type arch_domainconfig =
   | ARM of xen_arm_arch_domainconfig
   | X86 of xen_x86_arch_domainconfig
 
+type domain_create_flag = CDF_HVM | CDF_HAP
+
+type domctl_create_config = {
+  ssidref: int32;
+  handle: string;
+  flags: domain_create_flag list;
+  arch: arch_domainconfig;
+}
+
 type domaininfo = {
   domid : domid;
   dying : bool;
@@ -91,14 +100,12 @@ type compile_info = {
 }
 type shutdown_reason = Poweroff | Reboot | Suspend | Crash | Watchdog | Soft_reset
 
-type domain_create_flag = CDF_HVM | CDF_HAP
-
 exception Error of string
 type handle
 external interface_open : unit -> handle = "stub_xc_interface_open"
 external interface_close : handle -> unit = "stub_xc_interface_close"
 val with_intf : (handle -> 'a) -> 'a
-external domain_create : handle -> int32 -> domain_create_flag list -> string -> arch_domainconfig -> domid
+external domain_create : handle -> domctl_create_config -> domid
   = "stub_xc_domain_create"
 external domain_sethandle : handle -> domid -> string -> unit = "stub_xc_domain_sethandle"
 external domain_max_vcpus : handle -> domid -> int -> unit
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 5274e56..dbe9c3e 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -119,36 +119,46 @@ static void domain_handle_of_uuid_string(xen_domain_handle_t h,
 #undef X
 }
 
-CAMLprim value stub_xc_domain_create(value xch, value ssidref,
-                                     value flags, value handle,
-                                     value domconfig)
+CAMLprim value stub_xc_domain_create(value xch, value config)
 {
-	CAMLparam4(xch, ssidref, flags, handle);
+	CAMLparam2(xch, config);
+	CAMLlocal2(l, arch_domconfig);
+
+	/* Mnemonics for the named fields inside domctl_create_config */
+#define VAL_SSIDREF             Field(config, 0)
+#define VAL_HANDLE              Field(config, 1)
+#define VAL_FLAGS               Field(config, 2)
+#define VAL_ARCH                Field(config, 3)
 
 	uint32_t domid = 0;
-	xen_domain_handle_t h;
 	int result;
-	uint32_t c_ssidref = Int32_val(ssidref);
-	unsigned int c_flags = 0;
-	value l;
-	xc_domain_configuration_t config = {};
+	struct xen_domctl_createdomain cfg = {
+		.ssidref = Int32_val(VAL_SSIDREF),
+	};
 
-	domain_handle_of_uuid_string(h, String_val(handle));
+	domain_handle_of_uuid_string(cfg.handle, String_val(VAL_HANDLE));
 
-	for (l = flags; l != Val_none; l = Field(l, 1))
-		c_flags |= 1u << Int_val(Field(l, 0));
+	for ( l = VAL_FLAGS; l != Val_none; l = Field(l, 1) )
+		cfg.flags |= 1u << Int_val(Field(l, 0));
 
-	switch(Tag_val(domconfig)) {
+	arch_domconfig = Field(VAL_ARCH, 0);
+	switch ( Tag_val(VAL_ARCH) )
+	{
 	case 0: /* ARM - nothing to do */
 		caml_failwith("Unhandled: ARM");
 		break;
 
 	case 1: /* X86 - emulation flags in the block */
 #if defined(__i386__) || defined(__x86_64__)
-		for (l = Field(Field(domconfig, 0), 0);
-		     l != Val_none;
-		     l = Field(l, 1))
-			config.emulation_flags |= 1u << Int_val(Field(l, 0));
+
+        /* Mnemonics for the named fields inside xen_x86_arch_domctlconfig */
+#define VAL_EMUL_FLAGS          Field(arch_domconfig, 0)
+
+		for ( l = VAL_EMUL_FLAGS; l != Val_none; l = Field(l, 1) )
+			cfg.arch.emulation_flags |= 1u << Int_val(Field(l, 0));
+
+#undef VAL_EMUL_FLAGS
+
 #else
 		caml_failwith("Unhandled: x86");
 #endif
@@ -158,8 +168,14 @@ CAMLprim value stub_xc_domain_create(value xch, value ssidref,
 		caml_failwith("Unhandled domconfig type");
 	}
 
+#undef VAL_ARCH
+#undef VAL_FLAGS
+#undef VAL_HANDLE
+#undef VAL_SSIDREF
+
 	caml_enter_blocking_section();
-	result = xc_domain_create(_H(xch), c_ssidref, h, c_flags, &domid, &config);
+	result = xc_domain_create(_H(xch), cfg.ssidref, cfg.handle, cfg.flags,
+				  &domid, &cfg.arch);
 	caml_leave_blocking_section();
 
 	if (result < 0)
-- 
2.1.4




* [PATCH v2 02/12] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
  2018-08-13 10:00 ` [PATCH v2 01/12] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
@ 2018-08-13 10:00 ` Andrew Cooper
  2018-08-13 10:01 ` [PATCH v2 03/12] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:00 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Marek Marczykowski-Górecki

In future patches, the structure will be extended with further information,
and this is far cleaner than adding extra parameters.

The Python stubs are the only user that passes NULL for the existing config
option (which is actually the arch substructure).  Therefore, the #ifdeffery
moves to compensate.

For libxl, pass the full config object down into
libxl__arch_domain_{prepare,save}_config(), as there are, in practice,
arch-specific settings in the common part of the structure (the s3_integrity
and oos_off flags, specifically).

No practical change in behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 tools/helpers/init-xenstore-domain.c | 16 +++++++---------
 tools/libxc/include/xenctrl.h        |  6 ++----
 tools/libxc/xc_domain.c              | 31 ++++---------------------------
 tools/libxl/libxl_arch.h             |  4 ++--
 tools/libxl/libxl_arm.c              | 16 ++++++++--------
 tools/libxl/libxl_create.c           | 23 ++++++++++++-----------
 tools/libxl/libxl_x86.c              | 10 +++++-----
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  3 +--
 tools/python/xen/lowlevel/xc/xc.c    | 28 ++++++++++++++++++++--------
 9 files changed, 61 insertions(+), 76 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 8453be2..785e570 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -60,11 +60,13 @@ static void usage(void)
 static int build(xc_interface *xch)
 {
     char cmdline[512];
-    uint32_t ssid;
-    xen_domain_handle_t handle = { 0 };
     int rv, xs_fd;
     struct xc_dom_image *dom = NULL;
     int limit_kb = (maxmem ? : (memory + 1)) * 1024;
+    struct xen_domctl_createdomain config = {
+        .ssidref = SECINITSID_DOMU,
+        .flags = XEN_DOMCTL_CDF_xs_domain,
+    };
 
     xs_fd = open("/dev/xen/xenbus_backend", O_RDWR);
     if ( xs_fd == -1 )
@@ -75,19 +77,15 @@ static int build(xc_interface *xch)
 
     if ( flask )
     {
-        rv = xc_flask_context_to_sid(xch, flask, strlen(flask), &ssid);
+        rv = xc_flask_context_to_sid(xch, flask, strlen(flask), &config.ssidref);
         if ( rv )
         {
             fprintf(stderr, "xc_flask_context_to_sid failed\n");
             goto err;
         }
     }
-    else
-    {
-        ssid = SECINITSID_DOMU;
-    }
-    rv = xc_domain_create(xch, ssid, handle, XEN_DOMCTL_CDF_xs_domain,
-                          &domid, NULL);
+
+    rv = xc_domain_create(xch, &domid, &config);
     if ( rv )
     {
         fprintf(stderr, "xc_domain_create failed\n");
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index dd7d8a9..2c4ac32 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -504,10 +504,8 @@ typedef struct xc_vcpu_extstate {
     void *buffer;
 } xc_vcpu_extstate_t;
 
-typedef struct xen_arch_domainconfig xc_domain_configuration_t;
-int xc_domain_create(xc_interface *xch, uint32_t ssidref,
-                     xen_domain_handle_t handle, uint32_t flags,
-                     uint32_t *pdomid, xc_domain_configuration_t *config);
+int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
+                     struct xen_domctl_createdomain *config);
 
 
 /* Functions to produce a dump of a given domain
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 57e18ee..0124cea 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -26,43 +26,20 @@
 #include <xen/memory.h>
 #include <xen/hvm/hvm_op.h>
 
-int xc_domain_create(xc_interface *xch, uint32_t ssidref,
-                     xen_domain_handle_t handle, uint32_t flags,
-                     uint32_t *pdomid, xc_domain_configuration_t *config)
+int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
+                     struct xen_domctl_createdomain *config)
 {
-    xc_domain_configuration_t lconfig;
     int err;
     DECLARE_DOMCTL;
 
-    if ( config == NULL )
-    {
-        memset(&lconfig, 0, sizeof(lconfig));
-
-#if defined (__i386) || defined(__x86_64__)
-        if ( flags & XEN_DOMCTL_CDF_hvm_guest )
-            lconfig.emulation_flags = (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI);
-#elif defined (__arm__) || defined(__aarch64__)
-        lconfig.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
-        lconfig.nr_spis = 0;
-#else
-#error Architecture not supported
-#endif
-
-        config = &lconfig;
-    }
-
     domctl.cmd = XEN_DOMCTL_createdomain;
     domctl.domain = *pdomid;
-    domctl.u.createdomain.ssidref = ssidref;
-    domctl.u.createdomain.flags   = flags;
-    memcpy(domctl.u.createdomain.handle, handle, sizeof(xen_domain_handle_t));
-    /* xc_domain_configure_t is an alias of arch_domainconfig_t */
-    memcpy(&domctl.u.createdomain.arch, config, sizeof(*config));
+    domctl.u.createdomain = *config;
+
     if ( (err = do_domctl(xch, &domctl)) != 0 )
         return err;
 
     *pdomid = (uint16_t)domctl.domain;
-    memcpy(config, &domctl.u.createdomain.arch, sizeof(*config));
 
     return 0;
 }
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 74a5af3..c8ccaaf 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -19,14 +19,14 @@
 _hidden
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
-                                      xc_domain_configuration_t *xc_config);
+                                      struct xen_domctl_createdomain *config);
 
 /* save the arch specific configuration for the domain */
 _hidden
 int libxl__arch_domain_save_config(libxl__gc *gc,
                                    libxl_domain_config *d_config,
                                    libxl__domain_build_state *state,
-                                   const xc_domain_configuration_t *xc_config);
+                                   const struct xen_domctl_createdomain *config);
 
 /* arch specific internal domain creation function */
 _hidden
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 8af9f6f..2a25201 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -39,7 +39,7 @@ static const char *gicv_to_string(libxl_gic_version gic_version)
 
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
-                                      xc_domain_configuration_t *xc_config)
+                                      struct xen_domctl_createdomain *config)
 {
     uint32_t nr_spis = 0;
     unsigned int i;
@@ -86,18 +86,18 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 
     LOG(DEBUG, "Configure the domain");
 
-    xc_config->nr_spis = nr_spis;
+    config->arch.nr_spis = nr_spis;
     LOG(DEBUG, " - Allocate %u SPIs", nr_spis);
 
     switch (d_config->b_info.arch_arm.gic_version) {
     case LIBXL_GIC_VERSION_DEFAULT:
-        xc_config->gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
+        config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
         break;
     case LIBXL_GIC_VERSION_V2:
-        xc_config->gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
+        config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
         break;
     case LIBXL_GIC_VERSION_V3:
-        xc_config->gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
+        config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
         break;
     default:
         LOG(ERROR, "Unknown GIC version %d",
@@ -111,9 +111,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 int libxl__arch_domain_save_config(libxl__gc *gc,
                                    libxl_domain_config *d_config,
                                    libxl__domain_build_state *state,
-                                   const xc_domain_configuration_t *xc_config)
+                                   const struct xen_domctl_createdomain *config)
 {
-    switch (xc_config->gic_version) {
+    switch (config->arch.gic_version) {
     case XEN_DOMCTL_CONFIG_GIC_V2:
         d_config->b_info.arch_arm.gic_version = LIBXL_GIC_VERSION_V2;
         break;
@@ -121,7 +121,7 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
         d_config->b_info.arch_arm.gic_version = LIBXL_GIC_VERSION_V3;
         break;
     default:
-        LOG(ERROR, "Unexpected gic version %u", xc_config->gic_version);
+        LOG(ERROR, "Unexpected gic version %u", config->arch.gic_version);
         return ERROR_FAIL;
     }
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1ccb3e3..dd9d8c8 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -563,35 +563,36 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     /* Valid domid here means we're soft resetting. */
     if (!libxl_domid_valid_guest(*domid)) {
-        int flags = 0;
-        xen_domain_handle_t handle;
-        xc_domain_configuration_t xc_config = {};
+        struct xen_domctl_createdomain create = {
+            .ssidref = info->ssidref,
+        };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
-            flags |= XEN_DOMCTL_CDF_hvm_guest;
-            flags |= libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
-            flags |= libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
+            create.flags |= XEN_DOMCTL_CDF_hvm_guest;
+            create.flags |=
+                libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
+            create.flags |=
+                libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
         }
 
         /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
-        libxl_uuid_copy(ctx, (libxl_uuid *)handle, &info->uuid);
+        libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid);
 
-        ret = libxl__arch_domain_prepare_config(gc, d_config, &xc_config);
+        ret = libxl__arch_domain_prepare_config(gc, d_config, &create);
         if (ret < 0) {
             LOGED(ERROR, *domid, "fail to get domain config");
             rc = ERROR_FAIL;
             goto out;
         }
 
-        ret = xc_domain_create(ctx->xch, info->ssidref, handle, flags, domid,
-                               &xc_config);
+        ret = xc_domain_create(ctx->xch, domid, &create);
         if (ret < 0) {
             LOGED(ERROR, *domid, "domain creation fail");
             rc = ERROR_FAIL;
             goto out;
         }
 
-        rc = libxl__arch_domain_save_config(gc, d_config, state, &xc_config);
+        rc = libxl__arch_domain_save_config(gc, d_config, state, &create);
         if (rc < 0)
             goto out;
     }
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index ab88562..6f670b0 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -5,17 +5,17 @@
 
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
-                                      xc_domain_configuration_t *xc_config)
+                                      struct xen_domctl_createdomain *config)
 {
     switch(d_config->c_info.type) {
     case LIBXL_DOMAIN_TYPE_HVM:
-        xc_config->emulation_flags = (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI);
+        config->arch.emulation_flags = (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI);
         break;
     case LIBXL_DOMAIN_TYPE_PVH:
-        xc_config->emulation_flags = XEN_X86_EMU_LAPIC;
+        config->arch.emulation_flags = XEN_X86_EMU_LAPIC;
         break;
     case LIBXL_DOMAIN_TYPE_PV:
-        xc_config->emulation_flags = 0;
+        config->arch.emulation_flags = 0;
         break;
     default:
         abort();
@@ -27,7 +27,7 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 int libxl__arch_domain_save_config(libxl__gc *gc,
                                    libxl_domain_config *d_config,
                                    libxl__domain_build_state *state,
-                                   const xc_domain_configuration_t *xc_config)
+                                   const struct xen_domctl_createdomain *config)
 {
     return 0;
 }
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index dbe9c3e..1206eae 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -174,8 +174,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #undef VAL_SSIDREF
 
 	caml_enter_blocking_section();
-	result = xc_domain_create(_H(xch), cfg.ssidref, cfg.handle, cfg.flags,
-				  &domid, &cfg.arch);
+	result = xc_domain_create(_H(xch), &domid, &cfg);
 	caml_leave_blocking_section();
 
 	if (result < 0)
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 5ade127..5a2923a 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -117,17 +117,21 @@ static PyObject *pyxc_domain_create(XcObject *self,
                                     PyObject *args,
                                     PyObject *kwds)
 {
-    uint32_t dom = 0, ssidref = 0, flags = 0, target = 0;
+    uint32_t dom = 0, target = 0;
     int      ret, i;
     PyObject *pyhandle = NULL;
-    xen_domain_handle_t handle = { 
-        0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
-        0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef };
+    struct xen_domctl_createdomain config = {
+        .handle = {
+            0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
+            0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
+        },
+    };
 
     static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
 
     if ( !PyArg_ParseTupleAndKeywords(args, kwds, "|iiOii", kwd_list,
-                                      &dom, &ssidref, &pyhandle, &flags, &target))
+                                      &dom, &config.ssidref, &pyhandle,
+                                      &config.flags, &target))
         return NULL;
     if ( pyhandle != NULL )
     {
@@ -140,12 +144,20 @@ static PyObject *pyxc_domain_create(XcObject *self,
             PyObject *p = PyList_GetItem(pyhandle, i);
             if ( !PyLongOrInt_Check(p) )
                 goto out_exception;
-            handle[i] = (uint8_t)PyLongOrInt_AsLong(p);
+            config.handle[i] = (uint8_t)PyLongOrInt_AsLong(p);
         }
     }
 
-    if ( (ret = xc_domain_create(self->xc_handle, ssidref,
-                                 handle, flags, &dom, NULL)) < 0 )
+#if defined (__i386) || defined(__x86_64__)
+    if ( config.flags & XEN_DOMCTL_CDF_hvm_guest )
+        config.arch.emulation_flags = (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI);
+#elif defined (__arm__) || defined(__aarch64__)
+    config.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
+#else
+#error Architecture not supported
+#endif
+
+    if ( (ret = xc_domain_create(self->xc_handle, &dom, &config)) < 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
     if ( target )
-- 
2.1.4




* [PATCH v2 03/12] xen/domctl: Merge set_max_evtchn into createdomain
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
  2018-08-13 10:00 ` [PATCH v2 01/12] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
  2018-08-13 10:00 ` [PATCH v2 02/12] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 13:58   ` Roger Pau Monné
  2018-08-13 10:01 ` [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Ian Jackson, Wei Liu, Jan Beulich,
	Marek Marczykowski-Górecki

set_max_evtchn is somewhat weird.  It was introduced with the event_fifo work,
but has never been used.  Still, it is a bound on resources consumed by the
event channel infrastructure, and should be part of createdomain, rather than
editable after the fact.

Drop XEN_DOMCTL_set_max_evtchn completely (including XSM hooks and libxc
wrappers), and retain the functionality in XEN_DOMCTL_createdomain.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Hypervisor side cleanup is present in a later patch

v2:
 * Use int rather than int32 for Ocaml stubs
---
 tools/flask/policy/modules/dom0.te   |  2 +-
 tools/flask/policy/modules/xen.if    |  2 +-
 tools/helpers/init-xenstore-domain.c |  1 +
 tools/libxc/include/xenctrl.h        | 12 ------------
 tools/libxc/xc_domain.c              | 11 -----------
 tools/libxl/libxl_create.c           |  2 ++
 tools/libxl/libxl_dom.c              |  7 -------
 tools/ocaml/libs/xc/xenctrl.ml       |  1 +
 tools/ocaml/libs/xc/xenctrl.mli      |  1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  5 ++++-
 tools/python/xen/lowlevel/xc/xc.c    |  1 +
 xen/common/domctl.c                  |  9 +++------
 xen/include/public/domctl.h          | 19 ++++++++-----------
 xen/xsm/flask/hooks.c                |  3 ---
 xen/xsm/flask/policy/access_vectors  |  2 --
 15 files changed, 23 insertions(+), 55 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index bf794d9..4eb3843 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -38,7 +38,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn set_vnumainfo
+	set_cpuid gettsc settsc setscheduler set_vnumainfo
 	get_vnumainfo psr_cmt_op psr_alloc set_gnttab_limits
 };
 allow dom0_t dom0_t:resource { add remove };
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 7aefd00..61b0e76 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -51,7 +51,7 @@ define(`create_domain_common', `
 			getvcpuinfo getaddrsize getaffinity setaffinity
 			settime setdomainhandle getvcpucontext set_misc_info };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
-			set_max_evtchn set_vnumainfo get_vnumainfo cacheflush
+			set_vnumainfo get_vnumainfo cacheflush
 			psr_cmt_op psr_alloc soft_reset set_gnttab_limits
 			resource_map };
 	allow $1 $2:security check_context;
diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 785e570..89c329c 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -66,6 +66,7 @@ static int build(xc_interface *xch)
     struct xen_domctl_createdomain config = {
         .ssidref = SECINITSID_DOMU,
         .flags = XEN_DOMCTL_CDF_xs_domain,
+        .max_evtchn_port = -1, /* No limit. */
     };
 
     xs_fd = open("/dev/xen/xenbus_backend", O_RDWR);
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 2c4ac32..c626984 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1082,18 +1082,6 @@ int xc_domain_set_access_required(xc_interface *xch,
 int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq);
 
 /**
- * Set the maximum event channel port a domain may bind.
- *
- * This does not affect ports that are already bound.
- *
- * @param xch a handle to an open hypervisor interface
- * @param domid the domain id
- * @param max_port maximum port number
- */
-int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
-                             uint32_t max_port);
-
-/**
  * Set the maximum number of grant frames and maptrack frames a domain
  * can have. Must be used at domain setup time and only then.
  *
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 0124cea..2bc695c 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2256,17 +2256,6 @@ int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq)
     return do_domctl(xch, &domctl);
 }
 
-int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
-                             uint32_t max_port)
-{
-    DECLARE_DOMCTL;
-
-    domctl.cmd = XEN_DOMCTL_set_max_evtchn;
-    domctl.domain = domid;
-    domctl.u.set_max_evtchn.max_port = max_port;
-    return do_domctl(xch, &domctl);
-}
-
 int xc_domain_set_gnttab_limits(xc_interface *xch, uint32_t domid,
                                 uint32_t grant_frames,
                                 uint32_t maptrack_frames)
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index dd9d8c8..b7b44e2 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -554,6 +554,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     /* convenience aliases */
     libxl_domain_create_info *info = &d_config->c_info;
+    libxl_domain_build_info *b_info = &d_config->b_info;
 
     uuid_string = libxl__uuid2string(gc, info->uuid);
     if (!uuid_string) {
@@ -565,6 +566,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     if (!libxl_domid_valid_guest(*domid)) {
         struct xen_domctl_createdomain create = {
             .ssidref = info->ssidref,
+            .max_evtchn_port = b_info->event_channels,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index c8a1dc7..eb401cf 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -590,13 +590,6 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid,
     if (rc)
         return rc;
 
-    rc = xc_domain_set_max_evtchn(ctx->xch, domid, info->event_channels);
-    if (rc) {
-        LOG(ERROR, "Failed to set event channel limit to %d (%d)",
-            info->event_channels, rc);
-        return ERROR_FAIL;
-    }
-
     libxl_cpuid_apply_policy(ctx, domid);
     if (info->cpuid != NULL)
         libxl_cpuid_set(ctx, domid, info->cpuid);
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 3b7526e..219355a 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -63,6 +63,7 @@ type domctl_create_config =
 	ssidref: int32;
 	handle: string;
 	flags: domain_create_flag list;
+	max_evtchn_port: int;
 	arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index d103a33..c0c724b 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -55,6 +55,7 @@ type domctl_create_config = {
   ssidref: int32;
   handle: string;
   flags: domain_create_flag list;
+  max_evtchn_port: int;
   arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 1206eae..9c8457b 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -128,12 +128,14 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #define VAL_SSIDREF             Field(config, 0)
 #define VAL_HANDLE              Field(config, 1)
 #define VAL_FLAGS               Field(config, 2)
-#define VAL_ARCH                Field(config, 3)
+#define VAL_MAX_EVTCHN_PORT     Field(config, 3)
+#define VAL_ARCH                Field(config, 4)
 
 	uint32_t domid = 0;
 	int result;
 	struct xen_domctl_createdomain cfg = {
 		.ssidref = Int32_val(VAL_SSIDREF),
+		.max_evtchn_port = Int_val(VAL_MAX_EVTCHN_PORT),
 	};
 
 	domain_handle_of_uuid_string(cfg.handle, String_val(VAL_HANDLE));
@@ -169,6 +171,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 	}
 
 #undef VAL_ARCH
+#undef VAL_MAX_EVTCHN_PORT
 #undef VAL_FLAGS
 #undef VAL_HANDLE
 #undef VAL_SSIDREF
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 5a2923a..4dc6d1c 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -125,6 +125,7 @@ static PyObject *pyxc_domain_create(XcObject *self,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
         },
+        .max_evtchn_port = -1, /* No limit. */
     };
 
     static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index c86dc21..3a68fc9 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -540,6 +540,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             break;
         }
 
+        d->max_evtchn_port = min_t(unsigned int,
+                                   op->u.createdomain.max_evtchn_port, INT_MAX);
+
         ret = 0;
         op->domain = d->domain_id;
         copyback = 1;
@@ -1103,12 +1106,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = set_global_virq_handler(d, op->u.set_virq_handler.virq);
         break;
 
-    case XEN_DOMCTL_set_max_evtchn:
-        d->max_evtchn_port = min_t(unsigned int,
-                                   op->u.set_max_evtchn.max_port,
-                                   INT_MAX);
-        break;
-
     case XEN_DOMCTL_setvnumainfo:
     {
         struct vnuma_info *vnuma;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 5c3916c..a945382 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -65,6 +65,13 @@ struct xen_domctl_createdomain {
 #define _XEN_DOMCTL_CDF_xs_domain     4
 #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)
     uint32_t flags;
+
+    /*
+     * Various domain limits, which impact the quantity of resources (global
+     * mapping space, xenheap, etc) a guest may consume.
+     */
+    uint32_t max_evtchn_port;
+
     struct xen_arch_domainconfig arch;
 };
 
@@ -875,15 +882,6 @@ struct xen_domctl_set_broken_page_p2m {
 };
 
 /*
- * XEN_DOMCTL_set_max_evtchn: sets the maximum event channel port
- * number the guest may use.  Use this limit the amount of resources
- * (global mapping space, xenheap) a guest may use for event channels.
- */
-struct xen_domctl_set_max_evtchn {
-    uint32_t max_port;
-};
-
-/*
  * ARM: Clean and invalidate caches associated with given region of
  * guest memory.
  */
@@ -1163,7 +1161,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_set_broken_page_p2m           67
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
-#define XEN_DOMCTL_set_max_evtchn                70
+/* #define XEN_DOMCTL_set_max_evtchn             70 - Moved into XEN_DOMCTL_createdomain */
 #define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_get_vcpu_msrs                 72
 #define XEN_DOMCTL_set_vcpu_msrs                 73
@@ -1224,7 +1222,6 @@ struct xen_domctl {
         struct xen_domctl_set_access_required access_required;
         struct xen_domctl_audit_p2m         audit_p2m;
         struct xen_domctl_set_virq_handler  set_virq_handler;
-        struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_cacheflush        cacheflush;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7a3ccfa..a4fbe62 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -728,9 +728,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_audit_p2m:
         return current_has_perm(d, SECCLASS_HVM, HVM__AUDIT_P2M);
 
-    case XEN_DOMCTL_set_max_evtchn:
-        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
-
     case XEN_DOMCTL_cacheflush:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
 
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c5d8548..b768870 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -222,8 +222,6 @@ class domain2
     setscheduler
 # XENMEM_claim_pages
     setclaim
-# XEN_DOMCTL_set_max_evtchn
-    set_max_evtchn
 # XEN_DOMCTL_cacheflush
     cacheflush
 # Creation of the hardware domain when it is not dom0
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init()
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 03/12] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 14:07   ` Roger Pau Monné
                     ` (2 more replies)
  2018-08-13 10:01 ` [PATCH v2 05/12] tools: Pass grant table limits to XEN_DOMCTL_set_gnttab_limits Andrew Cooper
                   ` (8 subsequent siblings)
  12 siblings, 3 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich

... rather than setting it up once domain_create() has completed.  This
involves constructing a default value for dom0.

No practical change in functionality.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/arm/setup.c       | 4 +++-
 xen/arch/x86/setup.c       | 1 +
 xen/common/domain.c        | 2 +-
 xen/common/domctl.c        | 3 ---
 xen/common/event_channel.c | 4 ++--
 xen/include/xen/sched.h    | 2 +-
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7d40a84..45f3841 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -691,7 +691,9 @@ void __init start_xen(unsigned long boot_phys_offset,
     const char *cmdline;
     struct bootmodule *xen_bootmodule;
     struct domain *dom0;
-    struct xen_domctl_createdomain dom0_cfg = {};
+    struct xen_domctl_createdomain dom0_cfg = {
+        .max_evtchn_port = -1,
+    };
 
     dcache_line_bytes = read_dcache_line_bytes();
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 8301de8..015099f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -681,6 +681,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     };
     struct xen_domctl_createdomain dom0_cfg = {
         .flags = XEN_DOMCTL_CDF_s3_integrity,
+        .max_evtchn_port = -1,
     };
 
     /* Critical region without IDT or TSS.  Any fault is deadly! */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 749722b..171d25e 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -362,7 +362,7 @@ struct domain *domain_create(domid_t domid,
 
         radix_tree_init(&d->pirq_tree);
 
-        if ( (err = evtchn_init(d)) != 0 )
+        if ( (err = evtchn_init(d, config->max_evtchn_port)) != 0 )
             goto fail;
         init_status |= INIT_evtchn;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 3a68fc9..0ef554a 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -540,9 +540,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             break;
         }
 
-        d->max_evtchn_port = min_t(unsigned int,
-                                   op->u.createdomain.max_evtchn_port, INT_MAX);
-
         ret = 0;
         op->domain = d->domain_id;
         copyback = 1;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index c620465..41cbbae 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1284,10 +1284,10 @@ void evtchn_check_pollers(struct domain *d, unsigned int port)
     }
 }
 
-int evtchn_init(struct domain *d)
+int evtchn_init(struct domain *d, unsigned int max_port)
 {
     evtchn_2l_init(d);
-    d->max_evtchn_port = INT_MAX;
+    d->max_evtchn_port = min_t(unsigned int, max_port, INT_MAX);
 
     d->evtchn = alloc_evtchn_bucket(d, 0);
     if ( !d->evtchn )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3c35473..51ceebe 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -134,7 +134,7 @@ struct evtchn
 #endif
 } __attribute__((aligned(64)));
 
-int  evtchn_init(struct domain *d); /* from domain_create */
+int  evtchn_init(struct domain *d, unsigned int max_port);
 void evtchn_destroy(struct domain *d); /* from domain_kill */
 void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
 
-- 
2.1.4



* [PATCH v2 05/12] tools: Pass grant table limits to XEN_DOMCTL_set_gnttab_limits
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-13 10:01 ` [PATCH v2 06/12] xen/gnttab: Pass max_{grant,maptrack}_frames into grant_table_create() Andrew Cooper
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Marek Marczykowski-Górecki

XEN_DOMCTL_set_gnttab_limits is a fairly new hypercall, and is strictly
mandatory.  As it pertains to domain limits, it should be provided at
createdomain time.

In preparation to remove the hypercall, extend xen_domctl_createdomain with
the fields and arrange for all callers to pass appropriate details.  There is
no change in construction behaviour yet, but later patches will rearrange the
hypervisor internals, then delete the hypercall.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

v2:
 * Split out of previous v1 patch to avoid the post-domain-create error path.
   Retain appropriate acks.
 * Use int rather than int32 in the ocaml stubs.
---
 tools/helpers/init-xenstore-domain.c | 16 ++++++++++------
 tools/libxl/libxl_create.c           |  2 ++
 tools/ocaml/libs/xc/xenctrl.ml       |  2 ++
 tools/ocaml/libs/xc/xenctrl.mli      |  2 ++
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  8 +++++++-
 tools/python/xen/lowlevel/xc/xc.c    |  2 ++
 xen/include/public/domctl.h          |  2 ++
 7 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 89c329c..cd27edc 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -67,6 +67,14 @@ static int build(xc_interface *xch)
         .ssidref = SECINITSID_DOMU,
         .flags = XEN_DOMCTL_CDF_xs_domain,
         .max_evtchn_port = -1, /* No limit. */
+
+        /*
+         * 1 grant frame is enough: we don't need many grants.
+         * Mini-OS doesn't like less than 4, though, so use 4.
+         * 128 maptrack frames: 256 entries per frame, enough for 32768 domains.
+         */
+        .max_grant_frames = 4,
+        .max_maptrack_frames = 128,
     };
 
     xs_fd = open("/dev/xen/xenbus_backend", O_RDWR);
@@ -104,12 +112,8 @@ static int build(xc_interface *xch)
         fprintf(stderr, "xc_domain_setmaxmem failed\n");
         goto err;
     }
-    /*
-     * 1 grant frame is enough: we don't need many grants.
-     * Mini-OS doesn't like less than 4, though, so use 4.
-     * 128 maptrack frames: 256 entries per frame, enough for 32768 domains.
-     */
-    rv = xc_domain_set_gnttab_limits(xch, domid, 4, 128);
+    rv = xc_domain_set_gnttab_limits(xch, domid, config.max_grant_frames,
+                                     config.max_maptrack_frames);
     if ( rv )
     {
         fprintf(stderr, "xc_domain_set_gnttab_limits failed\n");
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index b7b44e2..8b755e4 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -567,6 +567,8 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         struct xen_domctl_createdomain create = {
             .ssidref = info->ssidref,
             .max_evtchn_port = b_info->event_channels,
+            .max_grant_frames = b_info->max_grant_frames,
+            .max_maptrack_frames = b_info->max_maptrack_frames,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 219355a..42f45c4 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -64,6 +64,8 @@ type domctl_create_config =
 	handle: string;
 	flags: domain_create_flag list;
 	max_evtchn_port: int;
+	max_grant_frames: int;
+	max_maptrack_frames: int;
 	arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index c0c724b..0db5816 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -56,6 +56,8 @@ type domctl_create_config = {
   handle: string;
   flags: domain_create_flag list;
   max_evtchn_port: int;
+  max_grant_frames: int;
+  max_maptrack_frames: int;
   arch: arch_domainconfig;
 }
 
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 9c8457b..a9759e0 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -129,13 +129,17 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #define VAL_HANDLE              Field(config, 1)
 #define VAL_FLAGS               Field(config, 2)
 #define VAL_MAX_EVTCHN_PORT     Field(config, 3)
-#define VAL_ARCH                Field(config, 4)
+#define VAL_MAX_GRANT_FRAMES    Field(config, 4)
+#define VAL_MAX_MAPTRACK_FRAMES Field(config, 5)
+#define VAL_ARCH                Field(config, 6)
 
 	uint32_t domid = 0;
 	int result;
 	struct xen_domctl_createdomain cfg = {
 		.ssidref = Int32_val(VAL_SSIDREF),
 		.max_evtchn_port = Int_val(VAL_MAX_EVTCHN_PORT),
+		.max_grant_frames = Int_val(VAL_MAX_GRANT_FRAMES),
+		.max_maptrack_frames = Int_val(VAL_MAX_MAPTRACK_FRAMES),
 	};
 
 	domain_handle_of_uuid_string(cfg.handle, String_val(VAL_HANDLE));
@@ -171,6 +175,8 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 	}
 
 #undef VAL_ARCH
+#undef VAL_MAX_MAPTRACK_FRAMES
+#undef VAL_MAX_GRANT_FRAMES
 #undef VAL_MAX_EVTCHN_PORT
 #undef VAL_FLAGS
 #undef VAL_HANDLE
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 4dc6d1c..6bd58ec 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -126,6 +126,8 @@ static PyObject *pyxc_domain_create(XcObject *self,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
         },
         .max_evtchn_port = -1, /* No limit. */
+        .max_grant_frames = 32,
+        .max_maptrack_frames = 1024,
     };
 
     static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index a945382..676e686 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -71,6 +71,8 @@ struct xen_domctl_createdomain {
      * mapping space, xenheap, etc) a guest may consume.
      */
     uint32_t max_evtchn_port;
+    uint32_t max_grant_frames;
+    uint32_t max_maptrack_frames;
 
     struct xen_arch_domainconfig arch;
 };
-- 
2.1.4



* [PATCH v2 06/12] xen/gnttab: Pass max_{grant,maptrack}_frames into grant_table_create()
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (4 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 05/12] tools: Pass grant table limits to XEN_DOMCTL_set_gnttab_limits Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 14:17   ` Roger Pau Monné
                     ` (3 more replies)
  2018-08-13 10:01 ` [PATCH v2 07/12] xen/domctl: Remove XEN_DOMCTL_set_gnttab_limits Andrew Cooper
                   ` (6 subsequent siblings)
  12 siblings, 4 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich

... rather than setting the limits up after domain_create() has completed.

This removes all the gnttab infrastructure for calculating the number of dom0
grant frames, opting instead to require the dom0 construction code to pass a
sane value in via the configuration.

In practice, this now means that there is never a partially constructed grant
table for a reference-able domain.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Wei Liu <wei.liu2@citrix.com>

v2:
 * Split/rearrange to avoid the post-domain-create error path.
---
 xen/arch/arm/domain_build.c       |  3 ++-
 xen/arch/arm/setup.c              | 12 ++++++++++++
 xen/arch/x86/setup.c              |  3 +++
 xen/common/domain.c               |  3 ++-
 xen/common/grant_table.c          | 16 +++-------------
 xen/include/asm-arm/grant_table.h | 12 ------------
 xen/include/asm-x86/grant_table.h |  5 -----
 xen/include/xen/grant_table.h     |  6 ++----
 8 files changed, 24 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 1351572..737e0f3 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2079,7 +2079,8 @@ static void __init find_gnttab_region(struct domain *d,
      * enough space for a large grant table
      */
     kinfo->gnttab_start = __pa(_stext);
-    kinfo->gnttab_size = gnttab_dom0_frames() << PAGE_SHIFT;
+    kinfo->gnttab_size = min_t(unsigned int, opt_max_grant_frames,
+                               PFN_DOWN(_etext - _stext)) << PAGE_SHIFT;
 
 #ifdef CONFIG_ARM_32
     /*
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 45f3841..3d3b30c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -20,6 +20,7 @@
 #include <xen/compile.h>
 #include <xen/device_tree.h>
 #include <xen/domain_page.h>
+#include <xen/grant_table.h>
 #include <xen/types.h>
 #include <xen/string.h>
 #include <xen/serial.h>
@@ -693,6 +694,17 @@ void __init start_xen(unsigned long boot_phys_offset,
     struct domain *dom0;
     struct xen_domctl_createdomain dom0_cfg = {
         .max_evtchn_port = -1,
+
+        /*
+         * The region used by Xen on the memory will never be mapped in DOM0
+         * memory layout. Therefore it can be used for the grant table.
+         *
+         * Only use the text section as it's always present and will contain
+         * enough space for a large grant table
+         */
+        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
+                                  PFN_DOWN(_etext - _stext)),
+        .max_maptrack_frames = opt_max_maptrack_frames,
     };
 
     dcache_line_bytes = read_dcache_line_bytes();
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 015099f..2cfae89 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1,6 +1,7 @@
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/err.h>
+#include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/sched-if.h>
 #include <xen/domain.h>
@@ -682,6 +683,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     struct xen_domctl_createdomain dom0_cfg = {
         .flags = XEN_DOMCTL_CDF_s3_integrity,
         .max_evtchn_port = -1,
+        .max_grant_frames = opt_max_grant_frames,
+        .max_maptrack_frames = opt_max_maptrack_frames,
     };
 
     /* Critical region without IDT or TSS.  Any fault is deadly! */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 171d25e..1dcab8d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -366,7 +366,8 @@ struct domain *domain_create(domid_t domid,
             goto fail;
         init_status |= INIT_evtchn;
 
-        if ( (err = grant_table_create(d)) != 0 )
+        if ( (err = grant_table_create(d, config->max_grant_frames,
+                                       config->max_maptrack_frames)) != 0 )
             goto fail;
         init_status |= INIT_gnttab;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 8b843b1..3200542 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3563,9 +3563,8 @@ do_grant_table_op(
 #include "compat/grant_table.c"
 #endif
 
-int
-grant_table_create(
-    struct domain *d)
+int grant_table_create(struct domain *d, unsigned int max_grant_frames,
+                       unsigned int max_maptrack_frames)
 {
     struct grant_table *t;
     int ret = 0;
@@ -3583,11 +3582,7 @@ grant_table_create(
     t->domain = d;
     d->grant_table = t;
 
-    if ( d->domain_id == 0 )
-    {
-        ret = grant_table_init(d, t, gnttab_dom0_frames(),
-                               opt_max_maptrack_frames);
-    }
+    ret = grant_table_set_limits(d, max_grant_frames, max_maptrack_frames);
 
     return ret;
 }
@@ -4045,11 +4040,6 @@ static int __init gnttab_usage_init(void)
 }
 __initcall(gnttab_usage_init);
 
-unsigned int __init gnttab_dom0_frames(void)
-{
-    return min(opt_max_grant_frames, gnttab_dom0_max());
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index 9c2c815..ee11c38 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -23,18 +23,6 @@ void gnttab_mark_dirty(struct domain *d, mfn_t mfn);
 #define gnttab_create_status_page(d, t, i) do {} while (0)
 #define gnttab_release_host_mappings(domain) 1
 
-/*
- * The region used by Xen on the memory will never be mapped in DOM0
- * memory layout. Therefore it can be used for the grant table.
- *
- * Only use the text section as it's always present and will contain
- * enough space for a large grant table
- */
-static inline unsigned int gnttab_dom0_max(void)
-{
-    return PFN_DOWN(_etext - _stext);
-}
-
 #define gnttab_init_arch(gt)                                             \
 ({                                                                       \
     unsigned int ngf_ = (gt)->max_grant_frames;                          \
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 76ec5dd..761a8c3 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -39,11 +39,6 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame,
     return replace_grant_pv_mapping(addr, frame, new_addr, flags);
 }
 
-static inline unsigned int gnttab_dom0_max(void)
-{
-    return UINT_MAX;
-}
-
 #define gnttab_init_arch(gt) 0
 #define gnttab_destroy_arch(gt) do {} while ( 0 )
 #define gnttab_set_frame_gfn(gt, st, idx, gfn) do {} while ( 0 )
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index c881414..b46bb0a 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -35,8 +35,8 @@ extern unsigned int opt_max_grant_frames;
 extern unsigned int opt_max_maptrack_frames;
 
 /* Create/destroy per-domain grant table context. */
-int grant_table_create(
-    struct domain *d);
+int grant_table_create(struct domain *d, unsigned int max_grant_frames,
+                       unsigned int max_maptrack_frames);
 void grant_table_destroy(
     struct domain *d);
 void grant_table_init_vcpu(struct vcpu *v);
@@ -63,6 +63,4 @@ int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
 int gnttab_get_status_frame(struct domain *d, unsigned long idx,
                             mfn_t *mfn);
 
-unsigned int gnttab_dom0_frames(void);
-
 #endif /* __XEN_GRANT_TABLE_H__ */
-- 
2.1.4



* [PATCH v2 07/12] xen/domctl: Remove XEN_DOMCTL_set_gnttab_limits
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (5 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 06/12] xen/gnttab: Pass max_{grant,maptrack}_frames into grant_table_create() Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 14:19   ` Roger Pau Monné
  2018-08-13 10:01 ` [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create,set_limits}() into grant_table_init() Andrew Cooper
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich

Now that XEN_DOMCTL_createdomain handles the grant table limits, remove
XEN_DOMCTL_set_gnttab_limits (including XSM hooks and libxc wrappers).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>

v2:
 * Split/rearrange to avoid the post-domain-create error path.  Retain
   appropriate acks.
---
 tools/flask/policy/modules/dom0.te   |  2 +-
 tools/flask/policy/modules/xen.if    |  2 +-
 tools/helpers/init-xenstore-domain.c |  7 -------
 tools/libxc/include/xenctrl.h        | 13 -------------
 tools/libxc/xc_domain.c              | 13 -------------
 tools/libxl/libxl_dom.c              |  6 ------
 xen/common/domctl.c                  |  5 -----
 xen/include/public/domctl.h          |  8 +-------
 xen/xsm/flask/hooks.c                |  3 ---
 xen/xsm/flask/policy/access_vectors  |  2 --
 10 files changed, 3 insertions(+), 58 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 4eb3843..dfdcdcd 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -39,7 +39,7 @@ allow dom0_t dom0_t:domain {
 };
 allow dom0_t dom0_t:domain2 {
 	set_cpuid gettsc settsc setscheduler set_vnumainfo
-	get_vnumainfo psr_cmt_op psr_alloc set_gnttab_limits
+	get_vnumainfo psr_cmt_op psr_alloc
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 61b0e76..4e06cfc 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -52,7 +52,7 @@ define(`create_domain_common', `
 			settime setdomainhandle getvcpucontext set_misc_info };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
 			set_vnumainfo get_vnumainfo cacheflush
-			psr_cmt_op psr_alloc soft_reset set_gnttab_limits
+			psr_cmt_op psr_alloc soft_reset
 			resource_map };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index cd27edc..4771750 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -112,13 +112,6 @@ static int build(xc_interface *xch)
         fprintf(stderr, "xc_domain_setmaxmem failed\n");
         goto err;
     }
-    rv = xc_domain_set_gnttab_limits(xch, domid, config.max_grant_frames,
-                                     config.max_maptrack_frames);
-    if ( rv )
-    {
-        fprintf(stderr, "xc_domain_set_gnttab_limits failed\n");
-        goto err;
-    }
     rv = xc_domain_set_memmap_limit(xch, domid, limit_kb);
     if ( rv )
     {
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index c626984..bb75bcc 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1081,19 +1081,6 @@ int xc_domain_set_access_required(xc_interface *xch,
  */
 int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq);
 
-/**
- * Set the maximum number of grant frames and maptrack frames a domain
- * can have. Must be used at domain setup time and only then.
- *
- * @param xch a handle to an open hypervisor interface
- * @param domid the domain id
- * @param grant_frames max. number of grant frames
- * @param maptrack_frames max. number of maptrack frames
- */
-int xc_domain_set_gnttab_limits(xc_interface *xch, uint32_t domid,
-                                uint32_t grant_frames,
-                                uint32_t maptrack_frames);
-
 /*
  * CPUPOOL MANAGEMENT FUNCTIONS
  */
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 2bc695c..e8d0734 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2256,19 +2256,6 @@ int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq)
     return do_domctl(xch, &domctl);
 }
 
-int xc_domain_set_gnttab_limits(xc_interface *xch, uint32_t domid,
-                                uint32_t grant_frames,
-                                uint32_t maptrack_frames)
-{
-    DECLARE_DOMCTL;
-
-    domctl.cmd = XEN_DOMCTL_set_gnttab_limits;
-    domctl.domain = domid;
-    domctl.u.set_gnttab_limits.grant_frames = grant_frames;
-    domctl.u.set_gnttab_limits.maptrack_frames = maptrack_frames;
-    return do_domctl(xch, &domctl);
-}
-
 /* Plumbing Xen with vNUMA topology */
 int xc_domain_setvnuma(xc_interface *xch,
                        uint32_t domid,
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index eb401cf..8a8a32c 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -358,12 +358,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         return ERROR_FAIL;
     }
 
-    if (xc_domain_set_gnttab_limits(ctx->xch, domid, info->max_grant_frames,
-                                    info->max_maptrack_frames) != 0) {
-        LOG(ERROR, "Couldn't set grant table limits");
-        return ERROR_FAIL;
-    }
-
     /*
      * Check if the domain has any CPU or node affinity already. If not, try
      * to build up the latter via automatic NUMA placement. In fact, in case
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 0ef554a..58e51b2 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1129,11 +1129,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
-    case XEN_DOMCTL_set_gnttab_limits:
-        ret = grant_table_set_limits(d, op->u.set_gnttab_limits.grant_frames,
-                                     op->u.set_gnttab_limits.maptrack_frames);
-        break;
-
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 676e686..e01fe06 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1069,11 +1069,6 @@ struct xen_domctl_psr_alloc {
     uint64_t data;      /* IN/OUT */
 };
 
-struct xen_domctl_set_gnttab_limits {
-    uint32_t grant_frames;     /* IN */
-    uint32_t maptrack_frames;  /* IN */
-};
-
 /* XEN_DOMCTL_vuart_op */
 struct xen_domctl_vuart_op {
 #define XEN_DOMCTL_VUART_OP_INIT  0
@@ -1172,7 +1167,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_psr_alloc                     78
 #define XEN_DOMCTL_soft_reset                    79
-#define XEN_DOMCTL_set_gnttab_limits             80
+/* #define XEN_DOMCTL_set_gnttab_limits          80 - Moved into XEN_DOMCTL_createdomain */
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
@@ -1233,7 +1228,6 @@ struct xen_domctl {
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
-        struct xen_domctl_set_gnttab_limits set_gnttab_limits;
         struct xen_domctl_vuart_op          vuart_op;
         uint8_t                             pad[128];
     } u;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a4fbe62..500af2c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -742,9 +742,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_soft_reset:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SOFT_RESET);
 
-    case XEN_DOMCTL_set_gnttab_limits:
-        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_GNTTAB_LIMITS);
-
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index b768870..d01a7a0 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -246,8 +246,6 @@ class domain2
     mem_sharing
 # XEN_DOMCTL_psr_alloc
     psr_alloc
-# XEN_DOMCTL_set_gnttab_limits
-    set_gnttab_limits
 # XENMEM_resource_map
     resource_map
 }
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init()
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (6 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 07/12] xen/domctl: Remove XEN_DOMCTL_set_gnttab_limits Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 14:31   ` Roger Pau Monné
  2018-08-15 12:54   ` Jan Beulich
  2018-08-13 10:01 ` [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
                   ` (4 subsequent siblings)
  12 siblings, 2 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich

Now that the max_{grant,maptrack}_frames are specified from the very beginning
of grant table construction, the various initialisation functions can be
folded together and simplified as a result.

Leave grant_table_init() as the public interface, which is more consistent
with other subsystems.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Wei Liu <wei.liu2@citrix.com>

v2:
 * Fix the cleanup path to avoid memory leaks.
---
 xen/common/domain.c           |  4 +-
 xen/common/grant_table.c      | 93 +++++++++++++------------------------------
 xen/include/xen/grant_table.h |  6 +--
 3 files changed, 31 insertions(+), 72 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1dcab8d..be51426 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -366,8 +366,8 @@ struct domain *domain_create(domid_t domid,
             goto fail;
         init_status |= INIT_evtchn;
 
-        if ( (err = grant_table_create(d, config->max_grant_frames,
-                                       config->max_maptrack_frames)) != 0 )
+        if ( (err = grant_table_init(d, config->max_grant_frames,
+                                     config->max_maptrack_frames)) != 0 )
             goto fail;
         init_status |= INIT_gnttab;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 3200542..a60d166 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1802,22 +1802,31 @@ gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
     return -ENOMEM;
 }
 
-static int
-grant_table_init(struct domain *d, struct grant_table *gt,
-                 unsigned int grant_frames, unsigned int maptrack_frames)
+int grant_table_init(struct domain *d, unsigned int max_grant_frames,
+                     unsigned int max_maptrack_frames)
 {
+    struct grant_table *gt;
     int ret = -ENOMEM;
 
-    grant_write_lock(gt);
+    if ( max_grant_frames < INITIAL_NR_GRANT_FRAMES ||
+         max_grant_frames > opt_max_grant_frames ||
+         max_maptrack_frames > opt_max_maptrack_frames )
+        return -EINVAL;
 
-    if ( gt->active )
-    {
-        ret = -EBUSY;
-        goto out_no_cleanup;
-    }
+    if ( (gt = xzalloc(struct grant_table)) == NULL )
+        return -ENOMEM;
+
+    /* Simple stuff. */
+    percpu_rwlock_resource_init(&gt->lock, grant_rwlock);
+    spin_lock_init(&gt->maptrack_lock);
+
+    gt->gt_version = 1;
+    gt->max_grant_frames = max_grant_frames;
+    gt->max_maptrack_frames = max_maptrack_frames;
 
-    gt->max_grant_frames = grant_frames;
-    gt->max_maptrack_frames = maptrack_frames;
+    /* Install the structure early to simplify the error path. */
+    gt->domain = d;
+    d->grant_table = gt;
 
     /* Active grant table. */
     gt->active = xzalloc_array(struct active_grant_entry *,
@@ -1844,29 +1853,21 @@ grant_table_init(struct domain *d, struct grant_table *gt,
     if ( gt->status == NULL )
         goto out;
 
+    grant_write_lock(gt);
+
     ret = gnttab_init_arch(gt);
     if ( ret )
-        goto out;
+        goto unlock;
 
     /* gnttab_grow_table() allocates a min number of frames, so 0 is okay. */
     ret = gnttab_grow_table(d, 0);
 
+ unlock:
+    grant_write_unlock(gt);
+
  out:
     if ( ret )
-    {
-        gnttab_destroy_arch(gt);
-        xfree(gt->status);
-        gt->status = NULL;
-        xfree(gt->shared_raw);
-        gt->shared_raw = NULL;
-        vfree(gt->maptrack);
-        gt->maptrack = NULL;
-        xfree(gt->active);
-        gt->active = NULL;
-    }
-
- out_no_cleanup:
-    grant_write_unlock(gt);
+        grant_table_destroy(d);
 
     return ret;
 }
@@ -3563,30 +3564,6 @@ do_grant_table_op(
 #include "compat/grant_table.c"
 #endif
 
-int grant_table_create(struct domain *d, unsigned int max_grant_frames,
-                       unsigned int max_maptrack_frames)
-{
-    struct grant_table *t;
-    int ret = 0;
-
-    if ( (t = xzalloc(struct grant_table)) == NULL )
-        return -ENOMEM;
-
-    /* Simple stuff. */
-    percpu_rwlock_resource_init(&t->lock, grant_rwlock);
-    spin_lock_init(&t->maptrack_lock);
-
-    t->gt_version = 1;
-
-    /* Okay, install the structure. */
-    t->domain = d;
-    d->grant_table = t;
-
-    ret = grant_table_set_limits(d, max_maptrack_frames, max_maptrack_frames);
-
-    return ret;
-}
-
 void
 gnttab_release_mappings(
     struct domain *d)
@@ -3777,22 +3754,6 @@ void grant_table_init_vcpu(struct vcpu *v)
     v->maptrack_tail = MAPTRACK_TAIL;
 }
 
-int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
-                           unsigned int maptrack_frames)
-{
-    struct grant_table *gt = d->grant_table;
-
-    if ( grant_frames < INITIAL_NR_GRANT_FRAMES ||
-         grant_frames > opt_max_grant_frames ||
-         maptrack_frames > opt_max_maptrack_frames )
-        return -EINVAL;
-    if ( !gt )
-        return -ENOENT;
-
-    /* Set limits. */
-    return grant_table_init(d, gt, grant_frames, maptrack_frames);
-}
-
 #ifdef CONFIG_HAS_MEM_SHARING
 int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
                             gfn_t *gfn, uint16_t *status)
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index b46bb0a..12e8a4b 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -35,13 +35,11 @@ extern unsigned int opt_max_grant_frames;
 extern unsigned int opt_max_maptrack_frames;
 
 /* Create/destroy per-domain grant table context. */
-int grant_table_create(struct domain *d, unsigned int max_grant_frames,
-                       unsigned int max_maptrack_frames);
+int grant_table_init(struct domain *d, unsigned int max_grant_frames,
+                     unsigned int max_maptrack_frames);
 void grant_table_destroy(
     struct domain *d);
 void grant_table_init_vcpu(struct vcpu *v);
-int grant_table_set_limits(struct domain *d, unsigned int grant_frames,
-                           unsigned int maptrack_frames);
 
 /*
  * Check if domain has active grants and log first 10 of them.
-- 
2.1.4



* [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create()
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (7 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 14:37   ` Roger Pau Monné
                     ` (2 more replies)
  2018-08-13 10:01 ` [PATCH v2 10/12] tools: Pass max_vcpus to XEN_DOMCTL_createdomain Andrew Cooper
                   ` (3 subsequent siblings)
  12 siblings, 3 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich

This is in preparation to set up d->max_vcpus and d->vcpu[] in domain_create(),
and allow later parts of domain construction to have access to the values.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/domain.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index be51426..0c44f27 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -322,6 +322,23 @@ struct domain *domain_create(domid_t domid,
         else
             d->guest_type = guest_type_pv;
 
+        if ( !is_hardware_domain(d) )
+            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
+        else
+            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
+                                           : arch_hwdom_irqs(domid);
+        if ( d->nr_pirqs > nr_irqs )
+            d->nr_pirqs = nr_irqs;
+
+        radix_tree_init(&d->pirq_tree);
+    }
+
+    if ( (err = arch_domain_create(d, config)) != 0 )
+        goto fail;
+    init_status |= INIT_arch;
+
+    if ( !is_idle_domain(d) )
+    {
         watchdog_domain_init(d);
         init_status |= INIT_watchdog;
 
@@ -352,16 +369,6 @@ struct domain *domain_create(domid_t domid,
         d->controller_pause_count = 1;
         atomic_inc(&d->pause_count);
 
-        if ( !is_hardware_domain(d) )
-            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
-        else
-            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
-                                           : arch_hwdom_irqs(domid);
-        if ( d->nr_pirqs > nr_irqs )
-            d->nr_pirqs = nr_irqs;
-
-        radix_tree_init(&d->pirq_tree);
-
         if ( (err = evtchn_init(d, config->max_evtchn_port)) != 0 )
             goto fail;
         init_status |= INIT_evtchn;
@@ -376,14 +383,7 @@ struct domain *domain_create(domid_t domid,
         d->pbuf = xzalloc_array(char, DOMAIN_PBUF_SIZE);
         if ( !d->pbuf )
             goto fail;
-    }
-
-    if ( (err = arch_domain_create(d, config)) != 0 )
-        goto fail;
-    init_status |= INIT_arch;
 
-    if ( !is_idle_domain(d) )
-    {
         if ( (err = sched_init_domain(d, 0)) != 0 )
             goto fail;
 
-- 
2.1.4



* [PATCH v2 10/12] tools: Pass max_vcpus to XEN_DOMCTL_createdomain
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (8 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-13 10:01 ` [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Marek Marczykowski-Górecki

XEN_DOMCTL_max_vcpus is a mandatory hypercall, but nothing actually prevents a
toolstack from unpausing a domain with no vcpus.

Originally, d->vcpu[] was an embedded array in struct domain, but c/s
fb442e217 "x86_64: allow more vCPU-s per guest" in Xen 4.0 altered it to being
dynamically allocated.  A side effect of this is that d->vcpu[] is NULL until
XEN_DOMCTL_max_vcpus has completed, but a lot of hypercalls blindly
dereference it.

Even today, the behaviour of XEN_DOMCTL_max_vcpus is a mandatory singleton
call which can't change the number of vcpus once a value has been chosen.

In preparation to remove the hypercall, extend xen_domctl_createdomain with
a max_vcpus field and arrange for all callers to pass the appropriate
value.  There is no change in construction behaviour yet, but later patches
will rearrange the hypervisor internals.

For the python stubs, extend the domain_create keyword list to take a
max_vcpus parameter, in lieu of deleting the pyxc_domain_max_vcpus function.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

v2:
 * Split out of previous v1 patch to avoid the post-domain-create error path.
   Retain appropriate acks.
 * Use int rather than int32 in the ocaml stubs.
---
 tools/helpers/init-xenstore-domain.c |  3 ++-
 tools/libxl/libxl_create.c           |  1 +
 tools/ocaml/libs/xc/xenctrl.ml       |  1 +
 tools/ocaml/libs/xc/xenctrl.mli      |  1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c  | 11 +++++++----
 tools/python/xen/lowlevel/xc/xc.c    |  9 ++++++---
 xen/include/public/domctl.h          |  1 +
 7 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 4771750..3236d14 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -66,6 +66,7 @@ static int build(xc_interface *xch)
     struct xen_domctl_createdomain config = {
         .ssidref = SECINITSID_DOMU,
         .flags = XEN_DOMCTL_CDF_xs_domain,
+        .max_vcpus = 1,
         .max_evtchn_port = -1, /* No limit. */
 
         /*
@@ -100,7 +101,7 @@ static int build(xc_interface *xch)
         fprintf(stderr, "xc_domain_create failed\n");
         goto err;
     }
-    rv = xc_domain_max_vcpus(xch, domid, 1);
+    rv = xc_domain_max_vcpus(xch, domid, config.max_vcpus);
     if ( rv )
     {
         fprintf(stderr, "xc_domain_max_vcpus failed\n");
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 8b755e4..6067630 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -566,6 +566,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     if (!libxl_domid_valid_guest(*domid)) {
         struct xen_domctl_createdomain create = {
             .ssidref = info->ssidref,
+            .max_vcpus = b_info->max_vcpus,
             .max_evtchn_port = b_info->event_channels,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 42f45c4..40fbd37 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -63,6 +63,7 @@ type domctl_create_config =
 	ssidref: int32;
 	handle: string;
 	flags: domain_create_flag list;
+	max_vcpus: int;
 	max_evtchn_port: int;
 	max_grant_frames: int;
 	max_maptrack_frames: int;
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 0db5816..906ce94 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -55,6 +55,7 @@ type domctl_create_config = {
   ssidref: int32;
   handle: string;
   flags: domain_create_flag list;
+  max_vcpus: int;
   max_evtchn_port: int;
   max_grant_frames: int;
   max_maptrack_frames: int;
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index a9759e0..04b3561 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -128,15 +128,17 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #define VAL_SSIDREF             Field(config, 0)
 #define VAL_HANDLE              Field(config, 1)
 #define VAL_FLAGS               Field(config, 2)
-#define VAL_MAX_EVTCHN_PORT     Field(config, 3)
-#define VAL_MAX_GRANT_FRAMES    Field(config, 4)
-#define VAL_MAX_MAPTRACK_FRAMES Field(config, 5)
-#define VAL_ARCH                Field(config, 6)
+#define VAL_MAX_VCPUS           Field(config, 3)
+#define VAL_MAX_EVTCHN_PORT     Field(config, 4)
+#define VAL_MAX_GRANT_FRAMES    Field(config, 5)
+#define VAL_MAX_MAPTRACK_FRAMES Field(config, 6)
+#define VAL_ARCH                Field(config, 7)
 
 	uint32_t domid = 0;
 	int result;
 	struct xen_domctl_createdomain cfg = {
 		.ssidref = Int32_val(VAL_SSIDREF),
+		.max_vcpus = Int_val(VAL_MAX_VCPUS),
 		.max_evtchn_port = Int_val(VAL_MAX_EVTCHN_PORT),
 		.max_grant_frames = Int_val(VAL_MAX_GRANT_FRAMES),
 		.max_maptrack_frames = Int_val(VAL_MAX_MAPTRACK_FRAMES),
@@ -178,6 +180,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #undef VAL_MAX_MAPTRACK_FRAMES
 #undef VAL_MAX_GRANT_FRAMES
 #undef VAL_MAX_EVTCHN_PORT
+#undef VAL_MAX_VCPUS
 #undef VAL_FLAGS
 #undef VAL_HANDLE
 #undef VAL_SSIDREF
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 6bd58ec..b137d5a 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -125,16 +125,19 @@ static PyObject *pyxc_domain_create(XcObject *self,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
             0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
         },
+        .max_vcpus = 1,
         .max_evtchn_port = -1, /* No limit. */
         .max_grant_frames = 32,
         .max_maptrack_frames = 1024,
     };
 
-    static char *kwd_list[] = { "domid", "ssidref", "handle", "flags", "target", NULL };
+    static char *kwd_list[] = { "domid", "ssidref", "handle", "flags",
+                                "target", "max_vcpus", NULL };
 
-    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "|iiOii", kwd_list,
+    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "|iiOiii", kwd_list,
                                       &dom, &config.ssidref, &pyhandle,
-                                      &config.flags, &target))
+                                      &config.flags, &target,
+                                      &config.max_vcpus) )
         return NULL;
     if ( pyhandle != NULL )
     {
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index e01fe06..d48454b 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -70,6 +70,7 @@ struct xen_domctl_createdomain {
      * Various domain limits, which impact the quantity of resources (global
      * mapping space, xenheap, etc) a guest may consume.
      */
+    uint32_t max_vcpus;
     uint32_t max_evtchn_port;
     uint32_t max_grant_frames;
     uint32_t max_maptrack_frames;
-- 
2.1.4



* [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (9 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 10/12] tools: Pass max_vcpus to XEN_DOMCTL_createdomain Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 15:05   ` Roger Pau Monné
  2018-08-15 12:59   ` Jan Beulich
  2018-08-13 10:01 ` [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create() Andrew Cooper
  2018-08-14 13:12 ` [PATCH v2 00/12] Improvements to domain creation Christian Lindig
  12 siblings, 2 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Stefano Stabellini, Wei Liu, Jan Beulich

Make dom0_max_vcpus() a common interface, and implement it on ARM by splitting
the existing alloc_dom0_vcpu0() function in half.

As domain_create() doesn't yet set up the vcpu array, the max value is also
passed into alloc_dom0_vcpu0().  This is temporary, for bisectability, and is
removed in the following patch.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Julien Grall <julien.grall@arm.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/arm/domain_build.c | 12 +++++++++---
 xen/arch/arm/setup.c        |  3 ++-
 xen/arch/x86/dom0_build.c   |  5 ++---
 xen/arch/x86/setup.c        |  3 ++-
 xen/include/asm-x86/setup.h |  2 --
 xen/include/xen/domain.h    |  5 ++++-
 6 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 737e0f3..f4a1225 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -62,17 +62,23 @@ struct map_range_data
  */
 #define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry))
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
+unsigned int __init dom0_max_vcpus(void)
 {
     if ( opt_dom0_max_vcpus == 0 )
         opt_dom0_max_vcpus = num_online_cpus();
     if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
         opt_dom0_max_vcpus = MAX_VIRT_CPUS;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
+    return opt_dom0_max_vcpus;
+}
+
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
+                                     unsigned int max_vcpus)
+{
+    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
     if ( !dom0->vcpu )
         return NULL;
-    dom0->max_vcpus = opt_dom0_max_vcpus;
+    dom0->max_vcpus = max_vcpus;
 
     return alloc_vcpu(dom0, 0, 0);
 }
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 3d3b30c..72e42e8 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -860,9 +860,10 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* The vGIC for DOM0 is exactly emulating the hardware GIC */
     dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
     dom0_cfg.arch.nr_spis = gic_number_lines() - 32;
+    dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     dom0 = domain_create(0, &dom0_cfg, true);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
             panic("Error creating domain 0");
 
     if ( construct_dom0(dom0) != 0)
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index b744791..b42eac3 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -199,10 +199,9 @@ unsigned int __init dom0_max_vcpus(void)
     return max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
+                                     unsigned int max_vcpus)
 {
-    unsigned int max_vcpus = dom0_max_vcpus();
-
     dom0->node_affinity = dom0_nodes;
     dom0->auto_node_affinity = !dom0_nr_pxms;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 2cfae89..46dcc71 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1689,10 +1689,11 @@ void __init noreturn __start_xen(unsigned long mbi_p)
         dom0_cfg.arch.emulation_flags |=
             XEN_X86_EMU_LAPIC | XEN_X86_EMU_IOAPIC | XEN_X86_EMU_VPCI;
     }
+    dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     /* Create initial domain 0. */
     dom0 = domain_create(get_initial_domain_id(), &dom0_cfg, !pv_shim);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
         panic("Error creating domain 0");
 
     /* Grab the DOM0 command line. */
diff --git a/xen/include/asm-x86/setup.h b/xen/include/asm-x86/setup.h
index b2bf16c..42fddeb 100644
--- a/xen/include/asm-x86/setup.h
+++ b/xen/include/asm-x86/setup.h
@@ -44,8 +44,6 @@ unsigned long initial_images_nrpages(nodeid_t node);
 void discard_initial_images(void);
 void *bootstrap_map(const module_t *mod);
 
-unsigned int dom0_max_vcpus(void);
-
 int xen_in_range(unsigned long mfn);
 
 void microcode_grab_module(
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index f35e360..651205d 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -15,7 +15,10 @@ typedef union {
 
 struct vcpu *alloc_vcpu(
     struct domain *d, unsigned int vcpu_id, unsigned int cpu_id);
-struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
+
+unsigned int dom0_max_vcpus(void);
+struct vcpu *alloc_dom0_vcpu0(struct domain *dom0, unsigned int max_vcpus);
+
 int vcpu_reset(struct vcpu *);
 int vcpu_up(struct vcpu *v);
 
-- 
2.1.4



* [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (10 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
@ 2018-08-13 10:01 ` Andrew Cooper
  2018-08-14 15:17   ` Roger Pau Monné
                     ` (2 more replies)
  2018-08-14 13:12 ` [PATCH v2 00/12] Improvements to domain creation Christian Lindig
  12 siblings, 3 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-13 10:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich

For ARM, the call to arch_domain_create() needs to have completed before
domain_max_vcpus() will return the correct upper bound.

For each arch's dom0, drop the temporary max_vcpus parameter and the
allocation of dom0->vcpu[].

With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
can be constructed suitably for the domain, rather than for the worst-case
setting.

Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
ARM's two implementations of vgic_max_vcpus() no longer need to work around the
out-of-order call.

From this point on, d->max_vcpus and d->vcpu[] are valid for any domain which
can be looked up by domid.

The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
max != d->max_vcpus, which does match the older semantics (not that it is
obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
this point the hypercall still needs making to allocate each vcpu.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Wei Liu <wei.liu2@citrix.com>

v2:
 * Allocate in domain_create() rather than arch_domain_create().
 * Retain domain_max_vcpus().
---
 xen/arch/arm/domain_build.c |  8 +-------
 xen/arch/arm/setup.c        |  2 +-
 xen/arch/arm/vgic.c         | 11 +----------
 xen/arch/arm/vgic/vgic.c    | 22 +---------------------
 xen/arch/x86/dom0_build.c   |  8 +-------
 xen/arch/x86/setup.c        |  2 +-
 xen/common/domain.c         | 18 ++++++++++++++++++
 xen/common/domctl.c         | 39 +--------------------------------------
 xen/common/event_channel.c  |  3 +--
 xen/include/xen/domain.h    |  2 +-
 10 files changed, 27 insertions(+), 88 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f4a1225..6f45e56 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -72,14 +72,8 @@ unsigned int __init dom0_max_vcpus(void)
     return opt_dom0_max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return alloc_vcpu(dom0, 0, 0);
 }
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 72e42e8..a3e1ef7 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -863,7 +863,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     dom0 = domain_create(0, &dom0_cfg, true);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
             panic("Error creating domain 0");
 
     if ( construct_dom0(dom0) != 0)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7a2c455..5a4f082 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -669,16 +669,7 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
 
 unsigned int vgic_max_vcpus(const struct domain *d)
 {
-    /*
-     * Since evtchn_init would call domain_max_vcpus for poll_mask
-     * allocation when the vgic_ops haven't been initialised yet,
-     * we return MAX_VIRT_CPUS if d->arch.vgic.handler is null.
-     */
-    if ( !d->arch.vgic.handler )
-        return MAX_VIRT_CPUS;
-    else
-        return min_t(unsigned int, MAX_VIRT_CPUS,
-                     d->arch.vgic.handler->max_vcpus);
+    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
 }
 
 /*
diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index 832632a..4124817 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -951,27 +951,7 @@ void vgic_sync_hardware_irq(struct domain *d,
 
 unsigned int vgic_max_vcpus(const struct domain *d)
 {
-    unsigned int vgic_vcpu_limit;
-
-    switch ( d->arch.vgic.version )
-    {
-    case GIC_INVALID:
-        /*
-         * Since evtchn_init would call domain_max_vcpus for poll_mask
-         * allocation before the VGIC has been initialised, we need to
-         * return some safe value in this case. As this is for allocation
-         * purposes, go with the maximum value.
-         */
-        vgic_vcpu_limit = MAX_VIRT_CPUS;
-        break;
-    case GIC_V2:
-        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
-        break;
-    default:
-        BUG();
-    }
-
-    return min_t(unsigned int, MAX_VIRT_CPUS, vgic_vcpu_limit);
+    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
 }
 
 #ifdef CONFIG_GICV3
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index b42eac3..423fdec 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -199,17 +199,11 @@ unsigned int __init dom0_max_vcpus(void)
     return max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
     dom0->node_affinity = dom0_nodes;
     dom0->auto_node_affinity = !dom0_nr_pxms;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return dom0_setup_vcpu(dom0, 0,
                            cpumask_last(&dom0_cpus) /* so it wraps around to first pcpu */);
 }
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 46dcc71..532aca7 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1693,7 +1693,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     /* Create initial domain 0. */
     dom0 = domain_create(get_initial_domain_id(), &dom0_cfg, !pv_shim);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
         panic("Error creating domain 0");
 
     /* Grab the DOM0 command line. */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0c44f27..902276d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -339,6 +339,19 @@ struct domain *domain_create(domid_t domid,
 
     if ( !is_idle_domain(d) )
     {
+        /* Check d->max_vcpus and allocate d->vcpu[]. */
+        err = -EINVAL;
+        if ( config->max_vcpus < 1 ||
+             config->max_vcpus > domain_max_vcpus(d) )
+            goto fail;
+
+        err = -ENOMEM;
+        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
+        if ( !d->vcpu )
+            goto fail;
+
+        d->max_vcpus = config->max_vcpus;
+
         watchdog_domain_init(d);
         init_status |= INIT_watchdog;
 
@@ -423,6 +436,11 @@ struct domain *domain_create(domid_t domid,
 
     sched_destroy_domain(d);
 
+    if ( d->max_vcpus )
+    {
+        d->max_vcpus = 0;
+        XFREE(d->vcpu);
+    }
     if ( init_status & INIT_arch )
         arch_domain_destroy(d);
     if ( init_status & INIT_gnttab )
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 58e51b2..ee0983d 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
-             (max > domain_max_vcpus(d)) )
+             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
             break;
 
-        /* Until Xenoprof can dynamically grow its vcpu-s array... */
-        if ( d->xenoprof )
-        {
-            ret = -EAGAIN;
-            break;
-        }
-
         /* Needed, for example, to ensure writable p.t. state is synced. */
         domain_pause(d);
 
@@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             }
         }
 
-        /* We cannot reduce maximum VCPUs. */
-        ret = -EINVAL;
-        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
-            goto maxvcpu_out;
-
-        /*
-         * For now don't allow increasing the vcpu count from a non-zero
-         * value: This code and all readers of d->vcpu would otherwise need
-         * to be converted to use RCU, but at present there's no tools side
-         * code path that would issue such a request.
-         */
-        ret = -EBUSY;
-        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
-            goto maxvcpu_out;
-
         ret = -ENOMEM;
         online = cpupool_domain_cpumask(d);
-        if ( max > d->max_vcpus )
-        {
-            struct vcpu **vcpus;
-
-            BUG_ON(d->vcpu != NULL);
-            BUG_ON(d->max_vcpus != 0);
-
-            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
-                goto maxvcpu_out;
-
-            /* Install vcpu array /then/ update max_vcpus. */
-            d->vcpu = vcpus;
-            smp_wmb();
-            d->max_vcpus = max;
-        }
 
         for ( i = 0; i < max; i++ )
         {
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 41cbbae..381f30e 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1303,8 +1303,7 @@ int evtchn_init(struct domain *d, unsigned int max_port)
     evtchn_from_port(d, 0)->state = ECS_RESERVED;
 
 #if MAX_VIRT_CPUS > BITS_PER_LONG
-    d->poll_mask = xzalloc_array(unsigned long,
-                                 BITS_TO_LONGS(domain_max_vcpus(d)));
+    d->poll_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(d->max_vcpus));
     if ( !d->poll_mask )
     {
         free_evtchn_bucket(d, d->evtchn);
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 651205d..ce31999 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -17,7 +17,7 @@ struct vcpu *alloc_vcpu(
     struct domain *d, unsigned int vcpu_id, unsigned int cpu_id);
 
 unsigned int dom0_max_vcpus(void);
-struct vcpu *alloc_dom0_vcpu0(struct domain *dom0, unsigned int max_vcpus);
+struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
 int vcpu_up(struct vcpu *v);
-- 
2.1.4



* Re: [PATCH v2 00/12] Improvements to domain creation
  2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
                   ` (11 preceding siblings ...)
  2018-08-13 10:01 ` [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create() Andrew Cooper
@ 2018-08-14 13:12 ` Christian Lindig
  2018-08-14 13:34   ` Andrew Cooper
  12 siblings, 1 reply; 63+ messages in thread
From: Christian Lindig @ 2018-08-14 13:12 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Juergen Gross, Marek Marczykowski-Górecki,
	Stefano Stabellini, Wei Liu, George Dunlap, Tim Deegan,
	Ian Jackson, Jon Ludlam, Rob Hoes, Julien Grall, Jan Beulich,
	David Scott, Daniel De Graaf


On 13/08/18 11:00, Andrew Cooper wrote:
> This series can be found in git form here:
>    http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/xen-create-v1
Is this the correct URL? The subject says v2 but this is branch v1.

Looking over the OCaml-related patches, I think they are looking good.
Since we would like all code to be safe-string compliant, I checked that
OCaml values of type string are not being mutated, but I would like
Andrew to confirm this.

-- Christian


* Re: [PATCH v2 00/12] Improvements to domain creation
  2018-08-14 13:12 ` [PATCH v2 00/12] Improvements to domain creation Christian Lindig
@ 2018-08-14 13:34   ` Andrew Cooper
  0 siblings, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-14 13:34 UTC (permalink / raw)
  To: Christian Lindig, Xen-devel
  Cc: Juergen Gross, Marek Marczykowski-Górecki,
	Stefano Stabellini, Wei Liu, George Dunlap, Tim Deegan,
	Ian Jackson, Jon Ludlam, Rob Hoes, Julien Grall, Jan Beulich,
	David Scott, Daniel De Graaf

On 14/08/18 14:12, Christian Lindig wrote:
>
> On 13/08/18 11:00, Andrew Cooper wrote:
>> This series can be found in git form here:
>>   
>> http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/xen-create-v1
> Is this the correct URL? The subject says v2 but this is branch v1.

Oops yes.

http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/xen-create-v2
works as well, and is the intended URL.

>
> Looking over the OCaml-related patches, I think they are looking good.
> Since we would like all code to be safe-string compliant I checked
> that OCaml values of type string are not being mutated but I would
> like Andrew to confirm this.

There is no change to any string handling here.  The only string used
during the domaincreate hypercall is the UUID string, which is only read
by the stubs, not altered.

~Andrew


* Re: [PATCH v2 03/12] xen/domctl: Merge set_max_evtchn into createdomain
  2018-08-13 10:01 ` [PATCH v2 03/12] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
@ 2018-08-14 13:58   ` Roger Pau Monné
  0 siblings, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 13:58 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Liu, Ian Jackson, Marek Marczykowski-Górecki,
	Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:00AM +0100, Andrew Cooper wrote:
> set_max_evtchn is somewhat weird.  It was introduced with the event_fifo work,
> but has never been used.  Still, it is a bound on the resources consumed by the
> event channel infrastructure, and should be part of createdomain, rather than
> editable after the fact.
> 
> Drop XEN_DOMCTL_set_max_evtchn completely (including XSM hooks and libxc
> wrappers), and retain the functionality in XEN_DOMCTL_createdomain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Acked-by: Christian Lindig <christian.lindig@citrix.com>
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init()
  2018-08-13 10:01 ` [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
@ 2018-08-14 14:07   ` Roger Pau Monné
  2018-08-15 12:45   ` Jan Beulich
  2018-08-15 12:57   ` Julien Grall
  2 siblings, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 14:07 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:01AM +0100, Andrew Cooper wrote:
> ... rather than setting it up once domain_create() has completed.  This
> involves constructing a default value for dom0.
> 
> No practical change in functionality.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-13 10:01 ` [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
@ 2018-08-14 14:17   ` Roger Pau Monné
  2018-08-15 12:51   ` Jan Beulich
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 14:17 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:03AM +0100, Andrew Cooper wrote:
> ... rather than setting the limits up after domain_create() has completed.
> 
> This removes all the gnttab infrastructure for calculating the number of dom0
> grant frames, opting instead to require the dom0 construction code to pass a
> sane value in via the configuration.
> 
> In practice, this now means that there is never a partially constructed grant
> table for a reference-able domain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> 
> v2:
>  * Split/rearrange to avoid the post-domain-create error path.
> ---
>  xen/arch/arm/domain_build.c       |  3 ++-
>  xen/arch/arm/setup.c              | 12 ++++++++++++
>  xen/arch/x86/setup.c              |  3 +++
>  xen/common/domain.c               |  3 ++-
>  xen/common/grant_table.c          | 16 +++-------------
>  xen/include/asm-arm/grant_table.h | 12 ------------
>  xen/include/asm-x86/grant_table.h |  5 -----
>  xen/include/xen/grant_table.h     |  6 ++----
>  8 files changed, 24 insertions(+), 36 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 1351572..737e0f3 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2079,7 +2079,8 @@ static void __init find_gnttab_region(struct domain *d,
>       * enough space for a large grant table
>       */
>      kinfo->gnttab_start = __pa(_stext);
> -    kinfo->gnttab_size = gnttab_dom0_frames() << PAGE_SHIFT;
> +    kinfo->gnttab_size = min_t(unsigned int, opt_max_grant_frames,
> +                               PFN_DOWN(_etext - _stext)) << PAGE_SHIFT;
>  
>  #ifdef CONFIG_ARM_32
>      /*
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 45f3841..3d3b30c 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -20,6 +20,7 @@
>  #include <xen/compile.h>
>  #include <xen/device_tree.h>
>  #include <xen/domain_page.h>
> +#include <xen/grant_table.h>
>  #include <xen/types.h>
>  #include <xen/string.h>
>  #include <xen/serial.h>
> @@ -693,6 +694,17 @@ void __init start_xen(unsigned long boot_phys_offset,
>      struct domain *dom0;
>      struct xen_domctl_createdomain dom0_cfg = {
>          .max_evtchn_port = -1,
> +
> +        /*
> +         * The region used by Xen on the memory will never be mapped in DOM0
> +         * memory layout. Therefore it can be used for the grant table.
> +         *
> +         * Only use the text section as it's always present and will contain
> +         * enough space for a large grant table
> +         */
> +        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
> +                                  PFN_DOWN(_etext - _stext)),

You have this calculation here and in the chunk above. Maybe you want
to keep gnttab_dom0_max (or something similar with the min included)
in order to avoid open coding this calculation twice?

The rest LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 07/12] xen/domctl: Remove XEN_DOMCTL_set_gnttab_limits
  2018-08-13 10:01 ` [PATCH v2 07/12] xen/domctl: Remove XEN_DOMCTL_set_gnttab_limits Andrew Cooper
@ 2018-08-14 14:19   ` Roger Pau Monné
  0 siblings, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 14:19 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:04AM +0100, Andrew Cooper wrote:
> Now that XEN_DOMCTL_createdomain handles the grant table limits, remove
> XEN_DOMCTL_set_gnttab_limits (including XSM hooks and libxc wrappers).
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init()
  2018-08-13 10:01 ` [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
@ 2018-08-14 14:31   ` Roger Pau Monné
  2018-08-15 12:54   ` Jan Beulich
  1 sibling, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 14:31 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:05AM +0100, Andrew Cooper wrote:
> Now that the max_{grant,maptrack}_frames are specified from the very beginning
> of grant table construction, the various initialisation functions can be
> folded together and simplified as a result.
> 
> Leave grant_table_init() as the public interface, which is more consistent
> with other subsystems.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create()
  2018-08-13 10:01 ` [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
@ 2018-08-14 14:37   ` Roger Pau Monné
  2018-08-15 12:56   ` Jan Beulich
  2018-09-04 18:44   ` Rats nest with domain pirq initialisation Andrew Cooper
  2 siblings, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 14:37 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:06AM +0100, Andrew Cooper wrote:
> This is in preparation to set up d->max_cpus and d->vcpu[] in domain_create(),
> and allow later parts of domain construction to have access to the values.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/common/domain.c | 34 +++++++++++++++++-----------------
>  1 file changed, 17 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index be51426..0c44f27 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -322,6 +322,23 @@ struct domain *domain_create(domid_t domid,
>          else
>              d->guest_type = guest_type_pv;
>  
> +        if ( !is_hardware_domain(d) )
> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
> +        else
> +            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
> +                                           : arch_hwdom_irqs(domid);
> +        if ( d->nr_pirqs > nr_irqs )
> +            d->nr_pirqs = nr_irqs;

d->nr_pirqs = min(d->nr_pirqs, nr_irqs);

LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  2018-08-13 10:01 ` [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
@ 2018-08-14 15:05   ` Roger Pau Monné
  2018-08-15 12:59   ` Jan Beulich
  1 sibling, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 15:05 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:08AM +0100, Andrew Cooper wrote:
> Make dom0_max_vcpus() a common interface, and implement it on ARM by splitting
> the existing alloc_dom0_vcpu0() function in half.
> 
> As domain_create() doesn't yet set up the vcpu array, the max value is also
> passed into alloc_dom0_vcpu0().  This is temporary for bisectibility and
> removed in the following patch.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-13 10:01 ` [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create() Andrew Cooper
@ 2018-08-14 15:17   ` Roger Pau Monné
  2018-08-15 13:17     ` Julien Grall
  2018-08-15 13:11   ` Jan Beulich
  2018-08-29 14:40   ` [PATCH v3 " Andrew Cooper
  2 siblings, 1 reply; 63+ messages in thread
From: Roger Pau Monné @ 2018-08-14 15:17 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On Mon, Aug 13, 2018 at 11:01:09AM +0100, Andrew Cooper wrote:
> For ARM, the call to arch_domain_create() needs to have completed before
> domain_max_vcpus() will return the correct upper bound.
> 
> For each arch's dom0, drop the temporary max_vcpus parameter and the
> allocation of dom0->vcpu.
> 
> With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
> can be constructed suitably for the domain, rather than for the worst-case
> setting.
> 
> Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
> ARM's two implementations of vgic_max_vcpus() no longer need to work around
> the out-of-order call.
> 
> From this point on, d->max_vcpus and d->vcpu[] are valid for any domain which
> can be looked up by domid.
> 
> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
> max != d->max_vcpus, which does match the older semantics (not that it is
> obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
> this point the hypercall still needs making to allocate each vcpu.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> 
> v2:
>  * Allocate in domain_create() rather than arch_domain_create().
>  * Retain domain_max_vcpus().
> ---
>  xen/arch/arm/domain_build.c |  8 +-------
>  xen/arch/arm/setup.c        |  2 +-
>  xen/arch/arm/vgic.c         | 11 +----------
>  xen/arch/arm/vgic/vgic.c    | 22 +---------------------
>  xen/arch/x86/dom0_build.c   |  8 +-------
>  xen/arch/x86/setup.c        |  2 +-
>  xen/common/domain.c         | 18 ++++++++++++++++++
>  xen/common/domctl.c         | 39 +--------------------------------------
>  xen/common/event_channel.c  |  3 +--
>  xen/include/xen/domain.h    |  2 +-
>  10 files changed, 27 insertions(+), 88 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f4a1225..6f45e56 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -72,14 +72,8 @@ unsigned int __init dom0_max_vcpus(void)
>      return opt_dom0_max_vcpus;
>  }
>  
> -struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
> -                                     unsigned int max_vcpus)
> +struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
>  {
> -    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
> -    if ( !dom0->vcpu )
> -        return NULL;
> -    dom0->max_vcpus = max_vcpus;
> -
>      return alloc_vcpu(dom0, 0, 0);
>  }
>  
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 72e42e8..a3e1ef7 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -863,7 +863,7 @@ void __init start_xen(unsigned long boot_phys_offset,
>      dom0_cfg.max_vcpus = dom0_max_vcpus();
>  
>      dom0 = domain_create(0, &dom0_cfg, true);
> -    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
> +    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
>              panic("Error creating domain 0");
>  
>      if ( construct_dom0(dom0) != 0)
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 7a2c455..5a4f082 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -669,16 +669,7 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
>  
>  unsigned int vgic_max_vcpus(const struct domain *d)
>  {
> -    /*
> -     * Since evtchn_init would call domain_max_vcpus for poll_mask
> -     * allocation when the vgic_ops haven't been initialised yet,
> -     * we return MAX_VIRT_CPUS if d->arch.vgic.handler is null.
> -     */
> -    if ( !d->arch.vgic.handler )
> -        return MAX_VIRT_CPUS;
> -    else
> -        return min_t(unsigned int, MAX_VIRT_CPUS,
> -                     d->arch.vgic.handler->max_vcpus);
> +    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
>  }
>  
>  /*
> diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
> index 832632a..4124817 100644
> --- a/xen/arch/arm/vgic/vgic.c
> +++ b/xen/arch/arm/vgic/vgic.c
> @@ -951,27 +951,7 @@ void vgic_sync_hardware_irq(struct domain *d,
>  
>  unsigned int vgic_max_vcpus(const struct domain *d)
>  {
> -    unsigned int vgic_vcpu_limit;
> -
> -    switch ( d->arch.vgic.version )
> -    {
> -    case GIC_INVALID:
> -        /*
> -         * Since evtchn_init would call domain_max_vcpus for poll_mask
> -         * allocation before the VGIC has been initialised, we need to
> -         * return some safe value in this case. As this is for allocation
> -         * purposes, go with the maximum value.
> -         */
> -        vgic_vcpu_limit = MAX_VIRT_CPUS;
> -        break;
> -    case GIC_V2:
> -        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
> -        break;
> -    default:
> -        BUG();
> -    }
> -
> -    return min_t(unsigned int, MAX_VIRT_CPUS, vgic_vcpu_limit);
> +    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
>  }

Since both implementations are equal now, can you place this in vgic.h
as a static inline function?

With that:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init()
  2018-08-13 10:01 ` [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
  2018-08-14 14:07   ` Roger Pau Monné
@ 2018-08-15 12:45   ` Jan Beulich
  2018-08-15 12:57   ` Julien Grall
  2 siblings, 0 replies; 63+ messages in thread
From: Jan Beulich @ 2018-08-15 12:45 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
> ... rather than setting it up once domain_create() has completed.  This
> involves constructing a default value for dom0.
> 
> No practical change in functionality.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-13 10:01 ` [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
  2018-08-14 14:17   ` Roger Pau Monné
@ 2018-08-15 12:51   ` Jan Beulich
  2018-08-15 13:04   ` Julien Grall
  2018-08-29  9:38   ` [PATCH v3 6/12] " Andrew Cooper
  3 siblings, 0 replies; 63+ messages in thread
From: Jan Beulich @ 2018-08-15 12:51 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
> ... rather than setting the limits up after domain_create() has completed.
> 
> This removes all the gnttab infrastructure for calculating the number of dom0
> grant frames, opting instead to require the dom0 construction code to pass a
> sane value in via the configuration.
> 
> In practice, this now means that there is never a partially constructed grant
> table for a reference-able domain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init()
  2018-08-13 10:01 ` [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
  2018-08-14 14:31   ` Roger Pau Monné
@ 2018-08-15 12:54   ` Jan Beulich
  1 sibling, 0 replies; 63+ messages in thread
From: Jan Beulich @ 2018-08-15 12:54 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
> Now that the max_{grant,maptrack}_frames are specified from the very beginning
> of grant table construction, the various initialisation functions can be
> folded together and simplified as a result.
> 
> Leave grant_table_init() as the public interface, which is more consistent
> with other subsystems.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create()
  2018-08-13 10:01 ` [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
  2018-08-14 14:37   ` Roger Pau Monné
@ 2018-08-15 12:56   ` Jan Beulich
  2018-09-04 18:44   ` Rats nest with domain pirq initialisation Andrew Cooper
  2 siblings, 0 replies; 63+ messages in thread
From: Jan Beulich @ 2018-08-15 12:56 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
> This is in preparation to set up d->max_cpus and d->vcpu[] in domain_create(),
> and allow later parts of domain construction to have access to the values.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init()
  2018-08-13 10:01 ` [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
  2018-08-14 14:07   ` Roger Pau Monné
  2018-08-15 12:45   ` Jan Beulich
@ 2018-08-15 12:57   ` Julien Grall
  2 siblings, 0 replies; 63+ messages in thread
From: Julien Grall @ 2018-08-15 12:57 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini, Wei Liu, Jan Beulich

Hi Andrew,

On 08/13/2018 11:01 AM, Andrew Cooper wrote:
> ... rather than setting it up once domain_create() has completed.  This
> involves constructing a default value for dom0.
> 
> No practical change in functionality.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

For the Arm bits:

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> ---
>   xen/arch/arm/setup.c       | 4 +++-
>   xen/arch/x86/setup.c       | 1 +
>   xen/common/domain.c        | 2 +-
>   xen/common/domctl.c        | 3 ---
>   xen/common/event_channel.c | 4 ++--
>   xen/include/xen/sched.h    | 2 +-
>   6 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 7d40a84..45f3841 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -691,7 +691,9 @@ void __init start_xen(unsigned long boot_phys_offset,
>       const char *cmdline;
>       struct bootmodule *xen_bootmodule;
>       struct domain *dom0;
> -    struct xen_domctl_createdomain dom0_cfg = {};
> +    struct xen_domctl_createdomain dom0_cfg = {
> +        .max_evtchn_port = -1,
> +    };
>   
>       dcache_line_bytes = read_dcache_line_bytes();
>   
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 8301de8..015099f 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -681,6 +681,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>       };
>       struct xen_domctl_createdomain dom0_cfg = {
>           .flags = XEN_DOMCTL_CDF_s3_integrity,
> +        .max_evtchn_port = -1,
>       };
>   
>       /* Critical region without IDT or TSS.  Any fault is deadly! */
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 749722b..171d25e 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -362,7 +362,7 @@ struct domain *domain_create(domid_t domid,
>   
>           radix_tree_init(&d->pirq_tree);
>   
> -        if ( (err = evtchn_init(d)) != 0 )
> +        if ( (err = evtchn_init(d, config->max_evtchn_port)) != 0 )
>               goto fail;
>           init_status |= INIT_evtchn;
>   
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 3a68fc9..0ef554a 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -540,9 +540,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>               break;
>           }
>   
> -        d->max_evtchn_port = min_t(unsigned int,
> -                                   op->u.createdomain.max_evtchn_port, INT_MAX);
> -
>           ret = 0;
>           op->domain = d->domain_id;
>           copyback = 1;
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index c620465..41cbbae 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1284,10 +1284,10 @@ void evtchn_check_pollers(struct domain *d, unsigned int port)
>       }
>   }
>   
> -int evtchn_init(struct domain *d)
> +int evtchn_init(struct domain *d, unsigned int max_port)
>   {
>       evtchn_2l_init(d);
> -    d->max_evtchn_port = INT_MAX;
> +    d->max_evtchn_port = min_t(unsigned int, max_port, INT_MAX);
>   
>       d->evtchn = alloc_evtchn_bucket(d, 0);
>       if ( !d->evtchn )
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 3c35473..51ceebe 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -134,7 +134,7 @@ struct evtchn
>   #endif
>   } __attribute__((aligned(64)));
>   
> -int  evtchn_init(struct domain *d); /* from domain_create */
> +int  evtchn_init(struct domain *d, unsigned int max_port);
>   void evtchn_destroy(struct domain *d); /* from domain_kill */
>   void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
>   
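For reference, the `.max_evtchn_port = -1` default in the hunks above works because of the unsigned clamp in evtchn_init(): -1 converts to UINT_MAX, which min_t() then caps at INT_MAX, matching the old unconditional `d->max_evtchn_port = INT_MAX`. A standalone sketch of that clamp (min_uint here stands in for Xen's min_t, so this is illustrative, not the committed code):

```c
#include <limits.h>

/* Stand-in for Xen's min_t(unsigned int, a, b). */
static unsigned int min_uint(unsigned int a, unsigned int b)
{
    return a < b ? a : b;
}

/*
 * Sketch of the clamp in evtchn_init(): a config value of -1 becomes
 * UINT_MAX on conversion to unsigned int, so dom0's "no limit" default
 * ends up as INT_MAX.
 */
static unsigned int clamp_max_evtchn_port(unsigned int max_port)
{
    return min_uint(max_port, INT_MAX);
}
```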
> 

-- 
Julien Grall


* Re: [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value
  2018-08-13 10:01 ` [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
  2018-08-14 15:05   ` Roger Pau Monné
@ 2018-08-15 12:59   ` Jan Beulich
  1 sibling, 0 replies; 63+ messages in thread
From: Jan Beulich @ 2018-08-15 12:59 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Stefano Stabellini, Wei Liu, Xen-devel

>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
> Make dom0_max_vcpus() a common interface, and implement it on ARM by splitting
> the existing alloc_dom0_vcpu0() function in half.
> 
> As domain_create() doesn't yet set up the vcpu array, the max value is also
> passed into alloc_dom0_vcpu0().  This is temporary for bisectibility and
> removed in the following patch.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-13 10:01 ` [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
  2018-08-14 14:17   ` Roger Pau Monné
  2018-08-15 12:51   ` Jan Beulich
@ 2018-08-15 13:04   ` Julien Grall
  2018-08-15 13:08     ` Andrew Cooper
  2018-08-29  9:38   ` [PATCH v3 6/12] " Andrew Cooper
  3 siblings, 1 reply; 63+ messages in thread
From: Julien Grall @ 2018-08-15 13:04 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini, Wei Liu, Jan Beulich

Hi Andrew,

On 08/13/2018 11:01 AM, Andrew Cooper wrote:
> ... rather than setting the limits up after domain_create() has completed.
> 
> This removes all the gnttab infrastructure for calculating the number of dom0
> grant frames, opting instead to require the dom0 construction code to pass a
> sane value in via the configuration.
> 
> In practice, this now means that there is never a partially constructed grant
> table for a reference-able domain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> 
> v2:
>   * Split/rearrange to avoid the post-domain-create error path.
> ---
>   xen/arch/arm/domain_build.c       |  3 ++-
>   xen/arch/arm/setup.c              | 12 ++++++++++++
>   xen/arch/x86/setup.c              |  3 +++
>   xen/common/domain.c               |  3 ++-
>   xen/common/grant_table.c          | 16 +++-------------
>   xen/include/asm-arm/grant_table.h | 12 ------------
>   xen/include/asm-x86/grant_table.h |  5 -----
>   xen/include/xen/grant_table.h     |  6 ++----
>   8 files changed, 24 insertions(+), 36 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 1351572..737e0f3 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2079,7 +2079,8 @@ static void __init find_gnttab_region(struct domain *d,
>        * enough space for a large grant table
>        */
>       kinfo->gnttab_start = __pa(_stext);
> -    kinfo->gnttab_size = gnttab_dom0_frames() << PAGE_SHIFT;
> +    kinfo->gnttab_size = min_t(unsigned int, opt_max_grant_frames,
> +                               PFN_DOWN(_etext - _stext)) << PAGE_SHIFT;


I agree with Jan's comment on v1 that there is a risk someone will 
update the size here but ...


>   
>   #ifdef CONFIG_ARM_32
>       /*
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 45f3841..3d3b30c 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -20,6 +20,7 @@
>   #include <xen/compile.h>
>   #include <xen/device_tree.h>
>   #include <xen/domain_page.h>
> +#include <xen/grant_table.h>
>   #include <xen/types.h>
>   #include <xen/string.h>
>   #include <xen/serial.h>
> @@ -693,6 +694,17 @@ void __init start_xen(unsigned long boot_phys_offset,
>       struct domain *dom0;
>       struct xen_domctl_createdomain dom0_cfg = {
>           .max_evtchn_port = -1,
> +
> +        /*
> +         * The region used by Xen on the memory will never be mapped in DOM0
> +         * memory layout. Therefore it can be used for the grant table.
> +         *
> +         * Only use the text section as it's always present and will contain
> +         * enough space for a large grant table
> +         */
> +        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
> +                                  PFN_DOWN(_etext - _stext)),

... not here. So I would prefer that we either keep a helper to find the
size, or pass that size around to domain_build. Do we store the size in
the domain information?
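For comparison, the computation duplicated in the two hunks reduces to capping the configured grant-frame count at the number of whole pages in Xen's text section. A standalone sketch of the kind of helper that would keep it in one place (the helper name and the sizes used in the test are illustrative, not Xen's):

```c
#define PAGE_SHIFT  12
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/*
 * Hypothetical helper capturing the clamp that the patch currently
 * open-codes in both setup.c and domain_build.c: limit the configured
 * max grant frames to the number of pages covered by the text section.
 */
static unsigned int dom0_gnttab_frames(unsigned int opt_max_grant_frames,
                                       unsigned long text_bytes)
{
    unsigned long text_pfns = PFN_DOWN(text_bytes);

    return opt_max_grant_frames < text_pfns ? opt_max_grant_frames
                                            : (unsigned int)text_pfns;
}
```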

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-15 13:04   ` Julien Grall
@ 2018-08-15 13:08     ` Andrew Cooper
  2018-08-15 13:32       ` Julien Grall
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-15 13:08 UTC (permalink / raw)
  To: Julien Grall, Xen-devel; +Cc: Stefano Stabellini, Wei Liu, Jan Beulich

On 15/08/18 14:04, Julien Grall wrote:
> Hi Andrew,
>
> On 08/13/2018 11:01 AM, Andrew Cooper wrote:
>> ... rather than setting the limits up after domain_create() has
>> completed.
>>
>> This removes all the gnttab infrastructure for calculating the number
>> of dom0
>> grant frames, opting instead to require the dom0 construction code to
>> pass a
>> sane value in via the configuration.
>>
>> In practice, this now means that there is never a partially
>> constructed grant
>> table for a reference-able domain.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>>
>> v2:
>>   * Split/rearrange to avoid the post-domain-create error path.
>> ---
>>   xen/arch/arm/domain_build.c       |  3 ++-
>>   xen/arch/arm/setup.c              | 12 ++++++++++++
>>   xen/arch/x86/setup.c              |  3 +++
>>   xen/common/domain.c               |  3 ++-
>>   xen/common/grant_table.c          | 16 +++-------------
>>   xen/include/asm-arm/grant_table.h | 12 ------------
>>   xen/include/asm-x86/grant_table.h |  5 -----
>>   xen/include/xen/grant_table.h     |  6 ++----
>>   8 files changed, 24 insertions(+), 36 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 1351572..737e0f3 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -2079,7 +2079,8 @@ static void __init find_gnttab_region(struct
>> domain *d,
>>        * enough space for a large grant table
>>        */
>>       kinfo->gnttab_start = __pa(_stext);
>> -    kinfo->gnttab_size = gnttab_dom0_frames() << PAGE_SHIFT;
>> +    kinfo->gnttab_size = min_t(unsigned int, opt_max_grant_frames,
>> +                               PFN_DOWN(_etext - _stext)) <<
>> PAGE_SHIFT;
>
>
> I agree with Jan's comment on v1 that there is a risk someone will
> update the size here but ...
>
>
>>     #ifdef CONFIG_ARM_32
>>       /*
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 45f3841..3d3b30c 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -20,6 +20,7 @@
>>   #include <xen/compile.h>
>>   #include <xen/device_tree.h>
>>   #include <xen/domain_page.h>
>> +#include <xen/grant_table.h>
>>   #include <xen/types.h>
>>   #include <xen/string.h>
>>   #include <xen/serial.h>
>> @@ -693,6 +694,17 @@ void __init start_xen(unsigned long
>> boot_phys_offset,
>>       struct domain *dom0;
>>       struct xen_domctl_createdomain dom0_cfg = {
>>           .max_evtchn_port = -1,
>> +
>> +        /*
>> +         * The region used by Xen on the memory will never be mapped
>> in DOM0
>> +         * memory layout. Therefore it can be used for the grant table.
>> +         *
>> +         * Only use the text section as it's always present and will
>> contain
>> +         * enough space for a large grant table
>> +         */
>> +        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
>> +                                  PFN_DOWN(_etext - _stext)),
>
> ... not here. So I would prefer if we either keep an helper to find
> the size of pass that size around to domain_build. Do we store the
> size in the domain information?

I have to admit that I'm somewhat perplexed by ARM's
find_gnttab_region(), and I'm not sure why it exists.

The value is available from d->grant_table.max_grant_frames but ISTR
finding that the order of construction meant that it wasn't available
when needed (although this was all from code inspection, so I could very
easily be wrong).

~Andrew


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-13 10:01 ` [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create() Andrew Cooper
  2018-08-14 15:17   ` Roger Pau Monné
@ 2018-08-15 13:11   ` Jan Beulich
  2018-08-15 14:03     ` Andrew Cooper
  2018-08-29 14:40   ` [PATCH v3 " Andrew Cooper
  2 siblings, 1 reply; 63+ messages in thread
From: Jan Beulich @ 2018-08-15 13:11 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
> @@ -423,6 +436,11 @@ struct domain *domain_create(domid_t domid,
>  
>      sched_destroy_domain(d);
>  
> +    if ( d->max_vcpus )
> +    {
> +        d->max_vcpus = 0;
> +        XFREE(d->vcpu);
> +    }
>      if ( init_status & INIT_arch )
>          arch_domain_destroy(d);

I'm not sure it is a good idea to free the vcpus this early, in particular
before arch_domain_destroy().

> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  
>          ret = -EINVAL;
>          if ( (d == current->domain) || /* no domain_pause() */
> -             (max > domain_max_vcpus(d)) )
> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>              break;
>  
> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
> -        if ( d->xenoprof )
> -        {
> -            ret = -EAGAIN;
> -            break;
> -        }
> -
>          /* Needed, for example, to ensure writable p.t. state is synced. */
>          domain_pause(d);
>  
> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              }
>          }
>  
> -        /* We cannot reduce maximum VCPUs. */
> -        ret = -EINVAL;
> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
> -            goto maxvcpu_out;
> -
> -        /*
> -         * For now don't allow increasing the vcpu count from a non-zero
> -         * value: This code and all readers of d->vcpu would otherwise need
> -         * to be converted to use RCU, but at present there's no tools side
> -         * code path that would issue such a request.
> -         */
> -        ret = -EBUSY;
> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
> -            goto maxvcpu_out;
> -
>          ret = -ENOMEM;
>          online = cpupool_domain_cpumask(d);
> -        if ( max > d->max_vcpus )
> -        {
> -            struct vcpu **vcpus;
> -
> -            BUG_ON(d->vcpu != NULL);
> -            BUG_ON(d->max_vcpus != 0);
> -
> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
> -                goto maxvcpu_out;
> -
> -            /* Install vcpu array /then/ update max_vcpus. */
> -            d->vcpu = vcpus;
> -            smp_wmb();
> -            d->max_vcpus = max;
> -        }
>  
>          for ( i = 0; i < max; i++ )
>          {

With all of this dropped, I think the domctl should be renamed. By
dropping its "max" input at the same time, there would then also
no longer be a need to check that the value matches what was
stored during domain creation.

Jan




* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-14 15:17   ` Roger Pau Monné
@ 2018-08-15 13:17     ` Julien Grall
  2018-08-15 13:50       ` Andrew Cooper
  0 siblings, 1 reply; 63+ messages in thread
From: Julien Grall @ 2018-08-15 13:17 UTC (permalink / raw)
  To: Roger Pau Monné, Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

Hi,

On 08/14/2018 04:17 PM, Roger Pau Monné wrote:
> On Mon, Aug 13, 2018 at 11:01:09AM +0100, Andrew Cooper wrote:
>> For ARM, the call to arch_domain_create() needs to have completed before
>> domain_max_vcpus() will return the correct upper bound.
>>
>> For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
>> of dom0->vcpu.
>>
>> With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
>> can be constructed suitably for the domain, rather than for the worst-case
>> setting.
>>
>> Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
>> ARM's two implementations of vgic_max_vcpus() no longer need to work around
>> the out-of-order call.
>>
>>  From this point on, d->max_vcpus and d->vcpus[] are valid for any domain which
>> can be looked up by domid.
>>
>> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
>> max != d->max_vcpus, which does match the older semantics (not that it is
>> obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
>> this point the hypercall still needs making to allocate each vcpu.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>>
>> v2:
>>   * Allocate in domain_create() rather than arch_domain_create().
>>   * Retain domain_max_vcpus().
>> ---
>>   xen/arch/arm/domain_build.c |  8 +-------
>>   xen/arch/arm/setup.c        |  2 +-
>>   xen/arch/arm/vgic.c         | 11 +----------
>>   xen/arch/arm/vgic/vgic.c    | 22 +---------------------
>>   xen/arch/x86/dom0_build.c   |  8 +-------
>>   xen/arch/x86/setup.c        |  2 +-
>>   xen/common/domain.c         | 18 ++++++++++++++++++
>>   xen/common/domctl.c         | 39 +--------------------------------------
>>   xen/common/event_channel.c  |  3 +--
>>   xen/include/xen/domain.h    |  2 +-
>>   10 files changed, 27 insertions(+), 88 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index f4a1225..6f45e56 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -72,14 +72,8 @@ unsigned int __init dom0_max_vcpus(void)
>>       return opt_dom0_max_vcpus;
>>   }
>>   
>> -struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
>> -                                     unsigned int max_vcpus)
>> +struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
>>   {
>> -    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
>> -    if ( !dom0->vcpu )
>> -        return NULL;
>> -    dom0->max_vcpus = max_vcpus;
>> -
>>       return alloc_vcpu(dom0, 0, 0);
>>   }
>>   
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 72e42e8..a3e1ef7 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -863,7 +863,7 @@ void __init start_xen(unsigned long boot_phys_offset,
>>       dom0_cfg.max_vcpus = dom0_max_vcpus();
>>   
>>       dom0 = domain_create(0, &dom0_cfg, true);
>> -    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
>> +    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
>>               panic("Error creating domain 0");
>>   
>>       if ( construct_dom0(dom0) != 0)
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 7a2c455..5a4f082 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -669,16 +669,7 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
>>   
>>   unsigned int vgic_max_vcpus(const struct domain *d)
>>   {
>> -    /*
>> -     * Since evtchn_init would call domain_max_vcpus for poll_mask
>> -     * allocation when the vgic_ops haven't been initialised yet,
>> -     * we return MAX_VIRT_CPUS if d->arch.vgic.handler is null.
>> -     */
>> -    if ( !d->arch.vgic.handler )
>> -        return MAX_VIRT_CPUS;
>> -    else
>> -        return min_t(unsigned int, MAX_VIRT_CPUS,
>> -                     d->arch.vgic.handler->max_vcpus);
>> +    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
>>   }
>>   
>>   /*
>> diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
>> index 832632a..4124817 100644
>> --- a/xen/arch/arm/vgic/vgic.c
>> +++ b/xen/arch/arm/vgic/vgic.c
>> @@ -951,27 +951,7 @@ void vgic_sync_hardware_irq(struct domain *d,
>>   
>>   unsigned int vgic_max_vcpus(const struct domain *d)
>>   {
>> -    unsigned int vgic_vcpu_limit;
>> -
>> -    switch ( d->arch.vgic.version )
>> -    {
>> -    case GIC_INVALID:
>> -        /*
>> -         * Since evtchn_init would call domain_max_vcpus for poll_mask
>> -         * allocation before the VGIC has been initialised, we need to
>> -         * return some safe value in this case. As this is for allocation
>> -         * purposes, go with the maximum value.
>> -         */
>> -        vgic_vcpu_limit = MAX_VIRT_CPUS;
>> -        break;
>> -    case GIC_V2:
>> -        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
>> -        break;
>> -    default:
>> -        BUG();
>> -    }
>> -
>> -    return min_t(unsigned int, MAX_VIRT_CPUS, vgic_vcpu_limit);
>> +    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
>>   }
> 
> Since both implementations are equal now, can you place this in vgic.h
> as a static inline function?

vgic/vgic.c is part of the new vGIC implementation (selectable at
compile time) and uses a different layout for the vgic_dist structure.
The structure is described in asm/new_vgic.h and no longer stores the
max vcpus.

Instead, the switch should be retained and only the case GIC_INVALID 
should be dropped.
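A sketch of the shape Julien is suggesting: the switch stays (since the new vGIC's vgic_dist no longer stores max_vcpus), but the GIC_INVALID case goes away now that the vGIC is configured before evtchn_init() runs. The Xen constants are stubbed with made-up values so the control flow stands alone, and assert(0) stands in for BUG():

```c
#include <assert.h>

/* Stubs with illustrative values; in Xen these come from the vGIC headers. */
#define MAX_VIRT_CPUS    128u
#define VGIC_V2_MAX_CPUS 8u
enum gic_version { GIC_V2, GIC_V3 };

/* Keep the switch, drop GIC_INVALID; an unhandled version still BUG()s. */
static unsigned int vgic_max_vcpus(enum gic_version version)
{
    unsigned int vgic_vcpu_limit = 0;

    switch ( version )
    {
    case GIC_V2:
        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
        break;
    default:
        assert(0); /* BUG() in Xen */
    }

    return vgic_vcpu_limit < MAX_VIRT_CPUS ? vgic_vcpu_limit : MAX_VIRT_CPUS;
}
```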

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-15 13:08     ` Andrew Cooper
@ 2018-08-15 13:32       ` Julien Grall
  2018-08-15 19:03         ` Andrew Cooper
  0 siblings, 1 reply; 63+ messages in thread
From: Julien Grall @ 2018-08-15 13:32 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini, Wei Liu, Jan Beulich

Hi Andrew,

On 08/15/2018 02:08 PM, Andrew Cooper wrote:
> On 15/08/18 14:04, Julien Grall wrote:
>> Hi Andrew,
>>
>> On 08/13/2018 11:01 AM, Andrew Cooper wrote:
>>> ... rather than setting the limits up after domain_create() has
>>> completed.
>>>
>>> This removes all the gnttab infrastructure for calculating the number
>>> of dom0
>>> grant frames, opting instead to require the dom0 construction code to
>>> pass a
>>> sane value in via the configuration.
>>>
>>> In practice, this now means that there is never a partially
>>> constructed grant
>>> table for a reference-able domain.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien.grall@arm.com>
>>> CC: Wei Liu <wei.liu2@citrix.com>
>>>
>>> v2:
>>>    * Split/rearrange to avoid the post-domain-create error path.
>>> ---
>>>    xen/arch/arm/domain_build.c       |  3 ++-
>>>    xen/arch/arm/setup.c              | 12 ++++++++++++
>>>    xen/arch/x86/setup.c              |  3 +++
>>>    xen/common/domain.c               |  3 ++-
>>>    xen/common/grant_table.c          | 16 +++-------------
>>>    xen/include/asm-arm/grant_table.h | 12 ------------
>>>    xen/include/asm-x86/grant_table.h |  5 -----
>>>    xen/include/xen/grant_table.h     |  6 ++----
>>>    8 files changed, 24 insertions(+), 36 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 1351572..737e0f3 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -2079,7 +2079,8 @@ static void __init find_gnttab_region(struct
>>> domain *d,
>>>         * enough space for a large grant table
>>>         */
>>>        kinfo->gnttab_start = __pa(_stext);
>>> -    kinfo->gnttab_size = gnttab_dom0_frames() << PAGE_SHIFT;
>>> +    kinfo->gnttab_size = min_t(unsigned int, opt_max_grant_frames,
>>> +                               PFN_DOWN(_etext - _stext)) <<
>>> PAGE_SHIFT;
>>
>>
>> I agree with Jan's comment on v1 that there is a risk someone will
>> update the size here but ...
>>
>>
>>>      #ifdef CONFIG_ARM_32
>>>        /*
>>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>>> index 45f3841..3d3b30c 100644
>>> --- a/xen/arch/arm/setup.c
>>> +++ b/xen/arch/arm/setup.c
>>> @@ -20,6 +20,7 @@
>>>    #include <xen/compile.h>
>>>    #include <xen/device_tree.h>
>>>    #include <xen/domain_page.h>
>>> +#include <xen/grant_table.h>
>>>    #include <xen/types.h>
>>>    #include <xen/string.h>
>>>    #include <xen/serial.h>
>>> @@ -693,6 +694,17 @@ void __init start_xen(unsigned long
>>> boot_phys_offset,
>>>        struct domain *dom0;
>>>        struct xen_domctl_createdomain dom0_cfg = {
>>>            .max_evtchn_port = -1,
>>> +
>>> +        /*
>>> +         * The region used by Xen on the memory will never be mapped
>>> in DOM0
>>> +         * memory layout. Therefore it can be used for the grant table.
>>> +         *
>>> +         * Only use the text section as it's always present and will
>>> contain
>>> +         * enough space for a large grant table
>>> +         */
>>> +        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
>>> +                                  PFN_DOWN(_etext - _stext)),
>>
>> ... not here. So I would prefer that we either keep a helper to find
>> the size, or pass that size around to domain_build. Do we store the
>> size in the domain information?
> 
> I have to admit that I'm somewhat perplexed by ARM's
> find_gnttab_region(), and I'm not sure why it exists.

Dom0 uses the host memory layout, which may differ between platforms,
so there is no single region address that would fit everyone.

This function is here to find, at boot, a suitable region in the layout
where the OS can map the grant table. The result is written into the
firmware table.

> 
> The value is available from d->grant_table.max_grant_frames but ISTR
> finding that the order of construction meant that it wasn't available
> when needed (although this was all from code inspection, so I could very
> easily be wrong).

I think it should be fine for Dom0 as find_gnttab_region is called from 
construct_dom0 and d->grant_table.max_grant_frames would be set before 
via domain_create().

Assuming d->grant_table.max_grant_frames can only be 0 before 
initialization, I would potentially add a 
BUG_ON(!d->grant_table.max_grant_frames) to make sure this always stays 
like that.
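The ordering being relied on can be sketched with a minimal model (the structures and function bodies below are hypothetical stand-ins for illustration only — in Xen, d->grant_table is a pointer to an opaque type, and find_gnttab_region() does considerably more work):

```c
#include <assert.h>

/* Hypothetical stand-ins: only the one field under discussion is modelled. */
struct grant_table { unsigned int max_grant_frames; };
struct domain { struct grant_table grant_table; };

/* domain_create() records the limit before dom0 construction runs... */
static void domain_create(struct domain *d, unsigned int max_grant_frames)
{
    d->grant_table.max_grant_frames = max_grant_frames;
}

/* ...so find_gnttab_region() may rely on it already being non-zero. */
static unsigned int find_gnttab_region(const struct domain *d)
{
    /* The suggested BUG_ON(!d->grant_table.max_grant_frames). */
    assert(d->grant_table.max_grant_frames);
    return d->grant_table.max_grant_frames;
}
```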

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-15 13:17     ` Julien Grall
@ 2018-08-15 13:50       ` Andrew Cooper
  2018-08-15 13:52         ` Julien Grall
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-15 13:50 UTC (permalink / raw)
  To: Julien Grall, Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On 15/08/18 14:17, Julien Grall wrote:
>>> diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
>>> index 832632a..4124817 100644
>>> --- a/xen/arch/arm/vgic/vgic.c
>>> +++ b/xen/arch/arm/vgic/vgic.c
>>> @@ -951,27 +951,7 @@ void vgic_sync_hardware_irq(struct domain *d,
>>>     unsigned int vgic_max_vcpus(const struct domain *d)
>>>   {
>>> -    unsigned int vgic_vcpu_limit;
>>> -
>>> -    switch ( d->arch.vgic.version )
>>> -    {
>>> -    case GIC_INVALID:
>>> -        /*
>>> -         * Since evtchn_init would call domain_max_vcpus for poll_mask
>>> -         * allocation before the VGIC has been initialised, we need to
>>> -         * return some safe value in this case. As this is for
>>> allocation
>>> -         * purposes, go with the maximum value.
>>> -         */
>>> -        vgic_vcpu_limit = MAX_VIRT_CPUS;
>>> -        break;
>>> -    case GIC_V2:
>>> -        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
>>> -        break;
>>> -    default:
>>> -        BUG();
>>> -    }
>>> -
>>> -    return min_t(unsigned int, MAX_VIRT_CPUS, vgic_vcpu_limit);
>>> +    return min_t(unsigned int, MAX_VIRT_CPUS,
>>> d->arch.vgic.handler->max_vcpus);
>>>   }
>>
>> Since both implementations are equal now, can you place this in vgic.h
>> as a static inline function?
>
> vgic/vgic.c is part of the new vGIC implementation (selectable at
> compilation time) and uses a different layout for the vgic_dist
> structure. The structure is described in asm/new_vgic.h and does not
> store the max vcpus anymore.
>
> Instead, the switch should be retained and only the case GIC_INVALID
> should be dropped.
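The retained switch Julien describes would look roughly as follows (a sketch against simplified stand-ins for Xen's definitions — the constant values are illustrative, and the function takes the GIC version directly instead of a const struct domain *):

```c
#include <assert.h>

/* Simplified stand-ins for Xen's definitions; the values are illustrative. */
#define MAX_VIRT_CPUS    128
#define VGIC_V2_MAX_CPUS   8

enum gic_version { GIC_INVALID = 0, GIC_V2, GIC_V3 };

/*
 * Sketch of the reduced switch: GIC_INVALID no longer needs handling once
 * the vGIC is initialised before anything queries the vcpu limit, and
 * GIC_V3 is compiled out of the new vGIC, so both fall to the BUG() path.
 */
static unsigned int vgic_max_vcpus(enum gic_version version)
{
    unsigned int vgic_vcpu_limit;

    switch ( version )
    {
    case GIC_V2:
        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
        break;
    default:
        assert(0); /* BUG() in Xen proper */
        return 0;  /* unreachable */
    }

    return vgic_vcpu_limit < MAX_VIRT_CPUS ? vgic_vcpu_limit
                                           : MAX_VIRT_CPUS;
}
```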

What about GIC_V3?  VGIC_V3_MAX_CPUS seems to be 255 at the moment.

~Andrew


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-15 13:50       ` Andrew Cooper
@ 2018-08-15 13:52         ` Julien Grall
  2018-08-15 13:56           ` Andrew Cooper
  0 siblings, 1 reply; 63+ messages in thread
From: Julien Grall @ 2018-08-15 13:52 UTC (permalink / raw)
  To: Andrew Cooper, Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

Hi Andrew,

On 08/15/2018 02:50 PM, Andrew Cooper wrote:
> On 15/08/18 14:17, Julien Grall wrote:
>>>> diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
>>>> index 832632a..4124817 100644
>>>> --- a/xen/arch/arm/vgic/vgic.c
>>>> +++ b/xen/arch/arm/vgic/vgic.c
>>>> @@ -951,27 +951,7 @@ void vgic_sync_hardware_irq(struct domain *d,
>>>>      unsigned int vgic_max_vcpus(const struct domain *d)
>>>>    {
>>>> -    unsigned int vgic_vcpu_limit;
>>>> -
>>>> -    switch ( d->arch.vgic.version )
>>>> -    {
>>>> -    case GIC_INVALID:
>>>> -        /*
>>>> -         * Since evtchn_init would call domain_max_vcpus for poll_mask
>>>> -         * allocation before the VGIC has been initialised, we need to
>>>> -         * return some safe value in this case. As this is for
>>>> allocation
>>>> -         * purposes, go with the maximum value.
>>>> -         */
>>>> -        vgic_vcpu_limit = MAX_VIRT_CPUS;
>>>> -        break;
>>>> -    case GIC_V2:
>>>> -        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
>>>> -        break;
>>>> -    default:
>>>> -        BUG();
>>>> -    }
>>>> -
>>>> -    return min_t(unsigned int, MAX_VIRT_CPUS, vgic_vcpu_limit);
>>>> +    return min_t(unsigned int, MAX_VIRT_CPUS,
>>>> d->arch.vgic.handler->max_vcpus);
>>>>    }
>>>
>>> Since both implementations are equal now, can you place this in vgic.h
>>> as a static inline function?
>>
>> vgic/vgic.c is part of the new vGIC implementation (selectable at
>> compilation time) and uses a different layout for the vgic_dist
>> structure. The structure is described in asm/new_vgic.h and does not
>> store the max vcpus anymore.
>>
>> Instead, the switch should be retained and only the case GIC_INVALID
>> should be dropped.
> 
> What about GIC_V3?  VGIC_V3_MAX_CPUS seems to be 255 at the moment.

GICv3 is not yet supported by the new vGIC and disabled at compile time. 
So we should never reach this code with d->arch.vgic.version == GIC_V3.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-15 13:52         ` Julien Grall
@ 2018-08-15 13:56           ` Andrew Cooper
  0 siblings, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-15 13:56 UTC (permalink / raw)
  To: Julien Grall, Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Jan Beulich, Xen-devel

On 15/08/18 14:52, Julien Grall wrote:
> Hi Andrew,
>
> On 08/15/2018 02:50 PM, Andrew Cooper wrote:
>> On 15/08/18 14:17, Julien Grall wrote:
>>>>> diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
>>>>> index 832632a..4124817 100644
>>>>> --- a/xen/arch/arm/vgic/vgic.c
>>>>> +++ b/xen/arch/arm/vgic/vgic.c
>>>>> @@ -951,27 +951,7 @@ void vgic_sync_hardware_irq(struct domain *d,
>>>>>      unsigned int vgic_max_vcpus(const struct domain *d)
>>>>>    {
>>>>> -    unsigned int vgic_vcpu_limit;
>>>>> -
>>>>> -    switch ( d->arch.vgic.version )
>>>>> -    {
>>>>> -    case GIC_INVALID:
>>>>> -        /*
>>>>> -         * Since evtchn_init would call domain_max_vcpus for
>>>>> poll_mask
>>>>> -         * allocation before the VGIC has been initialised, we
>>>>> need to
>>>>> -         * return some safe value in this case. As this is for
>>>>> allocation
>>>>> -         * purposes, go with the maximum value.
>>>>> -         */
>>>>> -        vgic_vcpu_limit = MAX_VIRT_CPUS;
>>>>> -        break;
>>>>> -    case GIC_V2:
>>>>> -        vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
>>>>> -        break;
>>>>> -    default:
>>>>> -        BUG();
>>>>> -    }
>>>>> -
>>>>> -    return min_t(unsigned int, MAX_VIRT_CPUS, vgic_vcpu_limit);
>>>>> +    return min_t(unsigned int, MAX_VIRT_CPUS,
>>>>> d->arch.vgic.handler->max_vcpus);
>>>>>    }
>>>>
>>>> Since both implementations are equal now, can you place this in vgic.h
>>>> as a static inline function?
>>>
>>> vgic/vgic.c is part of the new vGIC implementation (selectable at
>>> compilation time) and uses a different layout for the vgic_dist
>>> structure. The structure is described in asm/new_vgic.h and does not
>>> store the max vcpus anymore.
>>>
>>> Instead, the switch should be retained and only the case GIC_INVALID
>>> should be dropped.
>>
>> What about GIC_V3?  VGIC_V3_MAX_CPUS seems to be 255 at the moment.
>
> GICv3 is not yet supported by the new vGIC and disabled at compile
> time. So we should never reach this code with d->arch.vgic.version ==
> GIC_V3.

Ok - no problem.  I'll refresh this to just deleting the INVALID case.

~Andrew


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-15 13:11   ` Jan Beulich
@ 2018-08-15 14:03     ` Andrew Cooper
  2018-08-15 15:18       ` Jan Beulich
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-15 14:03 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

On 15/08/18 14:11, Jan Beulich wrote:
>>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
>> @@ -423,6 +436,11 @@ struct domain *domain_create(domid_t domid,
>>  
>>      sched_destroy_domain(d);
>>  
>> +    if ( d->max_vcpus )
>> +    {
>> +        d->max_vcpus = 0;
>> +        XFREE(d->vcpu);
>> +    }
>>      if ( init_status & INIT_arch )
>>          arch_domain_destroy(d);
> I'm not sure it is a good idea to free the vcpus this early, in particular
> before arch_domain_destroy().

Actually, this positioning is deliberate, so as not to change the
current behaviour of arch_domain_destroy().

Before this patch, d->vcpu[] was guaranteed to be NULL in the
arch_domain_destroy() call, and I don't currently trust it to work
properly if changed.  All of this cleanup logic needs further improvements.

>
>> --- a/xen/common/domctl.c
>> +++ b/xen/common/domctl.c
>> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>  
>>          ret = -EINVAL;
>>          if ( (d == current->domain) || /* no domain_pause() */
>> -             (max > domain_max_vcpus(d)) )
>> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>>              break;
>>  
>> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
>> -        if ( d->xenoprof )
>> -        {
>> -            ret = -EAGAIN;
>> -            break;
>> -        }
>> -
>>          /* Needed, for example, to ensure writable p.t. state is synced. */
>>          domain_pause(d);
>>  
>> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>              }
>>          }
>>  
>> -        /* We cannot reduce maximum VCPUs. */
>> -        ret = -EINVAL;
>> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
>> -            goto maxvcpu_out;
>> -
>> -        /*
>> -         * For now don't allow increasing the vcpu count from a non-zero
>> -         * value: This code and all readers of d->vcpu would otherwise need
>> -         * to be converted to use RCU, but at present there's no tools side
>> -         * code path that would issue such a request.
>> -         */
>> -        ret = -EBUSY;
>> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
>> -            goto maxvcpu_out;
>> -
>>          ret = -ENOMEM;
>>          online = cpupool_domain_cpumask(d);
>> -        if ( max > d->max_vcpus )
>> -        {
>> -            struct vcpu **vcpus;
>> -
>> -            BUG_ON(d->vcpu != NULL);
>> -            BUG_ON(d->max_vcpus != 0);
>> -
>> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
>> -                goto maxvcpu_out;
>> -
>> -            /* Install vcpu array /then/ update max_vcpus. */
>> -            d->vcpu = vcpus;
>> -            smp_wmb();
>> -            d->max_vcpus = max;
>> -        }
>>  
>>          for ( i = 0; i < max; i++ )
>>          {
> With all of this dropped, I think the domctl should be renamed. By
> dropping its "max" input at the same time, there would then also
> no longer be a need to check that the value matches what was
> stored during domain creation.

I'm still looking to eventually delete the hypercall, but we need to be
able to clean up all domain/vcpu allocations without calling
complete_domain_destroy, or rearrange the entry logic so
complete_domain_destroy() can be reused for a domain which isn't
currently in the domlist.

Unfortunately, I think this is going to be fairly complicated.

~Andrew


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-15 14:03     ` Andrew Cooper
@ 2018-08-15 15:18       ` Jan Beulich
  2018-08-29 10:36         ` Andrew Cooper
  0 siblings, 1 reply; 63+ messages in thread
From: Jan Beulich @ 2018-08-15 15:18 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 15.08.18 at 16:03, <andrew.cooper3@citrix.com> wrote:
> On 15/08/18 14:11, Jan Beulich wrote:
>>>>> On 13.08.18 at 12:01, <andrew.cooper3@citrix.com> wrote:
>>> @@ -423,6 +436,11 @@ struct domain *domain_create(domid_t domid,
>>>  
>>>      sched_destroy_domain(d);
>>>  
>>> +    if ( d->max_vcpus )
>>> +    {
>>> +        d->max_vcpus = 0;
>>> +        XFREE(d->vcpu);
>>> +    }
>>>      if ( init_status & INIT_arch )
>>>          arch_domain_destroy(d);
>> I'm not sure it is a good idea to free the vcpus this early, in particular
>> before arch_domain_destroy().
> 
> Actually, this positioning is deliberate, so as not to change the
> current behaviour of arch_domain_destroy().
> 
> Before this patch, d->vcpu[] was guaranteed to be NULL in the
> arch_domain_destroy() call, and I don't currently trust it to work
> properly if changed.  All of this cleanup logic needs further improvements.

Oh, good point.

>>> --- a/xen/common/domctl.c
>>> +++ b/xen/common/domctl.c
>>> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>  
>>>          ret = -EINVAL;
>>>          if ( (d == current->domain) || /* no domain_pause() */
>>> -             (max > domain_max_vcpus(d)) )
>>> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>>>              break;
>>>  
>>> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
>>> -        if ( d->xenoprof )
>>> -        {
>>> -            ret = -EAGAIN;
>>> -            break;
>>> -        }
>>> -
>>>          /* Needed, for example, to ensure writable p.t. state is synced. */
>>>          domain_pause(d);
>>>  
>>> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>              }
>>>          }
>>>  
>>> -        /* We cannot reduce maximum VCPUs. */
>>> -        ret = -EINVAL;
>>> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
>>> -            goto maxvcpu_out;
>>> -
>>> -        /*
>>> -         * For now don't allow increasing the vcpu count from a non-zero
>>> -         * value: This code and all readers of d->vcpu would otherwise need
>>> -         * to be converted to use RCU, but at present there's no tools side
>>> -         * code path that would issue such a request.
>>> -         */
>>> -        ret = -EBUSY;
>>> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
>>> -            goto maxvcpu_out;
>>> -
>>>          ret = -ENOMEM;
>>>          online = cpupool_domain_cpumask(d);
>>> -        if ( max > d->max_vcpus )
>>> -        {
>>> -            struct vcpu **vcpus;
>>> -
>>> -            BUG_ON(d->vcpu != NULL);
>>> -            BUG_ON(d->max_vcpus != 0);
>>> -
>>> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
>>> -                goto maxvcpu_out;
>>> -
>>> -            /* Install vcpu array /then/ update max_vcpus. */
>>> -            d->vcpu = vcpus;
>>> -            smp_wmb();
>>> -            d->max_vcpus = max;
>>> -        }
>>>  
>>>          for ( i = 0; i < max; i++ )
>>>          {
>> With all of this dropped, I think the domctl should be renamed. By
>> dropping its "max" input at the same time, there would then also
>> no longer be a need to check that the value matches what was
>> stored during domain creation.
> 
> I'm still looking to eventually delete the hypercall, but we need to be
> able to clean up all domain/vcpu allocations without calling
> complete_domain_destroy, or rearrange the entry logic so
> complete_domain_destroy() can be reused for a domain which isn't
> currently in the domlist.
> 
> Unfortunately, I think this is going to be fairly complicated.

Especially when we expect this to take some time, I think it would
be quite helpful for the domctl to actually say what it does until
then, rather than retaining its current (then misleading) name.

Jan




* Re: [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-15 13:32       ` Julien Grall
@ 2018-08-15 19:03         ` Andrew Cooper
  2018-08-16  8:59           ` Julien Grall
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-15 19:03 UTC (permalink / raw)
  To: Julien Grall, Xen-devel; +Cc: Stefano Stabellini, Wei Liu, Jan Beulich

On 15/08/18 14:32, Julien Grall wrote:
> Hi Andrew,
>>>>      #ifdef CONFIG_ARM_32
>>>>        /*
>>>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>>>> index 45f3841..3d3b30c 100644
>>>> --- a/xen/arch/arm/setup.c
>>>> +++ b/xen/arch/arm/setup.c
>>>> @@ -20,6 +20,7 @@
>>>>    #include <xen/compile.h>
>>>>    #include <xen/device_tree.h>
>>>>    #include <xen/domain_page.h>
>>>> +#include <xen/grant_table.h>
>>>>    #include <xen/types.h>
>>>>    #include <xen/string.h>
>>>>    #include <xen/serial.h>
>>>> @@ -693,6 +694,17 @@ void __init start_xen(unsigned long
>>>> boot_phys_offset,
>>>>        struct domain *dom0;
>>>>        struct xen_domctl_createdomain dom0_cfg = {
>>>>            .max_evtchn_port = -1,
>>>> +
>>>> +        /*
>>>> +         * The region used by Xen on the memory will never be mapped
>>>> in DOM0
>>>> +         * memory layout. Therefore it can be used for the grant
>>>> table.
>>>> +         *
>>>> +         * Only use the text section as it's always present and will
>>>> contain
>>>> +         * enough space for a large grant table
>>>> +         */
>>>> +        .max_grant_frames = min_t(unsigned int, opt_max_grant_frames,
>>>> +                                  PFN_DOWN(_etext - _stext)),
>>>
>>> ... not here. So I would prefer if we either keep an helper to find
>>> the size of pass that size around to domain_build. Do we store the
>>> size in the domain information?
>>
>> I have to admit that I'm somewhat perplexed by ARM's
>> find_gnttab_region(), and I'm not sure why it exists.
>
> Dom0 uses the host memory layout, which may differ between
> platforms, so there is no single region address that would fit everyone.
>
> This function is here to find at boot a suitable region in the layout
> where the OS can map the grant-table. The result will be written in
> the firmware table.
>
>>
>> The value is available from d->grant_table.max_grant_frames but ISTR
>> finding that the order of construction meant that it wasn't available
>> when needed (although this was all from code inspection, so I could very
>> easily be wrong).
>
> I think it should be fine for Dom0, as find_gnttab_region() is called
> from construct_dom0() and d->grant_table.max_grant_frames would already
> have been set via domain_create().
>
> Assuming d->grant_table.max_grant_frames can only be 0 before
> initialization, I would potentially add a
> BUG_ON(!d->grant_table.max_grant_frames) to make sure this always stays
> like that.

Actually, I remember now what the problem was.  d->grant_table is an
opaque type, so .max_grant_frames can't be accessed.

One of my intended bits of cleanup here is to remove the
gnttab_dom0_frames() function, because it has no business living in the
core grant_table.c

Would you be happy if I replaced gnttab_dom0_max() in asm-arm with
gnttab_dom0_frames() which accounts for the existing min(), and means
that domain_build.c will be ultimately unchanged?

~Andrew


* Re: [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-15 19:03         ` Andrew Cooper
@ 2018-08-16  8:59           ` Julien Grall
  0 siblings, 0 replies; 63+ messages in thread
From: Julien Grall @ 2018-08-16  8:59 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini, Wei Liu, Jan Beulich

Hi Andrew,

On 08/15/2018 08:03 PM, Andrew Cooper wrote:
> Actually, I remember now what the problem was.  d->grant_table is an
> opaque type, so .max_grant_frames can't be accessed.
> 
> One of my intended bits of cleanup here is to remove the
> gnttab_dom0_frames() function, because it has no business living in the
> core grant_table.c
> 
> Would you be happy if I replaced gnttab_dom0_max() in asm-arm with
> gnttab_dom0_frames() which accounts for the existing min(), and means
> that domain_build.c will be ultimately unchanged?

I would be happy with such change.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v3 6/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-13 10:01 ` [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
                     ` (2 preceding siblings ...)
  2018-08-15 13:04   ` Julien Grall
@ 2018-08-29  9:38   ` Andrew Cooper
  2018-08-30 19:40     ` Julien Grall
  3 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-29  9:38 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Wei Liu

... rather than setting the limits up after domain_create() has completed.

This removes the common gnttab infrastructure for calculating the number of
dom0 grant frames (as the common grant table code is not an appropriate place
for it to live), opting instead to require the dom0 construction code to pass
a sane value in via the configuration.

In practice, this now means that there is never a partially constructed grant
table for a reference-able domain.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Wei Liu <wei.liu2@citrix.com>

v2:
 * Split/rearrange to avoid the post-domain-create error path.
v3:
 * Retain gnttab_dom0_frames() for ARM.  Sadly needs to be a macro as
   opt_max_grant_frames isn't declared until later in xen/grant_table.h
---
 xen/arch/arm/setup.c              |  3 +++
 xen/arch/x86/setup.c              |  3 +++
 xen/common/domain.c               |  3 ++-
 xen/common/grant_table.c          | 16 +++-------------
 xen/include/asm-arm/grant_table.h |  6 ++----
 xen/include/asm-x86/grant_table.h |  5 -----
 xen/include/xen/grant_table.h     |  6 ++----
 7 files changed, 15 insertions(+), 27 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 45f3841..501a9d5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -20,6 +20,7 @@
 #include <xen/compile.h>
 #include <xen/device_tree.h>
 #include <xen/domain_page.h>
+#include <xen/grant_table.h>
 #include <xen/types.h>
 #include <xen/string.h>
 #include <xen/serial.h>
@@ -693,6 +694,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     struct domain *dom0;
     struct xen_domctl_createdomain dom0_cfg = {
         .max_evtchn_port = -1,
+        .max_grant_frames = gnttab_dom0_frames(),
+        .max_maptrack_frames = opt_max_maptrack_frames,
     };
 
     dcache_line_bytes = read_dcache_line_bytes();
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index dd11815..8440643 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1,6 +1,7 @@
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/err.h>
+#include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/sched-if.h>
 #include <xen/domain.h>
@@ -682,6 +683,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     struct xen_domctl_createdomain dom0_cfg = {
         .flags = XEN_DOMCTL_CDF_s3_integrity,
         .max_evtchn_port = -1,
+        .max_grant_frames = opt_max_grant_frames,
+        .max_maptrack_frames = opt_max_maptrack_frames,
     };
 
     /* Critical region without IDT or TSS.  Any fault is deadly! */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 171d25e..1dcab8d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -366,7 +366,8 @@ struct domain *domain_create(domid_t domid,
             goto fail;
         init_status |= INIT_evtchn;
 
-        if ( (err = grant_table_create(d)) != 0 )
+        if ( (err = grant_table_create(d, config->max_grant_frames,
+                                       config->max_maptrack_frames)) != 0 )
             goto fail;
         init_status |= INIT_gnttab;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ad55cfa..f08341e 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3567,9 +3567,8 @@ do_grant_table_op(
 #include "compat/grant_table.c"
 #endif
 
-int
-grant_table_create(
-    struct domain *d)
+int grant_table_create(struct domain *d, unsigned int max_grant_frames,
+                       unsigned int max_maptrack_frames)
 {
     struct grant_table *t;
     int ret = 0;
@@ -3587,11 +3586,7 @@ grant_table_create(
     t->domain = d;
     d->grant_table = t;
 
-    if ( d->domain_id == 0 )
-    {
-        ret = grant_table_init(d, t, gnttab_dom0_frames(),
-                               opt_max_maptrack_frames);
-    }
+    ret = grant_table_set_limits(d, max_grant_frames, max_maptrack_frames);
 
     return ret;
 }
@@ -4049,11 +4044,6 @@ static int __init gnttab_usage_init(void)
 }
 __initcall(gnttab_usage_init);
 
-unsigned int __init gnttab_dom0_frames(void)
-{
-    return min(opt_max_grant_frames, gnttab_dom0_max());
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index 5113b91..d8fde01 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -30,10 +30,8 @@ void gnttab_mark_dirty(struct domain *d, mfn_t mfn);
  * Only use the text section as it's always present and will contain
  * enough space for a large grant table
  */
-static inline unsigned int gnttab_dom0_max(void)
-{
-    return PFN_DOWN(_etext - _stext);
-}
+#define gnttab_dom0_frames()                                             \
+    min_t(unsigned int, opt_max_grant_frames, PFN_DOWN(_etext - _stext))
 
 #define gnttab_init_arch(gt)                                             \
 ({                                                                       \
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 76ec5dd..761a8c3 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -39,11 +39,6 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame,
     return replace_grant_pv_mapping(addr, frame, new_addr, flags);
 }
 
-static inline unsigned int gnttab_dom0_max(void)
-{
-    return UINT_MAX;
-}
-
 #define gnttab_init_arch(gt) 0
 #define gnttab_destroy_arch(gt) do {} while ( 0 )
 #define gnttab_set_frame_gfn(gt, st, idx, gfn) do {} while ( 0 )
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index c881414..b46bb0a 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -35,8 +35,8 @@ extern unsigned int opt_max_grant_frames;
 extern unsigned int opt_max_maptrack_frames;
 
 /* Create/destroy per-domain grant table context. */
-int grant_table_create(
-    struct domain *d);
+int grant_table_create(struct domain *d, unsigned int max_grant_frames,
+                       unsigned int max_maptrack_frames);
 void grant_table_destroy(
     struct domain *d);
 void grant_table_init_vcpu(struct vcpu *v);
@@ -63,6 +63,4 @@ int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
 int gnttab_get_status_frame(struct domain *d, unsigned long idx,
                             mfn_t *mfn);
 
-unsigned int gnttab_dom0_frames(void);
-
 #endif /* __XEN_GRANT_TABLE_H__ */
-- 
2.1.4



* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-15 15:18       ` Jan Beulich
@ 2018-08-29 10:36         ` Andrew Cooper
  2018-08-29 12:10           ` Jan Beulich
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-29 10:36 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

On 15/08/18 16:18, Jan Beulich wrote:
>
>>>> --- a/xen/common/domctl.c
>>>> +++ b/xen/common/domctl.c
>>>> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>  
>>>>          ret = -EINVAL;
>>>>          if ( (d == current->domain) || /* no domain_pause() */
>>>> -             (max > domain_max_vcpus(d)) )
>>>> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>>>>              break;
>>>>  
>>>> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
>>>> -        if ( d->xenoprof )
>>>> -        {
>>>> -            ret = -EAGAIN;
>>>> -            break;
>>>> -        }
>>>> -
>>>>          /* Needed, for example, to ensure writable p.t. state is synced. */
>>>>          domain_pause(d);
>>>>  
>>>> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>              }
>>>>          }
>>>>  
>>>> -        /* We cannot reduce maximum VCPUs. */
>>>> -        ret = -EINVAL;
>>>> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
>>>> -            goto maxvcpu_out;
>>>> -
>>>> -        /*
>>>> -         * For now don't allow increasing the vcpu count from a non-zero
>>>> -         * value: This code and all readers of d->vcpu would otherwise need
>>>> -         * to be converted to use RCU, but at present there's no tools side
>>>> -         * code path that would issue such a request.
>>>> -         */
>>>> -        ret = -EBUSY;
>>>> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
>>>> -            goto maxvcpu_out;
>>>> -
>>>>          ret = -ENOMEM;
>>>>          online = cpupool_domain_cpumask(d);
>>>> -        if ( max > d->max_vcpus )
>>>> -        {
>>>> -            struct vcpu **vcpus;
>>>> -
>>>> -            BUG_ON(d->vcpu != NULL);
>>>> -            BUG_ON(d->max_vcpus != 0);
>>>> -
>>>> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
>>>> -                goto maxvcpu_out;
>>>> -
>>>> -            /* Install vcpu array /then/ update max_vcpus. */
>>>> -            d->vcpu = vcpus;
>>>> -            smp_wmb();
>>>> -            d->max_vcpus = max;
>>>> -        }
>>>>  
>>>>          for ( i = 0; i < max; i++ )
>>>>          {
>>> With all of this dropped, I think the domctl should be renamed. By
>>> dropping its "max" input at the same time, there would then also
>>> no longer be a need to check that the value matches what was
>>> stored during domain creation.
>> I'm still looking to eventually delete the hypercall, but we need to be
>> able to clean up all domain/vcpu allocations without calling
>> complete_domain_destroy, or rearrange the entry logic so
>> complete_domain_destroy() can be reused for a domain which isn't
>> currently in the domlist.
>>
>> Unfortunately, I think this is going to be fairly complicated.
> Especially when we expect this to take some time, I think it would
> be quite helpful for the domctl to actually say what it does until
> then, rather than retaining its current (then misleading) name.

Renaming the domctl means renaming xc_domain_max_vcpus(), and the
python/ocaml stubs, the latter of which does have external users.

In this case, leaving things unchanged is the least disruptive course of
action.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-29 10:36         ` Andrew Cooper
@ 2018-08-29 12:10           ` Jan Beulich
  2018-08-29 12:29             ` Andrew Cooper
  0 siblings, 1 reply; 63+ messages in thread
From: Jan Beulich @ 2018-08-29 12:10 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 29.08.18 at 12:36, <andrew.cooper3@citrix.com> wrote:
> On 15/08/18 16:18, Jan Beulich wrote:
>>
>>>>> --- a/xen/common/domctl.c
>>>>> +++ b/xen/common/domctl.c
>>>>> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>>  
>>>>>          ret = -EINVAL;
>>>>>          if ( (d == current->domain) || /* no domain_pause() */
>>>>> -             (max > domain_max_vcpus(d)) )
>>>>> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>>>>>              break;
>>>>>  
>>>>> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
>>>>> -        if ( d->xenoprof )
>>>>> -        {
>>>>> -            ret = -EAGAIN;
>>>>> -            break;
>>>>> -        }
>>>>> -
>>>>>          /* Needed, for example, to ensure writable p.t. state is synced. */
>>>>>          domain_pause(d);
>>>>>  
>>>>> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>>              }
>>>>>          }
>>>>>  
>>>>> -        /* We cannot reduce maximum VCPUs. */
>>>>> -        ret = -EINVAL;
>>>>> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
>>>>> -            goto maxvcpu_out;
>>>>> -
>>>>> -        /*
>>>>> -         * For now don't allow increasing the vcpu count from a non-zero
>>>>> -         * value: This code and all readers of d->vcpu would otherwise need
>>>>> -         * to be converted to use RCU, but at present there's no tools side
>>>>> -         * code path that would issue such a request.
>>>>> -         */
>>>>> -        ret = -EBUSY;
>>>>> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
>>>>> -            goto maxvcpu_out;
>>>>> -
>>>>>          ret = -ENOMEM;
>>>>>          online = cpupool_domain_cpumask(d);
>>>>> -        if ( max > d->max_vcpus )
>>>>> -        {
>>>>> -            struct vcpu **vcpus;
>>>>> -
>>>>> -            BUG_ON(d->vcpu != NULL);
>>>>> -            BUG_ON(d->max_vcpus != 0);
>>>>> -
>>>>> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
>>>>> -                goto maxvcpu_out;
>>>>> -
>>>>> -            /* Install vcpu array /then/ update max_vcpus. */
>>>>> -            d->vcpu = vcpus;
>>>>> -            smp_wmb();
>>>>> -            d->max_vcpus = max;
>>>>> -        }
>>>>>  
>>>>>          for ( i = 0; i < max; i++ )
>>>>>          {
>>>> With all of this dropped, I think the domctl should be renamed. By
>>>> dropping its "max" input at the same time, there would then also
>>>> no longer be a need to check that the value matches what was
>>>> stored during domain creation.
>>> I'm still looking to eventually delete the hypercall, but we need to be
>>> able to clean up all domain/vcpu allocations without calling
>>> complete_domain_destroy, or rearrange the entry logic so
>>> complete_domain_destroy() can be reused for a domain which isn't
>>> currently in the domlist.
>>>
>>> Unfortunately, I think this is going to be fairly complicated.
>> Especially when we expect this to take some time, I think it would
>> be quite helpful for the domctl to actually say what it does until
>> then, rather than retaining its current (then misleading) name.
> 
> Renaming the domctl means renaming xc_domain_max_vcpus(), and the
> python/ocaml stubs, the latter of which does have external users.

This is an option, but the libxc and higher layer functions could as well
be left alone, perhaps with a comment added to the function you name
explaining why its name doesn't match the domctl it uses.

Jan




* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-29 12:10           ` Jan Beulich
@ 2018-08-29 12:29             ` Andrew Cooper
  2018-08-29 12:49               ` Jan Beulich
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-08-29 12:29 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

On 29/08/18 13:10, Jan Beulich wrote:
>>>> On 29.08.18 at 12:36, <andrew.cooper3@citrix.com> wrote:
>> On 15/08/18 16:18, Jan Beulich wrote:
>>>>>> --- a/xen/common/domctl.c
>>>>>> +++ b/xen/common/domctl.c
>>>>>> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>>>  
>>>>>>          ret = -EINVAL;
>>>>>>          if ( (d == current->domain) || /* no domain_pause() */
>>>>>> -             (max > domain_max_vcpus(d)) )
>>>>>> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>>>>>>              break;
>>>>>>  
>>>>>> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
>>>>>> -        if ( d->xenoprof )
>>>>>> -        {
>>>>>> -            ret = -EAGAIN;
>>>>>> -            break;
>>>>>> -        }
>>>>>> -
>>>>>>          /* Needed, for example, to ensure writable p.t. state is synced. */
>>>>>>          domain_pause(d);
>>>>>>  
>>>>>> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>>>              }
>>>>>>          }
>>>>>>  
>>>>>> -        /* We cannot reduce maximum VCPUs. */
>>>>>> -        ret = -EINVAL;
>>>>>> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
>>>>>> -            goto maxvcpu_out;
>>>>>> -
>>>>>> -        /*
>>>>>> -         * For now don't allow increasing the vcpu count from a non-zero
>>>>>> -         * value: This code and all readers of d->vcpu would otherwise need
>>>>>> -         * to be converted to use RCU, but at present there's no tools side
>>>>>> -         * code path that would issue such a request.
>>>>>> -         */
>>>>>> -        ret = -EBUSY;
>>>>>> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
>>>>>> -            goto maxvcpu_out;
>>>>>> -
>>>>>>          ret = -ENOMEM;
>>>>>>          online = cpupool_domain_cpumask(d);
>>>>>> -        if ( max > d->max_vcpus )
>>>>>> -        {
>>>>>> -            struct vcpu **vcpus;
>>>>>> -
>>>>>> -            BUG_ON(d->vcpu != NULL);
>>>>>> -            BUG_ON(d->max_vcpus != 0);
>>>>>> -
>>>>>> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
>>>>>> -                goto maxvcpu_out;
>>>>>> -
>>>>>> -            /* Install vcpu array /then/ update max_vcpus. */
>>>>>> -            d->vcpu = vcpus;
>>>>>> -            smp_wmb();
>>>>>> -            d->max_vcpus = max;
>>>>>> -        }
>>>>>>  
>>>>>>          for ( i = 0; i < max; i++ )
>>>>>>          {
>>>>> With all of this dropped, I think the domctl should be renamed. By
>>>>> dropping its "max" input at the same time, there would then also
>>>>> no longer be a need to check that the value matches what was
>>>>> stored during domain creation.
>>>> I'm still looking to eventually delete the hypercall, but we need to be
>>>> able to clean up all domain/vcpu allocations without calling
>>>> complete_domain_destroy, or rearrange the entry logic so
>>>> complete_domain_destroy() can be reused for a domain which isn't
>>>> currently in the domlist.
>>>>
>>>> Unfortunately, I think this is going to be fairly complicated.
>>> Especially when we expect this to take some time, I think it would
>>> be quite helpful for the domctl to actually say what it does until
>>> then, rather than retaining its current (then misleading) name.
>> Renaming the domctl means renaming xc_domain_max_vcpus(), and the
>> python/ocaml stubs, the latter of which does have external users.
> This is an option, but the libxc and higher layer functions could as well
> be left alone, perhaps with a comment added to the function you name
> explaining why its name doesn't match the domctl it uses.

And what good will that do?  You'll now have inconsistent naming, which
is worse.

It's either all or nothing, and there are several good reasons not to
change everything.  I definitely don't think renaming the infrastructure
is a constructive use of my time, or anyone else's for that matter.

I'm open to the idea of leaving a comment by the implementation of
XEN_DOMCTL_max_vcpus explaining its change in behaviour, but I think
that is the extent of what is reasonable to do here.

~Andrew


* Re: [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-29 12:29             ` Andrew Cooper
@ 2018-08-29 12:49               ` Jan Beulich
  0 siblings, 0 replies; 63+ messages in thread
From: Jan Beulich @ 2018-08-29 12:49 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Stefano Stabellini, Wei Liu, Xen-devel

>>> On 29.08.18 at 14:29, <andrew.cooper3@citrix.com> wrote:
> On 29/08/18 13:10, Jan Beulich wrote:
>>>>> On 29.08.18 at 12:36, <andrew.cooper3@citrix.com> wrote:
>>> On 15/08/18 16:18, Jan Beulich wrote:
>>>>>>> --- a/xen/common/domctl.c
>>>>>>> +++ b/xen/common/domctl.c
>>>>>>> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>>>>  
>>>>>>>          ret = -EINVAL;
>>>>>>>          if ( (d == current->domain) || /* no domain_pause() */
>>>>>>> -             (max > domain_max_vcpus(d)) )
>>>>>>> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>>>>>>>              break;
>>>>>>>  
>>>>>>> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
>>>>>>> -        if ( d->xenoprof )
>>>>>>> -        {
>>>>>>> -            ret = -EAGAIN;
>>>>>>> -            break;
>>>>>>> -        }
>>>>>>> -
>>>>>>>          /* Needed, for example, to ensure writable p.t. state is synced. */
>>>>>>>          domain_pause(d);
>>>>>>>  
>>>>>>> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>>>>>              }
>>>>>>>          }
>>>>>>>  
>>>>>>> -        /* We cannot reduce maximum VCPUs. */
>>>>>>> -        ret = -EINVAL;
>>>>>>> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
>>>>>>> -            goto maxvcpu_out;
>>>>>>> -
>>>>>>> -        /*
>>>>>>> -         * For now don't allow increasing the vcpu count from a non-zero
>>>>>>> -         * value: This code and all readers of d->vcpu would otherwise need
>>>>>>> -         * to be converted to use RCU, but at present there's no tools side
>>>>>>> -         * code path that would issue such a request.
>>>>>>> -         */
>>>>>>> -        ret = -EBUSY;
>>>>>>> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
>>>>>>> -            goto maxvcpu_out;
>>>>>>> -
>>>>>>>          ret = -ENOMEM;
>>>>>>>          online = cpupool_domain_cpumask(d);
>>>>>>> -        if ( max > d->max_vcpus )
>>>>>>> -        {
>>>>>>> -            struct vcpu **vcpus;
>>>>>>> -
>>>>>>> -            BUG_ON(d->vcpu != NULL);
>>>>>>> -            BUG_ON(d->max_vcpus != 0);
>>>>>>> -
>>>>>>> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
>>>>>>> -                goto maxvcpu_out;
>>>>>>> -
>>>>>>> -            /* Install vcpu array /then/ update max_vcpus. */
>>>>>>> -            d->vcpu = vcpus;
>>>>>>> -            smp_wmb();
>>>>>>> -            d->max_vcpus = max;
>>>>>>> -        }
>>>>>>>  
>>>>>>>          for ( i = 0; i < max; i++ )
>>>>>>>          {
>>>>>> With all of this dropped, I think the domctl should be renamed. By
>>>>>> dropping its "max" input at the same time, there would then also
>>>>>> no longer be a need to check that the value matches what was
>>>>>> stored during domain creation.
>>>>> I'm still looking to eventually delete the hypercall, but we need to be
>>>>> able to clean up all domain/vcpu allocations without calling
>>>>> complete_domain_destroy, or rearrange the entry logic so
>>>>> complete_domain_destroy() can be reused for a domain which isn't
>>>>> currently in the domlist.
>>>>>
>>>>> Unfortunately, I think this is going to be fairly complicated.
>>>> Especially when we expect this to take some time, I think it would
>>>> be quite helpful for the domctl to actually say what it does until
>>>> then, rather than retaining its current (then misleading) name.
>>> Renaming the domctl means renaming xc_domain_max_vcpus(), and the
>>> python/ocaml stubs, the latter of which does have external users.
>> This is an option, but the libxc and higher layer functions could as well
>> be left alone, perhaps with a comment added to the function you name
>> explaining why its name doesn't match the domctl it uses.
> 
> And what good will that do?  You'll now have inconsistent naming, which
> is worse.
> 
> It's either all or nothing, and there are several good reasons not to
> change everything.  I definitely don't think renaming the infrastructure
> is a constructive use of my time, or anyone else's for that matter.
> 
> I'm open to the idea of leaving a comment by the implementation of
> XEN_DOMCTL_max_vcpus explaining its change in behaviour, but I think
> that is the extent of what is reasonable to do here.

Well, I'll leave it to the other REST maintainers then.

Jan



* [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-13 10:01 ` [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create() Andrew Cooper
  2018-08-14 15:17   ` Roger Pau Monné
  2018-08-15 13:11   ` Jan Beulich
@ 2018-08-29 14:40   ` Andrew Cooper
  2018-08-29 15:03     ` Jan Beulich
  2018-08-30 19:46     ` Julien Grall
  2 siblings, 2 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-29 14:40 UTC (permalink / raw)
  To: Xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich

For ARM, the call to arch_domain_create() needs to have completed before
domain_max_vcpus() will return the correct upper bound.

For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
of dom0->vcpu.

With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
can be constructed suitably for the domain, rather than for the worst-case
setting.

Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
ARM's two implementations of vgic_max_vcpus() no longer need to work around the
out-of-order call.

From this point on, d->max_vcpus and d->vcpu[] are valid for any domain which
can be looked up by domid.

The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
max != d->max_vcpus, which does match the older semantics (not that it is
obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
this point the hypercall still needs making to allocate each vcpu.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Wei Liu <wei.liu2@citrix.com>

v2:
 * Allocate in domain_create() rather than arch_domain_create().
 * Retain domain_max_vcpus().
v3:
 * Only drop the GIC_INVALID case in vgic_max_vcpus().
 * Leave a note concerning the new behaviour of XEN_DOMCTL_max_vcpus in the
   interim before it gets deleted.
---
 xen/arch/arm/domain_build.c |  8 +-------
 xen/arch/arm/setup.c        |  2 +-
 xen/arch/arm/vgic.c         | 11 +----------
 xen/arch/arm/vgic/vgic.c    |  9 ---------
 xen/arch/x86/dom0_build.c   |  8 +-------
 xen/arch/x86/setup.c        |  2 +-
 xen/common/domain.c         | 18 ++++++++++++++++++
 xen/common/domctl.c         | 46 ++++++++-------------------------------------
 xen/common/event_channel.c  |  3 +--
 xen/include/xen/domain.h    |  2 +-
 10 files changed, 33 insertions(+), 76 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 6900a93..9ceb33d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -72,14 +72,8 @@ unsigned int __init dom0_max_vcpus(void)
     return opt_dom0_max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return alloc_vcpu(dom0, 0, 0);
 }
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 048d5f3..01aaaab 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -854,7 +854,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     dom0 = domain_create(0, &dom0_cfg, true);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
             panic("Error creating domain 0");
 
     if ( construct_dom0(dom0) != 0)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7a2c455..5a4f082 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -669,16 +669,7 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
 
 unsigned int vgic_max_vcpus(const struct domain *d)
 {
-    /*
-     * Since evtchn_init would call domain_max_vcpus for poll_mask
-     * allocation when the vgic_ops haven't been initialised yet,
-     * we return MAX_VIRT_CPUS if d->arch.vgic.handler is null.
-     */
-    if ( !d->arch.vgic.handler )
-        return MAX_VIRT_CPUS;
-    else
-        return min_t(unsigned int, MAX_VIRT_CPUS,
-                     d->arch.vgic.handler->max_vcpus);
+    return min_t(unsigned int, MAX_VIRT_CPUS, d->arch.vgic.handler->max_vcpus);
 }
 
 /*
diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index 832632a..3272952 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -955,15 +955,6 @@ unsigned int vgic_max_vcpus(const struct domain *d)
 
     switch ( d->arch.vgic.version )
     {
-    case GIC_INVALID:
-        /*
-         * Since evtchn_init would call domain_max_vcpus for poll_mask
-         * allocation before the VGIC has been initialised, we need to
-         * return some safe value in this case. As this is for allocation
-         * purposes, go with the maximum value.
-         */
-        vgic_vcpu_limit = MAX_VIRT_CPUS;
-        break;
     case GIC_V2:
         vgic_vcpu_limit = VGIC_V2_MAX_CPUS;
         break;
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index b42eac3..423fdec 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -199,17 +199,11 @@ unsigned int __init dom0_max_vcpus(void)
     return max_vcpus;
 }
 
-struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0,
-                                     unsigned int max_vcpus)
+struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
     dom0->node_affinity = dom0_nodes;
     dom0->auto_node_affinity = !dom0_nr_pxms;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
-    if ( !dom0->vcpu )
-        return NULL;
-    dom0->max_vcpus = max_vcpus;
-
     return dom0_setup_vcpu(dom0, 0,
                            cpumask_last(&dom0_cpus) /* so it wraps around to first pcpu */);
 }
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 3ffcb7a..c9e66ea 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1701,7 +1701,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     /* Create initial domain 0. */
     dom0 = domain_create(get_initial_domain_id(), &dom0_cfg, !pv_shim);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0, dom0_cfg.max_vcpus) == NULL) )
+    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
         panic("Error creating domain 0");
 
     /* Grab the DOM0 command line. */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8143532..f64ad5f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -338,6 +338,19 @@ struct domain *domain_create(domid_t domid,
 
     if ( !is_idle_domain(d) )
     {
+        /* Check d->max_vcpus and allocate d->vcpu[]. */
+        err = -EINVAL;
+        if ( config->max_vcpus < 1 ||
+             config->max_vcpus > domain_max_vcpus(d) )
+            goto fail;
+
+        err = -ENOMEM;
+        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
+        if ( !d->vcpu )
+            goto fail;
+
+        d->max_vcpus = config->max_vcpus;
+
         watchdog_domain_init(d);
         init_status |= INIT_watchdog;
 
@@ -422,6 +435,11 @@ struct domain *domain_create(domid_t domid,
 
     sched_destroy_domain(d);
 
+    if ( d->max_vcpus )
+    {
+        d->max_vcpus = 0;
+        XFREE(d->vcpu);
+    }
     if ( init_status & INIT_arch )
         arch_domain_destroy(d);
     if ( init_status & INIT_gnttab )
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 58e51b2..8a803b3 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -547,6 +547,13 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         break;
     }
 
+    /*
+     * Note: The parameter passed to XEN_DOMCTL_max_vcpus must match the value
+     * passed to XEN_DOMCTL_createdomain.  This hypercall is in the process of
+     * being removed (once the failure paths in domain_create() have been
+     * improved), but is still required in the short term to allocate the
+     * vcpus themselves.
+     */
     case XEN_DOMCTL_max_vcpus:
     {
         unsigned int i, max = op->u.max_vcpus.max, cpu;
@@ -554,16 +561,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
-             (max > domain_max_vcpus(d)) )
+             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
             break;
 
-        /* Until Xenoprof can dynamically grow its vcpu-s array... */
-        if ( d->xenoprof )
-        {
-            ret = -EAGAIN;
-            break;
-        }
-
         /* Needed, for example, to ensure writable p.t. state is synced. */
         domain_pause(d);
 
@@ -581,38 +581,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             }
         }
 
-        /* We cannot reduce maximum VCPUs. */
-        ret = -EINVAL;
-        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
-            goto maxvcpu_out;
-
-        /*
-         * For now don't allow increasing the vcpu count from a non-zero
-         * value: This code and all readers of d->vcpu would otherwise need
-         * to be converted to use RCU, but at present there's no tools side
-         * code path that would issue such a request.
-         */
-        ret = -EBUSY;
-        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
-            goto maxvcpu_out;
-
         ret = -ENOMEM;
         online = cpupool_domain_cpumask(d);
-        if ( max > d->max_vcpus )
-        {
-            struct vcpu **vcpus;
-
-            BUG_ON(d->vcpu != NULL);
-            BUG_ON(d->max_vcpus != 0);
-
-            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
-                goto maxvcpu_out;
-
-            /* Install vcpu array /then/ update max_vcpus. */
-            d->vcpu = vcpus;
-            smp_wmb();
-            d->max_vcpus = max;
-        }
 
         for ( i = 0; i < max; i++ )
         {
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 41cbbae..381f30e 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1303,8 +1303,7 @@ int evtchn_init(struct domain *d, unsigned int max_port)
     evtchn_from_port(d, 0)->state = ECS_RESERVED;
 
 #if MAX_VIRT_CPUS > BITS_PER_LONG
-    d->poll_mask = xzalloc_array(unsigned long,
-                                 BITS_TO_LONGS(domain_max_vcpus(d)));
+    d->poll_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(d->max_vcpus));
     if ( !d->poll_mask )
     {
         free_evtchn_bucket(d, d->evtchn);
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 651205d..ce31999 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -17,7 +17,7 @@ struct vcpu *alloc_vcpu(
     struct domain *d, unsigned int vcpu_id, unsigned int cpu_id);
 
 unsigned int dom0_max_vcpus(void);
-struct vcpu *alloc_dom0_vcpu0(struct domain *dom0, unsigned int max_vcpus);
+struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
 int vcpu_up(struct vcpu *v);
-- 
2.1.4



* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-29 14:40   ` [PATCH v3 " Andrew Cooper
@ 2018-08-29 15:03     ` Jan Beulich
  2018-08-31 10:33       ` Wei Liu
  2018-08-30 19:46     ` Julien Grall
  1 sibling, 1 reply; 63+ messages in thread
From: Jan Beulich @ 2018-08-29 15:03 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Tim Deegan,
	Xen-devel, Julien Grall

>>> On 29.08.18 at 16:40, <andrew.cooper3@citrix.com> wrote:
> For ARM, the call to arch_domain_create() needs to have completed before
> domain_max_vcpus() will return the correct upper bound.
> 
> For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
> of dom0->vcpu.
> 
> With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
> can be constructed suitably for the domain, rather than for the worst-case
> setting.
> 
> Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
> ARM's two implementations of vgic_max_vcpus() no longer need to work around the
> out-of-order call.
> 
> From this point on, d->max_vcpus and d->vcpu[] are valid for any domain which
> can be looked up by domid.
> 
> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
> max != d->max_vcpus, which does match the older semantics (not that it is
> obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
> this point the hypercall still needs making to allocate each vcpu.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
in principle, but as said before the lack of renaming of the domctl
makes my ack dependent upon some other REST maintainer
agreeing with your position there (the more that you've added
the comment to the implementation rather than the public header).

Jan




* Re: [PATCH v3 6/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create()
  2018-08-29  9:38   ` [PATCH v3 6/12] " Andrew Cooper
@ 2018-08-30 19:40     ` Julien Grall
  0 siblings, 0 replies; 63+ messages in thread
From: Julien Grall @ 2018-08-30 19:40 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Stefano Stabellini, Wei Liu

Hi Andrew,

On 08/29/2018 10:38 AM, Andrew Cooper wrote:
> ... rather than setting the limits up after domain_create() has completed.
> 
> This removes the common gnttab infrastructure for calculating the number of
> dom0 grant frames (as the common grant table code is not an appropriate place
> for it to live), opting instead to require the dom0 construction code to pass
> a sane value in via the configuration.
> 
> In practice, this now means that there is never a partially constructed grant
> table for a reference-able domain.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> 
> v2:
>   * Split/rearrange to avoid the post-domain-create error path.
> v3:
>   * Retain gnttab_dom0_frames() for ARM.  Sadly needs to be a macro as
>     opt_max_grant_frames isn't declared until later in xen/grant_table.h
> ---
>   xen/arch/arm/setup.c              |  3 +++
>   xen/arch/x86/setup.c              |  3 +++
>   xen/common/domain.c               |  3 ++-
>   xen/common/grant_table.c          | 16 +++-------------
>   xen/include/asm-arm/grant_table.h |  6 ++----
>   xen/include/asm-x86/grant_table.h |  5 -----
>   xen/include/xen/grant_table.h     |  6 ++----
>   7 files changed, 15 insertions(+), 27 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 45f3841..501a9d5 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -20,6 +20,7 @@
>   #include <xen/compile.h>
>   #include <xen/device_tree.h>
>   #include <xen/domain_page.h>
> +#include <xen/grant_table.h>
>   #include <xen/types.h>
>   #include <xen/string.h>
>   #include <xen/serial.h>
> @@ -693,6 +694,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>       struct domain *dom0;
>       struct xen_domctl_createdomain dom0_cfg = {
>           .max_evtchn_port = -1,
> +        .max_grant_frames = gnttab_dom0_frames(),
> +        .max_maptrack_frames = opt_max_maptrack_frames,
>       };
>   
>       dcache_line_bytes = read_dcache_line_bytes();
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index dd11815..8440643 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1,6 +1,7 @@
>   #include <xen/init.h>
>   #include <xen/lib.h>
>   #include <xen/err.h>
> +#include <xen/grant_table.h>
>   #include <xen/sched.h>
>   #include <xen/sched-if.h>
>   #include <xen/domain.h>
> @@ -682,6 +683,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>       struct xen_domctl_createdomain dom0_cfg = {
>           .flags = XEN_DOMCTL_CDF_s3_integrity,
>           .max_evtchn_port = -1,
> +        .max_grant_frames = opt_max_grant_frames,
> +        .max_maptrack_frames = opt_max_maptrack_frames,
>       };
>   
>       /* Critical region without IDT or TSS.  Any fault is deadly! */
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 171d25e..1dcab8d 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -366,7 +366,8 @@ struct domain *domain_create(domid_t domid,
>               goto fail;
>           init_status |= INIT_evtchn;
>   
> -        if ( (err = grant_table_create(d)) != 0 )
> +        if ( (err = grant_table_create(d, config->max_grant_frames,
> +                                       config->max_maptrack_frames)) != 0 )
>               goto fail;
>           init_status |= INIT_gnttab;
>   
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index ad55cfa..f08341e 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -3567,9 +3567,8 @@ do_grant_table_op(
>   #include "compat/grant_table.c"
>   #endif
>   
> -int
> -grant_table_create(
> -    struct domain *d)
> +int grant_table_create(struct domain *d, unsigned int max_grant_frames,
> +                       unsigned int max_maptrack_frames)
>   {
>       struct grant_table *t;
>       int ret = 0;
> @@ -3587,11 +3586,7 @@ grant_table_create(
>       t->domain = d;
>       d->grant_table = t;
>   
> -    if ( d->domain_id == 0 )
> -    {
> -        ret = grant_table_init(d, t, gnttab_dom0_frames(),
> -                               opt_max_maptrack_frames);
> -    }
> +    ret = grant_table_set_limits(d, max_grant_frames, max_maptrack_frames);
>   
>       return ret;
>   }
> @@ -4049,11 +4044,6 @@ static int __init gnttab_usage_init(void)
>   }
>   __initcall(gnttab_usage_init);
>   
> -unsigned int __init gnttab_dom0_frames(void)
> -{
> -    return min(opt_max_grant_frames, gnttab_dom0_max());
> -}
> -
>   /*
>    * Local variables:
>    * mode: C
> diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
> index 5113b91..d8fde01 100644
> --- a/xen/include/asm-arm/grant_table.h
> +++ b/xen/include/asm-arm/grant_table.h
> @@ -30,10 +30,8 @@ void gnttab_mark_dirty(struct domain *d, mfn_t mfn);
>    * Only use the text section as it's always present and will contain
>    * enough space for a large grant table
>    */
> -static inline unsigned int gnttab_dom0_max(void)
> -{
> -    return PFN_DOWN(_etext - _stext);
> -}
> +#define gnttab_dom0_frames()                                             \
> +    min_t(unsigned int, opt_max_grant_frames, PFN_DOWN(_etext - _stext))
>   
>   #define gnttab_init_arch(gt)                                             \
>   ({                                                                       \
> diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
> index 76ec5dd..761a8c3 100644
> --- a/xen/include/asm-x86/grant_table.h
> +++ b/xen/include/asm-x86/grant_table.h
> @@ -39,11 +39,6 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame,
>       return replace_grant_pv_mapping(addr, frame, new_addr, flags);
>   }
>   
> -static inline unsigned int gnttab_dom0_max(void)
> -{
> -    return UINT_MAX;
> -}
> -
>   #define gnttab_init_arch(gt) 0
>   #define gnttab_destroy_arch(gt) do {} while ( 0 )
>   #define gnttab_set_frame_gfn(gt, st, idx, gfn) do {} while ( 0 )
> diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
> index c881414..b46bb0a 100644
> --- a/xen/include/xen/grant_table.h
> +++ b/xen/include/xen/grant_table.h
> @@ -35,8 +35,8 @@ extern unsigned int opt_max_grant_frames;
>   extern unsigned int opt_max_maptrack_frames;
>   
>   /* Create/destroy per-domain grant table context. */
> -int grant_table_create(
> -    struct domain *d);
> +int grant_table_create(struct domain *d, unsigned int max_grant_frames,
> +                       unsigned int max_maptrack_frames);
>   void grant_table_destroy(
>       struct domain *d);
>   void grant_table_init_vcpu(struct vcpu *v);
> @@ -63,6 +63,4 @@ int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
>   int gnttab_get_status_frame(struct domain *d, unsigned long idx,
>                               mfn_t *mfn);
>   
> -unsigned int gnttab_dom0_frames(void);
> -
>   #endif /* __XEN_GRANT_TABLE_H__ */
> 

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-29 14:40   ` [PATCH v3 " Andrew Cooper
  2018-08-29 15:03     ` Jan Beulich
@ 2018-08-30 19:46     ` Julien Grall
  2018-08-30 20:04       ` Andrew Cooper
  1 sibling, 1 reply; 63+ messages in thread
From: Julien Grall @ 2018-08-30 19:46 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Tim Deegan, Jan Beulich

Hi Andrew,

On 08/29/2018 03:40 PM, Andrew Cooper wrote:
> For ARM, the call to arch_domain_create() needs to have completed before
> domain_max_vcpus() will return the correct upper bound.
> 
> For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
> of dom0->vcpu.
> 
> With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
> can be constructed suitably for the domain, rather than for the worst-case
> setting.
> 
> Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
> ARM's two implementations of vgic_max_vcpus() no longer need to work around the
> out-of-order call.
> 
>  From this point on, d->max_vcpus and d->vcpus[] are valid for any domain which
> can be looked up by domid.
> 
> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
> max != d->max_vcpus, which does match the older semantics (not that it is
> obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
> this point the hypercall still needs making to allocate each vcpu.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

With one request below, for the ARM + Common code:

Acked-by: Julien Grall <julien.grall@arm.com>

FAOD, I agree with your position to avoid the renaming.

[...]

> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 58e51b2..8a803b3 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -547,6 +547,13 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>           break;
>       }
>   
> +    /*
> +     * Note: The parameter passed to XEN_DOMCTL_max_vcpus must match the value
> +     * passed to XEN_DOMCTL_createdomain.  This hypercall is in the process of
> +     * being removed (once the failure paths in domain_create() have been
> +     * improved), but is still required in the short term to allocate the
> +     * vcpus themselves.
> +     */

This comment might be more useful in the public header. That is usually 
the first place I would look for a description of a domctl.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-30 19:46     ` Julien Grall
@ 2018-08-30 20:04       ` Andrew Cooper
  0 siblings, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-30 20:04 UTC (permalink / raw)
  To: Julien Grall, Xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Tim Deegan, Jan Beulich

On 30/08/18 20:46, Julien Grall wrote:
> Hi Andrew,
>
> On 08/29/2018 03:40 PM, Andrew Cooper wrote:
>> For ARM, the call to arch_domain_create() needs to have completed before
>> domain_max_vcpus() will return the correct upper bound.
>>
>> For each arch's dom0's, drop the temporary max_vcpus parameter, and
>> allocation
>> of dom0->vcpu.
>>
>> With d->max_vcpus now correctly configured before evtchn_init(), the
>> poll mask
>> can be constructed suitably for the domain, rather than for the
>> worst-case
>> setting.
>>
>> Due to the evtchn_init() fixes, it no longer calls
>> domain_max_vcpus(), and
>> ARM's two implementations of vgic_max_vcpus() no longer need to work
>> around the
>> out-of-order call.
>>
>>  From this point on, d->max_vcpus and d->vcpus[] are valid for any
>> domain which
>> can be looked up by domid.
>>
>> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call
>> attempt with
>> max != d->max_vcpus, which does match the older semantics (not that
>> it is
>> obvious from the code).  The logic to allocate d->vcpu[] is dropped,
>> but at
>> this point the hypercall still needs making to allocate each vcpu.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> With one request below, for the ARM + Common code:
>
> Acked-by: Julien Grall <julien.grall@arm.com>
>
> FAOD, I agree with your position to avoid the renaming.
>
> [...]
>
>> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
>> index 58e51b2..8a803b3 100644
>> --- a/xen/common/domctl.c
>> +++ b/xen/common/domctl.c
>> @@ -547,6 +547,13 @@ long
>> do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>           break;
>>       }
>>   +    /*
>> +     * Note: The parameter passed to XEN_DOMCTL_max_vcpus must match
>> the value
>> +     * passed to XEN_DOMCTL_createdomain.  This hypercall is in the
>> process of
>> +     * being removed (once the failure paths in domain_create() have
>> been
>> +     * improved), but is still required in the short term to
>> allocate the
>> +     * vcpus themselves.
>> +     */
>
> This comment might be more useful in the public header. This is
> usually the first place I would look for description of a domctl.

Ok - will do.

That said, it is very likely that the next person to read this will be
me when I finally remove it (hopefully in 4.12).

~Andrew


* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-29 15:03     ` Jan Beulich
@ 2018-08-31 10:33       ` Wei Liu
  2018-08-31 10:42         ` Jan Beulich
  0 siblings, 1 reply; 63+ messages in thread
From: Wei Liu @ 2018-08-31 10:33 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Tim Deegan, Xen-devel, Julien Grall

On Wed, Aug 29, 2018 at 09:03:36AM -0600, Jan Beulich wrote:
> >>> On 29.08.18 at 16:40, <andrew.cooper3@citrix.com> wrote:
> > For ARM, the call to arch_domain_create() needs to have completed before
> > domain_max_vcpus() will return the correct upper bound.
> > 
> > For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
> > of dom0->vcpu.
> > 
> > With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
> > can be constructed suitably for the domain, rather than for the worst-case
> > setting.
> > 
> > Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
> > ARM's two implementations of vgic_max_vcpus() no longer need to work around the
> > out-of-order call.
> > 
> > From this point on, d->max_vcpus and d->vcpus[] are valid for any domain which
> > can be looked up by domid.
> > 
> > The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
> > max != d->max_vcpus, which does match the older semantics (not that it is
> > obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
> > this point the hypercall still needs making to allocate each vcpu.
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> in principle, but as said before the lack of renaming of the domctl
> makes my ack dependent upon some other REST maintainer
> agreeing with your position there (the more that you've added
> the comment to the implementation rather than the public header).

I don't see much value in renaming something that is due to be removed
soon.

Wei.


* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-31 10:33       ` Wei Liu
@ 2018-08-31 10:42         ` Jan Beulich
  2018-08-31 10:57           ` Julien Grall
  2018-08-31 10:58           ` Andrew Cooper
  0 siblings, 2 replies; 63+ messages in thread
From: Jan Beulich @ 2018-08-31 10:42 UTC (permalink / raw)
  To: Wei Liu
  Cc: Stefano Stabellini, George Dunlap, Andrew Cooper, Tim Deegan,
	Xen-devel, Julien Grall

>>> On 31.08.18 at 12:33, <wei.liu2@citrix.com> wrote:
> On Wed, Aug 29, 2018 at 09:03:36AM -0600, Jan Beulich wrote:
>> >>> On 29.08.18 at 16:40, <andrew.cooper3@citrix.com> wrote:
>> > For ARM, the call to arch_domain_create() needs to have completed before
>> > domain_max_vcpus() will return the correct upper bound.
>> > 
>> > For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
>> > of dom0->vcpu.
>> > 
>> > With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
>> > can be constructed suitably for the domain, rather than for the worst-case
>> > setting.
>> > 
>> > Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
>> > ARM's two implementations of vgic_max_vcpus() no longer need to work around the
>> > out-of-order call.
>> > 
>> > From this point on, d->max_vcpus and d->vcpus[] are valid for any domain which
>> > can be looked up by domid.
>> > 
>> > The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
>> > max != d->max_vcpus, which does match the older semantics (not that it is
>> > obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
>> > this point the hypercall still needs making to allocate each vcpu.
>> > 
>> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> 
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>> in principle, but as said before the lack of renaming of the domctl
>> makes my ack dependent upon some other REST maintainer
>> agreeing with your position there (the more that you've added
>> the comment to the implementation rather than the public header).
> 
> I don't see much value in renaming something that is due to be removed
> soon.

I would agree if "soon" meant "soon" for sure. But we all know how things
get delayed. What I'd like to avoid is shipping 4.12 with a mis-named
domctl.

Jan




* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-31 10:42         ` Jan Beulich
@ 2018-08-31 10:57           ` Julien Grall
  2018-08-31 11:00             ` Juergen Gross
  2018-08-31 10:58           ` Andrew Cooper
  1 sibling, 1 reply; 63+ messages in thread
From: Julien Grall @ 2018-08-31 10:57 UTC (permalink / raw)
  To: Jan Beulich, Wei Liu
  Cc: Juergen Gross, Stefano Stabellini, George Dunlap, Andrew Cooper,
	Tim Deegan, Xen-devel

(+ Juergen)

On 08/31/2018 11:42 AM, Jan Beulich wrote:
>>>> On 31.08.18 at 12:33, <wei.liu2@citrix.com> wrote:
>> On Wed, Aug 29, 2018 at 09:03:36AM -0600, Jan Beulich wrote:
>>>>>> On 29.08.18 at 16:40, <andrew.cooper3@citrix.com> wrote:
>>>> For ARM, the call to arch_domain_create() needs to have completed before
>>>> domain_max_vcpus() will return the correct upper bound.
>>>>
>>>> For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
>>>> of dom0->vcpu.
>>>>
>>>> With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
>>>> can be constructed suitably for the domain, rather than for the worst-case
>>>> setting.
>>>>
>>>> Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
>>>> ARM's two implementations of vgic_max_vcpus() no longer need to work
>>>> out-of-order call.
>>>>
>>>>  From this point on, d->max_vcpus and d->vcpus[] are valid for any domain which
>>>> can be looked up by domid.
>>>>
>>>> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
>>>> max != d->max_vcpus, which does match the older semantics (not that it is
>>>> obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
>>>> this point the hypercall still needs making to allocate each vcpu.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>> in principle, but as said before the lack of renaming of the domctl
>>> makes my ack dependent upon some other REST maintainer
>>> agreeing with your position there (the more that you've added
>>> the comment to the implementation rather than the public header).
>>
>> I don't see much value in renaming something that is due to be removed
>> soon.
> 
> I would agree if "soon" meant "soon" for sure. But we all know how things
> get delayed. What I'd like to avoid is shipping 4.12 with a mis-named
> domctl.

But that would be a waste of our time today if the DOMCTL is actually 
removed by Xen 4.12.

Can we delay the renaming until 4.12 freeze? If the removal does not 
make it, then we can discuss whether we want to rename the DOMCTL.

I guess Juergen could track and remind us around the code freeze?

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-31 10:42         ` Jan Beulich
  2018-08-31 10:57           ` Julien Grall
@ 2018-08-31 10:58           ` Andrew Cooper
  1 sibling, 0 replies; 63+ messages in thread
From: Andrew Cooper @ 2018-08-31 10:58 UTC (permalink / raw)
  To: Jan Beulich, Wei Liu
  Cc: Stefano Stabellini, George Dunlap, Tim Deegan, Xen-devel, Julien Grall

On 31/08/18 11:42, Jan Beulich wrote:
>>>> On 31.08.18 at 12:33, <wei.liu2@citrix.com> wrote:
>> On Wed, Aug 29, 2018 at 09:03:36AM -0600, Jan Beulich wrote:
>>>>>> On 29.08.18 at 16:40, <andrew.cooper3@citrix.com> wrote:
>>>> For ARM, the call to arch_domain_create() needs to have completed before
>>>> domain_max_vcpus() will return the correct upper bound.
>>>>
>>>> For each arch's dom0's, drop the temporary max_vcpus parameter, and allocation
>>>> of dom0->vcpu.
>>>>
>>>> With d->max_vcpus now correctly configured before evtchn_init(), the poll mask
>>>> can be constructed suitably for the domain, rather than for the worst-case
>>>> setting.
>>>>
>>>> Due to the evtchn_init() fixes, it no longer calls domain_max_vcpus(), and
>>>> ARM's two implementations of vgic_max_vcpus() no longer need to work
>>>> out-of-order call.
>>>>
>>>> From this point on, d->max_vcpus and d->vcpus[] are valid for any domain which
>>>> can be looked up by domid.
>>>>
>>>> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call attempt with
>>>> max != d->max_vcpus, which does match the older semantics (not that it is
>>>> obvious from the code).  The logic to allocate d->vcpu[] is dropped, but at
>>>> this point the hypercall still needs making to allocate each vcpu.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>> in principle, but as said before the lack of renaming of the domctl
>>> makes my ack dependent upon some other REST maintainer
>>> agreeing with your position there (the more that you've added
>>> the comment to the implementation rather than the public header).
>> I don't see much value in renaming something that is due to be removed
>> soon.
> I would agree if "soon" meant "soon" for sure. But we all know how things
> get delayed. What I'd like to avoid is shipping 4.12 with a mis-named
> domctl.

I do intend to get this fixed within the 4.12 timeframe.

However, irrespective of the timeframe, this isn't a hypercall which
anyone is realistically going to look at.  If you want to get pedantic
about naming, it should have been named set_max_vcpus from the outset.

This is one example where the effort required to adjust the
inconsistency completely dwarfs the outcome.

~Andrew


* Re: [PATCH v3 12/12] xen/domain: Allocate d->vcpu[] in domain_create()
  2018-08-31 10:57           ` Julien Grall
@ 2018-08-31 11:00             ` Juergen Gross
  0 siblings, 0 replies; 63+ messages in thread
From: Juergen Gross @ 2018-08-31 11:00 UTC (permalink / raw)
  To: Julien Grall, Jan Beulich, Wei Liu
  Cc: Stefano Stabellini, George Dunlap, Andrew Cooper, Tim Deegan, Xen-devel

On 31/08/18 12:57, Julien Grall wrote:
> (+ Juergen)
> 
> On 08/31/2018 11:42 AM, Jan Beulich wrote:
>>>>> On 31.08.18 at 12:33, <wei.liu2@citrix.com> wrote:
>>> On Wed, Aug 29, 2018 at 09:03:36AM -0600, Jan Beulich wrote:
>>>>>>> On 29.08.18 at 16:40, <andrew.cooper3@citrix.com> wrote:
>>>>> For ARM, the call to arch_domain_create() needs to have completed
>>>>> before
>>>>> domain_max_vcpus() will return the correct upper bound.
>>>>>
>>>>> For each arch's dom0's, drop the temporary max_vcpus parameter, and
>>>>> allocation
>>>>> of dom0->vcpu.
>>>>>
>>>>> With d->max_vcpus now correctly configured before evtchn_init(),
>>>>> the poll mask
>>>>> can be constructed suitably for the domain, rather than for the
>>>>> worst-case
>>>>> setting.
>>>>>
>>>>> Due to the evtchn_init() fixes, it no longer calls
>>>>> domain_max_vcpus(), and
>>>>> ARM's two implementations of vgic_max_vcpus() no longer need to work
>>>>> around the
>>>>> out-of-order call.
>>>>>
>>>>>  From this point on, d->max_vcpus and d->vcpus[] are valid for any
>>>>> domain which
>>>>> can be looked up by domid.
>>>>>
>>>>> The XEN_DOMCTL_max_vcpus hypercall is modified to reject any call
>>>>> attempt with
>>>>> max != d->max_vcpus, which does match the older semantics (not that
>>>>> it is
>>>>> obvious from the code).  The logic to allocate d->vcpu[] is
>>>>> dropped, but at
>>>>> this point the hypercall still needs making to allocate each vcpu.
>>>>>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>
>>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>> in principle, but as said before the lack of renaming of the domctl
>>>> makes my ack dependent upon some other REST maintainer
>>>> agreeing with your position there (the more that you've added
>>>> the comment to the implementation rather than the public header).
>>>
>>> I don't see much value in renaming something that is due to be removed
>>> soon.
>>
>> I would agree if "soon" meant "soon" for sure. But we all know how things
>> get delayed. What I'd like to avoid is shipping 4.12 with a mis-named
>> domctl.
> 
> But that would be a waste of our time today if the DOMCTL is actually
> removed by Xen 4.12.
> 
> Can we delay the renaming until 4.12 freeze? If the removal does not
> make it, then we can discuss whether we want to rename the DOMCTL.
> 
> I guess the Juergen could track and remind us around the code freeze?

I'm fine with that.


Juergen


* Rats nest with domain pirq initialisation
  2018-08-13 10:01 ` [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
  2018-08-14 14:37   ` Roger Pau Monné
  2018-08-15 12:56   ` Jan Beulich
@ 2018-09-04 18:44   ` Andrew Cooper
  2018-09-05  7:24     ` Jan Beulich
  2 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-09-04 18:44 UTC (permalink / raw)
  To: Xen-devel; +Cc: Wei Liu, Jan Beulich, Roger Pau Monne

On 13/08/18 11:01, Andrew Cooper wrote:
> This is in preparation to set up d->max_cpus and d->vcpu[] in domain_create(),
> and allow later parts of domain construction to have access to the values.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/common/domain.c | 34 +++++++++++++++++-----------------
>  1 file changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index be51426..0c44f27 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -322,6 +322,23 @@ struct domain *domain_create(domid_t domid,
>          else
>              d->guest_type = guest_type_pv;
>  
> +        if ( !is_hardware_domain(d) )
> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
> +        else
> +            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
> +                                           : arch_hwdom_irqs(domid);
> +        if ( d->nr_pirqs > nr_irqs )
> +            d->nr_pirqs = nr_irqs;
> +
> +        radix_tree_init(&d->pirq_tree);
> +    }
> +
> +    if ( (err = arch_domain_create(d, config)) != 0 )
> +        goto fail;
> +    init_status |= INIT_arch;
> +
> +    if ( !is_idle_domain(d) )
> +    {
>          watchdog_domain_init(d);
>          init_status |= INIT_watchdog;
>  
> @@ -352,16 +369,6 @@ struct domain *domain_create(domid_t domid,

Between these two hunks is:

        d->iomem_caps = rangeset_new(d, "I/O Memory",
RANGESETF_prettyprint_hex);
        d->irq_caps   = rangeset_new(d, "Interrupts", 0);

which is important, because it turns out that x86's
arch_domain_destroy() depends on d->irq_caps already being initialised.

The path which blows up is:

arch_domain_destroy()
  free_domain_pirqs()
    unmap_domain_pirq()
      irq_deny_access()
        rangeset_remove_singleton((d)->irq_caps, i)

Unlike the boolean-nature rangeset_contains_*() helpers, I don't think
it is reasonable to make rangeset_remove_*() tolerate a NULL rangeset.

The behaviour of automatically revoking irq access is dubious at best. 
It is asymmetric with XEN_DOMCTL_irq_permission, and a caller would
reasonably expect not to have to re-grant identical permissions as the
irq is mapped/unmapped.  Does anyone know why we have this suspect
behaviour in the first place?

One way or another, this path needs to become idempotent, but simply
throwing some NULL pointer checks into unmap_domain_pirq() doesn't feel
like the right thing to do.


A separate mess is that we appear to allocate full pirq structures for
all legacy irqs for every single domain, in init_domain_irq_mapping(). 
At the very least, this is wasteful as very few domains get access to
real hardware in the first place.

The other thing I notice is that alloc_pirq_struct() is downright
dangerous, as it deliberately tries to allocate half a struct pirq for
the !hvm case.  I can only assume this is a space saving measure, but
there is absolutely no help in the commit message which introduced it
(c/s c24536b636f).

~Andrew


* Re: Rats nest with domain pirq initialisation
  2018-09-04 18:44   ` Rats nest with domain pirq initialisation Andrew Cooper
@ 2018-09-05  7:24     ` Jan Beulich
  2018-09-05 11:38       ` Jan Beulich
  2018-09-05 12:04       ` Andrew Cooper
  0 siblings, 2 replies; 63+ messages in thread
From: Jan Beulich @ 2018-09-05  7:24 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 04.09.18 at 20:44, <andrew.cooper3@citrix.com> wrote:
> On 13/08/18 11:01, Andrew Cooper wrote:
>> This is in preparation to set up d->max_cpus and d->vcpu[] in domain_create(),
>> and allow later parts of domain construction to have access to the values.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> ---
>>  xen/common/domain.c | 34 +++++++++++++++++-----------------
>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index be51426..0c44f27 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -322,6 +322,23 @@ struct domain *domain_create(domid_t domid,
>>          else
>>              d->guest_type = guest_type_pv;
>>  
>> +        if ( !is_hardware_domain(d) )
>> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>> +        else
>> +            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
>> +                                           : arch_hwdom_irqs(domid);
>> +        if ( d->nr_pirqs > nr_irqs )
>> +            d->nr_pirqs = nr_irqs;
>> +
>> +        radix_tree_init(&d->pirq_tree);
>> +    }
>> +
>> +    if ( (err = arch_domain_create(d, config)) != 0 )
>> +        goto fail;
>> +    init_status |= INIT_arch;
>> +
>> +    if ( !is_idle_domain(d) )
>> +    {
>>          watchdog_domain_init(d);
>>          init_status |= INIT_watchdog;
>>  
>> @@ -352,16 +369,6 @@ struct domain *domain_create(domid_t domid,
> 
> Between these two hunks is:
> 
>         d->iomem_caps = rangeset_new(d, "I/O Memory", RANGESETF_prettyprint_hex);
>         d->irq_caps   = rangeset_new(d, "Interrupts", 0);
> 
> which is important, because it turns out that x86's
> arch_domain_destroy() depends on d->irq_caps already being initialised.

Moving this up looks reasonable to me. "Simple" initialization can
certainly be done early (i.e. before arch_domain_create()), don't
you think?

> The path which blows up is:
> 
> arch_domain_destroy()
>   free_domain_pirqs()
>     unmap_domain_pirq()
>       irq_deny_access()
>         rangeset_remove_singleton((d)->irq_caps, i)

But what IRQ do we find to unmap here? There can't be any that have
been mapped, when ->irq_caps is still NULL. IOW I don't currently see
how domain_pirq_to_irq() would legitimately return a positive value at
this point in time, yet that's what guards the calls to unmap_domain_pirq().

> Unlike the boolean-nature rangeset_contains_*() helpers, I don't think
> it is reasonable to make rangeset_remove_*() tolerate a NULL rangeset.

+1

> The behaviour of automatically revoking irq access is dubious at best. 
> It is asymmetric with the XEN_DOMCTL_irq_permission, and a caller would
> reasonably expect not to have to re-grant identical permissions as the
> irq is mapped/unmapped.  Does anyone know why we have this suspect
> behaviour in the first place?

Wasn't it that it was symmetric originally, and the grant/map side has been
split perhaps a couple of years ago? If so, the unmap side splitting was
perhaps simply missed?

> One way or another, this path needs to become idempotent, but simply
> throwing some NULL pointer checks into unmap_domain_pirq() doesn't feel
> like the right thing to do.

As per above - I think either free_domain_pirqs() should gain a single
such NULL check, or domain_pirq_to_irq() should be made to never
return positive values prior to ->irq_caps having been set up.

> A separate mess is that we appear to allocate full pirq structures for
> all legacy irqs for every single domain, in init_domain_irq_mapping(). 
> At the very least, this is wasteful as very few domains get access to
> real hardware in the first place.

I vaguely recall there was some hope to get rid of this, but I don't
recall the prereqs necessary.

> The other thing I notice is that alloc_pirq_struct() is downright
> dangerous, as it deliberately tries to allocate half a struct pirq for
> the !hvm case.  I can only assume this is a space saving measure, but
> there is absolutely no help in the commit message which introduced it
> (c/s c24536b636f).

Space saving, yes. Just like it is forbidden to access d->arch.hvm
for a PV domain d, accessing pirq->arch.hvm is forbidden for a
PV domain's pirq. What point is there in allocating the space then?
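The half-structure allocation under discussion can be sketched like this. The type and names below (struct toy_pirq, alloc_toy_pirq()) are hypothetical stand-ins, not Xen's actual struct pirq or alloc_pirq_struct(); the sketch only illustrates the offsetof()-based truncation and why touching the HVM tail of a PV pirq is out of bounds:

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-in for struct pirq; the real layout differs. */
struct toy_pirq {
    int pirq;
    int masked;
    struct {
        struct {
            int emuirq;   /* HVM-only tail of the structure */
        } hvm;
    } arch;
};

/* For the !hvm case, allocate only the fields before the HVM tail.
 * Touching ->arch.hvm on such a pirq then accesses memory the
 * allocation does not own -- the danger discussed in this thread. */
static struct toy_pirq *alloc_toy_pirq(int is_hvm)
{
    size_t sz = is_hvm ? sizeof(struct toy_pirq)
                       : offsetof(struct toy_pirq, arch.hvm);
    return calloc(1, sz);
}
```

Nothing stops the compiler from accepting pirq->arch.hvm accesses on the truncated allocation; only discipline (or poisoning) catches misuse.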

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: Rats nest with domain pirq initialisation
  2018-09-05  7:24     ` Jan Beulich
@ 2018-09-05 11:38       ` Jan Beulich
  2018-09-05 12:04       ` Andrew Cooper
  1 sibling, 0 replies; 63+ messages in thread
From: Jan Beulich @ 2018-09-05 11:38 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 05.09.18 at 09:24, <JBeulich@suse.com> wrote:
>>>> On 04.09.18 at 20:44, <andrew.cooper3@citrix.com> wrote:
>> Unlike the boolean-nature rangeset_contains_*() helpers, I don't think
>> it is reasonable to make rangeset_remove_*() tolerate a NULL rangeset.
> 
> +1

Hmm, upon further thought: rangeset_remove_*() is a no-op on an
empty rangeset. Making it bail on a NULL one would therefore not
seem so unreasonable after all.
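As a sketch of what that would look like: the toy single-value rangeset and function name below are hypothetical (Xen's real struct rangeset is a locked list of ranges), but the NULL-as-empty behaviour is the one being weighed:

```c
#include <stddef.h>

/* Toy rangeset holding at most one singleton value; purely
 * illustrative, not Xen's implementation. */
struct toy_rangeset {
    unsigned long value;
    int populated;
};

/* Removing from an empty rangeset is already a no-op; treating a
 * NULL rangeset as "empty" extends that in the obvious way, so a
 * destroy path running before rangeset_new() does no harm. */
static int toy_rangeset_remove_singleton(struct toy_rangeset *r,
                                         unsigned long v)
{
    if ( r == NULL )      /* never initialised: nothing to remove */
        return 0;

    if ( r->populated && r->value == v )
        r->populated = 0;

    return 0;
}
```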

Jan




* Re: Rats nest with domain pirq initialisation
  2018-09-05  7:24     ` Jan Beulich
  2018-09-05 11:38       ` Jan Beulich
@ 2018-09-05 12:04       ` Andrew Cooper
  2018-09-05 12:25         ` Jan Beulich
  1 sibling, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-09-05 12:04 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

On 05/09/18 08:24, Jan Beulich wrote:
>>>> On 04.09.18 at 20:44, <andrew.cooper3@citrix.com> wrote:
>> On 13/08/18 11:01, Andrew Cooper wrote:
>>> This is in preparation to set up d->max_cpus and d->vcpu[] in domain_create(),
>>> and allow later parts of domain construction to have access to the values.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien.grall@arm.com>
>>> CC: Wei Liu <wei.liu2@citrix.com>
>>> ---
>>>  xen/common/domain.c | 34 +++++++++++++++++-----------------
>>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>> index be51426..0c44f27 100644
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -322,6 +322,23 @@ struct domain *domain_create(domid_t domid,
>>>          else
>>>              d->guest_type = guest_type_pv;
>>>  
>>> +        if ( !is_hardware_domain(d) )
>>> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>>> +        else
>>> +            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
>>> +                                           : arch_hwdom_irqs(domid);
>>> +        if ( d->nr_pirqs > nr_irqs )
>>> +            d->nr_pirqs = nr_irqs;
>>> +
>>> +        radix_tree_init(&d->pirq_tree);
>>> +    }
>>> +
>>> +    if ( (err = arch_domain_create(d, config)) != 0 )
>>> +        goto fail;
>>> +    init_status |= INIT_arch;
>>> +
>>> +    if ( !is_idle_domain(d) )
>>> +    {
>>>          watchdog_domain_init(d);
>>>          init_status |= INIT_watchdog;
>>>  
>>> @@ -352,16 +369,6 @@ struct domain *domain_create(domid_t domid,
>> Between these two hunks is:
>>
>>         d->iomem_caps = rangeset_new(d, "I/O Memory", RANGESETF_prettyprint_hex);
>>         d->irq_caps   = rangeset_new(d, "Interrupts", 0);
>>
>> which is important, because it turns out that x86's
>> arch_domain_destroy() depends on d->irq_caps already being initialised.
> Moving this up looks reasonable to me. "Simple" initialization can
> certainly be done early (i.e. before arch_domain_create()), don't
> you think?

No - that defeats the purpose of making the destroy path idempotent. 
For us to remove the max_vcpus hypercall, _domain_destroy() must be
capable of correctly cleaning up a domain from any state of
initialisation, including if the relevant init calls haven't been made yet.

These rangeset_new() calls cannot move earlier than the first action
which might fail (which is the XSM init call to get the security label
correct).

>
>> The path which blows up is:
>>
>> arch_domain_destroy()
>>   free_domain_pirqs()
>>     unmap_domain_pirq()
>>       irq_deny_access()
>>         rangeset_remove_singleton((d)->irq_caps, i)
> But what IRQ do we find to unmap here? There can't be any that have
> been mapped, when ->irq_caps is still NULL. IOW I don't currently see
> how domain_pirq_to_irq() would legitimately return a positive value at
> this point in time, yet that's what guards the calls to unmap_domain_pirq().

It is pirq 2 which explodes, the first of the redundant pirq
structures allocated for legacy routing.

I'm not sure I understand this code well enough to comment on why
domain_pirq_to_irq() returns a positive value at this point, but I'm
going to go out on a limb and suggest it might be related to our
unnecessary(?) preallocation.

>
>> Unlike the boolean-nature rangeset_contains_*() helpers, I don't think
>> it is reasonable to make rangeset_remove_*() tolerate a NULL rangeset.
> +1
>
>> The behaviour of automatically revoking irq access is dubious at best. 
>> It is asymmetric with the XEN_DOMCTL_irq_permission, and a caller would
>> reasonably expect not to have to re-grant identical permissions as the
>> irq is mapped/unmapped.  Does anyone know why we have this suspect
>> behaviour in the first place?
> Wasn't it that it was symmetric originally, and the grant/map side has been
> split perhaps a couple of years ago? If so, the unmap side splitting was
> perhaps simply missed?

Perhaps?  I don't know the answers to these.

>
>> One way or another, this path needs to become idempotent, but simply
>> throwing some NULL pointer checks into unmap_domain_pirq() doesn't feel
>> like the right thing to do.
> As per above - I think either free_domain_pirqs() should gain a single
> such NULL check, or domain_pirq_to_irq() should be made sure not to
> return positive values prior to ->irq_caps having been set up.
>
>> A separate mess is that we appear to allocate full pirq structures for
>> all legacy irqs for every single domain, in init_domain_irq_mapping(). 
>> At the very least, this is wasteful as very few domains get access to
>> real hardware in the first place.
> I vaguely recall there was some hope to get rid of this, but I don't
> recall the prereqs necessary.

I'm beginning to regret looking at this code.  Whatever is going on, it
looks like it is far more complicated than it needs to be.

It would help if there were even some comments...

>> The other thing I notice is that alloc_pirq_struct() is downright
>> dangerous, as it deliberately tries to allocate half a struct pirq for
>> the !hvm case.  I can only assume this is a space saving measure, but
>> there is absolutely no help in the commit message which introduced it
>> (c/s c24536b636f).
> Space saving, yes. Just like it is forbidden to access d->arch.hvm
> for a PV domain d, accessing pirq->arch.hvm is forbidden for a
> PV domain's pirq. What point is there in allocating the space then?

Because when the code inevitably gets things wrong, you only
read/corrupt your own pirq structure, rather than whichever object
happens to be allocated adjacently.  Most likely, this will be tlsf
metadata.

~Andrew


* Re: Rats nest with domain pirq initialisation
  2018-09-05 12:04       ` Andrew Cooper
@ 2018-09-05 12:25         ` Jan Beulich
  2018-09-05 12:39           ` Andrew Cooper
  0 siblings, 1 reply; 63+ messages in thread
From: Jan Beulich @ 2018-09-05 12:25 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 05.09.18 at 14:04, <andrew.cooper3@citrix.com> wrote:
> On 05/09/18 08:24, Jan Beulich wrote:
>>>>> On 04.09.18 at 20:44, <andrew.cooper3@citrix.com> wrote:
>>> On 13/08/18 11:01, Andrew Cooper wrote:
>>>> This is in preparation to set up d->max_cpus and d->vcpu[] in domain_create(),
>>>> and allow later parts of domain construction to have access to the values.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> ---
>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>> CC: Wei Liu <wei.liu2@citrix.com>
>>>> ---
>>>>  xen/common/domain.c | 34 +++++++++++++++++-----------------
>>>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>>>
>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>> index be51426..0c44f27 100644
>>>> --- a/xen/common/domain.c
>>>> +++ b/xen/common/domain.c
>>>> @@ -322,6 +322,23 @@ struct domain *domain_create(domid_t domid,
>>>>          else
>>>>              d->guest_type = guest_type_pv;
>>>>  
>>>> +        if ( !is_hardware_domain(d) )
>>>> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>>>> +        else
>>>> +            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
>>>> +                                           : arch_hwdom_irqs(domid);
>>>> +        if ( d->nr_pirqs > nr_irqs )
>>>> +            d->nr_pirqs = nr_irqs;
>>>> +
>>>> +        radix_tree_init(&d->pirq_tree);
>>>> +    }
>>>> +
>>>> +    if ( (err = arch_domain_create(d, config)) != 0 )
>>>> +        goto fail;
>>>> +    init_status |= INIT_arch;
>>>> +
>>>> +    if ( !is_idle_domain(d) )
>>>> +    {
>>>>          watchdog_domain_init(d);
>>>>          init_status |= INIT_watchdog;
>>>>  
>>>> @@ -352,16 +369,6 @@ struct domain *domain_create(domid_t domid,
>>> Between these two hunks is:
>>>
>>>         d->iomem_caps = rangeset_new(d, "I/O Memory", RANGESETF_prettyprint_hex);
>>>         d->irq_caps   = rangeset_new(d, "Interrupts", 0);
>>>
>>> which is important, because it turns out that x86's
>>> arch_domain_destroy() depends on d->irq_caps already being initialised.
>> Moving this up looks reasonable to me. "Simple" initialization can
>> certainly be done early (i.e. before arch_domain_create()), don't
>> you think?
> 
> No - that defeats the purpose of making the destroy path idempotent. 
> For us to remove the max_vcpus hypercall, _domain_destroy() must be
> capable of correctly cleaning up a domain from any state of
> initialisation, including if the relevant init calls haven't been made yet.

I agree up to here.

> These rangeset_new() calls cannot move earlier than the first action
> which might fail (which is the XSM init call to get the security label
> correct).

But I must be overlooking something crucial here: If _domain_destroy()
was idempotent, how does it matter at what point the rangesets get
initialized?

>>> The path which blows up is:
>>>
>>> arch_domain_destroy()
>>>   free_domain_pirqs()
>>>     unmap_domain_pirq()
>>>       irq_deny_access()
>>>         rangeset_remove_singleton((d)->irq_caps, i)
>> But what IRQ do we find to unmap here? There can't be any that have
>> been mapped, when ->irq_caps is still NULL. IOW I don't currently see
>> how domain_pirq_to_irq() would legitimately return a positive value at
>> this point in time, yet that's what guards the calls to unmap_domain_pirq().
> 
> It is pirq 2 which explodes, which is the first of the redundant pirq
> structures allocated for legacy routing.
> 
> I'm not sure I understand this code well enough to comment on why
> domain_pirq_to_irq() returns a positive value at this point, but I'm
> going to go out on a limb and suggest it might be related to our
> unnecessary(?) preallocation.

I've meanwhile considered this as the reason, too. And iirc the
pre-allocation is because guests (including Dom0) bypass some of
the setup they would do for non-legacy IRQs. This may have been
just a XenoLinux (mis)behavior, but even then I'm not convinced
we could easily alter things.

Jan



* Re: Rats nest with domain pirq initialisation
  2018-09-05 12:25         ` Jan Beulich
@ 2018-09-05 12:39           ` Andrew Cooper
  2018-09-05 15:44             ` Roger Pau Monné
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2018-09-05 12:39 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

On 05/09/18 13:25, Jan Beulich wrote:
>>>> On 05.09.18 at 14:04, <andrew.cooper3@citrix.com> wrote:
>> On 05/09/18 08:24, Jan Beulich wrote:
>>>>>> On 04.09.18 at 20:44, <andrew.cooper3@citrix.com> wrote:
>>>> On 13/08/18 11:01, Andrew Cooper wrote:
>>>>> This is in preparation to set up d->max_cpus and d->vcpu[] in domain_create(),
>>>>> and allow later parts of domain construction to have access to the values.
>>>>>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> ---
>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>> CC: Wei Liu <wei.liu2@citrix.com>
>>>>> ---
>>>>>  xen/common/domain.c | 34 +++++++++++++++++-----------------
>>>>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>>>>
>>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>>> index be51426..0c44f27 100644
>>>>> --- a/xen/common/domain.c
>>>>> +++ b/xen/common/domain.c
>>>>> @@ -322,6 +322,23 @@ struct domain *domain_create(domid_t domid,
>>>>>          else
>>>>>              d->guest_type = guest_type_pv;
>>>>>  
>>>>> +        if ( !is_hardware_domain(d) )
>>>>> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>>>>> +        else
>>>>> +            d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
>>>>> +                                           : arch_hwdom_irqs(domid);
>>>>> +        if ( d->nr_pirqs > nr_irqs )
>>>>> +            d->nr_pirqs = nr_irqs;
>>>>> +
>>>>> +        radix_tree_init(&d->pirq_tree);
>>>>> +    }
>>>>> +
>>>>> +    if ( (err = arch_domain_create(d, config)) != 0 )
>>>>> +        goto fail;
>>>>> +    init_status |= INIT_arch;
>>>>> +
>>>>> +    if ( !is_idle_domain(d) )
>>>>> +    {
>>>>>          watchdog_domain_init(d);
>>>>>          init_status |= INIT_watchdog;
>>>>>  
>>>>> @@ -352,16 +369,6 @@ struct domain *domain_create(domid_t domid,
>>>> Between these two hunks is:
>>>>
>>>>         d->iomem_caps = rangeset_new(d, "I/O Memory", RANGESETF_prettyprint_hex);
>>>>         d->irq_caps   = rangeset_new(d, "Interrupts", 0);
>>>>
>>>> which is important, because it turns out that x86's
>>>> arch_domain_destroy() depends on d->irq_caps already being initialised.
>>> Moving this up looks reasonable to me. "Simple" initialization can
>>> certainly be done early (i.e. before arch_domain_create()), don't
>>> you think?
>> No - that defeats the purpose of making the destroy path idempotent. 
>> For us to remove the max_vcpus hypercall, _domain_destroy() must be
>> capable of correctly cleaning up a domain from any state of
>> initialisation, including if the relevant init calls haven't been made yet.
> I agree up to here.
>
>> These rangeset_new() calls cannot move earlier than the first action
>> which might fail (which is the XSM init call to get the security label
>> correct).
> But I must be overlooking something crucial here: If _domain_destroy()
> was idempotent, how does it matter at what point the rangesets get
> initialized?

_domain_destroy() is idempotent (for the very small quantity of state it
currently looks after).  The problem is that arch_domain_destroy() is
not idempotent, and needs to become so; moving the rangeset_new()
calls as you originally suggested is not a fix for
arch_domain_destroy()'s idempotency bug.
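The idempotency property being argued for can be sketched as follows. The types and names are toy stand-ins (the real _domain_destroy()/arch_domain_destroy() split is more involved): each teardown step checks whether its state exists, releases it, and clears the pointer, so the destroy path is safe from any partially-constructed state and safe to run twice:

```c
#include <stdlib.h>

/* Toy types; illustrative only, not Xen's struct domain. */
struct toy_domain {
    int *irq_caps;   /* stands in for d->irq_caps */
    int *pirqs;      /* stands in for the legacy pirq structures */
};

/* Idempotent teardown: free() tolerates NULL, and clearing each
 * pointer after freeing makes a repeated call (or a call on a
 * partially constructed domain) a harmless no-op. */
static void toy_domain_destroy(struct toy_domain *d)
{
    free(d->pirqs);
    d->pirqs = NULL;

    free(d->irq_caps);
    d->irq_caps = NULL;
}
```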

>
>>>> The path which blows up is:
>>>>
>>>> arch_domain_destroy()
>>>>   free_domain_pirqs()
>>>>     unmap_domain_pirq()
>>>>       irq_deny_access()
>>>>         rangeset_remove_singleton((d)->irq_caps, i)
>>> But what IRQ do we find to unmap here? There can't be any that have
>>> been mapped, when ->irq_caps is still NULL. IOW I don't currently see
>>> how domain_pirq_to_irq() would legitimately return a positive value at
>>> this point in time, yet that's what guards the calls to unmap_domain_pirq().
>> It is pirq 2 which explodes, which is the first of the redundant pirq
>> structures allocated for legacy routing.
>>
>> I'm not sure I understand this code well enough to comment on why
>> domain_pirq_to_irq() returns a positive value at this point, but I'm
>> going to go out on a limb and suggest it might be related to our
>> unnecessary(?) preallocation.
> I've meanwhile considered this as the reason, too. And iirc the
> pre-allocation is because guests (including Dom0) bypass some of
> the setup they would do for non-legacy IRQs. This may have been
> just a XenoLinux (mis)behavior, but even then I'm not convinced
> we could easily alter things.

Bypass which setup?  One way or another they have to bind the irq before
it can be used, so I still don't see why any structure preallocation is
needed.  (Reservation of legacy irq numbers, perhaps.)

~Andrew


* Re: Rats nest with domain pirq initialisation
  2018-09-05 12:39           ` Andrew Cooper
@ 2018-09-05 15:44             ` Roger Pau Monné
  0 siblings, 0 replies; 63+ messages in thread
From: Roger Pau Monné @ 2018-09-05 15:44 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Wed, Sep 05, 2018 at 01:39:18PM +0100, Andrew Cooper wrote:
> On 05/09/18 13:25, Jan Beulich wrote:
> >>>> On 05.09.18 at 14:04, <andrew.cooper3@citrix.com> wrote:
> >> On 05/09/18 08:24, Jan Beulich wrote:
> >>>>>> On 04.09.18 at 20:44, <andrew.cooper3@citrix.com> wrote:
> >>>> The path which blows up is:
> >>>>
> >>>> arch_domain_destroy()
> >>>>   free_domain_pirqs()
> >>>>     unmap_domain_pirq()
> >>>>       irq_deny_access()
> >>>>         rangeset_remove_singleton((d)->irq_caps, i)
> >>> But what IRQ do we find to unmap here? There can't be any that have
> >>> been mapped, when ->irq_caps is still NULL. IOW I don't currently see
> >>> how domain_pirq_to_irq() would legitimately return a positive value at
> >>> this point in time, yet that's what guards the calls to unmap_domain_pirq().
> >> It is pirq 2 which explodes, which is the first of the redundant pirq
> >> structures allocated for legacy routing.
> >>
> >> I'm not sure I understand this code well enough to comment on why
> >> domain_pirq_to_irq() returns a positive value at this point, but I'm
> >> going to go out on a limb and suggest it might be related to our
> >> unnecessary(?) preallocation.
> > I've meanwhile considered this as the reason, too. And iirc the
> > pre-allocation is because guests (including Dom0) bypass some of
> > the setup they would do for non-legacy IRQs. This may have been
> > just a XenoLinux (mis)behavior, but even then I'm not convinced
> > we could easily alter things.
> 
> Bypass which setup?  One way or another they have to bind the irq before
> it can be used, so I still don't see why any structure preallocation is
> needed.  (Reservation of legacy irq numbers, perahps.)

For PIRQs you need to first allocate a PIRQ, then configure it and
bind the PIRQ to an event channel. I have no idea, but it wouldn't
seem that weird that old Dom0 kernels would assume that legacy IRQs
(<16) are already allocated, and just configure and bind them.

Last time I looked, when working on legacy PVH support for FreeBSD,
pvops Linux would correctly allocate legacy PIRQs before attempting to
bind them, but I haven't looked at classic XenoLinux Dom0 kernels.

Roger.


end of thread, other threads:[~2018-09-05 15:44 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-13 10:00 [PATCH v2 00/12] Improvements to domain creation Andrew Cooper
2018-08-13 10:00 ` [PATCH v2 01/12] tools/ocaml: Pass a full domctl_create_config into stub_xc_domain_create() Andrew Cooper
2018-08-13 10:00 ` [PATCH v2 02/12] tools: Rework xc_domain_create() to take a full xen_domctl_createdomain Andrew Cooper
2018-08-13 10:01 ` [PATCH v2 03/12] xen/domctl: Merge set_max_evtchn into createdomain Andrew Cooper
2018-08-14 13:58   ` Roger Pau Monné
2018-08-13 10:01 ` [PATCH v2 04/12] xen/evtchn: Pass max_evtchn_port into evtchn_init() Andrew Cooper
2018-08-14 14:07   ` Roger Pau Monné
2018-08-15 12:45   ` Jan Beulich
2018-08-15 12:57   ` Julien Grall
2018-08-13 10:01 ` [PATCH v2 05/12] tools: Pass grant table limits to XEN_DOMCTL_set_gnttab_limits Andrew Cooper
2018-08-13 10:01 ` [PATCH v2 06/12] xen/gnttab: Pass max_{grant, maptrack}_frames into grant_table_create() Andrew Cooper
2018-08-14 14:17   ` Roger Pau Monné
2018-08-15 12:51   ` Jan Beulich
2018-08-15 13:04   ` Julien Grall
2018-08-15 13:08     ` Andrew Cooper
2018-08-15 13:32       ` Julien Grall
2018-08-15 19:03         ` Andrew Cooper
2018-08-16  8:59           ` Julien Grall
2018-08-29  9:38   ` [PATCH v3 6/12] " Andrew Cooper
2018-08-30 19:40     ` Julien Grall
2018-08-13 10:01 ` [PATCH v2 07/12] xen/domctl: Remove XEN_DOMCTL_set_gnttab_limits Andrew Cooper
2018-08-14 14:19   ` Roger Pau Monné
2018-08-13 10:01 ` [PATCH v2 08/12] xen/gnttab: Fold grant_table_{create, set_limits}() into grant_table_init() Andrew Cooper
2018-08-14 14:31   ` Roger Pau Monné
2018-08-15 12:54   ` Jan Beulich
2018-08-13 10:01 ` [PATCH v2 09/12] xen/domain: Call arch_domain_create() as early as possible in domain_create() Andrew Cooper
2018-08-14 14:37   ` Roger Pau Monné
2018-08-15 12:56   ` Jan Beulich
2018-09-04 18:44   ` Rats nest with domain pirq initialisation Andrew Cooper
2018-09-05  7:24     ` Jan Beulich
2018-09-05 11:38       ` Jan Beulich
2018-09-05 12:04       ` Andrew Cooper
2018-09-05 12:25         ` Jan Beulich
2018-09-05 12:39           ` Andrew Cooper
2018-09-05 15:44             ` Roger Pau Monné
2018-08-13 10:01 ` [PATCH v2 10/12] tools: Pass max_vcpus to XEN_DOMCTL_createdomain Andrew Cooper
2018-08-13 10:01 ` [PATCH v2 11/12] xen/dom0: Arrange for dom0_cfg to contain the real max_vcpus value Andrew Cooper
2018-08-14 15:05   ` Roger Pau Monné
2018-08-15 12:59   ` Jan Beulich
2018-08-13 10:01 ` [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create() Andrew Cooper
2018-08-14 15:17   ` Roger Pau Monné
2018-08-15 13:17     ` Julien Grall
2018-08-15 13:50       ` Andrew Cooper
2018-08-15 13:52         ` Julien Grall
2018-08-15 13:56           ` Andrew Cooper
2018-08-15 13:11   ` Jan Beulich
2018-08-15 14:03     ` Andrew Cooper
2018-08-15 15:18       ` Jan Beulich
2018-08-29 10:36         ` Andrew Cooper
2018-08-29 12:10           ` Jan Beulich
2018-08-29 12:29             ` Andrew Cooper
2018-08-29 12:49               ` Jan Beulich
2018-08-29 14:40   ` [PATCH v3 " Andrew Cooper
2018-08-29 15:03     ` Jan Beulich
2018-08-31 10:33       ` Wei Liu
2018-08-31 10:42         ` Jan Beulich
2018-08-31 10:57           ` Julien Grall
2018-08-31 11:00             ` Juergen Gross
2018-08-31 10:58           ` Andrew Cooper
2018-08-30 19:46     ` Julien Grall
2018-08-30 20:04       ` Andrew Cooper
2018-08-14 13:12 ` [PATCH v2 00/12] Improvements to domain creation Christian Lindig
2018-08-14 13:34   ` Andrew Cooper
