* [PATCH 0/5] Boot time cpupools
@ 2022-02-15 10:15 Luca Fancellu
  2022-02-15 10:15 ` [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
                   ` (4 more replies)
  0 siblings, 5 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 10:15 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Wei Liu, Anthony PERARD, Juergen Gross, Dario Faggioli,
	George Dunlap, Andrew Cooper, Jan Beulich, Julien Grall,
	Stefano Stabellini, Volodymyr Babchuk, Bertrand Marquis,
	Roger Pau Monné

This series introduces a feature for Xen to create cpupools at boot time; the
feature is enabled by a Kconfig option that is disabled by default.
The boot time cpupool feature relies on the device tree to describe the cpu
pools.
The series also introduces the possibility to assign a dom0less guest to a
cpupool at boot time.

Here follows an example; Xen is built with CONFIG_BOOT_TIME_CPUPOOLS=y.

From the DT:

  [...]

  a72_0: cpu@0 {
    compatible = "arm,cortex-a72";
    reg = <0x0 0x0>;
    device_type = "cpu";
    [...]
  };

  a72_1: cpu@1 {
    compatible = "arm,cortex-a72";
    reg = <0x0 0x1>;
    device_type = "cpu";
    [...]
  };

  a53_0: cpu@100 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x100>;
    device_type = "cpu";
    [...]
  };

  a53_1: cpu@101 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x101>;
    device_type = "cpu";
    [...]
  };

  a53_2: cpu@102 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x102>;
    device_type = "cpu";
    [...]
  };

  a53_3: cpu@103 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x103>;
    device_type = "cpu";
    [...]
  };

  chosen {
    #size-cells = <0x1>;
    #address-cells = <0x1>;
    xen,dom0-bootargs = "...";
    xen,xen-bootargs = "...";

    cpupool0 {
      compatible = "xen,cpupool";
      cpupool-id = <0>;
      cpupool-cpus = <&a72_0 &a72_1>;
    };

    cp1: cpupool1 {
      compatible = "xen,cpupool";
      cpupool-id = <1>;
      cpupool-cpus = <&a53_0 &a53_1 &a53_2 &a53_3>;
      cpupool-sched = "null";
    };

    module@0 {
      reg = <0x80080000 0x1300000>;
      compatible = "multiboot,module";
    };

    domU1 {
      #size-cells = <0x1>;
      #address-cells = <0x1>;
      compatible = "xen,domain";
      cpus = <1>;
      memory = <0 0xC0000>;
      vpl011;
      domain-cpupool = <&cp1>;

      module@92000000 {
        compatible = "multiboot,kernel", "multiboot,module";
        reg = <0x92000000 0x1ffffff>;
        bootargs = "...";
      };
    };
  };

  [...]

The example DT instructs Xen to create two cpupools: the one with id 0 has
two physical cpus and the one with id 1 has four physical cpus. The second
cpupool uses the null scheduler and, from the /chosen node, we can see that
a dom0less guest will be started on that cpupool.

In this particular case Xen must boot with different types of cpus, so the
hmp-unsafe boot argument must be enabled.

Luca Fancellu (5):
  tools/cpupools: Give a name to unnamed cpupools
  xen/sched: create public function for cpupools creation
  xen/sched: retrieve scheduler id by name
  xen/cpupool: Create different cpupools at boot time
  arm/dom0less: assign dom0less guests to cpupools

 docs/misc/arm/device-tree/booting.txt  |   5 ++
 docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
 tools/helpers/xen-init-dom0.c          |  26 +++++-
 tools/libs/light/libxl_utils.c         |   3 +-
 xen/arch/arm/Kconfig                   |   9 ++
 xen/arch/arm/Makefile                  |   1 +
 xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
 xen/arch/arm/domain.c                  |   6 ++
 xen/arch/arm/domain_build.c            |   9 +-
 xen/arch/x86/domain.c                  |   6 ++
 xen/common/domain.c                    |   5 +-
 xen/common/sched/core.c                |  11 +++
 xen/common/sched/cpupool.c             |  30 ++++++-
 xen/include/public/arch-arm.h          |   2 +
 xen/include/public/domctl.h            |   2 +-
 xen/include/xen/domain.h               |   3 +
 xen/include/xen/sched.h                |  39 ++++++++
 17 files changed, 386 insertions(+), 7 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/arch/arm/cpupool.c

-- 
2.17.1



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools
  2022-02-15 10:15 [PATCH 0/5] Boot time cpupools Luca Fancellu
@ 2022-02-15 10:15 ` Luca Fancellu
  2022-02-15 10:33   ` Juergen Gross
  2022-02-15 10:15 ` [PATCH 2/5] xen/sched: create public function for cpupools creation Luca Fancellu
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 10:15 UTC (permalink / raw)
  To: xen-devel; +Cc: wei.chen, Wei Liu, Anthony PERARD, Juergen Gross

With the introduction of boot time cpupools, Xen can create many
different cpupools at boot time in addition to the cpupool with id 0.

Since these newly created cpupools can't yet have an
entry in Xenstore, create the entry using the xen-init-dom0
helper with the usual convention: Pool-<cpupool id>.

Given the change, remove the check for poolid == 0 from
libxl_cpupoolid_to_name(...).
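The naming convention above can be sketched as a small helper. This is a
hypothetical illustration of the xenstore path and default name the patch
produces, not code from the patch itself (which builds the strings inline):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper illustrating the convention used by the patch:
 * each pool gets a xenstore entry at /local/pool/<id>/name holding the
 * default name Pool-<id>. Not part of the actual change. */
static void cpupool_default_entry(unsigned int pool_id,
                                  char *path, size_t path_len,
                                  char *name, size_t name_len)
{
    /* Path where the pool name lives in xenstore */
    snprintf(path, path_len, "/local/pool/%u/name", pool_id);
    /* Default name for an otherwise unnamed pool */
    snprintf(name, name_len, "Pool-%u", pool_id);
}
```

With this convention, libxl_cpupoolid_to_name() no longer needs a special
case for pool 0, since Pool-0 is written like every other pool.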

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 tools/helpers/xen-init-dom0.c  | 26 +++++++++++++++++++++++++-
 tools/libs/light/libxl_utils.c |  3 +--
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
index c99224a4b607..3539f56faeb0 100644
--- a/tools/helpers/xen-init-dom0.c
+++ b/tools/helpers/xen-init-dom0.c
@@ -43,7 +43,10 @@ int main(int argc, char **argv)
     int rc;
     struct xs_handle *xsh = NULL;
     xc_interface *xch = NULL;
-    char *domname_string = NULL, *domid_string = NULL;
+    char *domname_string = NULL, *domid_string = NULL, *pool_string = NULL;
+    char pool_path[strlen("/local/pool") + 12], pool_name[strlen("Pool-") + 5];
+    xc_cpupoolinfo_t *xcinfo;
+    unsigned int pool_id = 0;
     libxl_uuid uuid;
 
     /* Accept 0 or 1 argument */
@@ -114,6 +117,27 @@ int main(int argc, char **argv)
         goto out;
     }
 
+    /* Create an entry in xenstore for each cpupool on the system */
+    do {
+        xcinfo = xc_cpupool_getinfo(xch, pool_id);
+        if (xcinfo != NULL) {
+            if (xcinfo->cpupool_id != pool_id)
+                pool_id = xcinfo->cpupool_id;
+            snprintf(pool_path, sizeof(pool_path), "/local/pool/%d/name",
+                     pool_id);
+            snprintf(pool_name, sizeof(pool_name), "Pool-%d", pool_id);
+            pool_id++;
+            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
+                          strlen(pool_name))) {
+                fprintf(stderr, "cannot set pool name\n");
+                rc = 1;
+            }
+            xc_cpupool_infofree(xch, xcinfo);
+            if (rc)
+                goto out;
+        }
+    } while(xcinfo != NULL);
+
     printf("Done setting up Dom0\n");
 
 out:
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index b91c2cafa223..81780da3ff40 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -151,8 +151,7 @@ char *libxl_cpupoolid_to_name(libxl_ctx *ctx, uint32_t poolid)
 
     snprintf(path, sizeof(path), "/local/pool/%d/name", poolid);
     s = xs_read(ctx->xsh, XBT_NULL, path, &len);
-    if (!s && (poolid == 0))
-        return strdup("Pool-0");
+
     return s;
 }
 
-- 
2.17.1




* [PATCH 2/5] xen/sched: create public function for cpupools creation
  2022-02-15 10:15 [PATCH 0/5] Boot time cpupools Luca Fancellu
  2022-02-15 10:15 ` [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
@ 2022-02-15 10:15 ` Luca Fancellu
  2022-02-15 10:38   ` Juergen Gross
  2022-02-15 10:15 ` [PATCH 3/5] xen/sched: retrieve scheduler id by name Luca Fancellu
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 10:15 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Juergen Gross, Dario Faggioli, George Dunlap,
	Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini,
	Wei Liu

Create a new public function to create cpupools. It checks for pool id
uniqueness before creating the pool, and takes either a scheduler id or a
negative value, the latter meaning that the default Xen scheduler will be
used.
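The negative-value fallback can be sketched in isolation. This is a minimal
stand-in, assuming a default scheduler id is available; the id value used in
the test below is only a placeholder, not Xen's actual value:

```c
#include <assert.h>

/* Sketch of the fallback rule described above: a negative sched_id
 * selects the default scheduler, any non-negative id is kept as-is.
 * default_sched_id stands in for scheduler_get_default()->sched_id. */
static int resolve_sched_id(int sched_id, int default_sched_id)
{
    return (sched_id < 0) ? default_sched_id : sched_id;
}
```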

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/common/sched/cpupool.c | 26 ++++++++++++++++++++++++++
 xen/include/xen/sched.h    | 17 +++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 8c6e6eb9ccd5..4da12528d6b9 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1218,6 +1218,32 @@ static void cpupool_hypfs_init(void)
 
 #endif /* CONFIG_HYPFS */
 
+struct cpupool *__init cpupool_create_pool(unsigned int pool_id, int sched_id)
+{
+    struct cpupool *pool;
+
+    ASSERT(!spin_is_locked(&cpupool_lock));
+
+    spin_lock(&cpupool_lock);
+    /* Check if a cpupool with pool_id exists */
+    pool = __cpupool_find_by_id(pool_id, true);
+    spin_unlock(&cpupool_lock);
+
+    /* Pool exists, return an error */
+    if ( pool )
+        return NULL;
+
+    if ( sched_id < 0 )
+        sched_id = scheduler_get_default()->sched_id;
+
+    pool = cpupool_create(pool_id, sched_id);
+
+    BUG_ON(IS_ERR(pool));
+    cpupool_put(pool);
+
+    return pool;
+}
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 37f78cc4c4c9..a50df1bccdc0 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1145,6 +1145,23 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 unsigned int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
+
+/*
+ * cpupool_create_pool - Creates a cpupool
+ * @pool_id: id of the pool to be created
+ * @sched_id: id of the scheduler to be used for the pool
+ *
+ * Creates a cpupool with the given pool_id. The id must be unique and the
+ * function will return NULL if a pool with that id already exists.
+ * The sched_id parameter identifies the scheduler to be used; if it is
+ * negative, the default Xen scheduler will be used.
+ *
+ * returns:
+ *     pointer to the struct cpupool just created, on success
+ *     NULL, on cpupool creation error
+ */
+struct cpupool *cpupool_create_pool(unsigned int pool_id, int sched_id);
+
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.17.1




* [PATCH 3/5] xen/sched: retrieve scheduler id by name
  2022-02-15 10:15 [PATCH 0/5] Boot time cpupools Luca Fancellu
  2022-02-15 10:15 ` [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
  2022-02-15 10:15 ` [PATCH 2/5] xen/sched: create public function for cpupools creation Luca Fancellu
@ 2022-02-15 10:15 ` Luca Fancellu
  2022-02-15 10:40   ` Juergen Gross
  2022-02-15 10:15 ` [PATCH 4/5] xen/cpupool: Create different cpupools at boot time Luca Fancellu
  2022-02-15 10:15 ` [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
  4 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 10:15 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, George Dunlap, Dario Faggioli, Andrew Cooper,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Add a public function to retrieve the scheduler id by the scheduler
name.
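The lookup is a linear scan over the registered schedulers, matching on
their opt_name. A minimal stand-in is sketched below; the struct, table
contents, and ids are illustrative assumptions, not Xen's real scheduler
array:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for Xen's scheduler table entries. */
struct sched_entry {
    const char *opt_name;
    int sched_id;
};

/* Scan the table for a matching opt_name; return its id, or -1 when
 * the name is unknown, mirroring sched_get_id_by_name()'s contract. */
static int get_id_by_name(const struct sched_entry *table, size_t n,
                          const char *name)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].opt_name && !strcmp(table[i].opt_name, name))
            return table[i].sched_id;
    return -1;
}
```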

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/common/sched/core.c | 11 +++++++++++
 xen/include/xen/sched.h | 11 +++++++++++
 2 files changed, 22 insertions(+)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 8f4b1ca10d1c..9696d3c1d769 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2947,6 +2947,17 @@ void scheduler_enable(void)
     scheduler_active = true;
 }
 
+int __init sched_get_id_by_name(const char *sched_name)
+{
+    unsigned int i;
+
+    for ( i = 0; i < NUM_SCHEDULERS; i++ )
+        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
+            return schedulers[i]->sched_id;
+
+    return -1;
+}
+
 /* Initialise the data structures. */
 void __init scheduler_init(void)
 {
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a50df1bccdc0..a67a9eb2fe9d 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -756,6 +756,17 @@ void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
+
+/*
+ * sched_get_id_by_name - retrieves a scheduler id given a scheduler name
+ * @sched_name: scheduler name as a string
+ *
+ * returns:
+ *     positive value being the scheduler id, on success
+ *     negative value if the scheduler name is not found.
+ */
+int sched_get_id_by_name(const char *sched_name);
+
 void vcpu_wake(struct vcpu *v);
 long vcpu_yield(void);
 void vcpu_sleep_nosync(struct vcpu *v);
-- 
2.17.1




* [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-15 10:15 [PATCH 0/5] Boot time cpupools Luca Fancellu
                   ` (2 preceding siblings ...)
  2022-02-15 10:15 ` [PATCH 3/5] xen/sched: retrieve scheduler id by name Luca Fancellu
@ 2022-02-15 10:15 ` Luca Fancellu
  2022-02-15 10:48   ` Juergen Gross
  2022-02-16  2:45   ` Stefano Stabellini
  2022-02-15 10:15 ` [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
  4 siblings, 2 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 10:15 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
	Bertrand Marquis, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli

Introduce an architecture specific way to create different cpupools at
boot time. This is particularly useful on Arm big.LITTLE systems, where
there might be the need to have a different cpupool for each type of
core; systems using NUMA can likewise have a different cpupool for each
node.

On Arm the feature relies on a device tree specification of the cpupools
to build the pools and assign cpus to them.

Documentation is added to explain the feature.
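The core of the parsing is matching each cpu node's reg value (the MPIDR)
against the logical cpu map. The sketch below is a simplified, hypothetical
version of that matching step; types are flattened and the real code walks
a cpumask and panics when no match is found:

```c
#include <assert.h>

/* Simplified sketch of the MPIDR matching the parser performs: compare
 * the reg value read from a cpu node against each online cpu's entry in
 * the logical map to find its logical cpu number. */
static int find_logical_cpu(const unsigned long *logical_map,
                            unsigned int nr_cpus, unsigned long cpu_reg)
{
    for (unsigned int i = 0; i < nr_cpus; i++)
        if (logical_map[i] == cpu_reg)
            return (int)i;
    return -1; /* not online, or already consumed by another pool */
}
```

In the patch a cpu is removed from the to-be-assigned mask once matched,
so a second reference to the same cpu fails the search, which is how
double assignment is detected.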

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
 xen/arch/arm/Kconfig                   |   9 ++
 xen/arch/arm/Makefile                  |   1 +
 xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
 xen/common/sched/cpupool.c             |   4 +-
 xen/include/xen/sched.h                |  11 +++
 6 files changed, 260 insertions(+), 1 deletion(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/arch/arm/cpupool.c

diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
new file mode 100644
index 000000000000..7298b6394332
--- /dev/null
+++ b/docs/misc/arm/device-tree/cpupools.txt
@@ -0,0 +1,118 @@
+Boot time cpupools
+==================
+
+On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
+possible to create cpupools during boot phase by specifying them in the device
+tree.
+
+Cpupools specification nodes shall be direct children of the /chosen node.
+Each cpupool node contains the following properties:
+
+- compatible (mandatory)
+
+    Must always include the compatibility string: "xen,cpupool".
+
+- cpupool-id (mandatory)
+
+    Must be a positive integer number.
+
+- cpupool-cpus (mandatory)
+
+    Must be a list of device tree phandles to nodes describing cpus (e.g.
+    having device_type = "cpu"); it can't be empty.
+
+- cpupool-sched (optional)
+
+    Must be a string holding the name of a Xen scheduler. It has no effect
+    when used in conjunction with a cpupool-id equal to zero; in that case the
+    default Xen scheduler is selected (sched=<...> boot argument).
+
+
+Constraints
+===========
+
+The cpupool with id zero is implicitly created even if not specified; that pool
+must have at least one cpu assigned, otherwise Xen will stop.
+
+Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
+not assigned to any other cpupool.
+
+If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
+stop.
+
+
+Examples
+========
+
+On a system having two types of cores, the following device tree specification
+will instruct Xen to create two cpupools:
+
+- The cpupool with id 0 will have 4 cpus assigned.
+- The cpupool with id 1 will have 2 cpus assigned.
+
+As can be seen from the example, cpupool_a has only two cpus assigned, but
+since two cpus are left unassigned, they are automatically assigned to the
+cpupool with id zero. The example works only if hmp-unsafe=1 is passed in the
+Xen boot arguments; otherwise not all cores will be brought up by Xen and the
+cpupool creation process will stop Xen.
+
+
+a72_1: cpu@0 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x0>;
+        device_type = "cpu";
+        [...]
+};
+
+a72_2: cpu@1 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x1>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_1: cpu@100 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x100>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_2: cpu@101 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x101>;
+        device_type = "cpu";
+        [...]
+};
+
+cpu@102 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x102>;
+        device_type = "cpu";
+        [...]
+};
+
+cpu@103 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x103>;
+        device_type = "cpu";
+        [...]
+};
+
+chosen {
+
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-id = <0>;
+        cpupool-cpus = <&a53_1 &a53_2>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-id = <1>;
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = "credit2";
+    };
+
+    [...]
+
+};
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ecfa6822e4d3..64c2879513b7 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -33,6 +33,15 @@ config ACPI
 	  Advanced Configuration and Power Interface (ACPI) support for Xen is
 	  an alternative to device tree on ARM64.
 
+config BOOT_TIME_CPUPOOLS
+	bool "Create cpupools at boot time"
+	depends on ARM
+	default n
+	help
+
+	  Creates cpupools during boot time and assigns cpus to them. Cpupools
+	  options can be specified in the device tree.
+
 config GICV3
 	bool "GICv3 driver"
 	depends on ARM_64 && !NEW_VGIC
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index d0dee10102b6..6165da4e77b4 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -13,6 +13,7 @@ obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
 obj-y += bootfdt.init.o
 obj-y += cpuerrata.o
 obj-y += cpufeature.o
+obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += cpupool.o
 obj-y += decode.o
 obj-y += device.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
diff --git a/xen/arch/arm/cpupool.c b/xen/arch/arm/cpupool.c
new file mode 100644
index 000000000000..a9d5b94635b9
--- /dev/null
+++ b/xen/arch/arm/cpupool.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/arch/arm/cpupool.c
+ *
+ * Code to create cpupools at boot time for arm architecture.
+ *
+ * Copyright (C) 2022 Arm Ltd.
+ */
+
+#include <xen/sched.h>
+
+static struct cpupool *__initdata pool_cpu_map[NR_CPUS];
+
+void __init arch_allocate_cpupools(const cpumask_t *cpu_online_map)
+{
+    const struct dt_device_node *chosen, *node;
+    unsigned int cpu_num, cpupool0_cpu_count = 0;
+    cpumask_t cpus_to_assign;
+
+    chosen = dt_find_node_by_path("/chosen");
+    if ( !chosen )
+        return;
+
+    cpumask_copy(&cpus_to_assign, cpu_online_map);
+
+    dt_for_each_child_node(chosen, node)
+    {
+        const struct dt_device_node *cpu_node;
+        unsigned int pool_id;
+        int i = 0, sched_id = -1;
+        const char* scheduler_name;
+        struct cpupool *pool = cpupool0;
+
+        if ( !dt_device_is_compatible(node, "xen,cpupool") )
+            continue;
+
+        if ( !dt_property_read_u32(node, "cpupool-id", &pool_id) )
+            panic("Missing cpupool-id property!\n");
+
+        if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) )
+        {
+            sched_id = sched_get_id_by_name(scheduler_name);
+            if ( sched_id < 0 )
+                panic("Scheduler %s does not exist!\n", scheduler_name);
+        }
+
+        if ( pool_id )
+        {
+            pool = cpupool_create_pool(pool_id, sched_id);
+            if ( !pool )
+                panic("Error creating pool id %u!\n", pool_id);
+        }
+
+        cpu_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+        if ( !cpu_node )
+            panic("Missing or empty cpupool-cpus property!\n");
+
+        while ( cpu_node )
+        {
+            register_t cpu_reg;
+            const __be32 *prop;
+
+            prop = dt_get_property(cpu_node, "reg", NULL);
+            if ( !prop )
+                panic("cpupool-cpus pointed node has no reg property!\n");
+
+            cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
+
+            /* Check if the cpu is online and in the set to be assigned */
+            for_each_cpu ( cpu_num, &cpus_to_assign )
+                if ( cpu_logical_map(cpu_num) == cpu_reg )
+                    break;
+
+            if ( cpu_num >= nr_cpu_ids )
+                panic("Cpu found in %s is not online or it's assigned twice!\n",
+                      dt_node_name(node));
+
+            pool_cpu_map[cpu_num] = pool;
+            cpumask_clear_cpu(cpu_num, &cpus_to_assign);
+
+            printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-%u.\n",
+                   cpu_reg, pool_id);
+
+            /* Keep track of how many cpus are assigned to Pool-0 */
+            if ( !pool_id )
+                cpupool0_cpu_count++;
+
+            cpu_node = dt_parse_phandle(node, "cpupool-cpus", ++i);
+        }
+    }
+
+    /* Assign every unassigned cpu to Pool-0 */
+    for_each_cpu ( cpu_num, &cpus_to_assign )
+    {
+        pool_cpu_map[cpu_num] = cpupool0;
+        cpupool0_cpu_count++;
+        printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-0.\n",
+               cpu_logical_map(cpu_num));
+    }
+
+    if ( !cpupool0_cpu_count )
+        panic("No cpu assigned to cpupool0!\n");
+}
+
+struct cpupool *__init arch_get_cpupool(unsigned int cpu)
+{
+    return pool_cpu_map[cpu];
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 4da12528d6b9..6013d75e2edd 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1257,12 +1257,14 @@ static int __init cpupool_init(void)
     cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);
 
+    arch_allocate_cpupools(&cpu_online_map);
+
     spin_lock(&cpupool_lock);
 
     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
 
     for_each_cpu ( cpu, &cpupool_free_cpus )
-        cpupool_assign_cpu_locked(cpupool0, cpu);
+        cpupool_assign_cpu_locked(arch_get_cpupool(cpu), cpu);
 
     spin_unlock(&cpupool_lock);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a67a9eb2fe9d..dda7db2ba51f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1177,6 +1177,17 @@ extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 
+#ifdef CONFIG_BOOT_TIME_CPUPOOLS
+void arch_allocate_cpupools(const cpumask_t *cpu_online_map);
+struct cpupool *arch_get_cpupool(unsigned int cpu);
+#else
+static inline void arch_allocate_cpupools(const cpumask_t *cpu_online_map) {}
+static inline struct cpupool *arch_get_cpupool(unsigned int cpu)
+{
+    return cpupool0;
+}
+#endif
+
 #endif /* __SCHED_H__ */
 
 /*
-- 
2.17.1




* [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools
  2022-02-15 10:15 [PATCH 0/5] Boot time cpupools Luca Fancellu
                   ` (3 preceding siblings ...)
  2022-02-15 10:15 ` [PATCH 4/5] xen/cpupool: Create different cpupools at boot time Luca Fancellu
@ 2022-02-15 10:15 ` Luca Fancellu
  2022-02-15 10:56   ` Juergen Gross
  2022-02-16  4:01   ` Stefano Stabellini
  4 siblings, 2 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 10:15 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
	Bertrand Marquis, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné

Introduce the domain-cpupool property of a xen,domain device tree node.
It specifies the device tree phandle of a xen,cpupool node that
identifies a cpupool, created at boot time, to which the guest will be
assigned on creation.

Add a member to the xen_arch_domainconfig public interface, so the
XEN_DOMCTL_INTERFACE_VERSION is bumped.

Update the documentation about the new property.
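The split between architectures can be sketched as two variants of the new
hook. This is a simplified stand-in, assuming a flattened config struct;
the real hook takes struct xen_domctl_createdomain:

```c
#include <assert.h>

/* Simplified stand-in for the create-domain config carrying the pool
 * id parsed from the domain-cpupool DT property on Arm. */
struct createdomain_cfg {
    unsigned int cpupool_id;
};

/* Arm: the pool id comes from the device tree via the config. */
static unsigned int arch_pool_id_arm(const struct createdomain_cfg *cfg)
{
    return cfg->cpupool_id;
}

/* x86: no boot time pools here, every new domain stays in Pool-0. */
static unsigned int arch_pool_id_x86(const struct createdomain_cfg *cfg)
{
    (void)cfg;
    return 0;
}
```

domain_create() then forwards whichever value the arch hook returns to
sched_init_domain(), replacing the previously hardcoded pool 0.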

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/misc/arm/device-tree/booting.txt | 5 +++++
 xen/arch/arm/domain.c                 | 6 ++++++
 xen/arch/arm/domain_build.c           | 9 ++++++++-
 xen/arch/x86/domain.c                 | 6 ++++++
 xen/common/domain.c                   | 5 ++++-
 xen/include/public/arch-arm.h         | 2 ++
 xen/include/public/domctl.h           | 2 +-
 xen/include/xen/domain.h              | 3 +++
 8 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 71895663a4de..0f1f210fa449 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -182,6 +182,11 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 92a6c509e5c5..be350b28b588 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -788,6 +788,12 @@ fail:
     return rc;
 }
 
+unsigned int
+arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
+{
+    return config->arch.cpupool_id;
+}
+
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 6931c022a2e8..4f239e756775 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3015,7 +3015,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3053,6 +3054,12 @@ void __init create_domUs(void)
                                          GUEST_VPL011_SPI - 32 + 1);
         }
 
+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+            dt_property_read_u32(cpupool_node, "cpupool-id",
+                                 &d_cfg.arch.cpupool_id);
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ef1812dc1402..3e3cf88c9c82 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -880,6 +880,12 @@ int arch_domain_create(struct domain *d,
     return rc;
 }
 
+unsigned int
+arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
+{
+    return 0;
+}
+
 void arch_domain_destroy(struct domain *d)
 {
     if ( is_hvm_domain(d) )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 2048ebad86ff..d42ca8292025 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -665,6 +665,8 @@ struct domain *domain_create(domid_t domid,
 
     if ( !is_idle_domain(d) )
     {
+        unsigned int domain_cpupool_id;
+
         watchdog_domain_init(d);
         init_status |= INIT_watchdog;
 
@@ -698,7 +700,8 @@ struct domain *domain_create(domid_t domid,
         if ( !d->pbuf )
             goto fail;
 
-        if ( (err = sched_init_domain(d, 0)) != 0 )
+        domain_cpupool_id = arch_get_domain_cpupool_id(config);
+        if ( (err = sched_init_domain(d, domain_cpupool_id)) != 0 )
             goto fail;
 
         if ( (err = late_hwdom_init(d)) != 0 )
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 94b31511ddea..2c5d1ea7f01a 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -321,6 +321,8 @@ struct xen_arch_domainconfig {
     uint16_t tee_type;
     /* IN */
     uint32_t nr_spis;
+    /* IN */
+    unsigned int cpupool_id;
     /*
      * OUT
      * Based on the property clock-frequency in the DT timer node.
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b85e6170b0aa..31ec083cb06e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 160c8dbdab33..fb018871bc17 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -63,6 +63,9 @@ void unmap_vcpu_info(struct vcpu *v);
 int arch_domain_create(struct domain *d,
                        struct xen_domctl_createdomain *config);
 
+unsigned int
+arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config);
+
 void arch_domain_destroy(struct domain *d);
 
 void arch_domain_shutdown(struct domain *d);
-- 
2.17.1




* Re: [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools
  2022-02-15 10:15 ` [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
@ 2022-02-15 10:33   ` Juergen Gross
  2022-02-15 17:48     ` Luca Fancellu
  0 siblings, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-15 10:33 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel; +Cc: wei.chen, Wei Liu, Anthony PERARD



On 15.02.22 11:15, Luca Fancellu wrote:
> With the introduction of boot time cpupools, Xen can create many
> different cpupools at boot time in addition to the cpupool with id 0.
> 
> Since these newly created cpupools can't yet have an
> entry in Xenstore, create the entry using the xen-init-dom0
> helper with the usual convention: Pool-<cpupool id>.
> 
> Given the change, remove the check for poolid == 0 from
> libxl_cpupoolid_to_name(...).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>   tools/helpers/xen-init-dom0.c  | 26 +++++++++++++++++++++++++-
>   tools/libs/light/libxl_utils.c |  3 +--
>   2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
> index c99224a4b607..3539f56faeb0 100644
> --- a/tools/helpers/xen-init-dom0.c
> +++ b/tools/helpers/xen-init-dom0.c
> @@ -43,7 +43,10 @@ int main(int argc, char **argv)
>       int rc;
>       struct xs_handle *xsh = NULL;
>       xc_interface *xch = NULL;
> -    char *domname_string = NULL, *domid_string = NULL;
> +    char *domname_string = NULL, *domid_string = NULL, *pool_string = NULL;

pool_string seems to be unused.

> +    char pool_path[strlen("/local/pool") + 12], pool_name[strlen("Pool-") + 5];

I don't like that. Why don't you use pointers and ...

> +    xc_cpupoolinfo_t *xcinfo;
> +    unsigned int pool_id = 0;
>       libxl_uuid uuid;
>   
>       /* Accept 0 or 1 argument */
> @@ -114,6 +117,27 @@ int main(int argc, char **argv)
>           goto out;
>       }
>   
> +    /* Create an entry in xenstore for each cpupool on the system */
> +    do {
> +        xcinfo = xc_cpupool_getinfo(xch, pool_id);
> +        if (xcinfo != NULL) {
> +            if (xcinfo->cpupool_id != pool_id)
> +                pool_id = xcinfo->cpupool_id;
> +            snprintf(pool_path, sizeof(pool_path), "/local/pool/%d/name",
> +                     pool_id);
> +            snprintf(pool_name, sizeof(pool_name), "Pool-%d", pool_id);

... use asprintf() here for allocating the strings in the needed size?

> +            pool_id++;
> +            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
> +                          strlen(pool_name))) {
> +                fprintf(stderr, "cannot set pool name\n");
> +                rc = 1;
> +            }
> +            xc_cpupool_infofree(xch, xcinfo);
> +            if (rc)
> +                goto out;

Moving the call of xc_cpupool_infofree() ahead of the call of xs_write()
would drop the need for this last if statement, as you could add the
goto to the upper if.

> +        }
> +    } while(xcinfo != NULL);
> +

Since all of this is being done to allow assigning other boot-time domains
to cpupools, shouldn't you add names for domains other than dom0 here,
too?


Juergen

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 3149 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]


* Re: [PATCH 2/5] xen/sched: create public function for cpupools creation
  2022-02-15 10:15 ` [PATCH 2/5] xen/sched: create public function for cpupools creation Luca Fancellu
@ 2022-02-15 10:38   ` Juergen Gross
  2022-02-15 17:50     ` Luca Fancellu
  0 siblings, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-15 10:38 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, Dario Faggioli, George Dunlap, Andrew Cooper,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu



On 15.02.22 11:15, Luca Fancellu wrote:
> Create new public function to create cpupools, it checks for pool id
> uniqueness before creating the pool and can take a scheduler id or
> a negative value that means the default Xen scheduler will be used.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Juergen Gross <jgross@suse.com>

with one further question: you are allowing to use another scheduler,
but what if someone wants to set non-standard scheduling parameters
(e.g. another time slice)?


Juergen


* Re: [PATCH 3/5] xen/sched: retrieve scheduler id by name
  2022-02-15 10:15 ` [PATCH 3/5] xen/sched: retrieve scheduler id by name Luca Fancellu
@ 2022-02-15 10:40   ` Juergen Gross
  2022-02-15 17:52     ` Luca Fancellu
  0 siblings, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-15 10:40 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, George Dunlap, Dario Faggioli, Andrew Cooper,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu



On 15.02.22 11:15, Luca Fancellu wrote:
> Add a public function to retrieve the scheduler id by the scheduler
> name.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>   xen/common/sched/core.c | 11 +++++++++++
>   xen/include/xen/sched.h | 11 +++++++++++
>   2 files changed, 22 insertions(+)
> 
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index 8f4b1ca10d1c..9696d3c1d769 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -2947,6 +2947,17 @@ void scheduler_enable(void)
>       scheduler_active = true;
>   }
>   
> +int __init sched_get_id_by_name(const char *sched_name)
> +{
> +    unsigned int i;
> +
> +    for ( i = 0; i < NUM_SCHEDULERS; i++ )
> +        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
> +            return schedulers[i]->sched_id;
> +
> +    return -1;
> +}
> +

Please make use of this function in scheduler_init(), as this
functionality is open coded there, too.


Juergen


* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-15 10:15 ` [PATCH 4/5] xen/cpupool: Create different cpupools at boot time Luca Fancellu
@ 2022-02-15 10:48   ` Juergen Gross
  2022-02-15 17:56     ` Luca Fancellu
  2022-02-16  2:45   ` Stefano Stabellini
  1 sibling, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-15 10:48 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
	Bertrand Marquis, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Dario Faggioli



On 15.02.22 11:15, Luca Fancellu wrote:
> Introduce an architecture specific way to create different cpupools
> at boot time, this is particularly useful on ARM big.LITTLE system
> where there might be the need to have different cpupools for each type
> of core, but also systems using NUMA can have different cpu pools for
> each node.
> 
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
> 
> Documentation is created to explain the feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

IIRC I suggested to have the core functionality in common code in order
to allow using boot time cpupool creation e.g. via commandline for x86,
too.


Juergen


* Re: [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools
  2022-02-15 10:15 ` [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
@ 2022-02-15 10:56   ` Juergen Gross
  2022-02-15 18:06     ` Luca Fancellu
  2022-02-16  4:01   ` Stefano Stabellini
  1 sibling, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-15 10:56 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
	Bertrand Marquis, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné



On 15.02.22 11:15, Luca Fancellu wrote:
> Introduce domain-cpupool property of a xen,domain device tree node,
> that specifies the cpupool device tree handle of a xen,cpupool node
> that identifies a cpupool created at boot time where the guest will
> be assigned on creation.
> 
> Add member to the xen_arch_domainconfig public interface so the
> XEN_DOMCTL_INTERFACE_VERSION version is bumped.
> 
> Update documentation about the property.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>   docs/misc/arm/device-tree/booting.txt | 5 +++++
>   xen/arch/arm/domain.c                 | 6 ++++++
>   xen/arch/arm/domain_build.c           | 9 ++++++++-
>   xen/arch/x86/domain.c                 | 6 ++++++
>   xen/common/domain.c                   | 5 ++++-
>   xen/include/public/arch-arm.h         | 2 ++
>   xen/include/public/domctl.h           | 2 +-
>   xen/include/xen/domain.h              | 3 +++
>   8 files changed, 35 insertions(+), 3 deletions(-)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 71895663a4de..0f1f210fa449 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -182,6 +182,11 @@ with the following properties:
>       Both #address-cells and #size-cells need to be specified because
>       both sub-nodes (described shortly) have reg properties.
>   
> +- domain-cpupool
> +
> +    Optional. Handle to a xen,cpupool device tree node that identifies the
> +    cpupool where the guest will be started at boot.
> +
>   Under the "xen,domain" compatible node, one or more sub-nodes are present
>   for the DomU kernel and ramdisk.
>   
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 92a6c509e5c5..be350b28b588 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -788,6 +788,12 @@ fail:
>       return rc;
>   }
>   
> +unsigned int
> +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
> +{
> +    return config->arch.cpupool_id;
> +}
> +

I don't see why this should be arch specific.

>   void arch_domain_destroy(struct domain *d)
>   {
>       /* IOMMU page table is shared with P2M, always call
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 6931c022a2e8..4f239e756775 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3015,7 +3015,8 @@ static int __init construct_domU(struct domain *d,
>   void __init create_domUs(void)
>   {
>       struct dt_device_node *node;
> -    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
> +    const struct dt_device_node *cpupool_node,
> +                                *chosen = dt_find_node_by_path("/chosen");
>   
>       BUG_ON(chosen == NULL);
>       dt_for_each_child_node(chosen, node)
> @@ -3053,6 +3054,12 @@ void __init create_domUs(void)
>                                            GUEST_VPL011_SPI - 32 + 1);
>           }
>   
> +        /* Get the optional property domain-cpupool */
> +        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
> +        if ( cpupool_node )
> +            dt_property_read_u32(cpupool_node, "cpupool-id",
> +                                 &d_cfg.arch.cpupool_id);
> +
>           /*
>            * The variable max_init_domid is initialized with zero, so here it's
>            * very important to use the pre-increment operator to call
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index ef1812dc1402..3e3cf88c9c82 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -880,6 +880,12 @@ int arch_domain_create(struct domain *d,
>       return rc;
>   }
>   
> +unsigned int
> +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
> +{
> +    return 0;
> +}
> +
>   void arch_domain_destroy(struct domain *d)
>   {
>       if ( is_hvm_domain(d) )
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 2048ebad86ff..d42ca8292025 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -665,6 +665,8 @@ struct domain *domain_create(domid_t domid,
>   
>       if ( !is_idle_domain(d) )
>       {
> +        unsigned int domain_cpupool_id;
> +
>           watchdog_domain_init(d);
>           init_status |= INIT_watchdog;
>   
> @@ -698,7 +700,8 @@ struct domain *domain_create(domid_t domid,
>           if ( !d->pbuf )
>               goto fail;
>   
> -        if ( (err = sched_init_domain(d, 0)) != 0 )
> +        domain_cpupool_id = arch_get_domain_cpupool_id(config);
> +        if ( (err = sched_init_domain(d, domain_cpupool_id)) != 0 )
>               goto fail;
>   
>           if ( (err = late_hwdom_init(d)) != 0 )
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 94b31511ddea..2c5d1ea7f01a 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -321,6 +321,8 @@ struct xen_arch_domainconfig {
>       uint16_t tee_type;
>       /* IN */
>       uint32_t nr_spis;
> +    /* IN */
> +    unsigned int cpupool_id;

As said above: why is this arch specific? Moving it to the common part
would enable libxl to get rid of having to call xc_cpupool_movedomain()
in libxl__domain_make().


Juergen


* Re: [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools
  2022-02-15 10:33   ` Juergen Gross
@ 2022-02-15 17:48     ` Luca Fancellu
  2022-02-16  6:13       ` Juergen Gross
  0 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 17:48 UTC (permalink / raw)
  To: Juergen Gross; +Cc: xen-devel, wei.chen, Wei Liu, Anthony PERARD



> On 15 Feb 2022, at 10:33, Juergen Gross <jgross@suse.com> wrote:
> 
> On 15.02.22 11:15, Luca Fancellu wrote:
>> With the introduction of boot time cpupools, Xen can create many
>> different cpupools at boot time other than cpupool with id 0.
>> Since these newly created cpupools can't have an
>> entry in Xenstore, create the entry using xen-init-dom0
>> helper with the usual convention: Pool-<cpupool id>.
>> Given the change, remove the check for poolid == 0 from
>> libxl_cpupoolid_to_name(...).
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>>  tools/helpers/xen-init-dom0.c  | 26 +++++++++++++++++++++++++-
>>  tools/libs/light/libxl_utils.c |  3 +--
>>  2 files changed, 26 insertions(+), 3 deletions(-)
>> diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
>> index c99224a4b607..3539f56faeb0 100644
>> --- a/tools/helpers/xen-init-dom0.c
>> +++ b/tools/helpers/xen-init-dom0.c
>> @@ -43,7 +43,10 @@ int main(int argc, char **argv)
>>      int rc;
>>      struct xs_handle *xsh = NULL;
>>      xc_interface *xch = NULL;
>> -    char *domname_string = NULL, *domid_string = NULL;
>> +    char *domname_string = NULL, *domid_string = NULL, *pool_string = NULL;

Hi Juergen,

> 
> pool_string seems to be unused.

I will remove it

> 
>> +    char pool_path[strlen("/local/pool") + 12], pool_name[strlen("Pool-") + 5];
> 
> I don't like that. Why don't you use pointers and ...
> 
>> +    xc_cpupoolinfo_t *xcinfo;
>> +    unsigned int pool_id = 0;
>>      libxl_uuid uuid;
>>        /* Accept 0 or 1 argument */
>> @@ -114,6 +117,27 @@ int main(int argc, char **argv)
>>          goto out;
>>      }
>>  +    /* Create an entry in xenstore for each cpupool on the system */
>> +    do {
>> +        xcinfo = xc_cpupool_getinfo(xch, pool_id);
>> +        if (xcinfo != NULL) {
>> +            if (xcinfo->cpupool_id != pool_id)
>> +                pool_id = xcinfo->cpupool_id;
>> +            snprintf(pool_path, sizeof(pool_path), "/local/pool/%d/name",
>> +                     pool_id);
>> +            snprintf(pool_name, sizeof(pool_name), "Pool-%d", pool_id);
> 
> ... use asprintf() here for allocating the strings in the needed size?

Why do you prefer that approach? I was trying to avoid allocation/free
operations in a loop, which would also need more code to handle allocation
failures. With the buffers on the stack we don't have that problem.

> 
>> +            pool_id++;
>> +            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
>> +                          strlen(pool_name))) {
>> +                fprintf(stderr, "cannot set pool name\n");
>> +                rc = 1;
>> +            }
>> +            xc_cpupool_infofree(xch, xcinfo);
>> +            if (rc)
>> +                goto out;
> 
> Moving the call of xc_cpupool_infofree() ahead of the call of xs_write()
> would drop the need for this last if statement, as you could add the
> goto to the upper if.

Yes you are right, it would simplify the code
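
To make the discussion concrete, here is a compilable sketch of what the
simplified loop could look like with both suggestions applied: asprintf()
sizing the strings, and xc_cpupool_infofree() called before the write so a
single error path suffices. This is not the actual patch; the libxc and
xenstore calls are stubbed out (names with a _stub suffix) so the control
flow can be exercised outside a Xen tree, and the stub pool ids 0 and 2
simulate a gap in the cpupool id space.

```c
#define _GNU_SOURCE /* for asprintf() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-ins for the libxc/xenstore calls, just enough to run the loop. */
typedef struct { unsigned int cpupool_id; } xc_cpupoolinfo_t;

static xc_cpupoolinfo_t stub_pools[] = { { 0 }, { 2 } };

/* Like xc_cpupool_getinfo(): returns the first pool with id >= first_id. */
static xc_cpupoolinfo_t *xc_cpupool_getinfo_stub(unsigned int first_id)
{
    for (size_t i = 0; i < sizeof(stub_pools) / sizeof(stub_pools[0]); i++)
        if (stub_pools[i].cpupool_id >= first_id)
            return &stub_pools[i];
    return NULL;
}

static void xc_cpupool_infofree_stub(xc_cpupoolinfo_t *info) { (void)info; }

static int xs_write_stub(const char *path, const char *value)
{
    return printf("%s = %s\n", path, value) > 0;
}

/* The simplified loop: free the info early so one error path suffices.
 * Returns the number of pool names written, or -1 on error. */
static int register_pool_names(void)
{
    unsigned int pool_id = 0;
    int written = 0;
    xc_cpupoolinfo_t *xcinfo;

    while ((xcinfo = xc_cpupool_getinfo_stub(pool_id)) != NULL) {
        char *pool_path = NULL, *pool_name = NULL;
        int ok;

        pool_id = xcinfo->cpupool_id;      /* skip over unused ids */
        xc_cpupool_infofree_stub(xcinfo);  /* freed before any exit path */

        ok = asprintf(&pool_path, "/local/pool/%u/name", pool_id) >= 0 &&
             asprintf(&pool_name, "Pool-%u", pool_id) >= 0 &&
             xs_write_stub(pool_path, pool_name);
        free(pool_path);
        free(pool_name);
        if (!ok) {
            fprintf(stderr, "cannot set pool name\n");
            return -1;
        }
        written++;
        pool_id++;
    }
    return written;
}
```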

> 
>> +        }
>> +    } while(xcinfo != NULL);
>> +
> 
> Since all of this is being done to allow assigning other boot-time domains
> to cpupools, shouldn't you add names for domains other than dom0 here,
> too?

This series is more about cpupools; maybe I can do that in a separate commit
outside of this series.

Thanks for your review.

Cheers,
Luca

> 
> 
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>




* Re: [PATCH 2/5] xen/sched: create public function for cpupools creation
  2022-02-15 10:38   ` Juergen Gross
@ 2022-02-15 17:50     ` Luca Fancellu
  2022-02-16  6:16       ` Juergen Gross
  0 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 17:50 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Xen-devel, Wei Chen, Dario Faggioli, George Dunlap,
	Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini,
	Wei Liu



> On 15 Feb 2022, at 10:38, Juergen Gross <jgross@suse.com> wrote:
> 
> On 15.02.22 11:15, Luca Fancellu wrote:
>> Create new public function to create cpupools, it checks for pool id
>> uniqueness before creating the pool and can take a scheduler id or
>> a negative value that means the default Xen scheduler will be used.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> with one further question: you are allowing to use another scheduler,
> but what if someone wants to set non-standard scheduling parameters
> (e.g. another time slice)?

I guess for now parameters can be tuned only through the xl tool; however, as
future work it would be possible to allow per-scheduler parameters in the
device tree.
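
Purely as an illustration of that idea, a future cpupool node might carry
tuning properties alongside the scheduler name. Note that the
cpupool-sched-ratelimit-us property below is hypothetical and not part of
this series or of Xen; only compatible, cpupool-id, cpupool-cpus and
cpupool-sched are proposed here:

```dts
cp1: cpupool1 {
    compatible = "xen,cpupool";
    cpupool-id = <1>;
    cpupool-cpus = <&a53_0 &a53_1>;
    cpupool-sched = "credit2";
    /* hypothetical per-scheduler tuning knob, not implemented anywhere */
    cpupool-sched-ratelimit-us = <1000>;
};
```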

Cheers,
Luca

> 
> 
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>




* Re: [PATCH 3/5] xen/sched: retrieve scheduler id by name
  2022-02-15 10:40   ` Juergen Gross
@ 2022-02-15 17:52     ` Luca Fancellu
  0 siblings, 0 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 17:52 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Xen-devel, Wei Chen, George Dunlap, Dario Faggioli,
	Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini,
	Wei Liu



> On 15 Feb 2022, at 10:40, Juergen Gross <jgross@suse.com> wrote:
> 
> On 15.02.22 11:15, Luca Fancellu wrote:
>> Add a public function to retrieve the scheduler id by the scheduler
>> name.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>>  xen/common/sched/core.c | 11 +++++++++++
>>  xen/include/xen/sched.h | 11 +++++++++++
>>  2 files changed, 22 insertions(+)
>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>> index 8f4b1ca10d1c..9696d3c1d769 100644
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>> @@ -2947,6 +2947,17 @@ void scheduler_enable(void)
>>      scheduler_active = true;
>>  }
>>  +int __init sched_get_id_by_name(const char *sched_name)
>> +{
>> +    unsigned int i;
>> +
>> +    for ( i = 0; i < NUM_SCHEDULERS; i++ )
>> +        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
>> +            return schedulers[i]->sched_id;
>> +
>> +    return -1;
>> +}
>> +
> 
> Please make use of this function in scheduler_init(), as this
> functionality is open coded there, too.
> 

Ok I will change the code in scheduler_init to use the new function.
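
As a rough sketch of that change (the scheduler table and ids below are
illustrative stand-ins, not the real Xen registry), scheduler_init() could
resolve the "sched=" boot option through the new helper and then select the
scheduler by id instead of open-coding the name comparison:

```c
#include <stddef.h>
#include <string.h>

/* Stub registry standing in for Xen's schedulers[] array. */
struct scheduler { const char *opt_name; int sched_id; };

static const struct scheduler sched_credit2 = { "credit2", 6 };
static const struct scheduler sched_null    = { "null",    9 };
static const struct scheduler *schedulers[] = { &sched_credit2, &sched_null };
#define NUM_SCHEDULERS (sizeof(schedulers) / sizeof(schedulers[0]))

/* The helper from patch 3, reproduced as posted. */
static int sched_get_id_by_name(const char *sched_name)
{
    unsigned int i;

    for ( i = 0; i < NUM_SCHEDULERS; i++ )
        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
            return schedulers[i]->sched_id;

    return -1;
}

/* How scheduler_init() could consume it: resolve the name once, then
 * pick the scheduler by id. */
static const struct scheduler *sched_pick_by_name(const char *opt_sched)
{
    int id = sched_get_id_by_name(opt_sched);
    unsigned int i;

    for ( i = 0; i < NUM_SCHEDULERS; i++ )
        if ( schedulers[i] && schedulers[i]->sched_id == id )
            return schedulers[i];

    return NULL; /* unknown name: caller falls back to the default */
}
```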

Cheers,
Luca

> 
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>




* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-15 10:48   ` Juergen Gross
@ 2022-02-15 17:56     ` Luca Fancellu
  2022-02-16  6:18       ` Juergen Gross
  0 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 17:56 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli



> On 15 Feb 2022, at 10:48, Juergen Gross <jgross@suse.com> wrote:
> 
> On 15.02.22 11:15, Luca Fancellu wrote:
>> Introduce an architecture specific way to create different cpupools
>> at boot time, this is particularly useful on ARM big.LITTLE system
>> where there might be the need to have different cpupools for each type
>> of core, but also systems using NUMA can have different cpu pools for
>> each node.
>> The feature on arm relies on a specification of the cpupools from the
>> device tree to build pools and assign cpus to them.
>> Documentation is created to explain the feature.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> IIRC I suggested to have the core functionality in common code in order
> to allow using boot time cpupool creation e.g. via commandline for x86,
> too.

Yes, however I think a parser handling everything on the command line would
be huge due to input sanitisation, and not as easy to use as the DT. I see
Hyperlaunch has plans to use a DT on x86, though, so I guess it would be ok
to make this feature common once the DT is available on x86 as well.

Cheers,
Luca

> 
> 
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>




* Re: [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools
  2022-02-15 10:56   ` Juergen Gross
@ 2022-02-15 18:06     ` Luca Fancellu
  0 siblings, 0 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-15 18:06 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné



> On 15 Feb 2022, at 10:56, Juergen Gross <jgross@suse.com> wrote:
> 
> On 15.02.22 11:15, Luca Fancellu wrote:
>> Introduce domain-cpupool property of a xen,domain device tree node,
>> that specifies the cpupool device tree handle of a xen,cpupool node
>> that identifies a cpupool created at boot time where the guest will
>> be assigned on creation.
>> Add member to the xen_arch_domainconfig public interface so the
>> XEN_DOMCTL_INTERFACE_VERSION version is bumped.
>> Update documentation about the property.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>>  docs/misc/arm/device-tree/booting.txt | 5 +++++
>>  xen/arch/arm/domain.c                 | 6 ++++++
>>  xen/arch/arm/domain_build.c           | 9 ++++++++-
>>  xen/arch/x86/domain.c                 | 6 ++++++
>>  xen/common/domain.c                   | 5 ++++-
>>  xen/include/public/arch-arm.h         | 2 ++
>>  xen/include/public/domctl.h           | 2 +-
>>  xen/include/xen/domain.h              | 3 +++
>>  8 files changed, 35 insertions(+), 3 deletions(-)
>> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
>> index 71895663a4de..0f1f210fa449 100644
>> --- a/docs/misc/arm/device-tree/booting.txt
>> +++ b/docs/misc/arm/device-tree/booting.txt
>> @@ -182,6 +182,11 @@ with the following properties:
>>      Both #address-cells and #size-cells need to be specified because
>>      both sub-nodes (described shortly) have reg properties.
>>  +- domain-cpupool
>> +
>> +    Optional. Handle to a xen,cpupool device tree node that identifies the
>> +    cpupool where the guest will be started at boot.
>> +
>>  Under the "xen,domain" compatible node, one or more sub-nodes are present
>>  for the DomU kernel and ramdisk.
>>  diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 92a6c509e5c5..be350b28b588 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -788,6 +788,12 @@ fail:
>>      return rc;
>>  }
>>  +unsigned int
>> +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
>> +{
>> +    return config->arch.cpupool_id;
>> +}
>> +
> 
> I don't see why this should be arch specific.
> 
>>  void arch_domain_destroy(struct domain *d)
>>  {
>>      /* IOMMU page table is shared with P2M, always call
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 6931c022a2e8..4f239e756775 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -3015,7 +3015,8 @@ static int __init construct_domU(struct domain *d,
>>  void __init create_domUs(void)
>>  {
>>      struct dt_device_node *node;
>> -    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
>> +    const struct dt_device_node *cpupool_node,
>> +                                *chosen = dt_find_node_by_path("/chosen");
>>        BUG_ON(chosen == NULL);
>>      dt_for_each_child_node(chosen, node)
>> @@ -3053,6 +3054,12 @@ void __init create_domUs(void)
>>                                           GUEST_VPL011_SPI - 32 + 1);
>>          }
>>  +        /* Get the optional property domain-cpupool */
>> +        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
>> +        if ( cpupool_node )
>> +            dt_property_read_u32(cpupool_node, "cpupool-id",
>> +                                 &d_cfg.arch.cpupool_id);
>> +
>>          /*
>>           * The variable max_init_domid is initialized with zero, so here it's
>>           * very important to use the pre-increment operator to call
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index ef1812dc1402..3e3cf88c9c82 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -880,6 +880,12 @@ int arch_domain_create(struct domain *d,
>>      return rc;
>>  }
>>  +unsigned int
>> +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
>> +{
>> +    return 0;
>> +}
>> +
>>  void arch_domain_destroy(struct domain *d)
>>  {
>>      if ( is_hvm_domain(d) )
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 2048ebad86ff..d42ca8292025 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -665,6 +665,8 @@ struct domain *domain_create(domid_t domid,
>>        if ( !is_idle_domain(d) )
>>      {
>> +        unsigned int domain_cpupool_id;
>> +
>>          watchdog_domain_init(d);
>>          init_status |= INIT_watchdog;
>>  @@ -698,7 +700,8 @@ struct domain *domain_create(domid_t domid,
>>          if ( !d->pbuf )
>>              goto fail;
>>  -        if ( (err = sched_init_domain(d, 0)) != 0 )
>> +        domain_cpupool_id = arch_get_domain_cpupool_id(config);
>> +        if ( (err = sched_init_domain(d, domain_cpupool_id)) != 0 )
>>              goto fail;
>>            if ( (err = late_hwdom_init(d)) != 0 )
>> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
>> index 94b31511ddea..2c5d1ea7f01a 100644
>> --- a/xen/include/public/arch-arm.h
>> +++ b/xen/include/public/arch-arm.h
>> @@ -321,6 +321,8 @@ struct xen_arch_domainconfig {
>>      uint16_t tee_type;
>>      /* IN */
>>      uint32_t nr_spis;
>> +    /* IN */
>> +    unsigned int cpupool_id;
> 
> As said above: why is this arch specific? Moving it to the common part
> would enable libxl to get rid of having to call xc_cpupool_movedomain()
> in libxl__domain_make().

I’ve put it in the arch-specific struct because it’s only modified by the arm
code, but if you think it’s ok to have it in struct xen_domctl_createdomain,
I don’t see any problem. My knowledge of the tool stack is limited, so I
wasn’t aware of the advantages of that change.

Cheers,
Luca

> 
> 
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>




* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-15 10:15 ` [PATCH 4/5] xen/cpupool: Create different cpupools at boot time Luca Fancellu
  2022-02-15 10:48   ` Juergen Gross
@ 2022-02-16  2:45   ` Stefano Stabellini
  2022-02-16 12:10     ` Luca Fancellu
  1 sibling, 1 reply; 31+ messages in thread
From: Stefano Stabellini @ 2022-02-16  2:45 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: xen-devel, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Juergen Gross,
	Dario Faggioli

On Tue, 15 Feb 2022, Luca Fancellu wrote:
> Introduce an architecture specific way to create different cpupools
> at boot time, this is particularly useful on ARM big.LITTLE system
> where there might be the need to have different cpupools for each type
> of core, but also systems using NUMA can have different cpu pools for
> each node.
> 
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
> 
> Documentation is created to explain the feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
>  xen/arch/arm/Kconfig                   |   9 ++
>  xen/arch/arm/Makefile                  |   1 +
>  xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
>  xen/common/sched/cpupool.c             |   4 +-
>  xen/include/xen/sched.h                |  11 +++
>  6 files changed, 260 insertions(+), 1 deletion(-)
>  create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>  create mode 100644 xen/arch/arm/cpupool.c
> 
> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> new file mode 100644
> index 000000000000..7298b6394332
> --- /dev/null
> +++ b/docs/misc/arm/device-tree/cpupools.txt
> @@ -0,0 +1,118 @@
> +Boot time cpupools
> +==================
> +
> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
> +possible to create cpupools during boot phase by specifying them in the device
> +tree.
> +
> +Cpupool specification nodes shall be direct children of the /chosen node.
> +Each cpupool node contains the following properties:
> +
> +- compatible (mandatory)
> +
> +    Must always include the compatibility string: "xen,cpupool".
> +
> +- cpupool-id (mandatory)
> +
> +    Must be a positive integer number.

Why is cpupool-id mandatory? It looks like it could be generated by Xen.
Or is it actually better to have the user specify it anyway?


> +- cpupool-cpus (mandatory)
> +
> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
> +    device_type = "cpu"), it can't be empty.
> +
> +- cpupool-sched (optional)
> +
> +    Must be a string with the name of a Xen scheduler. It has no effect when
> +    used in conjunction with a cpupool-id equal to zero; in that case the
> +    default Xen scheduler is selected (sched=<...> boot argument).

I don't get why cpupool-id == 0 should trigger a special cpupool-sched
behavior.


> +Constraints
> +===========
> +
> +The cpupool with id zero is implicitly created even if not specified, that pool
> +must have at least one cpu assigned, otherwise Xen will stop.
> +
> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
> +not assigned to any other cpupool.
> +
> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
> +stop.

Thank you for documenting the constraints, but why do we have them?
Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
optional and missing. We could take care of the cpupool-id generation in
Xen and we could also assign the default scheduler everywhere
cpupool-sched is not specified. Maybe I am missing something?

Does cpupool0 have to exist? I guess the answer could be yes, but if it
is specified as the id of one of the pools we are fine; otherwise it could
be automatically generated by Xen.

In any case, I don't think that cpupool0 has to have the default
scheduler?

My suggestion would be:

- make cpupool-id optional
- assign automatically cpupool-ids starting from 0
    - respect cpupool-ids chosen by the user
- if some CPUs are left out (not specified in any pool) add an extra cpupool
    - the extra cpupool doesn't have to be cpupool-id == 0, it could be
      cpupool-id == n
    - the extra cpupool uses the default scheduler

If the user created cpupools in device tree covering all CPUs and also
specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
in the system is cpupool0) then panic. (Assuming that cpupool0 is
required.)


> +Examples
> +========
> +
> +On a system with two types of cores, the following device tree specification
> +will instruct Xen to create two cpupools:
> +
> +- The cpupool with id 0 will have 4 cpus assigned.
> +- The cpupool with id 1 will have 2 cpus assigned.
> +
> +As can be seen from the example, cpupool_a has only two cpus assigned, but
> +since two cpus are left unassigned, they are automatically assigned to the
> +cpupool with id zero. This example works only if hmp-unsafe=1 is passed to the
> +Xen boot arguments; otherwise not all cores will be brought up by Xen and the
> +cpupool creation process will stop Xen.
> +
> +
> +a72_1: cpu@0 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x0>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a72_2: cpu@1 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x1>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_1: cpu@100 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x100>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_2: cpu@101 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x101>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +cpu@102 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x102>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +cpu@103 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x103>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +chosen {
> +
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-id = <0>;
> +        cpupool-cpus = <&a53_1 &a53_2>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-id = <1>;
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = "credit2";
> +    };

Question above notwithstanding, I like it!


> +    [...]
> +
> +};
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index ecfa6822e4d3..64c2879513b7 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -33,6 +33,15 @@ config ACPI
>  	  Advanced Configuration and Power Interface (ACPI) support for Xen is
>  	  an alternative to device tree on ARM64.
>  
> +config BOOT_TIME_CPUPOOLS
> +	bool "Create cpupools at boot time"
> +	depends on ARM
> +	default n
> +	help
> +
> +	  Creates cpupools at boot time and assigns cpus to them. Cpupool
> +	  options can be specified in the device tree.
> +
>  config GICV3
>  	bool "GICv3 driver"
>  	depends on ARM_64 && !NEW_VGIC
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index d0dee10102b6..6165da4e77b4 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -13,6 +13,7 @@ obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>  obj-y += bootfdt.init.o
>  obj-y += cpuerrata.o
>  obj-y += cpufeature.o
> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += cpupool.o
>  obj-y += decode.o
>  obj-y += device.o
>  obj-$(CONFIG_IOREQ_SERVER) += dm.o
> diff --git a/xen/arch/arm/cpupool.c b/xen/arch/arm/cpupool.c
> new file mode 100644
> index 000000000000..a9d5b94635b9
> --- /dev/null
> +++ b/xen/arch/arm/cpupool.c
> @@ -0,0 +1,118 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * xen/arch/arm/cpupool.c
> + *
> + * Code to create cpupools at boot time for arm architecture.
> + *
> + * Copyright (C) 2022 Arm Ltd.
> + */
> +
> +#include <xen/sched.h>
> +
> +static struct cpupool *__initdata pool_cpu_map[NR_CPUS];
> +
> +void __init arch_allocate_cpupools(const cpumask_t *cpu_online_map)
> +{
> +    const struct dt_device_node *chosen, *node;
> +    unsigned int cpu_num, cpupool0_cpu_count = 0;
> +    cpumask_t cpus_to_assign;
> +
> +    chosen = dt_find_node_by_path("/chosen");
> +    if ( !chosen )
> +        return;
> +
> +    cpumask_copy(&cpus_to_assign, cpu_online_map);
> +
> +    dt_for_each_child_node(chosen, node)
> +    {
> +        const struct dt_device_node *cpu_node;
> +        unsigned int pool_id;
> +        int i = 0, sched_id = -1;
> +        const char* scheduler_name;
> +        struct cpupool *pool = cpupool0;
> +
> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
> +            continue;
> +
> +        if ( !dt_property_read_u32(node, "cpupool-id", &pool_id) )
> +            panic("Missing cpupool-id property!\n");
> +
> +        if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) )
> +        {
> +            sched_id = sched_get_id_by_name(scheduler_name);
> +            if ( sched_id < 0 )
> +                panic("Scheduler %s does not exist!\n", scheduler_name);
> +        }
> +
> +        if ( pool_id )
> +        {
> +            pool = cpupool_create_pool(pool_id, sched_id);
> +            if ( !pool )
> +                panic("Error creating pool id %u!\n", pool_id);
> +        }
> +
> +        cpu_node = dt_parse_phandle(node, "cpupool-cpus", 0);
> +        if ( !cpu_node )
> +            panic("Missing or empty cpupool-cpus property!\n");
> +
> +        while ( cpu_node )
> +        {
> +            register_t cpu_reg;
> +            const __be32 *prop;
> +
> +            prop = dt_get_property(cpu_node, "reg", NULL);
> +            if ( !prop )
> +                panic("cpupool-cpus pointed node has no reg property!\n");
> +
> +            cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
> +
> +            /* Check if the cpu is online and in the set to be assigned */
> +            for_each_cpu ( cpu_num, &cpus_to_assign )
> +                if ( cpu_logical_map(cpu_num) == cpu_reg )
> +                    break;
> +
> +            if ( cpu_num >= nr_cpu_ids )
> +                panic("Cpu found in %s is not online or it's assigned twice!\n",
> +                      dt_node_name(node));
> +
> +            pool_cpu_map[cpu_num] = pool;
> +            cpumask_clear_cpu(cpu_num, &cpus_to_assign);
> +
> +            printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-%u.\n",
> +                   cpu_reg, pool_id);
> +
> +            /* Keep track of how many cpus are assigned to Pool-0 */
> +            if ( !pool_id )
> +                cpupool0_cpu_count++;
> +
> +            cpu_node = dt_parse_phandle(node, "cpupool-cpus", ++i);
> +        }
> +    }
> +
> +    /* Assign every non assigned cpu to Pool-0 */
> +    for_each_cpu ( cpu_num, &cpus_to_assign )
> +    {
> +        pool_cpu_map[cpu_num] = cpupool0;
> +        cpupool0_cpu_count++;
> +        printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-0.\n",
> +               cpu_logical_map(cpu_num));
> +    }
> +
> +    if ( !cpupool0_cpu_count )
> +        panic("No cpu assigned to cpupool0!\n");
> +}
> +
> +struct cpupool *__init arch_get_cpupool(unsigned int cpu)
> +{
> +    return pool_cpu_map[cpu];
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index 4da12528d6b9..6013d75e2edd 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -1257,12 +1257,14 @@ static int __init cpupool_init(void)
>      cpupool_put(cpupool0);
>      register_cpu_notifier(&cpu_nfb);
>  
> +    arch_allocate_cpupools(&cpu_online_map);
> +
>      spin_lock(&cpupool_lock);
>  
>      cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
>  
>      for_each_cpu ( cpu, &cpupool_free_cpus )
> -        cpupool_assign_cpu_locked(cpupool0, cpu);
> +        cpupool_assign_cpu_locked(arch_get_cpupool(cpu), cpu);
>  
>      spin_unlock(&cpupool_lock);
>  
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index a67a9eb2fe9d..dda7db2ba51f 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1177,6 +1177,17 @@ extern void dump_runq(unsigned char key);
>  
>  void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
>  
> +#ifdef CONFIG_BOOT_TIME_CPUPOOLS
> +void arch_allocate_cpupools(const cpumask_t *cpu_online_map);
> +struct cpupool *arch_get_cpupool(unsigned int cpu);
> +#else
> +static inline void arch_allocate_cpupools(const cpumask_t *cpu_online_map) {}
> +static inline struct cpupool *arch_get_cpupool(unsigned int cpu)
> +{
> +    return cpupool0;
> +}
> +#endif
> +
>  #endif /* __SCHED_H__ */
>  
>  /*
> -- 
> 2.17.1
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools
  2022-02-15 10:15 ` [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
  2022-02-15 10:56   ` Juergen Gross
@ 2022-02-16  4:01   ` Stefano Stabellini
  1 sibling, 0 replies; 31+ messages in thread
From: Stefano Stabellini @ 2022-02-16  4:01 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: xen-devel, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné

On Tue, 15 Feb 2022, Luca Fancellu wrote:
> Introduce the domain-cpupool property of a xen,domain device tree node. It
> specifies the device tree handle of a xen,cpupool node identifying a cpupool,
> created at boot time, to which the guest will be assigned on creation.
> 
> Add a member to the xen_arch_domainconfig public interface, so the
> XEN_DOMCTL_INTERFACE_VERSION is bumped.
> 
> Update documentation about the property.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  docs/misc/arm/device-tree/booting.txt | 5 +++++
>  xen/arch/arm/domain.c                 | 6 ++++++
>  xen/arch/arm/domain_build.c           | 9 ++++++++-
>  xen/arch/x86/domain.c                 | 6 ++++++
>  xen/common/domain.c                   | 5 ++++-
>  xen/include/public/arch-arm.h         | 2 ++
>  xen/include/public/domctl.h           | 2 +-
>  xen/include/xen/domain.h              | 3 +++
>  8 files changed, 35 insertions(+), 3 deletions(-)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 71895663a4de..0f1f210fa449 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -182,6 +182,11 @@ with the following properties:
>      Both #address-cells and #size-cells need to be specified because
>      both sub-nodes (described shortly) have reg properties.
>  
> +- domain-cpupool
> +
> +    Optional. Handle to a xen,cpupool device tree node that identifies the
> +    cpupool where the guest will be started at boot.
> +
>  Under the "xen,domain" compatible node, one or more sub-nodes are present
>  for the DomU kernel and ramdisk.
>  
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 92a6c509e5c5..be350b28b588 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -788,6 +788,12 @@ fail:
>      return rc;
>  }
>  
> +unsigned int
> +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
> +{
> +    return config->arch.cpupool_id;
> +}
> +
>  void arch_domain_destroy(struct domain *d)
>  {
>      /* IOMMU page table is shared with P2M, always call
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 6931c022a2e8..4f239e756775 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3015,7 +3015,8 @@ static int __init construct_domU(struct domain *d,
>  void __init create_domUs(void)
>  {
>      struct dt_device_node *node;
> -    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
> +    const struct dt_device_node *cpupool_node,
> +                                *chosen = dt_find_node_by_path("/chosen");
>  
>      BUG_ON(chosen == NULL);
>      dt_for_each_child_node(chosen, node)
> @@ -3053,6 +3054,12 @@ void __init create_domUs(void)
>                                           GUEST_VPL011_SPI - 32 + 1);
>          }
>  
> +        /* Get the optional property domain-cpupool */
> +        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
> +        if ( cpupool_node )
> +            dt_property_read_u32(cpupool_node, "cpupool-id",
> +                                 &d_cfg.arch.cpupool_id);

Is this the reason to make "cpupool-id" mandatory in device tree? If so,
I think we can avoid it. Instead, we could:

- generate the cpupool-id in xen/arch/arm/cpupool.c
- keep track of cpupool-id <--> "cpupool name", where "cpupool name"
  is the node name in device tree (sibling node names are unique in
  device tree). (Alternatively we could use the phandle.) We could have
  a __initdata pool_names_map to convert cpupool names to cpupool-ids.
- here retrieve the cpupool_id from the cpupool node name


>          /*
>           * The variable max_init_domid is initialized with zero, so here it's
>           * very important to use the pre-increment operator to call
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index ef1812dc1402..3e3cf88c9c82 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -880,6 +880,12 @@ int arch_domain_create(struct domain *d,
>      return rc;
>  }
>  
> +unsigned int
> +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config)
> +{
> +    return 0;
> +}
> +
>  void arch_domain_destroy(struct domain *d)
>  {
>      if ( is_hvm_domain(d) )
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 2048ebad86ff..d42ca8292025 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -665,6 +665,8 @@ struct domain *domain_create(domid_t domid,
>  
>      if ( !is_idle_domain(d) )
>      {
> +        unsigned int domain_cpupool_id;
> +
>          watchdog_domain_init(d);
>          init_status |= INIT_watchdog;
>  
> @@ -698,7 +700,8 @@ struct domain *domain_create(domid_t domid,
>          if ( !d->pbuf )
>              goto fail;
>  
> -        if ( (err = sched_init_domain(d, 0)) != 0 )
> +        domain_cpupool_id = arch_get_domain_cpupool_id(config);
> +        if ( (err = sched_init_domain(d, domain_cpupool_id)) != 0 )
>              goto fail;
>  
>          if ( (err = late_hwdom_init(d)) != 0 )
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 94b31511ddea..2c5d1ea7f01a 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -321,6 +321,8 @@ struct xen_arch_domainconfig {
>      uint16_t tee_type;
>      /* IN */
>      uint32_t nr_spis;
> +    /* IN */
> +    unsigned int cpupool_id;
>      /*
>       * OUT
>       * Based on the property clock-frequency in the DT timer node.
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index b85e6170b0aa..31ec083cb06e 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -38,7 +38,7 @@
>  #include "hvm/save.h"
>  #include "memory.h"
>  
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
>  
>  /*
>   * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index 160c8dbdab33..fb018871bc17 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -63,6 +63,9 @@ void unmap_vcpu_info(struct vcpu *v);
>  int arch_domain_create(struct domain *d,
>                         struct xen_domctl_createdomain *config);
>  
> +unsigned int
> +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config);
> +
>  void arch_domain_destroy(struct domain *d);
>  
>  void arch_domain_shutdown(struct domain *d);
> -- 
> 2.17.1
> 



* Re: [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools
  2022-02-15 17:48     ` Luca Fancellu
@ 2022-02-16  6:13       ` Juergen Gross
  2022-02-16  8:16         ` Luca Fancellu
  0 siblings, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-16  6:13 UTC (permalink / raw)
  To: Luca Fancellu; +Cc: xen-devel, wei.chen, Wei Liu, Anthony PERARD



On 15.02.22 18:48, Luca Fancellu wrote:
> 
> 
>> On 15 Feb 2022, at 10:33, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 15.02.22 11:15, Luca Fancellu wrote:
>>> With the introduction of boot time cpupools, Xen can create many
>>> different cpupools at boot time other than cpupool with id 0.
>>> Since these newly created cpupools can't have an
>>> entry in Xenstore, create the entry using xen-init-dom0
>>> helper with the usual convention: Pool-<cpupool id>.
>>> Given the change, remove the check for poolid == 0 from
>>> libxl_cpupoolid_to_name(...).
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>>   tools/helpers/xen-init-dom0.c  | 26 +++++++++++++++++++++++++-
>>>   tools/libs/light/libxl_utils.c |  3 +--
>>>   2 files changed, 26 insertions(+), 3 deletions(-)
>>> diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
>>> index c99224a4b607..3539f56faeb0 100644
>>> --- a/tools/helpers/xen-init-dom0.c
>>> +++ b/tools/helpers/xen-init-dom0.c
>>> @@ -43,7 +43,10 @@ int main(int argc, char **argv)
>>>       int rc;
>>>       struct xs_handle *xsh = NULL;
>>>       xc_interface *xch = NULL;
>>> -    char *domname_string = NULL, *domid_string = NULL;
>>> +    char *domname_string = NULL, *domid_string = NULL, *pool_string = NULL;
> 
> Hi Juergen,
> 
>>
>> pool_string seems to be unused.
> 
> I will remove it
> 
>>
>>> +    char pool_path[strlen("/local/pool") + 12], pool_name[strlen("Pool-") + 5];
>>
>> I don't like that. Why don't you use pointers and ...
>>
>>> +    xc_cpupoolinfo_t *xcinfo;
>>> +    unsigned int pool_id = 0;
>>>       libxl_uuid uuid;
>>>         /* Accept 0 or 1 argument */
>>> @@ -114,6 +117,27 @@ int main(int argc, char **argv)
>>>           goto out;
>>>       }
>>>   +    /* Create an entry in xenstore for each cpupool on the system */
>>> +    do {
>>> +        xcinfo = xc_cpupool_getinfo(xch, pool_id);
>>> +        if (xcinfo != NULL) {
>>> +            if (xcinfo->cpupool_id != pool_id)
>>> +                pool_id = xcinfo->cpupool_id;
>>> +            snprintf(pool_path, sizeof(pool_path), "/local/pool/%d/name",
>>> +                     pool_id);
>>> +            snprintf(pool_name, sizeof(pool_name), "Pool-%d", pool_id);
>>
>> ... use asprintf() here for allocating the strings in the needed size?
> 
> Why do you prefer this approach? I was trying to avoid allocation/free
> operations in a loop, which would also need more code to handle the cases in
> which memory is not allocated. With the buffers on the stack we don’t have problems.

My main concerns are the usage of strlen() for sizing an on-stack array,
the duplication of the format strings (once in the arrays definition and
once in the snprintf()), and the mixture of strlen() and constants for
sizing the arrays.

There are actually some errors in your approach for sizing the arrays,
showing how fragile your solution is: you are allowing a "positive
integer number" for a cpupool-id, which could easily need 10 digits,
while your arrays allow only for 5 or 4 digits, depending on the array.

And doing the two asprintf() calls and then checking that both arrays
are not NULL isn't that much code. BTW, your approach is missing the
test that the arrays have been large enough.

The performance of that loop shouldn't be that critical that a few
additional microseconds really hurt, especially as I don't think any
use case will exceed single digit loop iterations.

> 
>>
>>> +            pool_id++;
>>> +            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
>>> +                          strlen(pool_name))) {
>>> +                fprintf(stderr, "cannot set pool name\n");
>>> +                rc = 1;
>>> +            }
>>> +            xc_cpupool_infofree(xch, xcinfo);
>>> +            if (rc)
>>> +                goto out;
>>
>> Moving the call of xc_cpupool_infofree() ahead of the call of xs_write()
>> would drop the need for this last if statement, as you could add the
>> goto to the upper if.
> 
> Yes you are right, it would simplify the code
> 
>>
>>> +        }
>>> +    } while(xcinfo != NULL);
>>> +
>>
>> With doing all of this for being able to assign other domains created
>> at boot to cpupools, shouldn't you add names for other domains than dom0
>> here, too?
> 
> This serie is more about cpupools, maybe I can do that in another commit out
> of this serie.

Fine with me.


Juergen



* Re: [PATCH 2/5] xen/sched: create public function for cpupools creation
  2022-02-15 17:50     ` Luca Fancellu
@ 2022-02-16  6:16       ` Juergen Gross
  0 siblings, 0 replies; 31+ messages in thread
From: Juergen Gross @ 2022-02-16  6:16 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen-devel, Wei Chen, Dario Faggioli, George Dunlap,
	Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini,
	Wei Liu



On 15.02.22 18:50, Luca Fancellu wrote:
> 
> 
>> On 15 Feb 2022, at 10:38, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 15.02.22 11:15, Luca Fancellu wrote:
>>> Create a new public function to create cpupools; it checks for pool id
>>> uniqueness before creating the pool, and can take a scheduler id or
>>> a negative value meaning the default Xen scheduler will be used.
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>
>> with one further question: you are allowing to use another scheduler,
>> but what if someone wants to set non-standard scheduling parameters
>> (e.g. another time slice)?
> 
> I guess for now parameters can be tuned only by xl tool, however it would
> be possible as future work to allow parameters in the device tree for each
> scheduler.

That is basically my concern here: A true dom0less setup won't have the
possibility to use xl...

I don't mind this series not supporting that scheme, but the chosen
syntax for the device tree should support that extension (I have not
looked into that, as I have no detailed knowledge about that topic).
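
For what it's worth, one purely hypothetical shape for such an extension,
just to show the current binding leaves room for it (none of these tuning
properties exist in this series):

```dts
cp1: cpupool1 {
    compatible = "xen,cpupool";
    cpupool-cpus = <&a72_0 &a72_1>;
    cpupool-sched = "credit2";
    /* Hypothetical: per-pool tuning mirroring xl sched-credit2 parameters */
    cpupool-sched-ratelimit-us = <1000>;
};
```

i.e. scheduler parameters would stay optional properties of the same
node, so the syntax proposed here would not need to change later.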


Juergen



* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-15 17:56     ` Luca Fancellu
@ 2022-02-16  6:18       ` Juergen Gross
  2022-02-16 12:37         ` Luca Fancellu
  0 siblings, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-16  6:18 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli



On 15.02.22 18:56, Luca Fancellu wrote:
> 
> 
>> On 15 Feb 2022, at 10:48, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 15.02.22 11:15, Luca Fancellu wrote:
>>> Introduce an architecture specific way to create different cpupools
>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>> where there might be the need to have different cpupools for each type
>>> of core, but also systems using NUMA can have different cpu pools for
>>> each node.
>>> The feature on arm relies on a specification of the cpupools from the
>>> device tree to build pools and assign cpus to them.
>>> Documentation is created to explain the feature.
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>
>> IIRC I suggested to have the core functionality in common code in order
>> to allow using boot time cpupool creation e.g. via commandline for x86,
>> too.
> 
> Yes, however I think a parser to handle everything on the command line would
> be huge due to input sanitisation, and not as easy to use as the DT. However,
> I see Hyperlaunch has plans to use the DT on x86, so I guess it would be ok to
> make this feature common once the DT is available on x86 as well.

Everything not being explicitly specific to Arm should be in common
code. Think of the work in progress for Risc-V.


Juergen



* Re: [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools
  2022-02-16  6:13       ` Juergen Gross
@ 2022-02-16  8:16         ` Luca Fancellu
  0 siblings, 0 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-16  8:16 UTC (permalink / raw)
  To: Juergen Gross; +Cc: Xen-devel, wei.chen, Wei Liu, Anthony PERARD



> On 16 Feb 2022, at 06:13, Juergen Gross <jgross@suse.com> wrote:
> 
> On 15.02.22 18:48, Luca Fancellu wrote:
>>> On 15 Feb 2022, at 10:33, Juergen Gross <jgross@suse.com> wrote:
>>> 
>>> On 15.02.22 11:15, Luca Fancellu wrote:
>>>> With the introduction of boot time cpupools, Xen can create many
>>>> different cpupools at boot time other than cpupool with id 0.
>>>> Since these newly created cpupools can't have an
>>>> entry in Xenstore, create the entry using xen-init-dom0
>>>> helper with the usual convention: Pool-<cpupool id>.
>>>> Given the change, remove the check for poolid == 0 from
>>>> libxl_cpupoolid_to_name(...).
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>> ---
>>>>  tools/helpers/xen-init-dom0.c  | 26 +++++++++++++++++++++++++-
>>>>  tools/libs/light/libxl_utils.c |  3 +--
>>>>  2 files changed, 26 insertions(+), 3 deletions(-)
>>>> diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
>>>> index c99224a4b607..3539f56faeb0 100644
>>>> --- a/tools/helpers/xen-init-dom0.c
>>>> +++ b/tools/helpers/xen-init-dom0.c
>>>> @@ -43,7 +43,10 @@ int main(int argc, char **argv)
>>>>      int rc;
>>>>      struct xs_handle *xsh = NULL;
>>>>      xc_interface *xch = NULL;
>>>> -    char *domname_string = NULL, *domid_string = NULL;
>>>> +    char *domname_string = NULL, *domid_string = NULL, *pool_string = NULL;
>> Hi Juergen,
>>> 
>>> pool_string seems to be unused.
>> I will remove it
>>> 
>>>> +    char pool_path[strlen("/local/pool") + 12], pool_name[strlen("Pool-") + 5];
>>> 
>>> I don't like that. Why don't you use pointers and ...
>>> 
>>>> +    xc_cpupoolinfo_t *xcinfo;
>>>> +    unsigned int pool_id = 0;
>>>>      libxl_uuid uuid;
>>>>        /* Accept 0 or 1 argument */
>>>> @@ -114,6 +117,27 @@ int main(int argc, char **argv)
>>>>          goto out;
>>>>      }
>>>>  +    /* Create an entry in xenstore for each cpupool on the system */
>>>> +    do {
>>>> +        xcinfo = xc_cpupool_getinfo(xch, pool_id);
>>>> +        if (xcinfo != NULL) {
>>>> +            if (xcinfo->cpupool_id != pool_id)
>>>> +                pool_id = xcinfo->cpupool_id;
>>>> +            snprintf(pool_path, sizeof(pool_path), "/local/pool/%d/name",
>>>> +                     pool_id);
>>>> +            snprintf(pool_name, sizeof(pool_name), "Pool-%d", pool_id);
>>> 
>>> ... use asprintf() here for allocating the strings in the needed size?
>> Why do you prefer this approach? I was trying to avoid allocation/free
>> operations in a loop, which would also need more code to handle the cases in
>> which memory is not allocated. With the buffers on the stack we don’t have problems.
> 
> My main concerns are the usage of strlen() for sizing an on-stack array,
> the duplication of the format strings (once in the arrays definition and
> once in the snprintf()), and the mixture of strlen() and constants for
> sizing the arrays.
> 
> There are actually some errors in your approach for sizing the arrays,
> showing how fragile your solution is: you are allowing a "positive
> integer number" for a cpupool-id, which could easily need 10 digits,
> while your arrays allow only for 5 or 4 digits, depending on the array.
> 
> And doing the two asprintf() calls and then checking that both arrays
> are not NULL isn't that much code. BTW, your approach is missing the
> test that the arrays have been large enough.
> 
> The performance of that loop shouldn't be that critical that a few
> additional microseconds really hurt, especially as I don't think any
> use case will exceed single digit loop iterations.

Hi Juergen,

Thank you for your explanation, it totally makes sense. I took inspiration
from libxl_cpupoolid_to_name in libxl_utils.c when writing this code, but I
see the limitation now.

I will change it to use asprintf().

Cheers,
Luca

> 
>>> 
>>>> +            pool_id++;
>>>> +            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
>>>> +                          strlen(pool_name))) {
>>>> +                fprintf(stderr, "cannot set pool name\n");
>>>> +                rc = 1;
>>>> +            }
>>>> +            xc_cpupool_infofree(xch, xcinfo);
>>>> +            if (rc)
>>>> +                goto out;
>>> 
>>> Moving the call of xc_cpupool_infofree() ahead of the call of xs_write()
>>> would drop the need for this last if statement, as you could add the
>>> goto to the upper if.
>> Yes you are right, it would simplify the code
>>> 
>>>> +        }
>>>> +    } while(xcinfo != NULL);
>>>> +
>>> 
>>> With doing all of this for being able to assign other domains created
>>> at boot to cpupools, shouldn't you add names for other domains than dom0
>>> here, too?
>> This serie is more about cpupools, maybe I can do that in another commit out
>> of this serie.
> 
> Fine with me.
> 
> 
> Juergen




* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16  2:45   ` Stefano Stabellini
@ 2022-02-16 12:10     ` Luca Fancellu
  2022-02-16 12:55       ` Juergen Gross
  2022-02-16 21:58       ` Stefano Stabellini
  0 siblings, 2 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-16 12:10 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Xen-devel, Wei Chen, Julien Grall, Volodymyr Babchuk,
	Bertrand Marquis, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli



> On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Tue, 15 Feb 2022, Luca Fancellu wrote:
>> Introduce an architecture specific way to create different cpupools
>> at boot time, this is particularly useful on ARM big.LITTLE system
>> where there might be the need to have different cpupools for each type
>> of core, but also systems using NUMA can have different cpu pools for
>> each node.
>> 
>> The feature on arm relies on a specification of the cpupools from the
>> device tree to build pools and assign cpus to them.
>> 
>> Documentation is created to explain the feature.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
>> xen/arch/arm/Kconfig                   |   9 ++
>> xen/arch/arm/Makefile                  |   1 +
>> xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
>> xen/common/sched/cpupool.c             |   4 +-
>> xen/include/xen/sched.h                |  11 +++
>> 6 files changed, 260 insertions(+), 1 deletion(-)
>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>> create mode 100644 xen/arch/arm/cpupool.c
>> 
>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>> new file mode 100644
>> index 000000000000..7298b6394332
>> --- /dev/null
>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>> @@ -0,0 +1,118 @@
>> +Boot time cpupools
>> +==================
>> +
>> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
>> +possible to create cpupools during boot phase by specifying them in the device
>> +tree.
>> +
>> +Cpupools specification nodes shall be direct childs of /chosen node.
>> +Each cpupool node contains the following properties:
>> +
>> +- compatible (mandatory)
>> +
>> +    Must always include the compatiblity string: "xen,cpupool".
>> +
>> +- cpupool-id (mandatory)
>> +
>> +    Must be a positive integer number.
> 

Hi Stefano,

Thank you for your review,

> Why is cpupool-id mandatory? It looks like it could be generated by Xen.
> Or is it actually better to have the user specify it anyway?
> 

Yes, at first I thought to generate it automatically, however I needed a structure
to map the id to the cpupool DT node. My doubt here was about the size of that
structure, because the user could even specify a cpupool for each cpu. I could allocate
it dynamically and free it after domUs creation in setup_xen.
What do you think would be the right way?
Alternatively the dom0less guest could specify the id, but I prefer using a phandle to the
xen,cpupool node.

> 
>> +- cpupool-cpus (mandatory)
>> +
>> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
>> +    device_type = "cpu"), it can't be empty.
>> +
>> +- cpupool-sched (optional)
>> +
>> +    Must be a string having the name of a Xen scheduler, it has no effect when
>> +    used in conjunction of a cpupool-id equal to zero, in that case the
>> +    default Xen scheduler is selected (sched=<...> boot argument).
> 
> I don't get why cpupool-id == 0 should trigger a special cpupool-sched
> behavior.

The cpupool with id 0 is embedded in Xen; it has its own special-case handling in
cpupool_create that gives it the default scheduler. I thought it was better to leave it as
it was; the cpupool0 scheduler can still be modified using the sched= boot argument, as before.

> 
> 
>> +Constraints
>> +===========
>> +
>> +The cpupool with id zero is implicitly created even if not specified, that pool
>> +must have at least one cpu assigned, otherwise Xen will stop.
>> +
>> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
>> +not assigned to any other cpupool.
>> +
>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>> +stop.
> 
> Thank you for documenting the constraints, but why do we have them?
> Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
> optional and missing. We could take care of the cpupool-id generation in
> Xen and we could also assign the default scheduler everywhere
> cpupool-sched is not specified. Maybe I am missing something?

Yes, we could make the cpupool-id optional; my doubts are in the first comment above.
Whenever cpupool-sched is not specified, the current behaviour is to use the default scheduler.

> 
> Does cpupool0 has to exist? I guess the answer could be yes, but if it
> is specified as id of one of the pools we are fine, otherwise it could
> be automatically generated by Xen.

Yes, cpupool0 needs to exist, and it is generated by Xen regardless of the DT
specification. Even if the DT contains no xen,cpupool compatible node with
cpupool-id == 0, Xen will generate cpupool0 anyway
(Xen internals are tied to the existence of a cpupool0).

> 
> In any case, I don't think that cpupool0 has to have the default
> scheduler?

Ok, I think I can create a function to assign a scheduler to cpupool0 after its creation;
I would need to test it to be sure I don't hit anything strange.

> 
> My suggestion would be:
> 
> - make cpupool-id optional
> - assign automatically cpupool-ids starting from 0
>    - respect cpupool-ids chosen by the user

Ok, it would start from 1 because cpupool0 always exists

> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>    - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>      cpupool-id == n
>    - the extra cpupool uses the default scheduler

I gave all the unassigned cpus to cpupool0 to preserve the current behaviour, so that
a user who doesn't specify any xen,cpupool node ends up with a system behaving exactly
as if the feature were not enabled.
However I can change it so that, if no xen,cpupool nodes are found, cpus are assigned to
cpupool0, else they are assigned to a new cpupool and...

> 
> If the user created cpupools in device tree covering all CPUs and also
> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
> in the system is cpupool0) then panic. (Assuming that cpupool0 is
> required.)

… panic if cpupool0 has no cpus.


Cheers,
Luca


> 
> 
>> +Examples
>> +========
>> +
>> +A system having two types of core, the following device tree specification will
>> +instruct Xen to have two cpupools:
>> +
>> +- The cpupool with id 0 will have 4 cpus assigned.
>> +- The cpupool with id 1 will have 2 cpus assigned.
>> +
>> +As can be seen from the example, cpupool_a has only two cpus assigned, but since
>> +there are two cpus unassigned, they are automatically assigned to cpupool with
>> +id zero. The following example can work only if hmp-unsafe=1 is passed to Xen
>> +boot arguments, otherwise not all cores will be brought up by Xen and the
>> +cpupool creation process will stop Xen.
>> +
>> +
>> +a72_1: cpu@0 {
>> +        compatible = "arm,cortex-a72";
>> +        reg = <0x0 0x0>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a72_2: cpu@1 {
>> +        compatible = "arm,cortex-a72";
>> +        reg = <0x0 0x1>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_1: cpu@100 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x100>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_2: cpu@101 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x101>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +cpu@102 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x102>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +cpu@103 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x103>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +chosen {
>> +
>> +    cpupool_a {
>> +        compatible = "xen,cpupool";
>> +        cpupool-id = <0>;
>> +        cpupool-cpus = <&a53_1 &a53_2>;
>> +    };
>> +    cpupool_b {
>> +        compatible = "xen,cpupool";
>> +        cpupool-id = <1>;
>> +        cpupool-cpus = <&a72_1 &a72_2>;
>> +        cpupool-sched = "credit2";
>> +    };
> 
> Question above notwithstanding, I like it!
> 
> 
>> +    [...]
>> +
>> +};
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index ecfa6822e4d3..64c2879513b7 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -33,6 +33,15 @@ config ACPI
>> 	  Advanced Configuration and Power Interface (ACPI) support for Xen is
>> 	  an alternative to device tree on ARM64.
>> 
>> +config BOOT_TIME_CPUPOOLS
>> +	bool "Create cpupools at boot time"
>> +	depends on ARM
>> +	default n
>> +	help
>> +
>> +	  Creates cpupools during boot time and assigns cpus to them. Cpupools
>> +	  options can be specified in the device tree.
>> +
>> config GICV3
>> 	bool "GICv3 driver"
>> 	depends on ARM_64 && !NEW_VGIC
>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>> index d0dee10102b6..6165da4e77b4 100644
>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -13,6 +13,7 @@ obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>> obj-y += bootfdt.init.o
>> obj-y += cpuerrata.o
>> obj-y += cpufeature.o
>> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += cpupool.o
>> obj-y += decode.o
>> obj-y += device.o
>> obj-$(CONFIG_IOREQ_SERVER) += dm.o
>> diff --git a/xen/arch/arm/cpupool.c b/xen/arch/arm/cpupool.c
>> new file mode 100644
>> index 000000000000..a9d5b94635b9
>> --- /dev/null
>> +++ b/xen/arch/arm/cpupool.c
>> @@ -0,0 +1,118 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * xen/arch/arm/cpupool.c
>> + *
>> + * Code to create cpupools at boot time for arm architecture.
>> + *
>> + * Copyright (C) 2022 Arm Ltd.
>> + */
>> +
>> +#include <xen/sched.h>
>> +
>> +static struct cpupool *__initdata pool_cpu_map[NR_CPUS];
>> +
>> +void __init arch_allocate_cpupools(const cpumask_t *cpu_online_map)
>> +{
>> +    const struct dt_device_node *chosen, *node;
>> +    unsigned int cpu_num, cpupool0_cpu_count = 0;
>> +    cpumask_t cpus_to_assign;
>> +
>> +    chosen = dt_find_node_by_path("/chosen");
>> +    if ( !chosen )
>> +        return;
>> +
>> +    cpumask_copy(&cpus_to_assign, cpu_online_map);
>> +
>> +    dt_for_each_child_node(chosen, node)
>> +    {
>> +        const struct dt_device_node *cpu_node;
>> +        unsigned int pool_id;
>> +        int i = 0, sched_id = -1;
>> +        const char* scheduler_name;
>> +        struct cpupool *pool = cpupool0;
>> +
>> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
>> +            continue;
>> +
>> +        if ( !dt_property_read_u32(node, "cpupool-id", &pool_id) )
>> +            panic("Missing cpupool-id property!\n");
>> +
>> +        if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) )
>> +        {
>> +            sched_id = sched_get_id_by_name(scheduler_name);
>> +            if ( sched_id < 0 )
>> +                panic("Scheduler %s does not exists!\n", scheduler_name);
>> +        }
>> +
>> +        if ( pool_id )
>> +        {
>> +            pool = cpupool_create_pool(pool_id, sched_id);
>> +            if ( !pool )
>> +                panic("Error creating pool id %u!\n", pool_id);
>> +        }
>> +
>> +        cpu_node = dt_parse_phandle(node, "cpupool-cpus", 0);
>> +        if ( !cpu_node )
>> +            panic("Missing or empty cpupool-cpus property!\n");
>> +
>> +        while ( cpu_node )
>> +        {
>> +            register_t cpu_reg;
>> +            const __be32 *prop;
>> +
>> +            prop = dt_get_property(cpu_node, "reg", NULL);
>> +            if ( !prop )
>> +                panic("cpupool-cpus pointed node has no reg property!\n");
>> +
>> +            cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
>> +
>> +            /* Check if the cpu is online and in the set to be assigned */
>> +            for_each_cpu ( cpu_num, &cpus_to_assign )
>> +                if ( cpu_logical_map(cpu_num) == cpu_reg )
>> +                    break;
>> +
>> +            if ( cpu_num >= nr_cpu_ids )
>> +                panic("Cpu found in %s is not online or it's assigned twice!\n",
>> +                      dt_node_name(node));
>> +
>> +            pool_cpu_map[cpu_num] = pool;
>> +            cpumask_clear_cpu(cpu_num, &cpus_to_assign);
>> +
>> +            printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-%u.\n",
>> +                   cpu_reg, pool_id);
>> +
>> +            /* Keep track of how many cpus are assigned to Pool-0 */
>> +            if ( !pool_id )
>> +                cpupool0_cpu_count++;
>> +
>> +            cpu_node = dt_parse_phandle(node, "cpupool-cpus", ++i);
>> +        }
>> +    }
>> +
>> +    /* Assign every non assigned cpu to Pool-0 */
>> +    for_each_cpu ( cpu_num, &cpus_to_assign )
>> +    {
>> +        pool_cpu_map[cpu_num] = cpupool0;
>> +        cpupool0_cpu_count++;
>> +        printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-0.\n",
>> +               cpu_logical_map(cpu_num));
>> +    }
>> +
>> +    if ( !cpupool0_cpu_count )
>> +        panic("No cpu assigned to cpupool0!\n");
>> +}
>> +
>> +struct cpupool *__init arch_get_cpupool(unsigned int cpu)
>> +{
>> +    return pool_cpu_map[cpu];
>> +}
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
>> index 4da12528d6b9..6013d75e2edd 100644
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -1257,12 +1257,14 @@ static int __init cpupool_init(void)
>>     cpupool_put(cpupool0);
>>     register_cpu_notifier(&cpu_nfb);
>> 
>> +    arch_allocate_cpupools(&cpu_online_map);
>> +
>>     spin_lock(&cpupool_lock);
>> 
>>     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
>> 
>>     for_each_cpu ( cpu, &cpupool_free_cpus )
>> -        cpupool_assign_cpu_locked(cpupool0, cpu);
>> +        cpupool_assign_cpu_locked(arch_get_cpupool(cpu), cpu);
>> 
>>     spin_unlock(&cpupool_lock);
>> 
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index a67a9eb2fe9d..dda7db2ba51f 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -1177,6 +1177,17 @@ extern void dump_runq(unsigned char key);
>> 
>> void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
>> 
>> +#ifdef CONFIG_BOOT_TIME_CPUPOOLS
>> +void arch_allocate_cpupools(const cpumask_t *cpu_online_map);
>> +struct cpupool *arch_get_cpupool(unsigned int cpu);
>> +#else
>> +static inline void arch_allocate_cpupools(const cpumask_t *cpu_online_map) {}
>> +static inline struct cpupool *arch_get_cpupool(unsigned int cpu)
>> +{
>> +    return cpupool0;
>> +}
>> +#endif
>> +
>> #endif /* __SCHED_H__ */
>> 
>> /*
>> -- 
>> 2.17.1



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16  6:18       ` Juergen Gross
@ 2022-02-16 12:37         ` Luca Fancellu
  2022-02-16 16:32           ` Dario Faggioli
  0 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-16 12:37 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli



> On 16 Feb 2022, at 06:18, Juergen Gross <jgross@suse.com> wrote:
> 
> On 15.02.22 18:56, Luca Fancellu wrote:
>>> On 15 Feb 2022, at 10:48, Juergen Gross <jgross@suse.com> wrote:
>>> 
>>> On 15.02.22 11:15, Luca Fancellu wrote:
>>>> Introduce an architecture specific way to create different cpupools
>>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>>> where there might be the need to have different cpupools for each type
>>>> of core, but also systems using NUMA can have different cpu pools for
>>>> each node.
>>>> The feature on arm relies on a specification of the cpupools from the
>>>> device tree to build pools and assign cpus to them.
>>>> Documentation is created to explain the feature.
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> 
>>> IIRC I suggested to have the core functionality in common code in order
>>> to allow using boot time cpupool creation e.g. via commandline for x86,
>>> too.
>> Yes, however I think a parser handling everything by command line would be
>> huge due to input sanitisation, and not as easy to use as the DT. However, I see
>> Hyperlaunch has plans to use DT on x86, so I guess it would be ok to make
>> this feature common once the DT is available on x86 as well.
> 
> Everything not being explicitly specific to Arm should be in common
> code. Think of the work in progress for Risc-V.

Ok, I will put it in common code and make the feature depend on HAS_DEVICE_TREE.
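
A possible shape for the common-code Kconfig entry, per the direction above — this is
a sketch only; the final option placement and wording may differ:

```kconfig
config BOOT_TIME_CPUPOOLS
	bool "Create cpupools at boot time"
	depends on HAS_DEVICE_TREE
	help
	  Creates cpupools during boot time and assigns cpus to them. Cpupool
	  options can be specified in the device tree.
```

Depending on HAS_DEVICE_TREE (instead of ARM) lets any architecture with DT
support, e.g. RISC-V or x86 with Hyperlaunch, enable the feature.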

Thank you.

Cheers,
Luca

> 
> 
> Juergen



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16 12:10     ` Luca Fancellu
@ 2022-02-16 12:55       ` Juergen Gross
  2022-02-16 13:01         ` Luca Fancellu
  2022-02-16 21:58       ` Stefano Stabellini
  1 sibling, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-16 12:55 UTC (permalink / raw)
  To: Luca Fancellu, Stefano Stabellini
  Cc: Xen-devel, Wei Chen, Julien Grall, Volodymyr Babchuk,
	Bertrand Marquis, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Dario Faggioli



On 16.02.22 13:10, Luca Fancellu wrote:
> 
> 
>> On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> On Tue, 15 Feb 2022, Luca Fancellu wrote:
>>> Introduce an architecture specific way to create different cpupools
>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>> where there might be the need to have different cpupools for each type
>>> of core, but also systems using NUMA can have different cpu pools for
>>> each node.
>>>
>>> The feature on arm relies on a specification of the cpupools from the
>>> device tree to build pools and assign cpus to them.
>>>
>>> Documentation is created to explain the feature.
>>>
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
>>> xen/arch/arm/Kconfig                   |   9 ++
>>> xen/arch/arm/Makefile                  |   1 +
>>> xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
>>> xen/common/sched/cpupool.c             |   4 +-
>>> xen/include/xen/sched.h                |  11 +++
>>> 6 files changed, 260 insertions(+), 1 deletion(-)
>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>> create mode 100644 xen/arch/arm/cpupool.c
>>>
>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>> new file mode 100644
>>> index 000000000000..7298b6394332
>>> --- /dev/null
>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>> @@ -0,0 +1,118 @@
>>> +Boot time cpupools
>>> +==================
>>> +
>>> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
>>> +possible to create cpupools during boot phase by specifying them in the device
>>> +tree.
>>> +
>>> +Cpupools specification nodes shall be direct childs of /chosen node.
>>> +Each cpupool node contains the following properties:
>>> +
>>> +- compatible (mandatory)
>>> +
>>> +    Must always include the compatiblity string: "xen,cpupool".
>>> +
>>> +- cpupool-id (mandatory)
>>> +
>>> +    Must be a positive integer number.
>>
> 
> Hi Stefano,
> 
> Thank you for your review,
> 
>> Why is cpupool-id mandatory? It looks like it could be generated by Xen.
>> Or is it actually better to have the user specify it anyway?
>>
> 
> Yes at first I thought to automatically generate that, however I needed a structure
> to map the id to the cpupool DT node. Here my doubt was about the size of the
> structure, because the user could even specify a cpupool for each cpu. I could allocate
> It dynamically and free it after domUs creation in setup_xen.
> What do you think could be the right way?
> Or the dom0less guest could specify the id, but I like it more when using a phandle to the
> Xen,cpupool node.
> 
>>
>>> +- cpupool-cpus (mandatory)
>>> +
>>> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
>>> +    device_type = "cpu"), it can't be empty.
>>> +
>>> +- cpupool-sched (optional)
>>> +
>>> +    Must be a string having the name of a Xen scheduler, it has no effect when
>>> +    used in conjunction of a cpupool-id equal to zero, in that case the
>>> +    default Xen scheduler is selected (sched=<...> boot argument).
>>
>> I don't get why cpupool-id == 0 should trigger a special cpupool-sched
>> behavior.
> 
> Cpupool with id 0 is embedded in Xen, it has its own special case handling in cpupool_create
> that is giving it the default scheduler. I thought it was better to leave it as it was, however the
> cpupool0 scheduler can be modified using sched= boot args as it was before.
> 
>>
>>
>>> +Constraints
>>> +===========
>>> +
>>> +The cpupool with id zero is implicitly created even if not specified, that pool
>>> +must have at least one cpu assigned, otherwise Xen will stop.
>>> +
>>> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
>>> +not assigned to any other cpupool.
>>> +
>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>> +stop.
>>
>> Thank you for documenting the constraints, but why do we have them?
>> Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
>> optional and missing. We could take care of the cpupool-id generation in
>> Xen and we could also assign the default scheduler everywhere
>> cpupool-sched is not specified. Maybe I am missing something?
> 
> Yes we could make the cpupool-id optional, my doubts are in the fist comment above.
> Whenever the cpupool-sched is not specified, the current behaviour is to use the default scheduler.
> 
>>
>> Does cpupool0 has to exist? I guess the answer could be yes, but if it
>> is specified as id of one of the pools we are fine, otherwise it could
>> be automatically generated by Xen.
> 
> Yes cpupool0 needs to exists, however it is still generated by Xen regardless of the DT
> specifications. In fact you could not specify in the DT any xen,cpupool compatible node
> with the cpupool-id == 0 and Xen will generate the cpupool0 anyway
> (Xen internals are tied with the existence of a cpupool0).
> 
>>
>> In any case, I don't think that cpupool0 has to have the default
>> scheduler?
> 
> Ok I think I can create a function to assign a scheduler to the cpupool0 after its creation,
> I would need to test it to be sure I don’t find something strange.
> 
>>
>> My suggestion would be:
>>
>> - make cpupool-id optional
>> - assign automatically cpupool-ids starting from 0
>>     - respect cpupool-ids chosen by the user
> 
> Ok, it would start from 1 because cpupool0 always exists
> 
>> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>>     - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>>       cpupool-id == n
>>     - the extra cpupool uses the default scheduler
> 
> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
> a user that doesn’t specify any xen,cpupool node ends up in a system reflecting the
> current behaviour as the feature is not enabled.
> However I can say, if no xen,cpupool nodes are found then assign cpus to cpupool0,
> else assign them to a new cpupool and...
> 
>>
>> If the user created cpupools in device tree covering all CPUs and also
>> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
>> in the system is cpupool0) then panic. (Assuming that cpupool0 is
>> required.)
> 
> … panic if cpupool0 has no cpus.

Today cpu 0 is always a member of cpupool0, and changing that might be
hard.


Juergen


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16 12:55       ` Juergen Gross
@ 2022-02-16 13:01         ` Luca Fancellu
  2022-02-16 13:07           ` Juergen Gross
  0 siblings, 1 reply; 31+ messages in thread
From: Luca Fancellu @ 2022-02-16 13:01 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Xen-devel, Wei Chen, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli



> On 16 Feb 2022, at 12:55, Juergen Gross <jgross@suse.com> wrote:
> 
> On 16.02.22 13:10, Luca Fancellu wrote:
>>> On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> 
>>> On Tue, 15 Feb 2022, Luca Fancellu wrote:
>>>> Introduce an architecture specific way to create different cpupools
>>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>>> where there might be the need to have different cpupools for each type
>>>> of core, but also systems using NUMA can have different cpu pools for
>>>> each node.
>>>> 
>>>> The feature on arm relies on a specification of the cpupools from the
>>>> device tree to build pools and assign cpus to them.
>>>> 
>>>> Documentation is created to explain the feature.
>>>> 
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>> ---
>>>> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
>>>> xen/arch/arm/Kconfig                   |   9 ++
>>>> xen/arch/arm/Makefile                  |   1 +
>>>> xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
>>>> xen/common/sched/cpupool.c             |   4 +-
>>>> xen/include/xen/sched.h                |  11 +++
>>>> 6 files changed, 260 insertions(+), 1 deletion(-)
>>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>> create mode 100644 xen/arch/arm/cpupool.c
>>>> 
>>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>>> new file mode 100644
>>>> index 000000000000..7298b6394332
>>>> --- /dev/null
>>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>>> @@ -0,0 +1,118 @@
>>>> +Boot time cpupools
>>>> +==================
>>>> +
>>>> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
>>>> +possible to create cpupools during boot phase by specifying them in the device
>>>> +tree.
>>>> +
>>>> +Cpupools specification nodes shall be direct childs of /chosen node.
>>>> +Each cpupool node contains the following properties:
>>>> +
>>>> +- compatible (mandatory)
>>>> +
>>>> +    Must always include the compatiblity string: "xen,cpupool".
>>>> +
>>>> +- cpupool-id (mandatory)
>>>> +
>>>> +    Must be a positive integer number.
>>> 
>> Hi Stefano,
>> Thank you for your review,
>>> Why is cpupool-id mandatory? It looks like it could be generated by Xen.
>>> Or is it actually better to have the user specify it anyway?
>>> 
>> Yes at first I thought to automatically generate that, however I needed a structure
>> to map the id to the cpupool DT node. Here my doubt was about the size of the
>> structure, because the user could even specify a cpupool for each cpu. I could allocate
>> It dynamically and free it after domUs creation in setup_xen.
>> What do you think could be the right way?
>> Or the dom0less guest could specify the id, but I like it more when using a phandle to the
>> Xen,cpupool node.
>>> 
>>>> +- cpupool-cpus (mandatory)
>>>> +
>>>> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
>>>> +    device_type = "cpu"), it can't be empty.
>>>> +
>>>> +- cpupool-sched (optional)
>>>> +
>>>> +    Must be a string having the name of a Xen scheduler, it has no effect when
>>>> +    used in conjunction of a cpupool-id equal to zero, in that case the
>>>> +    default Xen scheduler is selected (sched=<...> boot argument).
>>> 
>>> I don't get why cpupool-id == 0 should trigger a special cpupool-sched
>>> behavior.
>> Cpupool with id 0 is embedded in Xen, it has its own special case handling in cpupool_create
>> that is giving it the default scheduler. I thought it was better to leave it as it was, however the
>> cpupool0 scheduler can be modified using sched= boot args as it was before.
>>> 
>>> 
>>>> +Constraints
>>>> +===========
>>>> +
>>>> +The cpupool with id zero is implicitly created even if not specified, that pool
>>>> +must have at least one cpu assigned, otherwise Xen will stop.
>>>> +
>>>> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
>>>> +not assigned to any other cpupool.
>>>> +
>>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>>> +stop.
>>> 
>>> Thank you for documenting the constraints, but why do we have them?
>>> Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
>>> optional and missing. We could take care of the cpupool-id generation in
>>> Xen and we could also assign the default scheduler everywhere
>>> cpupool-sched is not specified. Maybe I am missing something?
>> Yes we could make the cpupool-id optional, my doubts are in the fist comment above.
>> Whenever the cpupool-sched is not specified, the current behaviour is to use the default scheduler.
>>> 
>>> Does cpupool0 has to exist? I guess the answer could be yes, but if it
>>> is specified as id of one of the pools we are fine, otherwise it could
>>> be automatically generated by Xen.
>> Yes cpupool0 needs to exists, however it is still generated by Xen regardless of the DT
>> specifications. In fact you could not specify in the DT any xen,cpupool compatible node
>> with the cpupool-id == 0 and Xen will generate the cpupool0 anyway
>> (Xen internals are tied with the existence of a cpupool0).
>>> 
>>> In any case, I don't think that cpupool0 has to have the default
>>> scheduler?
>> Ok I think I can create a function to assign a scheduler to the cpupool0 after its creation,
>> I would need to test it to be sure I don’t find something strange.
>>> 
>>> My suggestion would be:
>>> 
>>> - make cpupool-id optional
>>> - assign automatically cpupool-ids starting from 0
>>>    - respect cpupool-ids chosen by the user
>> Ok, it would start from 1 because cpupool0 always exists
>>> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>>>    - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>>>      cpupool-id == n
>>>    - the extra cpupool uses the default scheduler
>> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
>> a user that doesn’t specify any xen,cpupool node ends up in a system reflecting the
>> current behaviour as the feature is not enabled.
>> However I can say, if no xen,cpupool nodes are found then assign cpus to cpupool0,
>> else assign them to a new cpupool and...
>>> 
>>> If the user created cpupools in device tree covering all CPUs and also
>>> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
>>> in the system is cpupool0) then panic. (Assuming that cpupool0 is
>>> required.)
>> … panic if cpupool0 has no cpus.
> 
> Today cpu 0 is always member of cpupool0, and changing that might be
> hard.

Oh, are you sure? I did some tests in the past for this series using a Juno board,
giving cpupool0 only the a72 cores and putting the a53 cores in another cpupool; my Juno
firmware configuration makes Xen take one of the a53 cores as the boot cpu (cpu 0),
and it was working fine. But it was a long time ago, so I would need to try it again.

> 
> 
> Juergen



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16 13:01         ` Luca Fancellu
@ 2022-02-16 13:07           ` Juergen Gross
  2022-02-17  8:59             ` Luca Fancellu
  0 siblings, 1 reply; 31+ messages in thread
From: Juergen Gross @ 2022-02-16 13:07 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Stefano Stabellini, Xen-devel, Wei Chen, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli



On 16.02.22 14:01, Luca Fancellu wrote:
> 
> 
>> On 16 Feb 2022, at 12:55, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 16.02.22 13:10, Luca Fancellu wrote:
>>>> On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>
>>>> On Tue, 15 Feb 2022, Luca Fancellu wrote:
>>>>> Introduce an architecture specific way to create different cpupools
>>>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>>>> where there might be the need to have different cpupools for each type
>>>>> of core, but also systems using NUMA can have different cpu pools for
>>>>> each node.
>>>>>
>>>>> The feature on arm relies on a specification of the cpupools from the
>>>>> device tree to build pools and assign cpus to them.
>>>>>
>>>>> Documentation is created to explain the feature.
>>>>>
>>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>>> ---
>>>>> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
>>>>> xen/arch/arm/Kconfig                   |   9 ++
>>>>> xen/arch/arm/Makefile                  |   1 +
>>>>> xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
>>>>> xen/common/sched/cpupool.c             |   4 +-
>>>>> xen/include/xen/sched.h                |  11 +++
>>>>> 6 files changed, 260 insertions(+), 1 deletion(-)
>>>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>>> create mode 100644 xen/arch/arm/cpupool.c
>>>>>
>>>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>>>> new file mode 100644
>>>>> index 000000000000..7298b6394332
>>>>> --- /dev/null
>>>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>>>> @@ -0,0 +1,118 @@
>>>>> +Boot time cpupools
>>>>> +==================
>>>>> +
>>>>> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
>>>>> +possible to create cpupools during boot phase by specifying them in the device
>>>>> +tree.
>>>>> +
>>>>> +Cpupool specification nodes shall be direct children of the /chosen node.
>>>>> +Each cpupool node contains the following properties:
>>>>> +
>>>>> +- compatible (mandatory)
>>>>> +
>>>>> +    Must always include the compatibility string: "xen,cpupool".
>>>>> +
>>>>> +- cpupool-id (mandatory)
>>>>> +
>>>>> +    Must be a positive integer number.
>>>>
>>> Hi Stefano,
>>> Thank you for your review,
>>>> Why is cpupool-id mandatory? It looks like it could be generated by Xen.
>>>> Or is it actually better to have the user specify it anyway?
>>>>
>>> Yes, at first I thought to generate that automatically; however, I needed a structure
>>> to map the id to the cpupool DT node. My doubt here was about the size of the
>>> structure, because the user could even specify a cpupool for each cpu. I could
>>> allocate it dynamically and free it after domU creation in setup_xen.
>>> What do you think would be the right way?
>>> Or the dom0less guest could specify the id, but I prefer using a phandle to the
>>> xen,cpupool node.
>>>>
>>>>> +- cpupool-cpus (mandatory)
>>>>> +
>>>>> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
>>>>> +    device_type = "cpu"), it can't be empty.
>>>>> +
>>>>> +- cpupool-sched (optional)
>>>>> +
>>>>> +    Must be a string with the name of a Xen scheduler. It has no effect when
>>>>> +    used in conjunction with a cpupool-id equal to zero; in that case the
>>>>> +    default Xen scheduler is selected (sched=<...> boot argument).
>>>>
>>>> I don't get why cpupool-id == 0 should trigger a special cpupool-sched
>>>> behavior.
>>> The cpupool with id 0 is embedded in Xen; it has its own special-case handling in
>>> cpupool_create, which gives it the default scheduler. I thought it was better to
>>> leave it as it was; however, the cpupool0 scheduler can still be modified using
>>> the sched= boot argument, as before.
>>>>
>>>>
>>>>> +Constraints
>>>>> +===========
>>>>> +
>>>>> +The cpupool with id zero is implicitly created even if not specified; that pool
>>>>> +must have at least one cpu assigned, otherwise Xen will stop.
>>>>> +
>>>>> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
>>>>> +not assigned to any other cpupool.
>>>>> +
>>>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>>>> +stop.
>>>>
>>>> Thank you for documenting the constraints, but why do we have them?
>>>> Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
>>>> optional and missing. We could take care of the cpupool-id generation in
>>>> Xen and we could also assign the default scheduler everywhere
>>>> cpupool-sched is not specified. Maybe I am missing something?
>>> Yes, we could make the cpupool-id optional; my doubts are in the first comment above.
>>> Whenever cpupool-sched is not specified, the current behaviour is to use the default scheduler.
>>>>
>>>> Does cpupool0 have to exist? I guess the answer could be yes, but if it
>>>> is specified as the id of one of the pools we are fine; otherwise it could
>>>> be automatically generated by Xen.
>>> Yes, cpupool0 needs to exist; however, it is still generated by Xen regardless of the DT
>>> specification. In fact, you could not specify in the DT any xen,cpupool compatible node
>>> with cpupool-id == 0 and Xen would generate cpupool0 anyway
>>> (Xen internals are tied to the existence of a cpupool0).
>>>>
>>>> In any case, I don't think that cpupool0 has to have the default
>>>> scheduler?
>>> Ok I think I can create a function to assign a scheduler to the cpupool0 after its creation,
>>> I would need to test it to be sure I don’t find something strange.
>>>>
>>>> My suggestion would be:
>>>>
>>>> - make cpupool-id optional
>>>> - assign automatically cpupool-ids starting from 0
>>>>     - respect cpupool-ids chosen by the user
>>> Ok, it would start from 1 because cpupool0 always exists
>>>> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>>>>     - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>>>>       cpupool-id == n
>>>>     - the extra cpupool uses the default scheduler
>>> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
>>> a user who doesn’t specify any xen,cpupool node ends up with a system matching the
>>> current behaviour, as if the feature were not enabled.
>>> However I can say: if no xen,cpupool nodes are found, then assign the cpus to
>>> cpupool0; otherwise assign them to a new cpupool and...
>>>>
>>>> If the user created cpupools in device tree covering all CPUs and also
>>>> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
>>>> in the system is cpupool0) then panic. (Assuming that cpupool0 is
>>>> required.)
>>> … panic if cpupool0 has no cpus.
>>
>> Today cpu 0 is always member of cpupool0, and changing that might be
>> hard.
> 
> Oh, are you sure? I did some tests in the past for this series using a Juno board,
> giving cpupool0 only the a72 cores and putting the a53 cores in another cpupool;
> my Juno firmware configuration makes the boot cpu (cpu 0) one of the a53s, and it
> was working fine. But that was a long time ago, so I would need to try it again.

Maybe on Arm the restrictions are less problematic, but I wouldn't bet
that all operations (like moving cpus between cpupools, cpu hotplug,
destroying cpupools, shutdown of the host, ...) are working in a sane
way.


Juergen



* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16 12:37         ` Luca Fancellu
@ 2022-02-16 16:32           ` Dario Faggioli
  2022-02-16 16:45             ` Luca Fancellu
  0 siblings, 1 reply; 31+ messages in thread
From: Dario Faggioli @ 2022-02-16 16:32 UTC (permalink / raw)
  To: Luca Fancellu, Juergen Gross
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu


On Wed, 2022-02-16 at 12:37 +0000, Luca Fancellu wrote:
> > On 16 Feb 2022, at 06:18, Juergen Gross <jgross@suse.com> wrote:
> > On 15.02.22 18:56, Luca Fancellu wrote:
> > > > 
> > > Yes, however I think a parser handling everything via the command
> > > line would be huge due to input sanitisation, and not as easy to
> > > use as the DT. However, I see Hyperlaunch plans to use DT on x86,
> > > so I guess it would be ok to make this feature common once the DT
> > > is available on x86 as well.
> > 
> > Everything not being explicitly specific to Arm should be in common
> > code. Think of the work in progress for Risc-V.
> 
> Ok I will put it in common and I will make the feature depend on
> HAS_DEVICE_TREE.
> 
Can't we split the DT parsing logic & code, which is fine to be either
ARM or HAS_DEVICE_TREE specific, and the actual core logic, which can
be common and not gated by any particular condition?

This way, if one wants to add boot-time cpupool to x86 via command
line, he/she would have to implement "only" the command line parsing.

I've looked at the code, and I appreciate that it's not trivial and
that it's probably impossible to achieve 100% decoupling (at least not
without adding a lot more complexity)... But any step we can make in
that direction would be, IMO, a good investment.
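As a sketch of what that decoupling could look like (Python pseudocode; every name here is invented for illustration, and on Arm the parsing half would walk the xen,cpupool DT nodes instead of a command line):

```python
def parse_cmdline_pools(arg):
    """Hypothetical x86 front end: turn e.g. "0-1:credit2;2-3" into a
    neutral list of pool descriptions. An Arm equivalent would build the
    same list from the device tree."""
    pools = []
    for spec in arg.split(";"):
        cpus_part, _, sched = spec.partition(":")
        lo, _, hi = cpus_part.partition("-")
        cpus = set(range(int(lo), int(hi or lo) + 1))
        pools.append({"cpus": cpus, "sched": sched or None})
    return pools

def create_boot_cpupools(pool_descs, create_pool):
    """Common core logic: format-agnostic, it only ever sees the neutral
    list of descriptions, never the DT or the command line."""
    for pool_id, desc in enumerate(pool_descs, start=1):
        create_pool(pool_id, desc["cpus"], desc["sched"])

created = []
create_boot_cpupools(parse_cmdline_pools("0-1:credit2;2-3"),
                     lambda i, cpus, sched: created.append((i, cpus, sched)))
assert created == [(1, {0, 1}, "credit2"), (2, {2, 3}, None)]
```

The common half never touches either input format, so adding an x86 command-line front end later would only mean writing the parser.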

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16 16:32           ` Dario Faggioli
@ 2022-02-16 16:45             ` Luca Fancellu
  0 siblings, 0 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-16 16:45 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Juergen Gross, Xen-devel, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu



> On 16 Feb 2022, at 16:32, Dario Faggioli <dfaggioli@suse.com> wrote:
> 
> On Wed, 2022-02-16 at 12:37 +0000, Luca Fancellu wrote:
>>> On 16 Feb 2022, at 06:18, Juergen Gross <jgross@suse.com> wrote:
>>> On 15.02.22 18:56, Luca Fancellu wrote:
>>>>> 
>>>> Yes, however I think a parser handling everything via the command
>>>> line would be huge due to input sanitisation, and not as easy to
>>>> use as the DT. However, I see Hyperlaunch plans to use DT on x86,
>>>> so I guess it would be ok to make this feature common once the DT
>>>> is available on x86 as well.
>>> 
>>> Everything not being explicitly specific to Arm should be in common
>>> code. Think of the work in progress for Risc-V.
>> 
>> Ok I will put it in common and I will make the feature depend on
>> HAS_DEVICE_TREE.
>> 
> Can't we split the DT parsing logic & code, which is fine to be either
> ARM or HAS_DEVICE_TREE specific, and the actual core logic, which can
> be common and not gated by any particular condition?
> 
> This way, if one wants to add boot-time cpupool to x86 via command
> line, he/she would have to implement "only" the command line parsing.
> 
> I've looked at the code, and I appreciate that it's not trivial and
> that it's probably impossible to achieve 100% decoupling (at least not
> without adding a lot more complexity)... But any step we can make in
> that direction would be, IMO, a good investment.
> 

Hi Dario,

Sure, I will do my best to move in that direction.

Cheers,
Luca

> Regards
> -- 
> Dario Faggioli, Ph.D
> http://about.me/dario.faggioli
> Virtualization Software Engineer
> SUSE Labs, SUSE https://www.suse.com/
> -------------------------------------------------------------------
> <<This happens because _I_ choose it to happen!>> (Raistlin Majere)




* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16 12:10     ` Luca Fancellu
  2022-02-16 12:55       ` Juergen Gross
@ 2022-02-16 21:58       ` Stefano Stabellini
  1 sibling, 0 replies; 31+ messages in thread
From: Stefano Stabellini @ 2022-02-16 21:58 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Stefano Stabellini, Xen-devel, Wei Chen, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Juergen Gross,
	Dario Faggioli


On Wed, 16 Feb 2022, Luca Fancellu wrote:
> > On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Tue, 15 Feb 2022, Luca Fancellu wrote:
> >> Introduce an architecture specific way to create different cpupools
> >> at boot time, this is particularly useful on ARM big.LITTLE system
> >> where there might be the need to have different cpupools for each type
> >> of core, but also systems using NUMA can have different cpu pools for
> >> each node.
> >> 
> >> The feature on arm relies on a specification of the cpupools from the
> >> device tree to build pools and assign cpus to them.
> >> 
> >> Documentation is created to explain the feature.
> >> 
> >> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> >> ---
> >> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
> >> xen/arch/arm/Kconfig                   |   9 ++
> >> xen/arch/arm/Makefile                  |   1 +
> >> xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
> >> xen/common/sched/cpupool.c             |   4 +-
> >> xen/include/xen/sched.h                |  11 +++
> >> 6 files changed, 260 insertions(+), 1 deletion(-)
> >> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
> >> create mode 100644 xen/arch/arm/cpupool.c
> >> 
> >> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> >> new file mode 100644
> >> index 000000000000..7298b6394332
> >> --- /dev/null
> >> +++ b/docs/misc/arm/device-tree/cpupools.txt
> >> @@ -0,0 +1,118 @@
> >> +Boot time cpupools
> >> +==================
> >> +
> >> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
> >> +possible to create cpupools during boot phase by specifying them in the device
> >> +tree.
> >> +
> >> +Cpupool specification nodes shall be direct children of the /chosen node.
> >> +Each cpupool node contains the following properties:
> >> +
> >> +- compatible (mandatory)
> >> +
> >> +    Must always include the compatibility string: "xen,cpupool".
> >> +
> >> +- cpupool-id (mandatory)
> >> +
> >> +    Must be a positive integer number.
> > 
> 
> Hi Stefano,
> 
> Thank you for your review,
> 
> > Why is cpupool-id mandatory? It looks like it could be generated by Xen.
> > Or is it actually better to have the user specify it anyway?
> > 
> 
> Yes, at first I thought to generate that automatically; however, I needed a structure
> to map the id to the cpupool DT node. My doubt here was about the size of the
> structure, because the user could even specify a cpupool for each cpu. I could
> allocate it dynamically and free it after domU creation in setup_xen.
> What do you think would be the right way?

Maybe we can achieve the goal with the structure we already have:
pool_cpu_map. pool_cpu_map returns a struct cpupool*, which has an
unsigned int cpupool_id field. As a pCPU can only be in one cpupool, we
could, for each dom0less domain:

- get the xen,cpupool phandle from "domain-cpupool"
- get the first CPU phandle from "cpupool-cpus" in xen,cpupool
- from the CPU node phandle get the CPU number from "reg"
- pool_cpu_map[cpu_reg]->cpupool_id is the id that we need

It should be fast as they are all direct accesses (no walking long lists
or binary trees.)
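A rough sketch of that lookup (Python for brevity; pool_cpu_map mirrors the real per-pCPU table in xen/common/sched/cpupool.c, but the node shapes and the helper below are invented for illustration):

```python
class CpuPool:
    def __init__(self, cpupool_id):
        self.cpupool_id = cpupool_id

# One entry per pCPU, mirroring Xen's pool_cpu_map: here CPUs 0-1
# (say, the a72 cluster) are in pool 0 and CPUs 2-3 in pool 1.
pool0, pool1 = CpuPool(0), CpuPool(1)
pool_cpu_map = {0: pool0, 1: pool0, 2: pool1, 3: pool1}

def cpupool_id_for_domain(cpupool_node):
    """Follow the phandles down to the first CPU's "reg" value, then do a
    single direct access into pool_cpu_map -- no list or tree walking."""
    first_cpu = cpupool_node["cpupool-cpus"][0]   # first CPU phandle
    cpu_reg = first_cpu["reg"]                    # CPU number from "reg"
    return pool_cpu_map[cpu_reg].cpupool_id

# A xen,cpupool node whose first listed CPU is cpu@2:
assert cpupool_id_for_domain({"cpupool-cpus": [{"reg": 2}]}) == 1
```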


> Or the dom0less guest could specify the id, but I prefer using a phandle to the
> xen,cpupool node.

No, I think the ID is something Xen should generate.


> >> +- cpupool-cpus (mandatory)
> >> +
> >> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
> >> +    device_type = "cpu"), it can't be empty.
> >> +
> >> +- cpupool-sched (optional)
> >> +
> >> +    Must be a string with the name of a Xen scheduler. It has no effect when
> >> +    used in conjunction with a cpupool-id equal to zero; in that case the
> >> +    default Xen scheduler is selected (sched=<...> boot argument).
> > 
> > I don't get why cpupool-id == 0 should trigger a special cpupool-sched
> > behavior.
> 
> The cpupool with id 0 is embedded in Xen; it has its own special-case handling in
> cpupool_create, which gives it the default scheduler. I thought it was better to
> leave it as it was; however, the cpupool0 scheduler can still be modified using
> the sched= boot argument, as before.
> 
> > 
> > 
> >> +Constraints
> >> +===========
> >> +
> >> +The cpupool with id zero is implicitly created even if not specified; that pool
> >> +must have at least one cpu assigned, otherwise Xen will stop.
> >> +
> >> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
> >> +not assigned to any other cpupool.
> >> +
> >> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
> >> +stop.
> > 
> > Thank you for documenting the constraints, but why do we have them?
> > Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
> > optional and missing. We could take care of the cpupool-id generation in
> > Xen and we could also assign the default scheduler everywhere
> > cpupool-sched is not specified. Maybe I am missing something?
> 
> Yes, we could make the cpupool-id optional; my doubts are in the first comment above.
> Whenever cpupool-sched is not specified, the current behaviour is to use the default scheduler.
> 
> > 
> > Does cpupool0 have to exist? I guess the answer could be yes, but if it
> > is specified as the id of one of the pools we are fine; otherwise it could
> > be automatically generated by Xen.
> 
> Yes, cpupool0 needs to exist; however, it is still generated by Xen regardless of the DT
> specification. In fact, you could not specify in the DT any xen,cpupool compatible node
> with cpupool-id == 0 and Xen would generate cpupool0 anyway
> (Xen internals are tied to the existence of a cpupool0).
> 
> > 
> > In any case, I don't think that cpupool0 has to have the default
> > scheduler?
> 
> Ok I think I can create a function to assign a scheduler to the cpupool0 after its creation,
> I would need to test it to be sure I don’t find something strange.
> 
> > 
> > My suggestion would be:
> > 
> > - make cpupool-id optional
> > - assign automatically cpupool-ids starting from 0
> >    - respect cpupool-ids chosen by the user
> 
> Ok, it would start from 1 because cpupool0 always exists
> 
> > - if some CPUs are left out (not specified in any pool) add an extra cpupool
> >    - the extra cpupool doesn't have to be cpupool-id == 0, it could be
> >      cpupool-id == n
> >    - the extra cpupool uses the default scheduler
> 
> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
> a user who doesn’t specify any xen,cpupool node ends up with a system matching the
> current behaviour, as if the feature were not enabled.
> However I can say: if no xen,cpupool nodes are found, then assign the cpus to
> cpupool0; otherwise assign them to a new cpupool and...
> 
> > 
> > If the user created cpupools in device tree covering all CPUs and also
> > specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
> > in the system is cpupool0) then panic. (Assuming that cpupool0 is
> > required.)
> 
> … panic if cpupool0 has no cpus.

That could be a good plan.

However, if cpupool0 has to have CPU0 (as Juergen wrote) then we could
automatically assign cpupool-id == 0 to whatever xen,cpupool node has
CPU0:

- if CPU0 is unassigned, cpupool0 is the one with all the unassigned CPUs
- if CPU0 is assigned to one of the xen,cpupool nodes, then that cpupool
  gets id == 0

Alternatively, we could fix the Xen limitation that cpupool0 has to
contain CPU0.

In any case the good thing is that from a device interface perspective,
it doesn't matter. The device tree description doesn't have to change.
The user doesn't need to care how Xen comes up with the IDs.
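The id-assignment policy discussed above could be sketched like this (Python, all names hypothetical; the only special case is which pool becomes cpupool0):

```python
def assign_ids(dt_pools, all_cpus):
    """dt_pools: CPU sets taken from the xen,cpupool nodes, in document
    order. Returns {cpupool_id: cpuset}. The pool containing CPU0 -- or,
    if CPU0 is in no node, the pool of leftover CPUs -- becomes cpupool0;
    every other pool gets the next free id starting from 1."""
    leftover = set(all_cpus) - set().union(set(), *dt_pools)
    candidates = list(dt_pools) + ([leftover] if leftover else [])
    result, next_id = {}, 1
    for cpus in candidates:
        if 0 in cpus:
            result[0] = cpus          # this pool is cpupool0
        else:
            result[next_id] = cpus
            next_id += 1
    return result

# CPU0 left unassigned: the leftover pool becomes cpupool0.
assert assign_ids([{2, 3}], {0, 1, 2, 3}) == {0: {0, 1}, 1: {2, 3}}
```

Honouring user-chosen cpupool-ids would only add one step: reserve those ids before handing out next_id.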


* Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time
  2022-02-16 13:07           ` Juergen Gross
@ 2022-02-17  8:59             ` Luca Fancellu
  0 siblings, 0 replies; 31+ messages in thread
From: Luca Fancellu @ 2022-02-17  8:59 UTC (permalink / raw)
  To: Juergen Gross, Dario Faggioli
  Cc: Stefano Stabellini, Xen-devel, Wei Chen, Julien Grall,
	Volodymyr Babchuk, Bertrand Marquis, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu



> On 16 Feb 2022, at 13:07, Juergen Gross <jgross@suse.com> wrote:
> 
> On 16.02.22 14:01, Luca Fancellu wrote:
>>> On 16 Feb 2022, at 12:55, Juergen Gross <jgross@suse.com> wrote:
>>> 
>>> On 16.02.22 13:10, Luca Fancellu wrote:
>>>>> On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>> 
>>>>> On Tue, 15 Feb 2022, Luca Fancellu wrote:
>>>>>> Introduce an architecture specific way to create different cpupools
>>>>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>>>>> where there might be the need to have different cpupools for each type
>>>>>> of core, but also systems using NUMA can have different cpu pools for
>>>>>> each node.
>>>>>> 
>>>>>> The feature on arm relies on a specification of the cpupools from the
>>>>>> device tree to build pools and assign cpus to them.
>>>>>> 
>>>>>> Documentation is created to explain the feature.
>>>>>> 
>>>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>>>> ---
>>>>>> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
>>>>>> xen/arch/arm/Kconfig                   |   9 ++
>>>>>> xen/arch/arm/Makefile                  |   1 +
>>>>>> xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
>>>>>> xen/common/sched/cpupool.c             |   4 +-
>>>>>> xen/include/xen/sched.h                |  11 +++
>>>>>> 6 files changed, 260 insertions(+), 1 deletion(-)
>>>>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>>>> create mode 100644 xen/arch/arm/cpupool.c
>>>>>> 
>>>>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>>>>> new file mode 100644
>>>>>> index 000000000000..7298b6394332
>>>>>> --- /dev/null
>>>>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>>>>> @@ -0,0 +1,118 @@
>>>>>> +Boot time cpupools
>>>>>> +==================
>>>>>> +
>>>>>> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
>>>>>> +possible to create cpupools during boot phase by specifying them in the device
>>>>>> +tree.
>>>>>> +
>>>>>> +Cpupool specification nodes shall be direct children of the /chosen node.
>>>>>> +Each cpupool node contains the following properties:
>>>>>> +
>>>>>> +- compatible (mandatory)
>>>>>> +
>>>>>> +    Must always include the compatibility string: "xen,cpupool".
>>>>>> +
>>>>>> +- cpupool-id (mandatory)
>>>>>> +
>>>>>> +    Must be a positive integer number.
>>>>> 
>>>> Hi Stefano,
>>>> Thank you for your review,
>>>>> Why is cpupool-id mandatory? It looks like it could be generated by Xen.
>>>>> Or is it actually better to have the user specify it anyway?
>>>>> 
>>>> Yes, at first I thought to generate that automatically; however, I needed a structure
>>>> to map the id to the cpupool DT node. My doubt here was about the size of the
>>>> structure, because the user could even specify a cpupool for each cpu. I could
>>>> allocate it dynamically and free it after domU creation in setup_xen.
>>>> What do you think would be the right way?
>>>> Or the dom0less guest could specify the id, but I prefer using a phandle to the
>>>> xen,cpupool node.
>>>>> 
>>>>>> +- cpupool-cpus (mandatory)
>>>>>> +
>>>>>> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
>>>>>> +    device_type = "cpu"), it can't be empty.
>>>>>> +
>>>>>> +- cpupool-sched (optional)
>>>>>> +
>>>>>> +    Must be a string with the name of a Xen scheduler. It has no effect when
>>>>>> +    used in conjunction with a cpupool-id equal to zero; in that case the
>>>>>> +    default Xen scheduler is selected (sched=<...> boot argument).
>>>>> 
>>>>> I don't get why cpupool-id == 0 should trigger a special cpupool-sched
>>>>> behavior.
>>>> The cpupool with id 0 is embedded in Xen; it has its own special-case handling in
>>>> cpupool_create, which gives it the default scheduler. I thought it was better to
>>>> leave it as it was; however, the cpupool0 scheduler can still be modified using
>>>> the sched= boot argument, as before.
>>>>> 
>>>>> 
>>>>>> +Constraints
>>>>>> +===========
>>>>>> +
>>>>>> +The cpupool with id zero is implicitly created even if not specified; that pool
>>>>>> +must have at least one cpu assigned, otherwise Xen will stop.
>>>>>> +
>>>>>> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's
>>>>>> +not assigned to any other cpupool.
>>>>>> +
>>>>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>>>>> +stop.
>>>>> 
>>>>> Thank you for documenting the constraints, but why do we have them?
>>>>> Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
>>>>> optional and missing. We could take care of the cpupool-id generation in
>>>>> Xen and we could also assign the default scheduler everywhere
>>>>> cpupool-sched is not specified. Maybe I am missing something?
>>>> Yes, we could make the cpupool-id optional; my doubts are in the first comment above.
>>>> Whenever cpupool-sched is not specified, the current behaviour is to use the default scheduler.
>>>>> 
>>>>> Does cpupool0 have to exist? I guess the answer could be yes, but if it
>>>>> is specified as the id of one of the pools we are fine; otherwise it could
>>>>> be automatically generated by Xen.
>>>> Yes, cpupool0 needs to exist; however, it is still generated by Xen regardless of the DT
>>>> specification. In fact, you could not specify in the DT any xen,cpupool compatible node
>>>> with cpupool-id == 0 and Xen would generate cpupool0 anyway
>>>> (Xen internals are tied to the existence of a cpupool0).
>>>>> 
>>>>> In any case, I don't think that cpupool0 has to have the default
>>>>> scheduler?
>>>> Ok I think I can create a function to assign a scheduler to the cpupool0 after its creation,
>>>> I would need to test it to be sure I don’t find something strange.
>>>>> 
>>>>> My suggestion would be:
>>>>> 
>>>>> - make cpupool-id optional
>>>>> - assign automatically cpupool-ids starting from 0
>>>>>    - respect cpupool-ids chosen by the user
>>>> Ok, it would start from 1 because cpupool0 always exists
>>>>> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>>>>>    - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>>>>>      cpupool-id == n
>>>>>    - the extra cpupool uses the default scheduler
>>>> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
>>>> a user who doesn’t specify any xen,cpupool node ends up with a system matching the
>>>> current behaviour, as if the feature were not enabled.
>>>> However I can say: if no xen,cpupool nodes are found, then assign the cpus to
>>>> cpupool0; otherwise assign them to a new cpupool and...
>>>>> 
>>>>> If the user created cpupools in device tree covering all CPUs and also
>>>>> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
>>>>> in the system is cpupool0) then panic. (Assuming that cpupool0 is
>>>>> required.)
>>>> … panic if cpupool0 has no cpus.
>>> 
>>> Today cpu 0 is always member of cpupool0, and changing that might be
>>> hard.
>> Oh, are you sure? I did some tests in the past for this series using a Juno board,
>> giving cpupool0 only the a72 cores and putting the a53 cores in another cpupool;
>> my Juno firmware configuration makes the boot cpu (cpu 0) one of the a53s, and it
>> was working fine. But that was a long time ago, so I would need to try it again.
> 
> Maybe on Arm the restrictions are less problematic, but I wouldn't bet
> that all operations (like moving cpus between cpupools, cpu hotplug,
> destroying cpupools, shutdown of the host, ...) are working in a sane
> way.

Hi Juergen, Dario,

I will try to check the cases you listed (on arm, because I don’t have an x86 setup).
I spotted some places in the code where cpu0 may be hardcoded, but I would need
feedback from you (and Dario, since I know he worked in that area too), because
the scheduler code is very complex.

While I see cpu0 is hardcoded in these places:

cpu_schedule_up
scheduler_init

I can’t see the relation with cpupool0.

However here:

#ifdef CONFIG_X86
void __init sched_setup_dom0_vcpus(struct domain *d)
[…]

I see something that I’m not able to test: could you confirm whether that code would
lead to problems if cpu0 is not in cpupool0, given that Dom0 is attached to that pool?

Thank you very much for every feedback you can provide.

Cheers,
Luca

> 
> 
> Juergen




end of thread, other threads:[~2022-02-17  9:00 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-15 10:15 [PATCH 0/5] Boot time cpupools Luca Fancellu
2022-02-15 10:15 ` [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
2022-02-15 10:33   ` Juergen Gross
2022-02-15 17:48     ` Luca Fancellu
2022-02-16  6:13       ` Juergen Gross
2022-02-16  8:16         ` Luca Fancellu
2022-02-15 10:15 ` [PATCH 2/5] xen/sched: create public function for cpupools creation Luca Fancellu
2022-02-15 10:38   ` Juergen Gross
2022-02-15 17:50     ` Luca Fancellu
2022-02-16  6:16       ` Juergen Gross
2022-02-15 10:15 ` [PATCH 3/5] xen/sched: retrieve scheduler id by name Luca Fancellu
2022-02-15 10:40   ` Juergen Gross
2022-02-15 17:52     ` Luca Fancellu
2022-02-15 10:15 ` [PATCH 4/5] xen/cpupool: Create different cpupools at boot time Luca Fancellu
2022-02-15 10:48   ` Juergen Gross
2022-02-15 17:56     ` Luca Fancellu
2022-02-16  6:18       ` Juergen Gross
2022-02-16 12:37         ` Luca Fancellu
2022-02-16 16:32           ` Dario Faggioli
2022-02-16 16:45             ` Luca Fancellu
2022-02-16  2:45   ` Stefano Stabellini
2022-02-16 12:10     ` Luca Fancellu
2022-02-16 12:55       ` Juergen Gross
2022-02-16 13:01         ` Luca Fancellu
2022-02-16 13:07           ` Juergen Gross
2022-02-17  8:59             ` Luca Fancellu
2022-02-16 21:58       ` Stefano Stabellini
2022-02-15 10:15 ` [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
2022-02-15 10:56   ` Juergen Gross
2022-02-15 18:06     ` Luca Fancellu
2022-02-16  4:01   ` Stefano Stabellini
